Encryption Software Market 2020 Booming by Size, Revenue, Trend and Top Companies 2026 – Instant Tech News

New Jersey, United States: The report titled Encryption Software Market Size and Forecast 2026, published by Verified Market Research, offers the firm's latest analysis of the global Encryption Software market, with comprehensive coverage of competition, segmentation, regional expansion, and market dynamics. The report sheds light on future trends, key opportunities, top regions, leading segments, the competitive landscape, and several other aspects of the Encryption Software market, giving readers access to crucial market information. Market players can use the report to look ahead at the future of the global Encryption Software market and make informed changes to their operating style and marketing tactics to achieve sustained growth.

Global Encryption Software Market was valued at USD 3.32 billion in 2016 and is projected to reach USD 30.54 billion by 2025, growing at a CAGR of 27.96% from 2017 to 2025.
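As a quick sanity check on those figures, the implied growth rate can be re-derived with ordinary compound-growth arithmetic; the snippet below is purely illustrative and uses only the two valuations quoted above.

```python
# Re-derive the CAGR implied by the report's 2016 and 2025 figures.
base_2016 = 3.32        # USD billion, the 2016 valuation cited above
projected_2025 = 30.54  # USD billion, the 2025 projection cited above
years = 2025 - 2016     # nine compounding periods

cagr = (projected_2025 / base_2016) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # prints approximately 27.96%
```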

Get | Download Sample Copy @ https://www.verifiedmarketresearch.com/download-sample/?rid=1826&utm_source=ITN&utm_medium=002

Top 10 Companies in the Global Encryption Software Market Research Report:

Global Encryption Software Market: Competitive Landscape

The competitive landscape section explains the strategies adopted by key players in the market. Key developments and management changes in recent years are covered through company profiling, which helps readers understand the trends that will accelerate market growth. The section also includes the investment strategies, marketing strategies, and product development plans adopted by major players. The market forecast will help readers make better investment decisions.

Global Encryption Software Market: Drivers and Restraints

This section of the report discusses the various drivers and restraints that have shaped the global market. The detailed study of these drivers enables readers to get a clear perspective of the market, including the market environment, government policies, product innovations, breakthroughs, and market risks.

The research report also points out the myriad opportunities, challenges, and market barriers present in the Global Encryption Software Market. The comprehensive nature of this information will help the reader determine and plan strategies to benefit from them. Restraints, challenges, and market barriers also help the reader understand how a company can prevent itself from facing decline.

Global Encryption Software Market: Segment Analysis

This section of the report covers segmentation by application, product type, and end user. These segmentations help determine which parts of the market will progress more than others. The segmentation analysis provides information about the key factors driving specific segments ahead of others, helping readers make sound investments. The Global Encryption Software Market is segmented on the basis of product type, application, and end user.

Global Encryption Software Market: Regional Analysis

This part of the report includes detailed information on the market in different regions. Each region offers a different scope for the market, as government policies and other factors vary from region to region. The regions included in the report are North America, South America, Europe, Asia Pacific, and the Middle East. Information about the different regions helps the reader understand the global market better.

Ask for Discount @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=1826&utm_source=ITN&utm_medium=002

Table of Content

1 Introduction of Encryption Software Market

1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology of Verified Market Research

3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 Encryption Software Market Outlook

4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Forces Model
4.4 Value Chain Analysis

5 Encryption Software Market, By Deployment Model

5.1 Overview

6 Encryption Software Market, By Solution

6.1 Overview

7 Encryption Software Market, By Vertical

7.1 Overview

8 Encryption Software Market, By Geography

8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East

9 Encryption Software Market Competitive Landscape

9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies

10 Company Profiles

10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments

11 Appendix

11.1 Related Research

Request Customization of Report | Complete Report is Available @ https://www.verifiedmarketresearch.com/product/global-encryption-software-market-size-and-forecast-to-2025/?utm_source=ITN&utm_medium=002

Highlights of Report

About Us:

Verified Market Research partners with clients to provide insight into strategic and growth analytics, delivering data that help achieve business goals and targets. Our core values include trust, integrity, and authenticity for our clients.

Analysts with high expertise in data gathering and governance utilize industry techniques to collate and examine data at all stages. Our analysts are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research reports.

Contact Us:

Mr. Edwyne Fernandes
Call: +1 (650) 781 4080
Email: [emailprotected]



If you’re interested in artificial intelligence, this event might be for you – WYDaily

Jefferson Lab is hosting a free, hands-on experience where attendees can learn how artificial intelligence can relate to the field of nuclear physics. (WYDaily/ Courtesy of Pixabay)

Calling future hackers, this workshop might be for you

Jefferson Lab is hosting an A.I. Hack-A-Thon for those interested in learning about artificial intelligence.

The purpose is to generate interest in A.I. in the field of nuclear physics by giving participants a free, hands-on experience, according to the news release.

The event is free and open to the public. The deadline to register is Friday.

The last 10 years have seen explosive growth in the field of A.I., according to the news release. This was fueled, in large part, by rapid increases in computational hardware alongside the accessibility of vast amounts of data. As A.I. becomes increasingly pervasive in society, the nuclear physics community has recognized its potential.

Attendees will learn about charged particle reconstruction in nuclear physics experiments and work in teams to solve challenges such as reconstructing the tracks of charged particles moving through magnetic fields, according to the news release.
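For readers curious what such a challenge involves, the toy sketch below propagates a charged particle through a uniform magnetic field using the Lorentz force; the field, particle values and crude Euler integration are illustrative assumptions, not part of the actual hackathon materials.

```python
import numpy as np

# Toy propagation of a charged particle in a uniform magnetic field, the kind of
# trajectory a reconstruction algorithm tries to recover from detector hits.
q, m = 1.0, 1.0                      # charge and mass in arbitrary units
B = np.array([0.0, 0.0, 1.0])        # uniform field along z
pos = np.array([0.0, 0.0, 0.0])      # starting position
vel = np.array([1.0, 0.0, 0.2])      # initial velocity

dt, steps = 0.01, 1000
track = []
for _ in range(steps):
    acc = q * np.cross(vel, B) / m   # Lorentz acceleration (no electric field)
    vel = vel + acc * dt             # simple Euler step, fine for illustration
    pos = pos + vel * dt
    track.append(pos.copy())

track = np.array(track)              # a helix: the "hits" a detector would record
print(track[-1])
```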

Those currently in the nuclear physics field, such as college students, graduate students and professors, can participate in another event, the A.I. for Nuclear Physics Workshop on March 4-6. Registration ranges from $50 to $100.

The University of Virginia's School of Data Science is also co-hosting both events.

The A.I. Hack-A-Thon at Jefferson Lab, 12000 Jefferson Ave., is on Tuesday, March 3, from 9 a.m. to 3 p.m. Food and beverages will be provided.

For more information or to register for either event, visit the Jefferson Lab events page.



High-risk Artificial Intelligence to be ‘certified, tested and controlled,’ Commission says – EURACTIV

Artificial Intelligence technologies carrying a high risk of abuse that could potentially lead to an erosion of fundamental rights will be subjected to a series of new requirements, the European Commission announced on Wednesday (19 February).

As part of the executive's White Paper on AI, a series of high-risk technologies have been earmarked for future oversight, including those in critical sectors and those deemed to be of critical use.

Those under the critical sectors remit include healthcare, transport, police, recruitment, and the legal system, while technologies of critical use include those carrying a risk of death, damage or injury, or with legal ramifications.

Artificial Intelligence technologies coming under those two categories will be obliged to abide by strict rules, which could include compliance tests and controls, the Commission said on Wednesday.

Sanctions could be imposed should certain technologies fail to meet such requirements, and such high-risk technologies should also come under human control, according to Commission documents. For areas deemed not to be high-risk, an option could be to introduce a voluntary labelling scheme, which would highlight the trustworthiness of an AI product by virtue of the fact that it meets certain objective and standardised EU-wide benchmarks.

However, the Commission stopped short of identifying technology manufactured outside the EU, in certain authoritarian regimes, as necessarily high risk.

Pressed on this point by EURACTIV on Wednesday, Thierry Breton, the Commissioner for the Internal Market, said manufacturers could be forced to retrain algorithms locally in Europe with European data.

"We could be ready to do this if we believe it is appropriate for our needs and our security," Breton added.

Another area in which the Commission will seek to provide greater oversight is the use of potentially biased data sets that may negatively impact demographic minorities.

In this field, the executive has outlined plans to ensure that unbiased data sets are used in Artificial Intelligence technologies, avoiding discrimination against under-represented populations in algorithmic processes.

More generally, Commission President Ursula von der Leyen praised Europe's efforts in the field of Artificial Intelligence thus far, saying that such technologies can be of vital use in a number of sectors, including healthcare, agriculture and energy, and can also help Europe meet its sustainability goals.

Conformity Assessment

However, she also noted the importance of ensuring that certain AI technologies meet certain standards in order to be of use to European citizens. "High-risk AI technologies must be tested and certified before they reach the market," von der Leyen said.

Along this axis, the Commission will establish an objective, prior conformity assessment in order to ensure that AI systems are technically robust, accurate and trustworthy.

"Such systems need to be developed in a responsible manner and with an ex-ante due and proper consideration of the risks that they may generate. Their development and functioning must be such to ensure that AI systems behave reliably as intended," stated the White Paper, which is open for public consultation until 19 May.

Such a conformity assessment could include procedures for testing, inspection and certification. Importantly, the Commission also states that such an assessment could include checks of the algorithms and of the data sets used in the development phase.

The EU's Vice-President for Digital, Margrethe Vestager, said on Wednesday that an assessment will be made in the future as to whether this approach is effective or not.

Facial Recognition

Elsewhere in the Artificial Intelligence White Paper, the Commission held back on introducing strict measures against facial recognition technologies. A leaked version of the document had previously floated the idea of putting forward a moratorium on facial recognition software.

However, the executive now plans to launch an EU-wide debate on the use of remote biometric identification, of which facial recognition technologies are a part.

On Wednesday, Vestager noted that facial recognition technologies in some cases are harmless but a wider consultation is required to identify the extent to which remote biometric identification as part of AI technologies should be permitted.

The Commission also highlighted the fact that under current EU data protection rules, the processing of biometric data for the purpose of identifying individuals is prohibited, unless specific conditions with regards to national security or public interest are met.

Article 6 of the EU's General Data Protection Regulation outlines the conditions under which personal data can be lawfully processed, one such basis being that the data subject has given consent. Article 4(14) of the legislation defines biometric data.

In recent months, EU member states have been charting future plans in the field of facial recognition technologies.

Germany wishes to roll out automatic facial recognition at 134 railway stations and 14 airports. France also has plans to establish a legal framework permitting video surveillance systems to be embedded with facial recognition technologies.

Rights groups have called for more stringent measures to be enacted in the future against facial recognition technologies, in response to the Commissions announcement on Wednesday.

"It is of utmost importance and urgency that the EU prevents the deployment of mass surveillance and identification technologies without fully understanding their impact on people and their rights, and without ensuring that these systems are fully compliant with data protection and privacy law as well as all other fundamental rights," said Diego Naranjo, head of policy at European Digital Rights (EDRi).

[Edited by Zoran Radosavljevic]


The 5 Industries That Rely on Artificial Intelligence – Analytics Insight

Research on how to improve and implement artificial intelligence in our everyday lives is not stopping. Many multi-million-dollar companies are constantly trying out new technologies to make sure that they will be the ones that lay the foundation for the next step of human evolution. There are still pros and cons to this concept, and people are divided over the idea of robotizing the world, but it will probably be the next step in human progress.

What many people don't know is that some form of AI is already being used in certain industries. We wanted to share some of the details and unveil some of the industries that rely heavily on this type of technology. For example, casino games at NoviBet rely solely on AI, the education sector is also trying to implement this technology, and so on. Let's check out some of the most dominant areas.

Healthcare has had a tight relationship with AI over the last couple of years. AI helps doctors by providing better diagnostics and detecting a medical problem in an individual much faster. The purpose of having this technology in the medical field is to make the examination process faster and more effective. Some statistics have shown that by using AI in healthcare, countries like the USA could save up to $150 billion per year by 2026.

In a world where the marketing industry has moved online, AI in this sector is essential. Advertising on social media and other platforms relies on AI to determine users' preferences by analyzing their cookie history. AI has made online advertising much easier and far more effective than traditional marketing tools.

As we mentioned earlier, the online casino industry also relies heavily on AI. In fact, if these casinos didn't have this technology at their disposal, they wouldn't exist. Online casinos use artificial intelligence to enforce fair play by making the outcome of every game random, and to secure their sites by ensuring that all information about players is hidden and encrypted. They also give players anonymity, which is one of the reasons why many people have started favouring them over land-based casinos.

Logically, manufacturing is always in desperate need of robots. They help in the building process and are far more reliable than humans. Artificial intelligence plays a key role in this industry by taking care of the smallest details and making the process much more effective and efficient.

The first signs of self-driving cars are almost here. Artificial intelligence uses various algorithms to determine the car's speed, its trajectory, and obstacles on the road. While this technology is still relatively young, it has the potential to make the future much easier for us.


EASA Expects Certification of First Artificial Intelligence for Aircraft Systems by 2025 – Aviation Today

The European Aviation Safety Agency expects to certify the first integration of artificial intelligence technology in aircraft systems by 2025.

The European Aviation Safety Agency (EASA) has published its Artificial Intelligence Roadmap in anticipation of the first certification for the use of AI in aircraft systems coming in 2025.

EASA published the 33-page roadmap after establishing an internal AI task force in October 2018 to identify staff competency, standards, protocols and methods to be developed ahead of moving forward with actual certification of new technologies. A representative for the agency confirmed in an emailed statement to Avionics International that they have already received project submissions from industry designed to provide certification for AI pilot assistance technology.

"The Agency has indeed received its first formal applications for the certification of AI-based aircraft systems in 2019. It is not possible to be more specific on these projects at this stage due to confidentiality. The date in our roadmap, 2025, corresponds to the project certification target date anticipated by the applicants," the representative for EASA said.

In the roadmap document, EASA notes that, moving forward, the agency will define AI as any technology that appears to emulate the performance of a human. The roadmap further divides AI applications into model-driven AI and data-driven AI, while linking these two forms of AI to breakthroughs in machine learning, deep learning and the use of neural networks to enable applications such as computer vision and natural language processing.

"In order to be ready by 2025 for the first certification of AI-based systems, the first guidance should be available in 2021, so that the applicant can be properly guided during the development phase. The guidance that EASA will develop will apply to the use of AI in all domains, including aircraft certification as well as drone operations," the representative for EASA said.

Eight specific domains of aviation are identified as potentially being impacted by the introduction of AI to aviation systems, including the following:

The roadmap foresees the potential use of machine learning for flight control law optimization, sensor calibration, fuel tank quantity evaluation and icing detection, areas where the need for human analysis of possible combinations of associated parameter values could be replaced by machine learning.

The roadmap for EASA's certification of AI in aircraft systems. Photo: EASA

EASA also points to several research and development projects and prototypes featuring the use of artificial intelligence for air traffic management that are already available. These include the Singapore ATM Research Institute's application that generates resolution proposals to assist controllers in resolving airspace system conflicts. There is also the Single European Sky ATM Research Joint Undertaking's BigData4ATM project, tasked with analyzing passenger-centric geo-located data to identify patterns in airline passenger behavior, and the Machine Learning of Speech Recognition Models for Controller Assistance (MALORCA) project, which has developed a speech recognition tool for use by controllers.

Several aviation industry research and development initiatives have been looking at the integration of AI and ML into aircraft systems and air traffic management infrastructure in recent years as well. During a November visit to its facility in Toulouse, Thales showed some of the technologies it is researching and developing, including a virtual assistant that will provide both voice and flight intention recognition to pilots as part of its next-generation FlytX avionics suite.

Zurich, Switzerland-based startup Daedalean is also developing what it describes as the aviation industry's first autopilot system to feature an advanced form of artificial intelligence (AI) known as deep convolutional feed-forward neural networks. The system is to feature software that can replicate a human pilot's level of decision-making and situational awareness.
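For context, a convolutional feed-forward network of the general kind described above can be sketched in a few lines; the toy model below is purely illustrative and has no connection to Daedalean's actual system.

```python
import torch
import torch.nn as nn

# A minimal convolutional feed-forward classifier: convolution and pooling layers
# followed by a fully connected output layer.
class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                        # x: (batch, 3, 64, 64) images
        return self.classifier(self.features(x).flatten(1))

model = TinyConvNet()
print(model(torch.randn(1, 3, 64, 64)).shape)    # torch.Size([1, 2])
```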

NATS, the U.K.'s air navigation service provider (ANSP), is also pioneering an artificial intelligence platform for aviation. At Heathrow Airport, the company has installed 18 ultra-HD 4K cameras on the air traffic control tower and others along the airport's northern runway that feed images to a platform developed by Searidge Technologies called AIMEE. The goal is for AIMEE's advanced neural network framework to become capable of identifying when a runway is cleared for takeoffs and arrivals in low-visibility conditions.

As the industry moves forward with more AI developments, EASA plans to continually update its roadmap with new insights. The roadmap proposes a possible classification of AI and ML applications into three levels based on the degree of human oversight of the machine. Level 1 categorizes the use of artificial intelligence for routine tasks, while Level 2 covers applications where a human is performing a function and the machine is monitoring. Level 3 features full autonomy, where machines perform functions with no human intervention.

"At this stage version 1.0 identifies key elements that the Agency considers should be the foundation of its human-centric approach: integration of the ethical dimension, and the new concepts of trustworthiness, learning assurance and explainability of AI," the representative for EASA said. "This should be the main takeaway for the agency's industry stakeholders. In essence, the roadmap aims at establishing the baseline for the Agency's vision on the safe development of AI."


Artificial intelligence makes a splash in efforts to protect Alaska’s ice seals and beluga whales – Stories – Microsoft

When Erin Moreland set out to become a research zoologist, she envisioned days spent sitting on cliffs, drawing seals and other animals to record their lives, as part of efforts to understand their activities and protect their habitats.

Instead, Moreland found herself stuck in front of a computer screen, clicking through thousands of aerial photographs of sea ice as she scanned for signs of life in Alaskan waters. It took her team so long to sort through each survey (akin to looking for lone grains of rice on vast mounds of sand) that the information was outdated by the time it was published.

"There's got to be a better way to do this," she recalls thinking. "Scientists should be freed up to contribute more to the study of animals and better understand what challenges they might be facing. Having to do something this time-consuming holds them back from what they could be accomplishing."

That better way is now here, an idea that began, unusually enough, with the view from Moreland's Seattle office window and her fortuitous summons to jury duty. She and her fellow National Oceanic and Atmospheric Administration scientists will now use artificial intelligence this spring to help monitor endangered beluga whales, threatened ice seals, polar bears and more, shaving years off the time it takes to get data into the right hands to protect the animals.

The teams are training AI tools to distinguish a seal from a rock and a whale's whistle from a dredging machine's squeak as they seek to understand the marine mammals' behavior and help them survive amid melting ice and increasing human activity.

Moreland's project combines AI technology with improved cameras on a NOAA turboprop airplane that will fly over the Beaufort Sea north of Alaska this April and May, scanning and classifying the imagery to produce a population count of ice seals and polar bears that will be ready in hours instead of months. Her colleague Manuel Castellote, a NOAA affiliate scientist, will apply a similar algorithm to the recordings he'll pick up from equipment scattered across the bottom of Alaska's Cook Inlet, helping him quickly decipher how the shrinking population of endangered belugas spent its winter.

The data will be confirmed by scientists, analyzed by statisticians and then reported to people such as Jon Kurland, NOAA's assistant regional administrator for protected resources in Alaska.

Kurland's office in Juneau is charged with overseeing conservation and recovery programs for marine mammals around the state and its waters and helping guide all the federal agencies that issue permits or carry out actions that could affect those that are threatened or endangered.

Of the four types of ice seals in the Bering Sea (bearded, ringed, spotted and ribbon), the first two are classified as threatened, meaning they are likely to become in danger of extinction within the foreseeable future. The Cook Inlet beluga whales are already endangered, having steadily declined to a population of only 279 in last year's survey, from an estimate of about a thousand 30 years ago.

"Individual groups of beluga whales are isolated and don't breed with others or leave their home, so if this population goes extinct, no one else will come in; they're gone forever," says Castellote. "Other belugas wouldn't survive there because they don't know the environment. So you'd lose that biodiversity forever."

Yet recommendations by Kurlands office to help mitigate the impact of human activities such as construction and transportation, in part by avoiding prime breeding and feeding periods and places, are hampered by a lack of timely data.

"There's basic information that we just don't have now, so getting it will give us a much clearer picture of the types of responses that may be needed to protect these populations," Kurland says. "In both cases, for the whales and seals, this kind of data analysis is cutting-edge science, filling in gaps we don't have another way to fill."

The AI project was born years ago, when Moreland would sit at her computer in NOAA's Marine Mammal Laboratory in Seattle and look across Lake Washington toward Microsoft's headquarters in Redmond, Washington. She felt sure there was a technological solution to her frustration, but she didn't know anyone with the right skills to figure it out.

She hit the jackpot one week while serving on a jury in 2018. She overheard two fellow jurors discussing AI during a break in the trial, so she began talking with them about her work. One of them connected her with Dan Morris from Microsoft's AI for Earth program, who suggested they pitch the problem as a challenge that summer at the company's Hackathon, a week-long competition when software developers, programmers, engineers and others collaborate on projects. Fourteen Microsoft engineers signed up to work on the problem.

"Across the wildlife conservation universe, there are tons of scientists doing boring things, reviewing images and audio," Morris says. "Remote equipment lets us collect all kinds of data, but scientists have to figure out how to use that data. Spending a year annotating images is not only a bad use of their time, but the questions get answered way later than they should."

Moreland's idea wasn't as simple as it may sound, though. While there are plenty of models to recognize people in images, there were none until now that could find seals, especially in real time in aerial photography. But the hundreds of thousands of examples NOAA scientists had classified in previous surveys helped the technologists, who are using them to train the AI models to recognize which photographs and recordings contained mammals and which didn't.

"Part of the challenge was that there were 20 terabytes of data of pictures of ice, and working on your laptop with that much data isn't practical," says Morris. "We had daily handovers of hard drives between Seattle and Redmond to get this done. But the cloud makes it possible to work with all that data and train AI models, so that's how we're able to do this work, with Azure."
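The general approach described above, training an image classifier on crops that scientists have already labelled, can be sketched roughly as follows. The model choice, class labels and training loop are assumptions made for illustration; they are not NOAA's or Microsoft's actual pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune a pretrained image classifier on labelled aerial crops
# ("seal" vs. "background"); dataset paths and labels are hypothetical.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: seal / background

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of labelled image crops."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
```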

Moreland's first ice seal survey was in 2007, flying in a helicopter based on an icebreaker. Scientists collected 90,000 images and spent months scanning them but only found 200 seals. It was a tedious, imprecise process.

Ice seals live largely solitary lives, making them harder to spot than animals that live in groups. Surveys are also complicated because the aircraft have to fly high enough to keep seals from getting scared and diving, but low enough to get high-resolution photos that enable scientists to differentiate a ringed seal from a spotted seal, for example. The weather in Alaska, often rainy and cloudy, further complicates efforts.

Subsequent surveys improved by pairing thermal and color cameras and using modified planes that had a greater range to study more area and could fly higher up to be quieter. Even so, thermal interference from dirty ice and reflections off jumbled ice made it difficult to determine what was an animal and what wasnt.

And then there was the problem of manpower to go along with all the new data. The 2016 survey produced a million pairs of thermal and color images, which a previous software system narrowed down to 316,000 hot spots that the scientists had to manually sort through and classify. It took three people six months.


Artificial Intelligence and Machine Learning in the Operating Room – 24/7 Wall St.

Most applications of artificial intelligence (AI) and machine learning technology provide only data to physicians, leaving the doctors to form a judgment on how to proceed. Because AI doesn't actually perform any procedure or prescribe a course of medication, the software that diagnoses health problems does not have to pass a randomized clinical trial as do devices such as insulin pumps or new medications.

A new study published Monday at JAMA Network discusses a trial including 68 patients undergoing elective noncardiac surgery under general anesthesia. The object of the trial was to determine if a predictive early warning system for possible hypotension (low blood pressure) during the surgery might reduce the time-weighted average of hypotension episodes during the surgery.

In other words, not only would the device and its software keep track of the patient's mean arterial pressure, but it would sound an alarm if there was an 85% or greater risk of the patient's blood pressure falling below 65 millimeters of mercury (mm Hg) within the next 15 minutes. The device also encouraged the anesthesiologist to take preemptive action.
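Stripped of the proprietary risk model, the alarm behaviour described above reduces to a simple threshold rule; the sketch below is a minimal illustration with made-up names, standing in for whatever the device actually computes internally.

```python
MAP_THRESHOLD_MM_HG = 65   # hypotension: mean arterial pressure below 65 mm Hg
RISK_TRIGGER = 0.85        # alarm at an 85% or greater predicted risk
HORIZON_MINUTES = 15       # risk is forecast over the next 15 minutes

def should_alert(predicted_risk: float) -> bool:
    """True when the forecast risk of MAP < 65 mm Hg within 15 minutes reaches 85%."""
    return predicted_risk >= RISK_TRIGGER

# e.g. should_alert(0.9) -> True, prompting the anesthesiologist to act preemptively
```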

Patients in the control group were connected to the same AI device and software, but only routine pulse and blood pressure data were displayed. That means that the anesthesiologist had no early warning about a hypotension event and could take no action to prevent the event.

Among patients fully connected to the device and software, the median time-weighted average of hypotension was 0.1 mm Hg, compared to an average of 0.44 mm Hg in the control group. In the control group, the median time of hypotension per patient was 32.7 minutes, while it was just 8.0 minutes among the other patients. Most important, perhaps, two patients in the control group died from serious adverse events, while no patients connected to the AI device and software died.
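For readers unfamiliar with the metric, a time-weighted average of hypotension can be computed roughly as below: integrate how far the pressure dips under 65 mm Hg over the case and divide by its duration. This follows a common definition and may differ in detail from the trial's exact method.

```python
import numpy as np

THRESHOLD_MM_HG = 65.0

def twa_hypotension(map_trace, dt_minutes):
    """map_trace: MAP samples (mm Hg); dt_minutes: sampling interval in minutes."""
    map_trace = np.asarray(map_trace, dtype=float)
    depth = np.clip(THRESHOLD_MM_HG - map_trace, 0.0, None)  # mm Hg below threshold
    total_minutes = len(map_trace) * dt_minutes
    return depth.sum() * dt_minutes / total_minutes          # time-weighted mm Hg

# e.g. a 200-minute case with a 10-minute dip to 55 mm Hg:
trace = [75.0] * 190 + [55.0] * 10
print(twa_hypotension(trace, dt_minutes=1.0))  # 0.5 mm Hg
```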

The algorithm used by the device was developed by different researchers, who had trained the software on thousands of waveform features to identify a possible hypotension event 15 minutes before it occurs during surgery. The devices used were a FloTrac IQ sensor with the early warning software installed and a HemoSphere monitor. Both devices are made by Edwards Lifesciences, and five of the algorithm's eight developers were affiliated with Edwards. The study itself was conducted in the Netherlands at Amsterdam University Medical Centers.

In an editorial at JAMA Network, associate editor Derek Angus wrote:

The final model predicts the likelihood of future hypotension via measurement of multiple variables characterizing dynamic interactions between left ventricular contractility, preload, and afterload. Although clinicians can look at arterial pulse pressure waveforms and, in combination with other patient features, make educated guesses about the possibility of upcoming episodes of hypotension, the likelihood is high that an AI algorithm could make more accurate predictions.

Among the past decade's biggest health news stories were the development of immunotherapies for cancer and a treatment for cystic fibrosis. AI is off to a good start in the new decade.

By Paul Ausick


Can we realistically create laws on artificial intelligence? – Open Access Government

Regulation is an industry, but effective regulation is an art. There are a number of recognised principles that should be considered when regulating an activity, such as efficiency, stability and regulatory structure, general principles, and the resolution of conflicts between these various competing principles. With the regulation of artificial intelligence (AI) technology, a number of factors make the centralised application of these principles difficult to realise, but AI should still be considered as part of any relevant regulatory regime.

Because AI technology is still developing, it is difficult to discuss the regulation of AI without reference to a specific technology, field or application where these principles can be more readily applied. For example, optical character recognition (OCR) was considered to be AI technology when it was first developed, but today, few would call it AI.

Other examples of technologies that have been described as AI include predictive technology for marketing and navigation, technology for ridesharing applications, commercial flight routing, and even email spam filters.

These technologies are as different from each other as they are from OCR technology. This demonstrates why the regulation of AI technology (from a centralised regulatory authority or based on a centralised regulatory principle) is unlikely to truly work.

Efficiency-related principles include the promotion of competition between participants by avoiding restrictive practices that impair the provision of new AI-related technologies. This subsequently lowers barriers to entry for such technologies, providing freedom of choice between AI technologies and creating competitive neutrality between existing AI technologies and new ones (i.e. a level playing field). OCR technology was initially unregulated, at least by a central authority, and it was therefore allowed to develop and become faster and more efficient, even though there are many situations where OCR documents contained a large number of errors.

In a similar manner, a centralised regulation regime that encompasses all uses of AI mentioned above from a central authority or based on a single focus (e.g. avoiding privacy violations) would be inefficient.

The reason for this inefficiency is clear: the function and markets for these technologies are unrelated.

Strict regulations that require all AI applications to evaluate and protect the privacy of users might not only result in the failure to achieve any meaningful goals to protect privacy, but could also render those AI applications commercially unacceptable for reasons that are completely unrelated to privacy. For example, a regulation that requires drivers to be assigned based on privacy concerns could result in substantially longer wait times for riders if the closest drivers have previously picked up the passenger at that location. However, industry-specific regulation to address privacy issues might make sense, depending on the specific technology and specific concern within that industry.

Stability-related principles include providing incentives for the prudent assessment and management of risk, such as minimum standards, the use of regulatory requirements that are based on market values and taking prompt action to accommodate new AI technologies.

Using OCR as an example, if minimum standards for an acceptable number of errors in a document had been implemented, the result would have been difficult to police, because documents have different levels of quality and some documents would no doubt result in fewer errors than others. In the case of OCR, the market was able to provide sufficient regulation, as companies competed with each other for the best solution, but for other AI technologies there may be a need for industry-specific regulations to ensure minimum standards or other stability-related principles.

In regard to regulatory structure, these include following a functional/institutional approach to regulation, coordinating regulation by different agencies, and using a small number of regulatory agencies for any regulated activity. In that regard, there is no single regulatory authority that could implement and administer AI regulations across all markets, activities and technologies, or that would add a new regulatory regime to the ones already in place.

For example, in the US many state and federal agencies have OCR requirements that centre on specific technologies/software for document submission, and software application providers can either make their application compatible with those requirements or can seek to be included on a list of allowed applications. They do the latter by working with the state or federal agency to ensure that documents submitted using their applications will be compatible with the agencys uses. For other AI technologies there may be similarly industry-specific regulations that make sense in the context of the existing regulatory structure for that industry.

General principles of regulation include identifying the specific objectives of a regulation, cost-effectiveness, equitable distribution of the regulation costs, flexibility of regulation and a stable relationship between the regulators and regulated parties. Some of these principles could have been implemented for OCR, such as a specific objective in terms of a number of errors per page. However, the other factors would have been more difficult to determine, and again would depend on an industry- or market-specific analysis. For many specific applications in specific industries, these factors were able to be addressed even though an omnibus regulatory structure was not implemented.

Preventing conflict between these different objectives requires a regime in which these different objectives can be achieved. For AI that would require an industry- or market-specific approach, and in the US, that approach has generally been followed for all AI-related technologies. As discussed, OCR-related technology is regulated by specific federal, state and local agencies as it pertains to their specific mission. Another AI technology is facial recognition, and a regulatory regime of federal, state and local regulation is in progress. The facial recognition technology space has been used by many of these authorities for different applications, with some recent push-back on the use of the technology by privacy advocates.

It is only when conflicts develop between such different regimes that input from a centralised authority may be required.

In the United States, an industry- and market-based approach is generally being adopted. In the 115th Congress, thirty-nine bills were introduced that had the phrase "artificial intelligence" in the text of the bill, and four were enacted into law. A large number of such bills were also introduced in the 116th Congress. As of April 2017, twenty-eight states had introduced some form of regulations for autonomous vehicles, and a large number of states and cities have proposed or implemented regulations for facial recognition technology.

While critics will no doubt assert that nothing much is being done to regulate AI, a simplistic and heavy-handed approach to AI regulation, reacting to a single concern such as privacy, is unlikely to satisfy these principles of regulation, and should be avoided. Artificial intelligence requires regulation with real intelligence.

By Chris Rourk, Partner at Jackson Walker, a member of Globalaw.



Comments on the OPC Consultation on Artificial Intelligence – Lexology

Introduction

On January 28, 2020, the Office of the Privacy Commissioner of Canada (OPC) published its Consultation on the OPC's Proposals for ensuring appropriate regulation of artificial intelligence (Consultation Paper). The Consultation Paper sets out several proposals for how the federal Personal Information Protection and Electronic Documents Act (PIPEDA) could be reformed, in the words of the OPC, "in order to bolster privacy protection and achieve responsible innovation in a digital era that involves artificial intelligence (AI) systems." The document also invites privacy experts to validate the OPC's understanding of how privacy principles should apply and whether its proposals would be consistent with the responsible development and deployment of artificial intelligence, calling for responses to be submitted by March 13, 2020.

The Consultation Paper considers the perspectives of other bodies that have treated the issues raised by the various proposals at length, including the OECD, the IEEE and the UK Information Commissioner's Office among others. This makes the document substantial, and commenting on the Consultation Paper in its entirety is not feasible in a short post. In consequence, this bulletin will provide critical commentary on a few of the more salient points of interest.

There is no question that the recent convergence of cheap, on-demand computing resources, very large data collections and the development of machine learning platforms makes it timely to consider legal reforms that better address the promise and the risks surrounding the latest wave of developments in AI.

That said, any consideration of this topic must begin with a caveat: the term artificial intelligence has a long history in computer science, cognitive science and philosophy, and its meaning has become rather elastic.1 This can be useful for marketing but hinders legal analysis.

Defining AI

The first proposal provides a case in point. It considers whether reforms to PIPEDA should incorporate a definition of AI within the law that would serve to clarify which legal rules would apply only to it, and proposes the definition from the 2019 OECD Principles on Artificial Intelligence, to which Canada is a signatory, as a possible contender:

a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

The proposed definition, however well intentioned, is so broad as to be of little value for the intended purpose. Spam detection software falls squarely within it, yet that is clearly not what the OPC wishes to address with the proposed reforms. Decision-making software that takes its cues from a random noise generator could also fit within this scheme.2

If your reaction to the latter example is that random value selection is not a decision-making process, proceed with caution: a brief glance into scholarly literature about what constitutes a decision reveals discussion that typically imports notions of reasoning, considering, thinking and choosing,3 all of which are questionable concepts to deploy in the context of discrete state machines.

There is another problem to consider: there are many, many definitions of AI, each having their own merits and their own issues.4 Reading several of them in quick succession can lead to confusion as to whether the goal of such definitions is to describe systems that satisfy certain criteria in relation to cognitive processes, or just externally observable behavior. Moreover, some definitions introduce terms that lead to thornier issues similar to those alluded to above, such as perceiving,5 learning,6 or agency.7

It is difficult to avoid the conclusion that AI is an aspirational term that means fundamentally different things to different people. That is not a good foundation upon which to build a definition that will be enshrined in law. To the extent that legislators are only interested in drafting a definition that addresses the latest wave of technological developments, those that permit more sophisticated predictions and decision-making capacities than earlier iterations of computing machinery were capable of, the slogan "it's only AI until you understand it; then it's just software" comes to mind. Although somewhat glib and condescending, it also contains a kernel of truth.

All AI currently in play is narrow AI,8 much of it is also weak AI,9 and frankly none of it is intelligent (said while acknowledging that human intelligence is not necessarily the only kind of system we could recognize as intelligent).10 The OPC appears to endorse, through quotation, the view that many of the machines we will interact with in the future will be more akin to agents than mere devices,11 but note the use of the word akin. If we ever manage to create machines that we genuinely regard as having agency and not just the appearance of agency, the law will require significant reform, and not just in the domain of privacy and data protection. We are not there yet. As such, for the time being, AI systems could be governed by the same rules as other forms of processing, as the OPC puts it.

A rights-based approach

The Consultation Paper also proposes the introduction of a rights-based approach, noting that it would be consistent with many international instruments, including the GDPR, which has incorporated a human rights-based approach to privacy within the EU's data protection legislation.

Adoption of this proposal would likely allay some of the concerns that have arisen around the widespread deployment of AI systems in circumstances where those systems make decisions for and about us with little to no human involvement.

In considering this proposal, the most important question to ask is a very general policy question: how may we best arrange our institutions that govern privacy and data protection in a way that protects individual privacy while allowing us to reap the benefits that this technology offers?

The OPC proposal, to reimagine PIPEDA as a fundamentally rights-based legislative instrument, could be seen as a departure from the current legal framework that seeks balance by recognizing both individual privacy rights and the needs of organizations.12 The OPC has mentioned that balance on numerous occasions, most recently in its 2019 annual report to Parliament.13 In that annual report, however, the OPC rejects what it sees as an implication arising from this discourse to the effect that privacy rights and economic interests are engaged in a zero-sum game, noting that "a rights-based statute would serve to support responsible innovation by promoting trust in government and commercial activities."14

The OPC may be correct. No matter what approach or framework is settled upon, the question is whether it will protect individual rights while still supporting explorations of this technology that can lead to public benefit, economic or otherwise. There is no reason that the OPC's proposal would fail in principle, but it may be challenging to adopt a rights-based framework in a way that will provide sufficient latitude in that regard. The challenge could be met, in part, by providing alternative legal grounds for processing in certain circumstances, as suggested by one of the Paper's later proposals, which states that alternative grounds should be available in instances where obtaining meaningful consent is not possible. While that sounds like a practical solution, to the extent that the OPC wishes to put robust limits around the invocation of alternative legal grounds, it puts a great deal of pressure on the concept of meaningful consent. The next section considers whether that notion can take the strain.

Transparency, explainability, and interpretability

Supporting the OPC's default position that organizations using AI should still obtain meaningful consent where possible, the Consultation Paper includes a proposal to "[p]rovide individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing." To the extent that anxiety over the use of AI systems for automated decision-making arises in part because (for most members of the public) they are mysterious black boxes, it is worth making this a focus of attention.

The OPC notes that, as presently framed, the transparency principle lacks specificity. Better articulation of the transparency principle could greatly assist both individuals and organizations, and an explainability component could further assist in that regard, but only if the new law provides robust guidance on how transparency and explainability should play out in practice.

There is a good deal of uncertainty on the part of organizations as to how much explanation is appropriate and/or necessary when dealing with highly sophisticated processing systems, of which AI is just one example, particularly where such disclosures might reveal trade secrets. Having more explicit direction in the law could assist organizations in understanding their obligations and how those interact with their rights to maintain confidentiality around certain aspects of their systems, and if the new provisions are carefully fashioned, the outcome could be better individual understanding of how these systems work in ways that are pertinent to providing meaningful consent.

The challenge here should not be underestimated, however, given that the most prominent target for the AI reforms are the most sophisticated of these systems, the deep learning neural network architectures. The internal workings of these AI systems are notoriously opaque, even to their designers, and may not be explainable in the sense desired by the OPC.

This leads us to a useful distinction that is sometimes made between explainability and interpretability.15 Interpretability is concerned with comprehending what the target of interpretation has done with a given input, or might have done with a different input. Explainability is concerned with providing the reasons for a given output.

Many systems that we interact with every day are interpretable to us, but not necessarily explainable: a mobile phone, a car, an elevator. For most people, such systems are black boxes. Through experience, individuals will come to associate a variety of inputs with outputs, reactions or responses, and can also make successful predictions about how one of these systems would react to a certain input (even if that particular input had never been made by that individual). For such individuals, such systems are interpretable.

Yet, faced with an unexpected output, individuals who have learned only how to interpret a system may be unable to explain the result because they do not truly understand the system. No behaviour of a system that is fully explainable will be unexpected, apart from malfunctions. Even if a system is explainable in that sense to an expert, however, it may not be explainable to the average individual. That is why we typically rely on interpretability: we skip the (many) details that we just would not understand anyway.
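A toy example may make the distinction concrete. The scoring rule below is entirely hypothetical and stands in for any opaque automated decision system; the point is that probing inputs yields interpretability without ever producing an explanation.

```python
def black_box_decision(income: float, years_at_address: float) -> bool:
    # Internals would be hidden from the user in a real system.
    return (0.7 * income + 0.3 * years_at_address) > 50

# Probing: vary one input at a time and observe how the decision changes.
for income in (40, 60, 80):
    for years in (1, 5, 10):
        print(income, years, black_box_decision(income, years))

# From such probes a user learns to predict outcomes (interpretability) but still
# cannot state the system's reasons for any single decision (explainability).
```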

Does the OPC seek interpretability or explainability? The Consultation Paper does not invoke this distinction. Some of the OPCs comments suggest that it is trying to come to grips with some aspects of it, but those remarks also suggest that the office does not entirely understand the nature of the beast that it is trying to wrangle.

The OPC states that individuals should be provided with the reasoning underlying any automated processing of their data, and the consequences of such reasoning for their rights and interests. This suggests that the OPC is interested in requiring explainability. The OPC also states that it might support the notion of public filings for algorithms, and under another proposal, the OPC also seeks a requirement for algorithmic traceability. This suggests that the OPC imagines that the mechanics of all AI systems are amenable to an algorithmic reduction that makes them explainable, that the reasoning of these systems can be communicated in a meaningful way.

A true algorithmic trace of a deep learning system, moving stepwise from input to output, may be recoverable from examination of the weighted nodes and their interconnections; but the reasoning behind each step, and the sheer number of steps, would yield an algorithm that is no more comprehensible to regulators and individuals than it is to the system's designers. The patterns of interactions created by those nodes and interconnections are abstract, complex and use clusters of factors that make "no intuitive or theoretical sense."16 Providing this information to individuals will not create the conditions for meaningful consent.

In fact, the question as to whether to provide full explanations or just enough information to make a system interpretable for the average individual predates the existence of automated decision-making systems. With the advent of deep learning AI, however, the problem is thrown into sharp relief.

As such, while it is laudable for the OPC to be devoting attention to matters of transparency and explainability, in order to provide a practical legal framework it will need to give far more attention to this problem than it may have anticipated.

The right to object

The Consultation Paper also considers a proposal to provide a right to object to automated decision-making and not to be subject to decisions based solely on automated processing, subject to certain exceptions. Such a right is worth considering provided the exceptions are broadly drafted. The GDPR, as the Consultation Paper notes, provides exceptions when an automated decision is necessary for a contract; authorized by law; or where explicit consent is obtained.

This is a reasonable approach. Although at present we may be skeptical as to the quality of decisions provided by these systems, we may eventually come to place more trust in the decisions they deliver than those of humans, in certain circumstances. The discourse in autonomous vehicles provides an interesting example: the technology has shown enough promise that regulators, municipalities, and insurers are considering a future in which there could be fewer accidents and more efficient traffic flows where in-vehicle automated systems make the decisions. That might ultimately lead to a future in which we would want to curtail the right of individuals to intervene in the driving process, and we may even come to expect that reasonable people would not try to drive manually. Any reforms in PIPEDA that import a right to object to automated decision-making should be drafted to accommodate shifts in reasonable expectations and public policy.

Conclusion

Reform of Canada's privacy laws is needed, and some of that reform should be crafted with AI in mind. Based on what the Consultation Paper discloses, however, it is not feasible to validate completely those of the OPC's proposals that were discussed in this bulletin. While there is merit in those proposals, attempting to create a special regime to address AI directly (however defined) at this stage of its development would be premature; we have only inklings of how the latest wave of developments will ultimately play out. In the face of such uncertainty, we should maintain the flexibility that a law of general application can provide.


Ohio to Analyze State Regulations with Artificial Intelligence – Governing

(TNS) A new Ohio initiative aims to use artificial intelligence to guide an overhaul of the state's laws and regulations.

Lt. Gov. Jon Husted said his staff will use an AI software tool, developed for the state by an outside company, to analyze the state's regulations, numbered at 240,000 in a recent study by a conservative think-tank, and narrow them down for further review.

Husted compared the tool to an advanced search engine that will automatically identify and group together like terms, getting more sophisticated the more it's used.
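As a rough illustration of what grouping like terms can look like in practice, the sketch below ranks sample regulation passages against a query by TF-IDF similarity; the example texts and query are invented, and the state's actual tool is proprietary and certainly more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

regulations = [
    "A permit is required before operating a food truck on state property.",
    "Food vendors must obtain a license prior to operating on public land.",
    "Boilers shall be inspected annually by a certified inspector.",
]
query = "permit required to operate a mobile food vendor"

vectorizer = TfidfVectorizer(stop_words="english")
reg_matrix = vectorizer.fit_transform(regulations)        # vectorize the rule texts
query_vec = vectorizer.transform([query])                 # vectorize the search query
scores = cosine_similarity(query_vec, reg_matrix).ravel()

for text, score in sorted(zip(regulations, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")   # the permitting rules rank above the boiler rule
```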

He said the goal is to use the tool to streamline state regulations, such as eliminating permitting requirements deemed to be redundant, which is a long-standing policy goal of the Republicans who lead the state government.

"This gives us the capability to look at everything that's been done in 200 years in the state of Ohio and make sense of it," Husted said.

The project is part of two Husted-led efforts: the Common Sense Initiative, a state project to review regulations with the goal of cutting government red tape, and InnovateOhio, a Husted-led office that aims to use technology to improve Ohio's government operations.

Husted announced the project on Thursday at a meeting of the Small Business Advisory Council. The panel advises the state on government regulations and tries to identify challenges they can pose for business owners.

State officials sought bids for projects last summer, authorized through the state budget. Starting soon, Husted's staff will load the state's laws and regulations into the software, with the goal of starting to come up with recommendations for proposed law and rule changes before the summer.

Husted's office has authority to spend as much as $1.2 million on the project, although it could cost less, depending on how many user licenses they request.

"I don't know if it will be a small success, a medium success, or a large success," Husted said. "I don't want to over-promise, but we have great hope for it."

©2020 The Plain Dealer, Cleveland. Distributed by Tribune Content Agency, LLC.
