Harnessing the power of machine learning for improved decision-making – GCN.com

INDUSTRY INSIGHT

Across government, IT managers are looking to harness the power of artificial intelligence and machine learning techniques (AI/ML) to extract and analyze data to support mission delivery and better serve citizens.

Practically every large federal agency is executing some type of proof of concept or pilot project related to AI/ML technologies. The government's AI toolkit is diverse and spans the federal administrative state, according to a report commissioned by the Administrative Conference of the United States (ACUS). Nearly half of the 142 federal agencies canvassed have experimented with AI/ML tools, the report, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, states.

Moreover, AI tools are already improving agency operations across the full range of governance tasks, including regulatory mandate enforcement, adjudicating government benefits and privileges, monitoring and analyzing risks to public safety and health, providing weather forecasting information and extracting information from the trove of government data to address consumer complaints.

Agencies with mature data science practices are further along in their AI/ML exploration. However, because agencies are at different stages in their digital journeys, many federal decision-makers still struggle to understand AI/ML. They need a better grasp of the skill sets and best practices needed to derive meaningful insights from data powered by AI/ML tools.

Understanding how AI/ML works

AI mimics human cognitive functions such as the ability to sense, reason, act and adapt, giving machines the ability to act intelligently. Machine learning is a component of AI that involves training algorithms or models, which then make predictions about data they have not yet observed. ML models are not programmed like conventional algorithms. They are trained using data -- such as words, log data, time series data or images -- and make predictions on actions to perform.

Within the field of machine learning, there are two main types of tasks: supervised and unsupervised.

With supervised learning, data analysts have prior knowledge of what the output values for their samples should be. The AI system is specifically told what to look for, so the model is trained until it can detect underlying patterns and relationships. For example, an email spam filter is a machine learning program that can learn to flag spam after being given examples of spam emails that are flagged by users and examples of regular non-spam emails. The examples the system uses to learn are called the training set.
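To make the spam-filter example concrete, the following is a minimal supervised-learning sketch using scikit-learn; the handful of example emails and their labels are invented purely for illustration and are not drawn from the article.

```python
# A minimal supervised-learning sketch: a toy spam filter trained on a
# hand-labelled "training set", as described above. The example emails
# and labels are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_emails = [
    "Win a free prize now, click here",      # spam
    "Limited offer, claim your reward",      # spam
    "Meeting moved to 3pm, see agenda",      # not spam
    "Please review the attached report",     # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Vectorise the text into word counts, then fit a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(training_emails, labels)

# The trained model predicts labels for emails it has never seen.
print(model.predict(["Claim your free reward today"]))   # likely 'spam'
print(model.predict(["Agenda for tomorrow's meeting"]))  # likely 'ham'
```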

Unsupervised learning looks for previously undetected patterns in a dataset with no pre-existing labels and with a minimum of human supervision. For instance, data points with similar characteristics can be automatically grouped into clusters for anomaly detection, such as in fraud detection or identifying defective mechanical parts in predictive maintenance.
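Below is a minimal sketch of the clustering-for-anomaly-detection idea described above, assuming scikit-learn and a small synthetic dataset; the data, cluster count and flagging threshold are all invented for illustration.

```python
# A minimal unsupervised-learning sketch: cluster unlabelled, transaction-like
# data and flag points far from any cluster centre as potential anomalies.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal = rng.normal(loc=[0, 0], scale=0.5, size=(200, 2))   # typical records
outliers = rng.uniform(low=-6, high=6, size=(5, 2))         # unusual records
data = np.vstack([normal, outliers])

# No labels are provided; the algorithm finds structure on its own.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
distances = np.min(kmeans.transform(data), axis=1)  # distance to nearest centre

threshold = np.percentile(distances, 97)             # flag the most distant 3%
anomalies = data[distances > threshold]
print(f"Flagged {len(anomalies)} records for human review")
```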

Supervised, unsupervised in action

It is not a matter of which approach is better. Both supervised and unsupervised learning are needed for machine learning to be effective.

Both approaches were applied recently to help a large defense financial management and comptroller office resolve over $2 billion in unmatched transactions in an enterprise resource planning system. Many tasks required significant manual effort, so the organization implemented a robotic process automation solution to automatically access data from various financial management systems and process transactions without human intervention. However, RPA fell short when data variances exceeded tolerance for matching data and documents, so AI/ML techniques were used to resolve the unmatched transactions.

The data analyst team used supervised learning, drawing on the preexisting rules that had produced these transactions. The team then provided additional value by applying unsupervised ML techniques to find patterns in the data that they were not previously aware of.

To get a better sense of how AI/ML can help agencies better manage data, it is worth considering these three steps:

Data analysts should think of these steps as a continuous loop. If the output from unsupervised learning is meaningful, they can incorporate it into the supervised learning modeling. Thus, they are involved in a continuous learning process as they explore the data together.

Avoiding pitfalls

It is important for IT teams to realize they cannot just feed data into machine learning models, especially with unsupervised learning, which is a little more art than science. That is where humans really need to be involved. Also, analysts should avoid over-fitting models by seeking to derive too much insight from the data.

Remember: AI/ML and RPA are meant to augment humans in the workforce, not merely replace people with autonomous robots or chatbots. To be effective, agencies must strategically organize around the right people, processes and technologies to harness the power of innovative technologies such as AI/ML to achieve the performance they need at scale.

About the Author

Samuel Stewart is a data scientist with World Wide Technology.


Agencies may issue information request on AI adoption, Fed official says – American Banker

WASHINGTON -- The Federal Reserve and other banking regulators are considering a formal request for public feedback about the adoption of artificial intelligence in the financial services sector, Fed Gov. Lael Brainard said Tuesday.

If the agencies move forward with the request for information, it could be the first step toward an interagency policy on AI.

Brainard said the RFI would accompany the Fed's own efforts to explore how AI and machine learning can be used for bank supervision purposes.

"To ensure that society benefits from the application of AI to financial services, we must understand the potential benefits and risks, and make clear our expectations for how the risks can be managed effectively by banks," Brainard said in remarks for a symposium on AI hosted by the central bank. "Regulators must provide appropriate expectations and adjust those expectations as the use of AI in financial services and our understanding of its potential and risks evolve."


Financial institutions have started using AI for operational risk management purposes and for customer-facing applications, as well as fraud prevention efforts, Brainard said. Those functions could remake the way banks monitor suspicious activity, she added.

"Machine learning-based fraud detection tools have the potential to parse through troves of data -- both structured and unstructured -- to identify suspicious activity with greater accuracy and speed, and potentially enable firms to respond in real time," she said.

AI could also be used to analyze alternative data for customers, Brainard said. She added that that could be particularly helpful to the segment of the population that is "credit invisible."

But Brainard also acknowledged challenges in the widespread adoption of AI and machine learning in banking. If models are based on historical data that has racial bias baked in, they could amplify rather than ameliorate racial gaps in access to credit and lead to digital redlining.

"It is our collective responsibility to ensure that as we innovate, we build appropriate guardrails and protections to prevent such bias and ensure that AI is designed to promote equitable outcomes," she said.

Brainard explained that there is also often a lack of transparency in how AI and machine learning processes work behind the scenes to accomplish tasks. The adoption of the technology in the financial services space should avoid a "one size fits all" explanation, she said.

"To ensure that the model comports with fair-lending laws that prohibit discrimination, as well as the prohibition against unfair or deceptive practices, firms need to understand the basis on which a machine learning model determines creditworthiness," she said.


Connected and autonomous vehicles: Protecting data and machine learning innovations – Lexology

The development of connected and autonomous vehicles (CAVs) is technology-driven and data-centric. Zenzic's Roadmap to 2030 highlights that 'the intelligence of self-driving vehicles is driven by advanced features such as artificial intelligence (AI) or machine learning (ML) techniques'.[1] Developers of connected and automated mobility (CAM) technologies are engineering advances in machine learning and machine analysis techniques that can create valuable, potentially life-saving, insights from the massive well of data that is being generated.

Diego Black and Lucy Pegler take a look at the legal and regulatory issues involved in protecting data and innovations in CAVs.

The data of driving

It is predicted that the average driverless car will produce around 4TB of data per day, including data on traffic, route choices, passenger preferences, vehicle performance and many more data points[2].

'Data is foundational to emerging CAM technologies, products and services driving their safety, operation and connectivity'.[3]

As Burges Salmon and AXA UK outlined in their joint report as part of FLOURISH, an Innovate UK-funded CAV project, the data produced by CAVs can be broadly divided into a number of categories based on its characteristics: for example, sensitive commercial data, commercial data and personal data. How data should be protected will depend on its characteristics and, importantly, the purposes for which it is used. The use of personal data (i.e. data from which an individual can be identified) attracts particular consideration.

The importance of data to the CAM industry and, in particular, the need to share data effectively to enable the deployment and operation of CAM, needs to be balanced against data protection considerations. In 2018, the Open Data Institute (ODI) published a report setting out that it considered all journey data to be personal data,[4] consequently bringing journey data within the scope of the General Data Protection Regulation.[5]

Additionally, the European Data Protection Board (EDPB) has confirmed that the ePrivacy directive (2002/58/EC as revised by 2009/136/EC) applies to connected vehicles by virtue of 'the connected vehicle and every device connected to it [being] considered as a 'terminal equipment'.'[6] This means that any machine learning innovations deployed in CAVs will inevitably process vast amounts of personal data. The UK Information Commissioner's Office has issued guidance on how best to harness both big data and AI in relation to personal data, including emphasising the need for industry to deploy ethical principles, create ethics boards to monitor the new uses of data and ensure that machine learning algorithms are auditable.[7]

Navigating the legal frameworks that apply to the use of data is complex and whilst the EDPB has confirmed its position in relation to connected vehicles, automated vehicles and their potential use cases raise an entirely different set of considerations. Whilst the market is developing rapidly, use case scenarios for automated mobility will focus on how people consume services. Demand responsive transport and ride sharing are likely to play a huge role in the future of personal mobility.

The main issue policy makers now face is the ever-evolving nature of the technology. As new, potentially unforeseen, technologies are integrated into CAVs, the industry will require a stringent data protection framework on the one hand, and flexibility and accessibility on the other. These two policy goals are necessarily at odds with one another, and the industry will need to take a realistic, privacy-by-design approach to future development, working with rather than against regulators.

Whilst the GDPR and ePrivacy Directive will likely form the building blocks of future regulation of CAV data, we anticipate the development of a complementary framework of regulation and standards that recognises the unique applications of CAM technologies and the use of data.

Cyber security

The prolific and regular nature of cyber-attacks poses risks to both public acceptance of CAV technology and to the underlying business interests of organisations involved in the CAV ecosystem.

New technologies can present threats to existing cyber security measures. Tarquin Folliss of Reliance acsn highlights this, noting that 'a CAV's mix of operational and information technology will produce systems complex to monitor, where intrusive endpoint monitoring might disrupt inadvertently the technology underpinning safety'. The threat is even more acute when thinking about CAVs in action and, as Tarquin notes, the ability for 'malign actors to target a CAV network in the same way they target other critical national infrastructure networks and utilities, in order to disrupt'.

In 2017, the government announced its 8 Key Principles of Cyber Security for Connected and Automated Vehicles. This, alongside the DCMS IoT code of practice, the CCAV's CAV code of practice and the BSI's PAS 1885, provides a good starting point for CAV manufacturers. Best practices include:

Work continues at pace on cyber security for CAM. In May this year, Zenzic published its Cyber Resilience in Connected and Automated Mobility (CAM) Cyber Feasibility Report which sets out the findings of seven projects tasked with providing a clear picture of the challenges and potential solutions in ensuring digital resilience and cyber security within CAM.

Demonstrating the pace of work in the sector, in June 2020 the United Nations Economic Commission for Europe (UNECE) published two new UN Regulations focused on cyber security in the automotive sector. The Regulations represent another step-change in the approach to managing the significant cyber risk of an increasingly connected automotive sector.

Protecting innovation

As innovation in the CAV sector increases, issues regarding intellectual property and its protection and exploitation become more important. Companies that historically were not involved in the automotive sector are now rapidly becoming key partners, providing expertise in technologies such as IT security, telecoms, blockchain and machine learning. In autonomous vehicles, many of the biggest patent filers have software and telecoms backgrounds[8].

With the increasing use of in-car and inter-car connectivity, and the growing amount of data that must be handled per second as levels of autonomy rise, innovators in the CAV space are having to address data security as well as determine how best to handle the large data sets. Furthermore, the recent UK government call for evidence on automated lane keeping systems is seen by many as the first step towards standards being introduced for autonomous vehicles.

In view of these developments, companies looking to benefit from their innovations face new challenges. Unlike more traditional automotive innovation, where the advances lay in improvements to engineering and machinery, many of the innovations in the CAV space reside in electronics and software development. The ability to protect and exploit inventions in the software space has become increasingly relevant in the automotive industry.

Multiple Intellectual Property rights exist that can be used to protect innovations in CAVs. Some rights can be particularly effective in areas of technology where standards exist, or are likely to exist. Two of the main ways seen at present are through the use of patents and trade secrets. Both can be used in combination, or separately, to provide an effective IP strategy. Such an approach is seen in other industries such as those involved in data security.

For companies that are developing or improving machine learning models, or training sets, the use of trade secrets is particularly common. Companies relying on trade secrets may often license access to, or sell the outputs of, their innovations. Advantageously, trade secrets are free and last indefinitely.

An effective strategy in such fields is to obtain patents that cover the technological standard. By definition, if a third party were to adhere to the defined standard, they would necessarily fall within the scope of the patent, thus providing the owner of the patent with a potential revenue stream through licensing agreements. If, as anticipated, standards are set in CAVs, any company that can obtain patents covering the likely standard will be at an advantage. Such licenses are typically offered on a fair, reasonable and non-discriminatory (FRAND) basis, to ensure that companies are not prevented by patent holders from entering the market.

A key consideration is that the use of trade secrets may be incompatible with the use of standards. If technology standards are introduced for autonomous vehicles, companies would have to demonstrate that their technology complies with those standards, and the use of trade secrets may be incompatible with that need to demonstrate compliance.

However, whilst a patent provides a stronger form of protection, in order to enforce a patent the owner must be able to demonstrate that a third party is performing the acts defined in the patent. In the case of machine learning and mathematical-based methods, such information is often kept hidden, making proving infringement difficult. As a result, patents in such areas are often directed towards a visible, or tangible, output; for example, in CAVs this may be the control of a vehicle based on the improvements in the machine learning. Due to the difficulty in demonstrating infringement, many companies are choosing to protect their innovations with a mixture of trade secrets and patents.

Legal protections for innovations

For the innovations typically seen in the software side of CAVs, trade secrets and patents are the two main forms of protection.

Trade secrets are, as the name implies, where a company keeps all, or part of, its innovation a secret. In software-based inventions this may be in the form of a black-box disclosure where the workings and functionality of the software are kept secret. However, steps do need to be taken to keep the innovation secret, and trade secrets do not prevent a third party from independently implementing, or reverse engineering, the innovation. Furthermore, once a trade secret is made public, the value associated with it is gone.

Patents are an exclusive right, lasting up to 20 years, which allow the holder to prevent, or request a license from, a third party utilising the technology that is covered by the scope of the patent in that territory. Therefore it is not possible to enforce, say, a US patent in the UK. Unlike trade secrets, publication of patents is an important part of the process.

In order for inventions to be patented they must be new (that is to say, they have not been disclosed anywhere in the world before), inventive (not run-of-the-mill improvements), and concern non-excluded subject matter. The exclusions in the UK and Europe cover software and mathematical methods 'as such', amongst other fields. In the case of CAVs, a large number of inventions are developed that could fall into the software and mathematical methods categories.

The test regarding whether or not an invention may be seen as excluded subject matter varies between jurisdictions. In Europe, if an invention is seen to solve a technical problem, for example one relating to the control of vehicles, it would be deemed allowable. Many of the innovations in CAVs can be tied to technical problems relating to, for example, the control of vehicles or improvements in data security. As such, on the whole, CAV inventions may escape the exclusions.

What does the future hold?

Technology is advancing at a rapid rate. At the same time as industry develops more and more sophisticated software to harness data, bad actors gain access to more advanced tools. To combat these increased threats, CAV manufacturers need to be putting in place flexible frameworks to review and audit their uses of data now, looking toward the developments of tomorrow to assess the data security measures they have today. They should also be looking to protect some of their most valuable IP assets from the outset, including machine learning developments in a way that is secure and enforceable.


AI Helps Solve Schrödinger's Equation: What Does The Future Hold? – Analytics India Magazine

Scientists at the Freie Universität Berlin have come up with an AI-based solution for calculating the ground state of the Schrödinger equation in quantum chemistry.

The Schrödinger equation is primarily used to predict the chemical and physical properties of a molecule based on the arrangement of its atoms. The equation helps determine where the electrons and nuclei of a molecule are and what their energies are under a given set of conditions.
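For reference, the time-independent form of the equation commonly used in quantum chemistry can be written as follows; this is the standard textbook statement, not a formula taken from the study itself.

```latex
% Time-independent Schrödinger equation: the Hamiltonian operator acting on
% the many-electron wave function returns the total energy of that state.
\hat{H}\,\Psi(\mathbf{r}_1, \dots, \mathbf{r}_N) = E\,\Psi(\mathbf{r}_1, \dots, \mathbf{r}_N)
```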

The equation has the same central importance in quantum mechanics -- that is, for atoms and subatomic particles -- as Newton's laws of motion, which can predict an object's position at a particular moment, have in classical mechanics.

The article describes how the neural network developed by the scientists at the Freie Universität Berlin brings more accuracy to solving the Schrödinger equation and what this means for the future.

In principle, the Schrödinger equation can be solved to predict the exact location of atoms or subatomic particles in a molecule, but in practice this is extremely difficult, since it involves a lot of approximation.

Central to the equation is a mathematical object, the wave function, which specifies the behaviour of the electrons in a molecule. But the high dimensionality of the wave function makes it extremely difficult to work out how electrons affect each other. Thus the most you get from the mathematical representations is a probabilistic account, not exact answers.

This limits the accuracy with which we can find properties of a molecule -- like its configuration, conformation, size and shape -- that help define the wave function. The process becomes so complex that it is impossible to apply the equation beyond a few atoms.

Replacing the mathematical building blocks, the scientists at Freie Universität Berlin came up with a deep neural network that is capable of learning the complex patterns of how electrons are located around the nuclei.

The scientists developed a deep neural network (DNN) model, PauliNet, that has several advantages over conventional methods for studying quantum systems, such as Quantum Monte Carlo or other classical quantum chemistry methods.

The DNN model developed by these scientists is highly flexible and allows for a variational approach that can aid accurate calculation of electronic properties beyond the electronic energies.

Secondly, it also enables the easy calculation of many-body and more complex correlations with fewer determinants, reducing the need for higher computational power. The model mainly helped solve a major tradeoff between accuracy and computational cost that is often faced while solving the Schrödinger equation.

The model can also calculate the local energy of heavy nuclei like heavy metals without using pseudo-potentials or approximations.

Lastly, the model developed in the study has anti-symmetry functions and other principles crucial to electronic wave functions integrated into the DNN, rather than leaving the model to learn them. Building fundamental physics into the model in this way has helped it make meaningful and accurate predictions.

In recent years, artificial intelligence has helped solve many scientific problems that otherwise seemed impossible using traditional methods.

AI has become instrumental in anticipating the results of experiments or simulations of quantum systems, especially given the complex nature of the science. In 2018, reinforcement learning was used to autonomously design new quantum experiments in automated laboratories.

Recent efforts by the University of Warwick, and others by IBM and DeepMind, have also tried to solve the Schrödinger equation. However, PauliNet, with its greater accuracy in solving the equation, presents the potential for use in many real-life applications.

Understanding a molecule's composition can help accelerate drug discovery, which was previously difficult because of the approximations needed to understand a molecule's properties.

Similarly, it could also help in discovering other materials or metamaterials, such as new catalysts, industrial chemicals and new pesticides, among others. It can also be used in characterising molecules that are synthesised in laboratories.

Several academic and commercial software packages use the Schrödinger equation at their core, and the accuracy of this software will improve. Quantum computing itself is based on the quantum phenomenon of superposition and is made up of qubits that take advantage of that principle; quantum computing performance will improve as qubits can be measured faster.

While the current study has come up with a faster, cheaper and more accurate solution, there are many challenges to overcome before it is industry-ready.

However, once it is ready, the world will witness many applications as a result of greater accuracy in solving the Schrödinger equation.


The Year Ahead: 3 Predictions From the ‘Father of the Internet’ Vint Cerf – Nextgov

In 2011, the movie "Contagion" eerily predicted what a future world fighting a deadly pandemic would look like. In 2020, I, along with hundreds of thousands of people around the world, saw this Hollywood prediction play out by being diagnosed with COVID-19. It was a frightening year by any measure, as every person was impacted in unique ways.

Having been involved in the development of the Internet in the 1970s, I've seen first-hand the impact of technology on people's lives. We are now seeing another major milestone in our lifetime: the development of a COVID-19 vaccine.

What the"Contagion" didnt show is what happens after a vaccine is developed. Now, as we enter 2021, and with the first doses of a COVID-19 vaccine being administered, a return to normal feels within reach. But what will our return to normal look like really? Here are threepredictions for 2021.

1. Continuous and episodic Internet of Medical Things monitoring devices will prove popular for remote medical diagnosis. The COVID-19 pandemic has dramatically changed the practice of clinical medicine at least in the parts of the world where Internet access is widely available and at high enough speeds to support video conferencing. A video consult is often the only choice open to patients short of going to a hospital when outpatient care is insufficient. Video-medicine is unsatisfying in the absence of good clinical data (temperature, blood pressure, pulse for example). The consequence is that health monitoring and measurement devices are increasingly valued to support remote medical diagnosis.

My Prediction: While the COVID-19 pandemic persists into 2021, demand for remote monitoring and measurement will increase. In the long run, this will lead to periodic and continuous monitoring and alerting for a wide range of chronic medical conditions. Remote medicine and early warning health prediction will in turn help citizens save on health care costs and improve and further extend life expectancy.

2. Cities will (finally) adopt self-driving cars. Self-driving cars are anything but new, having emerged from a Defense Advanced Research Projects Agency Grand Challenge in 2004. Sixteen years later, many companies are competing to make this a reality, but skeptics of the technology remain.

My Prediction: In the COVID-19 aftermath, I predict driverless car service will grow in 2021 as people will opt for rides that minimize exposure to drivers and self-clean after every passenger. More cities and states will embrace driverless technology to accommodate changing transportation and public transportation preferences.

3. A practical quantum computation will be demonstrated. In 2019, Google reported that it had demonstrated an important quantum supremacy milestone by showing a computation in minutes that would have taken a conventional computer thousands of years to complete. The computation, however, did not solve any particular practical problem.

My Prediction: In the intervening period, progress has been made and it seems likely that by 2021, we will see some serious application of quantum computing to solve one or more optimization problems in mechanical design, logistics scheduling or resource allocation that would be impractical with conventional supercomputing.

Despite the challenges 2020 presented, it also unlocked some opportunities like leapfrogging with tech adoption. My hope is that the public sector sustains the speed for innovation and development to unlock even greater advancements in the year ahead.

Vinton G. Cerf is vice president and chief Internet evangelist for Google. Cerf has held positions at MCI, the Corporation for National Research Initiatives, Stanford University, UCLA and IBM. Vint Cerf served as chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) and was founding president of the Internet Society. He served on the U.S. National Science Board from 2013-2018.


Photonic processor heralds new computing era – The Engineer

A multinational team of researchers has developed a photonic processor that uses light instead of electronics and could help usher in a new dawn in computing.

Current computing relies on electrical current passed through circuitry on ever-smaller chips, but in recent years this technology has been bumping up against its physical limits.

To facilitate the next generation of computation-hungry technology such as artificial intelligence and autonomous vehicles, researchers have been searching for new methods to process and store data that circumvent those limits, and photonic processors are the obvious candidate.


Featuring scientists from the Universities of Oxford, Münster, Exeter and Pittsburgh, École Polytechnique Fédérale de Lausanne (EPFL) and IBM Research Europe, the team developed a new approach and processor architecture.

The photonic prototype essentially combines processing and data storage functionalities onto a single chip so-called in-memory processing, but using light.

"Light-based processors for speeding up tasks in the field of machine learning enable complex mathematical tasks to be processed at high speeds and throughputs," said Münster University's Wolfram Pernice, one of the professors who led the research.

"This is much faster than conventional chips which rely on electronic data transfer, such as graphic cards or specialised hardware like TPUs [Tensor Processing Units]."

Led by Pernice, the team combined integrated photonic devices with phase-change materials (PCMs) to deliver super-fast, energy-efficient matrix-vector (MV) multiplications. MV multiplications underpin much of modern computing, from AI to machine learning and neural network processing, and the imperative to carry out such calculations at ever-increasing speeds, but with lower energy consumption, is driving the development of a whole new class of processor chips: so-called tensor processing units (TPUs).

The team developed a new type of photonic TPU capable of carrying out multiple MV multiplications simultaneously and in parallel. This was facilitated by using a chip-based frequency comb as a light source, which enabled the team to use multiple wavelengths of light to do parallel calculations, since light of different colours does not interfere.
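The operation being parallelised here is the ordinary matrix-vector product. As a rough software analogy only (not a model of the photonic hardware), batching several MV multiplications at once, one per notional "wavelength channel", looks like this in NumPy:

```python
# A rough software analogy of wavelength-parallel matrix-vector multiplication.
# Each "channel" stands in for one optical wavelength carrying its own input
# vector; the photonic hardware performs the equivalent operation in optics.
import numpy as np

weights = np.random.rand(4, 8)      # one weight matrix (e.g. a network layer)
channels = np.random.rand(16, 8)    # 16 input vectors, one per wavelength

# All 16 matrix-vector products are computed in a single batched operation,
# loosely mirroring how the photonic TPU processes wavelengths in parallel.
outputs = channels @ weights.T      # shape: (16, 4)
print(outputs.shape)
```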

"Our study is the first to apply frequency combs in the field of artificial neural networks," said Tobias Kippenberg, Professor at EPFL.

"The frequency comb provides a variety of optical wavelengths which are processed independently of one another in the same photonic chip."

Described in Nature, the photonic processor is part of a new wave of light-based computing that could fundamentally reshape the digital world and prompt major advances in a range of areas, from AI and neural networks to medical diagnosis.

"Our results could have a wide range of applications," said Prof Harish Bhaskaran from the University of Oxford.

"A photonic TPU could quickly and efficiently process huge data sets used for medical diagnoses, such as those from CT, MRI and PET scanners."


Global Healthcare Artificial Intelligence Report 2020-2027: Market is Expected to Reach $35,323.5 Million – Escalation of AI as a Medical Device -…

Dublin, Jan. 08, 2021 (GLOBE NEWSWIRE) -- The "Artificial intelligence in Healthcare Global Market - Forecast To 2027" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence in healthcare global market is expected to reach $35,323.5 million by 2027, growing at an exponential CAGR from 2020 to 2027 due to the gradual transition from volume-based to value-based healthcare.

Growth is also driven by the surging need to accelerate and increase the efficiency of drug discovery and clinical trial processes, the advancement of precision medicines, the escalation of AI as a medical device, the increasing prevalence of chronic and communicable diseases, the escalating geriatric population and the increasing trend of acquisitions, collaborations and investments in the AI in healthcare market.

Artificial intelligence (AI) is the collection of computer programs or algorithms or software to make machines smarter and enable them to simulate human intelligence and perform various higher-order value-based tasks like visual perception, translation between languages, decision making and speech recognition.

The rapidly evolving vast and complex healthcare industry is slowly deploying AI solutions into its mainstream workflows to increase the productivity of various healthcare services efficiently without burdening the healthcare personnel, to streamline and optimize the various healthcare-associated administrative workflows, to mitigate the physician deficit and burnout issues effectively, to democratize the value-based healthcare services across the globe and to efficiently accelerate the drug discovery and development process.

Artificial intelligence in healthcare global market is classified based on the application, end-user and geography.

Based on application, the market is segmented into medical diagnosis, drug discovery, precision medicines, clinical trials, healthcare documentation management and others, the latter consisting of AI-guided robotic surgical procedures and AI-enhanced medical device and pharmaceutical manufacturing processes.

The AI-powered Healthcare documentation management solutions segment accounted for the largest revenue in 2020 and is expected to grow at an exponential CAGR from 2020 to 2027. AI-enhanced Drug Discovery solutions segment is the fastest emerging segment, growing at an exponential CAGR from 2020 to 2027.

The artificial intelligence in healthcare global end-users market is grouped into hospitals and diagnostic laboratories, pharmaceutical companies, research institutes and other end-users, the latter consisting of health insurance companies, medical device and pharmaceutical manufacturers and patients or individuals in home-care settings.

Among these end users, the Hospitals and Diagnostic Laboratories segment accounted for the largest revenue in 2020 and is expected to grow at an exponential CAGR during the forecast period. The Pharmaceutical Companies segment is the fastest-growing, at an exponential CAGR from 2020 to 2027.

The artificial intelligence in healthcare global market by geography is segmented into North America, Europe, Asia-Pacific and the Rest of the World (RoW). The North American region dominated the global artificial intelligence in healthcare market in 2020 and is expected to grow at an exponential CAGR from 2020 to 2027. The Asia-Pacific region is the fastest-growing region, growing at an exponential CAGR from 2020 to 2027.

The artificial intelligence in healthcare market is consolidated, with the top five players occupying the majority of the market share and the remaining minority share being occupied by other players.

Key Topics Covered:

1 Executive Summary

2 Introduction

3 Market Analysis
3.1 Introduction
3.2 Market Segmentation
3.3 Factors Influencing Market
3.3.1 Drivers and Opportunities
3.3.1.1 AI Abetting the Transition from Volume Based to Value Based Healthcare
3.3.1.2 Acceleration and Increasing Efficiency of Drug Discovery and Clinical Trials
3.3.1.3 Escalation of Artificial Intelligence as a Medical Device
3.3.1.4 Advancement of Precision Medicines
3.3.1.5 Acquisitions, Investments and Collaborations to Open An Array of Opportunities for the Market to Flourish
3.3.1.6 Increasing Prevalence of Chronic, Communicable Diseases and Escalating Geriatric Population
3.3.2 Restraints and Threats
3.3.2.1 Data Privacy Issues
3.3.2.2 Reliability Issues and Black Box Reasoning Challenges
3.3.2.3 Ethical Issues and Increasing Concerns Over Human Workforce Replacement
3.3.2.4 Requirement of Huge Investment for the Deployment of AI Solutions
3.3.2.5 Lack of Interoperability Between AI Vendors
3.4 Regulatory Affairs
3.4.1 International Organization for Standardization
3.4.2 ASTM International Standards
3.4.3 U.S.
3.4.4 Canada
3.4.5 Europe
3.4.6 Japan
3.4.7 China
3.4.8 India
3.5 Porter's Five Force Analysis
3.6 Clinical Trials
3.7 Funding Scenario
3.8 Regional Analysis of AI Start-Ups
3.9 Artificial Intelligence in Healthcare FDA Approval Analysis
3.10 AI Leveraging Key Deal Analysis
3.11 AI Enhanced Healthcare Products Pipeline
3.12 Patent Trends
3.13 Market Share Analysis by Major Players
3.13.1 Artificial Intelligence in Healthcare Global Market Share Analysis
3.14 Artificial Intelligence in Healthcare Company Comparison Table by Application, Sub-Category, Product/Technology and End-User

4 Artificial Intelligence in Healthcare Global Market, by Application
4.1 Introduction
4.2 Medical Diagnosis
4.3 Drug Discovery
4.4 Clinical Trials
4.5 Precision Medicine
4.6 Healthcare Documentation Management
4.7 Other Applications

5 Artificial Intelligence in Healthcare Global Market, by End-User
5.1 Introduction
5.2 Hospitals and Diagnostic Laboratories
5.3 Pharmaceutical Companies
5.4 Research Institutes
5.5 Other End-Users

6 Regional Analysis

7 Competitive Landscape
7.1 Introduction
7.2 Partnerships
7.3 Product Launch
7.4 Collaboration
7.5 Up-Gradation
7.6 Adoption
7.7 Product Approval
7.8 Acquisition
7.9 Others

8 Major Companies
8.1 Alphabet Inc. (Google DeepMind, Verily Life Sciences)
8.2 General Electric Company
8.3 Intel Corporation
8.4 International Business Machines Corporation (IBM Watson)
8.5 Koninklijke Philips N.V.
8.6 Medtronic Public Limited Company
8.7 Microsoft Corporation
8.8 Nuance Communications Inc.
8.9 Nvidia Corporation
8.10 Welltok Inc.

For more information about this report visit https://www.researchandmarkets.com/r/dxs2ch

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.


Does Artificial Intelligence Have Psychedelic Dreams and Hallucinations? – Analytics Insight

It is safe to say that the closest thing to human intelligence and abilities is artificial intelligence. Powered by tools such as machine learning, deep learning and neural networks, existing artificial intelligence models are capable of a great many things. However, do they dream or have psychedelic hallucinations like humans? Can the generative features of deep neural networks experience dream-like surrealism?

Neural networks are a type of machine learning focused on building trainable systems for pattern recognition and predictive modeling. Here the network is made up of layers -- the higher the layer, the more precise the interpretation. Input data passes through all the layers, as the output of one layer is fed into the next. Just as the neuron is the basic unit of the human brain, the perceptron is the essential building block of a neural network. A perceptron in a neural network accomplishes simple signal processing, and these are then connected into a large mesh network.
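As an illustration of that building block, here is a minimal perceptron written in plain NumPy; the tiny logical-AND dataset and the training settings are chosen purely for demonstration.

```python
# A minimal perceptron: weighted sum of inputs, a bias, and a step activation,
# trained here on a toy AND problem purely for illustration.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND targets

weights = np.zeros(2)
bias = 0.0
lr = 0.1

def predict(x):
    # Simple signal processing: weighted sum followed by a threshold.
    return 1 if np.dot(weights, x) + bias > 0 else 0

for _ in range(10):                      # a few passes over the data
    for xi, target in zip(X, y):
        error = target - predict(xi)
        weights += lr * error * xi       # nudge weights toward the correct output
        bias += lr * error

print([predict(xi) for xi in X])         # expected: [0, 0, 0, 1]
```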

The Generative Adversarial Network (GAN) is a type of neural network that was first introduced in 2014 by Ian Goodfellow. Its objective is to produce fake images that are as realistic as possible. GANs have disrupted the development of fake images: deepfakes. The "deep" in deepfake is drawn from deep learning. To create deepfakes, neural networks are trained on multiple datasets. These datasets can be textual or audio-visual, depending on the type of content we want to generate. With enough training, the neural networks will be able to create numerical representations of the new content, like a deepfake image. Next, all we have to do is rewire the neural networks to map the image onto the target. Deepfakes can also be created using autoencoders, which are a type of unsupervised neural network. In fact, autoencoders are the primary type of neural network used in the creation of most deepfakes.
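Below is a heavily simplified sketch of the GAN training idea in PyTorch: toy random vectors stand in for the image data the article describes, and all dimensions and hyperparameters are illustrative assumptions rather than anything from the article.

```python
# A heavily simplified GAN training loop in PyTorch: a generator learns to
# produce samples the discriminator cannot tell apart from "real" data.
# Toy vectors stand in for images; everything here is for illustration only.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 16
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(200):
    real = torch.randn(64, data_dim)                 # stand-in for real images
    fake = generator(torch.randn(64, latent_dim))    # generator's attempts

    # Train the discriminator to label real samples 1 and fake samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator into outputting 1.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```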

In 2015, a mysterious photo appeared on Reddit showing a monstrous mutant. This photo was later revealed to be the product of a Google artificial neural network. Many pointed out that this inhuman and scary-looking photo bore a striking resemblance to what one sees on psychedelic substances such as mushrooms or LSD. Basically, Google engineers decided that instead of asking the software to generate a specific image, they would simply feed it an arbitrary image and then ask it what it saw.

As per an abstract on Popular Science, Google used the artificial neural network to amplify patterns it saw in pictures. Each artificial neural layer works on a different level of abstraction, meaning some picked up edges based on tiny levels of contrast, while others found shapes and colors. They ran this process to accentuate color and form, and then told the network to go buck wild and keep accentuating anything it recognized. In the lower levels of the network, the results were similar to Van Gogh paintings: images with curving brush strokes, or images with Photoshop filters. After running these images through the higher levels, which recognize full images like dogs, over and over, leaves transformed into birds and insects, and mountain ranges transformed into pagodas and other disturbing, hallucinatory images.
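The "keep accentuating anything it recognizes" step is essentially gradient ascent on the input image. Here is a bare-bones sketch of that idea in PyTorch; the choice of model, layer and step size are assumptions for illustration, not the exact procedure Google used.

```python
# A bare-bones DeepDream-style loop: repeatedly nudge the input image in the
# direction that increases the activations of a chosen layer, so the network
# "accentuates anything it recognizes". Simplified and illustrative only.
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
layer_index = 20                      # which layer's activations to amplify

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise

for _ in range(30):
    activation = image
    for i, layer in enumerate(model):
        activation = layer(activation)
        if i == layer_index:
            break
    loss = activation.norm()          # "how strongly does this layer respond?"
    loss.backward()
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.norm() + 1e-8)
        image.grad.zero_()
```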

A few years ago, Google's AI company DeepMind was working on a new technology which allows robots to dream in order to improve their rate of learning.

In a new article published in the scientific journal Neuroscience of Consciousness, researchers demonstrate how classic psychedelic drugs such as DMT, LSD and psilocybin selectively change the function of serotonin receptors in the nervous system. And for this they gave virtual versions of the substances to neural network algorithms to see what happens.

Scientists from Imperial College London and the University of Geneva managed to recreate DMT hallucinations by tinkering with powerful image-generating neural nets so that their usually photorealistic outputs became distorted blurs. Surprisingly, the results were a close match to how people have described their DMT trips. As per Michael Schartner, a member of the International Brain Laboratory at the Champalimaud Centre for the Unknown in Lisbon, "The process of generating natural images with deep neural networks can be perturbed in visually similar ways and may offer mechanistic insights into its biological counterpart in addition to offering a tool to illustrate verbal reports of psychedelic experiences."

The objective behind this was to better uncover the mechanisms behind the trippy visions.

One basic difference between the human brain and a neural network is that our neurons communicate in a multi-directional manner, unlike the feed-forward mechanism of Google's neural network. Hence, what we see is a combination of visual data and our brain's best interpretation of that data. This is also why our brain tends to fail in the case of optical illusions. Further, under the influence of drugs, our ability to perceive visual data is impaired, hence we tend to see psychedelic and morphed images.

While we have found an answer to "Do Androids Dream of Electric Sheep?" by Philip K. Dick, the American sci-fi novelist -- which is no, artificial intelligence has its own bizarre dreams instead -- we are yet to uncover answers about our own dreams. Once we achieve that, we can program neural models to produce visual output or deepfakes as we expect. Besides, we may also solve the mystery behind black-box decisions.


Artificial Intelligence ABCs: What Is It and What Does it Do? – JD Supra

Artificial intelligence is one of the hottest buzzwords in legal technology today, but many people still don't fully understand what it is and how it can impact their day-to-day legal work.

According to the Brookings Institution, artificial intelligence generally refers to "machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention." In other words, artificial intelligence is technology capable of making decisions that generally require a human level of expertise. It helps people anticipate problems or deal with issues as they come up. (For example, here's how artificial intelligence greatly improves contract review.)

Recently, we sat down with Onit's Vice President of Product Management, technology expert and patent holder Eric Robertson, to cover the ins and outs of artificial intelligence in more detail. In this first installment of our new blog series, we'll discuss what it is and its three main hallmarks.

At the core of artificial intelligence and machine learning are algorithms, or sequences of instructions that solve specific problems. In machine learning, the learning algorithms create the rules for the software, instead of computer programmers inputting them, as is the case with more traditional forms of technology. Artificial intelligence can learn from new data without additional step-by-step instructions.

This independence is crucial to our ability to use computers for new, more complex tasks that exceed the limits of manual programming -- things like photo recognition apps for the visually impaired or translating pictures into speech. Even things we now take for granted, like Alexa and Siri, are prime examples of artificial intelligence technology that once seemed impossible. We already encounter it in our day-to-day lives in numerous ways, and that influence will continue to grow.

The excitement about this quickly evolving technology is understandable, mainly due to its impacts on data availability, computing power and innovation. The billions of devices connected to the internet generate large amounts of data and lower the cost of mass data storage. Machine learning can use all this data to train learning algorithms and accelerate the development of new rules for performing increasingly complex tasks. Furthermore, we can now process enormous amounts of data around machine learning. All of this is driving innovation, which has recently become a rallying cry among savvy legal departments worldwide.

Once you understand the basics of artificial intelligence, it's also helpful to be familiar with the different types of learning that make it up.

The first is supervised learning, where a learning algorithm is given labeled data in order to generate a desired output. For example, if the software is given a picture of dogs labeled "dogs," the algorithm will identify rules to classify pictures of dogs in the future.

The second is unsupervised learning, where the data input is unlabeled and the algorithm is asked to identify patterns on its own. A typical instance of unsupervised learning is when the algorithm behind an eCommerce site identifies similar items often bought by a consumer.

Finally, there's reinforcement learning, the scenario where the algorithm interacts with a dynamic environment that provides both positive feedback (rewards) and negative feedback. An example of this would be a self-driving car: if the driver stays within the lane, the software receives points to reinforce that learning, along with reminders to stay in that lane.
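To make that third scenario concrete, here is a toy reinforcement-learning sketch: a tabular Q-learner in a made-up three-position "lane keeping" environment, where staying centred earns a reward and drifting off earns a penalty. The environment and parameters are invented for illustration.

```python
# A toy reinforcement-learning sketch: tabular Q-learning in a made-up
# "lane keeping" environment with three positions (left, centre, right).
# Staying in the centre lane earns a reward; drifting off earns a penalty.
import random

actions = ["steer_left", "stay", "steer_right"]
q_table = {(pos, a): 0.0 for pos in range(3) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

def step(pos, action):
    new_pos = max(0, min(2, pos + {"steer_left": -1, "stay": 0, "steer_right": 1}[action]))
    reward = 1.0 if new_pos == 1 else -1.0   # positive feedback in the lane centre
    return new_pos, reward

pos = 1
for _ in range(1000):
    # Occasionally explore; otherwise pick the action with the best learned value.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: q_table[(pos, a)])
    new_pos, reward = step(pos, action)
    best_next = max(q_table[(new_pos, a)] for a in actions)
    q_table[(pos, action)] += alpha * (reward + gamma * best_next - q_table[(pos, action)])
    pos = new_pos

print(max(actions, key=lambda a: q_table[(1, a)]))  # learned best in-lane action: 'stay'
```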

Even after understanding the basic elements and learning models of artificial intelligence, the question often arises as to what the real essence of artificial intelligence is. The Brookings Institution boils the answer down to three main qualities:

In the next installment of our blog series, we'll discuss the benefits AI is already bringing to legal departments. We hope you'll join us.


Artificial intelligence and transparency in the public sector – Lexology

The Centre for Data Ethics and Innovation has published its review into bias in algorithmic decision-making; how to use algorithms to promote fairness, not undermine it. We wrote recently about the report's observations on good governance of AI. Here, we look at the report's recommendations around transparency of artificial intelligence and algorithmic decision-making used in the public sector (we use AI here as shorthand).

The need for transparency

The public sector makes decisions which can have significant impacts on private citizens, for example related to individual liberty or entitlement to essential public services. The report notes that there is increasing recognition of the opportunities offered through the use of data and AI in decision-making. Whether those decisions are made using AI or not, transparency continues to be important to ensure that:

However, the report identifies, in our view, three particular difficulties when trying to apply transparency to public sector use of AI.

First, the risks are different. As the report explains at length, there is a risk of bias when using AI. For example, where a subgroup of people is small, data used to make generalisations can result in disproportionately high error rates amongst minority groups. In many applications of predictive technologies, false positives may have limited impact on the individual. However, in particularly sensitive areas, false negatives and false positives both carry significant consequences, and biases may mean certain people are more likely to experience these negative effects. The risk of using AI can be particularly great for decisions made by public bodies, given the significant impacts they can have on individuals and groups.

Second, the CDEI's interviews found that it is difficult to map how widespread algorithmic decision-making is in local government. Without transparency requirements it is more difficult to see when AI is used in the public sector (which risks suggesting intended opacity; see our previous article on widespread use by local councils of algorithmic decision-making here), how the risks are managed, or how decisions are made.

Third, there are already several transparency requirements on the public sector (think publications of public sector internal decision-making guidance, or equality impact assessments) but public bodies may find it unclear how some of these should be applied in the context of AI (data protection is a notable exception given guidance by the Information Commissioner's Office).

What is transparency?

What transparency means depends on the context. Transparency doesn't necessarily mean publishing algorithms in their entirety; that is unlikely to improve understanding of, or trust in, how they are used. And the report recognises that some citizens may make decisions, rightly or wrongly, based on what they believe the published algorithm means.

The report sets out useful requirements to bear in mind when considering what type of transparency is desirable:

Recommendation - transparency obligation

In order to give clarity to what is meant by transparency, and to improve it, the report recommends:

Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence [by affecting the outcome in a meaningful way] on significant decisions [i.e. that have a direct impact, most likely one that has an adverse legal impact or significantly affects] affecting individuals. Government should conduct a project to scope this obligation more precisely, and to pilot an approach to implement it, but it should require the proactive publication of information on how the decision to use an algorithm was made, the type of algorithm, how it is used in the overall decision-making process, and steps taken to ensure fair treatment of individuals.

Some exceptions will be required, such as where transparency risks compromising outcomes, intellectual property, or for security & defence.

Further clarifications to the obligation, such as the meaning of "significant decisions", will also be required. As a starting point, though, the report anticipates a mandatory transparency publication to include:

The report expects that identifying the right level of information on the AI will be the most novel aspect. The CDEI expects that other examples of transparency may be a useful reference, including the Government of Canada's Algorithmic Impact Assessment, a questionnaire designed to help organisations assess and mitigate the risks associated with deploying an automated decision system (and which we referred to in a recent post about global perspectives on regulating for algorithmic accountability).

A public register?

Falling short of an official recommendation, the CDEI also notes that the House of Lords Science and Technology Select Committee and the Law Society have both recently recommended that parts of the public sector should maintain a register of algorithms in development or use (these echo calls from others for such a register as part of a discussion on the UK's National Data Strategy). However, the report notes the complexity in achieving such a register and therefore concludes that "the starting point here is to set an overall transparency obligation, and for the government to decide on the best way to coordinate this as it considers implementation" with a potential register to be piloted in a specific part of the public sector.

"Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency." The UN Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston.

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/939109/CDEI_review_into_bias_in_algorithmic_decision-making.pdf
