The real reason we’re afraid of robots – Gulf News


Artificial intelligence is everywhere. It helps drive your car, recognises your face at the airport's immigration checkpoint, interprets your CT scans, reads your resume, traces your interactions on social media, and even vacuums your carpet.

As AI encroaches on every aspect of our lives, people watch with a mixture of fascination, bewilderment and fear.

AI's overthrow of humanity is a familiar trope in popular culture, from Isaac Asimov's I, Robot to the Terminator movies and The Matrix. Some scholars express similar concerns.

The Oxford philosopher Nick Bostrom worries that artificial intelligence poses a greater threat to humanity than climate change, and the best-selling historian Yuval Noah Harari warns that the history of tomorrow may belong to the cult of Dataism, in which humanity willingly merges itself into the flow of information controlled by artificial systems.

But in truth, these doomsday scenarios are nowhere in sight. In a critical evaluation of AI, the cognitive and computer scientists Gary Marcus and Ernest Davis demonstrate that the state of the art of AI is still quite far from true intelligence.

When asked to provide a list of restaurants that are not McDonald's, Siri still spits out a list of local McDonald's restaurants; she just doesn't get the "not" part of the request.

AI's shortcomings

AI can also fail to recognise familiar objects in unfamiliar contexts (a baby on the highway) or to separate associations from causes.

In short, AI still lacks common sense. This doesn't bode well for the AI conspiracy. If your Tesla cannot reliably avoid an unfamiliar obstacle on the road, it is hard to see how it would take the initiative to hijack the vehicle.

Make no mistake: AI does pose many real dangers to us, to our personal privacy and security and to the future of the economy. These are all very good reasons to watch it closely and regulate it aggressively.

Previous technological revolutions, whether driven by steam, electricity or atomic energy, have raised similar challenges. Yet in popular opinion, the AI risk is greater than those and different in kind.

People don't merely worry that the new technology could cause accidents or fall into the wrong hands. With AI, people worry that it will acquire autonomous agency and outsmart and overthrow its human masters. The question is why.

In fact, humanity's worry about being conquered by omnipotent, inanimate, man-made artefacts is much older than computer technology. In the 19th century, Mary Shelley's Dr. Frankenstein created a humanoid monster who promptly rebelled.

Hundreds of years earlier, there was the story of the golem, an automaton created out of river clay and brought to life by kabbalistic magic.

Predictably, the golem rebelled, not unlike Adam in Genesis, who was likewise created out of dust and brought to life when God breathed his spirit into Adam's nostrils.

And then there is our fascination with tales of zombies, corpses that are reanimated through witchcraft. Tales like these suggest that our fear of AI arises not from AI itself, but from the human mind.

Psychological distinction

This fear emanates from the psychological distinction we draw between mind and matter. If you saw a ball start rolling all by itself, you'd be astonished. But you wouldn't be the least bit surprised to see me spontaneously rise from my seat on the couch and head toward the refrigerator.

That is because we instinctively interpret the actions of physical objects, like balls, and living agents, like people, according to different sets of principles. In our intuitive psychology, objects like balls always obey the laws of physics: they move only by contact with other objects.

People, in contrast, are agents who have minds of their own, which endow them with knowledge, beliefs, and goals that motivate them to move of their own accord. We thus ascribe human actions not to external material forces, but to internal mental states.

Of course, most modern adults know that thought occurs in the physical brain. But deep down, we feel otherwise. Our unconscious intuitive psychology causes us to believe that thinking is free from the physical constraints on matter.

Extensive psychological testing shows that this is true for people in all kinds of societies. The psychologist Paul Bloom suggests that intuitively, all people are dualists, believing that mind and matter are entirely distinct.

AI violates this bedrock belief. Siri and Roomba are man-made artefacts, but they exhibit some of the same intelligent behaviour that we typically ascribe to living agents.

Age of Siri

Their acts, like ours, are impelled by information (thinking), but their thinking arises from silicon, metal, plastic and glass. While in our intuitive psychology thinking minds, animacy and agency all go hand in hand, Siri demonstrates that these properties can be severed: they think, but they are mindless; they are inanimate but semi-autonomous.

People don't tolerate this cognitive dissonance for very long. When we are faced with a fundamental challenge to our core beliefs, we tend to stick to our guns. Rather than revising our assumptions to match the facts, we tend to bend reality to fit our assumptions, especially when our world view is at stake.

So rather than admitting the possibility that machines endowed with AI can think, we ascribe to them immaterial mind and agency, and once we do, our view of AI shifts from faithful servant to rebellious menace.

That shift is internal to us and is entirely predictable. Indeed, the dissonance presented by a golem's very existence, a mix of matter and mind, is frightening. And since people conflate fear with menace, they project it onto the golem, which is seen as rebellious and threatening.

Thus, the AI takeover narrative, its power and timelessness, arises directly from our core, from a cognitive principle that seems to be part of human nature.

While none of this proves that the robot rebellion is impossible, it would be a mistake to ignore our own preset beliefs that contribute to these fears. As the ancient Greeks long ago observed, our blindness to our own psyches can exact a heavy toll.

When we focus so much of our attention on improbable scenarios, we run the risk of ignoring other problems posed by AI that are pressing and preventable.

Before we can give those very real dangers the attention they deserve, we should rein in our irrational fears that arise from within.

Iris Berent, a professor of psychology at Northeastern University, is author of The Blind Storyteller: How We Reason About Human Nature.


Putting AI and Machine Learning to Work in Cloud-Based BI and Analytics – AiThority

Artificial intelligence (AI) and machine learning (ML) are powering a whole new generation of business intelligence (BI) solutions. And these mission-critical software packages are in turn one of the primary drivers behind the migration of enterprise big data to the cloud.

BI tools are designed to collect and analyze current, actionable data, delivering insights into processes and workflows that can impact business operations in the near term. But what if you need those insights immediately, and you need them in the hands of employees and experts who are working simultaneously across the globe? IT stakeholders are turning to the cloud for faster, more accurate and timelier BI insights, especially in the face of Covid-19, when companies are looking to operate as economically as possible and millions are forced into remote working. Even before the pandemic, a 2019 survey by TechTarget found that 27% of respondents planned to deploy BI in the cloud in the coming year.

That same study points to an increase in cloud technology as the number two activity that companies are employing to improve employee experience and productivity, and notes that 38% of companies plan to bolster their cloud technology within the next year.

There are multiple reasons that organizations are moving their BI and analytics to the cloud.

First among them is cost: the move streamlines a workforce, so even though there are start-up costs involved in the migration process, the long-term cost-benefit analysis plays out in their favor. Companies are also able to run faster and lighter with cloud-based BI, with no need to run dedicated client-side applications and with IT teams freed of the necessity of coordinating upgrades across an entire infrastructure.

Then there's security: companies tap into a whole extra layer of security and protection for their data, as there is only one point of access, and data can't accidentally be merged with another company's or, worse, intentionally and maliciously accessed by someone without authorization.

Accessibility will also improve, as companies will no longer be tethered to one distinct physical location to store data. When their BI systems are migrated to the cloud, companies get real-time access to critical data and analyses from laptops, tablets and smartphones, meaning that the information required to make better business decisions is constantly within reach.

Scalability will also jump dramatically, as the cloud offers an elastic infrastructure that provides a simple platform for scaling up as a company grows.

And performance is enhanced, since cloud infrastructure is customizable to each company's specific needs. An added benefit is centralized collaboration, allowing entire teams to work within the same framework with the same tools, no matter how scattered or far-flung they might be.

TDWI's recent report on BI and analytics notes that demand is rising for systems that can provide views, analytics, and prescriptive recommendations based on data generated by events happening now and predictive insights into what could happen in the future.

A vivid example of the cloud's analytics advantages is the use of Spark, with its extremely high memory demands. The elasticity of the cloud enables Spark to perform orders of magnitude faster than Hadoop/Hive on-prem. The differences can be dramatic: a 10- to 12-hour Hive query can take only 15 minutes with Spark in the cloud.
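
As a rough sketch of the kind of workload involved, the PySpark snippet below expresses a simple warehouse aggregation of the sort such a Hive query might compute, reading from cloud object storage. The bucket, table and column names are hypothetical; this is illustrative, not a benchmark.

```python
# Minimal PySpark sketch of a warehouse aggregation run against data
# in cloud object storage. All names (bucket, columns) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bi-aggregation").getOrCreate()

# Read event data from cloud object storage (path is illustrative).
events = spark.read.parquet("s3://example-bucket/warehouse/events/")

# In-memory aggregation: daily revenue per region.
daily_revenue = (
    events
    .groupBy("region", F.to_date("event_ts").alias("day"))
    .agg(F.sum("amount").alias("revenue"))
    .orderBy("day", "region")
)

daily_revenue.show()
```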

Increasingly, cloud big data vendors and their customers have rich AI-driven BI ecosystems at their disposal, like Snowflake and Tableau (which was acquired by Salesforce). For those using Apache Spark, Databricks provides a unified analytics platform that accelerates innovation by unifying data science, engineering and business with an extensive library of machine learning algorithms, interactive notebooks to build and train models, and cluster management capabilities that enable the provisioning of highly-tuned Spark clusters on-demand.

Businesses of every size are learning that leveraging AI technology can improve business processes and significantly enhance the customer experience. This is happening across several industries: healthcare, finance, and life sciences (despite heavy regulation) are quickly adopting AI-driven business models, and AI is transforming medicine in how and when treatments are discovered and tested.

Cloud computing has completely transformed entire industries, computing paradigms and enterprises, and has become the ideal platform for storing and accessing big data.

The COVID-19 pandemic has only accelerated this move, given the need to operate as economically as possible with more employees working remotely. Cloud computing saves both money and time, which makes it immediately attractive to businesses. It also increases access for global companies, provides a synergistic platform for coordination and cooperation between far-flung employees, and creates an impressive security buffer through a single point of access that ensures a company's data, its most precious asset and most critical investment, is protected from malicious actors. AI-powered business intelligence and analytics are driving the migration of enterprise big data to the cloud.

Choosing the right BI platform can dramatically enhance productivity with unprecedented business insights, and a more intimate knowledge of customers and trends.



Regulating AI in Public Health: Systems Challenges and Perspectives – Observer Research Foundation


Artificial Intelligence (AI) is increasingly proliferating across the healthcare landscape and has immense promise for improving health outcomes in a resource-constrained setting like India. With emerging technology still finding its footing in the healthcare industry in the country, there are systemic roadblocks to hurdle before AI can be made transformative up to the last mile of public health. AI also carries immense challenges for India's mostly traditional regulators, who have to walk the tightrope of propelling an AI innovation ecosystem while maintaining a core concern for patient safety and affordability. This requires the regulators and relevant stakeholders to take a systemic view of the industry and understand the potential impact of regulation throughout the ecosystem. This landscape study outlines the contextual limitations within which Indian regulators for healthcare technology operate. It offers recommendations for a systems thinking approach to regulating AI in Indian health systems.

Attribution: Abhinav Verma, Krisstina Rao, Vivek Eluri and Yukti Sharma, "Regulating AI in Public Health: Systems Challenges and Perspectives", ORF Occasional Paper No. 261, July 2020, Observer Research Foundation.

Artificial Intelligence (AI) in medicine relies on an ecosystem of health data to train machines that learn responses to diagnose, predict, or perform more complex medical tasks. Patient data is leveraged for supporting clinicians in decision-making, bringing to the fore patterns in the data that were not discernible to a clinician's eyes, and in some cases even charting out medical prognosis. Its uses have been well documented: electronic health record (EHR) systems have used machine learning algorithms to detect data from text[1] as well as to undertake predictive analysis that warns clinicians about high-risk conditions and co-morbidities.[2] This is in addition to guiding drug discovery[3] and, more topically, allowing population analysis for pandemic preparedness and response measures.[4]

The Indian government has been trying to nudge the health system towards greater overall digitisation for the last two decades. Frontline health workers are being trained to adopt digital health: moving from paper-and-pen-based entries that are transferred to a centralised digital portal, to now using mobile-phone applications that allow real-time information upload.[5] The shift to digitisation has been codified in the National Health Policy (2017) and is represented in the National Health Stack vision, detailing the need to leverage technologies such as Big Data analytics for data stored in universal registries.[6] The National Digital Health Blueprint (NDHB 2019) further builds on this vision to identify building blocks that leverage foundational technology towards expansive application development for varied uses, and that rely, at the most rudimentary level, on high data integrity across the health system.[7]

While digitisation is a promising first step towards creating interoperable digital systems, there are plenty of challenges to its adoption. EHR adoption, for example, has lagged in public health institutions due to its high cost of implementation and the high burden it places on clinicians owing to cumbersome input and maintenance procedures.[8] Even with well-integrated EHR systems in the West, clinicians are known to spend more time with the technology than with the patient.[9] This situation is likely to be exacerbated in India, where the public health system is under-staffed and technologically averse.

For emerging technology that relies on robust data systems for innovation, this is an existential challenge. The same user reluctance that plagues EHR is likely to curtail the uptake of more advanced technological tools. Quality benchmarks like EHR standards can address this reluctance to an extent by ensuring the standardisation of a tool's design and function, so that data collected at different sources is accessible and functional to different users in the same way. Once users are trained to set up and use one system, the seamless integration and accessibility of a patient's health records for rapid diagnosis and treatment is likely to help them overcome their technological reluctance. However, this may come at the cost of imposing heavy burdens on EHR developers.

In the nascent industry of emerging technology like AI and ML-based healthcare solutions, the technology is far more advanced than the standards, which are yet to be established. In the absence of a clear approval and market-access pathway, innovators have a higher price to pay to enter the healthcare innovations market. A regulator in this context must not only function to create boundary conditions to preserve patient safety, but also allow reasonable room for innovation and efficacy for promising solutions (See Figure 1).

India's public health ecosystem provides service delivery through vertical programs for immunisation, and disease surveillance and management that focus on population health maintenance. It also boasts of a formidable network of health services that encompasses 18 percent of the country's total outpatient care and 44 percent of total inpatient care,[10] all of which are highly subsidised or free for citizens. Although the reliance on public versus private healthcare varies across states, public healthcare centres often serve as the only point of care for the country's 66 percent rural population. Yet it suffers from staff shortages, low staff motivation, inadequate or outdated medical equipment, and slow-responding medical institutions.[11] Notwithstanding the progress made by Ayushman Bharat (AB-PMJAY) and its pursuit of health for all through health and wellness centres and health insurance, public health service in India is overburdened.[12]

With a doctor-patient ratio of 1:10,189[13] (10 times short of the World Health Organization's [WHO] recommended ratio[14]) and severe resource shortages, the clarion call has never been louder for technology at scale to support healthcare delivery in the country. The response has been hopeful, more recently coming from emerging technological solutions. For example, an AI-based breast cancer screening device that uses a non-invasive, low-cost solution based on heat-mapping for early detection has been able to detect breast cancer up to five years earlier than mammography, with reduced reliance on trained technicians.[15] A smartphone-based anthropometry technology enables frontline health workers to accurately report baby weight,[16] solving for inconsistencies in field-reported data that are commonly tied to insufficient focus on, and incorrect interventions for, malnutrition in the field. In countries in the West, a rapid detection and response device directly alerts radiologists when it spots pneumothorax.[17] Various states have taken the initiative to embrace this mission. Telangana, for example, has declared 2020 as the year of AI, with the intention of making AI-based innovation successful across e-governance, agriculture, healthcare and education.[18]

Healthcare is surely and steadily embracing digital health innovation to respond to critical health challenges. In response, regulations have been established for standardising the design and function of these technologies (as is the case with EHR, or medical devices), regulations that recognise the risks associated with their use and protect patients' and users' safety and rights. AI-based solutions not only carry variable conditions of risk,[a] but the risks associated with their use are also still being understood. In preparing to regulate AI-based health technology, it is important to recognise the context and risks associated with each of these categories.

In many parts of the world, the use of AI in public healthcare delivery has increased in recent years. In the United Kingdom (UK), for example, the National Health Service (NHS) adopted an AI chatbot-based triage system in 2019.[19] However, the known and unknown risks of making AI the norm for health service delivery have threatened to upend the values of equitable access that are synonymous with public health. While there are AI solutions in speciality or tertiary care hospitals (especially diagnostic assistive tools), few solutions effectively reach primary care setups, perhaps due to the high cost of development and operationalisation that deters affordable pricing for scale.[20]

Moreover, the more widespread use of AI is hampered by its complexity, which renders its workings inexplicable to users and hence untrustworthy (the AI "black box").[b],[21] Due to their aggregation of several thousand data points, machine learning algorithms' decision trajectories are often too complex to be traced back and made explainable to users without human intervention.[22] Given the potential of AI to learn pre-existing patterns in data, AI has also been critiqued for replicating biases against disadvantaged social groups that clinicians would otherwise consciously rule out.[23] Concerns around the discrimination that might be inherent to using AI in medical contexts (and that is further challenging to identify and isolate) also have severe implications in a medico-legal context where liability is difficult to ascertain and is instead shared.[24]

National AI strategies have committed to ambitious targets for capital investment in the research and application of artificial intelligence. Encapsulating a proactive stance, these strategies have highlighted how research, innovation and permissive markets can catapult economies into the Fourth Industrial Revolution and also occupy a significant position in the welfare discourse.[25]

India's National Strategy for AI sets a precedent for AI capacity development through the institution of Centres of Research Excellence (COREs) focused on fundamental research, as well as International Centres on Transformational AI (ICTAIs) for applied research. In parallel, it acknowledges critical challenges around issues of privacy and safety, data integrity, and technical resource capacity. In a context where emerging technology such as AI is finding a way to address public health challenges, regulation for standards of safety and efficacy cannot afford to simply react to known risks of technology[26] but must be proactive in collaborating for better safety standards.

The US Food and Drug Administration (USFDA), borrowing from the work of the International Medical Device Regulators Forum (IMDRF), provides a useful lens for regulating AI/ML models in healthcare, categorising them as AI/ML SaMD, or Software as a Medical Device.[27] Following a risk categorisation that ascertains an AI's potential risk to the patient and its intended use, the USFDA's proposal treats regulation of AI as a series of iterative checkpoints rather than a one-time certification model. Given the potential threat weighed against the intended use of the AI, specific clinical evidence is required to be submitted both before and after deployment of the SaMD. In weaning itself off a static regulation model, the USFDA upholds Good Machine Learning Practice (GMLP) on expectations of quality systems responsible for generating SaMD, including ensuring quality and relevance of data, and transparency of the output aimed at users.[28] In establishing checkpoints that include manufacturers' reporting on specific performance and safety indicators post deployment, the SaMD regulation process allows for modifications to approved devices for greater efficacy of use. Yet in the absence of domestic regulatory expertise in AI regulation, adopting this gold standard for regulation might be more expensive for domestic innovators.

The pacing problem[c] witnessed in the case of AI regulations for healthcare is stark. Historically utilised to safeguard social welfare, regulations have been risk-averse and have prioritised consumer safety. This is an outcome that follows systematic review of the costs and benefits of innovation, in addition to striking a balance with relevant stakeholder interests. However, the rise of emerging technology such as AI has raised an important critique of slow-moving, non-adaptive regulatory regimes that have not only challenged innovation but also curtailed economic growth.[29] It is predicted that the application of AI in healthcare in India will be worth INR 431.97 billion by 2021;[30] this is juxtaposed against a regulatory system that has only just acknowledged software as a medical device[31] and an innovation ecosystem that is still burdened with high costs of experimentation and evaluation. A systematic review of the role of regulation in incentivising the uptake of AI for addressing public health's woes, while prioritising patient safety, is essential to guiding a regulatory framework for AI-based SaMD in developing countries like India.

India has had the experience of building supportive regulations for the pharmaceutical sector, which helped it develop from almost non-existent to one of the world's leading suppliers of generic drugs. This was achieved through a mix of price controls, experimenting with process patents, and industrial promotion policies.[32] However, this agile and responsive policy development has yet to translate to medical devices or technologies, and India's health system continues to be 75-percent dependent on imported medical technology.[33] The imperative for India is to develop its own medical innovations ecosystem.[34] This section outlines the existing context within which the regulatory system for AI in healthcare will have to function (See Figure 2).

An AI model is built on the foundation of robust and accurate data. Some innovators are able to invest in cumbersome primary data collection and create their own proprietary datasets, while others buy commercially available datasets to train their models. Both pathways require intensive capital investments that are not available to early-stage start-ups that create tools for the public health system at large.

In India, the government owns large swathes of data, both from public health facilities and national programmes. However, this data lacks accuracy and completeness, which usually results in incorrect conclusions. On aggregation, small errors like misspelled names or inaccurate counts at the facility level can accumulate into glaring misinterpretations.[35] This can also detract from the representativeness of the datasets used for training and potentially amplify data biases in the AI models, which can have severe social fallouts.
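
A toy example of how such errors propagate: in the illustrative pandas snippet below, one misspelled facility name splits a single facility's counts across two rows, skewing any aggregate built on top. The facility names and figures are invented.

```python
# Illustrative sketch of how small entry errors distort aggregates:
# the same facility recorded under two spellings is counted twice.
import pandas as pd

records = pd.DataFrame({
    "facility": ["PHC Rampur", "PHC Rampur", "PHC Rampura", "CHC Barwani"],
    "cases": [12, 8, 5, 20],
})

# Naive aggregation treats "PHC Rampura" as a separate facility.
print(records.groupby("facility")["cases"].sum())

# A simple normalisation step (real pipelines would use curated
# facility registries or fuzzy matching) repairs the count.
corrections = {"PHC Rampura": "PHC Rampur"}
records["facility"] = records["facility"].replace(corrections)
print(records.groupby("facility")["cases"].sum())
```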

Therefore, a critical challenge for the government is to enable digitisation of most clinical transactions in which citizens partake. Thereafter, it is necessary to develop a data culture and quality systems to ensure that digital health data accurately depicts the realities of health outcomes at the population, sub-national and even individual facility or patient level. To achieve this goal, the government of India has already commenced an ecosystem-building effort for digital health, at the core of which is the concept of an EHR for all citizens, along with a health information exchange platform to enable sharing of data across the continuum of care.

These building blocks are envisioned as free-flowing data exchange, but presently face immense challenges of portability, especially when it comes to including the private sector in this ecosystem. Physician compliance with and adoption of standard terminologies like Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) is not particularly incentivised in India as it was in the United States (US), where a system of financial incentives and sanctions secured substantially higher rates of EHR adoption. Beyond this, private institutions with digital systems face roadblocks due to the absence of mechanisms for sharing data with the government or with each other, owing to technical interoperability challenges. For instance, the government's Revised National TB Control Program cannot follow patients or monitor their care once they choose to seek treatment in the private sector, due to the absence of sharing pathways.[36]

Unless interoperability across software systems and terminologies is uniformly secured across the healthcare system in India, the digital health ecosystem will remain fragmented and incomplete. While this normative ecosystem can aspire to digitise data that traditionally exists in paper registers, it does not necessarily assist AI innovators in their work unless easy, cost-effective and convenient modalities for sharing this data are instituted.

As health data is considered Sensitive Personal Information under the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011,[37] it is also necessary to have stronger privacy and security measures for digital health data, especially when it comes to sharing it. This is where privacy-preserving processes like anonymisation and de-identification fit in: they remove all personally identifiable marks from the data and prepare it for sharing for training AI models. However, it is now widely accepted that anonymisation is not absolute.[38] At the same time, annotation of health data, including pathological reports and radiological scans, is necessary for the data to be usable for a machine to learn from and draw patterns. Both privacy-preserving and annotation processes are cumbersome and investment-heavy activities[39] that can ultimately make the development process expensive and create entry barriers for new enterprises.
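
As a minimal sketch of what de-identification involves, the snippet below drops direct identifiers and replaces the patient ID with a salted hash. All field names are hypothetical, and, as noted above, such processing does not make re-identification impossible: quasi-identifiers like age can still leak information.

```python
# Minimal de-identification sketch: drop direct identifiers and replace
# the patient ID with a salted hash. Field names are hypothetical, and
# this does NOT guarantee anonymity against re-identification attacks.
import hashlib
import pandas as pd

SALT = "replace-with-a-secret-salt"

def pseudonymise(patient_id: str) -> str:
    # Salted hash so the same patient maps to a stable pseudonym.
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:16]

records = pd.DataFrame({
    "patient_id": ["P001", "P002"],
    "name": ["A. Kumar", "B. Singh"],
    "phone": ["9800000000", "9811111111"],
    "age": [54, 61],
    "diagnosis": ["TB", "diabetes"],
})

deidentified = (
    records
    .assign(patient_ref=records["patient_id"].map(pseudonymise))
    .drop(columns=["patient_id", "name", "phone"])
)
print(deidentified)
```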

The role of technological innovation in addressing large-scale access challenges that are typical of a developing nation's public healthcare system is also widely recognised in India. Investment patterns reflect this: the medical devices sector has seen an inflow of FDI worth US$1.8 billion between April 2000 and June 2019.[40] The pivotal drivers for this sector-specific growth have included increased healthcare consumption and insurance penetration, growing investment from private equity models, and diversified healthcare delivery mechanisms.[41]

In parallel, the government's flagship universal health coverage scheme, AB-PMJAY, is set to be established as the world's largest health assurance scheme, providing INR 0.5 million per family to nearly 40 percent of the country's population. Aiming to mainstream transformative technology and boost innovation for healthcare delivery through a dedicated Innovation Unit, AB-PMJAY has established the call for public health innovation in a version that is accessible and affordable to the most economically vulnerable. However, managing the precarious balance between the long gestation periods of investments in medical technology and accelerating access for its 107.4 million target users is a direct challenge to the success of the scheme, and to revolutionising public health through innovation in general. The hope is for regulation to favour the market for innovation.

Extensive evaluation and testing processes for deep-tech solutions result in prolonged time spent at this stage, delaying deployment and requiring a relatively longer lock-in period for investors. For example, the average time taken by US medical technology companies for pre-market clearance is 5.6 years,[42] with 61 percent of them taking more than four years to get initial market approvals. Further, even medical technology with inconsequential risks to patients, such as external aids like hearing aids, could be treated as a high-risk investment due to the high uncertainty that comes with long-term health outcomes. It is not surprising that the average time taken to exit a medical device startup is 8.8 years, with the company burning an average of US$6.25 million every year.[43]

There is increased ambiguity around the perceived risks of AI-based technology, and a need for stricter vigilance following post-market modifications. It is therefore fair to assume that without significant market incentives, promising emerging technological solutions cannot perform without supportive regulation to bolster their entry into public health.

The role of regulation in making high-risk industries attractive for private investment can be illustrated through the pharmaceutical industry. The value chain in the industry is characterised by two specific kinds of activities: those involved in drug discovery, and those in the manufacturing and selling of the drug. While the latter is relatively low-risk, drug discovery involves high and inherently unpredictable risks, with returns being 10 percent of the cost of capital for the process,[44] and can only be afforded by pharmaceutical sales giants. The high cost of innovation here is offset by patent protection measures and value-based pricing of the drugs manufactured, irrespective of their capital costs. Regulation in the pharmaceutical industry has in this way offset the high R&D costs undertaken by manufacturers and made investments in innovation possible, an approach that might not be feasible in the field of AI.

An exciting opportunity for infusing capital into AI for healthcare lies in mobilising investments towards core infrastructure for digital health innovation. Supporting the development of core capacities like generating standardised and annotated health records and health data exchanges will allow higher penetration for emerging technology that leverages robust data systems and builds atop these blocks to optimise its application in healthcare. In turn, more accessible markets attract private capital to relatively high-risk solutions (like clinical decision support systems, for example) in the AI-based development value chain.

As regulation pursues the alignment of clinical performance with patient safety, an important consideration is how AI solutions interact with their users and, in turn, how that affects clinical efficacy. Superior clinical evidence for an AI-based solution might not necessarily translate into superior adoption, or guarantee that the solution addresses the clinical condition it was meant to, because of variables between the clinical environment and the algorithm's practice environment.

Unlike drugs, software and Information Technology (IT) tools are known to be highly affected by organisational factors such as resources, staffing, skills, training, culture, workflow and processes,[45] as delivery of healthcare interventions using these tools requires healthcare staff to take on a more active role. A tale of caution comes from the use of CAD (computer-aided detection) in mammography to improve breast cancer detection, wherein the CAD procedure performed no better (and in some ways worse) than the procedure without CAD.[46] Despite no real benefit to women in breast cancer screening, CAD-based mammographies increased nearly 70 percent after insurance reimbursement for the procedure increased in 2002.[47] Regulators thus need to account not only for the proven clinical efficacy of the solution, but also for the effect of its presence in the market, which might serve as a nudge altering clinician behaviour around the target condition. Another element to consider is creating trust in AI models among patients, especially in cases where there is no human in the middle, as with chatbots.[d]

In healthcare, human factors validation testing serves as a meaningful way to address adoption challenges that signal human interaction issues for AI-based SaMD. This testing demonstrates that the final finished combination product-user interface can be used by intended users without serious issues, for its intended uses and under the expected use conditions.[48] In a public health context that is still struggling with adoption of more fundamental applications (like EHR patient recording systems), AI explainability is an important consideration for increasing trust in these new systems, while studying and testing for possible risks of human-AI interaction.

Consensus from a study panel organised as part of Stanford University's One Hundred Year Study of Artificial Intelligence reflected that "attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn't any one thing), and the risks and considerations are very different in different domains."[49] Limited understanding of AI helps explain the reluctance of regulatory bodies to deconstruct it for purposes of regulation.

At present, there is no domestic regulatory oversight in India for SaMD interventions, leaving AI-driven SaMD further out of its purview. Even where SaMD is recognised, across the world its regulatory approval is based on repeatability and certainty. However, when a software learns on its own and its outputs vary, the regulations need an overhaul to adapt to it.[50]

Due to the evolving nature of algorithms and tedious standard regulatory processes, it is not hard to imagine that after an approval is granted and the product is marketed, an improved version of the algorithm will be released periodically as it collects and analyses new data. To eliminate the need to seek new approval every time a new version of the algorithm is released, the USFDA has implemented a total product life cycle (TPLC) regulatory approach. This approach facilitates the rapid cycle of product improvement and requires pre-market submission for changes that affect safety or effectiveness, such as new indications for use, new clinical effects, or significant technology modifications that affect performance characteristics.[51] Incorporating a change management protocol is a welcome and necessary step in the dynamic evaluation of AI-driven products.

Acceptability of the results of AI products is another impediment to their adoption. On the field, startups are advised to conduct clinical trials that are time-consuming and expensive.[52] While rulebooks exist for drug-related clinical trials, regulations are scant in the context of medical devices, let alone AI-enabled SaMD. In the absence of a unified Medical Devices Policy, different agencies including the Central Drugs Standard Control Organization (CDSCO) and the Bureau of Indian Standards (BIS) have enlisted their own sets of requirements, but there is a lack of coordination amongst these agencies.[53] The absence of an overall guide has led to interpretation issues and prolonged approval times in complying with these interim measures.

Regulatory agility and responsiveness have a direct impact on the adoption of innovation. This regulatory framework needs to be continually fine-tuned to enable optimal innovation while controlling healthcare expenditure.[54] Regulatory certainty offers benefits to companies by increasing predictability and transparency. Moreover, regulations and standards can also increase the compatibility of products[55] (interoperability, for software products), which can lead to cost savings[56] that are particularly beneficial for public health units. India's medical device market has leaped ahead and will continue to grow (pegged to be valued at US$50 billion by 2025),[57] but its regulatory infrastructure is likely to be a hurdle in many ways because of inherent deficits.

At their core, regulatory frameworks seek to fulfill the dual objective of ascertaining that a products probable benefits for its intended use trump its probable risks, and ensuring that these products are easily available to patients in need. This also involves undertaking an enabling function to kickstart industries and innovations.

The first challenge in this pursuit concerns the purview of Indian medical device regulations. Since 1989, when the first medical device was regulated in India, regulators have only regulated hardware devices, treating them as identical to drugs.[58] A clear distinction between medical devices and pharmaceuticals for the purpose of regulation was made only in 2017, with the new Medical Device Rules.[59] These Rules expanded the scope of regulation to cover all medical devices and in-vitro diagnostic devices notified by the government on the basis of their risk. However, the Rules did not recognise software as a medical device, something that had been mentioned in the earlier 2016 draft.

It is only through two notifications issued on 11 February 2020 that India moved from regulating just 37 categories of medical devices to bringing all devices, including software and accessories intended to be used for a medical purpose, under the purview of regulation.[60] Through these notifications, the government has also sought to ensure that all importers and manufacturers of medical devices are certified as compliant with ISO-13485 (Medical Devices Quality Management Systems Requirements for Regulatory Purposes). While the need for compliance with international quality norms can bring a certain assurance of product quality and safety, the standard is still not fit for quality assessments of the dynamic and emerging technologies that are increasingly being integrated into health systems, including AI.

Overall, Indian regulators fall behind their international counterparts in truly promoting innovation. At present, there are only nascent attempts at creating an ecosystem and infrastructure to conduct quality testing for devices similar to the CE or USFDA regimes.[61] The Gujarat government has already approved the setting up of India's first medical device testing lab,[62] but there is still much to be done to put the right framework in place that can give impetus to local quality testing.

Industry players have been pushing for a separate and comprehensive regulatory regime for medical devices, distinct from the Drugs and Cosmetics Act. Such legislation was also proposed by the NITI Aayog with the Draft Medical Devices (Safety, Effectiveness and Innovation) Bill, with its own proposed authority along the lines of the FSSAI.[63] This Bill, with changes incorporating the consensus achieved with the Ministry of Health and Family Welfare, will be introduced in Parliament in the near future.[64] However, there is little indication that this proposed regulatory framework will have specific provisions to deal with the dynamic demands of emerging technological solutions.

While India is moving slowly towards regulating a wider ambit of medical products used de facto within the health system, its regulators need to play multiple roles, including protecting patients through rigorous pre- and post-market evaluations as well as ensuring access to these products through affordability-inducing measures. For software solutions, India can swiftly adapt existing reference regulations (from the International Medical Device Regulators Forum, or IMDRF) combined with the institution of oversight procedures by local regulatory bodies. This might also require India to reassess its policymaking process and make it more participatory, with greater involvement of industry and academic stakeholders, to create a synergistic ecosystem for AI in healthcare products.

The dynamic nature of artificial intelligence, coupled with the variables introduced by its interaction with users, makes it apparent that regulation balancing patient safety with product efficiency will need to be monitored and reviewed well into the deployment of the solution. This demands that the role of the regulator be multifaceted and progressive, which in turn might necessitate structural changes in how regulation, evaluation and certification, and monitoring are traditionally conducted in India for health-related products.

An overview of regulatory capacity-building for highly specialised markets such as health technology provides a useful insight: semi-governmental regulation (involving specialised functionaries to inform standards and their implementation) allows regulatory agencies to borrow technical standards from international bodies, while exercising care in adapting them to their social and economic context.[65]

However, these approaches cannot be adopted as-is in India, given the country's unique ecosystem, industry and regulatory constraints. Adopting USFDA-based quality and efficacy standards and mechanisms might also limit AI innovations in healthcare to innovators that have the financial and technological resources to pursue the international gold standard, and in turn make AI that much less accessible to public health at large.

In regulating AI-based medical devices to mitigate their potential risks to patient safety, the IMDRF risk-assessment framework for SaMD allows identifying categories of risk that require a higher degree of evaluation and monitoring. Focusing on the clinical acuity of the location of care (e.g., intensive care unit versus general preventive care setting), the type of decision being suggested (immediately life-threatening versus a clinical reminder), and the type of decision support being provided (e.g., interruptive alert versus invisible nudge), the framework justifiably requires high-risk medical devices to be substantiated with evidence of their validity, reliability and clinical association, and of the way in which they mitigate known risks to patients. Basing regulations on a risk-based evaluation can help prioritise deployment of lower-risk medical devices in the short term,[e] and resolve more stringent regulatory concerns around high-risk medical devices in the long term.
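
For illustration, the IMDRF categorisation can be rendered as a simple lookup over its two axes: the significance of the information the software provides, and the state of the healthcare situation. The sketch below follows the published IMDRF SaMD risk matrix; the key names are our own shorthand.

```python
# Schematic rendering of the IMDRF SaMD risk-categorisation matrix
# (significance of information x state of healthcare situation).
# Category IV carries the heaviest evidentiary burden, category I the lightest.
IMDRF_CATEGORY = {
    ("critical",    "treat_or_diagnose"): "IV",
    ("critical",    "drive_management"):  "III",
    ("critical",    "inform_management"): "II",
    ("serious",     "treat_or_diagnose"): "III",
    ("serious",     "drive_management"):  "II",
    ("serious",     "inform_management"): "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive_management"):  "I",
    ("non_serious", "inform_management"): "I",
}

def samd_category(situation: str, significance: str) -> str:
    """Look up the SaMD risk category for a given use context."""
    return IMDRF_CATEGORY[(situation, significance)]

# Example: an interruptive alert driving management in an intensive care unit.
print(samd_category("critical", "drive_management"))  # -> III
```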

Therefore, what is needed is an ecosystem-building role, in which the regulator catalyses the industry by ensuring the availability of foundational building blocks like data, promulgating regulatory processes that secure patient interest without overburdening the fledgling industry, and working through an experimental and consultative approach with all relevant stakeholders to institutionalise these frameworks. Key recommendations on how regulators can fulfill these expectations are presented in the sections below.

For AI to truly permeate healthcare, data access cannot be centralised and cordoned off from those who need to use it. Privacy preservation and protection measures are largely in conflict with the access to large datasets needed for the development, certification and supervision of AI-in-health solutions. While there are innovative technological options, like differential privacy and comprehensive and dynamic consent management, that can resolve this conflict, they are not widely available in an ecosystem that is already moving at speed. Meanwhile, it is the regulator's role to ensure data is democratised in a way that keeps the interest of citizens at the forefront, both in terms of protecting their privacy and ensuring their safety as patients.

Distinguishing between personal and non-personal data, as well as setting up access pathways for each separately, can be the first step towards data democratisation. To protect citizens' interest in the former, it might be reasonable to insist on in-depth documentation of data operating procedures along with regular audits. ISO-13485 and the General Data Protection Regulation (GDPR) requirements (in the absence of the Indian Personal Data Protection Bill) can provide broad guidance on the data privacy and security practices that must be instituted.

For non-personal data, the government has a facilitator's role to play, especially with respect to data gathered through its own efforts and programmes. Such data also has the greater possibility of being representative and equally accessible to all.[66] Exploring pathways to publicly release government data in anonymised and digitised form should be the priority for enabling the industry. This needs to go beyond existing efforts like data.gov, which face their own challenges,[67] into a concentrated effort to invest in the infrastructure and capacity building that enable quality data collection. This also requires a conscious effort to develop large, quality, consensual datasets fit for clinical AI innovations.

The regulatory role should also extend to standard-setting for data collection and consent, quality management, and consolidation, which the Health Ministry has been trying to fulfill with the EHR Standards (2016) and the NDHB (2019). This will propel the ecosystem and give it the technological interoperability to share data and aggregate it in a form fit for AI. However, the government should go beyond defining standards and future strategies by creating data marketplaces and collaborative schemes to enable this data sharing.

Data quality issues are critical when it comes to building AI for clinical settings. It is, therefore, incumbent on the regulators of AI models to also ensure that the data used adheres to the FAIR (findability, accessibility, interoperability, and reusability) principles and is collected in an ethical manner before certifying the model as fit for the market. This could be further supplemented by organisational quality assessment in pre-market checkpoints. These conditions can signal to the industry that data integrity and ethical collection is of paramount importance to be eligible for the market, and lead to positive structural changes in how enterprises function.

The regulatory requirements of AI in healthcare continue to evolve, as the industry is still in its nascent stages. This is also an opportunity for flexible regulation and for learning from experience in striking the right balance between over-regulation (which may delay large-scale public health deployment for meaningful impact) and under-regulation (which may pose challenges to safety, effectiveness, adoption and user trust). This results in two areas of consideration for the regulator: clinical evaluation of AI models, and post-market monitoring and surveillance of AI models in use.

This is what the USFDA's Pre-Certification Program intended to do, i.e. institute a least-burdensome regulatory oversight mechanism by ensuring that developers are trustworthy and have adequate quality management systems (organisational excellence and culture) that can support and maintain a safe and highly effective SaMD across its life cycle. This is followed by a pre-market review of the safety and efficacy of the model itself in the least intrusive way possible; finally, the USFDA uses post-market monitoring mechanisms to ensure continued safety, effectiveness and quality performance of the SaMD in the real world, using real-world data.

When it comes to clinical evaluations, the purpose of regulatory oversight is to prevent false results, errors and misinterpretations in the outputs of the AI models that could be detrimental to the clinical outcome targeted. Therefore, the checkpoint for the regulator might be satisfied in the leanest way possible by ensuring the accuracy and relevancy of the data inputs and of the outputs generated through the operation of the algorithm.

A framework used in ethics of genome-wide association studies for multifactorial diseases to identify which genes are useful can be applied to the question of data for AI models as well. The framework identified three criteria necessary for a gene to be useful, which are: (i) data in the studies and work products derived from these genes must be reproducible and applicable to the target population; (ii) data and derived work products should have significant benefits for the patient population to whom they are applied; and (iii) resulting knowledge should lead to quantifiable utility for the patient in excess of the potential harm.[68]

Therefore, at the clinical evaluation stage, the regulator might be satisfied by evidence proving the benefits through a suite of options, viz. pilot data, observational and risk-adjusted assessment results, and even clinical trials. It is the risk classification of the device that should define the stringency of evidentiary requirements. At the same time, evidence pointing to the efficiency with which the AI prediction interacts with the human element in the loop can also be mandated for clinically high-risk devices. Even highly accurate predictions might not be fit to improve clinical outcomes unless they are followed up with effective interventions (actions) that are integrated into the clinical workflow.[69] Thus, evidence that points not only to the high predictive value of the model but also to how the prediction-action pair operates in the clinical setting might be better suited, but may be cumbersome to obtain and assess.

For a traditional regulatory framework like India's, it might be challenging to leapfrog into the complex institutional changes that AI evaluation and monitoring might necessitate. Effective use of regulatory sandboxes, with relaxed regulations and anonymised data availability, can help experiment with regulatory models to strike the balance needed while allowing innovation to prosper. Sandboxing[f] can help decipher new models of collaboration between industry and government, while also helping to understand the boundary conditions of effective regulation and ethics to drive innovation. Sandboxes are common and effective across the world. Most recently, the UK's NHSx has called for a joint regulatory sandbox for AI in healthcare, bringing together all the sandbox initiatives by different regulators and giving innovators a single, end-to-end safe space to develop and test their AI systems.[70]

Monitoring the performance of a model following its deployment is a complex task involving the collection and interpretation of real-world information. Further, self-learning models keep refining themselves from ongoing data streams, making them complicated to monitor for safety periodically using a static and limited dataset. There are two possibilities for a nascent ecosystem here: limit itself to locked models that can be easily monitored, or develop novel ways to evaluate self-learning (unlocked or reinforced) models. The latter approach will require a consultative effort to work alongside the industry in instituting a balanced and cost-effective system, as experience shows that enhanced post-market surveillance has faced hardships in terms of compliance from developers and the enforcement powers of regulators.[71] One feasible pathway could be periodic evaluation of performance on stratified patient subgroups, to assess whether the model performs equally effectively across sub-categories of patients. Flagging certain outputs as anomalous[g] and manually auditing these can also help improve the reliability of the model.
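
A minimal sketch of what such stratified post-market evaluation could look like follows, assuming a hypothetical log of model scores and observed outcomes. The metric, threshold and column names are illustrative choices, not a prescribed standard.

```python
# Minimal sketch of post-market monitoring via stratified evaluation:
# compute a performance metric per patient subgroup and flag subgroups
# where the deployed model underperforms. Column names are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_performance(df: pd.DataFrame, group_col: str,
                         threshold: float = 0.80) -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby(group_col):
        if sub["outcome"].nunique() < 2:
            continue  # AUC is undefined when only one class is observed
        auc = roc_auc_score(sub["outcome"], sub["model_score"])
        rows.append({"subgroup": group, "n": len(sub),
                     "auc": auc, "flagged": auc < threshold})
    return pd.DataFrame(rows)

# Usage (illustrative): df would hold real-world predictions and outcomes.
# df = pd.read_csv("deployment_logs.csv")
# print(subgroup_performance(df, group_col="age_band"))
```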

Another element where the regulator might need a consultative approach with developers is setting up processes for risk reporting to ensure absolute patient safety. Medical devices usually rely on hazard and operability studies, which can be used for clinical AI devices as well. However, since the continuous learning and adaptive aspects of AI bring with them newer risks, it might also be necessary to adapt these risk assessment processes accordingly. Iterative system testing for risk on a continuous basis, or periodic risk audits, could be explored, but in ways that do not substantially add to developers' operational costs. Developers can also list at the outset the dependencies on which their model's operations are based (e.g. continued access to the users' data on which the model is built) in order to control and manage each of them.

The dynamism of AI-based technology in the clinical context means that its users are pushed to adapt to new workflows that integrate its functions, either positively influencing health outcomes or, conversely, having no positive influence and instead distorting the treatment pathway. Thus, even if a technology has no proven risk to the patient under given conditions, it needs to be tested for how it adapts to user workflows.[72]

During clinical evaluation, if a given medical device responds to the clinical outcome it is intended for, there is merit in undertaking human factors validation testing that considers the environment in which it will be used. The USFDA recommends that manufacturers determine whether the population using the device comprises professionals or non-professionals, what the users' education levels and ages are, what functional limitations they may have, and their mental and sensory conditions. Clinical efficacy for a specific device can be radically influenced by how the device's testing environment (a controlled laboratory ecosystem) differs from its application environment (a primary health clinic with limited internet connectivity). For frontline health workers with minimal digital literacy, complex interface functions on digital health applications could compromise the volume of beneficiaries they can respond to in a limited period of time, thus compromising health outcomes for the community. Regulation for medical devices therefore needs to articulate similar conditions to be tested for, spelled out for specific usability in a public health context.

Given that trustworthy AI is likely to be adopted more readily, and that in many cases trust is a condition for operating in healthcare, regulators are expected to articulate how much evidence establishes this trust. Deep learning methods have been lauded for higher accuracy, but they are opaque enough that users distrust them and struggle to hold them accountable for high-risk clinical output. Given the high-impact, high-risk devices that AI promises to deliver for healthcare, simply prohibiting AI solutions that employ opaque decision pathways is counter-productive. Instead, regulations could play a pivotal role in guiding manufacturers towards a need-based framework for explainability: (1) articulating the operational and legal needs for explanation, (2) examining the technical tools available to address them, and (3) weighing the required level of explanation against the costs involved, emphasising that explanations are socially useful only when total social benefits exceed costs.[73]
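
The paper does not name specific tools for step (2); as one illustration of the kind of post-hoc attribution technique a regulator might examine, the hedged sketch below applies the open-source SHAP library to a synthetic tree model. The model and data are placeholders, not a clinical system.

```python
# Illustrative only: SHAP is one post-hoc attribution tool of the kind that
# step (2) of such a framework might examine. Data and model are synthetic.
import shap
import xgboost
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)        # exact attributions for tree ensembles
shap_values = explainer.shap_values(X[:10])  # per-feature contribution to each prediction
```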

In many ways, the true litmus test for an innovation is its responsiveness to the actual needs of the ecosystem into which it integrates. As regulators deliberate over conditioning the innovation ecosystem for AI in healthcare, favouring its responsiveness to public health goals allows manufacturers to innovate directly in response to a need. For this, the regulator needs to build a larger foundational ecosystem and take on the role of an enabler, while simultaneously focusing on low-hanging fruit to start introducing emerging technologies into the market in a substantial way.

Under the Medical Devices Rules (2017), USFDA- and CE-certified medical devices can be marketed in India without having to undergo lengthy clinical trials. While it may be prudent to extend the regulation to include SaMDs, the step may not necessarily spur homegrown innovation. Certification programmes under the USFDA and CE are prohibitively expensive for most startups, owing to the high costs of clinical trials and regulatory filing in the US and Europe respectively. Therefore, in the absence of Indian quality certification mechanisms, regulatory authorities in India should explore ways to subsidise these certification expenses by providing direct financial incentives to startups and MSMEs working on solutions for public health. Further, subsidising costs at source could be explored via international agreements and partnerships with external certifying agencies.

There may already be solutions developed internationally that can be easily contextualised to India. Incentivising these international companies to test solutions on Indian patients for their global trials, and working with international agencies to accept and assess these tests for certification, may be an important first step in preparing the Indian ecosystem. From a commercial point of view, international solutions may enjoy a first-mover advantage, but they can prepare the market for indigenous solutions and lower barriers to entry in the longer run. For this, regulators in India will need to quickly adapt their vigilance mechanisms as a first goal (as compared to comprehensive clinical evaluations) and ensure safe deployment in India. Learning from this experience, regulators can move on to define holistic certification and benchmarking guidelines for India.

International patent pooling for life-saving technologies can be negotiated by international consortiums, along the lines of medical patent pools for life-saving drugs. While the technology will remain proprietary to the parent firm, the on-ground implementation of these technologies will have to be taken up by local firms that understand the diverse contexts of Indian health systems. Investment to ensure uptake of these solutions may lead to the creation of a smaller auxiliary industry that can quickly test and operationalise health technologies on the ground.

In an enabling role, the regulator must also take a forward-looking approach in building the foundational layers of the ecosystem through collaborations with other governmental, private-sector and civil-society players. The NDHB is a prime example of how an enterprise-architecture approach focused on base principles, standard-setting and open-source technology layers can kickstart sustainable and scalable innovations at the top-most application layers. For the AI-in-health ecosystem, the government can play a facilitator's role in creating open technology layers like anonymisers and annotation tools, which can bring down the cost and effort required for innovators to develop and deploy solutions.
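
As a toy illustration of what such an open "anonymiser" layer might expose, the sketch below scrubs a few common identifier patterns from free text. The patterns and labels are assumptions for illustration; real clinical de-identification requires far more than regular expressions.

```python
# Toy sketch of an "anonymiser" layer of the kind an open health-data stack
# might provide; illustrative patterns only, not a production de-identifier.
import re

PATTERNS = {
    "PHONE": re.compile(r"\b\d{10}\b"),                # 10-digit mobile number
    "AADHAAR": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),  # 12-digit ID number
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def anonymise(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymise("Reach Asha at 9876543210 or asha@example.org"))
# -> "Reach Asha at [PHONE] or [EMAIL]"
```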

Finally, domestic regulatory clarity is pivotal for certainty amongst innovators building for the Indian market. India should freely borrow and co-opt the norms for clinical assessment of AI being set by international organisations such as the WHO-ITU.[74] Following an iterative approach to discovering India-specific norms, by working with medical research institutes and AI solution providers in controlled environments such as AI sandboxes, can prove hugely beneficial to all stakeholders involved: the medical community, the regulator, the innovator and the citizen seeking health services.

While AI shows immense potential in meeting the needs of an under-resourced and overburdened health system, there is much to be done to create and institutionalise structures that can propel its development and optimise its benefits to all. A systems lens to regulate AI can help achieve this goal by drawing domestic regulators' attention to conditions of the ecosystem that can allow emerging technology to thrive.

Investments are required to build the digital health ecosystem in the country and unlock the large amounts of data that exist with stakeholders, which form the base for AI-driven systems. Further, capacities need to be built not only by regulators and the government but also by private firms and solution providers in the space, to assess and ensure the long-term viability of AI. As with any emerging technology, adapting the current static regulatory approach into a more dynamic, iterative one will be key to allowing AI-based technology to thrive rather than struggle against laggard regulation. Perhaps most importantly, as was highlighted in a letter signed by eminent scholars such as Stephen Hawking and industry leaders like Elon Musk, this is an opportunity for the ecosystem to be developed in a way that maximises the social benefit of AI.[75] Ethical concerns, including but not limited to biases, may surreptitiously slip in and go unnoticed if sufficient checks are not put in place. It is the regulator's responsibility to build a vibrant ecosystem that can fruitfully deliver systems which minimise bias and maximise social benefit.

Gartner, a leading IT research and advisory company, says that AI in healthcare is on the rise or, in some cases, may have hit the peak of the technology hype cycle.[76] These are still early days for AI in healthcare, and much remains to be ascertained about its scalability into real-world use cases. Just as with artificial general intelligence, there is a danger of overestimating both the usefulness of these complex systems and our ability to build them to augment or replace existing healthcare systems. What is clear is that even narrow AI solutions (those operating within a predetermined range and scope) have been demonstrated to help medical professionals, influencing health outcomes in a promising way.[77] It is now for governments (as regulators), clinicians (as users) and patients (as beneficiaries) to collaboratively shape the terms that allow emerging technology to urgently respond to the country's developmental goals.

[a] Some operate at the population level, assisting the government in effective health service delivery, while others operate in the clinical setting, interacting with clinicians or even directly with patients.

[b] Black box AI is any artificial intelligence system whose inputs and operations are not directly visible or interpretable to its users.

[c] Conventional regulation design involves a comprehensive process of matching regulatory needs with incentives and penalties. Given that emerging technology like artificial intelligence develops faster than regulations can be created and implemented, regulation often has to catch up to the needs of the ecosystem in which the technology is placed and, in doing so, remains reactive.

[d] Based on patterns analysed from typical human responses, chatbots are trained to provide pre-set answers to questions or, in many cases, to indicate an action. In such uses of AI, where the human is eliminated from the equation, an additional layer of trust may need to be built among users about the credibility of the AI's indications and responses to guide its ethical use.

[e] Those that are needed largely for the public health system and frontline institutions, for example.

[f] In computer security, a sandbox is a mechanism for separating running programs, usually to prevent system failures or software vulnerabilities from spreading. The term is increasingly applied to an approach to experimenting with regulation, whereby a regulator allows live, time-bound testing of innovations under its oversight in a controlled environment with relaxed regulatory limitations, in order to collect evidence.

[g] For instance, predictions that do not match human judgment in the clinical context.

Read the original post:
Regulating AI in Public Health: Systems Challenges and Perspectives - Observer Research Foundation

Artificial Intelligence Is the Hope 2020 Needs – Bloomberg

Tyler Cowen is a Bloomberg Opinion columnist. He is a professor of economics at George Mason University and writes for the blog Marginal Revolution. His books include The Complacent Class: The Self-Defeating Quest for the American Dream.

Your AI bartender will serve you now.

Photographer: Leon Neal/Getty Images Europe

This year is likely to be remembered for the Covid-19 pandemic and for a significant presidential election, but there is a new contender for the most spectacularly newsworthy happening of 2020: the unveiling of GPT-3. As a very rough description, think of GPT-3 as giving computers a facility with words that they have had with numbers for a long time, and with images since about 2012.

The core of GPT-3, which is a creation of OpenAI, an artificial intelligence company based in San Francisco, is a general language model designed to perform autofill. It is trained on uncategorized internet writings, and basically guesses what text ought to come next from any starting point. That may sound unglamorous, but a language model built for guessing with 175 billion parameters, 10 times more than previous competitors, is surprisingly powerful.
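
To make the "autofill" description concrete, here is a minimal sketch of next-word guessing using the freely available GPT-2 model as a stand-in; GPT-3 itself is accessible only through OpenAI's hosted API, and the prompt here is purely illustrative.

```python
# Next-token "autofill": the model repeatedly guesses what text ought to come
# next. GPT-2 stands in for GPT-3, which is only available via OpenAI's API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("The most newsworthy happening of 2020 was",
                max_length=40, num_return_sequences=1)
print(out[0]["generated_text"])
```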

The eventual uses of GPT-3 are hard to predict, but it is easy to see the potential. GPT-3 can converse at a conceptual level, translate language, answer email, perform (some) programming tasks, help with medical diagnoses and, perhaps someday, serve as a therapist. It can write poetry, dialogue and stories with a surprising degree of sophistication, and it is generally good at common sense, a typical failing for many automated response systems. You can even ask it questions about God.

Imagine a Siri-like voice-activated assistant that actually did your intended bidding. It also has the potential to outperform Google for many search queries, which could give rise to a highly profitable company.

GPT-3 does not try to pass the Turing test by being indistinguishable from a human in its responses. Rather, it is built for generality and depth, even though that means it will serve up bad answers to many queries, at least in its current state. As a general philosophical principle, it accepts that being weird sometimes is a necessary part of being smart. In any case, like so many other technologies, GPT-3 has the potential to rapidly improve.

It is not difficult to imagine a wide variety of GPT-3 spinoffs, or companies built around auxiliary services, or industry task forces to improve the less accurate aspects of GPT-3. Unlike some innovations, it could conceivably generate an entire ecosystem.

There is a notable buzz about GPT-3 in the tech community. One user in the U.K. tweeted: "I just got access to gpt-3 and I can't stop smiling, i am so excited." Venture capitalist Paul Graham noted coyly: "Hackers are fascinated by GPT-3. To everyone else it seems a toy. Pattern seem familiar to anyone?" Venture capitalist and AI expert Daniel Gross referred to GPT-3 as "a landmark moment in the field of AI."

I am not a tech person, so there is plenty about GPT-3 I do not understand. Still, reading even a bit about it fills me with thoughts of the many possible uses.

It is noteworthy that GPT-3 came from OpenAI rather than from one of the more dominant tech companies, such as Alphabet/Google, Facebook or Amazon. It is sometimes suggested that the very largest companies have too much market power, but in this case a relatively young and less capitalized upstart is leading the way. (OpenAI was founded only in late 2015 and is run by Sam Altman.)

GPT-3 is also a sign of the underlying health and dynamism of the Bay Area tech world, and thus of the U.S. economy. The innovation came to the U.S. before China and reflects the power of decentralized institutions.

Like all innovations, GPT-3 involves some dangers. For instance, if prompted by descriptive ethnic or racial words, it can come up with unappetizing responses. One can also imagine that a more advanced version of GPT-3 would be a powerful surveillance engine for written text and transcribed conversations. Furthermore, it is not an obvious plus if you can train your software to impersonate you over email. Imagine a world where you never know who you are really talking to: "Is this a verified email conversation?" Still, the hope is that protective mechanisms can at least limit some of these problems.

We have not quite entered the era where "Skynet goes live," to cite the famous movie phrase about an AI taking over (and destroying) the world. But artificial intelligence does seem to have taken a major leap forward. In an otherwise grim year, this is a welcome and hopeful development. Oh, and if you would like to read more, here is an article about GPT-3 written by GPT-3.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

To contact the author of this story: Tyler Cowen at tcowen2@bloomberg.net

To contact the editor responsible for this story: Michael Newman at mnewman43@bloomberg.net

Continued here:
Artificial Intelligence Is the Hope 2020 Needs - Bloomberg

Global Artificial Intelligence for Automotive Market 2020 Size, Share, Trends, Growth and Outlook with Company Analysis and Forecast to 2026 – Express…

The latest report on the Artificial Intelligence for Automotive market is designed to provide details of the companies operating in this industry space, assessing their competitive edge by scrutinising historic market dynamics and elaborating on major developments over the period. The study further enables leaders to frame vital business expansion strategies by highlighting growth opportunities and ongoing trends in the market.

Information on the growth parameters and prospects that influence the market's growth over the forecast period is included in the report. It also contains a thorough investigation of the challenges and restraints prevailing in the market, and how to overcome them.

The study extensively compares past and present trends to evaluate the market's growth rate over the analysis timeframe. It also elucidates the impact of the COVID-19 pandemic on global and regional markets, and outlines tactics to help industry players minimise the damage.

Request Sample Copy of this Report @ https://www.express-journal.com/request-sample/154652

Important pointers from the table of contents include the product scope, application terrain, regional spectrum and competitive hierarchy.

In conclusion, the report examines Artificial Intelligence for Automotive market segmentations while focusing on other important aspects such as the supply chain and sales channels, specifying data about upstream suppliers, raw materials, vendors and downstream buyers in the industry.

Key Highlights from Artificial Intelligence for Automotive Market Study:

Income and Sales Estimation

Historical revenue and sales volumes are presented, and the supporting information is triangulated with top-down and bottom-up approaches to calculate the complete market size and to estimate forecast numbers for the key regions covered in the Artificial Intelligence for Automotive report, along with classified and well-recognised types and end-use industries. Moreover, macroeconomic factors and regulatory policies are examined as explanations for the industry's development, alongside predictive analysis.

Manufacturing Analysis

The Artificial Intelligence for Automotive report is broken down by type and application. It includes a section featuring manufacturing process analysis, validated using primary data gathered from industry specialists and key officials of the profiled organisations.

Demand, Supply and Effectiveness

The Artificial Intelligence for Automotive report also provides data on supply, production, consumption, and exports and imports.

In short, the Artificial Intelligence for Automotive Market report provides major statistics on the state of the industry and is a valuable source of guidance for companies and individuals interested in the market. It closes with a conclusion covering research findings, market size evaluation, global market share, consumer needs and changing customer preferences, and data sources. These factors are expected to support the overall growth of the business.

Request Customization on This Report @ https://www.express-journal.com/request-for-customization/154652

See more here:
Global Artificial Intelligence for Automotive Market 2020 Size, Share, Trends, Growth and Outlook with Company Analysis and Forecast to 2026 - Express...

Artificial intelligence is the hope 2020 needs | Commentary | Seattle Times – Walla Walla Union-Bulletin

See the article here:
Artificial intelligence is the hope 2020 needs | Commentary | Seattle Times - Walla Walla Union-Bulletin

Artificial Intelligence Loses Some Of Its Edginess, But Is Poised To Take Off – Forbes

AI advances

More than a decade ago, Nicholas Carr, in his work Does IT Matter?, suggested that the widespread availability and low prices of technology made it more of a utility, like electricity or water, than a competitive differentiator. This may be happening with artificial intelligence to some degree.

It appears that AI's early-adopter phase is ending; the market is now moving into the early-majority chapter of this maturing set of technologies, write Beena Ammanath, David Jarvis and Susanne Hupfer, all with Deloitte, in their most recent analysis of the enterprise AI space. Early-mover advantage may fade soon. As adoption becomes ubiquitous, AI-powered organizations may have to work harder to maintain an edge over their industry peers.

Seventy-four percent of 2,727 executives responding to a Deloitte survey agree that AI will be integrated into all of their enterprise applications within three years. Although adopters are still bullish on AI, their advantage may wane as barriers to adoption fall and usage grows, Ammanath and her co-authors state. Organizations are finding it easier and easier to employ AI technologies. Data science and machine learning platforms have proliferated; AI-optimized hardware is providing greater compute power. It is now easier to train algorithms through self-service data preparation tools, synthetic data, small data, and pretrained models.

It is increasingly clear that we are on the path toward an era of pervasive AI, they add. The challenge now, the study finds, is leveraging AI in innovative ways to maintain its advantages. For example, much of the work with AI is still confined to managing IT systems. In addition, there still isn't enough AI talent to go around.

At least 26% of the companies surveyed can be considered seasoned AI adopters, meaning they have undertaken a large number of AI production deployments and have developed a high level of AI expertise across the board. These AI leaders are still seeing competitive advantage: 45% of this group said that AI technologies have enabled them to establish a significant lead over their competitors, versus 26% of the entire sample.

Still, this means that a majority of even the most advanced AI companies, 55%, aren't seeing competitive advantage. Part of this may be because AI is still confined to IT departments and functions, including cybersecurity. Forty-seven percent of respondents indicated that IT was one of the top two functions for which AI was primarily used, the survey shows.

This could mean that companies are using AI for IT-related applications such as analyzing IT infrastructure for anomalies, automating repetitive maintenance tasks, or guiding the work of technical support teams, Ammanath and her co-authors note. Tellingly, business functions such as marketing, human resources, legal, and procurement ranked at the bottom of the list of AI-driven functions.

An area that needs work is finding or preparing individuals to work with AI systems. Fewer than half of executives (45%) say they have a high level of skill around integrating AI technology into their existing IT environments, the survey shows. This could include data science and machine learning platforms, enterprise applications powered by AI, tools for developing conversation interfaces, and low-code or no-code tools. Across all these different technology areas, 93% are using cloud-based AI capabilities, while 78% employ open-source AI capabilities.

Ammanath and her team offer some suggestions for keeping the edge with AI:

Pursue creative approaches. Take inspiration from inventive use cases to develop solutions that are both useful and novel.

Push boundaries. Expand your view of what may be possible to accomplish with AI technologies. Try to pursue a more diverse portfolio of projects that could potentially enhance multiple business functions across the enterprise.

Create the new. Look to develop new AI-powered products and services that take advantage of the technologies' ability to learn and solve problems that humans can't.

Expand the circle. Move AI beyond the IT department by involving more of the business in AI efforts. Look for new vendors, partnerships, data sources, tools, and techniques to advance your efforts.

Leverage a diverse team. Include both technical and business experts in selecting AI technologies and suppliers. Having a broad perspective from developers, integrators, end users, and business owners can help ensure organizational alignment and a focus on business outcomes. Along with any vendor support consider using working groups, dedicated leaders, or communities of practice.

Actively address risks. Developing a set of principles and processes to actively manage the range of AI risks can help build trust within your business and with customers and partners.

Challenge vendors. While it is important to build trust and transparency with the providers of your AI-powered systems, it can be equally essential to ensure that what they provide is aligned with your organization's ethical principles.

Read more:
Artificial Intelligence Loses Some Of Its Edginess, But Is Poised To Take Off - Forbes

Imint is the Swedish firm that gives Chinese smartphones an edge in video production – TechCrunch

If your phone takes amazing photos, chances are its camera has been augmented by artificial intelligence embedded in the operating system. Now videos are getting the same treatment.

In recent years, smartphone makers have been gradually transforming their cameras into devices that capture data for AI processing beyond what the lens and sensor pick up in a single shot. That effectively turns a smartphone into a professional camera on auto mode and lowers the bar of capturing compelling images and videos.

In an era of TikTok and vlogging, there's a huge demand to easily produce professional-looking videos on the go. Like still images, videos shot on smartphones rely not just on the lens and sensor but also on enhancement algorithms. To some extent, those lines of code are more critical than the hardware, argued Andreas Lifvendahl, founder and chief executive of Swedish company Imint, whose software now enhances video production in roughly 250 million devices, most of which come from Chinese manufacturers.

"[Smartphone makers] source different kinds of camera solutions: motion sensors, gyroscopes, and so on. But the real differentiator, I would say, is more on the software side," Lifvendahl told TechCrunch over the phone.

Imint started life in 2007 as a spin-off from an academic research team at Uppsala University in Sweden. It spent its first few years building software for aerial surveillance, like many cutting-edge innovations that find their first clients in the defense market. In 2013, Lifvendahl saw the coming of widespread smartphone adoption and a huge opportunity to bring the same technology used in defense drones into the handsets in people's pockets.

"Smartphone companies were investing a lot in camera technology and that was a clever move," he recalled. "It was very hard to find features with a direct relationship to consumers in daily use, and the camera was one of those because people wanted to document their life."

But they were missing the point by focusing on megapixels and still images. "Consumers wanted to express themselves in a nice fashion of using videos," the founder added.

Source: Imint's video enhancement software, Vidhance

The next February, the Swedish founder attended Mobile World Congress in Barcelona to gauge vendor interest. Many exhibitors were, unsurprisingly, Chinese phone makers scouring the conference for partners. They were immediately intrigued by Imints solution, and Lifvendahl returned home to set about tweaking his software for smartphones.

"I've never met this sort of open attitude to have a look so quickly, a clear signal that something is happening here with smartphones and cameras, and especially videos," Lifvendahl said.

Vidhance, Imint's video enhancement software suite, mainly for Android, was soon released. In search of growth capital, the founder took the startup public on the Stockholm Stock Exchange at the end of 2015. The next year, Imint landed its first major account with Huawei, the Chinese telecoms equipment giant that was playing aggressive catch-up in smartphones at the time.

"It was a turning point for us because once we could work with Huawei, all the other guys thought, 'Okay, these guys know what they are doing,'" the founder recalled. "And from there, we just grew and grew."

The hyper-competitive nature of Chinese phone makers means they are easily sold on new technology that can help them stand out. The flipside is the intensity that comes with competition. The Chinese tech industry is both well-respected and notorious for its fast pace. Slow movers can be crushed in a matter of a few months.

"In some aspects, it's very U.S.-like. It's very straight to the point and very opportunistic," Lifvendahl reflected on his experience with Chinese clients. "You can get an offer even in the first or second meeting, like, 'Okay, this is interesting, if you can show that this works in our next product launch, which is due in three months. Would you set up a contract now?'"

"That's a good side," he continued. "The drawback for a Swedish company is the demand they have on suppliers. They want us to go on-site and offer support, and that's hard for a small Swedish company. So we need to be really efficient, making good tools and having good support systems."

The fast pace also permeates the phone makers' development cycle, which is not always good for innovation, suggested Lifvendahl. They are reacting to market trends, not thinking ahead of the curve (what Apple excels in) or conducting adequate market research.

Despite all the scrambling inside, Lifvendahl said he was surprised that Chinese manufacturers could get such high-quality phones out.

"They can launch one flagship, maybe take a weekend break, and then next Monday they are rushing for the next project, which is going to be released in three months. So there's really no time to plan or prepare. You just dive into a project, so there would be a lot of loose ends that need to be tied up in four or five weeks. You are trying to tie hundreds of different pieces together with fifty different suppliers."

Imint is one of those companies that thrive by finding a tough-to-crack niche. Competition certainly exists, often coming from large Japanese and Chinese companies. But there's always a market for a smaller player who focuses on one thing and does it very well. The founder compares his company to a little niche boutique in the corner, the hi-fi store with expensive speakers. His competitors, on the other hand, are the Walmarts with thick catalogs of imaging software.

The focused strategy is what allows Imint's software to enhance precision, reduce motion, track moving objects, auto-correct the horizon, reduce noise and enhance other aspects of a video in real time, all through deep learning.
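
As a rough sketch of how feature-tracking stabilization of this general kind can work, the snippet below tracks corners between frames, estimates the camera's rigid motion and smooths the accumulated trajectory. This is generic OpenCV, not Imint's proprietary Vidhance code, and the parameters are arbitrary.

```python
# Generic sketch of video stabilization: track features between frames,
# estimate the camera's rigid motion, then smooth the accumulated trajectory.
import cv2
import numpy as np

def smoothed_trajectory(path: str, radius: int = 15) -> np.ndarray:
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    motions = []  # per-frame (dx, dy, d_angle)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track corner features from the previous frame into this one.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=30)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.flatten() == 1
        # Estimate translation + rotation between the two frames.
        m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
        if m is not None:
            motions.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
        prev_gray = gray

    # Low-pass filter the cumulative trajectory; warping each frame by the
    # difference between smoothed and raw paths yields the stabilized video.
    trajectory = np.cumsum(np.array(motions), axis=0)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.stack([np.convolve(trajectory[:, i], kernel, mode="same")
                     for i in range(3)], axis=1)
```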

About three-quarters of Imint's revenues come from licensing the proprietary software that does these tricks. Some clients pay royalties on the number of devices shipped that use Vidhance, while others opt for a flat annual fee. The rest of the income comes from licensing its development tools, or SDK, and maintenance fees.

Imint now supplies its software to 20 clients around the world, including the Chinese big four of Huawei, Xiaomi, Oppo and Vivo, as well as chip giants like Qualcomm and MediaTek. ByteDance also has a deal to bake Imint's software into Smartisan, which sold its core technology to the TikTok parent last year. Imint is beginning to look beyond handsets into other devices that can benefit from high-quality footage, from action cameras and consumer drones through to body cameras for law enforcement.

So far, the Swedish company has been immune from the U.S.-China trade tensions, but Lifvendahl worried that as the two superpowers move towards technological self-reliance, outsiders like Imint will have a harder time entering the two respective markets.

"We are in a small, neutral country but also are a small company, so we're not a strategic threat to anyone. We come in and help solve a puzzle," assured the founder.

See the rest here:
Imint is the Swedish firm that gives Chinese smartphones an edge in video production - TechCrunch

How artificial intelligence is being used for divorce and separations with apps like Amica, Adieu and Penda – Newcastle Herald

Artificial intelligence is being used to help divorcing couples divide assets and develop a parenting plan for their children.

The technology has the potential to make family law matters cheaper and less stressful.

University of Newcastle researchers Professor Tania Sourdin and Dr Bin Li have examined this technology, including the apps Amica, Adieu and Penda.

Their work was part of a major research project on justice apps at Newcastle Law School.

The Newcastle academics say complex family law cases can "cost each party more than $200,000".

"There is a need for cheaper, smarter dispute resolution options in the family law area," said Professor Sourdin, who is Dean of the university's law school.

Asked if the apps could improve family law, Professor Sourdin said: "Yes, for some people an app can help".

"In a way it may make it easier because the time frames might be shorter.

"Also not having to talk with your former partner can be helpful where there are no children."

Some apps also support "easy referral to counselling, mediation and other services, which can be very useful".

"Often people who are self represented need support and there is evidence that a lot of people have difficulty finalising arrangements without a lawyer," she said.

Apps can help reduce legal costs while ensuring that people have access to a lawyer when needed.

The coronavirus pandemic has put a spotlight on relationships, amid reports that lockdowns and job losses have led to more strain between couples and more separations.

The use of apps to settle family law disputes seems suited to the times.

"It is where society is heading. Many people want to access the justice system from their home 24-7," Professor Sourdin said.

"Apps can help with this and also provide referral to professionals when needed."

Dr Li said the trend towards such apps was "much clearer" in the pandemic.

The federal government is supporting apps with artificial intelligence to "empower separating couples to resolve their family law disputes online".

Attorney-General Christian Porter issued a press release last month about the Amica app.

National Legal Aid developed Amica with $3 million in federal funding.

This app is suitable for couples whose relationship is "relatively amicable".

"Amica uses artificial-intelligence technology to suggest the split of assets," Mr Porter said.

He added that Amica considers a couple's circumstances, agreements reached by couples in similar situations and how courts generally handle disputes of the same nature.

"The tool can also assist parents to develop a parenting plan for their children."

The Morrison government wants to improve the family law system to make it "faster, simpler, cheaper and much less stressful for separating couples and their children".

The government believes Amica will help couples resolve disputes between themselves and avoid court.

The app is aimed at reducing legal bills for separating couples and pressure on family law courts.

Dr Li, a lecturer at the university's law school, said there had been "extensive discussion and debate on the reform of the family law system in Australia".

He said the apps could "alleviate the burden of courts and the load on judges".

Professor Sourdin said the apps "need to be carefully developed".

"There are concerns they may not function well and that the data used to power the AI [artificial intelligence] is deficient," she said.

"There are also real issues about how effective justice apps can be where there is a lack of agreement about what the issues are or what evidence is correct.

"Law can be very complex and requires contextual understandings."

There are also concerns about digital literacy and access to technology.

However, Professor Sourdin was surprised when their review of the Adieu justice app showed users were older than expected.

"For example 41 per cent of the 800 or so people who had used the Adieu app had a relationship of more than 15 years," she said.

Dr Li said there was also concern about data and privacy protection.

"What if the data collected by apps are hijacked and used by an unauthorised third party?", he said.

Nevertheless, they say apps in the justice sector can have many benefits.

Professor Sourdin said justice apps could be used to "help people with their legal rights".

"The DoNotPay app that is used in the US is a good example. This app can help people with simple matters - from parking fines to travel refunds," she said.

Here is the original post:
How artificial intelligence is being used for divorce and separations with apps like Amica, Adieu and Penda - Newcastle Herald

Global Automotive Artificial Intelligence Market Analysis by Emerging Trends, Size, Share, Future Growth, Current Statistics, Brand Endorsements and…

The Global Automotive Artificial Intelligence Market 2020 report covers in-depth analysis of the Automotive Artificial Intelligence market's size, share, overview, trends, technology, applications, growth, status, demands, insights, development, research and forecast for 2020-2026.

The Automotive Artificial Intelligence market research report sheds light on past surveys and offers an accurate forecast, including the factors influencing the growth rate. This global report gives a comprehensive analysis of influential factors such as market dynamics (supply, demand, price, quantity and other specific terms) and PEST and Porter's analyses, which assist the growth of the Automotive Artificial Intelligence industry.

Get Free Sample Report(including full TOC, Tables and Figures): @

https://www.globalmarketers.biz/report/consumer-goods-and-services/global-automotive-artificial-intelligence-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/156880#request_sample

Major Players Of Automotive Artificial Intelligence Market

Hyundai Motor Company, International Business Machines Corporation, Uber Technologies, Bayerische Motoren Werke AG, Tesla, Daimler AG, Harman International Industries, Ford Motor Company, Toyota Motor Corporation, Volvo Car Corporation, Microsoft Corporation, Start-Up Ecosystem, Didi Chuxing, Alphabet, Audi AG, General Motors Company, Intel Corporation, Honda Motor, Xilinx, Qualcomm

This report covers type and application data for the Automotive Artificial Intelligence market, along with country-level information for the period 2020-2026.

Market Segmented By Types and By its Applications:

Global Automotive Artificial Intelligence Market Segmentation: By Types

Human-Machine Interface, Semi-autonomous Driving, Autonomous Driving

Global Automotive Artificial Intelligence Market Segmentation: By Applications

Deep Learning, Machine Learning, Context Awareness, Computer Vision, Natural Language Processing

Global Automotive Artificial Intelligence Market Scope and Features

Global Automotive Artificial Intelligence Market Introduction and Overview: includes the market definition, market scope and market size estimation, and region-wise value and growth-rate history from 2015-2026.

Automotive Artificial Intelligence market dynamics: drivers, limitations, challenges faced, emerging countries, and industry news and policies by region.

Industry Chain Analysis: describes upstream raw material suppliers and the cost structure of Automotive Artificial Intelligence, major players with company profiles, manufacturing bases and market shares, manufacturing cost structure analysis, market channel analysis and major downstream buyers.

Global Automotive Artificial Intelligence Market Analysis by Product Type and Application: gives market share, value, status, production, value and growth-rate analysis by type from 2015-2019, along with a downstream market overview covering consumption, market share and growth rate by application (2015-2019).

Hurry up to get a huge discount:

Note: Up to 30% discount: get this report at a discounted price

Ask For Discount: https://www.globalmarketers.biz/discount_inquiry/discount/156880

Regional Analysis This segment of the report covers the analysis of Automotive Artificial Intelligence production, consumption, import, export, Automotive Artificial Intelligence market value, revenue, market share and growth rate, market status and SWOT analysis, Automotive Artificial Intelligence price and gross margin analysis by regions.

Competitive Landscape, Trends and Opportunities: provides the competitive situation and market concentration status of the major players in Automotive Artificial Intelligence, with basic information, i.e. company profile, product introduction, market share, value, price and gross margin, 2015-2019E.

Automotive Artificial Intelligence Market Analysis and Forecast by Region: includes market value and consumption forecasts (2014-2026) for the following regions and sub-regions: North America; Europe (Germany, UK, France, Italy, Spain, Russia, Poland); China; Japan; Southeast Asia (Malaysia, Singapore, Philippines, Indonesia, Thailand, Vietnam); the Middle East and Africa (Saudi Arabia, United Arab Emirates, Turkey, Egypt, South Africa, Nigeria); India; and South America (Brazil, Mexico, Colombia).

Do You Have Any Query or Specific Requirement? Ask Our Industry Expert:

https://www.globalmarketers.biz/report/consumer-goods-and-services/global-automotive-artificial-intelligence-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/156880#inquiry_before_buying

Table Of Content

1 Automotive Artificial Intelligence Introduction and Market Overview

2 Industry Chain Analysis

3 Global Automotive Artificial Intelligence Value (US$ Mn) and Market Share, Production, Value (US$ Mn), Growth Rate and Average Price (US$/Ton) Analysis by Type (2015-2019E)

4 Automotive Artificial Intelligence Consumption, Market Share and Growth Rate (%) by Application (2015-2019E)

5 Global Automotive Artificial Intelligence Production, Value (US$ Mn) by Region (2015-2019E)

6 Global Automotive Artificial Intelligence Production (K Units), Consumption (K Units), Export (%), Import (%) by Regions (2015-2019E)

7 Global Automotive Artificial Intelligence Market Status by Regions

8 Competitive Landscape Analysis

9 Global Automotive Artificial Intelligence Market Analysis and Forecast by Type and Application

10 Automotive Artificial Intelligence Market Analysis and Forecast by Region

11 New Project Feasibility Analysis

12 Research Findings and Conclusion

13 Appendix

13.1 Methodology

13.2 Research Data Source

Table of Contents & Table of Figures

https://www.globalmarketers.biz/report/consumer-goods-and-services/global-automotive-artificial-intelligence-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/156880#table_of_contents

Link:
Global Automotive Artificial Intelligence Market Analysis by Emerging Trends, Size, Share, Future Growth, Current Statistics, Brand Endorsements and...