UK firm reaches final stages of the NIST quest for quantum-proof encryption algorithms – www.computing.co.uk

Post Quantum CEO Andersen Cheng

London-based encryption specialist Post Quantum has reached the final stage of the NIST competition to find practical encryption standards capable of withstanding attacks by a quantum computer.

The US National Institute of Standards and Technology (NIST) launched its competition for Public-Key Post-Quantum Cryptographic Algorithms in 2016 with the aim of arriving at quantum-safe standards by 2024. Successful candidates will enhance or replace the three standards considered most vulnerable to quantum attack: the digital signature standard FIPS 186-4 and the public key cryptography standards NIST SP 800-56A and NIST SP 800-56B.

Many current encryption algorithms use one-way functions to derive encryption/decryption key pairs; their security rests on problems such as factorising very large integers into primes, which is easy in one direction (multiplying the primes) but infeasible to reverse. This approach underpins the general-purpose RSA algorithm that forms the basis of the secure internet protocols SSL and TLS. Elliptic curve cryptography, often preferred in IoT and mobile devices, also relies on a one-way mathematical function. Unfortunately, both are vulnerable to attack by quantum computers.
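As a rough, toy-number illustration of that asymmetry (an editorial aside, not part of the original report): multiplying two primes is a single cheap operation, while recovering them from the product by trial division already takes thousands of steps for five-digit primes and becomes hopeless at real RSA key sizes.

```python
p, q = 10007, 10009                  # toy primes; real RSA uses ~2048-bit moduli
n = p * q                            # the easy direction: one multiplication

def factor(n):
    """Recover p and q by trial division; infeasible at real key sizes."""
    d = 3
    while d * d <= n:                # only need to search up to sqrt(n)
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(n, factor(n))                  # 100160063 (10007, 10009)
```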

Last year NIST whittled down the original 69 candidates to 26, and in a third round announced last week it reduced this number to 15: seven finalists "most likely to be ready for standardisation soon after the end of the third round", and eight alternate candidates "regarded as potential candidates for future standardisation". Candidates fall into three functional categories: code-based, multivariate and lattice-based cryptography, which cover the variety of use cases for which post-quantum (PQ) encryption will be required. In addition, some candidates are suitable for public key exchange while others are better suited to digital signatures.

The only remaining candidate in the code-based category is Classic McEliece, a merger of Post Quantum's Never-The-Same Key Encapsulation Mechanism (NTS-KEM) and work done in the same area by a team led by Professor Daniel Bernstein of the University of Illinois at Chicago. The joint candidate, known as Classic McEliece, is based on the McEliece cryptosystem first proposed in the 1970s.

It works by injecting random errors into the ciphertext. Error-correcting codes allow the recipient of the encrypted message to strip out the random noise added to the message when decrypting it, a facility not available to any eavesdropper intercepting the message.

"Classic McEliece has a somewhat unusual performance profile: it has a very large public key but the smallest ciphertexts of all competing KEMs [key-encapsulation mechanisms]. This is not a good fit for general use in internet protocols as they are currently specified, but in some applications the very small ciphertext size could make Classic McEliece an appealing choice," NIST says, suggesting the protection of VPNs as a possible use case.

Cheng said he was pleased to join forces with Bernstein's team, adding that the need for viable PQ encryption is urgent.

"The entire world needs to upgrade its encryption, and we last did that in 1978, when RSA came in. The stakes couldn't be higher with record levels of cyber-attack and heightened nation state activity - if China or Russia is the first to crack RSA then cyber Armageddon will begin," Cheng said.

"This isn't an academic exercise for us, we are already several years down the commercialisation path with real-world quantum-safe products for identity authentication and VPN. If you work for an organisation with intellectual property or critical data with a long shelf life, and you're working from home during lockdown, you should already be using a quantum-safe VPN."


Celebrating The 19th Amendment And The First Safe Haven Hostel For Women – Forbes

Susan B. Anthony and Elizabeth Cady Stanton 1899. Two pioneers in the Equal Rights cause.

This August marks the centennial of the passing of the 19th Amendment to the U.S. Constitution, which reads, "The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of sex." It was an extremely difficult battle; at the time, married women could not own property and had no legal claim to any money they might earn. No female had the right to vote. Women were expected to focus on housework and motherhood.

Members of the American National Woman Suffrage Association marching from Pennsylvania Terminal to their headquarters after welcoming home Mrs. Carrie Chapman Catt, president of the Association, on her arrival from Tennessee.

Beginning in the 1820s, even before the Civil War, reform groups began to grow throughout the United States, including temperance leagues, the abolitionist movement and religious groups. In 1848, the suffrage movement began with a women's rights convention held in Seneca Falls, New York. More than 300 people, mostly women, took part, but a few men attended as well, including the activist Frederick Douglass.

It was there that Susan B. Anthony, Elizabeth Cady Stanton, Lucretia Mott and other women's rights pioneers (known as suffragists) circulated petitions and lobbied Congress to pass a constitutional amendment to enfranchise women.

A group of women's rights activists meeting at the Hotel Figueroa, formerly one of the first female hostels in America.

This August also marks the 94th anniversary of one of the nation's first-ever female hostelries, the Hotel Figueroa in Downtown Los Angeles, an enduring, collaborative safe haven for women. From the battle for women's rights to the current global pandemic, Hotel Figueroa has served as a backdrop for some of the most challenging and memorable moments in American history, a representative of both the endurance of women and the City of Angels' entrepreneurial, creative, and resilient spirit.

Originally opened in 1926 as an exclusive women's hostel by the YWCA, Hotel Figueroa was, according to the Los Angeles Times, the largest project of its kind in the United States to be financed, owned, and operated by women. It was advertised as an ideal stopping place for "ladies unattended". The first managing director was Maude N. Bouldin, the first female hotel manager in America, who regularly flew planes, rode motorcycles, and openly challenged the gender norms that often kept women from achieving their full potential. Under her leadership, the space served as a meeting place for almost every woman's club in Los Angeles.

Earlier days at the pool of the Hotel Figueroa

The Hotel Figueroa continues its rich history, deeply rooted in the Los Angeles women's movement. Today, the majority of the boutique hotel's staff is female. Signs of Maude Bouldin and the women of the YWCA can be seen throughout the hotel, including a large portrait hanging in the hotel's lobby of Maude Bouldin on a motorcycle. The Grande Salle, the hotel's private event room, includes vintage photographs of the women of the YWCA. Over an original fireplace is the famous YWCA triangle relic, indicating female strength and power, a symbol of the YWCA hotel for women.

Exotic Exterior lounge of Hotel Figueroa

This year the iconic Figueroa Hotel is celebrating a year-long Featured Artist Series partnership showcasing the works of local independent female artists. This fall, Featured Artist Stephanie DeAngelis will make her debut with an art collection tied to spiritually connecting folks from all walks of life in these times of social isolation.

Hotel Figueroa is open and implementing strict social distancing/safety policies. It is one of the first Clean + Safe Certified hotels by the California Hotel + Lodging Association, with measures including rigorous cleaning and sanitization of guest rooms and public spaces, UV sterilizer technology, face masks required indoors, outdoor pool lounge chairs spaced six-plus feet apart, and fresh towels placed on fully sanitized chairs between uses.

The iconic Hotel Figueroa is an ideal place to celebrate 100 years since the passage of the 19th Amendment, which so many brave women fought to secure.

The pool at the iconic Hotel Figueroa, where there are now as many men as women.


Federal response needed to handle Portland chaos – Boston Herald

The city of Portland, Ore., is besieged by violent rioters committed to destroying everything in their crosshairs, including property of the federal government. Local leaders like Mayor Ted Wheeler and Gov. Kate Brown have been reluctant to do anything meaningful to stop the carnage.

It is proper that President Trump deploy federal resources to protect federal property and quell some of the violence in the streets.

The progressive mayor on Wednesday night entered the chaotic epicenter of Portland to engage in listening sessions with the rioters. Instead, the crowds castigated him and presented him with a list of demands that included defunding the police and for Wheeler himself to resign.

The people destroying Portland are not protesters, they are violent anarchists who have been wreaking havoc upon the city every night for over 50 straight days. They attack federal personnel and destroy federal property. They use commercial grade fireworks as weapons and incendiary devices and employ hard projectiles to harm their targets.

It has become routine for these violent rioters to set fire to the federal courthouse and attempt to breach its doors.

A Department of Homeland Security report written Wednesday describes the previous evening: rioters were able to tear down an entire panel of plywood protecting the building and proceeded to use part of the city's abandoned fence from the park to attempt to block federal officers from exiting the building. When officers left the building in response, rioters responded by firing commercial grade fireworks at the officers.

This timeline of events has become the norm.

When officers do eventually leave the federal buildings and make arrests, many in the media make pronouncements of Stormtrooper tactics being employed and other such wild misclassifications.

Sadly, politicians are being equally irresponsible.

House Speaker Nancy Pelosi and Rep. Earl Blumenauer of Portland issued a statement reading, in part, "We are again reminded of the immense power of peaceful protest in the fight against racial injustice and police brutality."

"The Trump Administration shows its lack of respect for the dignity and First Amendment rights of all Americans," it continued. "Now, videos show them (federal officers) kidnapping protestors in unmarked cars in Portland, all with the goal of inflaming tensions for their own gain."

The federal law enforcement agents are largely members of Border Patrol units, which are given jurisdiction within 100 miles of any border or ocean, in this case the Pacific Ocean. They all fall under the umbrella of the Department of Homeland Security.

Historically, federal agencies have been employed to enforce laws that the state is unable or unwilling to carry out.

The anarchic destruction of American cities under the watch of timid or complicit elected leaders cannot be allowed to continue. Such a condition tears at the fabric of a community and shatters the rule of law.

Portland requires a federal response if it is to be restored as a civilized city and not a continual cauldron of anarchy.


Implementing Encryption and Decryption of Data in Python – Analytics India Magazine

Cryptography is a process mainly used for safe and secure communication. It relies on mathematical concepts and algorithms to transform data into a secret code that is difficult to decode. It involves encrypting and decrypting data: for example, if I need to send my personal details to someone over email, I can encrypt the information before sending it, and the receiver will decrypt it on arrival, with no data tampering possible in between.

Ciphertext is data or text that has been encrypted into a secret code using a mathematical algorithm, and it can be deciphered using the corresponding algorithm. Encryption converts plaintext into ciphertext, and decryption converts the ciphertext back into plaintext, so that only authorised users can decipher and use the data. Generally, a key known to both the sender and the receiver is used to cipher and decipher the text.

Python has several modules/libraries that are used for cryptography.

In this article, we will be exploring the cryptography package, simple-crypt, and hashlib (with its MD5 and SHA1 functions).

Cryptography is a Python package that helps with encrypting and decrypting data in Python. It provides cryptographic recipes to Python developers.

Let us explore Cryptography and see how to encrypt and decrypt data using it.

Implementation:

We first need to install the library using pip install cryptography.

a. Importing the library

The Fernet class is used for symmetric encryption and decryption in cryptography. Let us import Fernet from the library.
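A minimal sketch of this step (the cryptography package installed above provides Fernet):

```python
# Fernet implements symmetric, authenticated encryption
from cryptography.fernet import Fernet
```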

b. Generating the Key

Fernet performs authenticated encryption, for which we will need to generate a secret key. Let's define a function to generate a key and write it to a file. This function will create a key file where our generated key will be stored.
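A minimal sketch of such a function; the function name is illustrative, while the pass.key file name follows the article:

```python
from cryptography.fernet import Fernet

def generate_key():
    """Generate a Fernet key and store it in pass.key."""
    key = Fernet.generate_key()          # a URL-safe base64-encoded 32-byte key
    with open("pass.key", "wb") as key_file:
        key_file.write(key)

generate_key()
```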

Running this function creates a pass.key file in your working directory.

c. Loading the Key

The key generated above is unique, and it will be used for all subsequent encryption and decryption, so let us define a function to load the key whenever it is required.
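A sketch of a loader matching the key file created above (the function name is illustrative):

```python
def load_key():
    """Read the previously generated key back from pass.key."""
    with open("pass.key", "rb") as key_file:
        return key_file.read()
```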

d. Encrypting the Data

The next step is to encode the message you want to encrypt into bytes, initialise the Fernet class with the key, and encrypt the data using the encrypt function.
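A sketch of the encryption step, reusing the load_key helper from above; the sample message is illustrative:

```python
from cryptography.fernet import Fernet

message = "my deepest secret".encode()   # encode() turns the string into bytes
fernet = Fernet(load_key())              # initialise Fernet with the stored key
encrypted = fernet.encrypt(message)      # returns the ciphertext (a Fernet token)
print(encrypted)
```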

As you can see we have successfully encrypted the data.

e. Decryption of Data

The message is decrypted with the same key that we used to encrypt it, using the decrypt function. Let us decode the encrypted message.
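A sketch of the corresponding decryption step, continuing from the encryption snippet above (it reuses the same key file and the encrypted token):

```python
from cryptography.fernet import Fernet

fernet = Fernet(load_key())              # the same key used for encryption
decrypted = fernet.decrypt(encrypted)    # raises InvalidToken if the key or data is wrong
print(decrypted.decode())                # back to the original string
```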

As you can see, we have successfully decoded the message. When using cryptography it is necessary to keep the key file safe and secure, because if the key is lost the message/data cannot be decrypted.

Similarly, the cryptography module can be used to encrypt and decrypt data/text files: we just read the file's contents as bytes, pass them to the encrypt or decrypt function, and write the result back, as sketched below.
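A sketch of file encryption and decryption along those lines; the file names are illustrative:

```python
from cryptography.fernet import Fernet

fernet = Fernet(load_key())

# Encrypt the contents of a text file
with open("data.txt", "rb") as infile:
    token = fernet.encrypt(infile.read())
with open("data.txt.encrypted", "wb") as outfile:
    outfile.write(token)

# Decrypt it back to the original bytes
with open("data.txt.encrypted", "rb") as infile:
    original = fernet.decrypt(infile.read())
print(original.decode())
```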

Simple-crypt is a Python module that converts plaintext to ciphertext, and ciphertext back to plaintext, in seconds and with just a single line of code.

Implementation:

We first need to install the library using pip install simple-crypt.

a. Loading the Library

b. Encrypting and Decrypting

Simple-crypt has two pre-defined functions, encrypt and decrypt, which control the process of encryption and decryption. For encryption, we need to call the encrypt function and pass the password (key) and the message to be encrypted.

Similarly, we can call the decrypt function and decode the original message from this ciphertext.
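A sketch of both calls, using AIM as the password as in the article; the sample message is illustrative. Note that simple-crypt deliberately stretches the password, so each call can take a few seconds:

```python
from simplecrypt import encrypt, decrypt

ciphertext = encrypt("AIM", "This is a secret message")  # password first, then the plaintext
plaintext = decrypt("AIM", ciphertext)                   # the same password must be used
print(plaintext.decode())
```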

Here you can see that we have used AIM as the password and it is the same for encryption and decryption.

With simple-crypt, we should keep in mind that the same password must be provided for encryption and decryption, otherwise the message will not be decoded back to the original.

Hashlib is a module in the Python standard library used for hashing, and it contains most of the popular hashing algorithms used by big tech firms for security purposes. A hash function takes an input of variable length and produces a fixed-length output sequence. Unlike the modules discussed earlier, hashing is one-way: reversing a hash is a very difficult and time-consuming job, which is why hashing is considered the most secure and safe form of encoding.


The Hashlib functions that we will be exploring are MD5 and SHA1

The MD5 algorithm/function produces a 128-bit hash value. Input strings must be converted to bytes so that they are accepted by the hash function. MD5 is mainly used for checking data integrity. It is predefined in hashlib.

Implementation:

hashlib is part of the Python standard library, so no separate installation is needed to use MD5.

a. Importing the library

b. Encrypting the data

In order to hash the data, we need to pass the message/data to the MD5 function as bytes. Here you will see that we type b before the message because this turns the string into a bytes literal, which is what the hash function accepts. The hexdigest function will return the resulting hash as a HEX string.

If we do not want the hash returned as a HEX string but as a sequence of raw bytes, we use the digest function instead.
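A sketch of both forms of MD5 output; the sample message is illustrative:

```python
import hashlib

md5_hash = hashlib.md5(b"hello world")  # the b prefix makes this a bytes literal
print(md5_hash.hexdigest())             # the 128-bit hash rendered as a hex string
print(md5_hash.digest())                # the same hash as raw bytes
```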

Secure Hash Algorithms (SHA) are more secure than MD5. SHA is a family of algorithms such as SHA1, SHA256, etc. It is widely used in cryptographic applications.

We have already imported the hashlib library, so we will directly hash the message/data using SHA1.

Encryption of Data

In order to hash the data, we need to pass the message/data to the SHA1 function as bytes. As with MD5, we type b before the message because this turns the string into bytes so that it is accepted by the hash function. The hexdigest function will return the resulting hash as a HEX string.

As with MD5, if we do not want the hash returned as a HEX string but as a sequence of raw bytes, we use the digest function.
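A sketch of the SHA1 equivalent; the sample message is illustrative:

```python
sha1_hash = hashlib.sha1(b"hello world")  # hashlib was imported above
print(sha1_hash.hexdigest())              # the 160-bit hash as a hex string
print(sha1_hash.digest())                 # the same hash as raw bytes
```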

Similarly, we can try different hashing algorithms from hashlib for encoding.

In this article, we went through encryption and decryption with the cryptography and simple-crypt modules, and hashing with hashlib's MD5 and SHA1 functions.



The state of blockchain in supply chain management today – Information Age

Steve Treagust, vice-president, industries program management at IFS, discusses the role blockchain plays in supply chain management

Blockchain is an increasingly important cog in the supply chain management wheel.

Some organisations associate blockchain with cryptocurrencies or the dark web, and not as having relevance for general supply chain operations. Others, however, understand that it can be a disruptive technology that solves logistical problems in complex supply chains spanning larger groups of organisations.

One example of a visionary team is found in Maersk and IBM, who together launched TradeLens, the world's first blockchain-enabled shipping solution. More than 90 organisations signed up to take part in this open-standard, blockchain-driven platform, including more than 20 port and terminal operators and customs authorities in the Netherlands, Saudi Arabia, Singapore, Australia and Peru. Added to that list more recently is India, where, starting in July 2020, the first trials of the electronic bill of lading will begin. The solution enables organisations to interact seamlessly and securely, sharing real-time access to IoT and sensor data such as temperature control or container weight, as well as providing standardised protocols and processes.


Blockchain today is making halting inroads into settings where trading partners want to share a distributed ledger to facilitate and track transactions. The Farmer Direct Platform, which provides traceability of food products from farm to fork, is partnering with Smucker's Folgers brand to use blockchain to provide enhanced supply chain visibility to customers. In most manufacturing and industrial supply chain settings, however, blockchain remains on the horizon, although it has strong potential to support processes including 3D printing, where it can help enforce intellectual property rights, supply chain management and the Internet of Things (IoT).

As more sophisticated, blockchain-enabled supply chains go into production, others may consider whether the technology solves problems for them as well. There's a lot of buzz around blockchain, but many organisations and individuals still don't really understand where it fits into their technology stack, due in large part to the association with the cryptocurrency use case.

Some data security and trust concerns still exist around the idea of running a mission-critical supply chain on blockchain, although cryptography-secured chains have been shown to provide very high levels of security. All these concerns and misunderstandings can be addressed with education. Nothing spurs faster learning than having to compete with an organisation already successfully using blockchain to reduce supply chain costs.

While enterprise and supply chain technologists may not trust blockchain yet, it is somewhat ironic that a lack of trust in the banking industry after the 2008 financial crisis was the original incentive for blockchain. In fact, blockchain was born as the channel to deliver Bitcoin, allowing organisations to avoid banks.

Now, a lack of trust in these same digital assets is the main hurdle to mainstream use. There are two addressable elements to improving trust in blockchain. First is education around the value of cryptography and blockchain technology. Second, when others begin to successfully use blockchain, this will also provide a platform for trust.

The ultimate benefit that blockchain brings to supply chain management very much depends on the type and size of the supply chain. I outlined six business benefits of blockchain in a 2017 blog post, including efficiency, auditability, traceability, transparency, security and feedback. Any of these six could be the number one benefit depending on the business and their individual goals.

If I were to pick one benefit that most applies to all supply chains, it would be traceability. Being able to trace supply at any time during and after the chain has been completed can provide a competitive advantage, allowing organisations to comply with ever-changing regulations and create overall efficiencies.


The next step for blockchain in supply chain management is broader adoption. It's not a case of if this will happen, but when. In my opinion, to boost trust and provide stability, organisations must work together to establish trust-based relationships, creating diverse communities intent on delivering positive value from the blockchain ecosystem. This would also require enforceable regulatory control: enough to curb the worst of human behaviour in free markets, but not enough to stifle blockchain innovation.


Artificial Intelligence & Advanced Machine learning Market is expected to grow at a CAGR of 37.95% from 2020-2026 – Bulletin Line

According to BlueWeave Consulting, the global Artificial Intelligence & Advanced Machine Learning market reached USD 29.8 billion in 2019, is projected to reach USD 281.24 billion by 2026, and is anticipated to grow at a CAGR of 37.95% during the forecast period from 2020 to 2026, owing to increasing overall global investment in artificial intelligence technology.

Request to get the report sample pages at: https://www.blueweaveconsulting.com/artificial-intelligence-and-advanced-machine-learning-market-bwc19415/report-sample

Artificial Intelligence (AI) is a computer science, algorithm- and analytics-driven approach to replicating human intelligence in a machine, and Machine Learning (ML) is an advanced application of artificial intelligence that allows software applications to predict results accurately. The development of powerful and affordable cloud computing infrastructure is having a substantial impact on the growth potential of the artificial intelligence and advanced machine learning market. In addition, the diversifying application areas of the technology, as well as a growing level of customer satisfaction among users of AI & ML services and products, are further factors currently driving the Artificial Intelligence & Advanced Machine Learning market. Moreover, in the coming years, applications of machine learning in various industry verticals are expected to rise exponentially. The proliferation of data generation is another major driving factor for the AI & Advanced ML market. As natural learning develops, artificial intelligence and advanced machine learning technology are paving the way for effective marketing, content creation, and consumer interactions.

In the organisation size segment, the large enterprises segment is estimated to hold the largest market share, while the SMEs segment is estimated to grow at the highest CAGR over the forecast period to 2026. Rapidly developing and highly active SMEs have increased the adoption of artificial intelligence and machine learning solutions globally, as a result of growing digitisation and rising cyber risks to critical business information and data. Large enterprises have been heavily adopting artificial intelligence and machine learning to extract the required information from large amounts of data and forecast the outcomes of various problems.

Predictive analytics and machine learning are rapidly being used in retail, finance, and healthcare. The trend is estimated to continue as major technology companies invest resources in the development of AI and ML. Due to the large cost savings, effort savings, and reliable benefits of AI automation, machine learning is anticipated to drive the global artificial intelligence and advanced machine learning market during the forecast period to 2026.

Digitalisation has become a vital driver of the artificial intelligence and advanced machine learning market across regions. Digitalisation is increasingly propelling everything from hotel bookings and transport to healthcare in many economies around the globe, and it has led to a rise in the volume of data generated by business processes. Moreover, business developers and key executives are opting for solutions that let them act as data modellers and provide them with an adaptive semantic model. With the help of artificial intelligence and advanced machine learning, business users are able to modify dashboards and reports, as well as filter or develop reports based on their key indicators.

Geographically, the global Artificial Intelligence & Advanced Machine Learning market is segmented into North America, Asia Pacific, Europe, the Middle East, Africa and Latin America. North America dominates the market: owing to the developed economies of the US and Canada, there is a high focus on innovations derived from R&D, and North America is among the most competitive markets in the world. The Asia-Pacific region is estimated to be the fastest-growing region in the global AI & Advanced ML market. Rising awareness of business productivity, supplemented with competently designed machine learning solutions offered by vendors present in the Asia-Pacific region, has led Asia-Pacific to become a highly promising market.

Request to get the report sample pages at: https://www.blueweaveconsulting.com/artificial-intelligence-and-advanced-machine-learning-market-bwc19415/

The major market players in the Artificial Intelligence & Advanced Machine Learning market are iCarbonX, TIBCO Software Inc., SAP SE, Fractal Analytics Inc., Next IT, Iflexion, Icreon, Prisma Labs, AIBrain, Oracle Corporation, Quadratyx, NVIDIA, Inbenta, Numenta, Intel, Domino Data Lab, Inc., Neoteric, UruIT, Waverley Software, and other prominent players, who are expanding their presence in the market by implementing various innovations and technologies.

About Us

BlueWeave Consulting is a one-stop solution for market intelligence regarding various products and services, online and offline. We offer worldwide market research reports, analysing both qualitative and quantitative data to boost the performance of your business solutions. Our primary forte lies in publishing more than 100 research reports annually. We have a seasoned team of analysts working across sub-domains such as Chemicals and Materials, Information Technology, Telecommunication, Medical Devices/Equipment, Healthcare, Automotive and many more. BlueWeave has built its reputation from scratch by delivering quality performance and nurturing long-lasting relationships with its clients over the years. We are one of the leading digital market intelligence companies, delivering unique solutions to help your business bloom.

Contact Us:

[emailprotected]

https://www.blueweaveconsulting.com

Global Contact: +1 866 658 6826


Regulating AI in Public Health: Systems Challenges and Perspectives – Observer Research Foundation


Artificial Intelligence (AI) is increasingly proliferating across the healthcare landscape and has immense promise for improving health outcomes in a resource-constrained setting like India. With emerging technology still finding its footing in the healthcare industry in the country, there are systemic roadblocks to hurdle before AI can be made transformative up to the last mile of public health. AI also carries immense challenges for India's mostly traditional regulators, who have to walk the tightrope of propelling an AI innovation ecosystem while maintaining a core concern for patient safety and affordability. This requires the regulators and relevant stakeholders to take a systemic view of the industry and understand the potential impact of regulation throughout the ecosystem. This landscape study outlines the contextual limitations within which Indian regulators for healthcare technology operate. It offers recommendations for a systems thinking approach to regulating AI in Indian health systems.

Attribution: Abhinav Verma, Krisstina Rao, Vivek Eluri and Yukti Sharma, Regulating AI in Public Health: Systems Challenges and Perspectives, ORF Occasional Paper No. 261, July 2020, Observer Research Foundation.

Artificial Intelligence (AI) in medicine relies on an ecosystem of health data to train machines that learn responses to diagnose, predict, or perform more complex medical tasks. Patient data is leveraged for supporting clinicians in decision-making, bringing to the fore patterns in the data that were not discernible to a clinician's eyes, and in some cases even charting out medical prognosis. Its uses have been well documented: electronic health record (EHR) systems have used machine learning algorithms to detect data from text[1] as well as undertaking predictive analysis to warn clinicians about high-risk conditions and co-morbidities.[2] This is in addition to guiding drug discovery[3] and, more topically, allowing population analysis for pandemic preparedness and response measures.[4]

The Indian government has been trying to nudge the health system towards greater overall digitisation for the last two decades. Frontline health workers are being trained to adopt digital health: moving from paper-and-pen-based entries that are transferred to a centralised digital portal, to now using mobile-phone applications that allow real-time information upload.[5] The shift to digitisation has been codified in the National Health Policy (2017) and is represented in the National Health Stack vision, detailing the need to leverage technologies such as Big Data analytics for data stored in universal registries.[6] The National Digital Health Blueprint (NDHB 2019) further builds on this vision to identify building blocks that leverage foundational technology towards expansive application development for varied uses and rely in a most rudimentary manner on high data integrity of the health system.[7]

While digitisation is a promising first step to creating interoperable digital systems, there are plenty of challenges to its adoption. EHR adoption, for example, has been laggard in public health institutions due to its high cost of implementation and the high burden it places on clinicians owing to cumbersome input and maintenance procedures.[8] Even with well-integrated EHR systems in the West, clinicians are known to spend more time with the technology than with the patient.[9] Such a situation is likely to be exacerbated in India, where the public health system is under-staffed and technologically averse.

For emerging technology that relies on robust data systems for innovation, this is an existential challenge. The same user reluctance that plagues EHR is likely to curtail the uptake of more advanced technological tools. Quality benchmarks like EHR standards can address this reluctance to an extent by ensuring the standardisation of a tool's design and function, allowing the data collected at different sources to be accessible and functional to different users in the same way. Once trained in the setting up and use of one system, the seamless integration and accessibility of a patient's health records for rapid diagnosis and treatment is likely to help users hurdle their technological reluctance. However, this may come at the cost of imposing heavy burdens on EHR developers.

In the nascent industry of emerging technology like AI and ML-based healthcare solutions, the technology is far more advanced than the standards, which are yet to be established. In the absence of a clear approval and market-access pathway, innovators have a higher price to pay to enter the healthcare innovations market. A regulator in this context must not only function to create boundary conditions to preserve patient safety, but also allow reasonable room for innovation and efficacy for promising solutions (See Figure 1).

India's public health ecosystem provides service delivery through vertical programs for immunisation, and disease surveillance and management that focuses on population health maintenance. It also boasts a formidable network of health services that encompasses 18 percent of the country's total outpatient care and 44 percent of total inpatient care,[10] all of which are highly subsidised or free for citizens. Although the reliance on public versus private healthcare varies across states, public healthcare centres often serve as the only point of care for the country's 66 percent rural population. Yet it suffers from staff shortages, low staff motivation, inadequate or outdated medical equipment, and slow-responding medical institutions.[11] Notwithstanding the progress made by Ayushman Bharat (AB-PMJAY) and its pursuit of health for all through health and wellness centres and health insurance, public health service in India is overburdened.[12]

With a doctor-patient ratio of 1:10,189[13] (10 times short of the World Health Organization's [WHO] recommended ratio[14]) and severe resource shortages, the clarion call has never been louder for technology at scale to support healthcare delivery in the country. The response has been hopeful, more recently from emerging technological solutions. For example, an AI-based breast cancer screening device that uses a non-invasive, low-cost solution based on heat-mapping for early detection of breast cancer has been able to detect breast cancer up to five years earlier than a mammography, with reduced reliance on trained technicians.[15] A smartphone-based anthropometry technology enables frontline health workers to accurately report baby weight,[16] solving for incongruencies in field-reported data that are popularly tied to insufficient focus on and incorrect interventions for malnutrition in the field. In countries in the West, a rapid detection and response device directly alerts radiologists when it spots pneumothorax.[17] Various states have taken the initiative to embrace this mission. Telangana, for example, has declared 2020 as the year of AI, with the intention of making AI-based innovation successful across e-governance, agriculture, healthcare and education.[18]

Healthcare is surely and steadily embracing digital health innovation to respond to critical health challenges. In response, regulations have been established for standardising the design and function of these technologies (as is the case with EHR, or medical devices) that recognise the risks associated with their use and protect patients' and users' safety and rights. AI-based solutions not only have variable conditions of risk associated with their use,[a] but those risks are also still being understood. In preparing for regulation of AI-based health technology, it is important to recognise the context and risks associated with each of these categories.

In many parts of the world, the use of AI in public healthcare delivery has increased in recent years. In the United Kingdom (UK), for example, the National Health Service or NHS adopted an AI chatbot-based triage system in 2019.[19] However, the known and unknown risks of making AI the norm for health service delivery have threatened to upend the values of equitable access that are synonymous with public health. While there are AI solutions that exist in speciality or tertiary care hospitals (especially diagnostic assistive tools), few solutions effectively reach out to the primary care setups, perhaps due to the high cost of development and operationalisation that deters affordable pricing for scale.[20]

Moreover, the more widespread use of AI is hampered by its complexity, rendering certain principles inexplicable to users and therefore untrustworthy (the AI black box).[b],[21] Due to their aggregation of several thousand data points, machine learning algorithms' decision trajectories are often too complex to be traced back and made explainable to users without human intervention.[22] Given the potential of AI to learn pre-existing patterns in data, AI has also been critiqued for replicating biases against disadvantaged social groups that clinicians would otherwise consciously rule out.[23] Concerns around the discrimination that might be inherent to using AI in medical contexts (and that is further challenging to identify and isolate) also have severe implications in a medico-legal context where liability is difficult to ascertain and is instead shared.[24]

National AI strategies have committed ambitious targets for capital investments towards research and application of artificial intelligence. Encapsulating a proactive stance, these strategies have highlighted how research, innovation and permissive markets can catapult economies into the 4th Industrial revolution and also occupy a significant position in the welfare discourse.[25]

India's National Strategy for AI sets a precedent for AI capacity development through the institution of Centres of Research Excellence (COREs) focused on fundamental research, as well as International Centres on Transformational AI (ICTAIs) for applied research. In parallel, it acknowledges critical challenges around issues of privacy and safety, data integrity, and technical resource capacity. In a context where emerging technology such as AI is finding a way to address public health challenges, regulation for standards of safety and efficacy cannot afford to simply react to known risks of technology[26] but must be proactive in collaborating for better safety standards.

The US Food and Drug Administration (USFDA), borrowing from the work of the International Medical Device Regulators Forum or IMDRF, provides a useful lens for regulating AI/ML models in healthcare, categorising them as AI/ML SaMD, or Software as a Medical Device.[27] Following a risk categorisation that ascertains an AI's potential risk to the patient and its intended use, the USFDA's proposal involves treating regulation for AI as a series of iterative checkpoints rather than a one-time certification model. Given the potential threat considered against the intended use of the AI, specific clinical evidence is required to be submitted both before and after deployment of the SaMD. In weaning itself off a static regulation model, the USFDA upholds Good Machine Learning Practice (GMLP) on expectations of quality systems responsible for generating SaMD, including ensuring quality and relevance of data, and transparency of the output aimed at users.[28] In establishing checkpoints that include manufacturers reporting on specific performance and safety indicators post deployment, the SaMD regulation process allows for modifications to approved devices for greater efficacy of use. Yet in the absence of domestic regulatory expertise in AI regulation, adopting this gold standard for regulation might be more expensive for domestic innovators.

The pacing problem[c] witnessed in the case of AI regulations for healthcare is stark. Historically utilised to safeguard social welfare, regulations have been risk-averse and have prioritised consumer safety. This is an outcome that follows systematic review of the costs and benefits of innovation, in addition to striking a balance with relevant stakeholder interests. However, the rise of emerging technology such as AI has raised an important critique of slow-moving, non-adaptive regulation regimes that have not only challenged innovation but also curtailed economic growth.[29] It is predicted that the application of AI in healthcare in India will be worth INR 431.97 billion by 2021;[30] this is juxtaposed against a regulatory system that has only just acknowledged software as a medical device[31] and an innovation ecosystem that is still burdened with high costs of experimentation and evaluation. A systematic review of the role of regulation in incentivising the uptake of AI for addressing public health's woes, while prioritising patient safety, is essential to guiding a regulatory framework for AI-based SaMD in developing countries like India.

India has had the experience of building supportive regulations for the pharmaceutical sector that helped it develop from almost non-existent to one of the world's leading suppliers of generic drugs. This was achieved through a mix of price controls, experimenting with process patents, and industrial promotion policies.[32] However, this agile and responsive policy development has yet to translate to medical devices or technologies, and India's health system continues to be 75-percent dependent on imported medical technology.[33] The imperative for India is to develop its own medical innovations ecosystem.[34] This section outlines the existing context within which the regulatory system for AI in healthcare will have to function (See Figure 2).

An AI model is built on the foundation of robust and accurate data. Some innovators are able to invest in cumbersome primary data collection and create their own proprietary datasets, while others buy commercially available datasets to train their models. Both pathways require intensive capital investments that are not available to early-stage start-ups that create tools for the public health system at large.

In India, the government owns large swathes of data, both from public health facilities and national programmes. However, this data lacks accuracy and completeness, which usually results in incorrect conclusions. On aggregation, small errors like misspelled names or inaccurate counts at the facility level can cumulate into glaring misinterpretations.[35] This can also detract from the representativeness of the datasets used for training and potentially amplify data biases in the AI models, which can have severe social fallouts.

Therefore, a critical challenge for the government is to enable digitisation of most clinical transactions in which citizens partake. Thereafter, it is necessary to develop a data culture and quality systems to ensure that digital health data accurately depicts the realities of health outcomes at the population, sub-national and even individual facility or patient level. To achieve this goal, the government of India has already commenced an ecosystem-building effort for digital health, at the core of which is the concept of EHRs for all citizens along with a health information exchange platform to enable sharing of data across the continuum of care.

These building blocks are envisioned as free-flowing data exchange, but they presently face immense challenges of portability, especially when it comes to including the private sector in this ecosystem. Physician compliance and adoption of standard terminologies like the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) are not particularly incentivised in India as they were in the United States (US), where one could secure substantially higher rates of EHR development through a system of financial incentives and sanctions. Beyond this, private institutions with digital systems face roadblocks due to the absence of mechanisms for sharing data with the government or each other due to technical interoperability challenges. For instance, the government's Revised National TB Control Program cannot follow patients or monitor their care once they choose to seek treatment in the private sector, due to the absence of sharing pathways.[36]

Unless interoperability across software systems and terminologies is uniformly secured across the healthcare system in India, the digital health ecosystem will remain fragmented and incomplete. While this normative ecosystem can aspire to digitise data that traditionally exists in paper registers, it does not necessarily assist AI innovators in their work unless easy, cost-effective and convenient modalities for sharing this data are instituted.

As health data is considered as Sensitive Personal Information under the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules 2011,[37] it is also necessary to have stronger privacy and security measures for digital health data, especially when it comes to sharing it. This is where privacy preserving processes like anonymisation and de-identification fit in: they will remove all personally identifiable marks from the data and prepare it for sharing for training AI models. However, it is now widely accepted that anonymisation is not absolute.[38] At the same time, annotation of health data, including pathological reports and radiological scans, is necessary for data to be usable for the machine to learn and draw patterns. Both privacy-preserving and annotation processes are cumbersome and investment-heavy activities[39] that can ultimately make the development process expensive and create entry barriers for new enterprises.

The role of technological innovation in addressing large-scale access challenges that are typical of a developing nations public healthcare system is also widely recognised in India. Investment patterns reflect this: the medical devices sector has seen an inflow of FDI worth US$1.8 billion between April 2000 and June 2019.[40] The pivotal drivers for this sector-specific growth have included increased healthcare consumption and insurance penetration, growing investment from private equity models, and diversified healthcare delivery mechanisms.[41]

In parallel, the government's flagship universal health coverage scheme AB-PMJAY is set to be established as the world's largest health assurance scheme, providing INR 0.5 million per family to nearly 40 percent of the country's population. Aiming to mainstream transformative technology and boost innovation for healthcare delivery through a dedicated Innovation Unit, AB-PMJAY has established the call for public health innovation in a version that is accessible and affordable to the most economically vulnerable. However, managing a precarious balance between long gestation periods of investments in medical technology and accelerating access for its 107.4 million target users is a direct challenge to the success of the scheme, and to revolutionising public health through innovation in general. The hope is for regulation to favour the market for innovation.

Extensive evaluation and testing processes for deep-tech solutions result in prolonged time spent at this stage, delaying deployment and requiring a relatively longer period of lock-in for investors. For example, the average time taken by US medical technology companies for pre-market clearance is 5.6 years,[42] with 61 percent of them taking more than four years to get initial market approvals. Further, even medical technology with inconsequential risks to patients, such as external aids like hearing aids, could be treated as a high-risk investment due to the high uncertainty that comes with long-term health outcomes. It is not surprising that the average time taken to exit a medical device startup is 8.8 years, with the company burning an average of US$6.25 million every year.[43]

There is increased ambiguity around perceived risks of AI-based technology and a need for stricter vigilance that follow post-market modifications. It is therefore fair to assume that without significant market incentives, promising emerging technological solutions cannot perform without supportive regulation to bolster its entry into public health.

The role of regulation in making high-risk industries attractive for private investments can be illustrated through the pharmaceutical industry. The value chain in the industry is characterised by two specific kinds of activities: those involved in drug discovery, and those in the manufacturing and selling of the drug. The latter being relatively low-risk, drug discovery involves high and inherently unpredictable risks with returns being 10 percent of the cost of capital for the process[44] and can only be afforded by pharmaceutical sales giants. The high cost of innovation here is offset by patent protection measures and value-based pricing of drugs manufactured, irrespective of their capital costs. Regulation in the pharmaceutical industry has in this way offset high R&D costs undertaken by manufacturers and made the investments in innovation possible, an approach that might not be feasible in the field of AI.

An exciting opportunity for infusing capital in AI for healthcare lies in mobilising investments towards core infrastructure for digital health innovation. Supporting the development of core capacities like generating standardised and annotated health records and health data exchanges will allow higher penetration for emerging technology that leverages robust data systems and builds atop these blocks to optimise its application in healthcare. In turn, more accessible markets attract private capital to relatively high-risk solutions (like clinical decision support systems, for example) in the AI-based development value chain.

As regulation pursues the alignment of clinical performance with patient safety, an important consideration is how AI solutions interact with their users and, in turn, how that affects clinical efficacy. Superior clinical evidence for an AI-based solution might not necessarily translate to superior adoption, nor guarantee that the solution addresses the clinical condition it was meant to, because of variables between the clinical environment and the algorithm's practice environment.

Unlike drugs, software and Information Technologies (IT) tools are known to be highly affected by organisational factors such as resources, staffing, skills, training, culture, workflow and processes[45] as delivery of healthcare interventions using these tools requires the healthcare staff to take on a more active role. A tale of caution comes from using CAD (computer-aided detection) for mammography to improve breast cancer detection wherein the CAD procedure performed no better (and in some ways worse) than the procedure without involving CAD.[46] Despite no real benefit to women for breast cancer screening, CAD-based mammographies increased nearly 70 percent after insurance reimbursement increased for this procedure in 2002.[47] Regulators thus need to account not only for the proven clinical efficacy of the solution, but the result of its presence in the market that might serve as a nudge for altering clinician behaviour around the target condition. Another element to consider is creating trust in AI models when it comes to patients, especially in cases where there is no human in the middle, like in the case of chatbots.[d]

In healthcare, human factors validation testing serves as a meaningful way to address adoption challenges that signal human interaction issues for the AI-based SaMD. This demonstrates that the final finished combination product-user interface can be used by intended users without serious issues, for its intended uses and under the expected use conditions.[48] In the public health context that is still struggling with technology adoption of the more fundamental applications (like EHR patient recording systems), AI explainability is an important consideration, in order to increase trust in these new systems, while studying and testing for possible risks of human-AI interaction.

Consensus from a study panel organised as part of Stanford University's One Hundred Year Study of Artificial Intelligence reflected that "...attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn't any one thing), and the risks and considerations are very different in different domains."[49] Limited understanding of AI helps explain the reluctance of regulatory bodies to deconstruct it for purposes of regulation.

At present, there is no domestic regulatory oversight in India for SaMD interventions, leaving AI-driven SaMD further out of its purview. Even if SaMD were recognised, across the world its regulatory approval is based on repeatability and certainty. However, when a software learns on its own and its outputs vary, the regulations need overhaul to adapt to it.[50]

Due to the evolving nature of algorithms and tedious standard regulatory processes, it is not hard to imagine that after an approval is granted and the product is marketed, an improved version of the algorithm can be released periodically as it collects and analyses new data. To eliminate the need to seek new approvals every single time a version of the algorithm has to be released, the USFDA has implemented a total product life cycle (TPLC) regulatory approach. This approach facilitates the rapid cycle of product improvement and requires pre-market submission for changes that affect safety or effectiveness, such as new indications for use, new clinical effects, or significant technology modifications that affect performance characteristics.[51] Incorporating a change management protocol is the welcome necessary step in dynamic evaluation of AI-driven products.

Acceptability of results of AI products is another impediment to its adoption. On the field, startups are advised to conduct clinical trials that are time consuming and expensive.[52] While rulebooks exist for drug-related clinical trials, regulations are scant in the context of medical devices, let alone AI-enabled SaMD. In the absence of a unified Medical Devices Policy, different agencies including the Central Drugs Standard Control Organization (CDSCO) and Bureau of Indian Standards (BIS) have enlisted their own set of requirements, but there is a lack of coordination amongst these agencies.[53] The absence of an overall guide has led to interpretation issues and prolonged approval times in complying with these interim measures.

Regulatory agility and responsiveness have a direct impact on the adoption of innovation. This regulatory framework needs to be continually fine-tuned to enable optimal innovation while controlling healthcare expenditure.[54] Regulatory certainty offers benefits to companies by increasing predictability and transparency. Moreover, regulations and standards can also increase the compatibility of products[55] (interoperability for software products), which can lead to cost savings[56] that are particularly beneficial for public health units. India's medical device market has leaped ahead and will continue to grow (pegged to be valued at US$50 billion by 2025),[57] but its regulatory infrastructure is likely to be a hurdle in many ways because of inherent deficits.

At their core, regulatory frameworks seek to fulfill the dual objective of ascertaining that a product's probable benefits for its intended use outweigh its probable risks, and ensuring that these products are easily available to patients in need. This also involves an enabling function to kickstart industries and innovation.

The first challenge in this pursuit concerns the purview of Indian medical device regulations. Since 1989, when the first medical device was regulated in India, regulators have only regulated hardware devices, treating them as identical to drugs.[58] A clear distinction between medical devices and pharmaceuticals for the purpose of regulation was made only in 2017, with the new Medical Device Rules.[59] These Rules expanded the scope of regulation to all medical devices and in-vitro diagnostic devices notified by the government on the basis of their risk. However, they did not recognise software as a medical device, something that had been mentioned in the earlier 2016 draft.

It was only through two notifications issued on 11 February 2020 that India moved from regulating just 37 categories of medical devices to bringing all devices, including software and accessories intended for a medical purpose, under the purview of regulation.[60] Through these notifications, the government has also sought to ensure that all importers and manufacturers of medical devices are certified as compliant with ISO-13485 (Medical Devices Quality Management Systems Requirements for Regulatory Purposes). While compliance with international quality norms can bring a certain assurance of product quality and safety, the standard is still not fit for quality assessment of the dynamic and emerging technologies increasingly being integrated into health systems, including AI.

Overall, Indian regulators fall behind their international counterparts in truly promoting innovation. At present, there are only nascent attempts at creating an ecosystem and infrastructure for quality testing of devices comparable to the CE or USFDA regimes.[61] The Gujarat government has already approved the setting up of India's first medical device testing lab,[62] but much remains to be done to put in place a framework that can give impetus to local quality testing.

Industry players have been pushing for a comprehensive regulatory regime for medical devices, separate from the Drugs and Cosmetics Act. Such legislation has also been proposed by NITI Aayog through the Draft Medical Devices (Safety, Effectiveness and Innovation) Bill, with its own proposed authority along the lines of the FSSAI.[63] The Bill, with changes incorporating the consensus reached with the Ministry of Health and Family Welfare, is expected to be introduced in Parliament in the near future.[64] However, there is little indication that this proposed regulatory framework will have specific provisions to deal with the dynamic demands of emerging technological solutions.

While India is moving slowly towards regulating the wider ambit of medical products already being used de facto within the health system, its regulators need to play multiple roles: protecting patients through rigorous pre- and post-market evaluations, and ensuring access to these products through affordability-inducing measures. For software solutions, India can swiftly adapt existing reference regulations (such as those of the International Medical Device Regulators Forum, or IMDRF), combined with oversight procedures instituted by local regulatory bodies. This might also require India to reassess its policymaking process and make it more participatory, with greater involvement of industry and academic stakeholders, to create a synergetic ecosystem for AI in healthcare products.

The dynamic nature of artificial intelligence, coupled with the variables introduced by its interaction with users, makes it apparent that any regulation balancing patient safety with product efficiency will need to be monitored and reviewed well into the deployment of the solution. This demands that the role of the regulator be multifaceted and progressive, which in turn might necessitate structural changes in how regulation, evaluation, certification and monitoring are traditionally conducted in India for health-related products.

An overview of regulatory capacity-building for highly specialised markets such as health technology provides a useful insight: semi-governmental regulation (involving specialised functionaries to inform standards and their implementation) allows regulatory agencies to borrow technical standards from international bodies, while exercising care in adapting them to their own social and economic context.[65]

However, these approaches cannot be adopted as-is in India, given the country's unique ecosystem, industry and regulatory constraints. Adopting USFDA-based quality and efficacy standards and mechanisms might also limit AI innovation in healthcare to innovators with the financial and technological resources to pursue the international gold standard, and in turn make AI that much less accessible to public health at large.

In regulating AI-based medical devices to mitigate their potential risks to patient safety, the IMDRF risk-assessment framework for SaMD allows regulators to identify categories of risk that require a higher degree of evaluation and monitoring. Focusing on the clinical acuity of the location of care (e.g., an intensive care unit versus a general preventive care setting), the type of decision being suggested (immediately life-threatening versus a clinical reminder), and the type of decision support being provided (e.g., an interruptive alert versus an invisible nudge), the framework justifiably requires high-risk medical devices to be substantiated with evidence of their validity, reliability and clinical association, and of the way in which they mitigate known risks to patients. Basing regulations on risk-based evaluation can help prioritise deployment of lower-risk medical devices in the short term[e] and address the more stringent regulatory concerns around high-risk devices in the long term.
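
To make the triage idea concrete, the sketch below shows one way the factors named above could be combined into a coarse risk tier that drives evidentiary stringency. It is a minimal illustration only: the factor names, scores and thresholds are assumptions made for this example and do not reproduce the official IMDRF categorisation.

```python
from dataclasses import dataclass

# Illustrative (not official) scoring of the three factors discussed in the text.
ACUITY_SCORE = {"intensive_care": 3, "emergency": 3, "inpatient": 2, "preventive_care": 1}
DECISION_SCORE = {"immediately_life_threatening": 3, "treatment_selection": 2, "clinical_reminder": 1}
SUPPORT_SCORE = {"interruptive_alert": 2, "invisible_nudge": 1}

@dataclass
class SamdProfile:
    care_setting: str    # clinical acuity of the location of care
    decision_type: str   # type of decision being suggested
    support_mode: str    # type of decision support being provided

def risk_tier(profile: SamdProfile) -> str:
    """Return a coarse risk tier that could decide how stringent the evidence requirements are."""
    score = (ACUITY_SCORE[profile.care_setting]
             + DECISION_SCORE[profile.decision_type]
             + SUPPORT_SCORE[profile.support_mode])
    if score >= 7:
        return "high"      # e.g. requires validity, reliability and risk-mitigation evidence
    if score >= 5:
        return "moderate"
    return "low"           # candidate for earlier, lower-friction deployment

print(risk_tier(SamdProfile("intensive_care", "immediately_life_threatening", "interruptive_alert")))  # high
print(risk_tier(SamdProfile("preventive_care", "clinical_reminder", "invisible_nudge")))               # low
```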

Therefore, what is needed is an ecosystem-building role, in which the regulator catalyses the industry by ensuring the availability of foundational building blocks like data, promulgating regulatory processes that secure patient interests without overburdening a fledgling industry, and working through an experimental and consultative approach with all relevant stakeholders to institutionalise these frameworks. Key recommendations on how regulators can fulfill these expectations are presented in the sections below.

For AI to truly permeate healthcare, data access cannot be centralised and cordoned off from those who need to use it. Privacy preservation and protection measures are largely in conflict with access to the large datasets needed for the development, certification and supervision of AI-in-health solutions. While there are innovative technological options, such as differential privacy and comprehensive, dynamic consent management, that can resolve this conflict, they are not widely available to an ecosystem that is already moving at speed. Meanwhile, it is the regulator's role to ensure data is democratised in a way that keeps the interests of citizens at the forefront: both protecting their privacy and ensuring their safety as patients.

Distinguishing between personal and non-personal data, and setting up separate access pathways for each, can be the first step towards data democratisation. To protect citizens' interests in the former, it might be reasonable to insist on in-depth documentation of data operating procedures along with regular audits. ISO-13485 and the General Data Protection Regulation (GDPR) requirements (in the absence of the Indian Personal Data Protection Bill) can provide broad guidance on the data privacy and security practices that must be instituted.

For non-personal data, the government has a facilitator's role to play, especially with respect to data gathered through its own efforts and programmes. Such data also has a greater likelihood of being representative and equally accessible to all.[66] Exploring pathways to publicly release government data in anonymised and digitised form should be the priority for enabling the industry. This needs to go beyond existing efforts like data.gov, which face their own challenges,[67] into a concerted effort to invest in the infrastructure and capacity building that enable quality data collection. It also requires a conscious effort to develop large, high-quality, consensual datasets fit for clinical AI innovation.

The regulatory role should also extend to standard-setting for data collection and consent, quality management, and consolidation, which the Health Ministry has been trying to fulfill with the EHR Standards (2016) and the NDHB (2019). This will propel the ecosystem and give it the technological interoperability to share data and aggregate it in a form fit for AI. However, the government should go beyond defining standards and future strategies by creating data marketplaces and collaborative schemes that enable this data sharing.

Data quality issues are critical when it comes to building AI for clinical settings. It is therefore incumbent on the regulators of AI models to also ensure that the data used adheres to the FAIR (findability, accessibility, interoperability, and reusability) principles and is collected in an ethical manner before certifying the model as fit for the market. This could be supplemented by organisational quality assessment at pre-market checkpoints. Such conditions signal to the industry that data integrity and ethical collection are of paramount importance for market eligibility, and can lead to positive structural changes in how enterprises function.
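
As a purely illustrative sketch of how such a pre-market data checkpoint might be operationalised, the snippet below maps FAIR and ethics criteria to simple boolean checks on a dataset's metadata. The field names, accepted schemas and checks are assumptions invented for this example; a real assessment would be richer and partly qualitative.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    identifier: str                  # persistent ID (findability)
    access_url: str                  # documented access route (accessibility)
    schema: str                      # standard format/vocabulary, e.g. DICOM (interoperability)
    licence: str                     # reuse terms (reusability)
    consent_documented: bool         # ethical collection
    provenance: list[str] = field(default_factory=list)

def premarket_data_checks(ds: DatasetRecord) -> dict[str, bool]:
    """Hypothetical checklist mapping FAIR + ethics criteria to simple boolean checks."""
    return {
        "findable": bool(ds.identifier),
        "accessible": ds.access_url.startswith(("https://", "http://")),
        "interoperable": ds.schema.lower() in {"fhir", "omop", "dicom"},
        "reusable": bool(ds.licence) and bool(ds.provenance),
        "ethically_collected": ds.consent_documented,
    }

checks = premarket_data_checks(DatasetRecord(
    identifier="doi:10.1234/example", access_url="https://data.example.org/chest-xray",
    schema="DICOM", licence="CC-BY-4.0", consent_documented=True, provenance=["district hospital X"]))
print(all(checks.values()), checks)  # True only if every criterion is met
```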

The regulatory requirements of AI in healthcare continue to evolve, as the industry is still in its nascent stages. This is also an opportunity for flexible regulation and for learning from experience in striking the right balance between over-regulation (which may delay large-scale public health deployment and meaningful impact) and under-regulation (which may pose challenges to safety, effectiveness, adoption and user trust). This points to two areas of consideration for the regulator: clinical evaluation of AI models, and post-market monitoring and surveillance of AI models in use.

This is what the USFDA's Pre-Certification Program intended to do: institute a least-burdensome regulatory oversight mechanism by ensuring that developers are trustworthy and have adequate quality management systems (organisational excellence and culture) that can support and maintain a safe and highly effective SaMD across its life cycle. This is followed by a pre-market review of the safety and efficacy of the model itself in the least intrusive way possible, and finally the USFDA uses post-market monitoring mechanisms to ensure the continued safety, effectiveness and quality performance of the SaMD in the real world, using real-world data.

When it comes to clinical evaluations, the purpose of regulatory oversight is to prevent false results, errors and misinterpretations in the outputs of AI models that could be detrimental to the clinical outcomes they target. The regulator's checkpoint might therefore be satisfied in the leanest way possible by ensuring the accuracy and relevance of the data inputs and of the outputs generated through the operation of the algorithm.

A framework used in the ethics of genome-wide association studies for multifactorial diseases, developed to identify which genes are useful, can be applied to the question of data for AI models as well. The framework identified three criteria necessary for a gene to be useful: (i) data in the studies, and work products derived from these genes, must be reproducible and applicable to the target population; (ii) data and derived work products should have significant benefits for the patient population to whom they are applied; and (iii) the resulting knowledge should lead to quantifiable utility for the patient in excess of the potential harm.[68]

Therefore, at the clinical evaluation stage, the regulator might be satisfied by evidence of benefit drawn from a suite of options, viz. pilot data, observational and risk-adjusted assessment results, and even clinical trials. It is the risk classification of the device that should define the stringency of the evidentiary requirements. At the same time, evidence of how efficiently the AI prediction interacts with the human in the loop can also be mandated for clinically high-risk devices. Even highly accurate predictions might not improve clinical outcomes unless they are followed up with effective interventions (actions) integrated into the clinical workflow.[69] Thus, evidence that points not only to the high predictive value of the model but also to how the prediction-action pair operates in the clinical setting might be better suited, though it may be cumbersome to obtain and assess.

For a traditional regulatory framework like India's, it might be challenging to leapfrog into the complex institutional changes that AI evaluation and monitoring might necessitate. Effective use of regulatory sandboxes, with relaxed regulations and anonymised data availability, can help experiment with regulatory models to strike the balance needed while allowing innovation to prosper. Sandboxing[f] can help decipher new models of collaboration between industry and government, while also helping to understand the boundary conditions of effective regulation and ethics that drive innovation. Sandboxes are common and effective across the world. Most recently, the UK's NHSX has called for a joint regulatory sandbox for AI in healthcare, bringing together the sandbox initiatives of different regulators and giving innovators a single, end-to-end safe space to develop and test their AI systems.[70]

Monitoring the performance of a model following its deployment is a complex task involving the collection and interpretation of real-world information. Further, self-learning models keep refining themselves on ongoing data streams, making it complicated to monitor them periodically for safety using a static and limited dataset. There are two possibilities for a nascent ecosystem here: limit itself to locked models that can be easily monitored, or develop novel ways to evaluate self-learning (unlocked or reinforced) models. The latter approach will require working consultatively alongside the industry to institute a balanced and cost-effective system, as experience shows that enhanced post-market surveillance has faced hardships in terms of compliance from developers and the enforcement powers of regulators.[71] One feasible pathway is periodic evaluation of performance on stratified patient subgroups, to assess whether the model performs equally well across sub-categories of patients. Flagging certain outputs as anomalous[g] and manually auditing them can also help improve the reliability of the model.
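
A minimal sketch of those two pathways, assuming a batch of post-market records containing model scores, eventual outcomes, clinician impressions and subgroup labels, is shown below. The metric, thresholds and toy data are placeholders chosen for illustration, not a prescribed monitoring protocol.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_performance(y_true, y_score, groups):
    """Compute AUC per patient subgroup (e.g. age band) from a post-market batch."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        # Require enough cases and both outcome classes before reporting a figure.
        if mask.sum() >= 30 and len(np.unique(y_true[mask])) == 2:
            results[str(g)] = roc_auc_score(y_true[mask], y_score[mask])
    return results

def flag_anomalies(y_score, clinician_label, disagreement_threshold=0.5):
    """Flag predictions that diverge sharply from the clinician's judgment for manual audit."""
    return np.where(np.abs(y_score - clinician_label) > disagreement_threshold)[0]

# Toy post-market batch (synthetic data standing in for real-world records).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)                                   # eventual outcomes
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 500), 0, 1) # model scores
clinician = rng.integers(0, 2, 500).astype(float)                  # clinician impressions
groups = rng.choice(["under_40", "40_to_65", "over_65"], 500)      # subgroup labels

print(subgroup_performance(y_true, y_score, groups))               # alert if any subgroup AUC drops below a floor
print(len(flag_anomalies(y_score, clinician)), "cases queued for manual audit")
```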

Another element where the regulator might need a consultative approach with developers is setting up processes for risk reporting to ensure patient safety. Medical devices usually rely on hazard and operability studies, which can be used for clinical AI devices as well. However, since the continuous-learning and adaptive aspects of AI bring newer risks, it might also be necessary to adapt these risk-assessment processes accordingly. Iterative system testing for risk on a continuous basis, or periodic risk audits, could be explored, but in ways that do not substantially add to developers' operational costs. Developers can also list at the outset the dependencies on which their models' operations are based (e.g. continued access to the users' data on which the model relies), so that each can be controlled and managed.

The dynamism of AI-based technology in the clinical context means that its users are pushed to adapt to new workflows that integrate its functions, which may positively influence health outcomes or, conversely, have no positive influence and instead distort the treatment pathway. Thus, even if a technology has no proven risk to the patient under given conditions, it needs to be tested for how it fits into user workflows.[72]

During clinical evaluation, if a given medical device responds to the clinical outcome it targets, there is merit in undertaking human factors validation testing that considers the environment in which it will be used. The USFDA recommends that manufacturers determine whether the population using the device comprises professionals or non-professionals, what the users' education levels and ages are, what functional limitations they may have, and their mental and sensory conditions. Clinical efficacy for a specific device can be radically influenced by how different the device's testing environment (a controlled laboratory ecosystem) is from its application environment (say, a primary health clinic with limited internet connectivity). For frontline health workers with minimal digital literacy, complex interface functions in digital health applications could reduce the number of beneficiaries they can attend to in a limited period of time, thus compromising health outcomes for the community. Regulation for medical devices therefore needs to spell out the comparable conditions that must be tested for, and to articulate usability requirements specific to the public health context.

Given that trustworthy AI is likely to be adopted more readily, and that in many cases trust is a condition for operating in healthcare, regulators are expected to articulate what evidence establishes that trust. Deep learning methods have been lauded for their accuracy but are opaque enough that users struggle to trust them or hold them accountable for high-risk clinical outputs. Given the high-impact, high-risk devices that AI promises to deliver for healthcare, simply prohibiting AI solutions that employ opaque decision pathways would be counter-productive. Instead, regulations could play a pivotal role in guiding manufacturers towards a need-based framework for explainability: (1) articulating the operational and legal needs for explanation, (2) examining the technical tools available to address them, and (3) weighing the required level of explanation against the costs involved, emphasising that explanations are socially useful only when total social benefits exceed costs.[73]

In many ways, the true litmus test for an innovation is its responsiveness to the actual needs of the ecosystem in which it is integrated. As regulators deliberate over how to condition the innovation ecosystem for AI in healthcare, favouring responsiveness to public health goals allows manufacturers to innovate directly in response to a need. For this, the regulator needs to build the larger foundational ecosystem and take on the role of an enabler, while simultaneously focusing on low-hanging fruit to start introducing emerging technologies into the market in a substantial way.

Under the Medical Devices Rules (2017), USFDA- and CE-certified medical devices can be marketed in India without having to undergo lengthy clinical trials. While it may be prudent to extend this provision to include SaMD, the step will not necessarily spur homegrown innovation. Certification under the USFDA and CE is prohibitively expensive for most startups, owing to the high costs of clinical trials and regulatory filing in the US and Europe, respectively. Therefore, in the absence of Indian quality certification mechanisms, regulatory authorities in India should explore ways to subsidise these certification expenses by providing direct financial incentives to startups and MSMEs working on solutions for public health. Further, subsidising costs at source could be explored via international agreements and partnerships with external certifying agencies.

There may already be solutions being developed internationally that can be easily contextualised for India. Incentivising these international companies to test solutions on Indian patients in their global trials, and working with international agencies to accept and assess these tests for certification, may be an important first step in preparing the Indian ecosystem. From a commercial point of view, international solutions may enjoy a first-mover advantage, but they can prepare the market for indigenous solutions and lower the barriers to entry in the longer run. For this, regulators in India will need to quickly adapt their vigilance mechanisms as a first goal (as compared to comprehensive clinical evaluations) and ensure safe deployment in India. Learning from this experience, regulators can then move on to define holistic certification and benchmarking guidelines for India.

International patent pooling for life-saving technologies can be negotiated by international consortiums, on similar lines to medical patent pools for life-saving drugs. While the technology will remain proprietary to the parent firm, the on-ground implementation of these technologies will have to be taken up by local firms that understand the diverse contexts of Indian health systems. Investment to ensure uptake of these solutions may lead to the creation of a smaller auxiliary industry that can quickly test and operationalise health technologies on the ground.

In an enabling role, the regulator must also take a forward-looking approach to building the foundational layers of the ecosystem, through collaborations with other governmental, private sector and civil society players. The NDHB is a prime example of how an enterprise-architecture approach focused on base principles, standard-setting and open-source technology layers can kickstart sustainable and scalable innovation in the top-most application layers. For the AI-in-health ecosystem, the government can play a facilitator's role in creating open technology layers, such as anonymisers and annotation tools, which can bring down the cost and effort required for innovators to develop and deploy solutions.

Finally, domestic regulatory clarity is pivotal for certainty amongst innovators building for the Indian market. India should freely borrow and co-opt norms for clinical assessment of AI being set by international organisations such as the WHO-ITU.[74] An iterative approach to discovering India-specific norms, by working with medical research institutes and AI solution providers in controlled environments such as AI sandboxes, can prove hugely beneficial to all stakeholders involved: the medical community, the regulator, the innovator and the citizen seeking health services.

While AI shows immense potential in meeting the needs of an under-resourced and overburdened health system, there is much to be done to create and institutionalise structures that can propel its development and optimise its benefits to all. A systems lens to regulate AI can help achieve this goal by drawing domestic regulators' attention to conditions of the ecosystem that can allow emerging technology to thrive.

Investments are required to build the digital health ecosystem in the country and unlock the large amounts of data that exist with stakeholders, which form the base for AI-driven systems. Further, capacities need to be built not only by regulators and the government but also by private firms and solution providers in the space, to assess and ensure the long-term viability of AI. As with any emerging technology, adapting the current static regulatory approach into a more dynamic, iterative one will be key to allowing AI-based technology to thrive rather than struggle against laggard regulation. Perhaps most importantly, as was highlighted in a letter signed by eminent scholars such as Stephen Hawking and industry leaders like Elon Musk, this is an opportunity for the ecosystem to be developed in a way that maximises the social benefit of AI.[75] Ethical concerns, including but not limited to biases, may surreptitiously slip in and can go unnoticed if sufficient checks are not put in place. It is the regulator's responsibility to build a vibrant ecosystem that can deliver systems minimising bias and maximising social benefit as far as possible.

Gartner, a leading IT research and advisory company, has said that AI in healthcare is on the rise, or in some cases may have hit the peak of the Technology Hype Cycle.[76] These are still early days for AI in healthcare, and much is to be ascertained about its scalability to real-world use cases. Just as with artificial general intelligence, there is a danger of overestimating both the uses of these complex systems and the ability to build them to augment or replace existing healthcare systems. What is clear is that even narrow AI solutions (those operating within a predetermined range and scope) have been demonstrated to help medical professionals, influencing health outcomes in promising ways.[77] It is now for governments (as regulators), clinicians (as users) and patients (as beneficiaries) to collaboratively shape the terms that allow emerging technology to urgently respond to the country's developmental goals.

[a] Some operate at the population level, assisting the government in effective health service delivery, while others in the clinical setting interacting with clinicians or even directly with patients.

[b] Black box AI is any artificial intelligence system whose inputs and operations are not directly visible or interpretable to its users.

[c] Conventional regulation design involves a comprehensive process of matching regulatory needs with incentives and penalties. Given that emerging technologies like artificial intelligence develop faster than regulations can be created and implemented, regulation often has to catch up to the needs of the ecosystem in which the technology is placed and, in doing so, remains reactive.

[d] Based on patterns analysed from typical human responses, chatbots are trained to provide pre-set answers to questions or, in many cases, to indicate an action based on a pre-analysed pattern of human responses. In such uses of AI, where the human is eliminated from the equation, an additional layer of trust may need to be built among users about the credibility of the AI's indications and responses to guide its ethical use.

[e] Those that are needed largely for the public health system and frontline institutions, for example.

[f] In computer security, a sandbox is a mechanism for separating running programs, usually to prevent system failures or software vulnerabilities from spreading. The term is increasingly applied to regulatory experimentation, whereby a regulator allows live, time-bound testing of innovations under its oversight, in a controlled environment with relaxed regulatory limitations, in order to collect evidence.

[g] For instance, predictions that do not match human judgment in the clinical context.

[1] Bush, Jonathan. How AI Is Taking the Scut Work Out of Health Care. Harvard Business Review, March 5, 2018.

[2] Rajkomar, A., Oren, E., Chen, K. et al. Scalable and accurate deep learning with electronic health records. npj Digital Med 1, 18 (2018). https://doi.org/10.1038/s41746-018-0029-1

[3] Fleming, Nic. How Artificial Intelligence Is Changing Drug Discovery Citation Metadata. Nature (Vol. 557, Issue 7706), May 2018.

[4] Bora, Garima. Qure.ai Can Detect Covid-19 Lung Infections in Less than a Minute, Help Triage Patients. ET Online, April 30, 2020.

[5] 3.5 Crore People in 24 States Registered in Nutrition Monitoring Software: WCD Ministry. The Times of India, September 24, 2019.

[6] Rathi, Aayush. Is India's Digital Health System Foolproof? Economic and Political Weekly, December 11, 2019.

[7] National Digital Health Blueprint (NDHB). Ministry of Health and Family Welfare (MoHFW), Government of India, October 2019. Accessed April 27, 2020.

[8] Mabiyan, Rashmi. India Bullish on AI in Healthcare without Electronic Health Records. ETHealthWorld, January 6, 2020.

[9] Lee, Bruce Y. How Doctors May Be Spending More Time With Electronic Health Records Than Patients. Forbes, 13 Jan. 2020.

[10] Thayyil, Jayakrishnan, and Mathummal Cherumanalil Jeeja. Issues of Creating a New Cadre of Doctors for Rural India. International Journal of Medicine and Public Health, January 2013. https://doi.org/10.4103/2230-8598.109305.

[11] Vikas Bajpai. The Challenges Confronting Public Hospitals in India, Their Origins and Possible Solutions. Advances in Public Health, July 13, 2014. https://doi.org/10.1155/2014/898502.

[12] Gautam Chikermane, Oomen C. Kurian. Can PMJAY Fix India's Healthcare System? Crossing Five Hurdles on the Path to Universal Health Coverage. ORF Occasional Paper No. 172, October 2018, Observer Research Foundation.

[13] Frost, Isabel, Jess Craig, Jyoti Joshi, and Ramanan Laxminarayan. Access Barriers to Antibiotics. The Center For Disease Dynamics, Economics & Policy, April 11, 2019.

[14] WHO. Global Health Observatory (GHO) Data Accessed May 10, 2020. https://www.who.int/gho/health_workforce/physicians_density/en/.

[15] Bhattacharya, Sudip, Keerti Bhusan Pradhan, Md Abu Bashar, Shailesh Tripathi, Jayanti Semwal, Roy Rillera Marzo, Sandeep Bhattacharya, and Amarjeet Singh. Artificial Intelligence Enabled Healthcare: A Hype, Hope or Harm. Journal of Family Medicine and Primary Care Vol. 8, November 15, 2019. https://doi.org/10.4103/jfmpc.jfmpc_155_19.

[16] Wadhwani Institute of Artificial Intelligence. AI-Powered Anthropometry, n.d.

[17] FDA Clears GE Healthcare's Critical Care Suite Chest X-Ray AI. Imaging Technology News, September 12, 2019.

[18] Special Correspondent. AI Technology to Bloom in Telangana. The Hindu, January 3, 2020.

[19] Babylon Health. NHS General Practice Powered by Babylon. Accessed May 5, 2020. https://www.babylonhealth.com/


Everywhere and nowhere: The many layers of ‘cancel culture’ – The Daily Times

This combination photo shows authors J.K. Rowling, left, and Salman Rushdie. Rowling threatened legal action against a British news site that suggested she was transphobic after it referred to controversial tweets she has written in recent months. Rushdie was forced into hiding because of death threats over his novel The Satanic Verses. (AP Photo)

NEW YORK - So you've probably read a lot about cancel culture. Or know about a new poll that shows a plurality of Americans disapproving of it. Or you may have heard about a letter in Harper's Magazine condemning censorship and intolerance.

But can you say exactly what cancel culture is? Some takes:

"It seems like a buzzword that creates more confusion than clarity," says the author and journalist George Packer, who went on to call it "a mechanism where a chorus of voices, amplified on social media, tries to silence a point of view that they find offensive by trying to damage or destroy the reputation of the person who has given offense."

"I don't think it's real. But there are reasonable people who believe in it," says the author, educator and sociologist Tressie McMillan Cottom. "From my perspective, accountability has always existed. But some people are being held accountable in ways that are new to them. We didn't talk about cancel culture when someone was charged with a crime and had to stay in jail because they couldn't afford the bail."

"Cancel culture tacitly attempts to disable the ability of a person with whom you disagree to ever again be taken seriously as a writer/editor/speaker/activist/intellectual, or in the extreme, to be hired or employed in their field of work," says Letty Cottin Pogrebin, the author, activist and founding editor of Ms. magazine.

"It means different things to different people," says Ben Wizner, director of the ACLU's Speech, Privacy, and Technology Project.

In tweets, online letters, opinion pieces and books, conservatives, centrists and liberals continue to denounce what they call growing intolerance for opposing viewpoints and the needless ruining of lives and careers. A Politico/Morning Consult poll released last week showed 44% of Americans disapproved of it, 32% approved and the remaining 24% had no opinion or didn't know what it was.

For some, cancel culture is the coming of the thought police. For others, it contains important chances to be heard that didn't exist before.

Recent examples of unpopular cancellations include the owner of a chain of food stores in Minneapolis whose business faced eviction and calls for boycotts because of racist social media posts by his then-teenage daughter, and a data analyst fired by the progressive firm Civis Analytics after he tweeted a study finding that nonviolent protests increase support for Democratic candidates and violent protests decrease it. Civis Analytics has denied he was fired for the tweet.

"These incidents damage the lives of innocent people without achieving any noble purpose," Yascha Mounk wrote in The Atlantic last month. Mounk himself has been criticized for alleging that "an astonishing number of academics and journalists proudly proclaim that it is time to abandon values like due process and free speech."

Debates can be circular and confusing, with those objecting to intolerance sometimes openly uncomfortable with those who don't share their views. A few weeks ago, more than 100 artists and thinkers endorsed a letter co-written by Packer and published by Harper's. It warned against "a new set of moral attitudes and political commitments that tend to weaken our norms of open debate and toleration of differences in favor of ideological conformity."

The letter drew signatories from many backgrounds and political points of view, ranging from the far-left Noam Chomsky to the conservative David Frum, and was a starting point for contradiction.

The writer and trans activist Jennifer Finney Boylan, who signed the letter, quickly disowned it because she did not know who else had attached their names. Although endorsers included Salman Rushdie, who in 1989 was forced into hiding over death threats from Iranian Islamic leaders because of his novel The Satanic Verses, numerous online critics dismissed the letter as a product of elitists who knew nothing about censorship.

One of the organizers of the letter, the writer Thomas Chatterton Williams, later announced on Twitter that he had thrown a guest out of his home over criticisms of letter-supporter Bari Weiss, the New York Times columnist who recently quit over what she called a Twitter-driven culture of political correctness. Another endorser, Harry Potter author J.K. Rowling, threatened legal action against a British news site that suggested she was transphobic after referring to controversial tweets that she has written in recent months.

"The only speech these powerful people seem to care about is their own," the author and feminist Jessica Valenti wrote in response to the Harper's letter. "(Cancel culture) is certainly not about free speech: After all, an arrested journalist is never referred to as canceled, nor is a woman who has been frozen out of an industry after complaining about sexual harassment. Canceled is a label we all understand to mean a powerful person who's been held to account."

Cancel culture is hard to define, in part because there is nothing confined about it: no single cause, no single ideology, no single fate for those allegedly canceled.

Harvey Weinstein and Bill Cosby, convicted sex offenders, are in prison. Former television personality Charlie Rose has been unemployable since allegations of sexual abuse and harassment were published in 2017-18. Oscar winner Kevin Spacey has made no films since he faced allegations of harassment and assault and saw his performance in All the Money in the World replaced by Christopher Plummer's.

Others are only partially canceled. Woody Allen, accused by daughter Dylan Farrow of molesting her when she was 7, was dropped by Amazon, his U.S. film distributor, but continues to release movies overseas. His memoir was canceled by Hachette Book Group, but soon acquired by Skyhorse Publishing, which also has a deal with the previously canceled Garrison Keillor. Sirius XM announced last week that the late Michael Jackson, who seemed to face posthumous cancellation after the 2019 documentary Leaving Neverland presented extensive allegations that he sexually abused boys, would have a channel dedicated to his music.

Cancellation in one subculture can lead to elevation in others. Former San Francisco 49ers quarterback Colin Kaepernick has not played an NFL game since 2016 and has been condemned by President Donald Trump and many others on the right after he began kneeling during the national anthem to protest "a country that oppresses black people and people of color." But he has appeared in Nike advertisements, been honored by the ACLU and Amnesty International and reached an agreement with the Walt Disney Co. for a series about his life.

"You can say the NFL canceled Colin Kaepernick as a quarterback and that he was resurrected as a cultural hero," says Julius Bailey, an associate professor of philosophy at Wittenberg University who writes about Kaepernick in his book Racism, Hypocrisy and Bad Faith.

In politics, Virginia Governor Ralph Northam, a Democrat, remains in his job 1 1/2 years after acknowledging he appeared in a racist yearbook picture while in college. Sen. Al Franken, a Democrat from Minnesota, resigned after multiple women alleged he had sexually harassed them, but Lt. Governor Justin Fairfax of Virginia defied orders to quit after two women accused him of sexual assault.

Sometimes even multiple allegations of sexual assault, countless racist remarks and the disparagement of wounded military veterans aren't enough to induce cancellation. Trump, a Republican, has labeled cancel culture "far-left fascism" and "the very definition of totalitarianism" while so far proving immune to it.

"Politicians can ride this out because they were hired by the public. And if the public is willing to go along, then they can sometimes survive things perhaps they shouldn't survive," Packer says.

"I think you can say that Trump's rhetoric has had a boomerang effect on the rest of our society," says PEN America CEO Suzanne Nossel, who addresses free expression in her book Dare to Speak, which comes out next week. "People on the left feel that he can get away with anything, so they do all they can to contain it elsewhere."


The 6 Biggest Technology Trends In Accounting And Finance – Forbes

The explosion in data that has launched the Fourth Industrial Revolution, an era when business will be transformed by cyber-physical systems, has enabled several technology trends to develop. Every business can leverage these important trends and should pay attention to how best to use them, but accountants in particular should evaluate how these six technologies can be used strategically to achieve the company's business strategy.

The 6 Biggest Technology Trends In Accounting and Finance

1. Big Data

Data is crucial for making business financial decisions. Today, data isn't just the numbers and spreadsheets that accountants have been familiar with for years; it also includes unstructured data that can be analyzed through natural language processing. This can allow for real-time status monitoring of financial matters. Data is the fuel that powers the other technology trends transforming finance and accounting in the Fourth Industrial Revolution. Even the audit process has been digitalized. In the financial realm, data produces valuable insights, drives results and creates a better experience for clients. Since everything leaves a digital footprint, the unprecedented digitalization of our world is creating opportunities to glean insights from data that weren't possible before. These insights help improve internal operations and build revenue.
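
As a toy illustration of mining unstructured text for finance-relevant signals, the sketch below scans free-text messages (for example, supplier emails) for a few patterns and tallies alerts. The categories and keyword rules are invented for this example; production systems would use proper NLP models rather than simple keyword matching.

```python
import re
from collections import Counter

# Toy rule-based scan of unstructured text for finance-relevant signals.
ALERT_PATTERNS = {
    "overdue_invoice": re.compile(r"\b(overdue|past due|unpaid)\b", re.I),
    "price_increase": re.compile(r"\b(price increase|surcharge|rate hike)\b", re.I),
    "dispute": re.compile(r"\b(dispute|chargeback|refund request)\b", re.I),
}

def scan_messages(messages):
    """Count alert categories across a batch of free-text messages."""
    counts = Counter()
    for text in messages:
        for label, pattern in ALERT_PATTERNS.items():
            if pattern.search(text):
                counts[label] += 1
    return counts

inbox = [
    "Reminder: invoice INV-1042 is now past due by 14 days.",
    "We are announcing a 4% price increase effective next quarter.",
]
print(scan_messages(inbox))  # Counter({'overdue_invoice': 1, 'price_increase': 1})
```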

2. Increased Computing Power

Just as it is for other companies, all the data created by our digitalized world would be useless, or at least less powerful, if it weren't for the advances in computing power. These changes allow accounting and finance departments and firms to store and use data effectively. First, there are the cloud services from providers such as Amazon, Google, and Microsoft, which provide scalable systems and software that can be accessed wherever and whenever needed. Edge computing has also grown; this is where the computing happens not in the cloud but right where the data is collected. The adoption of 5G (fifth-generation) cellular network technology will be the backbone of a smarter world. When quantum computing is fully adopted, it will be transformative in ways that cannot even be predicted at this point, since it will catapult our computing power exponentially. Quantum computers will be able to provide services and solve problems that weren't possible with traditional computers. There will be tremendous value in the financial world for this capability.

3. Artificial Intelligence (AI)

Artificial intelligence can help accounting and finance professionals be more productive. AI algorithms allow machines to take over time-consuming, repetitive, and redundant tasks. Rather than just crunch numbers, financial professionals supported by AI will be able to spend more time delivering actionable insight. Machines can help reduce costs and errors by streamlining operations. The more finance professionals rely on AI to do what it does best (analyze and process tremendous amounts of data and take care of monotonous tasks), the more time humans will recover to do what they do best. New technology has changed the expectations clients have when working with companies, and it's the same for accounting. AI helps accountants be more efficient.

4. Intelligence of Things

When the internet of things, the system of interconnected devices and machines, combines with artificial intelligence, the result is the intelligence of things. These items can communicate and operate without human intervention and offer many advantages for accounting systems and finance professionals. The intelligence of things helps finance professionals track ledgers, transactions, and other records in real-time. With the support of artificial intelligence, patterns can be identified, or issues can be resolved quickly. This continuous monitoring makes accounting activities such as audits much more streamlined and stress-free. In addition, the intelligence of things improves inventory tracking and management.

5. Autonomous Robots

Robots don't have to be physical entities. In accounting and finance, robotic process automation (RPA) can handle repetitive and time-consuming tasks such as document analysis and processing, which is abundant in any accounting department. Freed up from these mundane tasks, accountants are able to spend time on strategy and advisory work. Intelligent automation (IA) is capable of mimicking human interaction and can even understand inferred meaning in client communication and adapt to an activity based on historical data. In addition, drones and unmanned aerial vehicles can even be deployed on appraisals and the like.

6. Blockchain

The final tech trend I wish to cover, one with significant implications for accounting and finance professionals, is blockchain. A blockchain, or distributed ledger, is a highly secure database: a way to securely store and accurately record data, which has broad applications in accounting and financial records. Blockchain enables smart contracts, protecting and transferring ownership of assets, verifying people's identities and credentials, and more. Once blockchain is widely adopted, and challenges around industry regulation are overcome, it will benefit businesses by reducing costs, increasing traceability, and enhancing security.
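
To show the core idea in the smallest possible form, the sketch below builds an append-only ledger in which each block stores a hash of its contents plus the previous block's hash, so any retroactive edit is detectable. It is a teaching toy under simplifying assumptions; real blockchains add networking, consensus and much more.

```python
import hashlib, json, time

def block_hash(block):
    """Hash the block's contents (everything except the stored hash itself)."""
    payload = {k: block[k] for k in ("timestamp", "records", "previous_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(records, previous_hash):
    """Bundle ledger records with a timestamp and a link to the previous block."""
    block = {"timestamp": time.time(), "records": records, "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

def is_valid(chain):
    """A chain is valid only if every block's hash matches its contents and its link to the prior block."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):                        # block contents were altered
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:  # link to prior block broken
            return False
    return True

ledger = [make_block([{"invoice": "INV-001", "amount": 1200.00}], previous_hash="0" * 64)]
ledger.append(make_block([{"invoice": "INV-002", "amount": 560.50}], previous_hash=ledger[-1]["hash"]))
print(is_valid(ledger))                       # True
ledger[0]["records"][0]["amount"] = 9999.99   # attempt to tamper with a recorded entry
print(is_valid(ledger))                       # False: the edit breaks the hash chain
```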

To learn more about these technology trends, as well as other key trends that are shaping the Fourth Industrial Revolution, you can take a look at my new book, Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution.


Hybrid Cloud is the future of IT in the Post COVID-19 World – CXOToday.com

The pace of cloud adoption has intensified due to the coronavirus pandemic, as organizations move to remote working and almost everything goes digital. In such a scenario, hybrid cloud, which offers organizations the best of both worlds thanks to its inherent flexibility, agility and efficiencies, is seeing phenomenal growth. In a recent interaction with CXOToday, Gurpreet Singh, Managing Director, Arrow PC Network, explains why hybrid cloud is the future of IT in the post-COVID-19 world.

CXOToday: What are the current trends shaping the cloud landscape, both in the country and globally?

Gurpreet Singh: The increase in demand for infrastructure has influenced cloud services, and we especially see growth in hybrid cloud adoption. As hybrid is a combination of a public and a private cloud platform, it provides the best of both, and with barriers between the platforms disappearing due to technological advancement, adoption of hybrid cloud is only going to increase. Besides, cloud is also being influenced by privacy-preserving multi-party analytics in the public cloud, hardware-based security, homomorphic encryption and IoT-based services. Apart from that, serverless computing, omni-cloud, quantum computing and Kubernetes are also among the latest trends shaping up the cloud landscape. Globally, the influence of these trends on the cloud will only increase, allowing seamless operation of systems across the globe in organizations of all sizes. The cloud market is also betting high on mobile cloud, where mobile applications are built, operated and hosted with cloud technology.

Could you give your views on how cloud adoption has picked up during the ongoing pandemic, and how it is impacting cloud service providers?

Gurpreet Singh: There was a time, say a decade or two ago, when cloud was considered an inconsequential expense, but now it is a necessity. Due to the COVID-19 pandemic, there was a radical shift in the work environment. Many enterprises chose to downsize their offices and shifted to cloud services. This increased business for cloud service providers, as customers at all levels started adopting cloud services in different flavors. For example, organizations are utilizing cloud automation to increase their online presence by developing commerce websites on cloud platforms. The SaaS segment, which spans service desks, accounting packages, customer relationship management, human resource management and enterprise resource planning, gained growth potential even during the COVID times. Healthcare systems needed scalable and secure cloud infrastructure to manage and maintain patient information with high speed and flexibility, which certainly helped during the pandemic.

With teams working remotely, business continuity and security are the biggest concerns for any enterprise at present. How can cloud address these issues while also ensuring flexibility and seamless operation for employees?

Gurpreet Singh: The centralized workspace is a thing of the past; millennials love to work from anywhere, as opposed to the traditional format of working from a cubicle in an office or from home, and this trend is only set to rise. Integrating a workforce consisting of digital natives and the traditional technological generation will certainly give an organization an edge. To be on par with millennial work-environment expectations, business continuity layered with security will be the topmost priority for enterprises. Cloud was born to provide what is required, not only in terms of new work environments but also flexibility in cost and usage. Enterprises, despite the lack of on-site IT personnel, were able to leverage cloud capabilities to check, maintain, and monitor their server and storage installations in data centers, aiding the uninterrupted functioning of their workforce and thus helping with business continuity. These are challenging times, as businesses face uncertainties and wish to plan for months rather than years; hence features like pay-as-you-use and grow-as-you-require are receiving more visibility as organizations wish to move their expenses from CapEx to OpEx, and cloud can help in this transition.

With businesses fast moving on to the cloud in the past months, there has also been a surge in the development of many cloud applications. How is this going to impact the cloud market in the long run?

Gurpreet Singh: The global cloud market is expected to grow to $295 billion by 2021, at a CAGR of 12.5%. The evolving hybrid cloud mode of deployment, along with the growing demand for content streaming, presents a favorable opportunity for market growth. New business ideas will leverage the cloud and multiply faster than ever. Startups with cloud applications solving big business concerns will pick up pace and play a key role in business growth in new and existing verticals. Present-day traditional applications are also choosing to move to the cloud, in turn pushing cloud applications to develop further. There will be continuous growth in demand for cloud infrastructure services, along with expenditure on specialized software, communications equipment, and telecom services. Players have adopted various growth strategies, such as partnerships and new service launches, to expand their presence in the cloud market amid the impact of COVID-19 and broaden their customer base. The Asia-Pacific (APAC) region is expected to post the highest CAGR, attributable to the increase in demand for cloud identity and access management across multiple sectors.
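
For readers who want the arithmetic behind a CAGR projection, the sketch below simply compounds a base market size forward. The base figure and the two-year horizon are assumptions chosen only to illustrate the formula; the interview does not state them.

```python
def project_market(base_value_bn: float, cagr: float, years: int) -> float:
    """Compound a base market size forward: value * (1 + CAGR) ** years."""
    return base_value_bn * (1 + cagr) ** years

# Hypothetical base figure chosen only to show the arithmetic.
print(round(project_market(233.0, 0.125, 2), 1))  # two years of 12.5% growth from a $233bn base ~ $294.9bn
```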

How can the partner community benefit from this and see it as a big opportunity for growing their business?

Gurpreet Singh: As companies make strategic acquisitions and launch new services, cloud consultancies have expressed confidence in continued cloud demand. There is also a possibility that companies will look to reduce their disparate solutions and focus on a partner community that meets multiple needs. Customers will look for partners who can help them in the digital transformation journey, and partners with the capability to handle end-to-end solutions for customers will prefer hybrid cloud, giving it a big opportunity in the marketplace. This is the time to unlearn and relearn. Our teams need to learn new technology solutions and take them to the customer to reap the early-mover advantage.

What will be the future of cloud in a post-COVID 19 world?

Gurpreet Singh: Customers will consume technology in all forms depending upon their workloads, and hybrid cloud platforms will reap the utmost benefit, as this has opened up a myriad of possibilities for technological advancement. Research has found that the hybrid cloud market will grow to $97.6 billion by 2023, at a CAGR of 17 percent, as hybrid not only helps ease the economics for an organization but also delivers security. The human cloud is also an emerging trend in the B2B sector, anticipating 22% year-over-year growth. Although opting for a hybrid cloud strategy can pose a challenge for larger enterprises due to complex IT architecture and security challenges, hybrid cloud is poised to serve as a revolution for organizations because of the inherent flexibility, agility and efficiencies it offers. In simple terms, hybrid cloud is the future of IT in the post-COVID-19 world.
