Artificial Intelligence and the Biopharmaceutical Industry: What’s Next? – JD Supra

Go here to see the original:

Artificial Intelligence and the Biopharmaceutical Industry: What's Next? - JD Supra

Artificial Intelligence, Foresight, and the Offense-Defense Balance – War on the Rocks

There is a growing perception that AI will be a transformative technology for international security. The current U.S. National Security Strategy names artificial intelligence as one of a small number of technologies that will be critical to the country's future. Senior defense officials have commented that the United States is at an inflection point in the power of artificial intelligence and even that AI might be the first technology to change the fundamental nature of war.

However, there is still little clarity regarding just how artificial intelligence will transform the security landscape. One of the most important open questions is whether applications of AI, such as drone swarms and software vulnerability discovery tools, will tend to be more useful for conducting offensive or defensive military operations. If AI favors the offense, then a significant body of international relations theory suggests that this could have destabilizing effects. States could find themselves increasingly able to use force and increasingly frightened of having force used against them, making arms-racing and war more likely. If AI favors the defense, on the other hand, then it may act as a stabilizing force.

Anticipating the impact of AI on the so-called offense-defense balance across different military domains could be extremely valuable. It could help us to foresee new threats to stability before they arise and act to mitigate them, for instance by pursuing specific arms agreements or prioritizing the development of applications with potential stabilizing effects.

Unfortunately, the historical record suggests that attempts to forecast changes in the offense-defense balance are often unsuccessful. It can even be difficult to detect the changes that newly adopted technologies have already caused. In the lead-up to the First World War, for instance, most analysts failed to recognize that the introduction of machine guns and barbed wire had tilted the offense-defense balance far toward defense. The years of intractable trench warfare that followed came as a surprise to the states involved.

While there are clearly limits on the ability to anticipate shifts in the offense-defense balance, some forms of technological change have more predictable effects than others. In particular, as we argue in a recent paper, changes that essentially scale up existing capabilities are likely to be much easier to analyze than changes that introduce fundamentally new capabilities. Substantial insight into the impacts of AI can be achieved by focusing on this kind of quantitative change.

Two Kinds of Technological Change

In a classic analysis of arms races, Samuel Huntington draws a distinction between qualitative and quantitative changes in military capabilities. A qualitative change involves the introduction of what might be considered a new form of force. A quantitative change involves the expansion of an existing form of force.

Although this is a somewhat abstract distinction, it is easy to illustrate with concrete examples. The introduction of dreadnoughts in naval surface warfare in the early twentieth century is most naturally understood as a qualitative change in naval technology. In contrast, the subsequent naval arms race, in which England and Germany competed to manufacture ever larger numbers of dreadnoughts, represented a quantitative change.

Attempts to understand changes in the offense-defense balance tend to focus almost exclusively on the effects of qualitative changes. Unfortunately, the effects of such qualitative changes are likely to be especially difficult to anticipate. One particular reason why foresight about such changes is difficult is that the introduction of a new form of force, from the tank to the torpedo to the phishing attack, will often warrant the introduction of substantially new tactics. Since these tactics emerge at least in part through a process of trial and error, as both attackers and defenders learn from the experience of conflict, there is a limit to how much can ultimately be foreseen.

Although quantitative technological changes are given less attention, they can also in principle have very large effects on the offense-defense balance. Furthermore, these effects may exhibit certain regularities that make them easier to anticipate than the effects of qualitative change. Focusing on quantitative change may then be a promising way forward to gain insight into the potential impact of artificial intelligence.

How Numbers Matter

To understand how quantitative changes can matter, and how they can be predictable, it is useful to consider the case of a ground invasion. If the sizes of two armies double in the lead-up to an invasion, for example, then it is not safe to assume that the effect will simply cancel out and leave the balance of forces the same as it was prior to the doubling. Rather, research on combat dynamics suggests that increasing the total number of soldiers will tend to benefit the attacker when force levels are sufficiently low and benefit the defender when force levels are sufficiently high. The reason is that the initial growth in numbers primarily improves the attacker's ability to send soldiers through poorly protected sections of the defender's border. Eventually, however, the border becomes increasingly saturated with ground forces, eliminating the attacker's ability to exploit poorly defended sections.

Figure 1: A simple model illustrating the importance of force levels. The ability of the attacker (in red) to send forces through poorly defended sections of the border rises and then falls as total force levels increase.
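The rise-then-fall pattern in the figure can be reproduced with a deliberately simple toy model. This is our own illustration, not the authors' model, and the function name and parameters are invented: defenders are scattered at random along a segmented border, and attackers pour through whatever segments are left empty, limited by their own numbers and by how many troops each gap can carry.

```python
def unopposed_crossing_capacity(n, sections=100, lane_capacity=5):
    # Expected number of border sections left with no defenders when n
    # defenders are scattered uniformly at random over `sections` segments.
    expected_gaps = sections * (1 - 1 / sections) ** n
    # Attackers pour through the gaps, limited by their own numbers (n)
    # and by how many troops each gap can carry per assault.
    return min(n, lane_capacity * expected_gaps)
```

At low force levels the attacker's crossing capacity simply grows with the size of the armies; past a saturation point, the expected number of gaps collapses toward zero and the defender dominates.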

This phenomenon is also likely to arise in many other domains where there are multiple vulnerable points that a defender hopes to protect. For example, in the cyber domain, increasing the number of software vulnerabilities that an attacker and a defender can each discover will benefit the attacker at first. The primary effect will initially be to increase the attacker's ability to discover vulnerabilities that the defender has failed to discover and patch. In the long run, however, the defender will eventually discover every vulnerability that can be discovered and leave behind nothing for the attacker to exploit.
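The same logic can be written down directly. In the sketch below (our illustration, not a model from the article), each of V flaws is discovered independently by attacker and defender with the same probability p; since the defender patches everything it finds, the attacker's expected haul is V * p * (1 - p), which rises until p = 0.5 and then falls to zero as discovery becomes exhaustive.

```python
def exploitable_vulns(total_vulns, discovery_rate):
    # Each flaw is found independently by attacker and defender with the
    # same probability; the defender patches what it finds, so the
    # attacker can exploit only flaws it finds that the defender misses.
    p = discovery_rate
    return total_vulns * p * (1 - p)
```

As discovery tools improve from weak (p near 0) to exhaustive (p near 1), the attacker's expected advantage first grows and then vanishes entirely: offensive-then-defensive scaling.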

In general, growth in numbers will often benefit the attacker when numbers are sufficiently low and benefit the defender when they are sufficiently high. We refer to this regularity as offensive-then-defensive scaling and suggest that it can be helpful for predicting shifts in the offense-defense balance in a wide range of domains.

Artificial Intelligence and Quantitative Change

Applications of artificial intelligence will undoubtedly be responsible for an enormous range of qualitative changes to the character of war. It is easy to imagine states such as the United States and China competing to deploy ever more novel systems in a cat-and-mouse game that has little to do with quantity. An emphasis on qualitative advantage over quantitative advantage is a fairly explicit feature of the American military strategy and has been since at least the so-called Second Offset strategy that emerged in the middle of the Cold War.

However, some emerging applications of artificial intelligence do seem to lend themselves most naturally to competition on the basis of rapidly increasing quantity. Armed drone swarms are one example. Paul Scharre has argued that the military utility of these swarms may lie in the fact that they offer an opportunity to substitute quantity for quality. A large swarm of individually expendable drones may be able to overwhelm the defenses of individual weapon platforms, such as aircraft carriers, by attacking from more directions or in more waves than the platform's defenses are capable of managing. If this method of attack is in fact viable, one could see a race to build larger and larger swarms that ultimately results in swarms containing billions of drones. The phenomenon of offensive-then-defensive scaling suggests that growing swarm sizes could initially benefit attackers, who can concentrate ever more intensely on less well-defended targets and parts of targets, before potentially allowing defensive swarms to win out if numbers grow far enough.

Automated vulnerability discovery tools, which have the potential to vastly increase the number of software vulnerabilities that both attackers and defenders can discover, stand out as another relevant example. The DARPA Cyber Grand Challenge recently showcased machine systems autonomously discovering, patching, and exploiting software vulnerabilities. Recent work on novel techniques such as deep reinforcement fuzzing also suggests significant promise. The computer security expert Bruce Schneier has suggested that continued progress will ultimately make it feasible to discover and patch every single vulnerability in a given piece of software, shifting the cyber offense-defense balance significantly toward defense. Before this point, however, there is reason for concern that these new tools could initially benefit attackers most of all.

Forecasting the Impact of Technology

The impact of AI on the offense-defense balance remains highly uncertain. The greatest impact might come from an as-yet-unforeseen qualitative change. Our contribution here is to point out one particularly precise way in which AI could impact the offense-defense balance, through quantitative increases of capabilities in domains that exhibit offensive-then-defensive scaling. Even if this idea is mistaken, it is our hope that by understanding it, researchers are more likely to see other impacts. In foreseeing and understanding these potential impacts, policymakers could be better prepared to mitigate the most dangerous consequences, through prioritizing the development of applications that favor defense, investigating countermeasures, or constructing stabilizing norms and institutions.

Work to understand and forecast the impacts of technology is hard and should not be expected to produce confident answers. The importance of the challenge, however, means that researchers should still try, in a scientific and humble way.

This publication was made possible (in part) by a grant to the Center for a New American Security from Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author(s).

Ben Garfinkel is a DPhil scholar in International Relations, University of Oxford, and research fellow at the Centre for the Governance of AI, Future of Humanity Institute.

Allan Dafoe is associate professor in the International Politics of AI, University of Oxford, and director of the Centre for the Governance of AI, Future of Humanity Institute. For more information, see http://www.governance.ai and http://www.allandafoe.com.

Image: U.S. Air Force (Photo by Tech. Sgt. R.J. Biermann)

Original post:

Artificial Intelligence, Foresight, and the Offense-Defense Balance - War on the Rocks

7 tips to get your resume past the robots reading it – CNBC

There are about 7.3 million open jobs in the U.S., according to the most recent Job Openings and Labor Turnover Survey from the Bureau of Labor Statistics. And for many job seekers vying for these openings, the likelihood they'll submit their application to an artificial intelligence-powered hiring system is growing.

A 2017 Deloitte report found 33% of employers already use some form of AI in the hiring process to save time and reduce human bias. These algorithms scan applications for specific words and phrases around work history, responsibilities, skills and accomplishments to identify candidates who match well with the job description.

These assessments may also aim to predict a candidate's future success by matching their abilities and accomplishments to those held by a company's top performers.

But it remains unclear how effective these programs are.

As Sue Shellenbarger reports for The Wall Street Journal, many vendors of these systems don't tell employers how their algorithms work. And employers aren't required to inform job candidates when their resumes will be reviewed by these systems.

That said, "it's sometimes possible to tell whether an employer is using an AI-driven tool by looking for a vendor's logo on the employer's career site," Shellenbarger writes. "In other cases, hovering your cursor over the 'submit' button will reveal the URL where your application is being sent."

CNBC Make It spoke with career experts about how to make sure your next application makes it past the initial robot test.

AI-powered hiring platforms are designed to identify candidates whose resumes match open job descriptions the most. These machines are nuanced, but their use still means very specific wording, repetition and prioritization of certain phrases matter.

Job seekers can make sure to highlight the right skills to get past initial screens by using tools, such as an online word cloud generator, to understand what the AI system will prioritize most. Candidates can drop in the text of a job description and see which words appear most often, based on how large they appear within the word cloud.
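A word cloud generator is, at bottom, a word-frequency counter. A minimal stand-in, purely illustrative, can be sketched in a few lines of Python:

```python
import re
from collections import Counter

def top_keywords(job_description, n=10, min_len=4):
    # Lowercase the text, split it into words, drop very short filler
    # words, and return the n most frequent remaining terms.
    words = re.findall(r"[a-z]+", job_description.lower())
    counts = Counter(w for w in words if len(w) >= min_len)
    return counts.most_common(n)
```

Pasting a posting into something like this shows at a glance which terms dominate the description and are therefore worth echoing in a resume.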

CareerBuilder also created an AI resume builder to help candidates include skills on an application they may not have identified on their own.

Including transferable skills mentioned in the job description can also improve your resume's odds. After all, executives from a recent IBM report say soft skills such as flexibility, time management, teamwork and communication are some of the most important skills in the workforce today.

"Job seekers should be cognizant of how they are positioning their professional background to put their best foot forward," Michelle Armer, chief people officer at talent acquisition company CareerBuilder, tells CNBC Make It. "Since a candidate's skill set will help set them apart from other applicants, putting these front and center on a resume will help make sure you're giving skills the attention they deserve."

It's also worth noting that AI enables employers to source candidates from the entire application system more easily, rather than limiting consideration just to people who applied to a specific role. "As a result," says TopResume career expert Amanda Augustine, "you could be contacted for a role the company believes is a good fit even if you never specifically applied for that opportunity."

When it comes to actually writing your resume, here are seven ways to make sure it looks best for the robots who will be reading it.

Use a text-based application like Microsoft Word rather than a PDF, HTML, Open Office, or Apple Pages document so buzzwords can be accurately scanned by AI programs. Augustine suggests job seekers skip images, graphics and logos, which might not be readable. Test how well bots will comprehend your resume by copying it into a plain text file, then making sure nothing gets out of order and no strange symbols pop up.

Mirror the job description in your work history. Job titles should be listed in reverse-chronological order, Augustine says, because machines favor documents with a clear hierarchy to their information. For each role, prioritize the most relevant information that matches the critical responsibilities and requirements of the job you're applying for. "The bullets that directly match one of the job requirements should be listed first," Augustine adds, "and other notable contributions or accomplishments should be listed lower in a set of bullets."

Include keywords from the job description, such as the role's day-to-day responsibilities, desired previous experience and overall purpose within the organization. Consider having a separate skills section, Augustine says, where you list any certifications, technical skills and soft skills mentioned in the job description.

Quantify performance results, Shellenbarger writes. Highlight ones that involve meeting company goals, driving revenue, leading a certain number of people or projects, being efficient with costs and so on.

Tailor each application to the description of each role you're applying for. These AI systems are generally built to weed out disqualifying resumes that don't match enough of the job description. The more closely you mirror the job description in your application, the better, Augustine says.

Don't place information in the document header or footer, even though resumes traditionally list contact information here. According to Augustine, many application systems can't read the information in this section, so crucial details may be omitted.

Network within the company to build contacts and get your resume to the hiring manager's inbox directly. "While AI helps employers narrow down the number of applicants they will move forward with for interviews," Armer says, "networking is also important."
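Several of the tips above (mirroring the description, reusing its keywords, tailoring each application) boil down to increasing the overlap between your resume's vocabulary and the posting's. A naive overlap score, illustrative only since real screening systems are proprietary and far more nuanced, might look like this:

```python
import re

def match_score(resume_text, job_description):
    # Share of the posting's distinct terms that also appear in the
    # resume: 1.0 means every term is mirrored, 0.0 means none are.
    tokenize = lambda text: set(re.findall(r"[a-z]+", text.lower()))
    jd_terms = tokenize(job_description)
    return len(jd_terms & tokenize(resume_text)) / len(jd_terms)
```

Even this crude measure makes the advice concrete: every keyword from the description that is missing from the resume drags the score down.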

AI hiring programs show promise at filling roles with greater efficiency, but can also perpetuate bias when they reward candidates with similar backgrounds and experiences as existing employees. Armer stresses hiring algorithms need to be built by teams of diverse individuals across race, ethnicity, gender, experience and other background factors in order to minimize bias.

This is also where getting your resume in front of a human can pay off the most.

"When you have someone on the inside advocating for you, you are often able to bypass the algorithm and have your application delivered directly to the recruiter or hiring manager, rather than getting caught up in the screening process," Augustine says.

Augustine recommends job seekers take stock of their existing network and identify those who may know someone at the companies they're interested in working at. "Look for professional organizations and events that are tied to your industry; 10times.com is a great place to find events around the world for every imaginable field," she adds.

Finally, Armer recommends those starting their job hunt review and polish their social media profiles.

Like this story? Subscribe to CNBC Make It on YouTube!

Don't miss: This algorithm can predict when workers are about to quit. Here's how

Read more:

7 tips to get your resume past the robots reading it - CNBC

Finland offers crash course in artificial intelligence to EU – The Associated Press

HELSINKI (AP) - Finland is offering a techy Christmas gift to all European Union citizens: a free-of-charge online course in artificial intelligence in their own language, officials said Tuesday.

The tech-savvy Nordic nation, led by the 34-year-old Prime Minister Sanna Marin, is marking the end of its rotating presidency of the EU at the end of the year with a highly ambitious goal.

Instead of handing out the usual ties and scarves to EU officials and journalists, the Finnish government has opted to give practical understanding of AI to 1% of EU citizens, or about 5 million people, through a basic online course by the end of 2021.

It is teaming up with the University of Helsinki, Finland's largest and oldest academic institution, and the Finland-based tech consultancy Reaktor.

Teemu Roos, a University of Helsinki associate professor in the department of computer science, described the nearly $2 million project as a civics course in AI to help EU citizens cope with society's ever-increasing digitalization and the possibilities AI offers in the jobs market.

"The course covers elementary AI concepts in a practical way and doesn't go into deeper concepts like coding," he said.

"We have enormous potential in Europe but what we lack is investments into AI," Roos said, adding that the continent faces fierce AI competition from digital giants like China and the United States.

The initiative is paid for by the Finnish ministry for economic affairs and employment, and officials said the course is meant for all EU citizens whatever their age, education or profession.

Since its launch in Finland in 2018, The Elements of AI has been phenomenally successful: it is the most popular course ever offered by the University of Helsinki, which traces its roots back to 1640, with more than 220,000 students from over 110 countries having taken it online so far, Roos said.

A quarter of those enrolled so far are aged 45 and over, and some 40% are women. The share of women is nearly 60% among Finnish participants - a remarkable figure in the male-dominated technology domain.

Consisting of several modules, the online course is meant to be completed in about six weeks full time - or up to six months on a lighter schedule - and is currently available in Finnish, English, Swedish and Estonian.

Together with Reaktor and local EU partners, the university is set to translate it into the remaining 20 of the EU's official languages over the next two years.

Megan Schaible, COO of Reaktor Education, said during the project's presentation in Brussels last week that the company decided to join forces with the Finnish university to prove that AI should not be left in the hands of a few elite coders.

An official University of Helsinki diploma will be provided to those who pass, and Roos said many EU universities would likely grant credit for taking the course, allowing students to include it in their studies.

For technology aficionados, the University of Helsinki's computer science department is known as the alma mater of Linus Torvalds, the Finnish software engineer who developed the Linux operating system during his studies there in the early 1990s.

In September, Google set up its free-of-charge Digital Garage training hub in the Finnish capital with the intention of helping job-seekers, entrepreneurs and children brush up on their digital skills, including AI.

Original post:

Finland offers crash course in artificial intelligence to EU - The Associated Press

How Artificial Intelligence Is Humanizing the Healthcare Industry – HealthITAnalytics.com

December 17, 2019 - Seventy-nine percent of healthcare professionals indicate that artificial intelligence tools have helped mitigate clinician burnout, suggesting that the technology enables providers to deliver more engaging, patient-centered care, according to a survey conducted by MIT Technology Review and GE Healthcare.

As artificial intelligence tools have slowly made their way into the healthcare industry, many have voiced concerns that the technology will remove the human aspect of patient care, leaving individuals in the care of robots and machines.

"Healthcare institutions have been anticipating the impact that artificial intelligence (AI) will have on the performance and efficiency of their operations and their workforces, and the quality of patient care," the report stated.

"Contrary to common, yet unproven, fears that machines will replace human workers, AI technologies in healthcare may actually be re-humanizing healthcare, just as the system itself shifts to value-based care models that may favor the outcomes patients receive instead of the number of patients seen."

Through interviews with over 900 healthcare professionals, researchers found that providers are already using AI to improve data analysis, enable better treatment and diagnosis, and reduce administrative burdens, all of which frees up clinicians' time to perform other tasks.

"Numerous technologies are in play today to allow healthcare professionals to deliver the best care, increasingly customized to patients, and at lower costs," the report said.

"Our survey has found medical professionals are already using AI tools to improve both patient care and back-end business processes, from increasing the accuracy of oncological diagnosis to increasing the efficiency of managing schedules and workflow."

The survey found that medical staff with pilot AI projects spend one-third less time writing reports, while those with extensive AI programs spend two-thirds less time writing reports. Additionally, 45 percent of participants said that AI has helped increase consultation time, as well as time to perform surgery and other procedures.

For those with the most extensive AI rollouts, 70 percent expect to spend more time performing procedures than doing administrative or other work.

"AI is being used to assume many of a physician's more mundane administrative responsibilities, such as taking notes or updating electronic health records," researchers said. "The more AI is deployed, the less time doctors spend at their computers."

Respondents also indicated that AI is helping them gain an edge in the healthcare market. Eighty percent of business and administrative healthcare professionals said that AI is helping them improve revenue opportunities, while 81 percent said they think AI will make them more competitive providers.

The report also showed that AI-related projects will continue to receive an increasing portion of healthcare spending now and in the future. Seventy-nine percent of respondents said they will be spending more to develop AI applications.

Respondents also indicated that AI has increased the operational efficiency of healthcare organizations. Seventy-eight percent of healthcare professionals said that their AI deployments have already created workflow improvements in areas including schedule management.

Using AI to optimize schedule management and other administrative tasks creates opportunities to leverage AI for more patient-facing applications, allowing clinicians to work with patients more closely.

"AI's core value proposition is in both improving diagnostic abilities and reducing regulatory and data complexities by automating and streamlining workflow. This allows healthcare professionals to harness the wealth of insight the industry is generating, without drowning in it," the report said.

AI has also helped healthcare professionals reduce clinical errors. Medical staff who don't use AI cited fighting clinical error as a key challenge two-thirds of the time, more than double the rate of medical staff who have AI deployments.

Additionally, advanced tools are helping users identify and treat clinical issues. Seventy-five percent of respondents agree that AI has enabled better predictions in the treatment of disease.

"AI-enabled decision-support algorithms allow medical teams to make more accurate diagnoses," researchers noted.

"This means doing something big by doing something really small: noticing minute irregularities in patient information. That could be the difference between acting on a life-threatening issue, or missing it."

While AI has shown a lot of promise in the industry, the technology still comes with challenges. Fifty-seven percent of respondents said that integrating AI applications into existing systems is challenging, and more than half of professionals planning to deploy AI raise concerns about medical professional adoption, support from top management, and technical support.

To overcome these challenges, researchers recommended that clinical staff collaborate to implement and deploy AI tools.

"AI needs to work for healthcare professionals as part of a robust, integrated ecosystem. It needs to be more than deploying technology; in fact, the more humanized the application of AI is, the more it will be adopted and the more it will improve results and return on investment. After all, in healthcare, the priority is the patient," researchers concluded.


Zebra Medical Vision Announces Agreement With DePuy Synthes to Deploy Cloud Based Artificial Intelligence Orthopaedic Surgical Planning Tools -…

KIBBUTZ SHEFAYIM, Israel--(BUSINESS WIRE)--Zebra Medical Vision, the deep learning medical imaging analytics company, announces today a global co-development and commercialization agreement with DePuy Synthes* to bring Artificial Intelligence (AI) opportunities to orthopaedics, based on imaging data.

Every year, millions of orthopaedic procedures worldwide use traditional two-dimensional (2D) CT scans or MRI imaging to assist with pre-operative planning. CT scans and MRI imaging can be expensive, and CT scans are associated with more radiation and are uncomfortable for some patients. Zebra-Med's technology uses algorithms to create three-dimensional (3D) models from X-ray images. This technology aims to bring affordable pre-operative surgical planning to surgeons worldwide without the need for traditional MRI or CT-based imaging.

"We are thrilled to start this collaboration and have the opportunity to impact and improve orthopaedic procedures and outcomes in areas including the knee, hip, shoulder, trauma, and spine care," says Eyal Gura, Co-Founder and CEO of Zebra Medical Vision. "We share a common vision surrounding the impact we can have on patients' lives through the use of AI, and we are happy to initiate such a meaningful strategic partnership, leveraging the tools and knowledge we have built around bone health AI over the last five years."

This technology is planned to be introduced as part of DePuy Synthes VELYS Digital Surgery solutions for pre-operative, operative, and post-operative patient care.

Read more on Zebra-Meds blog: https://zebramedblog.wordpress.com/another-dimension-to-zebras-ai-how-we-impact-the-orthopedic-world

About Zebra Medical Vision: Zebra Medical Vision's imaging analytics platform allows healthcare institutions to identify patients at risk of disease and offer improved, preventative treatment pathways to improve patient care. The company is funded by Khosla Ventures, Marc Benioff, Intermountain Investment Fund, OurCrowd Qure, Aurum, aMoon, Nvidia, Johnson & Johnson Innovation JJDC, Inc. (JJDC) and Dolby Ventures. Zebra Medical Vision has raised $52 million in funding to date and was named a Fast Company Top-5 AI and Machine Learning company. Zebra-Med is a global leader in FDA-cleared AI products, and is installed in hospitals globally, from Australia to India, Europe to the U.S., and the LATAM region.

*Agreement is between DePuy Ireland Unlimited Company and Zebra Medical Vision.


Why video games and board games aren't a good measure of AI intelligence – The Verge

Measuring the intelligence of AI is one of the trickiest but most important questions in the field of computer science. If you can't understand whether the machine you've built is cleverer today than it was yesterday, how do you know you're making progress?

At first glance, this might seem like a non-issue. "Obviously AI is getting smarter" is one reply. Just look at all the money and talent pouring into the field. Look at the milestones, like beating humans at Go, and the applications that were impossible to solve a decade ago but are commonplace today, like image recognition. How is that not progress?

Another reply is that these achievements aren't really a good gauge of intelligence. Beating humans at chess and Go is impressive, yes, but what does it matter if the smartest computer can be out-strategized in general problem-solving by a toddler or a rat?

This is a criticism put forward by AI researcher François Chollet, a software engineer at Google and a well-known figure in the machine learning community. Chollet is the creator of Keras, a widely used framework for developing neural networks, the backbone of contemporary AI. He's also written numerous textbooks on machine learning and maintains a popular Twitter feed where he shares his opinions on the field.

In a recent paper titled "On the Measure of Intelligence," Chollet also laid out an argument that the AI world needs to refocus on what intelligence is and isn't. If researchers want to make progress toward general artificial intelligence, says Chollet, they need to look past popular benchmarks like video games and board games, and start thinking about the skills that actually make humans clever, like our ability to generalize and adapt.

In an email interview with The Verge, Chollet explained his thoughts on this subject, talking through why he believes current achievements in AI have been misrepresented, how we might measure intelligence in the future, and why scary stories about superintelligent AI (as told by Elon Musk and others) have an unwarranted hold on the public's imagination.

This interview has been lightly edited for clarity.

In your paper, you describe two different conceptions of intelligence that have shaped the field of AI. One presents intelligence as the ability to excel in a wide range of tasks, while the other prioritizes adaptability and generalization, which is the ability for AI to respond to novel challenges. Which framework is a bigger influence right now, and what are the consequences of that?

In the first 30 years of the history of the field, the most influential view was the former: intelligence as a set of static programs and explicit knowledge bases. Right now, the pendulum has swung very far in the opposite direction: the dominant way of conceptualizing intelligence in the AI community is the "blank slate" or, to use a more relevant metaphor, the freshly initialized deep neural network. Unfortunately, it's a framework that's been going largely unchallenged and even largely unexamined. These questions have a long intellectual history (literally decades), and I don't see much awareness of this history in the field today, perhaps because most people doing deep learning today joined the field after 2016.

It's never a good thing to have such intellectual monopolies, especially as an answer to poorly understood scientific questions. It restricts the set of questions that get asked. It restricts the space of ideas that people pursue. I think researchers are now starting to wake up to that fact.

In your paper, you also make the case that AI needs a better definition of intelligence in order to improve. Right now, you argue, researchers focus on benchmarking performance in static tests like beating video games and board games. Why do you find this measure of intelligence lacking?

The thing is, once you pick a measure, you're going to take whatever shortcut is available to game it. For instance, if you set chess-playing as your measure of intelligence (which we did from the 1970s until the 1990s), you're going to end up with a system that plays chess, and that's it. There's no reason to assume it will be good for anything else at all. You end up with tree search and minimax, and that doesn't teach you anything about human intelligence. Today, pursuing skill at video games like Dota or StarCraft as a proxy for general intelligence falls into the exact same intellectual trap.
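
The "tree search and minimax" point can be made concrete. Below is a minimal, illustrative sketch (not any particular engine's code) of minimax over a toy game: it plays this game perfectly, yet encodes nothing that transfers to any other task.

```python
# A minimal minimax tree search over a toy game, illustrative only. Leaves
# are payoffs for the maximizing player; internal nodes are lists of moves.

def minimax(node, maximizing):
    """Return the payoff the current player can guarantee from `node`."""
    if isinstance(node, (int, float)):  # leaf: payoff for the maximizer
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 tree: maximizer picks a branch, minimizer replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # 3: the best outcome the maximizer can guarantee
```

The procedure is pure exhaustive search over game states; nothing about it resembles learning or generalization, which is Chollet's point.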

This is perhaps not obvious because, in humans, skill and intelligence are closely related. The human mind can use its general intelligence to acquire task-specific skills. A human who is really good at chess can be assumed to be pretty intelligent because, implicitly, we know they started from zero and had to use their general intelligence to learn to play chess. They weren't designed to play chess. So we know they could direct this general intelligence to many other tasks and learn to do those tasks similarly efficiently. That's what generality is about.

But a machine has no such constraints. A machine can absolutely be designed to play chess. So the inference we make for humans, "can play chess, therefore must be intelligent," breaks down. Our anthropomorphic assumptions no longer apply. General intelligence can generate task-specific skills, but there is no path in reverse, from task-specific skill to generality. At all. So in machines, skill is entirely orthogonal to intelligence. You can achieve arbitrary skill at arbitrary tasks as long as you can sample infinite data about the task (or spend an infinite amount of engineering resources). And that will still not get you one inch closer to general intelligence.

The key insight is that there is no task where achieving high skill is a sign of intelligence, unless the task is actually a meta-task that involves acquiring new skills over a broad [range] of previously unknown problems. And that's exactly what I propose as a benchmark of intelligence.

If these current benchmarks don't help us develop AI with more generalized, flexible intelligence, why are they so popular?

There's no doubt that the effort to beat human champions at specific well-known video games is primarily driven by the press coverage these projects can generate. If the public wasn't interested in these flashy "milestones" that are so easy to misrepresent as steps toward superhuman general AI, researchers would be doing something else.

I think it's a bit sad because research should be about answering open scientific questions, not generating PR. If I set out to "solve" Warcraft III at a superhuman level using deep learning, you can be quite sure that I will get there as long as I have access to sufficient engineering talent and computing power (which is on the order of tens of millions of dollars for a task like this). But once I'd done it, what would I have learned about intelligence or generalization? Well, nothing. At best, I'd have developed engineering knowledge about scaling up deep learning. So I don't really see it as scientific research because it doesn't teach us anything we didn't already know. It doesn't answer any open question. If the question was, "Can we play X at a superhuman level?", the answer is definitely, "Yes, as long as you can generate a sufficiently dense sample of training situations and feed them into a sufficiently expressive deep learning model." We've known this for some time. (I actually said as much a while before the Dota 2 and StarCraft II AIs reached champion level.)

What do you think the actual achievements of these projects are? To what extent are their results misunderstood or misrepresented?

One stark misrepresentation I'm seeing is the argument that these high-skill game-playing systems represent real progress toward AI systems "which can handle the complexity and uncertainty of the real world" [as OpenAI claimed in a press release about its Dota 2-playing bot OpenAI Five]. They do not. If they did, it would be an immensely valuable research area, but that is simply not true. Take OpenAI Five, for instance: it wasn't able to handle the complexity of Dota 2 in the first place, because it was trained with 16 characters and could not generalize to the full game, which has over 100 characters. It was trained on 45,000 years of gameplay (then again, note how training data requirements grow combinatorially with task complexity), yet the resulting model proved very brittle: non-champion human players were able to find strategies that reliably beat it within days of the AI being made available for the public to play against.

If you want to one day become able to handle the complexity and uncertainty of the real world, you have to start asking questions like, what is generalization? How do we measure and maximize generalization in learning systems? And that's entirely orthogonal to throwing 10x more data and compute at a big neural network so that it improves its skill by some small percentage.

So what would be a better measure of intelligence for the field to focus on?

In short, we need to stop evaluating skill at tasks that are known beforehand, like chess or Dota or StarCraft, and instead start evaluating skill-acquisition ability. This means only using new tasks that are not known to the system beforehand, measuring the prior knowledge about the task that the system starts with, and measuring the sample efficiency of the system (how much data it needs to learn to do the task). The less information (prior knowledge and experience) you require in order to reach a given level of skill, the more intelligent you are. And today's AI systems are really not very intelligent at all.
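
The direction of this measure can be caricatured in a few lines. The formula and all numbers below are my own invention for illustration, not the formal definition in Chollet's paper; they only capture the idea that equal skill reached with less prior knowledge and experience should score as more intelligent.

```python
# A toy score (my own construction, not the formal measure from "On the
# Measure of Intelligence") for skill gained per unit of information
# consumed. Two hypothetical learners reach the same skill level; the
# sample-efficient one scores higher.

def efficiency_score(skill_reached, prior_bits, experience_samples):
    """Skill divided by total information used (units are arbitrary)."""
    return skill_reached / (prior_bits + experience_samples)

# Same final skill (0.95), same priors, wildly different data budgets.
brute_force = efficiency_score(0.95, prior_bits=10, experience_samples=1_000_000)
human_like = efficiency_score(0.95, prior_bits=10, experience_samples=500)

print(human_like > brute_force)  # True: less data for equal skill
```

Under such a score, a system that brute-forces a task with millions of samples rates as far less intelligent than one reaching the same skill from a handful of examples, however impressive the final skill looks.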

In addition, I think our measure of intelligence should make human-likeness more explicit, because there may be different types of intelligence, and human-like intelligence is what we're really talking about, implicitly, when we talk about general intelligence. And that involves trying to understand what prior knowledge humans are born with. Humans learn incredibly efficiently, requiring very little experience to acquire new skills, but they don't do it from scratch. They leverage innate prior knowledge, besides a lifetime of accumulated skills and knowledge.

[My recent paper] proposes a new benchmark dataset, ARC, which looks a lot like an IQ test. ARC is a set of reasoning tasks, where each task is explained via a small sequence of demonstrations, typically three, and you should learn to accomplish the task from these few demonstrations. ARC takes the position that every task your system is evaluated on should be brand-new and should only involve knowledge of a kind that fits within human innate knowledge. For instance, it should not feature language. Currently, ARC is totally solvable by humans, without any verbal explanations or prior training, but it is completely unapproachable by any AI technique we've tried so far. That's a big flashing sign that there's something going on there, that we're in need of new ideas.
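
Real ARC tasks are distributed as JSON files of small integer grids with a few "train" demonstration pairs and held-out "test" pairs. The toy task below (mirror each grid left-to-right) is invented for illustration and is far simpler than actual ARC tasks, but it shows the few-shot structure described above.

```python
# A sketch of the ARC task format: a handful of demonstration pairs, then
# a held-out test pair. This toy transformation (mirror each row) is my
# own invented example, much easier than real ARC tasks.

toy_task = {
    "train": [
        {"input": [[1, 0], [2, 3]], "output": [[0, 1], [3, 2]]},
        {"input": [[5, 6, 7]],      "output": [[7, 6, 5]]},
    ],
    "test": [
        {"input": [[4, 8], [0, 9]], "output": [[8, 4], [9, 0]]},
    ],
}

def candidate_solver(grid):
    """A candidate program: mirror every row. A real solver would have to
    *infer* this rule from the demonstrations alone."""
    return [row[::-1] for row in grid]

# Verify the candidate against the demonstrations, then apply it to the test.
assert all(candidate_solver(p["input"]) == p["output"] for p in toy_task["train"])
print(candidate_solver(toy_task["test"][0]["input"]))  # [[8, 4], [9, 0]]
```

The hard part, of course, is not applying the rule but discovering it from two or three demonstrations, which is exactly the skill-acquisition ability Chollet wants to measure.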

Do you think the AI world can continue to progress by just throwing more computing power at problems? Some have argued that, historically, this has been the most successful approach to improving performance, while others have suggested that we're soon going to see diminishing returns if we just follow this path.

This is absolutely true if youre working on a specific task. Throwing more training data and compute power at a vertical task will increase performance on that task. But it will gain you about zero incremental understanding of how to achieve generality in artificial intelligence.

If you have a sufficiently large deep learning model, and you train it on a dense sampling of the input-cross-output space for a task, then it will learn to solve the task, whatever that may be: Dota, StarCraft, you name it. It's tremendously valuable. It has almost infinite applications in machine perception problems. The only problem here is that the amount of data you need is a combinatorial function of task complexity, so even slightly complex tasks can become prohibitively expensive.
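
The combinatorial-growth point is easy to see numerically. The sketch below uses toy numbers (mine, not from the interview): counting the configurations of a small grid world shows how quickly dense coverage of a state space becomes infeasible.

```python
# Toy illustration of combinatorial growth: a square grid where each cell
# takes one of 3 values. The state count, and hence the data needed for a
# "dense sampling", explodes with grid size.

def n_states(side, values_per_cell=3):
    """Number of distinct configurations of a side x side grid."""
    return values_per_cell ** (side * side)

for side in (2, 3, 4, 5):
    print(f"{side}x{side} grid: {n_states(side):,} states")
```

Going from a 2x2 grid (81 states) to a 5x5 grid already yields hundreds of billions of states, which is why, as Chollet says, "even slightly complex tasks can become prohibitively expensive" to cover with data.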

Take self-driving cars, for instance. Millions upon millions of training situations aren't sufficient for an end-to-end deep learning model to learn to safely drive a car. Which is why, first of all, L5 self-driving isn't quite there yet. And second, the most advanced self-driving systems are primarily symbolic models that use deep learning to interface these manually engineered models with sensor data. If deep learning could generalize, we'd have had L5 self-driving in 2016, and it would have taken the form of a big neural network.

Lastly, given you're talking about constraints for current AI systems, it seems worth asking about the idea of superintelligence: the fear that an extremely powerful AI could cause extreme harm to humanity in the near future. Do you think such fears are legitimate?

No, I don't believe the superintelligence narrative to be well-founded. We have never created an autonomous intelligent system. There is absolutely no sign that we will be able to create one in the foreseeable future. (This isn't where current AI progress is headed.) And we have absolutely no way to speculate what its characteristics may be if we do end up creating one in the far future. To use an analogy, it's a bit like asking in the year 1600: "Ballistics has been progressing pretty fast! So, what if we had a cannon that could wipe out an entire city? How do we make sure it would only kill the bad guys?" It's a rather ill-formed question, and debating it in the absence of any knowledge about the system we're talking about amounts, at best, to a philosophical argument.

One thing about these superintelligence fears is that they mask the fact that AI has the potential to be pretty dangerous today. We don't need superintelligence in order for certain AI applications to represent a danger. I've written about the use of AI to implement algorithmic propaganda systems. Others have written about algorithmic bias, the use of AI in weapons systems, or about AI as a tool of totalitarian control.

There's a story about the siege of Constantinople in 1453. While the city was fighting off the Ottoman army, its scholars and rulers were debating what the sex of angels might be. Well, the more energy and attention we spend discussing the sex of angels or the value alignment of hypothetical superintelligent AIs, the less we have for dealing with the real and pressing issues that AI technology poses today. There's a well-known tech leader who likes to depict superintelligent AI as an existential threat to humanity. Well, while these ideas are grabbing headlines, you're not discussing the ethical questions raised by the deployment of insufficiently accurate self-driving systems on our roads that cause crashes and loss of life.

If one accepts these criticisms that there is not currently a technical grounding for these fears why do you think the superintelligence narrative is popular?

Ultimately, I think it's a good story, and people are attracted to good stories. It's not a coincidence that it resembles eschatological religious stories, because religious stories have evolved and been selected over time to powerfully resonate with people and to spread effectively. For the very same reason, you also find this narrative in science fiction movies and novels. The reason why it's used in fiction, the reason why it resembles religious narratives, and the reason why it has been catching on as a way to understand where AI is headed are all the same: it's a good story. And people need stories to make sense of the world. There's far more demand for such stories than demand for understanding the nature of intelligence or understanding what drives technological progress.


New Findings Show Artificial Intelligence Software Improves Breast Cancer Detection and Physician Accuracy – P&T Community

CHICAGO, Dec. 19, 2019 /PRNewswire/ -- A New York City-based, large-volume private practice radiology group conducted a quality assurance review that included an 18-month software evaluation in its breast center, comprised of nine specialist radiologists, using FDA-cleared artificial intelligence software by Koios Medical, Inc. as a second opinion for analyzing and assessing lesions found during breast ultrasound examinations.

Over the evaluation period, radiologists analyzed over 6,000 diagnostic breast ultrasound exams. Radiologists used Koios DS Breast decision support software (Koios Medical, Inc.) to assist in lesion classification and risk assessment. As part of the normal diagnostic workflow, radiologists would activate Koios DS and review the software findings with clinical details to formulate the best management.

Analysis was then performed comparing the physicians' diagnostic performance to the 18-month period prior to the introduction of the AI-enabled software. Comparing the two periods, physicians recommended biopsy for suspicious lesions at a similar rate (17%) and performed 14% more biopsies, increasing the cancer detection rate (from 8.5 to 11.8 per 1,000 diagnostic exams) while simultaneously experiencing a significant reduction in benign biopsies (i.e., false positives). Noteworthy is the aggregate nature of the findings: adoption of the software increased gradually over the 18-month evaluation period. Trailing six-month results indicate a benign biopsy reduction exceeding 20% across the group. Positive predictive value, the proportion of positive findings that turn out to be true positives, improved over 20%.
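
The reported rates can be sanity-checked with a little arithmetic. Only the exam count and the two detection rates below come from the article; the derived counts and the example PPV inputs are illustrative.

```python
# Sanity-checking the reported figures. `exams` and the per-1,000 detection
# rates come from the article; the rest is illustrative arithmetic.

exams = 6000  # diagnostic breast ultrasound exams over the evaluation period

cancers_before = 8.5 / 1000 * exams   # cancers found at the old detection rate
cancers_after = 11.8 / 1000 * exams   # cancers found at the new rate
relative_gain = (11.8 - 8.5) / 8.5    # relative improvement in detection rate

print(round(cancers_before), round(cancers_after), f"{relative_gain:.0%}")

def ppv(true_positives, false_positives):
    """Positive predictive value: the share of positive (biopsied) findings
    that turn out to be true positives."""
    return true_positives / (true_positives + false_positives)

print(ppv(59, 41))  # e.g., 59 cancers out of 100 biopsies gives PPV 0.59
```

On these figures, the group would have found roughly 51 versus 71 cancers over 6,000 exams, a relative detection-rate gain of about 39%, while fewer benign biopsies is exactly what pushes PPV upward.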

"Physicians were skeptical in the beginning that software could help them given their years of training and specialization focusing on breast radiology. With experience using Koios software, however, over time and seeing the preliminary analysis they came to realize that the Koios AI software was gradually impacting patient care in a very positive way.Initially, radiologists completed internal studies that verified Koios software's accuracy, and discovered the larger impact happens gradually over time. In looking at the statistics, physicians were pleasantly surprised to see the benefit was even greater than expected. The software has the potential to make a profound impact on overall quality," says Vice President of Activations Amy Fowler.

Koios DS Breast 2.0 is artificial intelligence software designed around a dataset of over 450,000 breast ultrasound images with known outcomes. It is intended to assist physicians analyzing breast ultrasound images by generating a machine learning-based probability of malignancy. This probability is then checked against and aligned to the lesion's assigned BI-RADS category, the scale physicians use to recommend care pathways.
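
As a rough illustration of aligning a malignancy probability to a BI-RADS category, one could threshold the probability as below. The cutoffs loosely follow the ACR's published likelihood-of-malignancy ranges but are my own simplification; Koios' actual alignment logic is proprietary and not described in this release.

```python
# Hypothetical thresholds for mapping a model's probability of malignancy
# onto a BI-RADS category. Cutoffs are an illustrative simplification of
# the ACR's likelihood ranges, not Koios' actual (undisclosed) logic.

def to_birads(prob_malignant):
    """Map a probability of malignancy to a BI-RADS assessment category."""
    if prob_malignant <= 0.02:
        return 3  # probably benign (roughly <=2% likelihood of malignancy)
    if prob_malignant < 0.95:
        return 4  # suspicious; biopsy usually considered
    return 5  # highly suggestive of malignancy

print(to_birads(0.07))  # 4: suspicious enough to consider biopsy
```

A second-opinion tool of this kind would flag cases where its category disagrees with the radiologist's assigned one, rather than replace the radiologist's judgment.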

"We are seeing the promise of machine learning as a physician's assistant coming to fruition. This will undoubtedly improve quality, outcomes, and patient experiencesand ultimately save lives. Koios DS Breast 2.0 is proving this within several physician groups across the US," says company CFO Graham Anderson.

Koios DS Breast 2.0 can be used in conjunction with and integrated directly into most major viewing workstation platforms and is directly available on the LOGIQ E10, GE Healthcare's next-generation digital ultrasound system that integrates artificial intelligence, cloud connectivity, and advanced algorithms. Artificial intelligence software-generated results can be exported directly into a patient's record. Koios Medical continues to experiment with thyroid ultrasound image data and expects to add to its offering in the next year.

"We could not be more encouraged by the results these physicians are seeing. All our prior testing on historical images have consistently demonstrated high levels of system accuracy. Now, and for the first time ever, physicians using AI software as a second opinion with patients in real-time, within their practice, are delivering on the promise to measurably elevate quality of care. Catching more cancers earlier while reducing avoidable procedures and improving patient experiences is fast becoming a reality," says Koios Medical CEO Chad McClennan.

Discussing future plans during the recent Radiological Society of North America (RSNA) annual meeting in Chicago, McClennan shared, "Several major academic medical centers and community hospitals are utilizing our software and conducting studies into the quality impact for publication. We expect those results to mimic these early clinical findings and further validate the experience of our physician customers both in New York City and across the country, and most importantly, the positive patient impact."

About Koios Medical:

Koios Medical develops medical software to assist physicians interpreting ultrasound images and applies deep machine learning methods to the process of reaching an accurate diagnosis. The FDA-cleared Koios DS platform uses advanced AI algorithms to assist in the early detection of disease while reducing recommendations for biopsy of benign tissue. Patented technology saves physicians time, helps improve patient outcomes, and reduces healthcare costs. Koios Medical is presently focused on the breast and thyroid cancer diagnosis assistance market. Women with dense breast tissue (over 40% in the US) often require an alternative to mammography for diagnosis. Ultrasound is a widely available and effective alternative to mammography with no radiation, and is the standard of care for breast cancer diagnosis. To learn more, please contact us at info@koiosmedical.com or (732) 529-5755.

Learn more about Koios at: koiosmedical.com

View original content to download multimedia:http://www.prnewswire.com/news-releases/new-findings-show-artificial-intelligence-software-improves-breast-cancer-detection-and-physician-accuracy-300978087.html

SOURCE Koios Medical

Read the rest here:

New Findings Show Artificial Intelligence Software Improves Breast Cancer Detection and Physician Accuracy - P&T Community

How Artificial Intelligence Is Reshaping the Future of Stock Picking – InsideHook

AI is coming to Wall Street

Computers aren't new to Wall Street, but for most of their history their function has been primarily quantitative. That, says Forbes' William Baldwin, is about to change thanks to the rise of artificial intelligence.

Created by a trio of former MBA classmates at UC Berkeley, EquBot is the answer to its creators' dream of producing a computer capable of blending precise financial data with the less clear-cut information found in annual reports and news articles.

EquBot has since launched the AI Powered Equity ETF, adding the AI Powered International Equity ETF in 2018. According to Forbes, the system absorbs 1.3 million texts a day, including news, blogs, social media and SEC filings. IBM's Watson, the AI system that proved artificial intelligence can be trained to recognize relationships and patterns previously thought to be the sole domain of the human mind when it bested two human Jeopardy champions in 2011, then digests the language, picking up facts to feed into a knowledge graph of a million nodes.
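The idea of a knowledge graph built from extracted facts can be sketched in a few lines. This is a purely illustrative toy, not EquBot's or Watson's actual data model; the class, the relation names, and the sample facts are all invented for the example:

```python
# Toy knowledge graph: nodes connected by labeled relations, of the kind
# a text-ingestion pipeline might populate from news and filings.
# Entirely hypothetical -- not EquBot's or IBM Watson's real structures.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # adjacency list: node -> list of (relation, neighbor) edges
        self.edges = defaultdict(list)

    def add_fact(self, subject, relation, obj):
        """Store one extracted (subject, relation, object) triple."""
        self.edges[subject].append((relation, obj))

    def neighbors(self, node):
        """All (relation, neighbor) pairs leaving this node."""
        return self.edges[node]

kg = KnowledgeGraph()
kg.add_fact("ACME Corp", "announced", "Q3 earnings beat")
kg.add_fact("ACME Corp", "sector", "semiconductors")
kg.add_fact("Q3 earnings beat", "sentiment", "positive")

print(kg.neighbors("ACME Corp"))
```

At the scale the article cites (a million nodes), a real system would use a graph database rather than an in-memory dictionary, but the triple structure is the same.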

While Forbes says it's too early to tell whether EquBot will succeed long term, there's no doubt AI is going to change things on Wall Street. Donna Dillenberger, an IBM scientist in Yorktown Heights, New York, is reportedly working on a stock market model with millions of nodes, while IBM is already selling AI up and down Wall Street.

Read the full story at Forbes

Read the original post:

How Artificial Intelligence Is Reshaping the Future of Stock Picking - InsideHook

How Internet of Things and Artificial Intelligence pave the way to climate neutrality – EURACTIV

Calls for action on the climate emergency have reached a crescendo with COP25 in Madrid. It is good to see the new Commission reclaiming the EU's leadership in climate technology with the Green Deal presented this Wednesday. But for a faster energy transition, it is not enough just to have more renewables, writes Hanno Schoklitsch.

Hanno Schoklitsch is the CEO and founder of Kaiserwetter Energy Asset Management.

The communication makes the right points when it promises to accelerate the energy transition and clearly states that Artificial Intelligence, the Internet of Things and Cloud Computing can have an important impact on tackling environmental challenges. However, their specific impact on the energy transition is ignored.

For an accelerated energy transition, more renewables alone are not enough. Germany, for example, has an installed renewable capacity of almost 120 gigawatts, whilst peak demand is never higher than 75 gigawatts.

Nevertheless, Germany is far behind its climate targets. In other words, we need more efficiency and accuracy in the energy transition. This is above all a data problem, but one that innovative technology can readily resolve.

The future of energy, driven by IoT and AI

To understand the whole context, we have to recognise that the future of energy will be marked by the radical decentralisation of energy supply, including so-called flexibility options such as storage, load management, power-to-heat and power-to-gas.

Virtual power plants will assume a central role. All those technologies will help to realize a demand-side economic approach. This means that the power supply follows the energy demand.

And for this approach, the Internet of Things (IoT) combined with Artificial Intelligence (AI: machine learning, deep learning) is key. Together they will help to optimise the match between regional generation and regional demand, something that is unthinkable without advanced data intelligence.
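The demand-following logic described above can be illustrated with a toy dispatch model: when regional generation exceeds demand, the surplus charges storage; when it falls short, storage discharges. This is a minimal sketch with invented numbers, not Kaiserwetter's platform; a real system would layer the forecasting and machine learning the article alludes to on top of a rule like this:

```python
# Toy hour-by-hour dispatch: match renewable generation to demand,
# using storage as the flexibility option. All figures are invented.
def dispatch(generation_mw, demand_mw, storage_mwh, storage_cap_mwh):
    """Return (served, new_storage, curtailed, unmet) for one hour."""
    surplus = generation_mw - demand_mw
    if surplus >= 0:
        # surplus hour: charge storage up to its capacity, curtail the rest
        charged = min(surplus, storage_cap_mwh - storage_mwh)
        return demand_mw, storage_mwh + charged, surplus - charged, 0.0
    # deficit hour: discharge storage to cover the shortfall
    discharged = min(-surplus, storage_mwh)
    unmet = -surplus - discharged
    return generation_mw + discharged, storage_mwh - discharged, 0.0, unmet

# One day in four snapshots: midday surplus charges storage,
# the evening deficit drains it.
storage, cap = 0.0, 50.0
for gen, dem in [(120, 75), (110, 70), (40, 60), (20, 55)]:
    served, storage, curtailed, unmet = dispatch(gen, dem, storage, cap)
    print(f"served={served} storage={storage} "
          f"curtailed={curtailed} unmet={unmet}")
```

The Germany figures in the text (120 GW installed, 75 GW peak demand) show why the surplus branch matters: without storage or load shifting, everything above demand is simply curtailed.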

For more than a century, we have lived in a "baseload" world, in which a few large central power plants run the whole year, more or less independently of actual demand. The unintelligent, inefficient usage of dirty energy resources is doubtless the main cause of the climate crisis.

Therefore, the energy transition must be seen as a shift towards renewables and energy intelligence. To fulfil the Paris goals, we need a faster energy transition, for sure, but above all, we need a more intelligent energy system.

The Energy Cloud for Nation: our approach to attaining energy intelligence

While most of the energy value chain will be organized in a decentralised way, data collection and analytics must be organised centrally. There are solutions providing national and international governments and authorities with detailed insights into their energy systems based on real-time production.

Planning of new capacities, including renewable generation, storage, grid expansion and load shifting, gains a new, unprecedented accuracy. Speeding up the energy transition without the risk of false decision-making and failed investments becomes possible.

IoT and AI can help governments and authorities cope with the increasing complexity of the energy transition, an important point especially for countries that aspire to a pioneering role in climate policy but fear the energy transition's ramifications.

Attracting and activating the needed investment capital is one of the major challenges, and risk mitigation and investment certainty must be treated as key. Here, too, IoT and AI can make a crucial difference.

The Green Revolution: also a digitisation revolution

The combination of IoT and AI will be a key driver of a successful, risk-minimised shift to a green economy in general. The inefficient usage of resources was characteristic of the 19th and 20th centuries.

Digitisation will make it easy to open up a new economic mode characterised by an efficient, spatially and temporally accurate match between supply and demand. The energy sector will be the front-runner, followed by other sectors that use critical resources, such as water, agriculture, transportation and so on.

It is based on that reasoning that I am convinced that IoT and AI can make a major contribution to securing the planet for generations to come.

Read the original:

How Internet of Things and Artificial Intelligence pave the way to climate neutrality - EURACTIV