AI Tool Created to Study the Universe, Unlock the Mysteries of Dark Energy – Newsweek

An artificial intelligence tool has been developed to help predict the structure of the universe and aid research into the mysteries of dark energy and dark matter.

Researchers in Japan used two of the world's fastest astrophysical simulation supercomputers, known as ATERUI and ATERUI II, to create an aptly named "Dark Emulator" tool, which is able to ingest vast quantities of data and produce analysis of the universe in seconds.

The AI could play a role in studying the nature of dark energy, which seems to make up a large amount of the universe but remains an enigma.

When observed from a distance, the team noted, the universe appears to consist of clusters of galaxies and massive, seemingly empty voids.

But as noted by NASA, leading models of the universe indicate it is made of entities that cannot be seen. Dark matter is suspected of helping to hold galaxy clusters in place gravitationally, while dark energy is believed to play a role in how the universe is expanding.

According to the researchers responsible for Dark Emulator, the AI tool is able to study possibilities about the "origin of cosmic structures" and how dark matter distribution may have changed over time, using data from some of the top observational surveys conducted about space.

"We built an extraordinarily large database using a supercomputer, which took us three years to finish, but now we can recreate it on a laptop in a matter of seconds," said Associate Prof. Takahiro Nishimichi, of the Yukawa Institute for Theoretical Physics.

"Using this result, I hope we can work our way towards uncovering the greatest mystery of modern physics, which is to uncover what dark energy is. I also think this method we've developed will be useful in other fields such as natural sciences or social sciences."

Nishimichi added: "I feel like there is great potential in data science."

The teams, which included experts from the Kavli Institute for the Physics and Mathematics of the Universe and the National Astronomical Observatory of Japan, said in a media release this week that Dark Emulator had already shown promising results during extensive tests.

In seconds, the tool predicted some of the effects and patterns found in previous research projects, including the Hyper Suprime-Cam Survey and the Sloan Digital Sky Survey. The emulator "learns" from huge quantities of data and "guesses outcomes for new sets of characteristics."

As with all AI tools, data is key. The scientists said the supercomputers have essentially created "hundreds of virtual universes" to play with, and Dark Emulator predicts the outcome of new characteristics based on data, without having to start new simulations every time.
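The workflow described here — run an expensive simulation suite once, then train a fast surrogate that predicts outcomes for new parameter sets — can be sketched with Gaussian process regression, one common emulation technique. The simulation function, parameter names and ranges below are illustrative stand-ins, not the Dark Emulator's actual model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical stand-in for an expensive N-body simulation:
# maps two "cosmological parameters" to one summary statistic.
def expensive_simulation(omega_m, sigma_8):
    return sigma_8**2 * np.exp(-3.0 * (omega_m - 0.3)**2)

rng = np.random.default_rng(0)
# Run the "simulator" at 40 training designs spread over parameter space.
params = rng.uniform([0.2, 0.7], [0.4, 0.9], size=(40, 2))
stats = np.array([expensive_simulation(om, s8) for om, s8 in params])

# Fit the emulator once on the precomputed runs.
emulator = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.1, 0.1]),
                                    normalize_y=True)
emulator.fit(params, stats)

# Predicting for a new parameter set now takes milliseconds,
# not a fresh supercomputer run.
pred, std = emulator.predict([[0.31, 0.82]], return_std=True)
```

Once trained, the surrogate can be queried for any parameter combination inside the training range, with an uncertainty estimate alongside each prediction — which is what lets a laptop stand in for the supercomputer.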

Running simulations through a supercomputer without the AI would take days, researchers noted. Details of the initial study were published in The Astrophysical Journal last October. The team said they hope to input data from upcoming space surveys throughout the next decade.

While work on this one study remains ongoing, there is little argument within the scientific community that understanding dark energy remains a key objective.

"Determining the nature of dark energy [and] its possible history over cosmic time is perhaps the most important quest of astronomy for the next decade and lies at the intersection of cosmology, astrophysics, and fundamental physics," NASA says in a fact-sheet on its website.


How Google And Amazon Are Torpedoing The Retail Industry With Data, AI And Advertising – Forbes


(Note: After an award-winning career in the media business covering the tech industry, Bob Evans was VP of Strategic Communications at SAP in 2011, and Chief Communications Officer at Oracle from 2012 to 2016. He now runs his own firm, Evans Strategic ...


Dynatrace Drives Digital Innovation With AI Virtual Assistant – Forbes


Innovation in the white-hot digital performance management (DPM) market continues to accelerate, and it was clear from this week's Perform conference in Las Vegas that Dynatrace is setting the pace. In fact, Dynatrace's innovations are so cutting-edge ...


RSIP Vision CEO: AI in medical devices is reducing dependence on human skills and improving surgical procedures and outcome – PRNewswire

By now, we all know that more and more AI-driven applications are being introduced to the Radiology market, changing the diagnostic field and improving the diagnostic process. Not far behind, a substantial shift is happening in another critical medical segment: the operating room.

More and more medical robotic companies are taking advantage of new capabilities derived from AI and innovative computer vision algorithms to provide solutions that add accuracy and stability, save time and improve supervision of many daily surgical procedures. Besides improving the surgical process, this AI revolution also shortens recovery time and reduces infection risks by favoring minimally invasive methods, which expose only the minimum surface needed for the robotic arms to operate.

"Our teams are developing state-of-the-art algorithmics solutions that make this revolution so real, giving surgical systems the capability to understand the medical scene, detect and monitor surgical tools during the procedure, offer a crystal clear real-time view of the treated organ, while allowing precise depth measurements and video supervision of the whole procedure indicating irregularities, missing tools or disposables and much more," explained Ron Soferman in a recent interview.

RSIP Vision, a global leader in artificial intelligence (AI) and computer vision technology, is developing and providing these advanced modules to medical surgery vendors, allowing them to provide specific solutions for a variety of surgical operations in the Orthopedics, Aesthetics and General Surgery fields.

RSIP Vision is headquartered in Jerusalem, Israel, and has a U.S. office in San Jose, CA.

More information is available on the company website: http://www.rsipvision.com.

SOURCE RSIP Vision



Brand-New Graphcore Partner Program Built With AI In Mind – CRN

As solution providers up their game to meet customer demand for the hottest technology, the right vendor partner program is key to their success.

CRNtv welcomes vendors to The Partner Program Pitch to share with the channel what makes their channel program unique, starting with an elevator pitch on why solution providers should join their partner program.

In this episode of The PPP, CRNtv's Jennifer Zarate talks with Victoria Rege, director of alliances and strategic partnerships at Graphcore, about how the UK-based chipmaker is bringing the excitement back to hardware.

"For quite some time it's been all about software, and now the VARs (value-added resellers) and resellers, and our channel get the opportunity to learn about our unique hardware and help customers do amazing things with it," said Rege.

Company At A Glance

Location: Bristol, United Kingdom

Number of partners: 15 launch partners

Percentage of sales that go through the channel: Too early to disclose

Products and services in which Graphcore specializes:

- Software: Poplar programming stack

- AI Hardware for Data Centers

"The field of AI is shifting so quickly, and growing and changing, that the opportunity for [partners] to learn on the ground with us as we put [our second generation IPU-M2000 packs] into production is really a great learning opportunity and a great sales opportunity," Rege added.

Partner Program Details

Program Tiers: Elite and Gold

Partner program requirements:

Commitment to joint GTM activities, quarterly business and marketing planning, and review.

Eligible partner-types:

- Resellers

- Original equipment manufacturers (OEMs)

- Software partners

- Storage companies

- VARs

Head over to CRNtv to learn more on why you should partner with Graphcore.


Global AI in Manufacturing Market 2020-2026: COVID-19’s Impact on the Industry and Future Projections – PRNewswire

DUBLIN, Aug. 7, 2020 /PRNewswire/ -- The "Artificial Intelligence in Manufacturing Market by Offering (Hardware, Software, and Services), Technology (Machine Learning, Computer Vision, Context-Aware Computing, and NLP), Application, End-user Industry and Region - Global Forecast to 2026" report has been added to ResearchAndMarkets.com's offering.

The AI in manufacturing market is expected to be valued at USD 1.1 billion in 2020 and to reach USD 16.7 billion by 2026, growing at a CAGR of 57.2% during the forecast period.
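As a quick sanity check, those figures are mutually consistent: compounding USD 1.1 billion at roughly 57% a year over the six years from 2020 to 2026 lands near USD 16.7 billion. A minimal sketch of the standard CAGR formula (variable names are ours, not the report's):

```python
start, end, years = 1.1, 16.7, 6  # USD billions, 2020 -> 2026

# Compound annual growth rate: (end / start)^(1/years) - 1
cagr = (end / start) ** (1 / years) - 1

# Compound back up year by year to confirm the trajectory.
value = start
for _ in range(years):
    value *= 1 + cagr

print(f"Implied CAGR: {cagr:.1%}")  # close to the reported 57.2%
```

The small gap between the implied and quoted rates is the kind of rounding you would expect from the report stating both endpoints to one decimal place.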

The major drivers for the market are the increasing number of large and complex datasets (often known as big data), evolving industrial IoT and automation, improving computing power, and increasing venture capital investments. The major restraint is manufacturers' reluctance to adopt AI-based technologies. The critical challenges include a limited skilled workforce, concerns regarding data privacy, and the significant financial and operational impact of the COVID-19 outbreak on manufacturing.

Machine learning is expected to account for the largest share of the AI in manufacturing market during the forecast period.

Machine learning's ability to collect and handle big data, and its applications in real-time speech translation, robotics, and facial analysis, are fuelling its growth in the manufacturing market. AI comprises various technologies that play a vital role in developing its ecosystem. As AI enables machines to perform activities similar to those performed by human beings, enormous market opportunities have opened.

The predictive maintenance and machinery inspection application of the AI in manufacturing market is projected to hold the largest share during the forecast period.

The predictive maintenance and machinery inspection application held the largest share of the AI in manufacturing market in 2019. Extensive use of computer vision cameras in machinery inspection, adoption of the Industrial Internet of Things (IIoT), and use of big data in the manufacturing industry are driving the growth of this application. The increasing demand for reducing operational costs and machine downtime is also supplementing its growth across industries.

The automobile industry held the largest share of the AI in manufacturing market in 2019.

The extensive use of computer vision cameras in machinery inspection and the adoption of industrial IoT are the factors driving the growth of the AI in manufacturing market in the automobile industry. The application of AI to boost employee productivity, improve quality control, and gain better control over business support functions is supporting the growth of AI in the automobile industry.

Impact of COVID-19 on the AI in the manufacturing market

The market is likely to witness a dip in year-on-year growth in 2020. This is largely attributed to disrupted supply chains and limited adoption of AI in manufacturing in 2020, due to the lockdowns and shifting priorities of different industries. The ongoing COVID-19 pandemic has disrupted economies and is likely to force companies and entire industries to rethink and adapt their global supply chain models.

Many manufacturing companies have halted their production, which has collaterally damaged the supply chain and the industry. This disruption has delayed the adoption of AI-based software and hardware products in the manufacturing sector. Industries have started to restructure their business models for 2020, and many SMEs and large manufacturing plants have halted or postponed new technology upgrades in their factories in order to recover from the losses caused by the lockdown and economic slowdown.

Research Coverage

The AI in manufacturing market has been segmented by offering, technology, application, industry and region. The report also provides a detailed view of the market across four main regions: North America, Europe, APAC, and RoW.

Key Topics Covered

1 Introduction

2 Research Methodology

3 Executive Summary
3.1 COVID-19 Impact Analysis: AI in Manufacturing Market
3.1.1 Pre-COVID-19 Scenario
3.1.2 Post-COVID-19 Scenario

4 Premium Insights
4.1 Attractive Opportunities in AI in Manufacturing Market
4.2 AI in Manufacturing Market, by Offering
4.3 AI in Manufacturing Market, by Technology
4.4 APAC: AI in Manufacturing Market, by Industry and Country
4.5 AI in Manufacturing Market, by Country

5 Market Overview
5.1 Introduction
5.2 Market Dynamics
5.2.1 Drivers
5.2.1.1 Increasingly Large and Complex Dataset
5.2.1.2 Evolving Industrial IoT and Automation
5.2.1.3 Improving Computing Power
5.2.1.4 Increasing Venture Capital Investments
5.2.2 Restraints
5.2.2.1 Reluctance Among Manufacturers to Adopt AI-Based Technologies
5.2.3 Opportunities
5.2.3.1 Growth in Operational Efficiency of Manufacturing Plants
5.2.3.2 Application of AI for Intelligent Business Process
5.2.3.3 Adoption of Automation Technologies to Curb Effects of COVID-19
5.2.4 Challenges
5.2.4.1 Limited Skilled Workforce
5.2.4.2 Concerns Regarding Data Privacy
5.2.4.3 Significant Financial and Operational Impact of COVID-19 Outbreak on Manufacturing
5.3 Value Chain Analysis
5.4 Case Studies
5.4.1 Siemens Gamesa Uses Fujitsu's AI Solution to Accelerate Inspection of Turbine Blades
5.4.2 Volvo Uses Machine Learning-Driven Data Analytics for Predicting Breakdown and Failures
5.4.3 Rolls-Royce Using Microsoft Cortana Intelligence for Predictive Maintenance
5.4.4 Paper Packaging Firm Used Sight Machine's Enterprise Manufacturing Analytics to Improve Production
5.5 Adjacent and Related Markets

6 Artificial Intelligence in Manufacturing Market, by Offering
6.1 Introduction
6.2 Hardware
6.3 Software
6.4 Services
6.5 Impact of COVID-19 on Various Offering of AI Technology for Manufacturing

7 Artificial Intelligence in Manufacturing Market, by Technology
7.1 Introduction
7.2 Machine Learning
7.3 Natural Language Processing
7.4 Context-Aware Computing
7.5 Computer Vision
7.6 Impact of COVID-19 on Various Technologies of AI in Manufacturing

8 Artificial Intelligence in Manufacturing Market, by Application
8.1 Introduction
8.2 Predictive Maintenance and Machinery Inspection
8.3 Material Movement
8.4 Production Planning
8.5 Field Services
8.6 Quality Control
8.7 Cybersecurity
8.8 Industrial Robots
8.9 Reclamation

9 Artificial Intelligence in Manufacturing Market, by Industry
9.1 Introduction
9.2 Automobile
9.3 Energy and Power
9.4 Pharmaceuticals
9.5 Heavy Metals and Machine Manufacturing
9.6 Semiconductors and Electronics
9.7 Food & Beverages
9.8 Others

10 Artificial Intelligence in Manufacturing Market, by Region
10.1 Introduction
10.2 North America
10.3 Europe
10.4 APAC
10.5 RoW

11 Competitive Landscape
11.1 Overview
11.2 Ranking of Players, 2019
11.3 Competitive Leadership Mapping
11.3.1 Visionary Leaders
11.3.2 Dynamic Differentiators
11.3.3 Innovators
11.3.4 Emerging Companies
11.4 Competitive Scenario
11.4.1 Product Launches and Developments
11.4.2 Collaborations, Partnerships, and Agreements
11.4.3 Acquisitions & Joint Ventures

12 Company Profiles
12.1 Key Players
12.1.1 Nvidia
12.1.2 Intel
12.1.3 IBM
12.1.4 Siemens
12.1.5 General Electric (GE) Company
12.1.6 Google
12.1.7 Microsoft
12.1.8 Micron Technology
12.1.9 Amazon Web Services (AWS)
12.1.10 Sight Machine
12.2 Other Companies
12.2.1 Progress Software Corporation (DataRPM)
12.2.2 AIbrain
12.2.3 General Vision
12.2.4 Rockwell Automation
12.2.5 Cisco Systems
12.2.6 Mitsubishi Electric
12.2.7 Oracle
12.2.8 SAP
12.2.9 Vicarious
12.2.10 Ubtech Robotics
12.2.11 Aquant
12.2.12 Bright Machines
12.2.13 Rethink Robotics GmbH
12.2.14 Sparkcognition
12.2.15 Flutura

For more information about this report visit https://www.researchandmarkets.com/r/erpy1o

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com


Top 10 AI-Powered Telecom Companies in World – AiThority

Telecommunications, one of the fastest-growing industries, uses Artificial Intelligence in several business operations to enhance customer experience, improve network reliability and enable predictive maintenance. Telecom companies also implement AI-powered solutions to extract relevant business insights from the massive amounts of data collected from multiple sources. These insights enable them to offer a better customer experience, scale up operations and improve the overall revenue health of the organization.

Gartner predicts the use of approximately 20.4 billion connected devices by 2020. Hence, communication service providers (CSPs) across the globe are recognizing and exploring opportunities to harness the power of Artificial Intelligence.

Telecom companies leverage AI, Machine Learning and Predictive Analytics to collect and analyze huge data sets. Insights from the collected data automatically detect transmission failures, enabling quick corrective action. Automated support services help build transparency and deliver customer delight. AI applications also complement Cloud operations such as IoT, email and database storage.

Many telecom companies across the globe are experimenting with AI and harnessing its capabilities. In this blog, we have listed the top 10 telecom companies leveraging AI.

AT&T is transforming customer experiences using AI and Machine Learning applications. These applications have enabled the company to improve forecasting and capacity planning so that field staff can deliver efficient customer assistance. Optimized schedules help technicians complete more tasks during the day and minimize commute time, thereby maximizing customer satisfaction. AT&T recorded a 7% reduction in miles traveled per dispatch and a 5% increase in productivity.

Additionally, a Machine Learning program has enhanced the company's end-to-end incident management process. The application detects network issues in real time, even before customers get a hint of the problem. In this way, the company can manage 15 million alarms per day. AT&T is exploring the scope of AI and ML to deliver an effective and efficient 5G network experience to its customers.

Sentio, Colt's on-demand AI platform, provides automated service optimization and network restoration. Leveraging the existing IQ Network, the platform supports dynamic real-time quoting, ordering and provisioning of high-bandwidth connectivity between various customer locations: data centers, Cloud service providers and enterprise buildings. Customers gain full control and can flex bandwidth needs in real time. Pricing options are flexible in this model; customers choose plans on an hourly basis or for a fixed-term contract.

Deutsche Telekom has developed a chatbot, Tinka, which works much like a search engine. Continuous updates to its search results help the company provide round-the-clock assistance to customers in Austria. The icon of a young woman with long hair appears on the user's screen, with a box below for typing the search query. Tinka processes about 80% of queries; the unanswered ones are forwarded to a human customer support representative.

Vanda, another chatbot, focuses on NLP, including semantics, customer support, and appropriate behavior. Hub:raum is another digital assistant developed by Deutsche Telekom; this chatbot answers questions about job offers, facilitating personnel recruitment. It is fast, well-informed and available 24/7.

Globe Telecom combines ML with Cloudera to enhance omnichannel customer experience, boost product optimization, and comply with the latest industry standards. Leveraging AI and Predictive Analytics, the company uses insights to make informed business decisions quickly and design target-specific Marketing campaigns.

Aura, Telefónica's AI-powered platform, enables the company to develop a new customer relationship model with the help of personal data and other cognitive services. The platform helps business users reimagine customer interaction, data transparency, personalized and contextualized customer support, round-the-clock assistance and technical support.

TOBi, Vodafone's Machine Learning chatbot, has launched in 11 markets and is quite popular; the company plans to launch it in five more. In Italy, the chatbot has a huge market reach: it has automated two-thirds of the company's customer interactions, enabling human support agents to focus on strategic tasks and resulting in higher productivity and growth across the entire organization.

ZBrain, an AI platform developed by ZeroStack, analyzes private cloud telemetry storage. It is used for improving capacity planning, upgrades, and general management tasks.

Tier 1 telecom companies are implementing an AI-based solution from Aria Networks to automate and optimize supply chain operations. The solution leverages prescriptive analytics and automates design processes for telecom and OTT service providers.

NetFusion, an AI-powered platform by Sedona Systems, optimizes traffic routing and speeds delivery of 5G-enabled services like AR/VR.

Nokia launched AVA, a Machine Learning platform and Cloud network management solution. It improves capacity planning and predicts service degradations on cell sites a week in advance.

AI will be an integral part of the future digital marketplace. Increased adoption of AI in the telecommunications industry is enabling CSPs to manage, maintain and optimize infrastructure and support operations. The use cases mentioned in this blog exhibit the impact of AI on the telecom industry: it has enabled enterprises to deliver enhanced customer experiences and boost business value.

As emerging technologies become more sophisticated and accessible, industry experts expect AI to accelerate growth in the business world. Are you ready to take the plunge?



In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it – The Register

Script kid-ai ... What the malware-writing bot doesn't look like

Feature The magic AI wand has been waved over language translation, and voice and image recognition, and now: computer security.

Antivirus makers want you to believe they are adding artificial intelligence to their products: software that has learned how to catch malware on a device. There are two potential problems with that. Either it's marketing hype and not really AI or it's true, in which case don't forget that such systems can still be hoodwinked.

It's relatively easy to trick machine-learning models, especially in image recognition. Change a few pixels here and there, and an image of a bus can be warped so that the machine thinks it's an ostrich. Now take that thought and extend it to so-called next-gen antivirus.

Enter Endgame, a cyber-security biz based in Virginia, USA, which you may recall popped up at DEF CON this year. It has effectively pitted two machine-learning systems against each other: one trained to detect malware in downloaded files, and the other trained to customize malware so it slips past the aforementioned detector. The aim is to craft software that can manipulate malware into potentially undetectable samples, and then use those variants to improve machine-learning-based scanners, creating a constantly improving antivirus system.

The key thing is recognizing that software classifiers, from image recognition to antivirus, can suck, and that you have to do something about it.

"Machine learning is not a one-stop shop solution for security," said Hyrum Anderson, principal data scientist and researcher at Endgame. He and his colleagues have teamed up with researchers from the University of Virginia to create this aforementioned cat-and-mouse game that breeds better and better malware and learns from it.

"When I tell people what I'm trying to do, it raises eyebrows," Anderson told The Register. "People ask me, 'You're trying to do what now?' But let me explain."

A lot of data is required to train machine learning models. It took ImageNet, which contains tens of millions of pictures split into thousands of categories, to boost image recognition models to the performance possible today.

The goal of the antivirus game is to generate adversarial samples to harden future machine learning models against increasingly stealthy malware.

"To understand how this works, imagine a software agent learning to play the game Breakout," Hyrum says. The classic arcade game is simple. An agent controls a paddle, moving it left or right to hit a ball bouncing back and forth from a brick wall. Every time the ball strikes a brick, it disappears and the agent scores a point. To win the game, the brick wall has to be cleared, and the agent has to continuously bat the ball and prevent it from falling to the bottom of the screen.

Endgame's malware game is somewhat similar, but instead of a ball the bot is dealing with malicious Windows executables. The aim of the game is to fudge the file, changing bytes here and there, in a way that hoodwinks an antivirus engine into thinking the harmful file is safe. The poisonous file slips through, like the ball carving a path through the brick wall in Breakout, and the bot gets a point.

It does this by manipulating the contents, and changing the bytes in the malware, but the resulting data must still be executable and fulfill its purpose after it passes through the AV engine. In other words, the malware-generating agent can't output a corrupted executable that slips past the scanner but, due to deformities introduced in the binary to evade detection, it crashes or doesn't work properly when run.

The virus-cooking bot is rewarded for getting working malicious files past the antivirus engine, so over time it learns the best sequence of moves for changing a malicious file in a way that it still functions and yet tricks the AV engine into thinking the file is friendly.

It's a much more difficult challenge than tricking image recognition models: the file still has to be able to perform the same function and have the same format. "We're trying to mimic what a real adversary could do if they didn't have the source code," says Hyrum.

It's a method of brute force. The agent and the AV engine are trained on 100,000 input malware seeds; after training, 200 malware files are given to the agent to tamper with. These samples were then fed into the AV engine, and about 16 per cent of the evil files dodged the scanner, we're told. That seems low, but imagine crafting a strain of spyware that is downloaded and run a million times: that turns into 160,000 potentially infected systems under your control. Not bad.

After the antivirus engine model was updated and retrained using those 200 computer-customized files, and it was given another fresh 200 samples churned out by the virus-tweaking agent, the evasion rate dropped to half as the scanner got wise to the agent's tricks.
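The evade-harvest-retrain loop described above can be caricatured in a few lines. Everything here is a toy stand-in — a "file" is a list of byte values, the "detector" is a mean-byte threshold, and the mutation is random byte padding. None of this is Endgame's actual agent or AV model, but the shape of the feedback loop is the same: mutate until samples evade, collect the evasive samples, retrain the detector on them, repeat.

```python
import random

random.seed(1)

def make_sample():
    # Toy "malicious file": 64 bytes skewed toward high values.
    return [random.randint(128, 255) for _ in range(64)]

def detect(sample, threshold):
    # Toy "AV engine": flag files whose mean byte value is high.
    return sum(sample) / len(sample) > threshold

def mutate(sample):
    # Toy "evasion action": overwrite a few bytes with low values,
    # standing in for benign-looking edits that keep the file runnable.
    out = sample[:]
    for _ in range(8):
        out[random.randrange(len(out))] = random.randint(0, 64)
    return out

threshold = 150.0
samples = [make_sample() for _ in range(200)]

for round_no in range(3):
    evasive = []
    for s in samples:
        for _ in range(5):          # small mutation budget per file
            if not detect(s, threshold):
                evasive.append(s)   # slipped past the scanner
                break
            s = mutate(s)
    rate = len(evasive) / len(samples)
    print(f"round {round_no}: evasion rate {rate:.0%}")
    # "Retrain" the detector on the evasive samples by tightening
    # its threshold toward their statistics.
    if evasive:
        threshold = min(threshold,
                        min(sum(e) / len(e) for e in evasive) - 1)
```

In the real system both sides are learned models and the mutations must preserve executability, but the structure is the same: an evasion reward on one side, retraining on the harvested escapees on the other.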



AI used to predict Covid-19 patients’ decline before proven to work – STAT

Dozens of hospitals across the country are using an artificial intelligence system created by Epic, the big electronic health record vendor, to predict which Covid-19 patients will become critically ill, even as many are struggling to validate the tool's effectiveness on those with the new disease.

The rapid uptake of Epic's deterioration index is a sign of the challenges imposed by the pandemic: normally, hospitals would take time to test the tool on hundreds of patients, refine the algorithm underlying it, and then adjust care practices to implement it in their clinics.

Covid-19 is not giving them that luxury. They need to be able to intervene to prevent patients from going downhill, or at least make sure a ventilator is available when they do. Because it is a new illness, doctors don't have enough experience to determine who is at highest risk, so they are turning to AI for help, and in some cases are cramming a validation process that often takes months or years into a couple of weeks.


"Nobody has amassed the numbers to do a statistically valid test of the AI," said Mark Pierce, a physician and chief medical informatics officer at Parkview Health, a nine-hospital health system in Indiana and Ohio that is using Epic's tool. "But in times like this that are unprecedented in U.S. health care, you really do the best you can with the numbers you have, and err on the side of patient care."

Epic's index uses machine learning, a type of artificial intelligence, to give clinicians a snapshot of the risks facing each patient. But hospitals are reaching different conclusions about how to apply the tool, which crunches data on patients' vital signs, lab results, and nursing assessments to assign a 0 to 100 score, with a higher score indicating an elevated risk of deterioration. It was already used by hundreds of hospitals before the outbreak to monitor hospitalized patients, and is now being applied to those with Covid-19.


At Parkview, doctors analyzed data on nearly 100 cases and found that 75% of hospitalized patients who received a score in a middle zone between 38 and 55 were eventually transferred to the intensive care unit. In the absence of a more precise measure, clinicians are using that zone to help determine who needs closer monitoring and whether a patient in an outlying facility needs to be transferred to a larger hospital with an ICU.
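As a toy illustration of how such score thresholds translate into triage decisions: the 38-55 middle zone below is from Parkview's analysis, but the function, category labels and out-of-zone actions are our assumptions, not Epic's or Parkview's actual protocol.

```python
def triage(score, low=38, high=55):
    """Map Epic's 0-100 deterioration score to an action.

    The 38-55 middle zone is Parkview's; the action labels
    are illustrative assumptions only.
    """
    if not 0 <= score <= 100:
        raise ValueError("deterioration index is a 0-100 score")
    if score < low:
        return "routine monitoring"
    if score <= high:
        return "closer monitoring / consider ICU transfer"
    return "high risk: escalate"

print(triage(42))  # lands in Parkview's middle zone
```

In practice, as the clinicians quoted in this article stress, such a score supplements rather than replaces clinical judgment.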

Meanwhile, the University of Michigan, which has seen a larger volume of patients due to a cluster of cases in that state, found in an evaluation of 200 patients that the deterioration index is most helpful for those who scored on the margins of the scale.

For about 9% of patients whose scores remained on the low end during the first 48 hours of hospitalization, the health system determined they were unlikely to experience a life-threatening event and that physicians could consider moving them to a field hospital for lower-risk patients. On the opposite end of the spectrum, it found 10% to 12% of patients who scored on the higher end of the scale were much more likely to need ICU care and should be closely monitored. More precise data on the results will be published in coming days, although they have not yet been peer-reviewed.

Clinicians in the Michigan health system have been using the score thresholds established by the research to monitor the condition of patients during rounds and in a command center designed to help manage their care. But clinicians are also considering other factors, such as physical exams, to determine how they should be treated.

"This is not going to replace clinical judgement," said Karandeep Singh, a physician and health informaticist at the University of Michigan who participated in the evaluation of Epic's AI tool. "But it's the best thing we've got right now to help make decisions."

Stanford University has also been testing the deterioration index on Covid-19 patients, but a physician in charge of the work said the health system has not seen enough patients to fully evaluate its performance. "If we do experience a future surge, we hope that the foundation we have built with this work can be quickly adapted," said Ron Li, a clinical informaticist at Stanford.

Executives at Epic said the AI tool, which has been rolled out to monitor hospitalized patients over the past two years, is already being used to support care of Covid-19 patients in dozens of hospitals across the United States. They include Parkview, Confluence Health in Washington state, and ProMedica, a health system that operates in Ohio and Michigan.

"Our approach as Covid was ramping up over the last eight weeks has been to evaluate: does it look very similar to (other respiratory illnesses) from a machine learning perspective, and can we pick up that rapid deterioration?" said Seth Hain, a data scientist and senior vice president of research and development at Epic. "What we found is yes, and the result has been that organizations are rapidly using this model in that context."

Some hospitals that had already adopted the index are simply applying it to Covid-19 patients, while others are seeking to validate its ability to accurately assess patients with the new disease. It remains unclear how the use of the tool is affecting patient outcomes, or whether its scores accurately predict how Covid-19 patients are faring in hospitals. The AI system was initially designed to predict deterioration of hospitalized patients facing a wide array of illnesses. Epic trained and tested the index on more than 100,000 patient encounters at three hospital systems between 2012 and 2016, and found that it could accurately characterize the risks facing patients.

When the coronavirus began spreading in the United States, health systems raced to repurpose existing AI models to help keep tabs on patients and manage the supply of beds, ventilators and other equipment in their hospitals. Researchers have tried to develop AI models from scratch to focus on the unique effects of Covid-19, but many of those tools have struggled with bias and accuracy issues, according to a review published in the BMJ.

The biggest question hospitals face in implementing predictive AI tools, whether to help manage Covid-19 or advanced kidney disease, is how to act on the risk score it provides. Can clinicians take actions that will prevent the deterioration from happening? If not, does it give them enough warning to respond effectively?

In the case of Covid-19, the latter question is the most relevant, because researchers have not yet identified any effective treatments to counteract the effects of the illness. Instead, they are left to deliver supportive care, including mechanical ventilation if patients are no longer able to breathe on their own.

Knowing ahead of time whether mechanical ventilation might be necessary is helpful, because doctors can ensure that an ICU bed and a ventilator or other breathing assistance is available.

Singh, the informaticist at the University of Michigan, said the most difficult part about making predictions based on Epic's system, which calculates a score every 15 minutes, is that patients' ratings tend to bounce up and down in a sawtooth pattern. A change in heart rate could cause the score to suddenly rise or fall. He said his research team found that it was often difficult to detect, or act on, trends in the data.

"Because the score fluctuates from 70 to 30 to 40, we felt like it's hard to use it that way," he said. "A patient who's high risk right now might be low risk in 15 minutes."

In some cases, he said, patients bounced around in the middle zone for days but then suddenly needed to go to the ICU. In others, a patient with a similar trajectory of scores could be managed effectively without need for intensive care.

But Singh said that in about 20% of patients it was possible to identify threshold scores that could indicate whether a patient was likely to decline or recover. In the case of patients likely to decline, the researchers found that the system could give them up to 40 hours of warning before a life-threatening event would occur.

"That's significant lead time to help intervene for a very small percentage of patients," he said. As to whether the system is saving lives, or improving care in comparison to standard nursing practices, Singh said the answers will have to wait for another day. "You would need a trial to validate that question," he said. "The question of whether this is saving lives is unanswerable right now."
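The sawtooth fluctuation Singh describes is a classic argument for smoothing before acting on a trend. A minimal sketch, assuming access to the raw 15-minute scores; the window size and function below are illustrative and are not anything Epic or Michigan actually uses:

```python
from collections import deque

def smoothed(scores, window=4):
    """Rolling mean of the last `window` readings (15-minute cadence,
    so window=4 covers an hour). Purely an illustrative smoother."""
    buf = deque(maxlen=window)  # deque discards the oldest reading automatically
    out = []
    for s in scores:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

# A sawtooth series like the 70 -> 30 -> 40 swings Singh describes:
raw = [70, 30, 40, 65, 35, 45]
print(smoothed(raw))
```

The trade-off is exactly the tension in the article: averaging dampens the 15-minute jitter, but it also delays detection of a genuinely rapid deterioration.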

See the original post:

AI used to predict Covid-19 patients' decline before proven to work - STAT

Now More Than Ever We Should Take Advantage of the Transformational Benefits of AI and ML in Healthcare – Managed Healthcare Executive

As healthcare businesses transform for a post-COVID-19 era, they are embracing digital technologies as essential for outmaneuvering the uncertainty faced by businesses and as building blocks for driving more innovation. Maturing digital technologies such as social, mobile, analytics and cloud (SMAC); emerging technologies such as distributed ledger, artificial intelligence, extended reality and quantum computing (DARQ); and scientific advancements (e.g., CRISPR, materials science) are helping to make innovative breakthroughs a reality.

These technologies are also proving essential in supporting COVID-19 triage efforts. For example, hospitals in China are using artificial intelligence (AI) to scan lungs, which is reducing the burden on healthcare providers and enabling earlier intervention. Hospitals in the United States are also using AI to intercept individuals with COVID-19 symptoms from visiting patients in the hospital.

Because AI and machine learning (ML) definitions can often be confused, it may be best to start by defining our terms.

AI can be defined as a collection of different technologies that can be brought together to enable machines to act with what appears to be human-like levels of intelligence. AI provides the ability for technology to sense, comprehend, act and learn in a way that mimics human intelligence.

ML can be viewed as a subset of AI that provides software, machines and robots the ability to learn without static program instructions.

ML is currently being used across the health industry to generate personalized product recommendations to consumers, identify the root cause of quality problems and fix them, detect healthcare claims fraud, and discover and recommend treatment options to physicians. ML-enabled processes rely on software, systems, robots or other machines which use ML algorithms.

For the healthcare industry, AI and ML represent a set of inter-related technologies that allow machines to perform and help with both administrative and clinical healthcare functions. Unlike legacy algorithm-based tools that merely complement a human, health-focused AI and ML today can truly augment human activity.

The full potential of AI is moving beyond mere automation of simple tasks into a powerful tool enabling collaboration between humans and machines. AI is presenting an opportunity to revolutionize healthcare jobs for the better.

Recent research indicates that in order to maximize the potential of AI and to be digital leaders, healthcare organizations must re-imagine and re-invent their processes and create self-adapting, self-optimizing living processes that use ML algorithms and real-time data to continuously improve.

In fact, there's consensus among healthcare organizations that ML-enabled processes help achieve previously hidden or unobtainable value, and that these processes are finding solutions to previously unsolved business problems.

Despite these key findings, additional research surprisingly finds that only 39% of healthcare organizations report that they have inclusive design or human-centric design principles in place to support human-machine collaboration. Machines themselves will become agents of process change, unlocking new roles and new ways for humans and machines to work together.

In order to tap into the unique strengths of AI, healthcare businesses will need to rely on their people's talent and ability to steward, direct, and refine the technology. Advances in natural language processing and computer vision can help machines and people collaborate and understand one another and their surroundings more effectively. It will be vital to prioritize explainability to help organizations ensure that people understand AI.

Powerful AI capabilities are already delivering profound results across other industries such as retail and automotive. Healthcare organizations now have an opportunity to integrate the new skills needed to enable fluid interactions between human and machines and adapt to the workforce models needed to support these new forms of collaboration.

By embracing the growing adoption of AI, healthcare organizations will soon see the potential benefits and value of AI, such as organizational and workflow improvements that can unleash improvements in cost, quality and access. The AI health market is expected to reach $6.6 billion by 2021; that's a compound annual growth rate of 40%. In just the next couple of years, the health AI market will grow more than 10 times.

AI generally, and ML specifically, gives us technology that can finally perform specialized nonroutine tasks as it learns for itself without explicit human programming, shifting nonclinical judgment tasks away from healthcare enterprise workers.

What will be key to the success of healthcare organizations leveraging AI and ML across every process, piece of data and worker? When AI and ML are effectively added to the operational picture, we will see healthcare systems where machines will take on simple, repetitive tasks so that humans can collaborate on a larger scale and work at a higher cognitive level. AI and ML can foster a powerful combination of strategy, technology and the future of work that will improve both labor productivity and patient care.

Brian Kalis is a managing director of digital health and innovation for Accenture's health business.

See the rest here:

Now More Than Ever We Should Take Advantage of the Transformational Benefits of AI and ML in Healthcare - Managed Healthcare Executive

The Father of Siri Has Grown Wary of the Artificial Intelligence He Helped Create – Willamette Week

As a psychologist, Tom Gruber is in awe of Facebook. As a computer scientist and citizen of the earth, it scares the crap out of him.

Facebook runs experiments on human behavior that psychologists can only dream about, Gruber says. The trials are done on millions of people, a sample size that's impossible in academia. Dozens of times a day, Mark Zuckerberg tweaks his artificial intelligence to see what will keep his 2.5 billion subscribers scrolling through Facebook, and to make them confuse advertising with news so they click on the ads, Gruber says.

"They have the world's largest psychology experiment at their disposal every single day," Gruber says. "They can do experiments that science can't do, at scale."

Gruber, who speaks at TechfestNW this April, is hardly a bomb-thrower. He is a pioneer in artificial intelligence and the co-inventor of Siri, the digital assistant on the iPhone that uses AI and speech recognition to answer billions of questions each year.

Since selling Siri to Apple in 2010, though, Gruber has become one of a small group of technologists who have grown wary of the AI they helped create. He plans to talk about the danger, and the promise, of artificial intelligence at TechfestNW.

Facebook and YouTube have more than 2 billion users each, making them as big as the world's two biggest religions, Christianity and Islam, Gruber says.

"And I would add that even the people who pray to Mecca five times a day, only do it five times a day," Gruber says. "Our millennials check their phones 150 times a day."

Gruber has deep roots in techdom. He earned a bachelor's degree in computer science and psychology from Loyola University in New Orleans, got his Ph.D. in computer and information science from the University of Massachusetts, then did research at Stanford University for five years.

Siri grew out of a Stanford spinoff called SRI International. Gruber consulted at SRI in 2007, and, soon after, he and two others, Dag Kittlaus and Adam Cheyer, spun off newer digital-assistant technology that went beyond SRI's earlier DARPA-funded work. They named the new company Siri, which means "beautiful woman who leads you to victory."

Siri is actually a collection of powerful neural networks: mathematical formulas running on computers that analyze huge amounts of data and learn the patterns within them. Turn a neural net loose on a million samples of spoken language, and it will start to recognize words and their meaning. No longer do programmers have to tell computers what to do, logic step by logic step.

Steve Jobs persuaded Gruber and his partners to sell to Apple in 2010 for some $200 million, according to Wired magazine.

Gruber retired from Apple in 2018 and founded Humanistic AI, a firm that helps companies use machine intelligence to collaborate with humans, not replace or terrorize them.

Unlike some AI doomsayers, including Tesla CEO Elon Musk and podcasting neuroscientist Sam Harris, Gruber thinks AI can be tamed. Right now, it's a science experiment gone wrong. Frankenstein never meant for his monster to become a killer, and Zuckerberg, he says, never intended Facebook to set us at each other's throats, over politics or anything else.

"My argument is that this is an unintended consequence," Gruber says. "We'll give them a pass on being evil geniuses. Maybe some of them are. But let's assume good intentions."

When it comes to Zuckerberg, assuming good intentions is controversial. In July, Facebook agreed to pay a record $5 billion fine to settle charges by the Federal Trade Commission that it abused users' personal information.

So call Gruber an optimist. He thinks the same algorithms that prey on our bad habits can be used to encourage good ones.

Tech companies make excuses for why they can't police their networks, and most involve money. So far, humans are better at sorting lies from truth, and hate from news. That means you have to hire a lot of humans, which is anathema to the tech monopolies. Gruber says they need to suck it up.

"It's like when the auto industry said, 'Air bags are going to put us out of business, so don't impose this onerous thing on us,'" Gruber says. "It's all bullshit."

And there's more. Why not run all these vast experiments on human behavior to improve human life, instead of wrecking it? Why not use AI to change the habits that lead to type 2 diabetes, heart disease, hypertension, and suicide?

"We have weak theories about what makes people tick, and what to do to help them do better things," Gruber says. "But AI has shown that if you want to get 2 billion people addicted to something that's not good for them, you can do it."

AI doesn't know if it's operating for good or evil, Gruber says. Someday it may, but for now, it's up to humans to direct it.

So far, we've been crappy shepherds.

GO: TechfestNW is at Portland State University's Viking Pavilion, 930 SW Hall St., techfestnw.com. Thursday-Friday, April 2-3. Visit the website for tickets.

View original post here:

The Father of Siri Has Grown Wary of the Artificial Intelligence He Helped Create - Willamette Week

Japanese Scientists Develop AI to Show What The Universe is Made Of – Tech Times

A multi-university team of researchers from Japan has created the world's fastest astrophysics simulator, using an artificial intelligence (AI) system to predict the shape of the universe itself. The scientists hope that in doing so, they'll unlock the mysteries surrounding dark matter and dark energy.

Dubbed "Dark Emulator," the AI device parses massive troves of astrophysics data. The device makes use of the facts to build simulations of our universe. It taps into a big database complete of records gleaned from special telescopes that compare current data with what scientists anticipate based on theories surrounding the universe's origin.

The simulation essentially attempts to demonstrate what the universe may look like, including its edges, based on the Big Bang theory and the rapid expansion that has continued ever since.

The lead author on the team's research paper, Takahiro Nishimichi, told Phys.org they built an "extraordinarily" large database using a supercomputer, which took them three years to finish.

"Using this result, I hope we could work toward uncovering the greatest mystery of modern physics, which is to uncover what dark energy is," Nishimichi said.

Scientists would be able to form better theories about how dark matter works by understanding the overall cosmology of the universe. However, nobody has yet proved that dark matter exists through rigorous scientific observation and measurement, and that leaves astrophysicists struggling to produce a unified theory of the universe that encompasses all of the competing ideas in play.


Nishimichi said the method they've developed would be useful in other fields such as the natural sciences or social sciences.

The group from Japan hopes to reconcile theories with the information we're able to glean from Dark Emulator. The AI system doesn't merely analyze data; it learns from every simulation it creates and uses the output to inform the next iteration.

It does this by studying the invisible tendrils between galaxies and performing astronomical (literally) feats of mathematics to create more specific simulations. According to a paper the group published in The Astrophysical Journal, it's extraordinarily accurate.

"The emulator predicts the halo-matter cross-correlation, relevant for galaxy-galaxy lensing, with an accuracy better than 2% and the halo autocorrelation, applicable for galaxy clustering correlation, with an accuracy higher than 4%."

ALSO READ:Theory Explains Dark Matter By Finding A Link Between Quantum Mechanics And General Relativity

Eventually, this technology could help flesh out our understanding of the universe and let scientists determine exactly what dark matter is and how dark energy works. For now, it means filling in some big blanks in our understanding of what the universe actually looks like.

But in the future, a clear understanding of dark energy could lead to myriad far-off science-fiction technologies such as warp drives, time travel, and teleportation. That is, of course, if dark matter even exists.



Read more from the original source:

Japanese Scientists Develop AI to Show What The Universe is Made Of - Tech Times

Emerson and QRI to Deliver AI-Based Analytics to the Global Oil and Gas Industry – Yahoo Finance

Technology agreement will offer solutions to optimize reservoir management

Emerson (NYSE: EMR) and Quantum Reservoir Impact (QRI) announced today they have teamed up to develop and market next-generation applications for artificial intelligence (AI)-based analytics and decision-making tools customized for oil and gas exploration and production (E&P). Together, the two E&P software industry leaders will help oil and gas customers embrace digital transformation technologies and harness vast amounts of data to optimize their reservoir management strategies.

The collaboration combines the power of Emerson's global reach and the world's largest independent E&P software portfolio with QRI's leading industry expertise in applying augmented AI, machine learning and advanced analytics for asset and reservoir management.

"The combination of our technologies and deep E&P expertise in offshore, unconventional and mature fields results in a robust offering that can give customers a significant advantage in the marketplace," said Steve Santy, president for E&P software at Emerson. "Collaborating with QRI enhances our capabilities to give customers meaningful analytics to maximize production and capital efficiency and for better reserve assessment."

As part of the ongoing collaboration, the companies will apply advanced computational technologies to help geoscientists and engineers make actionable and reliable field development decisions quickly, mitigating risks and leading to higher productivity and better performance.

"People, process and data are as important as technology to the success of the solution. Our partnership with Emerson makes for a very powerful team to ensure that our offerings will become a prominent choice in the market," said Dr. Nansen Saleri, QRIs chairman, CEO and co-founder. "As our industry continues to transform, we share Emersons vision of applying state-of-the-art deep learning tools to automate next-generation workflows and offer our customers a rapid means of generating value."

For more information, visit http://www.Emerson.com/EPSoftware.

About Emerson

Emerson (NYSE: EMR), headquartered in St. Louis, Missouri (USA), is a global technology and engineering company providing innovative solutions for customers in industrial, commercial and residential markets. Our Automation Solutions business helps process, hybrid and discrete manufacturers maximize production, protect personnel and the environment while optimizing their energy and operating costs. Our Commercial & Residential Solutions business helps ensure human comfort and health, protect food quality and safety, advance energy efficiency and create sustainable infrastructure. For more information visit Emerson.com.

About QRI

Quantum Reservoir Impact (QRI) was founded in 2007 with a goal to help clients increase production, reserves and capital efficiency using a metrics-based approach. Today, QRI is leading the industry as a value creation advisory company and an artificial intelligence (AI) solutions provider reinventing the way asset teams manage their oil and gas portfolios. Applying Augmented AI and Advanced Analytics to automate complex workflows, SpeedWise technologies give clients the ability to harness vast amounts of data and optimize reservoir management and CapEx/OpEx strategies. For more information visit QRIGroup.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200304005025/en/

Contacts

For Emerson: Denise Clarke, Phone: 512-587-5879, Email: denise.clarke@fleishman.com

Read the original:

Emerson and QRI to Deliver AI-Based Analytics to the Global Oil and Gas Industry - Yahoo Finance

Dave Copps’ stealthy AI startup emerges with $10 million in backing – The Dallas Morning News

Dallas entrepreneur Dave Copps' new venture is coming out of stealth mode with $10 million in backing from the venture funds of two energy giants and an Austin private equity firm.

Copps' company, Worlds, is developing an artificial intelligence platform with applications for the energy industry. It's being spun out of Hypergiant Sensory Sciences, a division of Hypergiant Industries founded by Dallas serial entrepreneur Ben Lamm. Hypergiant designs AI tools to help companies decipher big data.

The funding round led by Austin-based Align Capital brought in the investing arms of oil giants Chevron and Petronas. Hypergiant Industries is also an investor.

Worlds' technology combines artificial intelligence and internet of things capabilities in a 4D environment, giving companies what it describes as active physical analytics. The company is led by Copps and Chris Rohde, who launched and sold two previous machine learning and AI companies. Brainspace, one of their prior startups, sold in 2016 as part of a $2.8 billion deal to turn a large footprint of data centers and several companies into a global cybersecurity firm.

Barbara Burger, president of Chevron Technology Ventures, said the firm's investment in Worlds reflects a belief that digital innovation plays a critical role in accelerating business value at Chevron.

It's also the first disclosed investment by San Francisco-based Piva, a recently launched venture capital firm and Petronas subsidiary that operates independently from the Malaysian company, according to tech website Axios. Piva is looking to invest its first $250 million fund in disruptive energy and industrial startups.

Piva CEO Ricardo Angel wrote in a blog post that he's long been impressed with Copps and Rohde.

"We've seen them build great teams, great companies and great technologies before, and we've been highly looking forward to seeing what they do next," he said.

AI and automation companies like Worlds will play a critical part in industry's future, Angel said.

"We're seeing many corporations in verticals such as oil and gas, manufacturing and logistics investing in hardware solutions, often generating too much data without getting valuable insights," he wrote. "As the number of IoT devices continues to grow, so will the need for AI and machine learning solutions to help manage the massive influx of data these devices will create."

Worlds' funding round continues the fast start this year for Dallas-based startups in attracting growth capital. At least $86 million in funding has flowed into the region in the first five weeks of the year.

North Texas startups and early stage companies attracted more than $753 million in growth capital last year, up nearly 35% from $560 million in 2018, according to data compiled by the National Venture Capital Association and PitchBook.

Link:

Dave Copps' stealthy AI startup emerges with $10 million in backing - The Dallas Morning News

Box deepens partnership with Microsoft and turns its attention to AI and machine learning – TechCrunch

When I spoke to Box CEO Aaron Levie last year at the Boxworks customer conference, I had to ask the obligatory machine learning question. Surely Box was of sufficient size with enough data running through its systems to take advantage of machine learning. All he would say was they were thinking about it.

Today, the company announced a deepening relationship with Microsoft in which Box will take advantage of Redmond's pure go-to-market clout, its data centers (via Box Zones) and, yes, its AI and machine learning algorithms.

And with that, we could start to see Box turning its attention to the next content management transformation. Using machine learning, the company can not only automate metadata creation, a task humans are notoriously bad at, but also take advantage of that metadata to add intelligence across the entire platform.

What does Microsoft get out of this deal? Well, it gets a significant cloud partner in Box and a partnership that works for both companies in spite of the fact that they continue to compete with one another on several levels. "For us it's a great partnership, recognizing Azure leadership and bringing solutions customers are asking for," said Julia White, corporate vice president at Microsoft.

That last point is key. In fact, it's something that customers are demanding, says Box SVP of platform and chief strategy officer Jeetu Patel. "You start with the customer and work backwards," he says. "They want to use Box and they want to work in Azure."

When Box decided to go after the enterprise in 2009, it had a seismic impact on the content management market, dragging the entire industry into the cloud era. The cloud has reached a level of maturity by now, and the next great transformative technology is sweeping over us in the form of AI and machine learning, and Box clearly understands this, according to Patel.

"In the next five years, the way people engage and interact with content will be completely different than the last 25 years, with new ways to engage and extract meaning [from content], and we have a pretty shared commitment [with Microsoft] in how that will change," Patel told TechCrunch.

That said, it's unlikely the company will rely solely on Microsoft's algorithms, says Patel. "Being able to use Azure machine learning is a pretty big incentive [for this partnership] based on the investment [Microsoft has made] there, but we will keep our options open. We want to be the most open cloud content management platform in the world. We will go wherever the innovation goes," he added.

For now, that is taking advantage of Microsoft's technology, and while today's partnership is significant for both companies, it is a relationship that could only be born in the cloud, where interoperability is an imperative.

As CEO Aaron Levie wrote in a blog post announcing the partnership, "The days of closed IT architectures and data lock-in are over." This deal is proof of that.

Continue reading here:

Box deepens partnership with Microsoft and turns its attention to AI and machine learning - TechCrunch

AI Augmentation: The Real Future of Artificial Intelligence – Forbes

While artificial intelligence continues to drive completely autonomous technologies, its real value comes in enhancing the capabilities of the people that use it.

I love Grammarly, the writing correction software from Grammarly, Inc. As a writer, it has proved invaluable to me time and time again, popping up quietly to say that I forgot a comma, got a bit too verbose on a sentence, or have used too many adverbs. I even sprung for the professional version.

Besides endorsing it, I bring Grammarly up for another reason. It is the face of augmentative AI. It is AI because it uses some very sophisticated (and likely recursive) algorithms to determine when grammar is being used improperly or even to provide recommendations for what may be a better way to phrase things. It is augmentative because, rather than completely replacing the need for a writer, it instead is intended to nudge the author in a particular direction, to give them a certain degree of editorial expertise so that they can publish with more confidence or reduce the workload on a copy editor.

This may sound like it eliminates the need for a copy editor, but even that's not really the case. Truth is, many copy editors also use Grammarly, and prefer that their writers do so as well, because they usually prefer the much more subtle task of improving well-wrought prose, rather than the tedious and maddening task of correcting grammatical and spelling errors.

As a journalist I use Cisco's Webex a great deal. Their most recent products have introduced something that I've found to be invaluable: the ability to transcribe audio in real time. Once again, this natural language processing (NLP) capability, long the holy grail of AI, is simply there. It has turned what was once a tedious day-long operation into a comparatively short editing session (no NLP is 100% accurate), meaning that I can spend more time gathering the news than transcribing it.


These examples may seem to be a far cry from the popular vision of AI as a job stealer - from autonomous cars and trucks to systems that will eliminate creatives and decision makers - but they are actually pretty indicative of where artificial intelligence is going. I've written before about Adobe Photoshop's Select Subject feature, which uses a fairly sophisticated AI to select the part of an image that looks like it's the focus of the shot. This is an operation that can be done by hand, but it is slow, tedious and error-prone. With it, Photoshop will select what I would have most of the time, and the rest can then be added relatively easily.

What's evident from these examples is that this kind of augmentative AI can be used to do those parts of a task that were high cost for very little added value. Grammarly doesn't change my voice significantly as a writer. Auto-transcription takes a task that would likely take me several hours to do manually and reduces it to seconds so that I can focus on the content. Photoshop's Select Subject eliminates the need for very painstaking manual selection of an image. It can be argued, in all three cases, that this does eliminate the need for a human being to do these tasks, but let's face it - these are tasks that nobody would prefer to do unless they really had no choice.

These kinds of instances do not, at first blush, look like artificial intelligence. When Microsoft PowerPoint suggests alternative visualizations to the boring old bullet-points slide, the effect is to change behavior by giving a nudge. The program is saying, "This looks like a pyramid, or a timeline, or a set of bucket categorizations. Why don't you use this kind of presentation?"

Over time, you'll notice that certain presentations float to the top more often than others, because you tend to choose them more often - though occasionally the AI mixes things up, because it realizes, through analysing your history with the app, that you may be going overboard with a particular layout and should try others for variety. Grammarly and related services such as Textio start from grammatical rules, but use these products for a while and you'll find that the systems begin making larger and more complex recommendations that match your own writing style.
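That "float to the top, with occasional variety" behavior resembles a classic epsilon-greedy recommender. Here is a minimal, hypothetical sketch - the function name, the layout data, and the epsilon value are all illustrative, not any product's actual logic:

```python
import random

def suggest(usage_counts, epsilon=0.1, rng=random):
    """Mostly suggest the layout the user has chosen most often, but
    occasionally pick another at random so recommendations stay varied.
    `usage_counts` maps layout name -> times the user chose it."""
    if rng.random() < epsilon:
        # Exploration: try any layout, to keep the user from going
        # overboard with a single favorite.
        return rng.choice(sorted(usage_counts))
    # Exploitation: float the user's favorite to the top.
    return max(sorted(usage_counts), key=usage_counts.get)

counts = {"pyramid": 12, "timeline": 3, "buckets": 1}
# With exploration turned off, the most-used layout always wins.
assert suggest(counts, epsilon=0.0) == "pyramid"
```

Raising `epsilon` makes the tool nudge harder toward variety; setting it to zero makes it a pure popularity ranking.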

You see this behavior increasingly in social media platforms, especially in longer-form business messaging such as LinkedIn, where the recommendation engine will often suggest completion content that can be sentence length or longer. Yes, you are saving time, but the AI is also training you even as you train it, putting forth recommendations that sound more professional and that, by extension, teach you to prefer that form of rhetoric - to be more aware of certain grammatical constructions without necessarily knowing exactly what those constructions are.

It is this subtle interplay between human and machine agency that makes AI augmentation so noteworthy. Until comparatively recently, this capability didn't exist in the same way. When people developed applications, they created capabilities - modules - that added functionality, but that functionality was generally bounded. Auto-saving a word processing document, for instance, is not AI; it uses a simple algorithm to detect when changes were made, then issues a save call after activity (such as typing) stops for a specific period of time.
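As a point of contrast, that bounded auto-save behavior fits in a few lines of ordinary, non-AI code. This is a minimal sketch with invented names, not any word processor's actual implementation:

```python
class AutoSaver:
    """Save a document once typing has been idle for `idle_seconds`."""

    def __init__(self, idle_seconds=2.0):
        self.idle_seconds = idle_seconds
        self.last_edit = None
        self.saved = True

    def on_edit(self, now):
        # Any keystroke marks the document dirty and resets the idle clock.
        self.last_edit = now
        self.saved = False

    def tick(self, now):
        # Called periodically; issues a save once activity has stopped
        # for long enough. Returns True when a save call fires.
        if not self.saved and now - self.last_edit >= self.idle_seconds:
            self.saved = True
            return True
        return False

saver = AutoSaver(idle_seconds=2.0)
saver.on_edit(now=0.0)
assert saver.tick(now=1.0) is False   # still typing recently: no save
assert saver.tick(now=2.5) is True    # idle long enough: save fires
assert saver.tick(now=3.0) is False   # already saved: nothing to do
```

No model, no training, no adaptation - the behavior is fully specified in advance, which is exactly what distinguishes it from the learning systems described next.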


However, work with an intelligent word processor long enough and several things will begin to reconfigure themselves to better accommodate your writing style. Word and grammatical recommendations will begin to reflect your specific usage. Soft grammatical rules will be suppressed if you continue to ignore them, the application making the reasonable assumption that you are deliberately ignoring them.

Ironically, this can also mean that if someone else uses your particular trained word processing application, they will likely get frustrated because the recommendations being made do not fit with their writing style, not because they are programmed to follow a given standard, but because they have been trained to facilitate your style instead.

Training is the process of providing input data to a machine learning system in order to establish the parameters for subsequent categorization.

In effect, the use of augmented AI personalizes that AI - the AI becomes a friend and confidant, not just a tool. This isn't some magical, mystical computer science property. Human beings are social creatures, and when we are lonely we tend to anthropomorphize even the inanimate objects around us so that we have someone to talk to. Tom Hanks, in one of his best roles to date (Cast Away), made this obvious in his humanizing of a volleyball as Wilson - an example of what TVtropes.com calls "Companion Cubes," named for a similar anthropomorphized object from the Portal game franchise. Augmented AIs are examples of such companion cubes, ones that are increasingly capable of conversation and remembered history. ("Hey, Siri, do you remember that beach ball in that movie we watched about a castaway who talked to it?" "I think the ball's name was Wilson. Why do you ask?")

Remembered history is actually a pretty good description of how most augmented AIs work. Typically, an AI is trained to pick up anomalous behavior against a specific model, weighing both the type and the magnitude of that anomaly and adjusting the model accordingly. In lexical analysis, this includes the presence of new words or phrases and the absence of previously existing ones (which are in turn kept in some form of index). A factory-reset AI will likely change fairly significantly as a user interacts with it, but over time the model will more closely represent the optimal state for that user.

In some cases, the model itself is also somewhat self-aware, and will deliberately mutate the weightings based upon certain parameters to mix things up a bit. News filters, for instance, will normally gravitate towards a state where certain topics predominate (news about artificial intelligence or sports, for instance, based upon a user's selections), but every so often a filter will pick up something that's three or four hops away along a topic-selection graph, in order to keep the filter from becoming too narrow.
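The "three or four hops along a topic-selection graph" idea can be sketched with a plain breadth-first search. The toy graph and the function name here are hypothetical, purely to illustrate how hop distance bounds what a filter will surface:

```python
from collections import deque

# A toy topic graph: edges link related topics (invented data).
TOPIC_GRAPH = {
    "artificial intelligence": ["machine learning", "robotics"],
    "machine learning": ["artificial intelligence", "statistics"],
    "statistics": ["machine learning", "economics"],
    "economics": ["statistics", "politics"],
    "politics": ["economics"],
    "robotics": ["artificial intelligence"],
}

def topics_within(start, max_hops):
    """Breadth-first search: every topic within `max_hops` of `start`,
    mapped to its hop distance."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        topic = queue.popleft()
        if seen[topic] == max_hops:
            continue  # don't expand beyond the hop budget
        for neighbor in TOPIC_GRAPH.get(topic, []):
            if neighbor not in seen:
                seen[neighbor] = seen[topic] + 1
                queue.append(neighbor)
    return seen

hops = topics_within("artificial intelligence", max_hops=3)
assert hops["economics"] == 3       # three hops away: still reachable
assert "politics" not in hops       # four hops away: outside the filter
```

Widening `max_hops` is the "mix things up" dial: a larger budget occasionally surfaces stories the user's core interests would never touch.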

This, of course, also highlights one of the biggest dangers of augmentative AIs. Such filters create an intrinsic, self-selected bias in the information that gets through. If your personal bias tends to favor a certain political ideology, you get more stories (or recommendations) that favor that bias, and fewer that counter it. This can create a bubble in which what you see reinforces what you believe, while counterexamples never get through the filters. Because this effect is invisible, it may not even be obvious that it is happening, but it is one reason why any AI should periodically nudge itself out of its calculated presets.

Just as a sound mixer can be used to adjust the input weights of various audio signals, so too does machine learning set the weights of various model parameters.

The other issue that besets augmented AIs is the initial design of the model. One of the best analogies for the way most machine learning works is a sound mixer with several dozen (or several thousand) dials that automatically adjust themselves to determine the weights of various inputs. In an ideal world, each dial is hooked up to a variable that is independent of the others (changing one variable doesn't affect any other variable). In reality, it's not unusual for some variables to be somewhat (or even heavily) correlated, which means that if one variable changes, it causes other variables to change automatically, though not necessarily in completely known ways.

For instance, age and political affiliation might not, at first glance, seem obviously correlated, but as it turns out, there are subtle (and not completely linear) correlations that do tend to show up when a large enough sample of the population is taken. In a purely linear model (primarily the domain of high school linear algebra), the variables usually are completely independent, but in real life the coupling between variables can unpredictably become chaotic and non-linear, and one of the big challenges data scientists face is determining whether the model in question is linear within the domain being considered.

Every AI has some kind of model that determines the variables (columns) that are adjusted as learning takes place. If there are too few variables, the model may not fit the data well. If there are too many, the curves being delineated may be too tightly bound to the training data, and if specific variables are correlated in some manner, then small variations in input can explode and create noise in the signal. This means that few models are perfect (and the ones that are perfect are too simple to be useful), and sometimes the best you can do is to keep false positives and false negatives below a certain threshold.
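A tiny example shows why correlated variables are such a problem. When two inputs always move together, wildly different weight settings produce identical predictions, so the data cannot pin down the "right" dials, and small changes in the data can swing the learned weights dramatically. This is an illustrative sketch only:

```python
def predict(w1, w2, x1, x2):
    # A two-dial "mixer": the output is a weighted sum of the inputs.
    return w1 * x1 + w2 * x2

# Perfectly correlated inputs: x2 always equals x1.
x = 3.0

# Very different dial settings give the exact same prediction, so the
# training data cannot tell the model which weights are correct.
assert predict(1.0, 0.0, x, x) == predict(0.0, 1.0, x, x) == predict(100.0, -99.0, x, x)
```

With real (imperfectly) correlated data the ambiguity is not total, but the same instability remains: tiny input perturbations can flip which of the near-equivalent weight settings fits best, which is the "noise in the signal" described above.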

Deep learning AIs are similar, but they essentially have the ability to determine the variables (or axes) that are most orthogonal to one another. However, this comes at a significant cost - it may be far from obvious how to interpret those variables. This "explainability" problem is one of the most vexing facing the field of AI, because if you don't know what a variable actually means, you can't conclusively prove that the model actually works.

Sometimes the patterns that emerge in augmented AI are not the ones we think they are.

A conversation at an artificial intelligence meetup in Seattle illustrated this problem graphically. In one deep analysis of patients at a given hospital, a deep learning model emerged that seemed to perfectly predict, from a person's medical record, whether that patient had cancer. The analysts examining the (OCR-scanned) data were ecstatic, thinking they'd found a foolproof model for cancer detection - until one of the nurses working on the study pointed out that every cancer patient's paper records had a mark written on one corner of the form to let the nurses quickly see who had cancer and who didn't. The AI had picked this up in the analysis, and not surprisingly it accurately predicted that if the mark was in that corner, the patient was sure to have cancer. Once this factor was eliminated, the accuracy rate of the model dropped considerably. (Thanks to Reza Rassool, CTO of RealNetworks, for this particular story.)

Augmentation is likely to be, for some time to come, the way that most people will directly interact with artificial intelligence systems. The effects will be subtle - steadily improving the quality of the digital products that people produce, reducing the number of errors that show up, and reducing the overall time needed to create intellectual works - art, writing, coding, and so forth. At the same time, they raise intriguing ethical questions, such as: if an AI is used to create new content, to what extent is the augmenting technology actually responsible for what's created?

It also raises serious questions about simulacra in the digital world. Daz Studio, a freemium 3D rendering and rigging software product, has recently included an upgrade that analyses portraits and generates 3D models and materials using facial recognition software. While the results are still (mostly) in uncanny-valley territory, such a tool makes it possible to create photographs and animations that can look surprisingly realistic and, in many cases, close enough to a person to be indistinguishable. If you think about actors, models, business people, political figures and others, you can see where these kinds of technologies can be used for political mischief.

This means that augmentation AI is also likely to be the next front of an ethical battleground, as laws, social conventions and ethics begin to catch up with the technology.

There is no question that artificial intelligence is rewriting the rules, for good and bad, and augmentation, the kind of AI that is here today and is becoming increasingly difficult to discern from human-directed software, is a proving ground for how the human/computer divide asserts itself. Pay attention to this space.

#AIAugmentation #machineLearning #deepLearning #creativity #AIethics #theCagleReport


AI Augmentation: The Real Future of Artificial Intelligence - Forbes

Reducing bias in AI-based financial services – Brookings Institution

Artificial intelligence (AI) presents an opportunity to transform how we allocate credit and risk, and to create fairer, more inclusive systems. AI's ability to avoid the traditional credit reporting and scoring system that helps perpetuate existing bias makes it a rare, if not unique, opportunity to alter the status quo. However, AI can easily go in the other direction and exacerbate existing bias, creating cycles that reinforce biased credit allocation while making discrimination in lending even harder to find. Will we unlock the positive, worsen the negative, or maintain the status quo by embracing new technology?

This paper proposes a framework to evaluate the impact of AI in consumer lending. The goal is to incorporate new data and harness AI to expand credit to consumers who need it, on better terms than are currently provided. It builds on our existing system's dual goals of pricing financial services based on the true risk the individual consumer poses while aiming to prevent discrimination (e.g., on the basis of race, gender, DNA, or marital status). This paper also provides a set of potential trade-offs for policymakers, industry and consumer advocates, technologists, and regulators to debate: the tensions inherent in protecting against discrimination in a risk-based pricing system layered on top of a society with centuries of institutional discrimination.

AI is frequently discussed and ill-defined. Within the world of finance, AI represents three distinct concepts: big data, machine learning, and artificial intelligence itself. Each of these has recently become feasible with advances in data generation, collection, usage, computing power, and programming. Advances in data generation are staggering: "90% of the world's data today were generated in the past two years," IBM boldly stated. To set the parameters of this discussion, below I briefly define each key term with respect to lending.

Big data fosters the inclusion of new and large-scale information not generally present in existing financial models. In consumer credit, for example, this means information beyond what is in the typical credit-reporting/credit-scoring model (often referred to by the name of the most common credit-scoring system, FICO). It can include data points such as payment of rent and utility bills, personal habits such as whether you shop at Target or Whole Foods and own a Mac or a PC, and social media data.

Machine learning (ML) occurs when computers optimize over data (standard and/or big data) based on relationships they find, without the traditional, more prescriptive algorithm. ML can surface relationships that a person would never think to test: does the type of yogurt you eat correlate with your likelihood of paying back a loan? Whether these relationships have causal properties or are only proxies for other correlated factors are critical questions in determining the legality and ethics of using ML. However, they are not relevant to the machine in solving the equation.

What constitutes true AI is still being debated, but for purposes of understanding its impact on the allocation of credit and risk, let's use the term AI to mean the inclusion of big data, machine learning, and the next step, when ML becomes AI. One bank executive helpfully defined AI by contrasting it with the status quo: "There's a significant difference between AI, which to me denotes machine learning and machines moving forward on their own, versus auto-decisioning, which is using data within the context of a managed decision algorithm."

America's current legal and regulatory structure to protect against discrimination and enforce fair lending is not well equipped to handle AI. The foundation is a set of laws from the 1960s and 1970s (the Equal Credit Opportunity Act of 1974, the Truth in Lending Act of 1968, the Fair Housing Act of 1968, etc.) that were written for a time with almost exactly the opposite problems we face today: too few sources of standardized information on which to base decisions, and too little credit being made available. Those conditions allowed rampant discrimination by loan officers, who could simply deny people because they didn't look "creditworthy."

Today, we face an overabundance of poor-quality credit (high interest rates, fees, abusive debt traps) and concerns over the usage of too many sources of data that can hide as proxies for illegal discrimination. The law makes it illegal to use gender to determine credit eligibility or pricing, but countless proxies for gender exist from the type of deodorant you buy to the movies you watch.

Americas current legal and regulatory structure to protect against discrimination and enforce fair lending is not well equipped to handle AI.

The key concept used to police discrimination is that of disparate impact. For a deep dive into how disparate impact works with AI, you can read my previous work on this topic. For this article, it is important to know that disparate impact is defined by the Consumer Financial Protection Bureau as occurring when "a creditor employs facially neutral policies or practices that have an adverse effect or impact on a member of a protected class unless it meets a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact."

The second half of the definition allows lenders to use metrics that may be correlated with protected-class attributes so long as they meet a "legitimate business need," and there are no other ways to meet that need that have less disparate impact. A set of existing metrics, including income, credit scores (FICO), and data used by the credit reporting bureaus, has been deemed acceptable despite having substantial correlation with race, gender, and other protected classes.
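One common heuristic for the "adverse effect" half of that test is the four-fifths rule, borrowed from EEOC employment-selection guidelines: flag a practice when a protected group's approval rate falls below 80% of the control group's. The sketch below uses invented figures, and lending regulators do not apply a single fixed threshold; it is only to make the arithmetic concrete:

```python
def approval_rate(approved, applicants):
    return approved / applicants

def disparate_impact_ratio(protected_rate, control_rate):
    """Ratio of the protected group's approval rate to the control
    group's. The four-fifths rule (an EEOC employment-law heuristic,
    often borrowed as a rough screen) flags ratios below 0.8."""
    return protected_rate / control_rate

# Hypothetical figures for illustration only.
protected = approval_rate(approved=300, applicants=1000)  # 30%
control = approval_rate(approved=500, applicants=1000)    # 50%

ratio = disparate_impact_ratio(protected, control)
assert abs(ratio - 0.6) < 1e-9   # well below the 0.8 heuristic
assert ratio < 0.8               # this policy would be flagged for review
```

A flagged ratio does not end the analysis - under the definition above, the lender can still justify the practice by showing a legitimate business need with no less-disparate alternative.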

For example, consider how deeply correlated existing FICO credit scores are with race. To start, it is telling how little data is made publicly available on how these scores vary by race. The credit bureau Experian is eager to publicize one of its versions of FICO scores by people's age, income, and even what state or city they live in, but not by race. However, federal law requires lenders to collect data on race for home mortgage applications, so we do have access to some data. As shown in the figure below, the differences are stark.

Among people trying to buy a home, generally a wealthier and older subset of Americans, white homebuyers have an average credit score 57 points higher than Black homebuyers and 33 points higher than Hispanic homebuyers. The distribution of credit scores is also sharply unequal: More than 1 in 5 Black individuals have FICOs below 620, as do 1 in 9 among the Hispanic community, while the same is true for only 1 out of every 19 white people. Higher credit scores allow borrowers to access different types of loans and at lower interest rates. One suspects the gaps are even broader beyond those trying to buy a home.

If FICO were invented today, would it satisfy a disparate impact test? The conclusion of Rice and Swesnik in their law review article was clear: "Our current credit-scoring systems have a disparate impact on people and communities of color." The question is moot, however, because not only is FICO grandfathered, it has also become one of the most important factors used by the financial ecosystem. I have described FICO as the out-of-tune oboe to which the rest of the financial orchestra tunes.

New data and algorithms are not grandfathered and are subject to the disparate impact test. The result is a double standard whereby new technology is often held to a higher standard to prevent bias than existing methods. This has the effect of tilting the field against new data and methodologies, reinforcing the existing system.

Explainability is another core tenet of our existing fair lending system that may work against AI adoption. Lenders are required to tell consumers why they were denied. Explaining the rationale provides a paper trail to hold lenders accountable should they be engaging in discrimination. It also provides the consumer with information that allows them to correct their behavior and improve their chances for credit. However, an AI's method of making decisions may lack explainability. As Federal Reserve Governor Lael Brainard described the problem: "Depending on what algorithms are used, it is possible that no one, including the algorithm's creators, can easily explain why the model generated the results that it did." To move forward and unlock AI's potential, we need a new conceptual framework.

To start, imagine a trade-off between accuracy (represented on the y-axis) and bias (represented on the x-axis). The first key insight is that the current system sits at the intersection of the axes we are trading off: the graph's origin. Any potential change needs to be measured against the status quo, not against an ideal world of no bias or complete accuracy. This forces policymakers to consider whether the adoption of a new system that contains bias, but less than the current system, is an advance. It may be difficult to embrace an inherently biased framework, but it is important to acknowledge that the status quo is already highly biased. Thus, rejecting new technology because it contains some level of bias does not mean we are protecting the system against bias. To the contrary, it may mean that we are allowing a more biased system to perpetuate.
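The framework can be made concrete by placing any candidate model in a quadrant relative to the status quo at the origin. In this sketch the quadrant labels follow the descriptions in this article, and the sign conventions and delta values are hypothetical:

```python
def quadrant(d_accuracy, d_fairness):
    """Place a candidate model in the accuracy/bias framework, with the
    status quo at the origin: d_accuracy > 0 means more accurate than
    today, d_fairness > 0 means less biased than today (assumed signs)."""
    if d_accuracy > 0 and d_fairness > 0:
        return "I"    # more accurate, less biased: the win-win
    if d_accuracy > 0:
        return "II"   # more accurate, but more biased
    if d_fairness > 0:
        return "IV"   # less biased, but less accurate
    return "III"      # worse on both counts

# Hypothetical measured changes versus the status quo.
assert quadrant(0.05, 0.10) == "I"
assert quadrant(0.05, -0.10) == "II"
assert quadrant(-0.05, 0.10) == "IV"
assert quadrant(-0.05, -0.10) == "III"
```

The hard policy questions live off the axes: quadrant III should be rejected outright, quadrant I embraced (with caveats discussed below), and quadrants II and IV force an explicit choice about how much of one value to trade for the other.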

As shown in the figure above, the bottom left corner (quadrant III) is one where AI results in a system that is more discriminatory and less predictive. Regulation and commercial incentives should work together against this outcome. It may be difficult to imagine incorporating new technology that reduces accuracy, but it is not inconceivable, particularly given the incentives in industry to prioritize decision-making and loan-generation speed over actual loan performance (as in the subprime mortgage crisis). Another way policy could move in this direction is through the introduction of inaccurate data that may fool an AI into thinking it has increased accuracy when it has not. The existing credit reporting system is rife with errors: 1 out of every 5 people may have a material error on their credit report. New errors occur frequently; consider the recent mistake by one student loan servicer that incorrectly reported 4.8 million Americans as being late on paying their student loans when in fact the government had suspended payments as part of COVID-19 relief.

The data used in the real world are not as pure as those used in model testing. Market incentives alone are not enough to produce perfect accuracy; they can even promote inaccuracy, given the cost of correcting data and the demand for speed and quantity. As one study from the Federal Reserve Bank of St. Louis found, "Credit score has not acted as a predictor of either true risk of default of subprime mortgage loans or of the subprime mortgage crisis." Whatever the cause, regulators, industry, and consumer advocates ought to be aligned against the adoption of AI that moves in this direction.

The top right (quadrant I) represents the incorporation of AI that increases accuracy and reduces bias. At first glance, this should be a win-win. Industry allocates credit more accurately, increasing efficiency. Consumers enjoy increased credit availability on more accurate terms and with less bias than the existing status quo. This optimistic scenario is quite possible given that a significant source of existing bias in lending stems from the information used. As the Bank Policy Institute pointed out in its discussion draft on the promises of AI: "This increased accuracy will benefit borrowers who currently face obstacles obtaining low-cost bank credit under conventional underwriting approaches."

One prominent example of a win-win system is the use of cash-flow underwriting. This new form of underwriting uses an applicant's actual bank balance over some time frame (often one year), as opposed to the current FICO-based model, which relies heavily on whether a person had credit in the past and, if so, whether they were ever in delinquency or default. Preliminary analysis by FinReg Labs shows this underwriting system outperforms traditional FICO on its own, and when combined with FICO it is even more predictive.

Cash-flow analysis does have some level of bias as income and wealth are correlated with race, gender, and other protected classes. However, because income and wealth are acceptable existing factors, the current fair-lending system should have little problem allowing a smarter use of that information. Ironically, this new technology meets the test because it uses data that is already grandfathered.

That is not the case for other AI advancements. New AI may increase credit access on more affordable terms than the current system provides and still not be allowable. Just because AI has produced a system that is less discriminatory does not mean it passes fair lending rules. There is no legal standard that allows illegal discrimination in lending simply because it is less biased than prior discriminatory practices. As a 2016 Treasury Department study concluded, while "data-driven algorithms may expedite credit assessments and reduce costs, they also carry the risk of disparate impact in credit outcomes and the potential for fair lending violations."

For example, consider an AI that is able, with a good degree of accuracy, to detect a decline in a person's health - say, through spending patterns (doctor's co-pays), internet searches (cancer treatment), and joining new Facebook groups (living with cancer). Medical problems are a strong indicator of future financial distress. Do we want a society where, if you get sick - or if a computer algorithm thinks you are ill - your terms of credit worsen? That may be a less biased system than we currently have, but not one that policymakers and the public would support. All of a sudden, what seems like a win-win may not actually be so desirable.

AI that increases accuracy but introduces more bias gets a lot of attention, deservedly so. This scenario, represented in the top left (quadrant II) of this framework, can range from the introduction of data that are clear proxies for protected classes (whether you watch Lifetime or BET on TV) to information or techniques that, at first glance, do not seem biased but actually are. There are strong reasons to believe that AI will naturally find proxies for race, given that there are large income and wealth gaps between races. As Daniel Schwartz put it in his article on AI and proxy discrimination: "Unintentional proxy discrimination by AIs is virtually inevitable whenever the law seeks to prohibit discrimination on the basis of traits containing predictive information that cannot be captured more directly within the model by non-suspect data."

Proxy discrimination by AI is even more concerning because the machines are likely to uncover proxies that people had not previously considered.

Proxy discrimination by AI is even more concerning because the machines are likely to uncover proxies that people had not previously considered. Think about the potential to use whether a person owns a Mac or a PC - a factor that is correlated both with race and with whether people pay back loans, even controlling for race.

Duke Professor Manju Puri and co-authors were able to build a model using non-standard data that found substantial predictive power for whether a loan was repaid in whether that person's email address contained their name. Initially, that may seem like a non-discriminatory variable within a person's control. However, economists Marianne Bertrand and Sendhil Mullainathan have shown that African Americans with names heavily associated with their race face substantial discrimination compared with race-blind identification. Hence, it is quite possible that there is a disparate impact in using what seems like an innocuous variable such as whether your name is part of your email address.

The question for policymakers is how much to prioritize accuracy at the cost of bias against protected classes. As a matter of principle, I would argue that our starting point is a heavily biased system, and we should not tolerate the introduction of increased bias. There is a slippery-slope question of what to do if an AI produced substantial increases in accuracy with the introduction of only slightly more bias. After all, our current system does a surprisingly poor job of allocating basic credit and tolerates a substantially large amount of bias.

Industry is likely to advocate for the inclusion of this type of AI, while consumer advocates are likely to oppose its introduction. Current law is inconsistent in its application. Certain groups of people are afforded strong anti-discrimination protection for certain financial products, but this varies across products. Take gender, for example. It is blatantly illegal under fair lending laws to use gender or any proxy for gender in allocating credit. However, gender is a permitted factor in pricing auto insurance in most states. In fact, for brand-new drivers, gender may be the single biggest factor used in determining price absent any driving record. America lacks a uniform set of rules on what constitutes discrimination and what types of attributes cannot be discriminated against. The lack of uniformity is compounded by the division of responsibility between federal and state governments and, within government, between the regulatory and judicial systems for detecting and punishing violations.

The final set of trade-offs involves increases in fairness but reductions in accuracy (quadrant IV, in the bottom right). An example is an AI with the ability to use information about a person's genome to determine their risk of cancer. This type of genetic profiling would improve accuracy in pricing certain types of insurance but violates norms of fairness. In this instance, policymakers decided that the use of that information is not acceptable and have made it illegal. Returning to the role of gender, some states have restricted the use of gender in car insurance. California most recently joined the list of states no longer allowing gender, which means that pricing will be fairer but possibly less accurate.

Industry pressures tend to fight against these types of restrictions and press for greater accuracy. Societal norms of fairness may demand trade-offs that diminish accuracy in order to protect against bias. These trade-offs are best handled by policymakers before the widespread introduction of the information in question, as was the case with genetic data. Restricting the use of this information, however, does not make the problem go away. To the contrary, AI's ability to uncover hidden proxies for that data may exacerbate problems wherever society attempts to restrict data usage on equity grounds. Problems that appear solved by prohibitions then simply migrate into the algorithmic world, where they reappear.

The underlying takeaway for this quadrant is that social movements to expand protection and reduce discrimination are likely to become more difficult as AIs find workarounds. As long as there are substantial differences in observed outcomes, machines will uncover those differing outcomes using new sets of variables that may contain new information or may simply be statistically effective proxies for protected classes.

The status quo is not something society should uphold as nirvana. Our current financial system suffers not only from centuries of bias, but also from systems that are themselves not nearly as predictive as often claimed. The data explosion, coupled with the significant growth in ML and AI, offers a tremendous opportunity to rectify substantial problems in the current system. Existing anti-discrimination frameworks are ill-suited to this opportunity. Holding new technology to a higher standard than the status quo results in an unstated deference to the already-biased current system. However, simply opening the floodgates under a rule of "can you do better than today" opens a Pandora's box of new problems.

America's fractured regulatory system, with differing roles and responsibilities across financial products and levels of government, only serves to make difficult problems even harder. Lacking uniform rules and coherent frameworks, technological adoption will likely be slower among existing entities, setting up even greater opportunities for new entrants. A broader conversation regarding how much bias we are willing to tolerate for the sake of improvement over the status quo would benefit all parties. That requires the creation of more political space for sides to engage in a difficult and honest conversation. The current political moment is ill-suited for that conversation, but I suspect that AI advancement will not wait until America is more ready to confront these problems.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative, and Apple, Facebook, and IBM provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Red Clay to review decision ending AI du Pont season – The News Journal

Gerald Wilmore, whose son Jaison Wilmore is a senior on the A.I. du Pont High School basketball team, asks questions Thursday about the incident at Delaware Military Academy on Feb. 16 that led to a decision to pull the team from participating in the upcoming DIAA Boys Basketball Tournament. (Photo: Jennifer Corbett, The News Journal)

Red Clay Consolidated School District will review a decision by the A.I. du Pont High School principal to ban the boys basketball team from participating in the upcoming DIAA Boys Basketball Tournament.

The review was announced Thursday following a rally at the Greenville school by parents and supporters opposed to Principal Kevin Palladinetti's decision to end the team's season. Palladinetti made the decision on Tuesday based on an incident following the Tigers' 58-46 loss at the Delaware Military Academy on Feb. 16.

A group of about 20, which included parents of players, at least three Wilmington City Council members, New Castle County District 10 Councilman Jea Street and other supporters, assembled in the parking lot outside the school at 10 a.m. Thursday. After speaking with reporters for 35 minutes, the group was invited into a classroom by Red Clay spokesperson Pati Nash.

The group demanded to speak to Palladinetti, who arrived 20 minutes later and gave his timeline of events and the reasoning behind his decision.

Several parents of A.I. du Pont players have alleged that racial slurs were spoken by DMA players, fans and students during the game. But Palladinetti said A.I. boys basketball coach Tom Tabb, Assistant Principal Damon Saunders (both of whom are black) and the other A.I. assistant coaches did not report hearing any racial slurs.

DMA's commandant, Anthony Pullella, responds to accusations that his students provoked an incident between A.I. players and fans and DMA fans during a basketball game last week. JOHN J. JANKOWSKI JR./SPECIAL TO THE NEWS JOURNAL

DMA Commandant Anthony Pullella was at the game and said he did not hear any racial comments. Michael Ryan, the athletic director, said DMA officials conducted their own investigation, questioning parents, players, coaches and fans. He said no evidence was uncovered about any racial comment being used.

"There were no, absolutely zero reasons for us to discipline our kids," Ryan said, "or there was absolutely no issues brought up that our kids were in the wrong in any way."

Ryan said no official footage of the incident exists because a video camera that was recording the game turned off during the fourth quarter.

When asked whether a conversation about race should occur to ease tensions between predominantly white DMA and A.I. du Pont, which is 34 percent white, Pullella said, "We don't think [the incident] had anything to do with race at all."

A.I. du Pont High School Principal Kevin Palladinetti tries to answer questions from parents and political leaders Thursday about an incident after the team's 58-46 loss at Delaware Military Academy on Feb. 16. (Photo: Jennifer Corbett, The News Journal)

Palladinetti said he was in his office at 6:45 p.m. Feb. 16 when a person attending the game at DMA called him. The person said, "Kevin, you're going to need to be over here. Something is happening with your team," Palladinetti said.

Palladinetti texted Saunders, who was at the game. Saunders called the principal at about 7:30.

"His response to me was, 'It's bad. It's a bad situation right now,'" Palladinetti said. "And I said, 'OK.'"

Palladinetti said he spoke with Tabb and Ryan that night, but no decision was made regarding the team's season at that time.

Tabb said his team expected to defeat DMA easily. The Tigers came into the game with an 11-7 record, while DMA was 6-11.

"We were playing a team that we thought we were better than, that we knew we were better than," Tabb said. "The kids said, 'We should be able to score 100 against them.' So we were a little cocky going into the game."

But the Seahawks upset A.I. 58-46. Game officials called the majority of the fouls against the Tigers, as DMA shot 35 free throws and A.I. shot only 10.

"The ball didn't bounce our way during the game," Tabb said. "[The players] got frustrated. I could see embarrassment setting in."

With 40 seconds left, an A.I. player was given a technical foul. At that point, Tabb said he told the players on the bench to skip the customary postgame handshake line. Instead, the coach told the team he would shake hands with the DMA team, and the players were to remain behind him and follow him off the court as a group.

"I thought I was doing what was right, in the best interests of the kids at that moment," Tabb said. "When the game was over, a player started to walk and then sprinted, which caused a chain reaction where the other players followed, the coaches followed, parents followed, some DMA parents followed."

Luis Ortega tries to ask A.I. du Pont High School Principal Kevin Palladinetti why his son, who is on the boys basketball team, will be kept from participating in the upcoming DIAA Boys Basketball Tournament. (Photo: Jennifer Corbett, The News Journal)

Officials from both schools said the A.I. players ran toward a stairwell leading to the second level of the gymnasium, where DMA students and fans had been watching the game.

"The entire team takes off, makes a beeline for our mezzanine," Pullella said.

Pullella said he and two DMA parents blocked the group's access to the mezzanine. "On the stairwell landing, it was pandemonium," Pullella said.

Meanwhile, Pullella said another teacher directed DMA students out of the mezzanine through an emergency door. DMA athletic director Ryan took an elevator to the second floor and joined Pullella and the parents. Pullella said the confrontation ended a couple of minutes later when approximately 10 Delaware State Police and New Castle County Police officers arrived.

In a statement, state police Master Cpl. Jeffrey Hale said police responded at 6:46 p.m.

"It was learned that members of the A.I. du Pont men's basketball team attempted to run up toward the student body of DMA following the game," Hale said. "They were prevented from doing so by staff members. The situation was quickly brought under control. No fights. No injuries. No charges. That is all the information I have."

Later that night, Palladinetti said he decided to forfeit A.I.'s final regular-season game, a home game against Smyrna scheduled for Tuesday. But he held off on making a decision about the team's state tournament participation. Palladinetti met with Tabb at the school at 7 a.m. Friday.

"When the game ended, and it's important to know that this is the part that I care about, the entire team vacated the court," Palladinetti said. "There is a player who many are reporting was the first to lead the charge off the court. And that happened within about three seconds of the buzzer sounding. Mr. Saunders attempted to stop one player from leaving the court. He was able to shimmy past Mr. Saunders, and then the rest of the team just followed suit and bolted into the hallway, then stairwell, to ultimately get upstairs."

"When I heard that both Coach Tabb and Mr. Saunders had a plan in place to try to quell any tensions that were brewing on the court, and for our team to vacate in that manner, without an assistant coach, without anybody walking them off, to run in the manner in which they did, it created alarm; it created panic."

"Their actions then sparked a significant event, in my opinion," Palladinetti added. "I would have thought that a team in the 19th game of the season would have been a little more disciplined, would have respected the coach's request to stay put. Had they not run off the court in the manner in which they did, we wouldn't be having this conversation today."

A.I. du Pont High School basketball coach Tom Tabb recalls the events from his team's 58-46 loss at Delaware Military Academy on Feb. 16. (Photo: Jennifer Corbett, The News Journal)

School was not in session on Friday or Monday. Palladinetti met with the team on Tuesday, when he said players were encouraged to complete written statements. He said only one prepared a statement.

After meeting with the team, Palladinetti decided to cancel the rest of the season. He acknowledged that statements have been received from players, fans and parents following his decision. Parents cited the A.I. du Pont code of student conduct, which they said requires administrators to hold a conference with students and their parents before taking any disciplinary action.

"To make the decision to cancel senior night and any game without an appropriate investigation, you were supposed to call a conference with the students and the parents," said Jennifer Field, mother of A.I. player Jude Gulotti. "There were a lot of different angles that were seen. We were all over that place, and a lot of things happened that you don't know about. I think it was inappropriate to make any decision."

Jennifer Field, whose son is on the A.I. du Pont High School basketball team, asks administrative officials why her son won't be participating in the upcoming DIAA Boys Basketball Tournament. (Photo: Jennifer Corbett, The News Journal)

Gerald Wilmore, father of player Jason Wilmore, criticized Tabb for not talking with the team since the incident. Devon Hynson, executive director of Education Voices Inc., a student advocacy organization, said mistakes were made on all sides.

"I just think it's completely racist," Hynson said, addressing Tabb and Palladinetti. "Because I think it's been made very clear that you didn't follow the [student] code of conduct. We're all here because we all make mistakes. What we're saying is, [the players] made a mistake, you guys made a mistake. Reinstate the boys back on the team. At the end of the day, I think that you owe them that."

Tommie Neubauer, executive director of the Delaware Interscholastic Athletic Association, said a committee will meet at 9 a.m. Friday to determine the seedings and pairings for the DIAA Boys Basketball Tournament.

The tournament is scheduled to begin Wednesday with eight games, all starting at 7 p.m. Neubauer said A.I. du Pont officials need to let DIAA know whether the school will participate or not by 9 a.m. Friday.

"It's not like we are keeping them out," Neubauer said. "Right now, we're going to seed the full tournament as is. It's an A.I. du Pont High School/Red Clay School District question right now."

A.I. du Pont was 8-1 midway through the season, but, counting the forfeit loss to Smyrna, lost eight of its last 11 games to finish the regular season at 11-9. Still, the Tigers are expected to have earned enough points to qualify for the 24-team tournament.

Midway through the contentious, 50-minute classroom meeting with Palladinetti, New Castle County Councilman Street erupted, threatening legal action if the principal's decision is not overturned.

"If y'all think you're going to get away with this without a legal battle, you are sadly mistaken," Street said. "You do what you want to do, and I'm going to tell you what I have to do. Because by any legal means necessary, and I'm asking the City Council members present right now, that we know full well that this was provoked in a racial manner. I'm asking you if this is not overturned, for the city to file an injunction in Chancery Court prohibiting the state from going forward with the tournament until such time as this is resolved in a fair and appropriate manner."

New Castle County Councilman Jea Street demands further investigation into the events following A.I. du Pont High School's 58-46 loss at Delaware Military Academy on Feb. 16.(Photo: Jennifer Corbett, The News Journal )

Street then left the room to applause, as Wilmington City Council President Hanifa Shabazz and council members VaShun Turner and Ernest Trippi Congo II signaled their support for his statement.

Tabb, who said he told Palladinetti before this incident that he would be stepping down after 10 years as the Tigers coach, said he was disappointed that the players didn't follow his instructions.

"I know I'm the fall guy," Tabb said. "I get thrown under the bus for this, and I'm cool with that. I'm perfectly fine with that."

Palladinetti said the decision has "consumed my life" since Feb. 16.

"It pains me to think that a decision I made has brought us to this arena today," the principal said. "I've lost sleep over this, as I know many of you have. It's not something that has been taken lightly, and it's not something that has just been dismissed at any point."

Contact Brad Myers at bmyers@delawareonline.com, or on Twitter @BradMyersTNJ. Contact Esteban Parra at (302) 324-2299, eparra@delawareonline.com or Twitter @eparra3.

EDITOR'S NOTE: Earlier versions of this story misspelled the name of Master Cpl. Jeffrey Hale.

Addressing the Social Determinants of Health with AI, Partnerships – HealthITAnalytics.com

June 11, 2020 - In the healthcare industry today, it is widely understood that optimal health outcomes require addressing patients' clinical and non-clinical needs, including their social determinants of health.

So much of an individual's health is determined by factors beyond the doctor's office. Where someone lives, works, and plays has a direct impact on her well-being, and it's critical for health systems to gather and understand their patients' social determinants data.

However, for many healthcare organizations, it can be challenging to know where to start with addressing patients' social determinants. A lack of industry standards makes it difficult to collect and share this essential information, and healthcare entities may not be equipped to address unmet social needs.

These obstacles have been highlighted even more as the industry has increasingly understood the important role social determinants play in overall health, Amy Salerno, MD, MHS, Director of Community Health and Well-Being at the University of Virginia (UVA) Health System, told HealthITAnalytics.

"Industry-wide and society-wide, there's a lot more recognition of the social determinants of health. In Charlottesville specifically, we've been looking at inequities and disparities in our local community and among our patients, and noting the tie to health outcomes," she said.

"Because the health system is not necessarily the expert in addressing housing instability, food insecurity, or other socioeconomic factors, we need to partner with organizations in our community that are the experts and create a robust network to support individuals' social needs."

In 2018, UVA Health System started to partner with community organizations to help patients beyond clinical settings. The organization selected a technology-based referral network, called Pieces, to better connect patients with community resources that meet their specific needs.

Before UVA could achieve better community health outcomes, the entity needed to develop a comprehensive network to connect patients with community groups.

"We had to consider what a robust network and partnership should look like, where we respect and honor the expertise that our community partners bring to the table. We wanted to be able to address these complex issues using shared decision-making," Salerno said.

"That led us to look for solutions that align with the strategic objectives of UVA from an operational standpoint, but that also serve the broader community and not just our patients. We wanted it to be a win-win, both for our community and for UVA."

UVA implemented a real-time artificial intelligence platform that integrated directly with hospital EHRs. The platform leverages natural language processing tools to extract social determinants and clinical risk factors from unstructured notes for more timely interventions.
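
The vendor's NLP pipeline is proprietary, but the general idea of surfacing social-determinant flags from free-text notes can be sketched with a simple pattern matcher (illustrative only; the categories and phrases below are invented, and production systems rely on trained language models rather than keyword lists):

```python
import re

# Hypothetical phrase lists per social-determinant category -- invented
# for illustration, not the platform's actual vocabulary.
SDOH_PATTERNS = {
    "housing_instability": [r"\bevict(ed|ion)\b", r"\bhomeless\b"],
    "food_insecurity": [r"\bfood (bank|pantry|insecurity)\b", r"\bskipping meals\b"],
    "transportation": [r"\bno (car|transportation)\b", r"\bmissed the bus\b"],
}

def flag_sdoh(note: str) -> set:
    """Return the social-determinant categories mentioned in a note."""
    text = note.lower()
    return {
        category
        for category, patterns in SDOH_PATTERNS.items()
        if any(re.search(p, text) for p in patterns)
    }

note = "Pt reports being evicted last month and now relies on a food pantry."
print(sorted(flag_sdoh(note)))  # → ['food_insecurity', 'housing_instability']
```

A closed-loop system like the one UVA describes would route each flagged category to the matching community partner; trained extraction models also handle negation and misspellings, which a keyword list cannot.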

For UVA, the technology could help hospitals improve patient care in several key metrics, Salerno said.

"The areas where we have the chance to make the biggest impact are around hospital length of stay and readmissions," she said.

"For us, it's been huge just to have a starting point of really robust data to tell us exactly where we are compared to other national hospitals. The tool can also tell us where we have our biggest gaps in spaces to be able to make improvements."

The platform has also helped UVA detect gaps or disparities in COVID-19 testing and outcomes.

"A lot of my work is around health equity and identifying potential disparities in testing access: individuals who may have met testing criteria based on their symptoms, but maybe didn't receive testing," Salerno said.

"We've been able to look for care access disparities and outcomes disparities in the ICU, admissions, and mortalities. And that allows us to think about changing our processes and look at the data to see if these changes improved our disparities."

The health system also adopted a fully-linked case management platform that can be used by nearly any community-based organization, hospital, or clinic. The platform enables closed-loop referrals and care plans that help health systems address social determinants.

"Multiple different types of agencies in our community have adopted the platform," Salerno said.

"We've been able to use this platform to stand up our own community call line in response to coronavirus. It acts almost like a case management and call log platform for all of the individuals looking for access to a physician and call line, especially if they have yet to be part of our UVA system. So, they don't yet have a medical record number, but we can connect them to social services if they need them."

An essential part of implementing the platform and ensuring its success was working together with community partners to find quality solutions, Salerno explained.

"We had community organizations help us assess tools and look at the options, and tell us what was important to them. We also have a community coalition of organizations, including housing providers, food providers, mental health providers, transportation groups, and education leaders, that are helping us implement the tool in a community-wide way," she said.

"You have to strategically align the goals of your organization with those of your community and have conversations with them about what would be helpful. What are their fears, and how can you support them?"

Also critical for an effort like this: good data.

"You have to have really robust, quality data before people are willing to invest the resources for change," she said.

"We're an academic medical center, so for us, data is king. Having a comprehensive data platform that uses natural language processing to identify our patients' needs, and enables us to connect them to resources, was a really great place for us to start."

Going forward, these cross-sector partnerships, coupled with innovative platforms and tools, will help healthcare organizations better address patients social determinants of health, leading to improved outcomes.

"Health is so dependent on the non-clinical factors of your life, like food access, safe and affordable housing, employment, and other determinants. Healthcare providers are not the experts in any of those factors, although all of those factors contribute to overall well-being," Salerno concluded.

"Making sure that you're partnered in creating a comprehensive system to support people along their journey and in taking care of their health and well-being is critical. Knowing that your community partners are experts in what they do, and being able to lean into them and ask for their help: those mutually beneficial partnerships are essential."

MammoScreen AI Tool Improves Diagnostic Performance of Radiologists in Detecting Breast Cancer – Cancer Network

A clinical investigation published in Radiology: Artificial Intelligence demonstrated that the concurrent use of a new artificial intelligence (AI) tool improved the diagnostic performance of radiologists in the detection of breast cancer by mammography without prolonging their workflow.1

Researchers used MammoScreen, an AI tool designed to identify regions suspicious for breast cancer on 2D digital mammograms and determine their likelihood of malignancy. The system produces a set of image positions with scores for suspicion of malignancy that are extracted from the 4 views of a standard mammogram.

"The results show that MammoScreen may help to improve radiologists' performance in breast cancer detection," Serena Pacilè, PhD, clinical research manager at Therapixel, where the software was developed, said in a press release.2

In this multireader, multicase retrospective study, a dataset including 240 digital mammography images was analyzed by 14 radiologists using a counterbalanced design, in which each half of the dataset was read either with or without AI in the first session and vice versa in a second session, with the 2 sessions separated by a washout period. End points assessed by the investigators included area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time.

Overall, the average AUC across readers was 0.769 (95% CI, 0.724-0.814) without the use of AI and 0.797 (95% CI, 0.754-0.840) with AI. The average difference in AUC was 0.028 (95% CI, 0.002-0.055; P = .035). The investigators said these data indicate greater interreader reliability with the aid of AI, resulting in more standardized results.
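
AUC, the study's primary end point, can be read as the probability that a randomly chosen cancer case receives a higher suspicion score than a randomly chosen benign case. A minimal sketch of that computation, using made-up scores rather than the study's data:

```python
def auc(pos_scores, neg_scores):
    """Probability that a random positive case outscores a random negative,
    counting ties as half (the Mann-Whitney formulation of AUC)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Made-up suspicion scores for cancer (positive) and benign (negative) cases.
cancer = [0.9, 0.8, 0.6, 0.4]
benign = [0.7, 0.3, 0.2, 0.1]
print(round(auc(cancer, benign), 3))  # → 0.875
```

On this toy data 14 of the 16 cancer-benign pairs are ranked correctly, giving an AUC of 0.875; the study's reported per-reader values of 0.769 and 0.797 rest on the same principle.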

Further, average sensitivity increased by 0.033 when AI support was utilized (P = .021). Reading time varied with the AI tool's malignancy score.

For those with a low likelihood of malignancy (< 2.5%), the time was about the same in the first reading session and slightly decreased in the second reading session. For those with a higher likelihood of malignancy, the reading time was generally increased with the use of AI.

"It should be noted that in real conditions, additional factors may have an impact on reading time (ie, stress, tiredness, etc), and that those factors were obviously not considered in the present analysis," explained the authors.

Importantly, the main limitation of this study was that the dataset used was not representative of normal screening practice. Specifically, a high rate of false-positive readings may have resulted from readers' awareness that the dataset was enriched with cancer cases, causing a laboratory effect. Moreover, because readers had no access to prior mammograms of the examined patients, other images, or additional patient information, the assessment was more challenging than a typical screening mammography reading workflow.

"[T]he overall conclusion of this clinical investigation was that the concurrent use of this AI tool improved the diagnostic performance of radiologists in the mammographic detection of breast cancer," wrote the authors. "In addition, the use of AI was shown to reduce false negatives without affecting the specificity."

In March, the FDA cleared MammoScreen for use in the clinic, where it could aid in reducing the workload of radiologists. Moving forward, the investigators plan to continue to explore the behavior of the AI tool on a large screening-based population and its ability to detect breast cancer earlier.

References:

1. Pacilè S, Lopez J, Chone P, Bertinotti T, Grouin JM, Fillard P. Improving breast cancer detection accuracy of mammography with the concurrent use of an artificial intelligence tool. Radiology: Artificial Intelligence. Published November 4, 2020. doi:10.1148/ryai.2020190208

2. AI tool improves breast cancer detection on mammography. News release. Radiological Society of North America. Published November 4, 2020. Accessed December 3, 2020. https://www.eurekalert.org/pub_releases/2020-11/rson-ati110220.php
