What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence? – Forbes

There's been a great deal of hype and excitement in the artificial intelligence (AI) world around a newly developed technology known as GPT-3. Put simply, it's an AI that is better at creating content that has a language structure (human or machine language) than anything that has come before it.

What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence?

GPT-3 was created by OpenAI, a research business co-founded by Elon Musk, and has been described as the most important and useful advance in AI in years.

But there's some confusion over exactly what it does (and indeed doesn't do), so here I will try to break it down into simple terms for any non-techy readers interested in understanding the fundamental principles behind it. I'll also cover some of the problems it raises, as well as why some people think its significance has been somewhat overinflated by hype.

What is GPT-3?

Starting with the very basics, GPT-3 stands for Generative Pre-trained Transformer 3; it's the third version of the tool to be released.

In short, this means that it generates text using algorithms that are pre-trained: they've already been fed all of the data they need to carry out their task. Specifically, they've been fed around 570GB of text gathered by crawling the internet (a publicly available dataset known as CommonCrawl), along with other texts selected by OpenAI, including the text of Wikipedia.

If you ask it a question, the most useful response would be an answer. If you ask it to carry out a task, such as creating a summary or writing a poem, you will get a summary or a poem.

More technically, it has also been described as the largest artificial neural network ever created; I will cover that further down.

What can GPT-3 do?

GPT-3 can create anything that has a language structure, which means it can answer questions, write essays, summarize long texts, translate languages, take memos, and even create computer code.

In fact, in one demo available online, it is shown creating an app that looks and functions similarly to the Instagram application, using a plugin for the software tool Figma, which is widely used for app design.

This is, of course, pretty revolutionary, and if it proves to be usable and useful in the long-term, it could have huge implications for the way software and apps are developed in the future.

As the code itself isn't available to the public yet (more on that later), access is limited to selected developers through an API maintained by OpenAI. Since the API was made available in June this year, examples have emerged of poetry, prose, news reports, and creative fiction.
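
For developers who have been granted access, a request looks something like the following minimal Python sketch, based on the 2020-era `openai` package's Completion endpoint; the engine name, prompt, and parameter values here are illustrative assumptions, not a documented recipe:

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # issued with API access

# Ask the model to continue a prompt; it returns what it predicts is
# the most useful following piece of language.
response = openai.Completion.create(
    engine="davinci",      # illustrative engine name
    prompt="Write a short poem about the sea:",
    max_tokens=64,         # cap the length of the generated text
    temperature=0.7,       # higher values give more varied output
)

print(response["choices"][0]["text"])
```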

This article is particularly interesting: in it, you can see GPT-3 making a quite persuasive attempt at convincing us humans that it doesn't mean any harm, although its robotic honesty means it is forced to admit that "I know that I will not be able to avoid destroying humankind" if evil people make it do so!

How does GPT-3 work?

In terms of where it fits within the general categories of AI applications, GPT-3 is a language prediction model. This means that it is an algorithmic structure designed to take one piece of language (an input) and transform it into what it predicts is the most useful following piece of language for the user.

It can do this thanks to the training analysis it has carried out on the vast body of text used to pre-train it. Unlike other algorithms, which are untrained in their raw state, GPT-3 has already been through this process: OpenAI expended the huge amount of compute resources necessary for it to understand how languages work and are structured. The compute time needed to achieve this is said to have cost OpenAI $4.6 million.

To learn how to build language constructs, such as sentences, it employs semantic analytics - studying not just the words and their meanings, but also gathering an understanding of how the usage of words differs depending on other words also used in the text.

It's also a form of machine learning termed unsupervised learning, because the training data does not include any information on what is a "right" or "wrong" response, as is the case with supervised learning. All of the information it needs to calculate the probability that its output will be what the user needs is gathered from the training texts themselves.

This is done by studying the usage of words and sentences, then taking them apart and attempting to rebuild them itself.

For example, during training, the algorithms may encounter the phrase "the house has a red door." They are then given the phrase again, but with a word missing, such as "the house has a red X."

It then scans all of the text in its training data (hundreds of billions of words, arranged into meaningful language) and determines what word it should use to recreate the original phrase.

To start with, it will probably get it wrong, potentially millions of times. But eventually, it will come up with the right word. By checking against its original input data, it will know it has the correct output, and weight is assigned to the algorithmic process that provided the correct answer. In this way, it gradually learns which methods are most likely to come up with the correct response in the future.
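
As a rough illustration of that fill-in-the-blank idea, here is a toy Python sketch that "learns" which word completes a phrase by counting word usage in a tiny invented corpus; GPT-3 does this with billions of learned weights in a transformer network, not raw counts, but the training objective is the same:

```python
from collections import Counter, defaultdict

# A tiny stand-in for the training corpus.
corpus = [
    "the house has a red door",
    "the house has a blue door",
    "the barn has a brown roof",
]

# "Study the usage of words": count which word follows each context.
follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 1):
        context = tuple(words[max(0, i - 2): i + 1])  # last few words
        follow_counts[context][words[i + 1]] += 1

def predict(context_words):
    """Guess the word most likely to follow the given context."""
    candidates = follow_counts.get(tuple(context_words[-3:]))
    return candidates.most_common(1)[0][0] if candidates else None

# "the house has a red X" -> the model fills in the missing word.
print(predict("the house has a red".split()))  # -> "door"
```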

The scale of this dynamic "weighting" process is what makes GPT-3 the largest artificial neural network ever created. It has been pointed out that, in some ways, what it does is nothing especially new, as transformer models of language prediction have been around for years. However, the number of weights the algorithm dynamically holds in its memory and uses to process each query is 175 billion, roughly ten times more than its closest rival, Microsoft's 17-billion-parameter Turing NLG.

What are some of the problems with GPT-3?

GPT-3's ability to produce language has been hailed as the best that has yet been seen in AI; however, there are some important considerations.

The CEO of OpenAI himself, Sam Altman, has said, "The GPT-3 Hype is too much. AI is going to change the world, but GPT-3 is just an early glimpse."

Firstly, it is a hugely expensive tool to use right now, due to the huge amount of compute power needed to carry out its function. This means the cost of using it would be beyond the budget of smaller organizations.

Secondly, it is a closed or black-box system. OpenAI has not revealed the full details of how its algorithms work, so anyone relying on it to answer questions or create products useful to them would not, as things stand, be entirely sure how they had been created.

Thirdly, the output of the system is still not perfect. While it can handle tasks such as creating short texts or basic applications, its output becomes less useful (in fact, described as "gibberish") when it is asked to produce something longer or more complex.

These are clearly issues that we can expect to be addressed over time as compute power continues to drop in price, standardization around openness of AI platforms is established, and algorithms are fine-tuned with increasing volumes of data.

All in all, it's a fair conclusion that GPT-3 produces results that are leaps and bounds ahead of what we have seen previously. Anyone who has seen the output of AI language models knows the results can be variable, and GPT-3's output undeniably seems like a step forward. When we see it properly in the hands of the public and available to everyone, its performance should become even more impressive.

Read the original post:
What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence? - Forbes

Thomas J. Fuchs, DSc, Named Dean of Artificial Intelligence and Human Health and Co-Director of the Hasso Plattner Institute for Digital Health at…

Newswise (New York, NY, October 7, 2020) Thomas J. Fuchs, DSc, a prominent scientist in the groundbreaking field of computational pathology (the use of artificial intelligence to analyze images of tissue samples to identify disease and predict outcome) has been appointed Co-Director of the Hasso Plattner Institute for Digital Health at Mount Sinai, Dean of Artificial Intelligence (AI) and Human Health, and Professor of Computational Pathology and Computer Science in the Department of Pathology at the Icahn School of Medicine at Mount Sinai. In his new role, he will lead the next generation of scientists and clinicians in using machine learning and other forms of artificial intelligence to develop novel diagnostics and treatments for acute and chronic disease.

"Dr. Fuchs has advanced the field of precision medicine through his contributions to artificial intelligence in pathology, helping the health care industry better understand and fight cancer. His expertise will enhance Mount Sinai's continued efforts to use digital health to train future medical leaders and improve care for our patients," said Dennis S. Charney, MD, Anne and Joel Ehrenkranz Dean, Icahn School of Medicine at Mount Sinai, and President for Academic Affairs, Mount Sinai Health System. "By building on existing AI and health initiatives, like the Mount Sinai Digital and Artificial Intelligence-Enabled Pathology Center of Excellence, Dr. Fuchs's guidance, along with shared knowledge and academic excellence from our team of researchers and clinicians, will help revolutionize health care and science, nationally and globally."

Dr. Fuchs's trailblazing work includes developing novel methods for the analysis of digital microscopy slides to better understand genetic mutations and their influence on changes in tissues. He has been recognized for developing large-scale systems for mapping the pathology, origins, and progress of cancer. This breakthrough was achieved by building a high-performance compute cluster to train deep neural networks at petabyte scale.

"Mount Sinai is at the forefront of digital health in medicine, with an exceptionally talented team driving innovation forward. I am tremendously excited to join them in expanding initiatives and efforts to advance artificial intelligence in human health; the honor of leading this task is utterly humbling," said Dr. Fuchs. "Together, we will weave a fabric of AI services that help nurses, physicians, and hospital leadership make personalized decisions for every patient. The key goals are to help especially vulnerable populations, improve treatment for all, and use AI to democratize health care throughout New York and across the globe."

His vision for Mount Sinai is to further revolutionize medical practice by pushing the boundaries of AI, with the ultimate goal of transforming the quality of life and human health for people all over the globe. That vision includes transforming pathology (the study of the causes and effects of disease or injury) from a qualitative to a quantitative science, and empowering more doctors and medical students to use their talent for good by joining the novel field.

Dr. Fuchs will focus on developing a new system and code for machine learning; large-scale research models and computation; more effectively using data to apply to real-world clinical settings; and continuing to expand the use of computational pathology in treatments through collaboration.

He will co-lead the Hasso Plattner Institute for Digital Health at Mount Sinai, established in 2019 by the Mount Sinai Health System and the Hasso Plattner Institute with generous philanthropic support from the Hasso Plattner Foundation.

"Dr. Fuchs has made key contributions in AI for cancer diagnosis, which will be significant as we work to save lives, prevent disease, and improve the health of patients using artificial intelligence in real-time analysis of comprehensive health data from electronic health records, genetic information, and mobile sensor technologies," said Erwin P. Bottinger, MD, Co-Director of the Hasso Plattner Institute for Digital Health at Mount Sinai and Professor of Digital Health-Personalized Medicine, Hasso Plattner Institute, University of Potsdam, Germany. "As Dr. Fuchs and I collaborate to advance artificial intelligence and machine learning in health care, the institute will continue to be a force in creating progressive digital health services."

Before joining Mount Sinai, Dr. Fuchs was Director of the Warren Alpert Center for Digital and Computational Pathology at Memorial Sloan Kettering Cancer Center (MSK) and Associate Professor at Weill Cornell Graduate School for Medical Sciences. At MSK he led a laboratory focused on computational pathology and medical machine learning. Dr. Fuchs co-founded Paige.AI in 2017 and led its initial growth into the leading AI company in pathology. He is a former research technologist at NASA's Jet Propulsion Laboratory and visiting scientist at the California Institute of Technology. Dr. Fuchs holds a Doctor of Sciences in Machine Learning from ETH Zurich and an MS in Technical Mathematics from Graz Technical University in Austria.

"We are very pleased to welcome Thomas to our faculty," said Eric Nestler, MD, PhD, Nash Family Professor of Neuroscience, Director of The Friedman Brain Institute, and Dean for Academic and Scientific Affairs, Icahn School of Medicine at Mount Sinai. "His vast knowledge in data science, machine learning, and artificial intelligence will significantly move Mount Sinai forward as a world leader in health care."

About the Mount Sinai Health System

The Mount Sinai Health System is New York City's largest academic medical system, encompassing eight hospitals, a leading medical school, and a vast network of ambulatory practices throughout the greater New York region. Mount Sinai is a national and international source of unrivaled education, translational research and discovery, and collaborative clinical leadership, ensuring that we deliver the highest quality care, from prevention to treatment of the most serious and complex human diseases. The Health System includes more than 7,200 physicians and features a robust and continually expanding network of multispecialty services, including more than 400 ambulatory practice locations throughout the five boroughs of New York City, Westchester, and Long Island. The Mount Sinai Hospital is ranked No. 14 on U.S. News & World Report's Honor Roll of the Top 20 Best Hospitals in the country, and the Icahn School of Medicine is ranked as one of the Top 20 Best Medical Schools in the country. Mount Sinai Health System hospitals are consistently ranked regionally by specialty by U.S. News & World Report.

For more information, visit https://www.mountsinai.org or find Mount Sinai on Facebook, Twitter and YouTube.

To learn more about Dr. Thomas Fuchs and the Hasso Plattner Institute for Digital Health at Mount Sinai, watch the short video here.

Go here to read the rest:
Thomas J. Fuchs, DSc, Named Dean of Artificial Intelligence and Human Health and Co-Director of the Hasso Plattner Institute for Digital Health at...

Penn researchers get $3.2 million grant to use artificial intelligence for improving heart transplants – PhillyVoice.com

A team of researchers at Penn Medicine is turning to artificial intelligence as a diagnostic tool to improve outcomes for patients who receive heart transplants.

Each year, more than 2,000 heart transplants are performed in the United States, but recipients' immune systems reject as many as 30% to 40% of these organs.

A new grant from the National Institutes of Health will support research into the use of artificial intelligence to better detect the risk of rejection and the immune mechanisms that underlie it. The $3.2 million grant will be shared over four years by Penn Medicine, Case Western Reserve University, Cleveland Clinic and Cedars-Sinai Medical Center.

When a patient's immune system recognizes a donor heart as a foreign object, the organ can become damaged and eventually rejected.

The current grading standard for such damage has poor diagnostic accuracy, leaving patients vulnerable to receiving too much or too little treatment.

With the grant funding, researchers will use AI to analyze cardiac biopsy tissue images and better distinguish between rejection grades. They hope the analysis will also detect patterns of immune cells that reveal the mechanism of rejection.
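
The article doesn't describe the team's model, but the general recipe for this kind of image grading is well established: transfer learning on labeled tissue images. A minimal Python sketch with PyTorch, assuming ISHLT-style rejection grades (0R-3R) as classes and random tensors standing in for biopsy image batches:

```python
import torch
import torch.nn as nn
from torchvision import models

# Rejection grades used as class labels (the ISHLT 0R-3R scheme;
# the label set and data pipeline here are assumptions).
GRADES = ["0R", "1R", "2R", "3R"]

# In practice one would start from ImageNet-pretrained weights, e.g.
# models.resnet18(weights="IMAGENET1K_V1"); weights=None keeps this
# sketch runnable offline.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(GRADES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of biopsy image tensors."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for an image batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(GRADES), (8,))
print(train_step(images, labels))
```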

With improved diagnostic accuracy, researchers believe they may be able to spot serious rejection earlier on, reduce rates of infection, and prevent complications of immune-suppressing drugs.

By improving identification of rejection mechanisms, clinicians may be able to better target medications and predict long-term outcomes, reducing the need for frequent heart biopsies.

The research team will compare the relative performance of the AI analysis with human pathologists to see how computer-aided tissue diagnostics can serve as a decision support tool.

"This research is focused on a critical component of heart transplantation: improving patient outcomes," said Kenneth B. Margulies, principal investigator and professor of cardiovascular medicine at Penn. "Unfortunately, the number of patients with end-stage heart failure is increasing. But research like this is another step in the right direction for improving survival and quality of life for heart failure patients."

Read more here:
Penn researchers get $3.2 million grant to use artificial intelligence for improving heart transplants - PhillyVoice.com

Future reality: Triad of Internet of Things, Artificial Intelligence & Blockchain in action – The Financial Express

By Sanjay Pathak

Blockchain today is still in its infancy, and its mainstream value is yet to be realised. While it's certain that blockchain will disrupt existing solutions, not only in industry and commerce but in almost all aspects of our day-to-day lives, it cannot do so just by itself. The same holds true for the Internet of Things (IoT) and Artificial Intelligence (AI). The underlying fact is that to deliver real value, new-age emerging technologies such as blockchain, AI, and IoT have to work in tandem. As we begin to understand the new normal in the midst of the corona pandemic, it will be important to draw value from any digital transformation that firms undertake. Businesses will have to think beyond their domain and scope to provide services which are of actual value to consumers.

How can this happen? IoT has brought new and cheaper ways to communicate with things, in ways that were not fathomable in the past. Blockchain, with its promise of immutability, transparency, security, interoperability, etc., allows us to exploit otherwise unused resources, trade the un-tradable, and enable new ecosystems that were not possible before. The new entrant, AI (inclusive of machine/deep learning, vision, NLP, robots or autonomous machines, etc.), has already started to deliver great value to many industries, so much so as to reduce or even replace the human element. Further advancement in 5G communication is a positive catalyst for this ecosystem.

However, these technologies, with a disjointed ecosystem or industries' siloed approach towards them, may not reach their full potential. In this combination, data becomes the common driving factor. While IoT produces data from new sources and sensors, blockchain safeguards it and ensures immutability, and the AI layer on top helps deliver new business meanings and outcomes in almost real time. In summary, the data value chain comes from new technologies enabling collection, sharing, security, immutability, analysis, and automation of decisions with minimal human involvement.

Let's run this model on a practical consumer problem of provenance: the classic Farm to Table use case. The big questions that need solutions concern quality, credibility, genuineness, safety, increased efficiency, and warranting correct distribution of revenue. IoT takes care of monitoring conditions maintained in farms, with respect to temperature, humidity, soil nutrients, and growth progress, as well as conditions at processing centres and in logistics. All this information can be stored on blockchain-based smart contracts. An AI-based engine on top of this, with feeds from weather systems, etc., can trigger and automatically execute smart contracts and take required action based on pre-agreed rules, including payments. In an adverse event like an outbreak at any stage, the source could be easily traced and isolated. Next, this can be extended to insurance and forward commodity trading using a trade setup, thus bringing real value from agriculture, supply chain, financial services, insurance, and other industries combined.
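
As a toy illustration of that flow, the Python sketch below simulates the rule layer: IoT readings are appended to a ledger stand-in, and a pre-agreed "contract" releases payment or flags a shipment. Real deployments would run this logic as smart contracts on a blockchain platform; the thresholds, field names, and payment stub are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """An IoT reading from a farm or a cold-chain shipment."""
    shipment_id: str
    temperature_c: float
    humidity_pct: float

# Pre-agreed rule thresholds (hypothetical values).
MAX_TEMP_C = 8.0
MAX_HUMIDITY_PCT = 70.0

ledger = []  # stand-in for an immutable blockchain ledger

def record(reading: SensorReading) -> None:
    """Store the reading and execute the pre-agreed 'contract'."""
    ledger.append(reading)  # a real chain would hash-link entries
    if (reading.temperature_c > MAX_TEMP_C
            or reading.humidity_pct > MAX_HUMIDITY_PCT):
        flag_shipment(reading.shipment_id)
    else:
        settle_payment(reading.shipment_id)

def settle_payment(shipment_id: str) -> None:
    print(f"{shipment_id}: conditions met, payment released")

def flag_shipment(shipment_id: str) -> None:
    print(f"{shipment_id}: conditions violated, flagged for tracing")

record(SensorReading("LOT-42", temperature_c=6.5, humidity_pct=55.0))
record(SensorReading("LOT-43", temperature_c=9.2, humidity_pct=52.0))
```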

IoT has come a long way in improving sensor types, sizes, and costs, and even their usage in some industries; the real consumer-centric benefits can be manifold. AI faces the challenge of accuracy, trust, and confidence when measured against the human cognitive mind. Building such ecosystems without regulatory pressure is not easy, if not impossible. This is one of the primary factors for blockchain and other similar transformative technologies not gaining mainstream acceptance or adoption.

Let's also keep an eye on quantum computing breakthroughs, as these not only threaten the key features of these emerging technologies but will severely impact the best encryption, security, and cryptography that exist today. This means that industries, digital ecosystems, and IT infrastructure will have to evolve at a rapid pace before they are negatively impacted.

The writer is head of the Blockchain, Healthcare & Insurance Practice at 3i Infotech.

Originally posted here:
Future reality: Triad of Internet of Things, Artificial Intelligence & Blockchain in action - The Financial Express

Welcome Initiative on Artificial Intelligence – Economic Times

The first step is to enact robust data protection

It is welcome that India is hosting a global summit on artificial intelligence (AI) and that the Prime Minister has addressed the gathering, expressing commitment at the highest level of government to the wholesome development and regulation of AI. AI will fast become not just a major component of economic competitiveness but also a force multiplier in strategic capacity. It also poses serious challenges, in itself and in the way it is put to use. Therefore, control and regulation of AI are global concerns of mounting importance, on which the G20 grouping of the world's 20 largest economies has adopted guidelines and principles.

For India to offer something more than lip service to developing AI, the first thing to do is to put in place a robust data protection framework. Data is oxygen for AI, and how data is used to train AI has implications both for the data subjects whose data is utilised for the purpose and for the kind of algorithm that is produced. In the US, racial bias has been built into facial recognition software that makes use of AI. That embarrassment has led to some principles being formulated for AI development as well. Transparency and explainability, for example: if someone adversely affected by AI decisions wants to challenge a decision, the AI in use must be able to explain how and why it reached the conclusion it did. Robustness, safety, and security must be ensured, for which traceability of the data sets used for creating or training the algorithms involved is essential. Accountability is another principle: AI actors must be accountable for the proper functioning of AI. Regulation of AI and of algorithms must emerge as a robust and active field of study and practice in India.
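
To make the explainability requirement concrete, here is a minimal Python sketch of the simplest case, a linear scoring model, where each input's contribution to a decision can be reported directly; the feature names, weights, and applicant values are invented for illustration:

```python
import numpy as np

# A toy "explainable" credit decision with a linear model: each
# feature's contribution to the score is simply weight * value.
features = ["income", "debt_ratio", "years_employed"]
weights = np.array([0.8, -1.5, 0.4])   # learned model weights
bias = -0.2

applicant = np.array([0.6, 0.9, 0.3])  # normalized feature values

contributions = weights * applicant
score = contributions.sum() + bias
decision = "approve" if score > 0 else "deny"

print(f"decision: {decision} (score={score:+.2f})")
for name, c in zip(features, contributions):
    print(f"  {name}: {c:+.2f}")  # how each input pushed the decision
```

Deep models need dedicated explanation techniques rather than this direct decomposition, but the principle is the same: the decision must break down into reasons a person can challenge.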

The Prime Minister said that AI should not be weaponised in the hands of non-State actors. How to translate this fine sentiment into action is the question. The Global Partnership on Artificial Intelligence excludes China, whose labs and companies operate at the cutting edge of AI. That makes global coordination to keep AI safe rather tough.

This piece appeared as an editorial opinion in the print edition of The Economic Times.

View original post here:
Welcome Initiative on Artificial Intelligence - Economic Times

Nexen Tire to use Artificial Intelligence to reduce tire noise – Traction News

Nexen Tire America announced the development of an Artificial Intelligence (AI) and big data-driven methodology aimed at reducing tire noise.

The big data research for Noise, Vibration, and Harshness (NVH) was jointly conducted with Hyundai-Kia Automotive Group and Inha University in Korea. Since 2018, Nexen Tire has conducted the joint research with long-standing partner Hyundai-Kia Automotive Group to increase customer satisfaction and improve the environment by reducing noise levels.

Due to worldwide regulations and the increasing push for noise reduction in electric vehicles, Nexen Tire designed an anechoic chamber containing dozens of microphone sensors to measure noise, analyze pass-by noise (PBN), and detect causes of noise from the vehicle powertrain.
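
As background on the measurement itself, each microphone's reading is typically reported as a sound pressure level (SPL) computed from the root-mean-square pressure against the standard 20 µPa reference; a minimal Python sketch with synthetic samples standing in for a recorded signal:

```python
import numpy as np

P_REF = 20e-6  # reference pressure for SPL in air: 20 micropascals

# One second of synthetic microphone samples at 48 kHz, in pascals.
rng = np.random.default_rng(1)
samples = rng.normal(0.0, 0.02, 48_000)

# SPL = 20 * log10(p_rms / p_ref)
p_rms = np.sqrt(np.mean(samples ** 2))
spl_db = 20 * np.log10(p_rms / P_REF)

print(f"{spl_db:.1f} dB SPL")  # ~60 dB for this synthetic signal
```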

"Nexen Tire constantly monitors top industry trends and feedback from customers, and what we've discovered is that nearly all drivers want a quieter driving experience. This system has provided valuable scientific information about tire noise, and based on that information, we've developed an industry-exclusive manufacturing process to reduce tire noise using AI technology," said John Hagan, executive vice president of sales for Nexen Tire America, Inc. "This methodology is exclusive to Nexen Tire and will allow us to expand our technology to provide some of the industry's best and quietest tires to global automotive manufacturers."

Link:
Nexen Tire to use Artificial Intelligence to reduce tire noise - Traction News

IBM Watson Advertising Expands Suite, Makes Artificial Intelligence The Backbone 10/08/2020 – MediaPost Communications

IBM Watson Advertising is building a suite of solutions using artificial intelligence as the foundation.

The suite, built around privacy measures, leverages first-party data to increase ad optimization and move the industry toward cookieless targeting. The strategy also uses intelligent chatbots that connect brands and consumers.

"Artificial intelligence is becoming the backbone of online advertising," says Sheri Bachstein, global head of Watson Advertising and The Weather Company. The industry is feeling a lot of pressure as targeting pixels disappear and privacy legislation increases, she says, suggesting that AI has moved from a buzzword to supporting brands.

Change requires education. Eight to 10 years ago, the industry underwent a major transformation with programmatic, based on automation. It took time for marketers to learn about the technology and for companies to adopt it, with trial and error -- but most importantly, it took patience.

"AI isn't about automation, but rather augmenting the human process," Bachstein said. "It's about being predictive. The cookie can only tell you what happened in the past. AI can tell you what happened in the past, present the insights, and tell you what you can gain in the future."

The expanded suite of cookieless offerings includes extensions for IBM Watson Advertising Accelerator, IBM Watson Advertising Attribution, and IBM Watson Advertising Predictive Audiences.

The attribution product, for example, uses machine learning to help determine when campaigns yield performance results.

IBM made the announcement with a series of partners that were willing to combine their data to help brands and publishers achieve these results. Partners include Xandr/AT&T, Magnite, Nielsen, MediaMath, LiveRamp, and Beeswax.

Rand Harbert, executive vice president and chief agency, sales and marketing officer at State Farm, acknowledged using IBM's AI products through The Weather Channel, an approach that helps the company use data to create experiences with consumers in the moment.

The new capabilities are focused on privacy and designed to allow brands to reach consumers while respecting their customers' privacy.

See the article here:
IBM Watson Advertising Expands Suite, Makes Artificial Intelligence The Backbone 10/08/2020 - MediaPost Communications

There is already a beer created by Artificial Intelligence – Thehour.com

There is already a beer created by Artificial Intelligence

Technology has become a large part of our lives, and with it Artificial Intelligence (AI) has made its way into our daily routines, so much so that with its help we have been able to create products that humans normally make.

In this context, a Swiss company launched Deeper, the first beer in that country created with the assistance of AI. The recipe for the drink was devised by an algorithm known as Brauer AI.

Photo: brauer.ai

To carry out this project, the creators chose the India Pale Ale style of beer; the algorithm then analyzed market trends and an international database of around 157,000 recipes to choose the type of malt and hops to use.
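
The article doesn't explain how Brauer AI works internally; as a hedged sketch of the kind of selection step described, a recipe database can be filtered by style and mined for its most popular ingredients, with every recipe below invented for illustration:

```python
from collections import Counter

# Invented stand-ins for entries in a recipe database.
recipes = [
    {"style": "IPA", "malt": "pale ale", "hops": "citra"},
    {"style": "IPA", "malt": "pale ale", "hops": "mosaic"},
    {"style": "IPA", "malt": "maris otter", "hops": "citra"},
    {"style": "stout", "malt": "roasted barley", "hops": "fuggle"},
]

# Filter to the chosen style, then pick the most common ingredients.
ipa = [r for r in recipes if r["style"] == "IPA"]
top_malt = Counter(r["malt"] for r in ipa).most_common(1)[0][0]
top_hops = Counter(r["hops"] for r in ipa).most_common(1)[0][0]

print(top_malt, top_hops)  # -> "pale ale" "citra"
```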

The MN Brew microbrewery, the University of Lucerne, and the company Jaywalker Digital participated in the creation of this product. The official page of the drink carries a short motto: "we believe in the power of merging human wisdom with artificial intelligence."

See original here:
There is already a beer created by Artificial Intelligence - Thehour.com

Better glass quality through artificial intelligence and LineScanner Management – Glass on Web

For over 20 years, SOFTSOLUTION has been developing and manufacturing quality assurance systems for glass processors at its site in Waidhofen/Ybbs, Austria. Thanks to the latest developments, the LineScanner Management Console and artificial intelligence are now used to optimize processes.

NEW// LineScanner Management Console

The newly developed LineScanner Management Console provides processors with an online overview of the current status of all scanners in production and thus integrates the scanners into the wider automation and workflow control. This software tool records the quality and quantity of produced and scanned panes per line and gives a quick overview of all scanners in operation.

The LineScanner Management Console provides the user with the most important data (status of the line, service requirements, including predictive ones, as well as current production figures with corresponding quality results) in real time. Complete documentation of the glass quality is indispensable and, thanks to lot tracking and Industry 4.0, will be even easier in the future.

How artificial intelligence prevents "false quality rejects"

Artificial intelligence is already finding practical application in many areas, including inspection systems from SOFTSOLUTION, and replaces traditional automated methods, which often suffer from a high rate of "false rejects" (false quality rejections).

SOFTSOLUTION has understood this customer requirement and relies on artificial intelligence to solve problems with "false quality rejects" with the help of algorithms. Existing standards regulate the tolerances, but practice shows a different picture: today's quality demands go far beyond them, and every customer has individual requirements and tolerances. In practical use, this often leads to a high rate of "false rejects", meaning a false quality rejection by the plant. The scanner delivers results continuously: which defects were found on a pane of glass, what kind of defect each one is, and whether or not the defect is acceptable for this customer.

An operator may assess a defect differently. In this case, SOFTSOLUTION now allows the operator to correct the LineScanner's decision. Such "changes" by the operator are collected and used for continuous improvement. Thus, the LineScanner increasingly learns from feedback and constantly adapts its evaluation behaviour.
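
SOFTSOLUTION doesn't disclose how the collected corrections are used internally; one simple form such a feedback loop can take, sketched here in Python purely as an assumption, is to nudge a per-customer acceptance threshold whenever the operator overrides the scanner:

```python
# Toy feedback loop: the scanner scores each defect with a severity
# in [0, 1] and rejects panes above a per-customer threshold.
# All names and values below are illustrative.

thresholds = {"customer_a": 0.50}  # starting tolerance per customer
LEARNING_RATE = 0.05               # how strongly one override counts

def scanner_decision(customer: str, severity: float) -> str:
    return "reject" if severity > thresholds[customer] else "accept"

def operator_override(customer: str, severity: float, verdict: str) -> None:
    """Nudge the threshold toward the operator's judgement."""
    if scanner_decision(customer, severity) == verdict:
        return  # scanner already agrees; nothing to learn
    if verdict == "accept":   # scanner was too strict: raise tolerance
        thresholds[customer] += LEARNING_RATE
    else:                     # scanner was too lenient: lower tolerance
        thresholds[customer] -= LEARNING_RATE

# A borderline defect the scanner rejected but the operator accepted.
operator_override("customer_a", severity=0.52, verdict="accept")
print(thresholds["customer_a"])  # drifts up to 0.55
```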

With SOFTSOLUTION always one step ahead!

Excerpt from:
Better glass quality through artificial intelligence and LineScanner Management - Glass on Web

UI group wins $1 million to work on medical artificial intelligence – UI The Daily Iowan

A team of researchers at the University of Iowa is leading a multi-university project to work on the advancement of medical artificial intelligence.

Matthew Hsieh

Entrance to the University of Iowa's Seamans Center for the Engineering Arts and Sciences at 103 South Capitol Street, Iowa City, IA, on Friday, Oct. 2, 2020. A $1 million grant from the National Science Foundation was awarded to a University of Iowa collaboration between engineering and medical students and faculty to work on advancing medical AI.

As computers and artificial intelligence (AI) play a key role in improving medical fields, experts in medical and engineering research at the University of Iowa are merging disciplines to work toward the advancement of medical AI with the help of a $1 million grant from the National Science Foundation.

Stephen Baek, Assistant Professor of Industrial and Systems Engineering and the lead researcher on the project, said he is working alongside a large team of medical and engineering experts at the UI and across the world.

Baek said the model will also be sent to medical institutions at other universities to form a testing network, including Harvard, Yale, Stanford, the University of Chicago, and Seoul National University in South Korea.

"My research is basically about creating an informatics system that can support human experts in making more informed decisions. So I believe that artificial intelligence agents can help human experts make better decisions," Baek said. "Treating a cancer patient, for example, is a highly demanding job. The doctors and physicians must understand all the charts, images, and different information, and then have to finally reach a conclusion in terms of how you're going to treat a patient."

Baek said his hypothesis is that AI algorithms should be able to support physicians in making challenging decisions in a way that could revolutionize the medical field.

"In medicine, you want to have a bulletproof solution," he said. "99 percent accuracy is not enough, because there's a 1 percent chance you mess up with a patient, which means the patient might die. So we want to make sure everything is perfect and reliable."

The main way they can ensure everything is reliable is to collect a lot of data, Baek said, as AI systems are hungry for data to work properly. This data also needs to be representative of all people, he said, which can be a problem if a particular hospital does not see a lot of diversity, as different hospitals have different demographics and populations.

UI Professor of Electrical and Computer Engineering and Radiation Oncology Xiaodong Wu said huge data sets are needed to create effective medical AI models.

"Currently, in this era of precision medicine, in order to allow medical imaging AI models to offer effective clinical decision support, large amounts of image and clinical data are required by most of the current medical AI models," Wu said. "[Most AI models] make use of data from a single institution or relatively few institutions, maybe from just a few geographic regions or patient demographics."

Nick Street, UI Associate Dean for Research and Ph.D. Programs at the Tippie College of Business, said he has been working alongside Baek on this project as well.

Street said what hospitals really need is the records of every patient on Earth collected into one spot, but for security reasons, medical records cannot leave the hospital they are in.

"The problem with that is sharing patient data is not something that we can just send over email. It requires an agreement, the consent from the patient. It is private information, so there's an ethics concern, there's a regulatory concern, there's an administrative concern," Street said. "So even if people like me who are doing data science want to develop an AI model and then test it against multiple different institutions, there's always a barrier in terms of patient privacy and patient data sharing."

The solution, he says, is that instead of having the AI sit in one location and collect data sets, or sending a resident to other hospitals, they can send the AI agent to the locations where the data exists. This also gives other institutions the opportunity to improve the model.
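
The article doesn't name the technique, but sending the model to the data and averaging what it learns is the core idea of federated learning. A minimal Python sketch of federated averaging, with a toy logistic-regression model and invented stand-ins for each hospital's private data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented private datasets at three hospitals: 5 features per
# patient, binary outcome. The raw records never leave their site.
hospitals = [
    (rng.normal(size=(120, 5)), rng.integers(0, 2, 120)),
    (rng.normal(size=(80, 5)), rng.integers(0, 2, 80)),
    (rng.normal(size=(200, 5)), rng.integers(0, 2, 200)),
]

def local_update(weights, X, y, lr=0.1, steps=20):
    """Train a logistic-regression model locally for a few steps."""
    w = weights.copy()
    for _ in range(steps):
        preds = 1 / (1 + np.exp(-X @ w))      # sigmoid
        w -= lr * X.T @ (preds - y) / len(y)  # gradient step
    return w

# Federated averaging: ship weights out, train locally, average back,
# weighting each site by how many patients it contributed.
global_w = np.zeros(5)
for _ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    sizes = [len(y) for _, y in hospitals]
    global_w = np.average(local_ws, axis=0, weights=sizes)

print(global_w)  # a model trained without pooling any patient records
```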

This collaborative research is made possible through a National Science Foundation (NSF) competition. Over the next nine months, the team and 28 others will work on constructing prototypes and pitches to present to the NSF.

The NSF will then select the teams that move on to the next phase, in which their institutions are awarded a $5 million grant to bring the proposals to life.

Baek, Street, and Wu all said although this will be a tough competition, they are hopeful and optimistic that they will move on to the next stage.

Street said that, to him, the most exciting part of the competition is the range of research expertise Baek connected them all with during this project.

"This, to me, is the exciting part of working at a place like this: you've got so many people with so many different backgrounds," he said. "In this case, we found a way to come together to do something that no one of us could have done on our own."

Originally posted here:
UI group wins $1 million to work on medical artificial intelligence - UI The Daily Iowan