St. Louis Is Grappling With Artificial Intelligence’s Promise And Potential Peril – St. Louis Public Radio

Tinus Le Roux's company, FanCam, takes high-resolution photos of crowds having fun. That might be at Busch Stadium, where FanCam is installed, or on Market Street, where FanCam set up its technology to capture Blues fans celebrating after the Stanley Cup victory.

As photos, they're a fun souvenir. But paired with artificial intelligence, they're something more: a tool that gives professional sports teams a much more detailed look at who's in the audience, including their estimated age and gender. The idea, he explained Thursday on St. Louis on the Air, is to help teams understand their fans a bit better: when they're leaving their seats, what merchandise they're wearing.

Now that the pandemic has made crowd size a matter of public health, Le Roux noted that FanCam can help teams tell whether the audience has swelled past 25% capacity or how many patrons are wearing masks.
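FanCam has not published how its analysis works, but the kind of aggregate, non-identifying metrics Le Roux describes can be sketched in a few lines. Everything below (the detection format, the capacity figure, the 25% cap) is hypothetical:

```python
# Illustrative sketch only; FanCam's actual pipeline is proprietary.
# Given per-person detections from a crowd image (here, hypothetical
# dicts produced by some computer-vision model), compute aggregate
# metrics like the ones in the article: occupancy vs. a capacity cap
# and the share of patrons wearing masks.

def crowd_metrics(detections, venue_capacity, cap_fraction=0.25):
    occupancy = len(detections) / venue_capacity
    masked = sum(1 for d in detections if d.get("mask", False))
    return {
        "occupancy_pct": round(100 * occupancy, 1),
        "over_cap": occupancy > cap_fraction,
        "mask_rate_pct": round(100 * masked / max(len(detections), 1), 1),
    }

detections = [{"mask": True}] * 70 + [{"mask": False}] * 30  # 100 people
print(crowd_metrics(detections, venue_capacity=500))
# 20% occupancy, under the 25% cap, 70% masked
```

No individual is identified at any point: the only inputs are anonymous per-person detections, which is the distinction Le Roux draws below.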

But for all the technologys power, Le Roux believes in limits. He explained that he is not interested in technology that would allow him to identify individuals in the crowd.

"We don't touch facial recognition. Ethically, it's dubious," he said. "In fact, I'm passionately against the use of facial recognition in public spaces. What we do is use computer vision to analyze these images for more generalized data."

Not all tech companies share those concerns. Detroit now uses facial recognition as an investigatory tool. Earlier this year, that practice led to the wrongful arrest of a Black man. The ACLU has now filed a lawsuit seeking to stop the practice there.

Locally, Sara Baker, policy director for the ACLU of Missouri, said the concerns go far beyond facial recognition.

"The way in which many technologies are being used, on the surface, the purpose is benign," she said. "The other implication of that is, what rights are we willing to sacrifice in order to engage with those technologies? And that centers, really, on your right to privacy, and if you are consenting to being surveilled or not, and how that data is being used on the back end as well."

Baker cited the license plate readers now in place around the city, as well as Persistent Surveillance Systems' attempts to bring aerial surveillance to the city, as potential concerns. The Board of Aldermen has encouraged Mayor Lyda Krewson to enter negotiations with the company as a way to stop crime, although Baltimore's experience with the technology has yet to yield the promised results.

"That could involve surveillance of the entire city," Baker said. "In Baltimore, that means 90% of outdoor activities are surveilled. I think we're getting to a point where we need to have robust conversations like this when we're putting our privacy rights on the line, because I think we have a shared value of wanting to keep some aspects of our lives private to ourselves."

To that end, Baker said she'd like to see the St. Louis Board of Aldermen pass Board Bill 95, which would regulate surveillance in the city. She said it offers common-sense guardrails for how surveillance is used in the city.

Other than California and Illinois, Le Roux said, few states have even grappled with technologys capabilities.

"I think the legal framework is still behind, and we need to catch up," Le Roux said.

Le Roux will be speaking more about the ethical issues around facial recognition at Prepare.ai's Prepare 2020 conference. The St. Louis-based nonprofit hosts the annual conference to explore issues around artificial intelligence. (Thanks to the ongoing pandemic, Prepare 2020 is now entirely virtual and entirely free.)

Prepare.ai's mission is to increase collaboration around fourth-industrial-revolution technologies in order to advance the human experience.

Le Roux said he hopes more tech leaders and those who understand the building blocks of technology have a seat at the table as regulations are being written. And Baker said her hope is that local governments proceed with caution in turning to new technologies being touted as a way to solve crime.

"We have over 600 cameras in the city of St. Louis," she said. "We've spent up to $100,000 a pop on different surveillance technologies, and we've spent over $4 million in the past three years on these types of surveillance technologies, and we've done it without any real audit or understanding of how the data is being used, and whether it's being used ethically. And that is what needs to change."

Related Event

What: Prepare 2020

When: Now through Oct. 28

St. Louis on the Air brings you the stories of St. Louis and the people who live, work and create in our region. The show is hosted by Sarah Fenske and produced by Alex Heuer, Emily Woodbury, Evie Hemphill and Lara Hamdan. The audio engineer is Aaron Doerr.


The grim fate that could be ‘worse than extinction’ – BBC News

Toby Ord, a senior research fellow at the Future of Humanity Institute (FHI) at Oxford University, believes that the odds of an existential catastrophe happening this century from natural causes are less than one in 2,000, because humans have survived for 2,000 centuries without one. However, when he adds the probability of human-made disasters, Ord believes the chances increase to a startling one in six. He refers to this century as "the precipice" because the risk of losing our future has never been so high.

Researchers at the Center on Long-Term Risk, a non-profit research institute in London, have expanded upon existential risks ("x-risks") with the even-more-chilling prospect of suffering risks. These "s-risks" are defined as "suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far." In these scenarios, life continues for billions of people, but the quality is so low and the outlook so bleak that dying out would be preferable. In short: a future with negative value is worse than one with no value at all.

This is where the "world in chains" scenario comes in. If a malevolent group or government suddenly gained world-dominating power through technology, and there was nothing to stand in its way, it could lead to an extended period of abject suffering and subjugation. A 2017 report on existential risks from the Global Priorities Project, in conjunction with FHI and the Ministry for Foreign Affairs of Finland, warned that "a long future under a particularly brutal global totalitarian state could arguably be worse than complete extinction."

Singleton hypothesis

Though global totalitarianism is still a niche topic of study, researchers in the field of existential risk are increasingly turning their attention to its most likely cause: artificial intelligence.

In his "singleton hypothesis," Nick Bostrom, director at Oxford's FHI, has explained how a global government could form with AI or other powerful technologies, and why it might be impossible to overthrow. He writes that a world with "a single decision-making agency at the highest level" could occur if that agency "obtains a decisive lead through a technological breakthrough in artificial intelligence or molecular nanotechnology." Once in charge, it would control advances in technology that prevent internal challenges, like surveillance or autonomous weapons, and, with this monopoly, remain perpetually stable.


The Link Between Artificial Intelligence Jobs and Well-Being – Stanford University News

Artificial intelligence carries the promise of making industry more efficient and our lives easier. With that promise, however, also comes the fear of job replacement, hollowing out of the middle class, increased income inequality, and overall dissatisfaction. According to the quarterly CNBC/SurveyMonkey Workplace Happiness survey from October last year, 37% of workers between the ages of 18 and 24 are worried about AI eliminating their jobs.

But a recent study from two researchers affiliated with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) challenged this public perception about AI's impact on social welfare. The study found a relationship between AI-related jobs and increases in economic growth, which in turn improved the well-being of society.

Demand for AI-related jobs has been growing constantly in recent years, but this growth has varied widely across cities and industries. Arizona State University assistant professor Christos Makridis and Saurabh Mishra, HAI AI Index manager and researcher, wanted to understand the effects of AI on society independent of these variables.

For this, they examined the number of AI-related job listings by city in the U.S. using Stanford HAI's AI Index, an open-source project that tracks and visualizes data on AI. They found that, between 2014 and 2018, cities with greater increases in AI-related job postings exhibited greater economic growth. This relationship was dependent on a city's ability to leverage its inherent capabilities in industry and education to create AI-based employment opportunities. This meant that only cities with certain infrastructure, such as high-tech services and more educated workers, benefited from this growth.

Next, the researchers studied how this growth translated to well-being at a macro level using data from Gallup's U.S. Daily Poll, which surveys 1,000 different people each day on five components of well-being: physical, social, career, community, and financial. The researchers studied the correlation between the number of AI jobs and the poll results, controlling for many factors, such as the demographic characteristics of a population and the presence of universities in a given city. They found that AI-related job growth, mediated by economic growth, was positively associated with improved well-being, especially for the physical, social, and financial components.
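The study's actual econometrics are far richer, but the core idea of "controlling for" a factor can be illustrated with a toy partial correlation: regress both variables on the control, then correlate the residuals. All of the data below is invented for illustration:

```python
# Toy illustration (not the study's method or data): why controlling
# for a confounder matters when correlating AI jobs with well-being.

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length samples.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def residuals(ys, xs):
    # Residuals of a simple linear regression of ys on xs.
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return [y - (my + b * (x - mx)) for x, y in zip(xs, ys)]

# Hypothetical city data: education drives both AI jobs and
# well-being, inducing a spurious positive raw correlation.
noise     = [1, -1, 1, -1, 1, -1, 1, -1]
education = [1, 2, 3, 4, 5, 6, 7, 8]
ai_jobs   = [2 * c + e for c, e in zip(education, noise)]
wellbeing = [3 * c - e for c, e in zip(education, noise)]

raw = pearson(ai_jobs, wellbeing)
partial = pearson(residuals(ai_jobs, education),
                  residuals(wellbeing, education))

print(f"raw correlation:     {raw:+.2f}")      # strongly positive
print(f"partial correlation: {partial:+.2f}")  # flips once education is held fixed
```

This is why the study's positive association is noteworthy only because it survives such controls; without them, a raw correlation could simply reflect that educated cities have both more AI jobs and happier residents.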

This was a surprising finding given the publics concern over AIs potentially adverse effects on quality of life and overall happiness.

The researchers believe that their study is the first quantitative investigation of the relationship between AI and social well-being. While their findings are intriguing, they are also correlative. The study cant conclude whether AI is the cause of the observed improvement in well-being.

Nevertheless, the study makes an important and unique contribution to understanding the impact of AI on society. The fact that we found this robust, positive association, even after we control for things like education, age, and other measures of industrial composition, I think is all very positive, Makridis says.

Their findings also offer a course of action to policymakers. The researchers suggest that city leaders introduce smart industrial policies, such as the Endless Frontier Act, to support scientific and technological innovation through increased funding and investments targeted for AI-based research and discovery. These policies along with ones that promote higher education can help balance the economic inequality between cities by providing them with opportunities to grow.

Given that [cities] have an educated population set, a good internet connection, and residents with programming skills, they can drive economic growth, Mishra says. Supporting the AI-based industry can improve the economic growth of any city, and thus the well-being of its residents.

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.


Four Practical Applications Of Artificial Intelligence And 5G – Forbes


It is no secret that artificial intelligence (AI) is often used as a technical marketing whitewash. Many companies claim that their algorithms and data scientists enable a differentiated approach in the networking infrastructure space. But what are the practical applications of AI for connectivity and, in particular, 5G? From my perspective, they fall into four key areas. Here I will provide my insights into each and highlight what I believe is the practical functionality for operators, subscribers and equipment providers.

Smart automation

Automation is all about reducing human error and improving network performance and uptime through activities such as low- to no-touch device configuration, provisioning, orchestration, monitoring, assurance and reactive issue resolution. AI promises to deliver the "smarts" in analyzing the tasks above, steering networking toward a more closed-loop process. Pairing all of this with 5G should help mobile service providers offer simpler activations, higher performance and the rapid deployment of new services. The result should be higher average revenue per subscriber (ARPU) for operators, and a more reliable connection and better user experience for subscribers.

Predictive remediation

Over time, I believe AI will evolve to enable network operators to move from reactive to proactive issue resolution. They will be able to evaluate large volumes of data for anomalies and make course corrections before issues arise. 5G should enable networks to better handle the complexity of these predictive functions and support significantly more connected devices. We're beginning to see AI-powered predictive remediation applied to the enterprise networking sector with positive results, via some tier-one carriers and 5G infrastructure providers such as Ericsson. In my opinion, one of the most significant impacts of AI in mobile networks will be the reduction of subscriber churn. That is a huge consideration: carriers are spending billions of dollars building fixed and mobile 5G networks, and they must be able to add and retain customers.
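As an illustration of the underlying idea only (carrier systems are far more sophisticated than this), anomaly detection can be as simple as flagging metric samples that sit several standard deviations from the baseline. The data and the 2.5-sigma threshold below are invented:

```python
# Toy anomaly detector for "predictive remediation": flag metric
# samples that deviate sharply from the baseline before they become
# outages. Illustrative only; threshold and data are made up.

def zscore_anomalies(samples, threshold=2.5):
    m = sum(samples) / len(samples)
    var = sum((s - m) ** 2 for s in samples) / len(samples)
    sd = var ** 0.5 or 1.0  # guard against zero variance
    return [i for i, s in enumerate(samples) if abs(s - m) / sd > threshold]

latency_ms = [20, 21, 19, 22, 20, 21, 95, 20, 19]  # one spike at index 6
print(zscore_anomalies(latency_ms))  # [6]
```

A real proactive system would act on such flags (rerouting traffic, reprovisioning a cell) before subscribers notice, which is precisely the churn-reduction argument above.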

Digital transformation acceleration

One of the pandemic's silver linings is the acceleration, out of necessity, of businesses' digital transformation. The distributed nature of work from home has put tremendous pressure on corporate and mobile networks from a scalability, reliability and security perspective. Many connectivity infrastructure providers are embracing AIOps for its potential to supercharge DevOps and SecOps. AI will also help operators better manage the lifecycle of 5G deployments from a planning, deployment, ongoing operations and maintenance perspective. For example, China Unicom leveraged AI to transform how it internally manages operations and how it interfaces with partners and customers. In 2019, the operator reported a 30% reduction in time to product delivery and a 60% increase in productivity for leased line activations.

Enhanced user experiences

The combination of AI and 5G will unlock transformative user experiences across consumer and enterprise market segments. I expanded on this topic in my Mobile World Congress 2019 analysis. At a high level, AI has the potential to reduce the number of subscriber service choices, presenting the most relevant ones based on past behavior. I believe the result will be higher subscriber loyalty and operator monetization.

Wrapping Up

Though AI is hyped all around, there is particular synergy with 5G. Mobile networks are no longer just a "dumb pipe" for data access. AI can improve new device provisioning, deliver high application and connectivity performance, accelerate digital transformation and provide exceptional user experiences. For service providers, I also believe AI and 5G will result in operational expense savings and drive incremental investment in new service delivery. In my mind, that is a win-win for subscribers, operators, and infrastructure providers alike.

Disclosure: My firm, Moor Insights & Strategy, like all research and analyst firms, provides or has provided research, analysis, advising, and/or consulting to many high-tech companies in the industry, including Ericsson. I do not hold any equity positions with any companies cited in this column.


Artificial intelligence solutions built in India can serve the world – The Indian Express

Updated: October 8, 2020 8:43:31 am

Written by Abhishek Singh

The RAISE 2020 summit (Responsible AI for Social Empowerment) has brought issues around artificial intelligence (AI) to the centre of policy discussions. Countries across the world are making efforts to be part of the AI-led digital economy, which is estimated to contribute around $15.7 trillion to the global economy by 2030. India, with its AI for All strategy, a vast pool of AI-trained workforce and an emerging startup ecosystem, has a unique opportunity to be a major contributor to AI-driven solutions that can revolutionise healthcare, agriculture, manufacturing, education and skilling.

AI is the branch of computer science concerned with developing machines that can complete tasks that typically require human intelligence. With the explosion of available data and the expansion of computing capacity, the world is witnessing rapid advancements in AI, machine learning and deep learning, which are transforming almost all sectors of the economy.

India has a large young population that is skilled and eager to adopt AI. The country has been ranked second on the Stanford AI Vibrancy Index, primarily on account of its large AI-trained workforce. Our leading technology institutes, like the IITs, IIITs and NITs, have the potential to be the cradle of AI researchers and startups. India's startups are innovating and developing solutions with AI across education, health, financial services and other domains to solve societal problems.

Deep-learning algorithms can give healthcare providers insights for predicting future events for patients. They can also aid in the early detection and prevention of diseases by capturing patients' vitals. A Bengaluru-based startup has developed a non-invasive, AI-enabled technology to screen for early signs of breast cancer. Similarly, hospitals in Tamil Nadu are using machine-learning algorithms to detect diabetic retinopathy and help address the shortage of eye doctors. For the COVID-19 response, an AI-enabled chatbot was used by MyGov for ensuring communications. Similarly, the Indian Council of Medical Research (ICMR) deployed the Watson Assistant on its portal to respond to specific queries on COVID-19 from frontline staff and data-entry operators at various testing and diagnostic facilities across the country. AI-based applications have helped biopharmaceutical companies significantly shorten the preclinical drug identification and design process from several years to a few days or months. This intervention has been used by pharmaceutical companies to identify possible therapies to help combat the spread of COVID-19 by repurposing drugs.

Opinion | An AI future set to take over post-Covid world

AI-based solutions for water management, crop insurance and pest control are also being developed. Technologies like image recognition, drones, and automated intelligent monitoring of irrigation systems can help farmers kill weeds more effectively, harvest better crops and ensure higher yields. Voice-based products with strong vernacular-language support can help make accurate information more accessible to farmers. A pilot project taken up in three districts (Bhopal, Rajkot and Nanded) has developed an AI-based decision-support platform, combined with weather-sensing technology, to give farm-level advisories about weather forecasts and soil-moisture information to help farmers make decisions regarding water and crop management. ICRISAT has developed an AI-powered sowing app, which utilises weather models and data on local crop yield and rainfall to more accurately predict and advise local farmers on when they should plant their seeds. This has led to an increase in yields of 10 to 30 per cent for farmers. AI-based systems can also help in establishing partnerships with financial institutions with a strong rural presence to provide farmers with access to credit.

An AI-based flood forecasting model that has been implemented in Bihar is now being expanded to cover the whole of India to ensure that around 200 million people across 2,50,000 square kilometres get alerts and warnings 48 hours earlier about impending floods. These alerts are given in nine languages and are localised to specific areas and villages with adequate use of infographics and maps to ensure that it reaches all.

The Central Board of Secondary Education has integrated AI in the school curriculum to ensure that students passing out have the basic knowledge and skills of data science, machine learning and artificial intelligence. The Ministry of Electronics and Information Technology (MeitY) had launched a Responsible AI for Youth programme this year in April, wherein more than 11,000 students from government schools completed the basic course in AI.

As AI works for digital inclusion in India, it will have a ripple effect on economic growth and prosperity. Analysts predict that AI can help add up to $957 billion to the Indian economy by 2035. The opportunity for AI in India is colossal, as is the scope for its implementation. By 2025, data and AI can add over $500 billion and almost 20 million jobs to the Indian economy.

Opinion | Automation and AI in a changing business landscape

India's AI for All strategy focuses on responsible AI, building AI solutions at scale with an intent to make India the "AI garage" of the world: a trusted nation to which the world can outsource AI-related work. AI solutions built in India will serve the world.

AI derives strength from data. To this end, the government is in the process of putting in place a strong legal framework governing the data of Indians. The legislation stems from a desire to become a highly secure and ethical AI powerhouse. India wants to build a data-rich and data-driven society, as data, through AI, offers limitless opportunities to improve society, empower individuals and increase the ease of doing business.

The RAISE 2020 summit has brought together global experts to create a roadmap for responsible AI: an action plan that can help create replicable models with a strong foundation of ethics built in. With the participation of more than 72,000 people from 145 countries, RAISE 2020 has become a true global platform for the exchange of ideas and thoughts for creating a robust AI roadmap for the world.

This article first appeared in the print edition on October 8, 2020 under the title "Making AI work for India." The writer is president and CEO, NeGD; CEO, MyGov; and MD and CEO, Digital India Corporation.




What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence? – Forbes

There's been a great deal of hype and excitement in the artificial intelligence (AI) world around a newly developed technology known as GPT-3. Put simply, it's an AI that is better at creating content with a language structure (human or machine language) than anything that has come before it.


GPT-3 was created by OpenAI, a research business co-founded by Elon Musk, and has been described as the most important and useful advance in AI for years.

But there's some confusion over exactly what it does (and indeed doesn't do), so here I will try to break it down into simple terms for any non-techy readers interested in understanding the fundamental principles behind it. I'll also cover some of the problems it raises, as well as why some people think its significance has been somewhat overinflated by hype.

What is GPT-3?

Starting with the very basics, GPT-3 stands for Generative Pre-trained Transformer 3; it's the third version of the tool to be released.

In short, this means that it generates text using algorithms that are pre-trained: they've already been fed all of the data they need to carry out their task. Specifically, they've been fed around 570 GB of text information gathered by crawling the internet (a publicly available dataset known as CommonCrawl), along with other texts selected by OpenAI, including the text of Wikipedia.

If you ask it a question, you would expect the most useful response to be an answer. If you ask it to carry out a task such as creating a summary or writing a poem, you will get a summary or a poem.

More technically, it has also been described as the largest artificial neural network ever created; I will cover that further down.

What can GPT-3 do?

GPT-3 can create anything that has a language structure, which means it can answer questions, write essays, summarize long texts, translate languages, take memos, and even create computer code.

In fact, in one demo available online, it is shown creating an app that looks and functions similarly to the Instagram application, using a plugin for the software tool Figma, which is widely used for app design.

This is, of course, pretty revolutionary, and if it proves to be usable and useful in the long-term, it could have huge implications for the way software and apps are developed in the future.

As the code itself isn't available to the public yet (more on that later), access is only available to selected developers through an API maintained by OpenAI. Since the API was made available in June this year, examples have emerged of poetry, prose, news reports, and creative fiction.

This article is particularly interesting: in it, you can see GPT-3 making a quite persuasive attempt at convincing us humans that it doesn't mean any harm, although its robotic honesty means it is forced to admit that "I know that I will not be able to avoid destroying humankind" if evil people make it do so.

How does GPT-3 work?

In terms of where it fits within the general categories of AI applications, GPT-3 is a language prediction model. This means that it is an algorithmic structure designed to take one piece of language (an input) and transform it into what it predicts is the most useful following piece of language for the user.
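GPT-3's transformer architecture is vastly more sophisticated, but the basic "predict the next piece of language" idea can be shown with a toy model that simply counts which word follows which in a tiny corpus. The corpus and code are illustrative only:

```python
# A drastically simplified stand-in for a language prediction model:
# count word-to-next-word transitions in a tiny corpus, then predict
# the most frequent continuation. GPT-3 does this with 175 billion
# learned weights and far more context, but the prediction framing
# is the same.
from collections import Counter, defaultdict

corpus = ("the house has a red door . the house has a red roof . "
          "the barn has a red door .").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1  # tally each observed continuation

def predict_next(word):
    return following[word].most_common(1)[0][0]

print(predict_next("red"))    # 'door' (seen twice, vs 'roof' once)
print(predict_next("house"))  # 'has'
```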

It can do this thanks to the training analysis it has carried out on the vast body of text used to pre-train it. Unlike other algorithms that, in their raw state, have not been trained, OpenAI has already expended the huge amount of compute resources necessary for GPT-3 to understand how languages work and are structured. The compute time necessary to achieve this is said to have cost OpenAI $4.6 million.

To learn how to build language constructs, such as sentences, it employs semantic analytics: studying not just the words and their meanings, but also gathering an understanding of how the usage of words differs depending on the other words used in the text.

It's also a form of machine learning termed unsupervised learning, because the training data does not include any information on what is a "right" or "wrong" response, as is the case with supervised learning. All of the information it needs to calculate the probability that its output will be what the user needs is gathered from the training texts themselves.

This is done by studying the usage of words and sentences, then taking them apart and attempting to rebuild them itself.

For example, during training, the algorithms may encounter the phrase "the house has a red door." It is then given the phrase again, but with a word missing, such as "the house has a red X."

It then scans all of the text in its training data (hundreds of billions of words, arranged into meaningful language) and determines what word it should use to recreate the original phrase.

To start with, it will probably get it wrong, potentially millions of times. But eventually, it will come up with the right word. By checking its original input data, it will know it has the correct output, and weight is assigned to the algorithm process that provided the correct answer. This means that it gradually learns which methods are most likely to come up with the correct response in the future.
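That mask-guess-and-reward loop can be caricatured in a few lines. Real GPT-3 training adjusts billions of continuous weights by gradient descent over hundreds of billions of words; here the "weights" are simple vote counts over an invented three-sentence corpus:

```python
# Cartoon of the training loop described above: hide the word after
# "red", try every candidate, and reinforce whichever guess matches
# the original text. (Illustrative only; not how GPT-3 is implemented.)

sentences = [
    "the house has a red door",
    "the house has a red roof",
    "the barn has a red door",
]
vocab = sorted({w for s in sentences for w in s.split()})
weights = {w: 0 for w in vocab}  # learned preference for filling "red ___"

for s in sentences:
    target = s.split()[-1]       # the masked word
    for guess in vocab:          # try each candidate in turn
        if guess == target:
            weights[guess] += 1  # reward the guess that reconstructs the text

best = max(weights, key=weights.get)
print(best)  # 'door': rewarded twice, vs once for 'roof'
```

The point of the caricature is the feedback signal: the model needs no labels, because the hidden word itself tells it whether a guess was right.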

The scale of this dynamic "weighting" process is what makes GPT-3 the largest artificial neural network ever created. It has been pointed out that, in some ways, what it does is nothing new, as transformer models of language prediction have been around for many years. However, the number of weights the algorithm dynamically holds in its memory and uses to process each query is 175 billion, ten times more than its closest rival, produced by Nvidia.

What are some of the problems with GPT-3?

GPT-3's ability to produce language has been hailed as the best that has yet been seen in AI; however, there are some important considerations.

The CEO of OpenAI himself, Sam Altman, has said, "The GPT-3 Hype is too much. AI is going to change the world, but GPT-3 is just an early glimpse."

Firstly, it is a hugely expensive tool to use right now, due to the huge amount of compute power needed to carry out its function. This means the cost of using it would be beyond the budget of smaller organizations.

Secondly, it is a closed or black-box system. OpenAI has not revealed the full details of how its algorithms work, so anyone relying on it to answer questions or create products useful to them would not, as things stand, be entirely sure how they had been created.

Thirdly, the output of the system is still not perfect. While it can handle tasks such as creating short texts or basic applications, its output becomes less useful (in fact, described as "gibberish") when it is asked to produce something longer or more complex.

These are clearly issues that we can expect to be addressed over time as compute power continues to drop in price, standardization around openness of AI platforms is established, and algorithms are fine-tuned with increasing volumes of data.

All in all, it's a fair conclusion that GPT-3 produces results that are leaps and bounds ahead of what we have seen previously. Anyone who has seen the results of AI language generation knows the results can be variable, and GPT-3's output undeniably seems like a step forward. When we see it properly in the hands of the public and available to everyone, its performance should become even more impressive.


Penn researchers get $3.2 million grant to use artificial intelligence for improving heart transplants – PhillyVoice.com

A team of researchers at Penn Medicine is turning to artificial intelligence as a diagnostic tool to improve outcomes for patients who receive heart transplants.

Each year, more than 2,000 heart transplants are performed in the United States, but recipients' immune systems reject as many as 30% to 40% of these organs.

A new grant from the National Institutes of Health will support research into the use of artificial intelligence to better detect the risk of rejection and the immune mechanisms that underlie it. The $3.2 million grant will be shared over four years by Penn Medicine, Case Western Reserve University, Cleveland Clinic and Cedars-Sinai Medical Center.

When a patient's immune system recognizes a donor heart as a foreign object, the organ can become damaged and eventually rejected.

The current grading standard for such damage has poor diagnostic accuracy, leaving patients vulnerable to receiving too much or too little treatment.

With the grant funding, researchers will use AI to analyze cardiac biopsy tissue images and better distinguish between rejection grades. They hope the analysis will also detect patterns of immune cells that reveal the mechanism of rejection.

With improved diagnostic accuracy, researchers believe they may be able to spot serious rejection earlier on, reduce rates of infection, and prevent complications of immune-suppressing drugs.

By improving identification of rejection mechanisms, clinicians may be able to better target medications and predict long-term outcomes, reducing the need for frequent heart biopsies.

The research team will compare the relative performance of the AI analysis with human pathologists to see how computer-aided tissue diagnostics can serve as a decision support tool.

"This research is focused on a critical component of heart transplantation: improving patient outcomes," said Kenneth B. Margulies, principal investigator and professor of cardiovascular medicine at Penn. "Unfortunately, the number of patients with end-stage heart failure is increasing. But research like this is another step in the right direction for improving survival and quality of life for heart failure patients."

Read more here:
Penn researchers get $3.2 million grant to use artificial intelligence for improving heart transplants - PhillyVoice.com

Thomas J. Fuchs, DSc, Named Dean of Artificial Intelligence and Human Health and Co-Director of the Hasso Plattner Institute for Digital Health at…

Newswise (New York, NY, October 7, 2020) Thomas J. Fuchs, DSc, a prominent scientist in the groundbreaking field of computational pathology, the use of artificial intelligence to analyze images of tissue samples to identify disease and predict outcomes, has been appointed Co-Director of the Hasso Plattner Institute for Digital Health at Mount Sinai, Dean of Artificial Intelligence (AI) and Human Health, and Professor of Computational Pathology and Computer Science in the Department of Pathology at the Icahn School of Medicine at Mount Sinai. In his new role, he will lead the next generation of scientists and clinicians in using machine learning and other forms of artificial intelligence to develop novel diagnostics and treatments for acute and chronic disease.

"Dr. Fuchs has advanced the field of precision medicine through his contributions to artificial intelligence in pathology, helping the health care industry better understand and fight cancer. His expertise will enhance Mount Sinai's continued efforts to use digital health to train future medical leaders and improve care for our patients," said Dennis S. Charney, MD, Anne and Joel Ehrenkranz Dean, Icahn School of Medicine at Mount Sinai, and President for Academic Affairs, Mount Sinai Health System. "By building on existing AI and health initiatives, like the Mount Sinai Digital and Artificial Intelligence-Enabled Pathology Center of Excellence, Dr. Fuchs's guidance, along with shared knowledge and academic excellence from our team of researchers and clinicians, will help revolutionize health care and science, nationally and globally."

Dr. Fuchs's trailblazing work includes developing novel methods for the analysis of digital microscopy slides to better understand genetic mutations and their influence on changes in tissues. He has been recognized for developing large-scale systems for mapping the pathology, origins, and progression of cancer, a breakthrough achieved by building a high-performance compute cluster to train deep neural networks at petabyte scale.

"Mount Sinai is at the forefront of digital health in medicine with an exceptionally talented team driving innovation forward. I am tremendously excited to join them in expanding initiatives and efforts to advance artificial intelligence in human health; the honor of leading this task is utterly humbling," said Dr. Fuchs. "Together, we will weave a fabric of AI services that help nurses, physicians, and hospital leadership make personalized decisions for every patient. The key goals are to help especially vulnerable populations, improve treatment for all, and use AI to democratize health care throughout New York and across the globe."

His vision for Mount Sinai is to further revolutionize medical practice by pushing the boundaries of AI, with the ultimate goal of transforming quality of life and human health for people all over the globe. That vision includes transforming pathology, the study of the causes and effects of disease or injury, from a qualitative to a quantitative science, and empowering more doctors and medical students to use their talent for good by joining this novel field.

Dr. Fuchs will focus on developing a new system and code for machine learning; large-scale research models and computation; more effectively using data to apply to real-world clinical settings; and continuing to expand the use of computational pathology in treatments through collaboration.

He will co-lead the Hasso Plattner Institute for Digital Health at Mount Sinai, established in 2019 by the Mount Sinai Health System and the Hasso Plattner Institute with generous philanthropic support from the Hasso Plattner Foundation.

"Dr. Fuchs has made key contributions in AI for cancer diagnosis, which will be significant as we work to save lives, prevent disease, and improve the health of patients using artificial intelligence in real-time analysis of comprehensive health data from electronic health records, genetic information, and mobile sensor technologies," said Erwin P. Bottinger, MD, Co-Director of the Hasso Plattner Institute for Digital Health at Mount Sinai and Professor of Digital Health-Personalized Medicine, Hasso Plattner Institute, University of Potsdam, Germany. "As Dr. Fuchs and I collaborate to advance artificial intelligence and machine learning in health care, the institute will continue to be a force in creating progressive digital health services."

Before joining Mount Sinai, Dr. Fuchs was Director of the Warren Alpert Center for Digital and Computational Pathology at Memorial Sloan Kettering Cancer Center (MSK) and Associate Professor at the Weill Cornell Graduate School of Medical Sciences. At MSK he led a laboratory focused on computational pathology and medical machine learning. Dr. Fuchs co-founded Paige.AI in 2017 and led its initial growth into the leading AI company in pathology. He is a former research technologist at NASA's Jet Propulsion Laboratory and a former visiting scientist at the California Institute of Technology. Dr. Fuchs holds a Doctor of Sciences in Machine Learning from ETH Zurich and an MS in Technical Mathematics from Graz Technical University in Austria.

"We are very pleased to welcome Thomas to our faculty," said Eric Nestler, MD, PhD, Nash Family Professor of Neuroscience, Director of The Friedman Brain Institute, and Dean for Academic and Scientific Affairs, Icahn School of Medicine at Mount Sinai. "His vast knowledge in data science, machine learning, and artificial intelligence will significantly move Mount Sinai forward as a world leader in health care."

About the Mount Sinai Health System

The Mount Sinai Health System is New York City's largest academic medical system, encompassing eight hospitals, a leading medical school, and a vast network of ambulatory practices throughout the greater New York region. Mount Sinai is a national and international source of unrivaled education, translational research and discovery, and collaborative clinical leadership, ensuring that we deliver the highest quality care, from prevention to treatment of the most serious and complex human diseases. The Health System includes more than 7,200 physicians and features a robust and continually expanding network of multispecialty services, including more than 400 ambulatory practice locations throughout the five boroughs of New York City, Westchester, and Long Island. The Mount Sinai Hospital is ranked No. 14 on U.S. News & World Report's Honor Roll of the Top 20 Best Hospitals in the country, and the Icahn School of Medicine is ranked among the Top 20 Best Medical Schools in the country. Mount Sinai Health System hospitals are consistently ranked regionally by specialty by U.S. News & World Report.

For more information, visit https://www.mountsinai.org or find Mount Sinai on Facebook, Twitter and YouTube.

To learn more about Dr. Thomas Fuchs and the Hasso Plattner Institute for Digital Health at Mount Sinai, watch the short video here.

Go here to read the rest:
Thomas J. Fuchs, DSc, Named Dean of Artificial Intelligence and Human Health and Co-Director of the Hasso Plattner Institute for Digital Health at...

Welcome Initiative on Artificial Intelligence – Economic Times

The first step is to enact robust data protection

It is welcome that India is hosting a global summit on artificial intelligence (AI) and that the Prime Minister has addressed the gathering, expressing commitment at the highest level of government to the wholesome development and regulation of AI. AI will fast become not just a major component of economic competitiveness but also a force multiplier in strategic capacity. It also poses serious challenges, both in itself and in the way it is put to use. Therefore, control and regulation of AI are global concerns of mounting importance, on which the G20 grouping of the world's 20 largest economies has adopted guidelines and principles.

For India to offer something more than lip service to developing AI, the first thing to do is to put in place a robust data protection framework. Data is oxygen for AI, and how data is used to train AI has implications both for the data subjects whose data is utilised for the purpose and for the kind of algorithm that is produced. In the US, racial bias has been built into facial recognition software that makes use of AI. That embarrassment has led to some principles being formulated for AI development as well. Transparency and explainability, for example: if someone adversely affected by AI decisions wants to challenge a decision, the AI in use must be able to explain how and why it reached the conclusion it did. Robustness, safety and security must be ensured, for which traceability of the data sets used for creating or training the algorithms involved is essential. Accountability is another principle: AI actors must be accountable for the proper functioning of AI. Regulation of AI and of algorithms must emerge as a robust and active field of study and practice in India.
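The explainability principle above can be made concrete with a toy example. The sketch below, with entirely hypothetical feature names and weights, shows a scorer whose every decision decomposes exactly into per-feature contributions, so an affected person can be told which factors drove the outcome:

```python
# Toy explainable decision model: a linear scorer whose output can be
# decomposed exactly into per-feature contributions. All feature names,
# weights, and thresholds here are invented for illustration.

WEIGHTS = {"income": 0.5, "years_employed": 0.3, "missed_payments": -0.8}
BIAS = 0.1
THRESHOLD = 0.0

def score(applicant):
    # Each feature's contribution = weight * value; the bias plus the
    # sum of contributions is the final score, so the explanation is
    # exact rather than a post-hoc approximation.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = BIAS + sum(contributions.values())
    return total, contributions

def explain(applicant):
    total, contributions = score(applicant)
    decision = "approve" if total >= THRESHOLD else "deny"
    # Sort features by absolute impact so the main reasons come first.
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return {"decision": decision, "score": round(total, 3), "reasons": reasons}

report = explain({"income": 1.2, "years_employed": 0.5, "missed_payments": 1.0})
print(report["decision"], report["reasons"][0][0])
```

Complex models need approximation techniques to produce such attributions, but the regulatory demand is the same: a person challenging a decision should receive the ranked reasons behind it.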

The Prime Minister said that AI should not be weaponised in the hands of non-State actors. How to translate this fine sentiment into action is the question. The Global Partnership on Artificial Intelligence excludes China, whose labs and companies operate at the cutting edge of AI. That makes global coordination to keep AI safe rather tough.

This piece appeared as an editorial opinion in the print edition of The Economic Times.

View original post here:
Welcome Initiative on Artificial Intelligence - Economic Times

Future reality: Triad of Internet of Things, Artificial Intelligence & Blockchain in action – The Financial Express

By Sanjay Pathak

Blockchain today is still in its infancy, and its mainstream value is yet to be realised. While it is certain that blockchain will disrupt existing solutions, not only in industry and commerce but in almost all aspects of our day-to-day lives, it cannot do so by itself. The same holds true for the Internet of Things (IoT) and Artificial Intelligence (AI). The underlying fact is that to deliver real value, new-age emerging technologies such as blockchain, AI and IoT have to work in tandem. As we begin to understand the new normal in the midst of the corona pandemic, it will be important to draw value from any digital transformation that firms undertake. Businesses will have to think beyond their domain and scope to provide services that are of actual value to consumers.

How can this happen? IoT has brought new and cheaper ways to communicate with things, in ways that were not fathomable in the past. Blockchain, with its promise of immutability, transparency, security and interoperability, allows us to exploit otherwise unused resources, trade the un-tradable, and enable new ecosystems that were not possible before. The new entrant, AI (inclusive of machine and deep learning, computer vision, NLP, and robots or autonomous machines), has already started to deliver great value to many industries, so much so as to reduce or even replace the human element. Further advancement in 5G communication is a positive catalyst for this ecosystem.

However, these technologies may not reach their full potential in a disjointed ecosystem, or under industries' siloed approach towards them. In the above combination, data becomes the common driving factor. While IoT produces data from new sources and sensors, blockchain safeguards it and ensures immutability, and the AI layer on top helps deliver new business meanings and outcomes in almost real time. In summary, the data value chain comes from new technologies enabling the collection, sharing, security, immutability and analysis of data, and the automation of decisions with minimal human involvement.

Let's run this model on a practical consumer problem of provenance: the classic Farm to Table use case. The big questions that need solutions concern quality, credibility, genuineness, safety, increased efficiency and the correct distribution of revenue. IoT takes care of monitoring conditions maintained in farms, with respect to temperature, humidity, soil nutrients and growth progress, as well as conditions at processing centres and in logistics. All this information can be stored on blockchain-based smart contracts. An AI-based engine on top of this, with feeds from weather systems and other sources, can trigger and automatically execute smart contracts and take required action based on pre-agreed rules, including payments. In an adverse event like an outbreak at any stage, the source could be easily traced and isolated. Next, this can be extended to insurance and forward commodity trading using a trade setup, thus bringing real value from agriculture, supply chain, financial services, insurance and other industries combined.
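The Farm to Table flow can be sketched in miniature. In the sketch below, a hash-chained log stands in for the blockchain, a simple rule stands in for the smart contract, and the chain check stands in for traceability; the stages, temperature threshold, and action names are all illustrative, not any vendor's API:

```python
import hashlib
import json

class ProvenanceLedger:
    """Toy hash-chained log standing in for a blockchain (illustrative only)."""
    def __init__(self):
        self.blocks = []

    def record(self, reading):
        # Each block's hash covers the previous hash plus its own payload,
        # so altering any earlier reading invalidates every later block.
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps(reading, sort_keys=True)
        block_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.blocks.append({"prev": prev, "data": reading, "hash": block_hash})

    def verify(self):
        # Recompute the whole chain; any tampering breaks the link.
        prev = "0" * 64
        for b in self.blocks:
            payload = json.dumps(b["data"], sort_keys=True)
            if b["prev"] != prev:
                return False
            if hashlib.sha256((prev + payload).encode()).hexdigest() != b["hash"]:
                return False
            prev = b["hash"]
        return True

def smart_contract(reading):
    # Pre-agreed rule standing in for the AI/contract layer: release
    # payment only if cold-chain conditions held at this stage.
    return "release_payment" if reading["temp_c"] <= 8.0 else "hold_and_flag"

ledger = ProvenanceLedger()
actions = []
for reading in [{"stage": "farm", "temp_c": 4.2},
                {"stage": "transport", "temp_c": 9.5},
                {"stage": "store", "temp_c": 5.1}]:
    ledger.record(reading)
    actions.append((reading["stage"], smart_contract(reading)))

print(actions)          # the transport leg breached the cold chain
print(ledger.verify())  # True: the recorded history is intact
```

In an outbreak scenario, the same chain walk that powers `verify` lets an investigator pinpoint which stage's reading first went out of range and isolate that source.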

IoT has come a long way in improving the type, size and cost of sensors, and even their usage in some industries; the real consumer-centric benefits can be manifold. AI faces the challenge of winning accuracy, trust and confidence before it can stand in for the human cognitive mind. Building such ecosystems without regulatory pressure is not easy, if not impossible. This is one of the primary factors behind blockchain and other similar transformative technologies not gaining mainstream acceptance or adoption.

Let's also keep an eye on quantum computing breakthroughs, as quantum computing not only threatens the key features of these emerging technologies but will severely impact the best encryption, security and cryptography that exist today. This means any industry, digital ecosystem or IT infrastructure will have to evolve at a rapid pace before being negatively impacted.

The writer is head of the Blockchain, Healthcare & Insurance Practice at 3i Infotech

Originally posted here:
Future reality: Triad of Internet of Things, Artificial Intelligence & Blockchain in action - The Financial Express