The intelligence community is developing its own AI ethics – C4ISRNet

The Pentagon made headlines last month when it adopted its five principles for the use of artificial intelligence, marking the end of a months-long effort with significant public debate over what guidelines the department should employ as it develops new AI tools and AI-enabled technologies.

Less well known is that the intelligence community is developing its own principles governing the use of AI.

"The intelligence community has been doing its own work in this space as well. We've been doing it for quite a bit of time," said Ben Huebner, chief of the Office of the Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, at an Intelligence and National Security Alliance event March 4.

According to Huebner, ODNI is making progress in developing its own principles, although he did not give a timeline for when they would be officially adopted. They will be made public, he added, noting there likely wouldn't be any surprises.

"Fundamentally, there's a lot of consensus here," said Huebner, who noted that ODNI had worked closely with the Department of Defense's Joint Artificial Intelligence Center on the issue.

Key to the intelligence community's thinking is focusing on what is fundamentally new about AI.

"Bluntly, there's a bit of hype," said Huebner. "There's a lot of things that the intelligence community has been doing for quite a bit of time. Automation isn't new. We've been doing automation for decades. The amount of data that we're processing worldwide has grown exponentially, but having a process for handling data sets by the intelligence community is not new either."

What is new is the use of machine learning for AI analytics. Instead of being explicitly programmed to perform a task, machine learning tools are fed data to train them to identify patterns or make inferences before being unleashed on real world problems. Because of this, the AI is constantly adapting or learning from each new bit of data it processes.
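The contrast can be made concrete with a minimal, invented sketch (not any IC system): an explicitly programmed rule with a fixed threshold, next to a "learned" rule whose threshold comes from labeled training examples and would shift if the data changed.

```python
# A minimal illustrative sketch; all values here are invented.

def rule_based(value):
    # Explicitly programmed: the developer hard-codes the threshold.
    return value > 50

def fit_threshold(samples, labels):
    # "Training": choose the cutoff that misclassifies the fewest examples.
    candidates = sorted(set(samples))
    return min(
        candidates,
        key=lambda t: sum((s > t) != y for s, y in zip(samples, labels)),
    )

train_x = [10, 20, 30, 60, 70, 80]           # made-up sensor readings
train_y = [False, False, False, True, True, True]
threshold = fit_threshold(train_x, train_y)  # inferred from the data, not coded

def learned(value):
    return value > threshold

# The two rules can disagree on inputs the developer never anticipated:
print(rule_based(40), learned(40))   # False True
```

Feed the learned rule different training data and its threshold moves, which is exactly the adaptive behavior that static, hand-coded analytics lack.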


That is fundamentally different from other IC analytics, which are static.

"Why we need to sort of think about this from an ethical approach is that the government structures, the risk management approach that we have taken for our analytics, assumes one thing that is not true anymore. It generally assumes that the analytic is static," explained Huebner.

To account for that difference, AI requires the intelligence community to think more about explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic.
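The distinction can be illustrated with a toy linear model (weights and feature names invented for this sketch): explainability covers how the analytic works overall, while interpretability breaks down why one particular result came out the way it did.

```python
# Toy linear analytic; weights and feature names are invented.
WEIGHTS = {"signal_a": 2.0, "signal_b": -1.0}

def score(features):
    # The model as a whole: inspecting WEIGHTS is "explainability".
    return sum(WEIGHTS[name] * value for name, value in features.items())

def interpret(features):
    # Per-result breakdown of one score: this is "interpretability".
    return {name: WEIGHTS[name] * value for name, value in features.items()}

x = {"signal_a": 3.0, "signal_b": 4.0}
print(score(x))       # 2.0
print(interpret(x))   # {'signal_a': 6.0, 'signal_b': -4.0}
```

For a real deep-learning analytic neither question has such a tidy answer, which is why both concepts need deliberate attention.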

"If we are providing intelligence to the president that is based on an AI analytic and he asks, as he does, how do we know this, that is a question we have to be able to answer," said Huebner. "We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient."

ODNI is also building an ethical framework to help employees implement those principles in their daily work.

"The thing that we're doing that we just haven't found an analog to in either the public or the private sector is what we're referring to as our ethical framework," said Huebner. "The drive for that came from our own data science development community, who said, 'We care about these principles as much as you do. What do you actually want us to do?'"

In other words, how do computer programmers apply these principles when they're actually writing lines of code? The framework won't provide all of the answers, said Huebner, but it will make sure employees are asking the right questions about ethics and AI.

And because of the unique dynamic nature of AI analytics, the ethical framework needs to apply to the entire lifespan of these tools. That includes the training data being fed into them. After all, it's not hard to see how a data set with an underrepresented demographic could result in a higher error rate for that demographic than the population as a whole.

"If you're going to use an analytic and it has a higher error rate for a particular population and you're going to be using it in a part of the world where that is the predominant population, we better know that," explained Huebner.
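That check can be as simple as computing error rates per group before deployment. A hedged sketch with invented group names and outcomes:

```python
# Measure an analytic's error rate separately for each population group.
# All records below are invented for illustration.

def error_rate_by_group(records):
    # records: (group, prediction, truth) triples
    totals, errors = {}, {}
    for group, pred, truth in records:
        totals[group] = totals.get(group, 0) + 1
        if pred != truth:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]
rates = error_rate_by_group(records)
print(rates)   # group_b's error rate (0.5) far exceeds group_a's (0.0)
```

A gap like the one above is precisely what an analyst would need to know before deploying the tool in a region where group_b predominates.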

The IC wants to avoid those biases due to concerns over privacy, civil liberties, and frankly, accuracy. And if biases are introduced into an analytic, intelligence briefers need to be able to explain that bias to policy makers so they can factor that into their decision making. That's part of the concepts of explainability and interpretability Huebner emphasized in his presentation.

And because they are constantly changing, these analytics will require some sort of periodic review as well as a way to catalog the various iterations of the tool. After all, an analytic that was reliable a few months ago could change significantly after being fed enough new data, and not always for the better. The intelligence community will need to continually check the analytics to understand how they're changing and compensate.
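One way to operationalize that review, sketched here with invented version labels, accuracy numbers, and tolerance: score every iteration of the analytic against a fixed validation set and flag any regression.

```python
# Periodic-review sketch: log each model iteration's accuracy on a fixed
# validation set and alert on regressions. All numbers are illustrative.

REGRESSION_TOLERANCE = 0.02  # alert if accuracy drops more than 2 points

def review(history, version, accuracy):
    history.append((version, accuracy))   # catalog this iteration
    if len(history) >= 2:
        prev_acc = history[-2][1]
        if prev_acc - accuracy > REGRESSION_TOLERANCE:
            return f"ALERT: {version} dropped from {prev_acc:.2f} to {accuracy:.2f}"
    return f"OK: {version} at {accuracy:.2f}"

history = []
print(review(history, "v1", 0.91))
print(review(history, "v2", 0.92))   # small gain: fine
print(review(history, "v3", 0.85))   # large drop after retraining: flagged
```

The history list doubles as the catalog of iterations the article describes, so a flagged version can be rolled back to a known-good predecessor.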

"Does that mean that we don't do artificial intelligence? Clearly no. But it means that we need to think a little bit differently about how we're going to manage the risk and ensure that we're providing the accuracy and objectivity that we need to," said Huebner. "There's a lot of concern about trust in AI, explainability, and the related concept of interpretability."


Artificial Intelligence to Improve the Precision of Mammograms – Imaging Technology News

March 7, 2020. The study is based on the results obtained in the Digital Mammography (DM) DREAM Challenge, an international competition led by IBM in which researchers from the Instituto de Física Corpuscular (IFIC, CSIC-UV) participated along with scientists from the UPV's Institute of Telecommunications and Multimedia Applications (iTEAM).

The team of researchers from IFIC and the iTEAM UPV was the only Spanish group to reach the end of the challenge. To do so, they developed a prediction algorithm based on convolutional neural networks, an artificial intelligence technique that simulates the neurons of the visual cortex and allows the system to classify images and learn on its own. The team also applied principles of x-ray interpretation, an area in which the group holds several patents. The Valencian team's results, along with those of the other finalists, are now published in the Journal of the American Medical Association (JAMA Network Open).

"Participating in this challenge has allowed our group to collaborate in Artificial Intelligence projects with clinical groups of the Comunidad Valenciana," stated Alberto Albiol, tenured professor at UPV and member of the iTEAM group. "This has opened opportunities for us to apply the Machine Learning techniques, as they are proposed in the article," he added.

For example, the work of the Valencian researchers is being run on Artemisa, the new computing platform for artificial intelligence at the Instituto de Física Corpuscular, funded by the European Union and the Generalitat Valenciana within the Comunitat Valenciana's FEDER operating program for 2014-2020 for the acquisition of R+D+i infrastructure and equipment.

"Designing strategies to reduce the operating costs of health care is one of the objectives of applying Artificial Intelligence sustainably," pointed out Francisco Albiol, researcher at IFIC and participant in the study. "The challenges range from the algorithms themselves to jointly designing evidence-based strategies with the medical sector. Artificial Intelligence applied at large scale is one of the most promising technologies for making health care sustainable," he noted.

The goal of the Digital Mammography (DM) DREAM Challenge was to involve a broad international scientific community (over 1,200 researchers from around the world) in evaluating whether Artificial Intelligence algorithms can equal or improve on radiologists' interpretations of mammograms.

"This DREAM Challenge allowed carrying out a rigorous and adequate evaluation of dozens of advanced deep learning algorithms in two independent databases," explained Justin Guinney, vice president of Computational Oncology at Sage Bionetworks and president of DREAM Challenges.

Led by IBM Research, Sage Bionetworks and the Kaiser Permanente Washington Research Institute, the Digital Mammography DREAM Challenge concluded that, while no algorithm by itself surpassed the radiologists, a combination of methods added to the experts' evaluations improved the accuracy of the exams. Kaiser Permanente Washington (KPW) and the Karolinska Institute (KI) of Sweden provided hundreds of thousands of de-identified mammograms and clinical data.
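That finding, that pooling algorithm outputs with the expert reading beats any single method, amounts to a weighted ensemble. A sketch with invented scores, where 1.0 means "suspicious":

```python
# Weighted ensemble of algorithm scores and a radiologist's reading.
# Scores and the 50/50 weighting are invented for illustration, not
# taken from the challenge's actual method.

def combined_score(model_scores, radiologist_score, model_weight=0.5):
    avg_model = sum(model_scores) / len(model_scores)
    return model_weight * avg_model + (1 - model_weight) * radiologist_score

models = [0.2, 0.4, 0.3]            # three algorithms' scores for one exam
print(combined_score(models, 0.1))  # 0.5 * 0.3 + 0.5 * 0.1 = 0.2
```

Averaging damps each individual model's idiosyncratic errors while the expert reading anchors the result, which is the intuition behind the challenge's conclusion.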

"Our study suggests that a combination of algorithms of Artificial Intelligence and the interpretations of the radiologists could result in a half million women per year not having to undergo unnecessary diagnostic tests in the United States alone," stated Gustavo Stolovitzky, the director of the IBM program dedicated to Translational Systems Biology and Nanotechnology in the Thomas J. Watson Research Center and founder of DREAM Challenges.

To guarantee data privacy and prevent participants from downloading mammograms containing sensitive data, the organizers applied a model-to-data system: instead of receiving the data, participants sent their algorithms to the organizers, who applied them directly to the data.

"This approach to sharing data is particularly innovative and essential for preserving the privacy of the data," said Diana Buist, of the Kaiser Permanente Washington Health Research Institute. "In addition, the inclusion of data from different countries, with different practices for carrying out mammograms, points to important translational differences in how Artificial Intelligence can be used on different populations."

Mammograms are the most widely used diagnostic technique for the early detection of breast cancer. Though this detection tool is generally effective, mammograms must be evaluated and interpreted by a radiologist, who relies on human visual perception to identify signs of cancer. An estimated 10% of the 40 million women who undergo scheduled mammograms each year in the United States receive false positives.
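The article's own figures imply the scale of the problem; a quick back-of-the-envelope check:

```python
# Scale of false positives implied by the article's figures.
screened_per_year = 40_000_000   # U.S. women screened annually
false_positive_rate = 0.10       # estimated 10% false positives

false_positives = int(screened_per_year * false_positive_rate)
print(false_positives)   # 4,000,000 women per year
```

Against that four-million baseline, the half-million reduction in unnecessary follow-ups cited by Stolovitzky would be roughly a one-in-eight improvement.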

An effective artificial intelligence algorithm that helps radiologists cut unnecessary repeat tests while still detecting clinically significant cancers would increase the diagnostic value of mammograms.

For more information: www.upv.es


Airlines take no chances with our safety. And neither should artificial intelligence – The Conversation AU

You'd think flying in a plane would be more dangerous than driving a car. In reality it's much safer, partly because the aviation industry is heavily regulated.

Airlines must stick to strict standards for safety, testing, training, policies and procedures, auditing and oversight. And when things do go wrong, we investigate and attempt to rectify the issue to improve safety in the future.

It's not just airlines, either. Other industries where things can go very badly wrong, such as pharmaceuticals and medical devices, are also heavily regulated.

Artificial intelligence is a relatively new industry, but its growing fast and has great capacity to do harm. Like aviation and pharmaceuticals, it needs to be regulated.

A wide range of technologies and applications that fit under the rubric of artificial intelligence have begun to play a significant role in our lives and social institutions. But they can be used in ways that are harmful, which we are already starting to see.

In the robodebt affair, for example, the Australian government welfare agency Centrelink used data-matching and automated decision-making to issue (often incorrect) debt notices to welfare recipients. What's more, the burden of proof was reversed: individuals were required to prove they did not owe the claimed debt.

The New South Wales government has also started using AI to spot drivers with mobile phones. This involves expanded public surveillance via mobile phone detection cameras that use AI to automatically detect a rectangular object in the driver's hands and classify it as a phone.

Read more: Caught red-handed: automatic cameras will spot mobile-using motorists, but at what cost?

Facial recognition is another AI application under intense scrutiny around the world. This is due to its potential to undermine human rights: it can be used for widespread surveillance and suppression of public protest, and programmed bias can lead to inaccuracy and racial discrimination. Some have even called for a moratorium or outright ban because it is so dangerous.

In several countries, including Australia, AI is being used to predict how likely a person is to commit a crime. Such predictive methods have been shown to impact Indigenous youth disproportionately and lead to oppressive policing practices.

AI that assists train drivers is also coming into use, and in future we can expect to see self-driving cars and other autonomous vehicles on our roads. Lives will depend on this software.

Once weve decided that AI needs to be regulated, there is still the question of how to do it. Authorities in the European Union have recently made a set of proposals for how to regulate AI.

The first step, they argue, is to assess the risks AI poses in different sectors such as transport, healthcare, and government applications such as migration, criminal justice and social security. They also look at AI applications that pose a risk of death or injury, or have an impact on human rights such as the rights to privacy, equality, liberty and security, freedom of movement and assembly, social security and standard of living, and the presumption of innocence.

The greater the risk an AI application was deemed to pose, the more regulation it would face. The regulations would cover everything from the data used to train the AI and how records are kept, to how transparent the creators and operators of the system must be, testing for robustness and accuracy, and requirements for human oversight. This would include certification and assurances that the use of AI systems is safe, and does not lead to discriminatory or dangerous outcomes.
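The proposal's sliding scale can be pictured as a lookup from risk tier to required controls. The tier names and control labels below paraphrase the article, not official EU terminology:

```python
# Risk-tiered regulation sketch; tiers and controls paraphrase the
# article's description of the EU proposal, not the official text.

CONTROLS_BY_RISK = {
    "minimal": ["voluntary codes of conduct"],
    "limited": ["transparency to users"],
    "high": [
        "training data documentation",
        "record keeping",
        "transparency to users",
        "robustness and accuracy testing",
        "human oversight",
        "safety certification",
    ],
}

def required_controls(risk_tier):
    return CONTROLS_BY_RISK.get(risk_tier, ["unclassified: assess risk first"])

print(required_controls("high"))
```

The point of the structure is that obligations scale with assessed risk, so a chatbot and a criminal-justice tool would face very different checklists.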

While the EUs approach has strong points, even apparently low-risk AI applications can do real harm. For example, recommendation algorithms in search engines are discriminatory too. The EU proposal has also been criticised for seeking to regulate facial recognition technology rather than banning it outright.

The EU has led the world on data protection regulation. If the same happens with AI, these proposals are likely to serve as a model for other countries and apply to anyone doing business with the EU or even EU citizens.

In Australia there are some applicable laws and regulations, but there are numerous gaps, and they are not always enforced. The situation is made more difficult by the lack of human rights protections at the federal level.

One prominent attempt at drawing up some rules for AI came last year from Data61, the data and digital arm of CSIRO. They developed an AI ethics framework built around eight ethical principles for AI.

These ethical principles aren't entirely irrelevant (number two is "do no harm," for example), but they are unenforceable and therefore largely meaningless. Ethics frameworks like this one for AI have been criticised as "ethics washing," and a ploy for industry to avoid hard law and regulation.

Read more: How big tech designs its own rules of ethics to avoid scrutiny and accountability

Another attempt is the Human Rights and Technology project of the Australian Human Rights Commission. It aims to protect and promote human rights in the face of new technology.

We are likely to see some changes following the Australian Competition and Consumer Commission's recent inquiry into digital platforms. And a long overdue review of the Privacy Act 1988 (Cth) is slated for later this year.

These initiatives will hopefully strengthen Australian protections in the digital age, but there is still much work to be done. Stronger human rights protections would be an important step in this direction, to provide a foundation for regulation.

Before AI is adopted even more widely, we need to understand its impacts and put protections in place. To realise the potential benefits of AI, we must ensure that it is governed appropriately. Otherwise, we risk paying a heavy price as individuals and as a society.


Google Uses DeepMind’s Artificial Intelligence to Fight Coronavirus: Is It Time to Trust AI? – Tech Times

Google is using its DeepMind artificial intelligence to combat the coronavirus, or COVID-19. With the coronavirus still spreading slowly but surely and no cure yet in sight, is there any hope that this AI might help find one?

Read More: READ! Coronavirus Has Two Strains Which Will Make it Even More Difficult to Contain Since The Other Half Doesn't Know They Are Infected Until It's Too Late

In a post published Thursday, DeepMind said it is now using its AlphaFold system to create "structure predictions of several under-studied proteins associated with SARS-CoV-2, the virus that causes COVID-19."

The predictions have not been experimentally verified, but DeepMind is confident the data will be useful to scientists working toward a better understanding of the novel coronavirus.

DeepMind stated that understanding a protein's structure usually takes months or even longer, and typically relies on prior knowledge of related protein structures. AlphaFold uses cutting-edge methods to produce "accurate predictions of the protein structure" without prior knowledge of the strain.

Assistance is helpful no matter where it comes from in the fight against the coronavirus. However, as DeepMind acknowledges, the AI had no prior knowledge of these proteins, and many what-ifs remain: What if the AI doesn't find something worthwhile? Does it even know what to look for? Can its predictions be applied in the real world? Still, there is reason for optimism: any finding, however minuscule, might be a piece of the puzzle in finding a cure, from which we would all benefit.

Read More:10 Ways to Greet Someone in Style Without Getting Sick From Deadly Coronavirus

If there are any findings from DeepMind's artificial intelligence, scientists may verify and record them and, if possible, use them to find a cure or vaccine for the deadly virus. Identifying the key components of a cure is difficult science, and even with scientists working around the clock, it will take a gigantic effort.

"Artificial intelligence could be one of humanity's most useful inventions. We research and build safe AI systems that learn how to solve problems and advance scientific discovery for all," the company advertises on its website, pointing to its advances across many fields. So far, its systems have shown companies how to save energy, identified eye diseases, accelerated science and, through the partnership with Google, improved products used all over the world.

So DeepMind shows promise, and humanity is counting on its AI to help solve one of the problems the whole world faces every day. The cure needs to be found sooner rather than later, before the virus mutates again.

Read More: Guide to Proper Smartphone Cases Cleaning to Prevent Spread of Germs and Virus Like Coronavirus



Is Artificial Intelligence (AI) A Threat To Humans? – Forbes

Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever happen to humankind? The question has existed since the 1940s, when computer scientist Alan Turing began to believe there would come a time when machines could have an unlimited impact on humanity through a process that mimicked evolution.

Is Artificial Intelligence (AI) A Threat To Humans?

When Oxford University Professor Nick Bostrom's New York Times best-seller, Superintelligence: Paths, Dangers, Strategies, was first published in 2014, it struck a nerve at the heart of this debate with its focus on all the things that could go wrong. However, in my recent conversation with Bostrom, he also acknowledged there's an enormous upside to artificial intelligence technology.


Since the writing of Bostrom's book in 2014, progress has been very rapid in artificial intelligence and machine and deep learning. Artificial intelligence is in the public discourse, and most governments have some sort of strategy or road map to address AI. In his book, he talked about AI being a little bit like children playing with a bomb that could go off at any time.

Bostrom explained, "There's a mismatch between our level of maturity in terms of our wisdom, our ability to cooperate as a species on the one hand and on the other hand our instrumental ability to use technology to make big changes in the world. It seems like we've grown stronger faster than we've grown wiser."

There are all kinds of exciting AI tools and applications that are beginning to affect the economy in many ways. These shouldn't be overshadowed by the hype around a hypothetical future point where AIs have the same general learning and planning abilities humans have, or around superintelligent machines. These are two different contexts that require attention.

Today, the more imminent threat isn't from a superintelligence, but from the useful, yet potentially dangerous, applications AI is used for presently.

How is AI dangerous?

If we focus on whats possible today with AI, here are some of the potential negative impacts of artificial intelligence that we should consider and plan for:

Change the jobs humans do/job automation: AI will change the workplace and the jobs that humans do. Some jobs will be lost to AI technology, so humans will need to embrace the change and find new activities that will provide them the social and mental benefits their job provided.

Political, legal, and social ramifications: As Bostrom advises, rather than avoid pursuing AI innovation, "Our focus should be on putting ourselves in the best possible position so that when all the pieces fall into place, we've done our homework. We've developed scalable AI control methods, we've thought hard about the ethics and the governments, etc. And then proceed further and then hopefully have an extremely good outcome from that." If our governments and business institutions don't spend time now formulating rules, regulations, and responsibilities, there could be significant negative ramifications as AI continues to mature.

AI-enabled terrorism: Artificial intelligence will change the way conflicts are fought from autonomous drones, robotic swarms, and remote and nanorobot attacks. In addition to being concerned with a nuclear arms race, we'll need to monitor the global autonomous weapons race.

Social manipulation and AI bias: So far, AI is still at risk for being biased by the humans that build it. If there is bias in the data sets the AI is trained from, that bias will affect AI action. In the wrong hands, AI can be used, as it was in the 2016 U.S. presidential election, for social manipulation and to amplify misinformation.

AI surveillance: AI's face recognition capabilities give us conveniences such as being able to unlock phones and gain access to a building without keys, but the technology has also launched what many civil liberties groups believe is alarming surveillance of the public. In China and other countries, the police and government are invading public privacy by using face recognition technology. Bostrom explains that AI's ability to monitor global information systems from surveillance data, cameras, and mining social network communication has great potential for good and for bad.

Deepfakes: AI technology makes it very easy to create "fake" videos of real people. These can be used without an individual's permission to spread fake news, create porn in the likeness of a person who isn't actually in it, and more, damaging not only an individual's reputation but their livelihood. The technology is getting so good that the possibility of people being duped by it is high.

As Nick Bostrom explained, "The biggest threat is the longer-term problem: introducing something radical that's superintelligent and failing to align it with human values and intentions. This is a big technical problem. We'd succeed at solving the capability problem before we succeed at solving the safety and alignment problem."

Today, Nick describes himself as a "frightful optimist" who is very excited about what AI can do if we get it right. He said, "The near-term effects are just overwhelmingly positive. The longer-term effect is more of an open question and is very hard to predict. If we do our homework, and the more we get our act together as a world and a species in whatever time we have available, the better we are prepared for this, the better the odds for a favorable outcome. In that case, it could be extremely favorable."

For more on AI and other technology trends, see Bernard Marr's new book, Tech Trends in Practice: The 25 Technologies That Are Driving the 4th Industrial Revolution, which is available to pre-order now.


Can Machines And Artificial Intelligence Be Creative? – Forbes

We know machines and artificial intelligence (AI) can be many things, but can they ever really be creative? When I interviewed Professor Marcus du Sautoy, the author of The Creativity Code, he shared that AI acts as a kind of catalyst to push our human creativity. It's the machine-and-human collaboration that produces exciting results: novel approaches and combinations that likely wouldn't develop if either were working alone.

Can Machines And Artificial Intelligence Be Creative?

Instead of thinking about AI as replacing human creativity, it's beneficial to examine ways that AI can be used as a tool to augment human creativity. Here are several examples of how AI boosts the creativity of humans in art, music, dance, design, recipe building, and publishing.

Art

In the world of visual art, AI is making an impact in many ways. It can alter existing art, as when it turned the Mona Lisa into a living portrait a la Harry Potter; create likenesses that appear to be real humans, as on the website ThisPersonDoesNotExist.com; and even create original works of art.

When Christie's auctioned off a piece of AI artwork titled Portrait of Edmond de Belamy for $432,500, it became the first auction house to do so. The algorithm that created the art, a generative adversarial network (GAN) developed by a Paris-based collective, was fed a data set of 15,000 portraits covering six centuries to inform its creativity.

Another development that blurs the boundaries of what it means to be an artist is Ai-Da, the world's first robot artist, who recently held her first solo exhibition. She is equipped with facial recognition technology and a robotic arm system that's powered by artificial intelligence.

More eccentric art is also a capability of artificial intelligence. Algorithms can read recipes and create images of what the final dish will look like. Dreamscope by Google uses traditional images of people, places and things and runs them through a series of filters. The output is truly original, albeit sometimes the stuff of nightmares.

Music

If AI can enhance creativity in visual art, can it do the same for musicians? David Cope has spent the last 30 years working on Experiments in Musical Intelligence, or EMI. Cope is a traditional musician and composer but turned to computers to help get past composer's block back in 1982. Since that time, his algorithms have produced numerous original compositions in a variety of genres and created Emily Howell, an AI that can compose music based on her own style rather than just replicate the styles of yesterday's composers.

In many cases, AI is a new collaborator for today's popular musicians. Sony's Flow Machines and IBM's Watson are just two of the tools music producers, YouTubers, and other artists are relying on to churn out today's hits. Alex Da Kid, a Grammy-nominated producer, used IBM's Watson to inform his creative process. The AI analyzed the "emotional temperature" of the time by scraping conversations, newspapers, and headlines over a five-year period. Alex then used the analysis to determine the theme for his next single.

Another tool that embraces human-machine collaboration, AIVA bills itself as a creative assistant for creative people and uses AI and deep learning algorithms to help compose music.

In addition to composing music, artificial intelligence is transforming the music industry in a variety of ways, from distribution to audio mastering and even the creation of virtual pop stars. An auxuman singer called Yona, developed by Iranian electronica composer Ash Koosha, creates and performs music, such as the song "Oblivious," through AI algorithms.

Dance and Choreography

A powerful way for dance choreographers to break out of their regular patterns is to use artificial intelligence as a collaborator. Wayne McGregor, the award-winning British choreographer and director, is known for using technology in his work and explored how AI could enhance choreography in a project with the Google Arts & Culture Lab. Hundreds of hours of video footage of dancers, each representing an individual style, were fed into the algorithm, which then "learned how to dance." The goal is not to replace the choreographer but to efficiently iterate and develop different choreography options.

AI Augmented Design

Another creative endeavor at which AI is proving adept is commercial design. In a collaboration between French designer Philippe Starck, Kartell, and Autodesk, a 3D software company, the first chair designed using artificial intelligence and put into production was presented at Milan Design Week. The Chair Project is another collaboration that explores co-creativity between people and machines.

Recipes

The creativity of AI is also transforming the kitchen, not only altering longstanding recipes but also creating entirely new food combinations in collaboration with some of the biggest names in the food industry. Our favorite libations might get an AI makeover too: you can now pre-order AI-developed whiskey, and brewmasters' decisions are also being informed by artificial intelligence. MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) is making use of all those photos of food that we post on social media. Using computer vision, it analyzes these food photos to better understand people's eating habits and to suggest recipes based on the food pictured.

Writing Novels and Articles

Even though the amount of written material available to train artificial intelligence algorithms is voluminous, writing has been a challenging skill for AI to acquire. Although AI has been most successful at generating short-form, formulaic content such as "who, what, where, and when" journalism, its skills continue to grow. AI has now written a novel, and although the neural network produced what many might find a weird read, it was still able to do it. And with the announcement that a Japanese AI program's short-form novel almost won a national literary prize, it's easy to see that it won't be long before AI can compete with humans to write compelling pieces of content. Kogan Page published Superhuman Innovation, a book that is not only about artificial intelligence but was also co-written by AI. PoemPortraits is another example of AI-human collaboration: you provide the algorithm with a single word, which it uses to generate a short poem.

As the worlds of AI and human creativity continue to expand, it's time to stop worrying about whether AI can be creative and to start asking how humans and machines can intersect in creative collaborations never before dreamt of.

See the article here:
Can Machines And Artificial Intelligence Be Creative? - Forbes

Global Healthcare Artificial Intelligence Market was Estimated to Grow at 25.9% CAGR During the Forecast Period Due to the Rising Adoption of…

The healthcare artificial intelligence market was estimated at US$ 3,120 Mn in 2018 and is anticipated to grow at a CAGR of 25.9% over the forecast period, owing to the digitalization of medical device and patient registries.

PUNE, India, March 5, 2020 /PRNewswire/ -- In terms of revenue, the global healthcare artificial intelligence market was estimated at US$ 3,120 Mn in 2018 and is anticipated to reach US$ 24,700 Mn by 2027, growing at a CAGR of 25.9% over the forecast period.

The increasing use of electronic patient registries and medical device registries is generating rich datasets for AI technologies and predictive insights. Electronic patient registries, or electronic health records (EHRs), are used by hospitals and clinics to collect observational medical data on their patients. This data is collected and analyzed by web-based software and can be made available to the medical community, government agencies, and research organizations as required. It allows professionals in healthcare and other industries to analyze available treatments and how patients with various characteristics and medical histories respond to them. Similarly, a medical device registry is used to collect, store, and retrieve data on the medical devices and equipment used for healthcare delivery. The practice of electronically storing patient and device data in the healthcare sector has grown in recent years due to the digital revolution.

Major players such as McKesson Corporation and IBM have introduced EHR products. The rising adoption of electronic patient and medical device registries has generated huge datasets that can be put to predictive, analytical use. AI and advanced analytics enable healthcare providers to extract patient-specific information from connected medical devices instead of having to analyze large, complicated datasets by hand, thus propelling the growth of the global healthcare artificial intelligence market. Such patient-specific information can help providers offer personalized medicines and diagnostics. For instance, Qualetics Data Machines Inc. offers an intelligence platform for the healthcare industry that provides incisive insights using artificial intelligence, machine learning, natural language processing, and predictive analysis coupled with data obtained from patient registries.

In another such instance, Saykara, Inc. has developed an AI-based virtual assistant for physicians that uses speech recognition: it listens in the background during patient visits and automatically generates notes that are later entered into the EHR system. These applications of AI technologies in combination with EHR systems are enhancing healthcare delivery and the user experience, and with them the growth of the global healthcare artificial intelligence market. Going forward, deploying patient and medical device registries on cloud platforms will further deepen the market penetration of these electronic registries, creating extensive potential applications for AI technologies. For instance, SyTrue, in partnership with Microsoft, has introduced a solution based on Microsoft's Azure cloud platform to manage health records through natural language processing. The growing digitalization of patient and medical device registries is thus expected to boost the global healthcare artificial intelligence market.

The detailed research study provides qualitative and quantitative analysis of the healthcare artificial intelligence market. The market has been analyzed from both the demand and supply sides. The demand-side analysis covers market revenue across regions and across all major countries. The supply-side analysis covers the major market players and their regional and global presence and strategies. The geographical analysis emphasizes each of the major countries across North America, Europe, Asia Pacific, the Middle East & Africa, and Latin America.


About Us:

Absolute Markets Insights provides accurate and up-to-date trends in consumer demand, consumer behavior, sales, and growth opportunities for a better understanding of the market, helping with product design, features, and demand forecasts. Our experts deliver end products that provide transparency, actionable data, cross-channel deployment programs, performance, accurate testing capabilities, and the ability to promote ongoing optimization.

Through in-depth analysis and segmentation, we serve our clients' immediate as well as ongoing research requirements. Minute analyses impact large decisions, and so the source of business intelligence (BI) plays an important role in keeping us up to date with current and upcoming market scenarios.

Contact Us:

Company: Absolute Markets Insights
Email: sales@absolutemarketsinsights.com
Phone: +91-740-024-2424
Contact Name: Shreyas Tanna
Address: The Work Lab, Model Colony, Shivajinagar, Pune, MH, 411016
Website: https://www.absolutemarketsinsights.com/

View original content:http://www.prnewswire.com/news-releases/global-healthcare-artificial-intelligence-market-was-estimated-to-grow-at-25-9-cagr-during-the-forecast-period-due-to-the-rising-adoption-of-electronic-patient-registries-creating-potential-datasets-for-analytics-using-ai-absolu-301017207.html

SOURCE Absolute Markets Insights

Go here to see the original:
Global Healthcare Artificial Intelligence Market was Estimated to Grow at 25.9% CAGR During the Forecast Period Due to the Rising Adoption of...

The Temptations: Will Artificial Intelligence Ever Replace Broadway Shows? – Grit Daily

Last week I went to the Imperial Theater to see Ain't Too Proud, a biography in song and dance of the ultimate Motown supergroup, The Temptations. It is one of the many Broadway shows often overlooked in favor of heavy hitters like Cats or Wicked, but it is entertaining nonetheless.

The performance reminded me of why people go to Broadway in the first place. The music, the dancing, the acting, and the story, surprisingly, are all secondary.

The real reason we go to the theater is to experience the energy and joy of the performers who are right in front of us, operating on that tightrope where there are no second chances, no explanations or forgiveness for forgetting one's lines or moves; instead, there is the excitement and thrill of watching individuals living their dreams and demonstrating the greatness of the human spirit, just for us, right before our very eyes.

There's nothing necessarily easy about getting to Broadway. First, you've got to get tickets, and two tickets to any Broadway show cost about the same as a year of Netflix or Disney Plus. Next, you've got to make your way to Midtown Manhattan, an increasingly difficult chore, since Mayor de Blasio has all but outlawed private vehicles and, somehow, made traffic even worse than ever.

Then there's the experience of being in the theater at a Broadway show, which is not how most people consume their entertainment these days. When you're at home, nobody glares at you if you leave your phone on and it beeps, buzzes, or trills. You can get up and go to the bathroom anytime, not just before or after the show, in a line of fifty strangers equally desperate to pee.

At home, on the couch, you don't have to wrestle a stranger for control of an armrest. You can sprawl as much as you like, with no one to lean on you, breathe on you, or block your view.

And yet.

When we think about the term virtual, as in virtual reality, we tend to forget that the real meaning of virtue comes from the Latin word for truth. Virtual reality is, in fact, a bit of a dirty lie. It's neither virtual (truthful) nor is it real. The performers aren't sharing the same space with you.

They had countless takes to get their songs, dances, or emoting exactly the way they wanted. If a note, a dance step, an entrance, or anything else gets flubbed, no problem.

Take two.

Producers on Broadway shows are a smart lot. They understand that their mission in life is to give the people what they want, and above all, that's a rollicking good time. Even if you cannot pee on demand or check your phone without experiencing the opprobrium of those around you. And if you don't like what you're watching, there's nothing else on.

You're stuck, literally in the middle of the row and figuratively, as there's no other channel, website, or video to which you can turn. So it's easy to make the case for what really shouldn't be called virtual reality and should instead be referred to as a bunch of stuff caught on video.

That said, I'm hoping you'll do what I did: make your way into trafficky, crowded Midtown, pay too much for dinner, pay just enough to get good seats, wedge yourself in between a couple of strangers, arm wrestle with them for dominance of the seat dividers, and enjoy the show.

As for The Temptations itself, if you're going to see one Broadway show, as the expression goes, you really ought to get out more often. But if you are going to see one, make it this one.

The perfection of the performance, the awesome quality of the music, the thrill of the dancing, mic tosses, and splits: you can't get that on YouTube. Okay, yeah, you can, but you won't breathe the same air as the performers.

And if there are any performers with whom you should share air and space, it's the men and women of the cast and band of Ain't Too Proud.

You can always pick up your device again after the show.

Read the original post:
The Temptations: Will Artificial Intelligence Ever Replace Broadway Shows? - Grit Daily

UH Hilo receives $500K grant to research artificial intelligence interaction with humans – UH System Current News

Travis Mandel (Photo credit: Raiatea Arcuri)

A computer scientist at the University of Hawaii at Hilo has received a more than half-a-million-dollar grant from the National Science Foundation to develop new techniques in artificial intelligence (AI). Assistant Professor Travis Mandel, an AI expert, will use the prestigious $549,790 award to advance research on human-in-the-loop AI: techniques in which AI and machine learning systems collaborate with humans to solve real-world problems too challenging for either to address alone.

"The goal of this project is to create new algorithms and interaction paradigms that enable humans and artificial intelligence systems to work together, leveraging each other's strengths to collect better data," Mandel said.
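The announcement does not describe Mandel's algorithms in detail, but one common human-in-the-loop pattern is active learning, in which the system routes only its least-confident cases to a human so that human effort improves data quality where it matters most. A minimal sketch follows; all names and data here are illustrative, not taken from the project:

```python
# Minimal active-learning loop: rank unlabeled examples by model
# confidence and send only the most uncertain ones to a human labeler.

def model_confidence(example):
    """Stand-in scoring: distance of a probability from 0.5, rescaled
    to [0, 1]. A real system would query a trained model instead."""
    return abs(example["score"] - 0.5) * 2  # 0 = uncertain, 1 = confident

def select_for_human_review(pool, budget):
    """Pick the `budget` least-confident examples for human labeling."""
    ranked = sorted(pool, key=model_confidence)
    return ranked[:budget]

pool = [
    {"id": 1, "score": 0.51},  # very uncertain
    {"id": 2, "score": 0.95},  # confident
    {"id": 3, "score": 0.40},  # somewhat uncertain
    {"id": 4, "score": 0.05},  # confident
]

to_label = select_for_human_review(pool, budget=2)
print([ex["id"] for ex in to_label])  # → [1, 3]
```

The design point is the division of labor: the machine triages, the human supplies the judgment the machine lacks, and each new label makes the next round of triage better.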

The National Science Foundation award is expected to have a major impact on research and education on Hawaii Island. The hope is to drive increased interest in science and technology at UH Hilo and showcase the university's emerging data science program.

"I'm particularly excited about the opportunities this grant will provide for our talented and hardworking undergraduate students to get involved in cutting-edge computer science and data science research," Mandel said. The project also includes components that integrate research and education, such as building new data science curriculum and developing interactive video game exhibits at Imiloa Astronomy Center and the Hawaii Science and Technology Museum.

For more go to UH Hilo Stories.

More:
UH Hilo receives $500K grant to research artificial intelligence interaction with humans - UH System Current News

Where artificial intelligence fits in education – TechTalks

By Sergey Karayev

Artificial Intelligence is coming for education.

But dont panic.

It's not going to replace college faculty or teaching as we know it. It's not a slippery slope. Instead, AI is going to give faculty superpowers, extending their reach and expanding their time.

A good teacher is a role model, a sage, able to become what the student needs. Teaching is too personal, too human, to be turned over to AI.

That's not just my opinion. Three years ago, McKinsey, the global consulting firm, issued a report on how and where AI and automation were most likely to replace jobs and job functions. It listed educational services as the sector least likely to undergo that type of technology-driven displacement, saying "the essence of teaching is deep expertise and complex interactions with other people."

Consider also Dr. David Weiss, a psychology professor at the University of Minnesota. Weiss was probably the first person in the world to use computers to give and grade assessments, work he was doing as early as 1969. "As far back as the 1970s people said we could have computers deliver instruction, we won't need teachers anymore. And I'm hearing that again now because so much is on computer," he said recently. "But that's never been realistic. There are things computers can do well and things they can't," he said.

That's all true and unlikely to change. Teachers teach. They are good at it. No one wants to change that.

So, the dawn of AI in teaching does not mean we're on a path to robot instructors. Computers and algorithms are highly unlikely to come between faculty and students anytime in our foreseeable future.

Where AI can help today is outside the classroom, making many non-instructional responsibilities of teaching easier and faster.

As an example, the area I'm working on is AI-assisted grading. When fully tested and deployed, it will be able to do things such as group student answers by their content and batch feedback to all essentially similar responses in the blink of an eye. So instead of a teacher writing "forgot to mention the Krebs cycle" 50 times, they can identify the error once, write their feedback once, and let the AI in the tool propagate it to other responses with the same error.
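The group-then-propagate workflow can be sketched as follows. This is a hypothetical illustration, not Gradescope's actual implementation; in particular, the crude text-normalization key stands in for whatever learned similarity measure a real system would use, and the answers and feedback strings are invented:

```python
# Sketch of batched feedback: group essentially similar answers and
# write each feedback note once, propagating it to the whole group.
from collections import defaultdict

def normalize(answer):
    """Crude content key: lowercase, strip periods, collapse whitespace."""
    return " ".join(answer.lower().replace(".", "").split())

def group_answers(answers):
    """Map each normalized answer to the students who gave it."""
    groups = defaultdict(list)
    for student, text in answers.items():
        groups[normalize(text)].append(student)
    return groups

answers = {
    "alice": "Glycolysis then the Krebs cycle.",
    "bob":   "glycolysis then the krebs cycle",
    "carol": "Only glycolysis.",
}

feedback = {}
for key, students in group_answers(answers).items():
    # The teacher writes each note once; the tool propagates it.
    note = "Good answer." if "krebs" in key else "Forgot to mention the Krebs cycle."
    for student in students:
        feedback[student] = note

print(feedback["bob"])    # same note alice got, written only once
print(feedback["carol"])  # → Forgot to mention the Krebs cycle.
```

Alice's and Bob's answers collapse to the same key, so one note covers both; the teacher grades three submissions by writing two comments.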

AI assessment tools can also help faculty spot sticky subject areas for subsets of students and even make student-by-student recommendations for areas needing extra attention. They can spot when an unusually high percentage of students struggled with a particular question, flagging that either the specific question or the whole topic needs teacher review.
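That flagging behavior amounts to a simple threshold rule over per-question error rates. A sketch, with an arbitrary, illustrative threshold and made-up data:

```python
# Illustrative flagging rule: surface any question whose error rate is
# unusually high so the teacher can review the question or the topic.
def flag_questions(error_rates, threshold=0.6):
    """Return the questions at or above the error-rate threshold."""
    return [q for q, rate in error_rates.items() if rate >= threshold]

error_rates = {"Q1": 0.10, "Q2": 0.72, "Q3": 0.35}
print(flag_questions(error_rates))  # → ['Q2']
```

A production tool would likely compare each question against the distribution of the whole assessment rather than a fixed cutoff, but the teacher-facing output is the same: a short list of questions worth a second look.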

Make no mistake: this won't replace grading. Teachers will still decide what's correct and what isn't. Teachers will still approve the results. They just won't need to spend as long doing it, and they will be more accurate to boot.

Used correctly, it could turn the rote process of grading into a faster, less repetitive exercise, much the way the Scantron and optical mark recognition made scoring multiple-choice assessments faster. Neither innovation replaced teaching; they made being a teacher easier.

Think of it as the difference between using Microsoft Word and a typewriter. Computer-based typing tools such as spellcheck and cut-and-paste did not replace writing or displace writers; they made writers better, faster, more powerful.

My point is not that automated grading tools and other AI advancements will be mundane improvements; I am confident they will be tremendously important advancements in education. What I'm saying is that the AI that is coming to education will live in the support systems, freeing faculty to do more of what they love, the things computers can't do: mentor students, make intellectual connections, and inspire curious minds. Giving teachers significantly more time and energy to do those things has the potential to be a game-changer for learning.

AI can do that, and not just in grading but in other areas too, streamlining the tasks and chores of faculty that exist largely outside and apart from person-to-person, teacher-to-student engagement. The point of AI is to make those moments more frequent and more powerful: to be a teaching superpower.

Sergey Karayev has a PhD in Computer Science from the University of California at Berkeley, is co-founder of Gradescope, and head of AI for STEM at Turnitin. He is also a co-organizer of Full Stack Deep Learning Bootcamp, which delves into best practices of all components of deep learning.

Read the original here:
Where artificial intelligence fits in education - TechTalks