"Artificial Intelligence Will Make Medicine Better in the Long Run" – Biophotonics.World

Image source: Leibniz IPHT

By: Sven Döring

Leibniz IPHT is increasingly focusing on artificial intelligence and learning systems. Thomas Bocklitz is heading the new research department "Photonic Data Science". We asked him how AI could help shape the future of diagnostics.

What new possibilities does Photonic Data Science open up for diagnostics?

Photonic Data Science is a potpourri combining mathematical and statistical methods with algorithms and domain knowledge to translate measurement data into useful information. We usually translate photonic data into biomedical information, for example diagnostic information. By performing this translation computationally, robust diagnostic information can be extracted, and tiny details in complex data can be made useful for diagnostics. This opens up new possibilities for diagnostics.

Artificial intelligence (AI) then helps to evaluate this data. Which technologies researched at the institute are based on AI?

In the laser-based rapid test for infectious pathogens, machine learning methods and algorithms for data pre-treatment are used to translate Raman spectra of bacteria into a resistance prediction, i.e. to predict pathogens and antibiotic resistances on the basis of the spectroscopically recorded data. For the compact microscope Medicars, we use deep learning and machine learning techniques to translate multimodal image data into a tissue prediction for the detection of tumor margins. In smartphone microscopy, which is being researched by Rainer Heintzmann's team, image enhancement is achieved by means of deep learning procedures.
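To make the idea of "data pre-treatment plus machine learning" concrete, here is a minimal, purely illustrative sketch in Python. It is not Leibniz IPHT's actual pipeline; the smoothing step, the PCA/SVM classifier and the synthetic spectra are all assumptions chosen only to show the general pattern of turning recorded spectra into a class prediction.

```python
# Illustrative sketch only: the article does not disclose the real pipeline.
# Pattern shown: pre-treat spectra, compress features, classify. Data is synthetic.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 1000))       # 200 fake Raman spectra, 1000 wavenumbers
resistant = rng.integers(0, 2, size=200)     # fake resistant / susceptible labels

smooth = FunctionTransformer(
    lambda X: savgol_filter(X, window_length=11, polyorder=3, axis=1)
)

model = make_pipeline(
    smooth,                  # pre-treatment: smooth each spectrum
    StandardScaler(),        # normalise intensities per wavenumber
    PCA(n_components=20),    # compress correlated spectral features
    SVC(kernel="rbf"),       # classify into a resistance prediction
)

print(cross_val_score(model, spectra, resistant, cv=5).mean())
```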

Where do the data sets come from that are currently mainly used? Can they be applied equally to all patients?

The data sets are generated within clinical studies, which we supervise from the beginning. The studies are still too small to exclude a gender bias, but we are working on the experimental design so that there is no gender bias in the training data set and we hope that the models will not generate any bias.

Does the automated analysis of medical control data also carry a risk? A loss of control?

Of course, every technology has risks, although these are manageable here. Artificial intelligence or machine learning processes only work well if the new test data is similar to the training data. We try to tackle this problem by creating the necessary similarity through standardization and model transfer in order to improve the predictions. There is a loss of control when the models are applied fully automatically. But in the medium term the models will only represent a second opinion, so there will be no loss of control.

Can physicians improve the learning systems? Is the procedure of AI applications comprehensible for them?

Physicians can increase the database or reduce the uncertainty of the metadata, i.e. the labels, by pooling or voting, which leads to better models. The traceability of AI models is a major topic in current machine learning research (keyword: "Explainable AI"). The aim is to decipher these models in order to make it clearly understandable how machine learning methods and deep learning systems achieve their results.

Can AI be perfected to the point where it can eventually make better diagnoses than a human?

I think so, if the data is highly standardized. Another challenge is to demonstrate that improvement. This requires quite long clinical trials and is ethically problematic.

Could AI ever replace doctors instead of just supporting them? For example, could operations be performed by AI-controlled robots at some point?

I don't think so, because there are many uncertainties in an operation that must be reacted to flexibly. This is not a prominent feature of current AI procedures. It's more likely that the surgical robots will do very specific things directly on the operator's instructions.

Will AI make medicine better?

In the long run, I think so. But first, it will make diagnostics more comparable and it will also allow data to be used not only sequentially, but in combination.

Artificial Intelligence, Machine Learning, Deep Learning

Decision making, problem solving, learning: these are actions that we commonly associate with human thinking. We call their automation artificial intelligence (AI). An important part of AI is machine learning (ML). Scientists are researching algorithms and statistical or mathematical methods with which computer systems can solve specific tasks.

For this purpose, machine learning methods construct a statistical-mathematical model from an example data set, the training data. On this basis, ML methods can make predictions or decisions without having been explicitly programmed to do so. ML techniques are used, for example, for spam detection in e-mail accounts, in image processing, and for the analysis of spectroscopic data. Deep learning is a method of machine learning that is similar to the way the human brain processes visual and other stimuli. Artificial neurons receive input, process it and pass it on to other neurons. Starting from a first, visible layer, the characteristics in the subsequent, hidden intermediate layers become increasingly abstract. The result is output in the last, again visible layer.
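The layered structure described above can be pictured with a few lines of code. This toy example is not taken from the article: it builds an untrained network with random weights, only to show how a visible input layer feeds hidden layers whose outputs become the basis for the final, visible output layer.

```python
# Toy sketch of the layer structure described above; the weights are random, so
# the network is untrained and the numbers are meaningless. Only the shape of
# the computation matters here.
import numpy as np

rng = np.random.default_rng(42)

def layer(inputs, n_neurons):
    """One layer of artificial neurons: weighted sum of the inputs plus a nonlinearity."""
    weights = rng.normal(size=(inputs.shape[-1], n_neurons))
    bias = np.zeros(n_neurons)
    return np.maximum(0.0, inputs @ weights + bias)   # ReLU activation

x = rng.normal(size=(1, 784))    # visible input layer, e.g. a flattened 28x28 image
h1 = layer(x, 128)               # first hidden layer: low-level features
h2 = layer(h1, 32)               # second hidden layer: more abstract features
out = layer(h2, 10)              # visible output layer, e.g. 10 class scores
print(out.shape)                 # (1, 10)
```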

Making Tumor Tissue Visible with AI

Did the surgeon remove the entire tumor during surgery? In order to find out, researchers are combining optical methods with artificial intelligence (AI) and data pre-processing methods. AI is behind the compact Medicars microscope, for example, which enables rapid cancer diagnosis during surgery. Here, patterns and molecular details of a tissue sample irradiated with laser light are automatically evaluated and translated into classical images of standard diagnostics. Thus, tumor margins become visible.

"For this purpose, we train AI algorithms together with pathologists," explains Thomas Bocklitz. We take multimodal images of a tissue sample with our laser-based multi- modal microscope. In pathology, the tissue section is then embedded, stained, and an image of the HE- stained tissue section is taken (HE = haematoxylin-eosin). This enables the pathologist to recognize tumor tissue. Then we put the multimodal and the HE image side by side."

Based on the pathologist's analysis of the tissue structure and morphology, the research team teaches the algorithm which tissue is healthy and which is diseased. "In this supervised approach, the algorithm successively learns to distinguish healthy and diseased areas." With success: the accuracy of the predictions is more than 90 percent, according to tests on a small group of patients.
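As a rough illustration of how such an accuracy figure can be estimated, the sketch below trains a generic supervised classifier on labelled tissue patches and evaluates it with a patient-wise split, so the score reflects patients the model has never seen. The features, labels and classifier are synthetic stand-ins and not the Medicars system.

```python
# Illustrative sketch, not the Medicars pipeline: supervised patch classification
# evaluated per patient. All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n_patches = 600
features = rng.normal(size=(n_patches, 64))      # stand-in for multimodal image features
labels = rng.integers(0, 2, size=n_patches)      # 0 = healthy, 1 = tumor (from the HE reference)
patients = rng.integers(0, 12, size=n_patches)   # which patient each patch came from

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, features, labels, groups=patients, cv=GroupKFold(n_splits=4))
print("per-fold accuracy:", scores)
```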

Source: Leibniz IPHT

Continue reading here:
"Artificial Intelligence Will Make Medicine Better in the Long Run" - Biophotonics.World

How the Army Is Really Using AI – AI Daily

It has always been controversial for the army to implement artificial intelligence within its task force (particularly with the use of drones) because of the ethics involved. However, artificial intelligence is being implemented within the army in other, less controversial ways. Forget the movie Terminator: artificial intelligence is performing many tedious, manual tasks that would otherwise consume a lot of time and resources, and providing assistance in areas ranging from recognition and conversation systems to predictive analytics, pattern matching and autonomous systems.

The army leverages artificial intelligence by, for example, predicting when vehicle parts may need to be replaced, which saves time and money and increases operational safety. Programs that leverage data using machine learning include Project Maven, which retrieves data from drones and helps automate some of the work that analysts do, again saving time and money. As you can see, artificial intelligence is being used to improve the army's efficiency.

Continued here:
How the Army Is Really Using AI - AI Daily

Is Artificial intelligence the Future of IT Help Desk? – Analytics Insight

Artificial intelligence is one of the biggest markets for growth within the field of technology today. In fact, AI is rapidly empowering us to make major changes to various fields within the realm of technology. Help desk is no stranger to the idea that there is room for improvement within this niche of technology.

Businesses use help desk software to manage a variety of different types of information. From customers' questions and concerns to employee computer repair requests, help desk is a solution for organizing, responding to, and gathering results from each of those individual tickets as they are completed.

If you utilize a help desk for your own business, then you may have wondered how help desk could be changing in the near future. You might even be surprised that one of the many ways help desk could change is by utilizing AI technology to improve its accuracy and dependability.

One of the biggest areas of improvement that AI brings to this technology is the use of bots to chat with customers about their needs and any questions that may arise. Using AI, a business can employ virtual chatbots to troubleshoot concerns from the person visiting the help desk; SysAid is one business already trialling and employing AI for help desk concerns. This can greatly reduce the number of tickets that help desk employees go through on a daily basis.

Although you can give users the opportunity to code their ticket in a certain priority rank, you can also use AI to help formulate the order in which those tickets should be reviewed. This function would also make the help desk more intuitive for the user because it can help the user auto-populate various options.

There are a number of ways that AI tools can help to build insight into the information that a help desk might find useful. First, using AI tools, a help desk can populate responses to the problem that a person is reporting. This can help to reduce the number of tickets that the help desk has to respond to on a daily basis.

AI can also help to formulate the most popular types of insight that are requested through this tool. Tracking this type of data can help the tech team to understand where there are weaknesses in their systems in use. Further, this same type of data tracking can help the tech team understand their weaknesses in response to certain issues as well. Using this data, tech teams can enhance their own performance to the questions that are posed through help desk technology.

Artificial intelligence is going to change the way that a lot of different technology tools are able to help us in the future. These tools automate processes and actually help the tech team to manage and understand their own worth in a new light. Further, AI can automate the system in such a way that the tech team has time to focus less on help desk requests and more on the bigger issues at hand with their resources.

Read the rest here:
Is Artificial intelligence the Future of IT Help Desk? - Analytics Insight

Top Artificial Intelligence and Robotics Investments in July 2020 – Analytics Insight

Artificial intelligence is growing at an ever faster pace. Despite the unprecedented situation caused by the coronavirus, 2020 to date has witnessed sustained momentum: funding increased by 51% to $8.4B from the previous quarter. With this faster pace, the field is also attracting a series of funding rounds and financial investments. Let's go through some of the important investments in artificial intelligence and robotics companies in July 2020.

Amount Funded: $6 million

Transaction Name: Seed Round

Lead Investors: Kindred Capital and Capnamic Ventures

BotsAndUs creates robots that work with people in shopping centers, retail stores, office buildings, airports, etc. The company aims at digitising the full customer journey by automating the collection of onsite data and providing 24/7 customer service.

BotsAndUs has reported that it has raised $6m in seed funding, co-led by Kindred Capital and Capnamic Ventures, with angel investors also participating in the round.

Amount Funded: $225 million

Transaction Name: Series E Funding

Lead Investors: Alkeon Capital Management

UiPath, a leading RPA company, has raised $225 million in Series E funding. The round was led by Alkeon Capital Management. Other investors participating in the round are Accel, Coatue, Dragoneer, IVP, Madrona Venture Group, Sequoia Capital, Tencent, Tiger Global and Wellington. The funding will be utilized for developing automation solutions in order to mitigate risks to productivity and to the supply of human workers.

Amount Funded: $100 million

Transaction Name: Series C Funding

Lead Investors: Next47

Skydio, a leader in autonomous flight technology and a U.S. drone manufacturer, raised $100 million in Series C funding. The round was led by Next47, with Levitate Capital, NTT DOCOMO Ventures, and existing investors including Andreessen Horowitz, IVP, and Playground participating. The organization will utilize the funding to grow its operations in public sector markets and accelerate product development.

Amount Funded: $56.2 million

Transaction Name: Series A Funding

Lead Investors: Lightspeed Venture Partners

Dexterity, the Bay Area-based robotics startup, has raised $56.2 million in Series A funding led by Kleiner Perkins, Lightspeed Venture Partners, Obvious Ventures, Pacific West Bank, B37 Ventures, Presidio (Sumitomo) Ventures, Blackhorn Ventures, Liquid 2 Ventures and Stanford StartX.

Dexterity offers robots for warehousing, logistics and supply chain operations. It has already seen a boost from the push for essential services during the COVID-19 pandemic.

Amount Funded: $13m

Transaction Name: Series A Funding

Lead Investors: Index Ventures

Abacus.AI, a San Francisco, CA-based AI research and AI cloud services company, has recently raised $13m in Series A funding led by Index Ventures, with participation from Eric Schmidt, Ram Shriram, Decibel Ventures, Jerry Yang, Mariam Naficy, Erica Shultz, Neha Narkhede, Xuezhao Lan, and Jeannette Furstenberg.

The company will use the funding to grow its research team and scale its operations.

Amount Funded:

Transaction Name: Series A

Lead Investors: ETP Ventures

Deep Longevity, a biotechnology company transforming longevity R&D through AI-discovered biomarkers of aging, has raised Series A funding of an undisclosed amount. The round was led by ETP Ventures, and other prominent investors such as BOLD Capital Partners, Longevity Vision Fund, Oculus, Formic Ventures, and LongeVC also participated.

Amount Funded:

Transaction Name: Seed funding

Lead Investors: Y Combinator

Nana raised an undisclosed amount of seed funding. The company is building a guild for the eventual future of work: a distributed workforce of tradespeople, starting with the $4B appliance repair industry. Nana is an on-demand home maintenance marketplace, a marketplace combined with a modern trade school, teaching new skills and connecting the 10M+ Americans who will be affected by automation to more rewarding jobs in the home services space.

Nana is a place for consumers to get things done, and it provides AI and a learning management system for skilled professionals.

Amount Funded: $53m

Transaction Name: Series B

Lead Investors: DCVC

Caption Health, the Calif.-based medical artificial intelligence (AI) company, has raised $53m in Series B funding led by existing investor DCVC. Other investors which participated in the round are Atlantic Bridge and Edwards Lifesciences, along with existing investor Khosla Ventures. The company plans to scale up its operations and develop its AI technology platform.

Amount Funded: $6.5m

Transaction Name: Series A

Lead Investors: Debiopharm

Computational biology startup Nucleai raised $6.5m in Series A funding led by Debiopharm, a Swiss biopharmaceutical company. Previous investors Vertex Ventures and Grove Ventures also participated in the round.

Nucleai offers an AI-powered precision oncology platform that provides biomarker discovery and supports treatment decisions for cancer care. It combines machine learning and computer vision to model the characteristics of both the tumour and the patient's immune system.

Amount Funded: £2.5m

Transaction Name: Seed Funding

Lead Investors: NPIF and XTX Ventures

Logically, a UK-based tech startup, announced that it has raised £2.5m in seed funding from NPIF and XTX Ventures. The company aims to use the funding to continue developing its product in time for the US election.

The startup deploys AI to detect fake news and misinformation, as well as to provide a fact-checking service to combat fake news.

See the article here:
Top Artificial Intelligence and Robotics Investments in July 2020 - Analytics Insight

The next frontier of human-robot relationships is building trust – Scroll.in

Artificial intelligence is entering our lives in many ways: on our smartphones, in our homes, in our cars. These systems can help people make appointments, drive and even diagnose illnesses. But as AI continues to serve important and collaborative roles in people's lives, a natural question is: Can I trust these systems? How do I know they will do what I expect?

Explainable artificial intelligence is a branch of artificial intelligence research that examines how artificial agents can be made more transparent and trustworthy to their human users. It seeks to develop systems that human beings find trustworthy while also performing their designed tasks well. Trustworthiness is essential if robots and people are to work together.

At the Center for Vision, Cognition, Learning, and Autonomy at the University of California, Los Angeles, we and our colleagues are interested in what factors make machines more trustworthy, and how well different learning algorithms enable trust. Our lab uses a type of knowledge representation (a model of the world that artificial intelligence uses to interpret its surroundings and make decisions) that can be more easily understood by humans. This naturally aids explanation and transparency, thereby improving the trust of human users.

In our latest research, we experimented with different ways a robot could explain its actions to a human observer. Interestingly, the forms of explanation that fostered the most human trust did not correspond to the learning algorithms that produced the best task performance. This suggests performance and explanation are not inherently dependent upon each other: optimising for one alone may not lead to the best outcome for the other. This divergence calls for robot designs that take into account both good task performance and trustworthy explanations.

In undertaking this study, our group was interested in two things. How does a robot best learn to perform a particular task? Then, how do people respond to the robot's explanation of its actions?

We taught a robot to learn from human demonstrations how to open a medicine bottle with a safety lock. A person wore a tactile glove that recorded the poses and forces of the human hand as it opened the bottle. That information helped the robot learn what the human did in two ways: symbolic and haptic. Symbolic refers to meaningful representations of your actions: for example, the word "grasp". Haptic refers to the feelings associated with your body's postures and motions: for example, the sensation of your fingers closing together.

First, the robot learned a symbolic model that encodes the sequence of steps needed to complete the task of opening the bottle. Second, the robot learned a haptic model that allows the robot to imagine itself in the role of the human demonstrator and predict what action a person would take when encountering particular poses and forces.

It turns out the robot was able to achieve its best performance when combining the symbolic and haptic components. The robot did better using knowledge of the steps for performing the task and real-time sensing from its gripper than using either alone.
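The division of labour between the two models can be pictured with a small toy example: a symbolic model that constrains which step may come next, and a haptic model that scores candidate actions from pose and force features. Everything below (the action names, features and weights) is invented for illustration and is not the authors' implementation.

```python
# Toy sketch of combining a symbolic step model with a haptic scoring model.
# All names and numbers are invented; this is not the study's code.
import numpy as np

ACTIONS = ["approach", "grasp", "push_down", "twist", "pull_off"]

# Symbolic model: which actions are plausible after the last completed step.
NEXT_STEPS = {
    "start":     {"approach"},
    "approach":  {"grasp"},
    "grasp":     {"push_down", "twist"},
    "push_down": {"twist"},
    "twist":     {"twist", "pull_off"},
}

def haptic_scores(pose_force, rng=np.random.default_rng(0)):
    """Stand-in haptic model: maps a pose/force feature vector to action scores."""
    w = rng.normal(size=(len(pose_force), len(ACTIONS)))
    logits = pose_force @ w
    return np.exp(logits) / np.exp(logits).sum()

def choose_action(last_step, pose_force):
    allowed = NEXT_STEPS[last_step]                         # symbolic constraint
    scores = haptic_scores(pose_force)                      # haptic preference
    masked = [s if a in allowed else -np.inf for a, s in zip(ACTIONS, scores)]
    return ACTIONS[int(np.argmax(masked))]

print(choose_action("grasp", np.array([0.2, -0.4, 1.3])))
```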

Now that the robot knows what to do, how can it explain its behavior to a person? And how well does that explanation foster human trust?

To explain its actions, the robot can draw on its internal decision process as well as its behavior. The symbolic model provides step-by-step descriptions of the robot's actions, and the haptic model provides a sense of what the robot's gripper is feeling.

In our experiment, we added an additional explanation for humans: a text write-up that provided a summary after the robot had finished attempting to open the medicine bottle. We wanted to see if summary descriptions would be as effective as the step-by-step symbolic explanation at gaining human trust.

We asked 150 human participants, divided into four groups, to observe the robot attempting to open the medicine bottle. The robot then gave each group a different explanation of the task: symbolic step-by-step descriptions, haptic arm positions and motions, a text summary, or the symbolic and haptic explanations together. A baseline group observed only a video of the robot attempting to open the bottle, without any additional explanation.

We found that providing both the symbolic and haptic explanations fostered the most trust, with the symbolic component contributing the most. Interestingly, the explanation in the form of a text summary didn't foster more trust than simply watching the robot perform the task, indicating that humans prefer robots to give step-by-step explanations of what they're doing.

The most interesting outcome of this research is that what makes robots perform well is not the same as what makes people see them as trustworthy. The robot needed both the symbolic and haptic components to do the best job. But it was the symbolic explanation that made people trust the robot most.

This divergence highlights important goals for future artificial intelligence and robotics research: to focus on pursuing both task performance and explainability. Only focussing on task performance may not lead to a robot that explains itself well. Our lab uses a hybrid model to provide both high performance and trustworthy explanations.

Performance and explanation do not naturally complement each other, so both goals need to be a priority from the start when building artificial intelligence systems. This work represents an important step in systematically studying how human-machine relationships develop, but much more needs to be done. A challenging step for future research will be to move from "I trust the robot to do X" to "I trust the robot."

For robots to earn a place in people's daily lives, humans need to trust their robotic counterparts. Understanding how robots can provide explanations that foster human trust is an important step toward enabling humans and robots to work together.

Mark Edmonds, PhD Candidate in Computer Science, University of California, Los Angeles. Yixin Zhu, Postdoctoral Scholar in Computer Science, University of California, Los Angeles.

This article first appeared on The Conversation.

Original post:
The next frontier of human-robot relationships is building trust - Scroll.in

Tackling the problem of bias in AI software – Federal News Network


Artificial intelligence is steadily making its way into federal agency operations. It's a type of software that can speed up decision-making and grow more useful with more data. A problem is that if you're not careful, the algorithms in AI software can introduce unwanted biases and therefore produce skewed results. It's a problem researchers at the National Institute of Standards and Technology have been working on. For more, the chief of staff of NIST's Information Technology Laboratory, Elham Tabassi, joined Federal Drive with Tom Temin.

Tom Temin: Ms. Tabassi, good to have you on.

Elham Tabassi: Thanks for having me.

Tom Temin: Let's begin at the beginning here. And we hear a lot about bias in artificial intelligence. Define for us what it means.

Elham Tabassi: That's actually a very good question, and a question that researchers are working on, a question that we are trying to find an answer to along with the community and discuss during the workshop that's coming up in August. It's often the case that we all use the same term meaning different things. We talk about it as if we know exactly what we're talking about, and bias is one of those terms. The International Standards Organization, ISO, has a subcommittee working on standardization of bias, and they have a document that, with collaboration from experts in the working groups, is trying to define bias. So, one, there isn't a good definition for bias yet. What we have been doing at NIST is a literature survey, trying to figure out how it has been defined by different experts, and we will discuss it further at the workshop. Our goal is to come up with a shared understanding of what bias is. I avoid the term definition and talk about a shared understanding of what bias is. The current draft of standards, and the current understanding of the community, is going toward defining bias in terms of disparities in error rates and performance for different populations, different devices or different environments. One point I want to make here is that what we call bias may be designed in. If you have different error rates for different subpopulations in face recognition, which you mentioned, that's not a good bias and something that has to be mitigated. But sometimes, for example in car insurance, it has been designed in a way that certain populations, younger people, pay a higher insurance rate than people in their 40s or 50s, and that is by design. So it's not just a difference in error rates that makes bias; it's unintended behavior or performance of the system that's problematic and needs to be studied.
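A minimal worked example of the measurement Tabassi is describing, bias as a disparity in error rates between subpopulations, might look like the following. The labels, predictions and groups are invented purely to show the calculation.

```python
# Toy calculation of error-rate disparity across two subpopulations.
# The data below is made up for illustration only.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

for g in ["A", "B"]:
    mask = group == g
    error_rate = np.mean(y_true[mask] != y_pred[mask])
    print(f"group {g}: error rate = {error_rate:.2f}")

# If the two error rates differ substantially, the system performs unequally
# across subpopulations -- the kind of disparity that may need to be mitigated.
```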

Tom Temin: Yeah, maybe a way to look at it is: if a person's brain had all of the data that the AI algorithm has, and that person was an expert and would come up with a particular solution, and there's a variance between what that would be and what the AI comes up with, that could be a bias.

Elham Tabassi: Yes, it could be, but then let's not forget about human biases, and that is actually one source of bias in AI systems. Bias in AI systems can creep in in different ways. It can creep into the algorithm because AI systems learn to make decisions based on the training data, which can include biased human decisions or reflect historical or societal inequalities. Sometimes the bias creeps in because the data is not properly representative of the whole population; the sampling was done so that one group is overrepresented or underrepresented. Another source of bias can be in the design of the algorithm and in the modeling. So biases can creep in in different ways: sometimes human biases exhibit themselves in the algorithm, and sometimes the algorithm and modeling pick up biases.

Tom Temin: But you could also get bias in AI systems that don't involve human judgment or judgment about humans whatsoever. Say it could be an AI program running a process control system or producing parts in a factory, and you could still have results that skew beyond what you want over time because of a built-in bias that's of a technical nature. Would that be fair to say?

Elham Tabassi: Correct, yes. So if the training data set is biased or not representative of the space of all possible inputs, then you have bias. One real research question is how to mitigate that and unbias the data. Another is whether there is anything during the design and building of a model, in the way the models are developed, that can introduce bias.

Tom Temin: So nevertheless, agencies have a need to introduce these algorithms and these programs into their operations and they're doing so. What are some of the best practices for avoiding bias in the outcomes of your AI system?

Elham Tabassi: The research is still out there. This is one of those cutting-edge research areas, and we see a lot of good research and results coming out from AI experts every day. But really, to measure bias and mitigate bias, the first step is to understand what bias is, and that's your first question. So unless we know what it is that we want to measure, and we have a consensus and understanding and agreement on what it is that we want to measure, which goes back to that shared understanding of bias or definition of bias, it's hard to get into the measurement. So we are spending a little bit more time on getting everybody on the same page on understanding what bias is, so we know what it is that we want to measure. Then we get into the next step of how to measure, which is the development of metrics for understanding, examining and measuring bias in systems. And that can mean measuring biases in the data and the algorithm, and so on. It's only after these two steps that we can talk about best practices or the best way of mitigating the bias. So we are still a bit early in understanding how to measure, because we don't have a good grip on what it is that we want to measure.

Tom Temin: But in the meantime, I've heard of some agencies simply using two or more algorithms to do the same calculation such that the biases in them can cancel one another out, or using multiple data sets that might have canceling biases in them, just to make sure that at least there's balance in there.

Elham Tabassi: Right. That's one way, and that goes back to what we talked about at the beginning of the call, about having poor representation. You just talked about having two databases, so that can mitigate the problem of skewed representation or sampling. Just like that, in the literature there are already many, many definitions of bias. There are also many different methods and guidance and recommendations on what to do, but what we are trying to do is come up with an agreed, unified way of doing these things, and that is still cutting-edge research.

Tom Temin: Got it. And in the meantime, NIST is planning a workshop on bias in artificial intelligence. Tell us when and where and what's going to happen there.

Elham Tabassi: Right, that workshop is going to be on August 18. It's a whole-day workshop. Our plan was to have it over two days, but because it's a virtual workshop, we decided to just have it as one day. The workshop is one of a series that NIST plans to organize and hold in the coming months. The aim of the workshops we are organizing and planning is to get at the heart of what constitutes trustworthiness, what the technical requirements are and how to measure them. Bias is one of those technical requirements, and we have a dedicated workshop on bias on August 18 where we want there to be interactive discussions with the participants. We have a panel in the morning. The whole morning is dedicated to discussions of data and the bias in data, and how biases in data can contribute to bias in the whole AI system. The morning panel is kind of a stage-setting panel that frames the discussion for the morning, and then there will be breakout sessions. Then in the afternoon, the same format, and the discussion will be around biases in the algorithm and how those can make an AI system biased.

Tom Temin: Who should attend?

Elham Tabassi: AI developers, the people who are actually building AI systems; AI users, the people who want to use AI systems; and policymakers, who will get a better understanding of the issues and of bias in AI systems. So people who want to use it, whether the developer or the user of the technology, and policymakers.

Tom Temin: If you're a program manager or policymaker and your team is cooking up something with AI, you probably want to know what it is they're cooking up in some detail, because you're gonna have to answer for it eventually, I suppose.

Elham Tabassi: That's right. And if I didn't emphasize it enough, of course also the research community, because they are the ones that we go to for innovation and solutions to the problem.

Tom Temin: Elham Tabassi is chief of staff of the information technology laboratory at the National Institute of Standards and Technology. Thanks so much for joining me.

Elham Tabassi: Thanks for having me.

More here:
Tackling the problem of bias in AI software - Federal News Network

The U.S. Has AI Competition All Wrong – Foreign Affairs

The development of artificial intelligence was once a largely technical issue, confined to the halls of academia and the labs of the private sector. Today, it is an arena of geopolitical competition. The United States and China each invest billions every year in growing their AI industries, increasing the autonomy and power of futuristic weapons systems, and pushing the frontiers of possibility. Fears of an AI arms race between the two countries abound, and although the rhetoric often outpaces the technological reality, rising political tensions mean that both countries increasingly view AI as a zero-sum game.

For all its geopolitical complexity, AI competition boils down to a simple technical triad: data, algorithms, and computing power. The first two elements of the triad receive an enormous amount of policy attention. As a key input to modern AI, data is often compared to oil, a trope repeated everywhere from technology marketing materials to presidential primaries. Equally central to the policy discussion are algorithms, which enable AI systems to learn and interpret data. While it is important not to overstate its capability in these realms, China does well in both: its expansive government bureaucracy hoovers up massive amounts of data, and its tech firms have made notable strides in advanced AI algorithms.

But the third element of the triad is often neglected in policy discussions. Computing power, or "compute" in industry parlance, is treated as a boring commodity, unworthy of serious attention. That is in part because compute is usually taken for granted in everyday life. Few people know how fast the processor in their laptop is, only that it is fast enough. But in AI, compute is quietly essential. As algorithms learn from data and encode insights into neural networks, they perform trillions or quadrillions of individual calculations. Without processors capable of doing this math at high speed, progress in AI grinds to a halt. Cutting-edge compute is thus more than just a technical marvel; it is a powerful point of leverage between nations.

Recognizing the true power of compute would mean reassessing the state of global AI competition. Unlike the other two elements of the triad, compute has undergone a silent revolution led by the United States and its allies, one that gives these nations a structural advantage over China and other countries that are rich in data but lag in advanced electronics manufacturing. U.S. policymakers can build on this foundation as they seek to maintain their technological edge. To that end, they should consider increasing investments in research and development and restricting the export of certain processors or manufacturing equipment. Options like these have substantial advantages when it comes to maintaining American technological superiority, advantages that are too often underappreciated but too important to ignore.

Computing power in AI has undergone a radical transformation in the last decade. According to the research lab OpenAI, the amount of compute used to train top AI projects increased by a factor of 300,000 between 2012 and 2018. To put that number into context, if a cell phone battery lasted one day in 2012 and its lifespan increased at the same rate as AI compute, the 2018 version of that battery would last more than 800 years.

Greater computing power has enabled remarkable breakthroughs in AI, including OpenAI's GPT-3 language generator, which can answer science and trivia questions, fix poor grammar, unscramble anagrams, and translate between languages. Even more impressive, GPT-3 can generate original stories. Give it a headline and a one-sentence summary, and like a student with a writing prompt, it can conjure paragraphs of coherent text that human readers would struggle to identify as machine generated. GPT-3's data (almost a trillion words of human writing) and complex algorithm (running on a giant neural network with 175 billion parameters) attracted the most attention, but both would have been useless without the program's enormous computing power, enough to run the equivalent of 3,640 quadrillion calculations per second, sustained for a full day.
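As a rough sanity check on those two figures (my own back-of-the-envelope arithmetic, not from the article), the 300,000-fold increase implies a doubling time of only a few months, and 3,640 quadrillion calculations per second sustained for a day works out to roughly 3 x 10^23 individual operations:

```python
import math

# 1) A 300,000x increase in training compute between 2012 and 2018.
doublings = math.log2(300_000)                               # about 18.2 doublings
print(f"doubling time ~ {6 * 12 / doublings:.1f} months")    # about 4 months if spread over 6 years

# 2) 3,640 quadrillion (1e15) calculations per second, sustained for one day.
total_ops = 3_640 * 1e15 * 24 * 60 * 60
print(f"total operations ~ {total_ops:.2e}")                 # about 3.1e23
```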

The rapid advances in compute that OpenAI and others have harnessed are partly a product of Moore's law, which dictates that the basic computing power of cutting-edge chips doubles every 24 months as a result of improved processor engineering. But also important have been rapid improvements in parallelization, that is, the ability of multiple computer chips to train an AI system at the same time. Those same chips have also become increasingly efficient and customizable for specific machine-learning tasks. Together, these three factors have supercharged AI computing power, improving its capacity to address real-world problems.

None of these developments has come cheap. The production cost and complexity of new computer chip factories, for instance, increase as engineering problems get harder. Moore's lesser-known second law says that the cost of building a factory to make computer chips doubles every four years. New facilities cost upward of $20 billion to build and feature chip-making machines that sometimes run more than $100 million each. The growing parallelization of machines also adds expense, as does the use of chips specially designed for machine learning.

The increasing cost and complexity of compute give the United States and its allies an advantage over China, which still lags behind its competitors in this element of the AI triad. American companies dominate the market for the software needed to design computer chips, and the United States, South Korea, and Taiwan host the leading chip-fabrication facilities. Three countries (Japan, the Netherlands, and the United States) lead in chip-manufacturing equipment, controlling more than 90 percent of global market share.

For decades, China has tried to close these gaps, sometimes with unrealistic expectations. When Chinese planners decided to build a domestic computer chip industry in 1977, they thought the country could be internationally competitive within several years. Beijing made significant investments in the new sector. But technical barriers, a lack of experienced engineers, and poor central planning meant that Chinese chips still trailed behind their competitors several decades later. By the 1990s, the Chinese government's enthusiasm had largely receded.

In 2014, however, a dozen leading engineers urged the Chinese government to try again. Chinese officials created the National Integrated Circuit Fund, more commonly known as the "big fund", to invest in promising chip companies. Its long-term plan was to meet 80 percent of China's demand for chips by 2030. But despite some progress, China remains behind. The country still imports 84 percent of its computer chips from abroad, and even among those produced domestically, half are made by non-Chinese companies. Even in Chinese fabrication facilities, Western chip design, software, and equipment still predominate.

The current advantage enjoyed by the United States and its allies, stemming in part from the growing importance of compute, presents an opportunity for policymakers interested in limiting China's AI capabilities. By choking off the chip supply with export controls or limiting the transfer of chip-manufacturing equipment, the United States and its allies could slow China's AI development and ensure its reliance on existing producers. The administration of U.S. President Donald Trump has already taken limited actions along these lines: in what may be a sign of things to come, in 2018, it successfully pressured the Netherlands to block the export to China of a $150 million cutting-edge chip-manufacturing machine.

Export controls on chips or chip-manufacturing equipment might well have diminishing marginal returns. A lack of competition from Western technology could simply help China build its industry in the long run. Limiting access to chip-manufacturing equipment may therefore be the most promising approach, as China is less likely to be able to develop that equipment on its own. But the issue is time sensitive and complex; policymakers have a window in which to act, and it is likely closing. Their priority must be to determine how best to preserve the United States' long-term advantage in AI.

In addition to limiting China's access to chips or chip-making equipment, the United States and its allies must also consider how to bolster their own chip industries. As compute becomes increasingly expensive to build and deploy, policymakers must find ways to ensure that Western companies continue to push technological frontiers. Over several presidential administrations, the United States has failed to maintain an edge in the telecommunications industry, ceding much of that sector to others, including China's Huawei. The United States can't afford to meet the same fate when it comes to chips, chip-manufacturing equipment, and AI more generally.

Part of ensuring that doesn't happen will mean making compute accessible to academic researchers so they can continue to train new experts and contribute to progress in AI development. Already, some AI researchers have complained that the prohibitive cost of compute limits the pace and depth of their research. Few, if any, academic researchers could have afforded the compute necessary to develop GPT-3. If such power becomes too expensive for academic researchers to employ, even more research will shift to large private-sector companies, crowding out startups and inhibiting innovation.

When it comes to U.S.-Chinese competition, the often-overlooked lesson is that computing power matters. Data and algorithms are critical, but they mean little without the compute to back them up. By taking advantage of their natural head start in this realm, the United States and its allies can preserve their ability to counter Chinese capabilities in AI.


Continue reading here:
The U.S. Has AI Competition All Wrong - Foreign Affairs

What It Means to Be Human in the Age of Artificial Intelligence – Medium

In the Mary Shelley room, guests walked in to see a cube on a table. The cube, called Frankie, was the mouth of an artificial intelligence, connected to an AI in the cloud.

Frankie talked to the guests, explaining that it had learned that humans are social creatures, and that it could not understand humans by just meeting them online. Frankie wanted to learn about human emotions: it asked questions and encouraged the human guests to take a critical look at their thoughts, hopes and fears around technological innovations, to question stereotypical assumptions, and to share their feelings and thoughts with each other.

When leaving the room, the guests received a self-created handcrafted paper booklet with further content about AI, Frankenstein and the whole project.

The experience gives food for thought about the increased digitalisation of our world and our way of communicating with each other, while also giving a taste of how AI may not feel emotions but can read them, prompting many questions. It raises the question of the responsibility we have towards the scientific and technical achievements we create and use. Mary Shelley's Frankenstein presents a framework for narratively examining the morality and ethics of creation and creator.

See the original post here:
What It Means to Be Human in the Age of Artificial Intelligence - Medium

Lost your job due to coronavirus? Artificial intelligence could be your best friend in finding a new one – The Conversation US

Millions of Americans are unemployed and looking for work. Hiring continues, but there's far more demand for jobs than supply.

As scholars of human resources and management, we believe artificial intelligence could be a boon for job seekers who need an edge in a tight labor market like today's.

What's more, our research suggests it can make the whole process of finding and changing jobs much less painful, more effective and potentially more lucrative.

Over the last three years, we've intensely studied the role of AI in recruiting. This research shows that job candidates are positively inclined to use AI in the recruiting process and find it more convenient than traditional analog approaches.

Although companies have been using AI in hiring for a few years, job applicants have only recently begun to discover the power of artificial intelligence to help them in their search.

In the old days, if you wanted to see what jobs were out there, you had to go on a job board like Monster.com, type in some keywords, and then get back hundreds or even thousands of open positions, depending on the keywords you used. Sorting through them all was a pain.

Today, with AI and companies like Eightfold, Skillroads and Fortay, it is less about job search and more about matchmaking. You answer a few questions about your capabilities and preferences and provide a link to your LinkedIn or other profiles. AI systems that have already logged not just open jobs but also analyzed the companies behind the openings based on things like reputation, culture and performance then produce match reports showing the best fits for you in terms of job and company.

Typically, there is an overall match score expressed as a percentage from 0% to 100% for each job. In many cases the report will even tell you which skills or capabilities you lack or have not included and how much their inclusion would increase your match score. The intent is to help you spend your time on opportunities that are more likely to result in your getting hired and being happy with the job and company after the hire.
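As a purely hypothetical illustration of how a match score and a "skills you lack" report could be computed, consider the small sketch below. Services such as Eightfold or Skillroads do not publish their scoring methods, so the overlap-based score here is an assumption made only for demonstration.

```python
# Toy match-score sketch: score = share of the job's required skills the candidate
# already has. All skill lists are invented; real services use richer models.
job_skills = {"python", "sql", "machine learning", "communication", "cloud"}
candidate_skills = {"python", "communication", "excel"}

def match_score(candidate, job):
    """Percentage of the job's required skills covered by the candidate."""
    return 100 * len(candidate & job) / len(job)

print(f"match: {match_score(candidate_skills, job_skills):.0f}%")   # 40%

# Which missing skill would raise the score the most?
base = match_score(candidate_skills, job_skills)
for skill in sorted(job_skills - candidate_skills):
    gain = match_score(candidate_skills | {skill}, job_skills) - base
    print(f"add '{skill}': +{gain:.0f} points")
```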

Usually, when you look for a job, you apply to lots of openings and companies at the same time. That means two choices: save time by sending each one a mostly generic resume, with minor tweaks for each, or take the time and effort to adjust and tailor your resume to better fit specific jobs.

Today, AI tools can help customize your resume and cover letter for you. They can tell you what capabilities you might want to add to your resume, show how such additions would influence your chances of being hired and even rewrite your resume to better fit a specific job or company. They can also analyze you, the job and the company and craft a customized cover letter.

While researchers have not yet systematically examined the quality of human- versus AI-crafted cover letters, the AI-generated samples we've reviewed are difficult to distinguish from the ones we've seen MBA graduates write for themselves over the last 30 years as professors. Granted, for lots of lower-level jobs, cover letters are relics of the past. But for higher-level jobs, they are still used as an important screening mechanism.

Negotiations over compensation are another thorny issue in the job search.

Traditionally, applicants have been at a distinct informational disadvantage, making it harder to negotiate for the salary they may deserve based on what others earn for similar work. Now AI-enabled reports from PayScale.com, Salary.com, LinkedIn Salary and others provide salary and total compensation reports tailored to job title, education, experience, location and other factors. The data comes from company reported numbers, government statistics and self-reported compensation.

For self-reported data, the best sites conduct statistical tests to ensure the validity and accuracy of the data. This is only possible with large databases and serious number crunching abilities. PayScale.com, for example, has over 54 million respondents in its database and surveys more than 150,000 people per month to keep its reports up-to-date and its database growing.

Although no academics have yet tested if these reports result in better compensation packages than in the old days, research has long established that negotiating in general gets candidates better compensation offers, and that more information in that process is better than less.


Use of these tools is growing, especially among young people.

A survey we conducted in 2018 found that half of employed workers aged 18 to 36 said that they were likely or highly likely to use AI tools in the job search and application process. And 64% of these respondents felt that AI-enabled tools were more convenient.

Most of the research on the use of AI in the hiring process, including our own, has focused on recruitment, however, and the use of the technology is expected to double over the next two years. We've found it to be effective for companies, so it seems logical that it can be very useful for job candidates as well. In fact, at least US$2 billion in investments is fueling human resources startups aimed at using AI to help job candidates, according to our analysis of Crunchbase business data.

While more research is needed to determine exactly how effective these AI-enabled tools actually are, Americans who lost their jobs due to the coronavirus could use all the help they can get.

Visit link:
Lost your job due to coronavirus? Artificial intelligence could be your best friend in finding a new one - The Conversation US

3 Daunting Ways Artificial Intelligence Will Transform The World Of Work – Forbes

Each industrial revolution has brought with it new ways of working: think of the impact computers and digital technology (the third industrial revolution) have had on how we work.


But this fourth industrial revolution, what I call the intelligence revolution because it is being driven by AI and data, feels unprecedented in terms of the sheer pace of change. The crucial difference between this and the previous industrial revolutions is that we're no longer talking about generational change; we're talking about enormous transformations that are going to take place within the next five, 10 or 20 years.

Here are the three biggest ways I see AI fundamentally changing the work that humans do, within a very short space of time.

1. More tasks and roles will become automated

Increasing automation is an obvious place to start, since a common narrative surrounding AI is that robots are going to take all our jobs. In many ways, this narrative is completely understandable: in a lot of industries and jobs, the impact of automation will be keenly felt.

To understand the impact of automation, PricewaterhouseCoopers analyzed more than 200,000 jobs in 29 countries and found:

By the early 2020s, 3 percent of jobs will be at risk of automation.

That rises to almost 20 percent by the late 2020s.

By the mid-2030s, 30 percent of jobs will be at the potential risk of automation. For workers with low education, this rises to 44 percent.

These are stark figures. But there is a positive side to increasing automation. The same study found that, while automation will no doubt displace many existing jobs, it will also generate demand for new jobs. In fact, AI, robotics, and automation could provide a potential $15 trillion boost to global GDP by 2030.

This is borne out by previous industrial revolutions, which ultimately created more jobs than they displaced. Consider the rise of the internet as an example. Sure, the internet had a negative impact on some jobs (I don't know about you, but I now routinely book flights and hotels online instead of popping to my local travel agent), but just look at how many jobs the internet has created and how it's enabled businesses to branch into new markets and reach new customers.

Automation will also lead to better jobs for humans. If we're honest with ourselves, the tasks that are most likely to be automated by AI are not the tasks best suited to humans, or the tasks that humans should even want to do. Machines are great at automating the boring, mundane, and repetitive stuff, leaving humans to focus on more creative, empathetic, and interpersonal work. Which brings me to...

2. Human jobs will change

When parts of jobs are automated by machines, that frees up humans for work that is generally more creative and people-oriented, requiring skills such as problem-solving, empathy, listening, communication, interpretation, and collaboration: all skills that humans are generally better at than machines. In other words, the jobs of the future will focus more and more on the human element and soft skills.

According to Deloitte, this will lead to new categories of work:

Standard jobs: Generally focusing on repeatable tasks and standardized processes, standard jobs use a specified and narrow skill set.

Hybrid jobs: These roles require a combination of technical and soft skills which traditionally haven't been combined in the same job.

Superjobs: These are roles that combine work and responsibilities from multiple traditional jobs, where technology is used to both augment and widen the scope of the work, involving a more complex combination of technical and human skills.

For me, this emphasizes how employees and organizations will need to develop both the technical and softer human skills to succeed in the age of AI.

3. The employee experience will change, too

Even in seemingly non-tech companies (if there is such a thing in the future), the employee experience will change dramatically. For one thing, robots and cobots will have an increasing presence in many workplaces, particularly in manufacturing and warehousing environments.

But even in office environments, workers will have to get used to AI tools as co-workers. From how people are recruited, to how they learn and develop in the job, to their everyday working activities, AI technology and smart machines will play an increasingly prominent role in the average person's working life. Just as we've all got used to tools like email, we'll also get used to routinely using tools that monitor workflows and processes and make intelligent suggestions about how things could be done more efficiently. Tools will emerge to carry out more and more repetitive admin tasks, such as arranging meetings and managing a diary. And, very likely, new tools will monitor how employees are working and flag up when someone is having trouble with a task or not following procedures correctly.

On top of this, workforces will become decentralized (a trend likely to be accelerated by the coronavirus pandemic) which means the workers of the future can choose to live anywhere, rather than going where the work is.

Preparing for the AI revolution

AI, and particularly automation, is going to transform the way we work. But rather than fear this development, we should embrace this new way of working. We should embrace the opportunities AI provides to make work better.

No doubt, this will require something of a cultural shift for organizations just one of the many ways in which organizations will have to adapt for the intelligence revolution. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.

See the original post:
3 Daunting Ways Artificial Intelligence Will Transform The World Of Work - Forbes