What is AI? Everything you need to know about artificial intelligence


It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.

At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, the list goes on and on.

Artificial general intelligence is very different, and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or to reason about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today and AI experts are fiercely divided over how soon it will become a reality.


A survey of four groups of experts conducted in 2012/13 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50 percent chance that artificial general intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. The group went even further, predicting that so-called 'superintelligence' -- which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" -- was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and believe that AGI is still centuries away.

There is a broad body of research in AI, many strands of which feed into and complement each other.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have 'learned' how to carry out a particular task.
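That training loop can be sketched in a few lines of Python. This is a deliberately minimal illustration, assuming a single artificial neuron learning the logical AND function; the learning rate, epoch count and function names are all invented for the example:

```python
# A minimal sketch of the weight-adjustment loop described above,
# assuming a single artificial neuron learning the logical AND function.
# Real networks use many layers and gradient descent; this is illustration only.

def step(x):
    """A hard threshold standing in for a neuron's activation."""
    return 1 if x > 0 else 0

def train_neuron(samples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = step(w0 * x0 + w1 * x1 + bias)
            error = target - out      # how far the output is from what is desired
            w0 += lr * error * x0     # vary each weight to shrink the error,
            w1 += lr * error * x1     # exactly the process the text describes
            bias += lr * error
    return w0, w1, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_neuron(AND)
for (x0, x1), target in AND:
    assert step(w0 * x0 + w1 * x1 + b) == target  # the neuron has 'learned' AND
```

Once the output matches the desired labels for every example, training stops: the weights now encode the task.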

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.


There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.
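The order-sensitivity that makes recurrent networks suited to language and speech can be shown with a toy sketch. The single recurrent unit below uses fixed, hand-picked weights rather than trained ones:

```python
import math

# A toy illustration of recurrence: one unit carries a hidden state
# across a sequence, so earlier inputs influence how later ones are read.
# The weights here are invented constants, not trained values.

def recurrent_pass(sequence, w_in=0.5, w_rec=0.8):
    h = 0.0                                  # hidden state: the net's 'memory'
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)  # new state mixes input with memory
    return h

# The same inputs in a different order leave a different final state,
# which is why recurrent nets can be sensitive to word order:
assert recurrent_pass([1, 0, 0]) != recurrent_pass([0, 0, 1])
```

LSTM units extend this idea with gates that control what the hidden state remembers and forgets, which is what lets them handle longer sequences.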

The structure and training of deep neural networks.

Another area of AI research is evolutionary computation, which borrows from Darwin's famous theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.
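The cycle of selection, crossover and mutation can be sketched with a toy problem: evolving bit strings toward an all-ones target. Every name and parameter below is invented for illustration:

```python
import random

# A toy genetic algorithm: bit-string 'genomes' undergo selection,
# crossover and random mutation across generations, evolving toward
# an all-ones target. All parameters here are invented.
random.seed(0)

GENOME_LEN = 16

def fitness(genome):
    return sum(genome)              # number of 1s: higher is fitter

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=60):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    best = max(population, key=fitness)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]     # selection: fitter half breeds
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(pop_size)]
        best = max(population + [best], key=fitness)
    return best

best = evolve()
assert fitness(best) >= 12   # far above the random-start average of 8
```

Neuroevolution applies the same loop, but the 'genome' encodes a neural network's weights or architecture instead of raw bits.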

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

Finally there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing the machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.
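A toy version of such a rule-based system might look like the sketch below; the rules, fact names and thresholds are invented, not drawn from any real autopilot:

```python
# A toy rule-based expert system in the spirit of the autopilot example:
# hand-written rules map observed facts to a decision. Everything here is
# invented for illustration, not taken from any real system.

RULES = [
    (lambda f: f["altitude_ft"] < 500 and f["descending"], "pull up"),
    (lambda f: f["airspeed_kts"] < 120, "increase thrust"),
    (lambda f: True, "maintain course"),   # default when no other rule fires
]

def decide(facts):
    for condition, action in RULES:        # first matching rule wins
        if condition(facts):
            return action

assert decide({"altitude_ft": 400, "descending": True, "airspeed_kts": 150}) == "pull up"
assert decide({"altitude_ft": 3000, "descending": False, "airspeed_kts": 100}) == "increase thrust"
assert decide({"altitude_ft": 3000, "descending": False, "airspeed_kts": 150}) == "maintain course"
```

Unlike machine learning, nothing here is learned from data: every rule is authored by a human expert, which is both the approach's strength and its limitation.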

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google and Microsoft, have moved to using specialized chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end graphics processing units (GPUs).

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using a very large number of labeled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labeled to indicate whether they contain a dog or written sentences that have footnotes to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labeling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.
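The idea of applying labels learned from annotated examples to new data can be illustrated with one of the simplest supervised learners, a nearest-neighbour classifier. The fruit data and features below are invented for the example:

```python
# A bare-bones supervised learner: classify a new sample by the label of
# its nearest annotated example (1-nearest-neighbour). The labelled fruit
# data is invented purely for illustration.

LABELLED = [
    ((150, 7.0), "apple"),     # (weight in grams, diameter in cm)
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((130, 6.5), "orange"),
]

def classify(sample):
    def squared_distance(example):
        (w, d), _ = example
        return (w - sample[0]) ** 2 + (d - sample[1]) ** 2
    _, label = min(LABELLED, key=squared_distance)
    return label

# Labels attached to training examples are applied to new, unseen data:
assert classify((160, 7.2)) == "apple"
assert classify((125, 6.2)) == "orange"
```

Real systems use millions of labelled examples rather than four, but the principle is the same: the annotations supply the answers the system learns to reproduce.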


Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size -- Google's Open Images Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through Amazon Mechanical Turk -- who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, for example Google News grouping together stories on similar topics each day.
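That grouping-by-similarity can be sketched with one-dimensional k-means, here clustering invented engine sizes with no labels supplied in advance:

```python
# A sketch of unsupervised clustering: one-dimensional k-means grouping
# cars by engine size. No labels are given; the algorithm simply looks
# for values that sit close together. Data and parameters are invented.

def kmeans_1d(values, k=2, iterations=10):
    centroids = values[:k]               # naive initialisation
    clusters = []
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:                 # assign each value to nearest centroid
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]  # move centroids to means
    return clusters

engine_sizes = [1.0, 1.2, 1.4, 3.0, 3.2, 3.6]   # litres, invented data
small, large = kmeans_1d(engine_sizes)
assert small == [1.0, 1.2, 1.4] and large == [3.0, 3.2, 3.6]
```

The two groups emerge purely from the structure of the data; nothing told the algorithm what a 'small' or 'large' engine is.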

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game the system builds a model of which action will maximize the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
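The trial-and-error process of building a model of which action maximizes reward can be sketched with tabular Q-learning, a classic reinforcement-learning algorithm and a simpler relative of the approach behind the Deep Q-network. This toy version uses a five-cell corridor with a single reward and, for determinism, sweeps every state-action pair rather than sampling episodes:

```python
# A reinforcement-learning sketch: tabular Q-learning on a five-cell
# corridor whose rightmost cell holds the only reward. All states,
# rewards and constants are invented for illustration.

N, GOAL = 5, 4
ACTIONS = (-1, 1)                       # move left or move right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(50):                     # repeated rounds of trial and error
    for s in range(N - 1):              # the goal cell is terminal
        for a in ACTIONS:
            nxt, reward = step(s, a)
            future = 0.0 if nxt == GOAL else max(Q[(nxt, x)] for x in ACTIONS)
            # nudge the estimate toward reward plus discounted future value
            Q[(s, a)] += 0.5 * (reward + 0.9 * future - Q[(s, a)])

# The learned policy is to move right everywhere, maximising the reward:
assert all(Q[(s, 1)] > Q[(s, -1)] for s in range(GOAL))
```

The Deep Q-network replaces this small table with a deep neural network, which is what lets it handle inputs as rich as raw game pixels.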

Many AI-related technologies are approaching, or have already reached, the 'peak of inflated expectations' in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait.

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google with its DeepMind AI AlphaGo that has made the biggest impact on public awareness of AI.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let users use its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don't want to build their own machine learning models but instead want to consume AI-powered, on-demand services -- such as voice, vision, and language recognition -- Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella -- and recently investing $2bn in buying The Weather Company to unlock a trove of data to augment its AI services.

Internally, each of the tech giants -- and others such as Facebook -- uses AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam -- the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft Cortana.

The Amazon Echo Plus is a smart speaker with access to Amazon's Alexa virtual assistant built in.

These assistants rely heavily on voice recognition and natural-language processing, and need an immense corpus to draw upon when answering queries, so a huge amount of tech goes into developing them.

But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space -- Google Assistant with its ability to answer a wide range of queries and Amazon's Alexa with the massive number of 'Skills' that third-party devs have created to add to its capabilities.


Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion that major PC makers will build Alexa into laptops adding to speculation about whether Cortana's days are numbered, although Microsoft was quick to reject this.

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. As a country China is pursuing a three-step plan to turn AI into a core industry for the country, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu's self-driving car, a modified BMW 3 series.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China's favor.

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.

There are too many to put together a comprehensive list, but some recent highlights include the following: in 2009 Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each -- setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

IBM Watson competes on Jeopardy! on January 14, 2011.

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision, with Google training a system to recognise an internet favorite, pictures of cats.

Since Watson's win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 moves per turn, compared to about 20 in Chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. At last year's prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year, a system trained by OpenAI defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, followed shortly by Facebook training agents to negotiate and even lie.

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, use of AI is helping robots move into new areas such as self-driving cars, delivery robots, as well as helping robots to learn new skills. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone's voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people's image, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost 95 percent. Recently Microsoft's Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, provided the face is clear enough on the video. While police forces in western countries have generally only trialled using facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it's likely this more intrusive use of AI technology -- including AI that can recognize emotions -- will gradually become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM's Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK's National Health Service, where it will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a "fundamental risk to the existence of human civilization". As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI he set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking has warned that once a sufficiently advanced AI is created it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft's director of research in Cambridge, England, stresses how different the narrow intelligence of AI today is from the general intelligence of humans, dismissing worries about "Terminator and the rise of the machines and so on" as "utter nonsense" and saying that, at best, such discussions are decades away.

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future possibility.

While AI won't replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn't have the potential to impact. As AI expert Andrew Ng puts it: "many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work", saying he sees a "significant risk of technological unemployment over the next few decades".

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers just take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon again is leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it's not a given that manual and robotic labor will continue to grow hand-in-hand.

Amazon bought Kiva Systems in 2012 and today uses Kiva robots throughout its warehouses.


Artificial intelligence | MIT News

Weather's a problem for autonomous cars. MIT's new system shows promise by using ground-penetrating radar instead of cameras or lasers.

Tech-based solutions sought for challenges in work environments, education for girls and women, maternal and newborn health, and sustainable food.

MIT duo uses music, videos, and real-world examples to teach students the foundations of artificial intelligence.

PatternEx merges human and machine expertise to spot and respond to hacks.

In a Starr Forum talk, Luis Videgaray, director of MIT's AI Policy for the World Project, outlines key facets of regulating new technologies.

A deep-learning model identifies a powerful new drug that can kill many species of antibiotic-resistant bacteria.

MIT graduate student is assessing the impacts of artificial intelligence on military power, with a focus on the US and China.

The mission of SENSE.nano is to foster the development and use of novel sensors, sensing systems, and sensing solutions.

By organizing performance data and predicting problems, Tagup helps energy companies keep their equipment running.

Researchers develop a more robust machine-vision architecture by studying how human vision responds to changing viewpoints of objects.

Three-day hackathon explores methods for making artificial intelligence faster and more sustainable.

MIT's new system TextFooler can trick the types of natural-language-processing systems that Google uses to help power its search results, including audio for Google Home.

Starting with higher-value niche markets and then expanding could help perovskite-based solar panels become competitive with silicon.

With the initial organizational structure in place, the MIT Schwarzman College of Computing moves forward with implementation.

Doctoral candidate Natalie Lao wants to show that anyone can learn to use AI to make a better world.

Device developed within the Department of Civil and Environmental Engineering has the potential to replace damaged organs with lab-grown ones.

Computer scientists' new method could help doctors avoid ineffective or unnecessarily risky treatments.

Model tags road features based on satellite images, to improve GPS navigation in places with limited map data.

A new method determines whether circuits are accurately executing complex operations that classical computers can't tackle.

MIT researchers and collaborators have developed an open-source curriculum to teach young students about ethics and artificial intelligence.


32 Artificial Intelligence Companies You Should Know | Built In

From Google and Amazon to Apple and Microsoft, every major tech company is dedicating resources to breakthroughs in artificial intelligence. Personal assistants like Siri and Alexa have made AI a part of our daily lives. Meanwhile, revolutionary breakthroughs like self-driving cars may not be the norm, but are certainly within reach.

As the big guys scramble to infuse their products with artificial intelligence, other companies are hard at work developing their own intelligent technology and services. Here are 32 artificial intelligence companies and AI startups you may not know today, but you will tomorrow.


Industry: Healthtech, Biotech, Big Data

Location: Chicago, Illinois

What it does: Tempus uses AI to gather and analyze massive pools of medical and clinical data at scale. The company, with the assistance of AI, provides precision medicine that personalizes and optimizes treatments to each individual's specific health needs, relying on everything from genetic makeup to past medical history to diagnose and treat. Tempus is currently focusing on using AI to create breakthroughs in cancer research.


Industry: Big Data, Software

Location: Boston, Massachusetts

What it does: DataRobot provides data scientists with a platform for building and deploying machine learning models. The software helps companies solve challenges by finding the best predictive model for their data. DataRobot's tech is used in healthcare, fintech, insurance, manufacturing and even sports analytics.


Industry: Big Data, Software

Location: Chicago, Illinois

What it does: Narrative Science creates natural language generation (NLG) technology that can translate data into stories. By highlighting only the most relevant and interesting information, businesses can make quicker decisions regardless of the staff's experience with data or analytics.


Industry: Fintech

Location: New York, New York

What it does: AlphaSense is an AI-powered search engine designed to help investment firms, banks and Fortune 500 companies find important information within transcripts, filings, news and research. The technology uses artificial intelligence to expand keyword searches for relevant content.


Industry: Software

Location: New York, New York

What it does: Clarifai is an image recognition platform that helps users organize, curate, filter and search their media. Within the platform, images and videos are tagged, teaching the intelligent technology to learn which objects are displayed in a piece of media.


Industry: Machine Learning, Software

Location: Boston, Massachusetts

What it does: Neurala is developing "The Neurala Brain," a deep learning neural network software that makes devices like cameras, phones and drones smarter and easier to use. Neurala's solutions are currently used on more than a million devices. Additionally, companies and organizations like NASA, Huawei, Motorola and the Defense Advanced Research Projects Agency (DARPA) are also using the technology.


Industry: Automotive, Transportation

Location: Boston, Massachusetts

What it does: With a mission to provide safe, efficient driverless vehicles, nuTonomy is developing software that powers autonomous vehicles in cities around the world. The company uses AI to combine mapping, perception, motion planning, control and decision making into software designed to eliminate driver-error accidents.



Industry: Adtech, Software

Location: New York, New York

What it does: Persado is a marketing language cloud that uses AI-generated language to craft advertising for targeted audiences. With functionality across all channels, Persado helps businesses increase acquisitions, boost retention and build better relationships with their customers.


Industry: Machine Learning

Location: New York, New York

What it does: x.ai creates autonomous personal assistants powered by intelligent technology. The assistants, simply named Amy and Andrew Ingram, integrate with programs like Outlook, Google, Office 365 and Slack, schedule or update meetings, and continually learn from every interaction.


Industry: Software, Cloud

Location: Austin, Texas

What it does: CognitiveScale builds augmented intelligence for the healthcare, insurance, financial services and digital commerce industries. Its technology helps businesses increase customer acquisition and engagement while improving processes like billing and claims. CognitiveScale's products are used by such heavy hitters as P&G, Exxon, JPMorgan Chase, Macy's and NBC.


Industry: Biotech, Healthtech

Location: San Francisco, California

What it does: Freenome uses artificial intelligence to conduct innovative cancer screenings and diagnostic tests. Using non-invasive blood tests, the company's AI technology recognizes disease-associated patterns, providing earlier cancer detection and better treatment options.


Industry: Robotics

Location: Pleasanton, California

What it does: AEye builds the vision algorithms, software and hardware that ultimately become the eyes of autonomous vehicles. Its LiDAR technology focuses on the most important information in a vehicle's sightline, such as people, other cars and animals, while putting less emphasis on things like the sky, buildings and surrounding vegetation.
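As a rough illustration of that prioritization idea only: AEye's actual LiDAR scheduling is proprietary, so the class weights and the `allocate_scans` helper below are invented for this sketch of spending more sensing budget on safety-critical objects.

```python
# Hypothetical priority weights per scene class (not AEye's real values).
PRIORITY = {"person": 3, "vehicle": 3, "animal": 2,
            "building": 1, "vegetation": 1, "sky": 0}

def allocate_scans(detections, budget):
    """Split a fixed per-frame scan budget across detected object classes
    in proportion to their priority weight."""
    total = sum(PRIORITY.get(d, 0) for d in detections)
    if total == 0:
        return {d: 0 for d in detections}
    return {d: budget * PRIORITY.get(d, 0) // total for d in detections}

# People and cars get the scans; the sky gets none.
print(allocate_scans(["person", "vehicle", "sky"], budget=600))
# {'person': 300, 'vehicle': 300, 'sky': 0}
```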

Industry: Machine Learning, Robotics

Location: Menlo Park, California

What it does: AIBrain is working to create fully autonomous artificial intelligence. By fusing problem solving, learning and memory technologies together, the company can build systems that learn and adapt without human assistance.

Industry: Agriculture, Robotics, Software

Location: Sunnyvale, California

What it does: Blue River Tech combines artificial intelligence and computer vision to build smarter farm tech. The company's See & Spray machine learning technology, for example, can detect individual plants and apply herbicide only to the weeds. The approach not only helps curb herbicide resistance but also reduces the volume of chemicals currently sprayed by 90%.

Industry: Software

Location: Oakland, California

What it does: Vidado can pull data from virtually any channel, including handwritten documents, dramatically increasing the speed and accuracy of paper-to-digital workflows. The cloud-based platform is used by leading organizations and companies like New York Life, the FDA, MetLife and MassMutual.

Industry: Legal, Software

Location: San Francisco

What it does: Casetext is an AI-powered legal search engine with a database of more than 10 million statutes, cases and regulations. The company's tech, called CARA A.I., can search within the language, jurisdiction and citations of a user's uploaded documents and return relevant results from the database.

Industry: Cloud, Robotics

Location: Santa Clara, California

What it does: CloudMinds provides cloud robot services for the finance, healthcare, manufacturing, power utilities, public sector and enterprise mobility industries. Its cloud-based AI uses advanced algorithms, large-scale neural networks and training data to make smarter robots for image and object recognition, natural language processing, speech recognition and more.

Industry: Software

Location: San Francisco, California

What it does: Figure Eight provides AI training software to machine learning and data science teams. The company's "human-in-the-loop" platform uses human intelligence to train and test machine learning models, and has powered AI projects for major companies like Oracle, eBay, SAP and Adobe.

Industry: Big Data, Software

Location: Mountain View, California

What it does: H2O.ai is the creator of H2O, an open source platform for data science and machine learning that is used by thousands of organizations worldwide. H2O.ai supplies companies in a variety of industries with predictive analytics and machine learning tools that aid in solving critical business challenges.

Industry: Biotech

Location: Bethesda, Maryland

What it does: Insilico Medicine is using artificial intelligence for anti-aging and drug discovery research. The company's drug discovery engine contains millions of samples for finding disease identifiers. Insilico's tools are used by academic institutions and by pharmaceutical and cosmetic companies.

Industry: Software, Automotive

Read this article:
32 Artificial Intelligence Companies You Should Know | Built In

A 5-Year Vision for Artificial Intelligence in Higher Ed – EdTech Magazine: Focus on Higher Education

The Historical Hype Cycle of AI

Before talking about the current and projected impact of AI in education and other industries, Ramsey explained the concept of the AI winter.

He showed a graph on the historical hype cycle of AI that featured peaks and drops over a 70-year period.

There was a big peak in the mid-1960s, when there was an emergence of symbolic AI research and new insights into the possibility of training two-layer neural networks. A resurgence came in the 1980s with the invention of certain algorithms for training three-plus layer neural networks.

The graph showed a drop in the mid-1990s, as the computational horsepower and data did not exist to develop real-world applications for AI, a situation he calls an "AI winter." We are in the middle of another resurgence today, he said.

"There has been a huge increase in the amount of data and computer power that we have available, sparking research," Ramsey said. "People have been able to start inventing algorithms and training not just three-layer neural networks but 100-layer ones."

The question now is where we will go next, he said. His answer? We will sustain progress, leading to true or "strong" AI: the point at which a machine's intellectual capability is functionally equal to a human's.

"The number of researchers working on this, the amount of money that's being spent on this and the amount of research publications: it's all growing," he said. "And where Google is right now is on a plateau of productivity, because we're using AI in everything that we do, at scale."


During his presentation, Ramsey showed an infographic that featured what machine learning could look like across a student's journey through higher education, starting from their college search and ending with employment.

For example, he said, colleges and universities can apply machine learning when targeting quality prospective students to attend their schools. They can even automate call center operations to make contacting prospective students more efficient and deploy AI-driven assistants to engage with applicants in a personalized way, he said.

Once students are enrolled, schools can also use AI chatbots to improve student support services, assisting new students in their adjustment to college. They can leverage adaptive learning technology to predict performance as students choose a path through school, and they can tailor material to students' knowledge levels and learning styles.

For example, a machine learning algorithm helped educators at Ivy Tech Community College in Indianapolis identify at-risk students and provide early intervention, Ramsey said.

Ivy Tech shifted to Google Cloud Platform, which allowed the school to manage 12 million data points from student interactions and develop a flexible AI engine to analyze student engagement and success. For instance, a student who stops logging in to their learning management system or showing up to class would be flagged as needing assistance.

The predictions were 83 percent accurate, Ramsey said. "It worked quite well, and they were actually able to save students from dropping out, which makes a big difference because their funding is based on how many students they have," he said.
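The engagement flags described above can be sketched as a simple rule. This is a minimal, hypothetical illustration only: Ivy Tech's actual model draws on millions of data points, and the `max_idle_days` and `max_absences` thresholds here are invented.

```python
from datetime import date

def flag_at_risk(last_login: date, recent_absences: int, today: date,
                 max_idle_days: int = 7, max_absences: int = 3) -> bool:
    """Flag a student as needing outreach when engagement signals drop:
    no LMS login for more than `max_idle_days` days, or more than
    `max_absences` missed class sessions (hypothetical thresholds)."""
    idle_days = (today - last_login).days
    return idle_days > max_idle_days or recent_absences > max_absences

# A student who last logged in 10 days ago is flagged; an active one is not.
print(flag_at_risk(date(2020, 3, 1), 0, today=date(2020, 3, 11)))   # True
print(flag_at_risk(date(2020, 3, 10), 1, today=date(2020, 3, 11)))  # False
```

A real system would learn such thresholds from historical outcomes rather than hard-coding them.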

As students near graduation and start their job searches, schools can also use AI to understand career trends and match them to a student's competencies and skills. Machine learning can be used to better understand job listings and a jobseeker's intent, matching candidates to their ideal jobs more quickly.

"At the end of the day, what we're doing with these technologies is trying to understand who we are and how our minds work," Ramsey said. "Once we fully understand that, we can build machines that function in the same way, and the possibilities are endless."

Read the original here:
A 5-Year Vision for Artificial Intelligence in Higher Ed - EdTech Magazine: Focus on Higher Education

Parasoft introduces Artificial Intelligence (AI) and Machine Learning (ML) into Software Test Automation for the Safety-critical Market at Embedded…

Parasoft C/C++test's new functionality offers teams the ability to link test cases to requirements, along with code coverage enhancements, improving productivity instantly

MONROVIA, Calif. and NUREMBERG, Germany, Feb. 27, 2020 /PRNewswire/ -- Parasoft, the global automated software testing authority since 1987, announced today at Embedded World the new release of Parasoft C/C++test, a unified C and C++ development testing solution for real-time safety- and security-critical embedded applications and enterprise IT. With this new release, Parasoft applies a new approach to expedite software code analysis findings and increase the productivity of automated software testing, allowing teams to achieve industry compliance standards easily.

Parasoft introduces AI and Machine Learning into Software Test Automation

To learn more about Parasoft C/C++test, visit: https://www.parasoft.com/products/ctest.

"With the new release of C/C++test, we are bringing unique AI and ML capabilities to help organizations with the adoption of static analysis for secure safety-critical application development. With these innovations, organizations can immediately reduce manual effort in their software quality processes," stated Miroslaw Zielinski, Parasoft Product Manager. "Organizations serious in their approach to safety, security, and quality of software will soon need to include AI-based tools in their development process to keep pace with competition and stay relevant in the market. This is only our first step in the application of AI and ML to the safety-critical market."

Embedded devices are complex, and with increasing safety and security concerns, it is crucial that automated software testing solutions stay up to date on the ever-expanding compliance standards. Hence, Parasoft continues to lead in the enforcement of the latest guidelines. Additionally, because industry-standard prerequisites require traceability of software requirements to test cases, Parasoft has built integrations with some of the most popular application lifecycle management (ALM) solutions, establishing that traceability.

"The market for functional safety (FuSa) test tool sales will grow at the quickest CAGR of 9.3% to reach $539.6M in revenue in 2023. The need to establish bi-directional traceability to meet FuSa certification requirements is fueling interest in using integrated application lifecycle management (ALM) and product lifecycle management (PLM) solutions to manage the entire product development process," states Chris Rommel, EVP, VDC Research Group.

What's new?

About Parasoft

Parasoft, the global automated software testing authority for more than 30 years, provides innovative tools that automate time-consuming testing tasks and provide management with the intelligent analytics necessary to focus on what matters. Parasoft supports software organizations as they develop and deploy applications for the embedded, enterprise, and IoT markets. Parasoft's technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating static and runtime analysis; unit, functional, and API testing; and service virtualization. With our developer testing tools, manager reporting/analytics, and executive dashboarding, Parasoft enables organizations to succeed in today's most strategic ecosystems and development initiatives: real-time, safety-critical, secure, agile, continuous testing, and DevOps. www.parasoft.com; https://www.parasoft.com/products/ctest


Here is the original post:
Parasoft introduces Artificial Intelligence (AI) and Machine Learning (ML) into Software Test Automation for the Safety-critical Market at Embedded...

Artificial Intelligence Crowdsourcing Competition for Injury Surveillance – EC&M

By Sydney Webb, PhD; Carlos Siordia, PhD; Stephen Bertke, PhD; Diana Bartlett, MPH, MPP; and Dan Reitz

In 2018, NIOSH, the Bureau of Labor Statistics (BLS), and the Occupational Safety and Health Administration (OSHA) contracted the National Academies of Sciences (NAS) to conduct a consensus study on improving the cost-effectiveness and coordination of occupational safety and health (OSH) surveillance systems. The NAS report recommended that the federal government use recent advancements in machine learning and artificial intelligence (AI) to automate the processing of data in OSH surveillance systems.

The main source of OSH information on fatal and non-fatal workplace incidents is the unstructured free-text injury narratives recorded in surveillance systems. For example, an employer may report an injury as "worker fell from the ladder after reaching out for a box." For decades, humans have read these injury narratives to assign standardized codes using the U.S. Bureau of Labor Statistics (BLS) Occupational Injury and Illness Classification System (OIICS). Coding these injury narratives for analysis is expensive, time consuming, and fraught with coding errors.

AI, namely machine learning text classification, offers a solution to this problem. If algorithms can be developed to read the injury narratives, data can be pulled from these surveillance systems in a fraction of the time of hand coding.
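Coding narratives like this is a standard supervised text-classification task. The sketch below is a minimal, hypothetical illustration using a tiny Naive Bayes classifier on made-up narratives with made-up labels; the actual NIOSH and competition algorithms are trained on far larger labeled datasets with richer NLP features and real OIICS codes.

```python
import re
from collections import Counter, defaultdict
from math import log

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

class NarrativeClassifier:
    """Minimal multinomial Naive Bayes with Laplace smoothing."""

    def fit(self, narratives, codes):
        self.word_counts = defaultdict(Counter)  # code -> token counts
        self.code_counts = Counter(codes)        # code -> class prior count
        self.vocab = set()
        for text, code in zip(narratives, codes):
            tokens = tokenize(text)
            self.word_counts[code].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, text):
        def log_score(code):
            total = sum(self.word_counts[code].values()) + len(self.vocab)
            score = log(self.code_counts[code])
            for tok in tokenize(text):
                score += log((self.word_counts[code][tok] + 1) / total)
            return score
        return max(self.code_counts, key=log_score)

# Made-up narratives and category labels, purely for illustration.
train_texts = [
    "worker fell from the ladder after reaching out for a box",
    "employee slipped on a wet floor and fell",
    "worker was struck by a falling pallet",
    "employee hit by forklift in warehouse",
]
train_codes = ["fall", "fall", "struck-by", "struck-by"]

clf = NarrativeClassifier().fit(train_texts, train_codes)
print(clf.predict("worker fell off a ladder"))     # fall
print(clf.predict("employee struck by a pallet"))  # struck-by
```

Once such a model is trained, a narrative can be coded in microseconds, which is why algorithmic coding runs in a fraction of the time of hand coding.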

NIOSH developed an AI algorithm to apply OIICS codes based on injury narratives from a hospital emergency department surveillance system. However, the efficiency of this algorithm was not clear. To see if better coding algorithms could be developed, NIOSH turned to crowdsourcing.

While not unique to AI, crowdsourcing involves asking the "crowd," or members of the public with a variety of skill sets, to provide their unique solutions to a problem. The approach results in a large number of potential solutions that can be assessed to identify those that work best. Generally, the best crowd solutions are better than the original solution. In this case, NIOSH worked with two crowds, one internal to CDC and one external, to propose better solutions than NIOSH's initial coding algorithm.


Before conducting an external competition, a team of 17 researchers from NIOSH, the Centers for Disease Control and Prevention (CDC), BLS, OSHA, the Federal Emergency Management Agency (FEMA), the Census Bureau, the National Institutes of Health (NIH), and the Consumer Product Safety Commission hosted a competition for staff at CDC. A total of 19 employees competed to develop the best algorithm to code worker injury narratives. The team received nine algorithms, five of which outperformed the NIOSH baseline script and its accuracy of 81%. The winning algorithm in the internal crowdsourcing competition reached 87% accuracy, a six-percentage-point improvement.

In October 2019, NIOSH, together with the National Aeronautics and Space Administration (NASA), hired a Tournament Lab vendor, Topcoder, to host the external crowdsourcing competition. This was the first-ever external crowdsourcing competition from CDC and NIOSH, partially funded through the CDC Innovation Fund Challenge. The competition tapped Topcoder's global community of data science experts to develop a natural language processing (NLP) algorithm to classify occupational work-related injury records according to OIICS.

Like the internal competition, the external competition was also a success. There were 961 submissions from 388 registrants representing more than 26 countries (32% United States, 21% India). Participants self-identified as having degrees in computer science and engineering, chemistry, computer engineering, data science, and economics, to name a few. This competition produced 21% more registrants and 66% more submissions than the average Topcoder competition. The high-quality submissions achieved nearly 90% accuracy, surpassing the 87% accuracy achieved during the internal competition.

The 1st place external crowdsource winner was Raymond van Veneti, a doctoral student in numerical mathematics at the University of Amsterdam. Second place was awarded to a senior data scientist at the Sberbank AI lab in Russia; 3rd place to a developer and data scientist from China; 4th place to a biostatistician at the Emory University School of Medicine in Atlanta, GA; and 5th place to a full stack engineer from Bangalore, India.

External crowdsource 1st place winner Raymond van Veneti. (Courtesy of NIOSH)

The external competition and the resulting algorithm support improving efficiency and reducing costs associated with coding occupational safety and health surveillance data. Ultimately, it is hoped that the improved algorithm will contribute to greater worker safety and health. The NIOSH project team will work with the 1st prize winner's script to make an easy-to-use web tool for public use. In the interim, the top five winning solutions are available on GitHub.

For more information, visit https://blogs.cdc.gov.

Original post:
Artificial Intelligence Crowdsourcing Competition for Injury Surveillance - EC&M

Artificial Intelligence is Starting to Shape the Future of the Workplace – JD Supra


Read the rest here:
Artificial Intelligence is Starting to Shape the Future of the Workplace - JD Supra

Artificial Intelligence Contributes Higher in Healthcare Compared to Other Industries – EnterpriseTalk

A KPMG survey has revealed that 53% of healthcare executives believe the healthcare industry is ahead of most other sectors in the adoption of artificial intelligence (AI)

The latest report from KPMG states that more than half (53%) of executives say the healthcare sector is well ahead of other industries when it comes to the adoption of AI. As per the report, 37% of healthcare executives believe factors like cost and skill barriers are slowing AI implementation in the healthcare industry.


The adoption of AI and automation in hospital systems has increased exponentially since 2017, the report found. Nearly 90% of respondents believe AI is already creating efficiencies in their operations, and 91% say AI is expanding patient access to care. AI will be effective in diagnosing patient illnesses, say 68% of respondents, while 47% believe diagnostics will see a significant impact by 2022, according to the study.

Respondents also said AI would have a positive effect on process automation: around 40% think AI will help providers produce better X-rays and CT scans, and, per the KPMG survey, it will continue to advance the digitization of healthcare. With the help of AI technologies, 41% of respondents expect improved records management, while 48% believe the most significant impact of AI will be in biometric-related applications.

In the field of healthcare and diagnostics, studies have shown that AI can assist doctors while making informed decisions, and enhance patient diagnostics, even to the extent of identification of cancer. Nearly half of healthcare executives said their institutions offer AI training courses to employees, and 67% say their employees support AI adoption.

On the flip side, there has also been suspicion that AI has increased the overall cost of healthcare; more than half of the survey's respondents feel this way. This suggests that healthcare executives are still trying to determine the most cost-effective areas in which to use AI tools. Two of the major concerns for healthcare companies are privacy and security. According to the survey, 75% of respondents are concerned that AI could threaten the privacy and security of patient data, while 86% say their companies are taking steps to protect patient privacy as they implement AI.

Cyber Security- Only 17% of Global Enterprises are Cyber Resilient Leaders

Healthcare leaders agree that AI will play a key role in improving care delivery, with 90% of respondents saying they believe that AI will improve the patient experience. The results show that once leaders address key issues to implementation, the benefits of AI could outweigh potential risks. Applying AI to unstructured data will also be quite useful in the diagnosis and more accurate prognosis of health issues. Supported by doctors, AI could well be the closest tool to increase the accuracy of healthcare analysis and provide much more error-free results in days to come.

Read this article:
Artificial Intelligence Contributes Higher in Healthcare Compared to Other Industries - EnterpriseTalk

Artificial Intelligence + Robotic Process Automation: The Future of Business – Wire19

India's only conference on Intelligent Process Automation, designed around tech enthusiasts

80+ Delegates, 30+ Eminent Speakers, 2 Keynote Presentations, 10 Industrial Presentations, 3 Panel Discussions, 5 Sponsors & 17 Partners

RPA and AI are among the top technologies gaining ground in the global business space, alongside cloud, mobile application development, the Internet of Things, and others. Artificial intelligence and robotic process automation are expected to bring about major changes in terms of knowledge and skill requirements. Therefore, it is essential for aspirants to be prepared for new-age job roles in the near future.

Related read: 5 decisions CEOs need to make in 2020 for embracing technologies faster

At the Intelligent Process Automation Summit, network and benchmark with leading experts championing concepts, theories, and applications in the automation domain; map and design a winning automation strategy; and discuss failure-free implementation. Gather insights from leading industry experts at this exclusive event.

Date & Venue: 4th & 5th March 2020 | Ramada Powai, Mumbai

Head over to #IPASummit 2020 to confirm your participation!

Network, Benchmark & Innovate like never before at the #IPASummit with:

Keynote Speaker-

They will be joined by 30+ eminent speakers from top companies, who will share their experience, knowledge & expertise on how to put theory into action for AI + RPA.

The Intelligent Process Automation Summit is backed by Title Partner: Softomotive, Gold Partner: Nividous, Silver Partner: Tricentis, Exhibit Partners: NeoSOFT Technologies & Cygnet Infotech, Gift Partner: Spa La Vie, Association Partner: Analytics Society of India, Media Partners: Automation Connect, GIBF, The CEO Magazine, Analytics Insight, Business Connect India, Innovative Zone, The CEO Story, Electronics Media, Timestech.in, WIRE19, Business News This Week, Free Press Journal, CIO Insider India & Silicon India.

Must read: What's missing from your growth strategy for 2020

Join the Intelligent Process Automation Summit to keep pace with the rapid evolution of products and processes, network with the new-age automation talent pool, and gain diversified exposure to new ideas from industry leaders and experts so you can conduct a thorough assessment of your automation projects.

For Delegate Registration / Partnership or more Information Visit http://bit.ly/ipa-summit

See the rest here:
Artificial Intelligence + Robotic Process Automation: The Future of Business - Wire19

Artificial Intelligence (AI) and medicine | Interviews – The Naked Scientists

Chris Smith and Phil Sansom delve into the world of artificial intelligence (AI) to find out how this emerging technology is changing the way we practise medicine...

Mike - I think this is an area where AI stands a really good chance of making quite dramatic improvements to very large numbers of people's lives.

Carolyn - Save lives and reduce medical complications.

Andre - Solid algorithms aiding physicians in some of their greatest challenges.

Beth - That's a concern - when machine-learning algorithms learn the wrong things.

Andrew - Frankly revolutionary productivity that we are now starting to see from these AI approaches in drug design.

Lee - It will replace all manual labor in all research laboratories. And then suddenly everyone can collaborate.

What is AI?

Chris - AI - artificial intelligence. For many, it's a term straight out of sci-fi, conjuring up visions of utopias or dystopias, from films ranging from The Terminator to I, Robot to, well, the film AI!

Phil - But what was previously sci-fi is now closer to reality. AI technology exists, and there's a brand new frontier where it's being applied to the world of healthcare. We're seeing AI helping to diagnose cancer, AI designing new medicines, and even AI predicting a person's medical future.

Chris - But this isn't the AI you see in the movies. In the words of Kent University computer scientist Colin Johnson, this is more software than Schwarzenegger...

Colin - When scientists say AI, they often mean some piece of code that's running on a computer and it's taking some inputs. So if it was doing medical diagnosis, it might be taking scans and processing those and trying to generalise from it. So it will take, say, a thousand examples of these scans and the diagnosis that people had and build what's called a model, a kind of mathematical formula, that tells it how to predict when it sees a new example.
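Colin's description - learn from a thousand labelled scans, then predict the diagnosis for a new one - is the essence of supervised learning. A minimal sketch in Python, using a toy nearest-neighbour model with invented numbers standing in for real scan data:

```python
# Toy supervised learning: train on labelled examples, predict a new one.
# The (features, diagnosis) pairs below are invented; in Colin's example
# they would be a thousand real scans with their diagnoses.
import math

def nearest_neighbour(train, new_example):
    """Predict by copying the label of the closest training example."""
    features, label = min(train, key=lambda ex: math.dist(ex[0], new_example))
    return label

# Each "scan" reduced to two made-up numeric features.
training_data = [
    ((0.1, 0.2), "benign"),
    ((0.2, 0.1), "benign"),
    ((0.9, 0.8), "malignant"),
    ((0.8, 0.9), "malignant"),
]

print(nearest_neighbour(training_data, (0.85, 0.95)))  # -> malignant
```

Real diagnostic models are vastly larger, but the shape is the same: labelled examples in, a predictive rule out.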

Phil - In some ways, these predictive algorithms are just an extension of the tools scientists have always used to analyse data: statistics. The only difference is how complex and layered they can get.

Colin - AI varies in complexity from things that can run on your laptop to things that require huge networks of computers. One approach that's particularly been common in recent years has been deep learning. Let's talk about that in the context of computer vision - computers learning to see and recognise. And deep learning would start by recognising colours and lines, and then the next layer would recognise shapes, circles, corners, textures, and so on. All the way up to the final layer where it's recognising whole objects. It's able, for example, to tell apart cats and dogs.
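Colin's bottom layer - recognising lines and edges before anything else - can be illustrated with a single hand-written filter. The tiny "image" below is invented, and a real network would learn its filters from data rather than have them written by hand:

```python
# A hand-written stand-in for what a first convolutional layer learns:
# a small filter slid across the image that responds strongly to edges.
# Deep learning stacks many such layers, each learned from data.

# Tiny invented "image": a bright square on a dark background.
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 0, 0, 0],
]

# Vertical-edge filter: responds where brightness changes left-to-right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    """Slide the 3x3 filter over the image, summing the weighted window."""
    size = len(img) - len(k) + 1
    return [
        [sum(img[r + i][c + j] * k[i][j]
             for i in range(3) for j in range(3))
         for c in range(size)]
        for r in range(size)
    ]

edges = convolve(image, kernel)
for row in edges:
    print(row)
```

The output is strongest at the square's left and right edges and zero in its uniform interior - exactly the kind of low-level feature that higher layers combine into shapes and, eventually, whole objects.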

Phil - Could I download an AI to my computer?

Colin - You could download some code to do AI, what are called open source projects, projects that are made publicly available.

Phil - And code comes in lines, right? How many lines of code would I be getting?

Colin - Thousands and thousands of lines of code. But I think the complexity is not necessarily in the code as much as in the data that you'd need to train it.

Phil - Okay. Say I wanted a dog identification robot.

Colin - Yup.

Phil - And I had a picture of every dog in the world. How close would I be to the top AI systems that exist in the world today?

Colin - Pretty similar, to within probably a couple of percent of what something that was trained on a huge supercomputer could do. And that's facilitating revolutions like self driving cars. The ability to recognise road signs and pedestrians and other vehicles needs to happen in a small machine that can sit inside your car.

Phil - But while I could make my laptop very good at identifying objects in pictures, apparently there are other jobs it would find much more difficult - like identifying language.

Colin - They're very good at translation, but they're very bad at converting language into something that we might think of as understanding, particularly visual understanding. Can a crocodile run a steeplechase? That's a piece of language. We immediately convert that into an image of a crocodile trying to jump over large hurdles and we know that that's not possible. But a current AI system doesn't have that capacity for visualisation.

Phil - Are you saying Colin, that my dog translation robot isn't as easy to get?

Colin - I don't think you can do that. No, I don't think we could translate the language of dogs.

MEDICAL CARE

Chris - Phil, I'm sorry that Colin crushed your dreams of dog dialogue - but you must admit, the degree to which these algorithms can effectively learn from the data they're given is pretty astounding. It's also why some people refer to this as machine learning rather than the more general term AI.

Phil - It seems that computer vision - recognising patterns in images - is one of the places where machine learning excels. This is where healthcare comes in, because doctors spend lots of time examining scans or images. At Stanford University, Andre Esteva is applying machine learning to the diagnosis of skin cancer.

Andre - So we built computer vision algorithms that could, given an image of someone's skin, detect any lesions that might be concerning, and upon zooming into those lesions, diagnose them.

Phil - And does it work?

Andre - It worked really well, yes. We demonstrated that the algorithms are actually as effective as dermatologists at identifying if a lesion was benign or malignant.

Chris - To create algorithms that are as good as actual doctors, Andre had to teach them, by feeding them a large amount of data...

Andre - We collected a dataset of 130,000 images that were comprised of over 2000 different diseases.

Chris - Some of those images were used to train the algorithm, and others were used to test it afterwards, to ensure it actually worked.
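The train-then-test discipline Chris describes can be sketched in a few lines. The data below is invented (simple numbers standing in for the study's 130,000 images), but the pattern - hold some labelled examples back, and score the model only on those - is the standard one:

```python
# Toy train/test split: hold out 20% of labelled examples and score the
# model only on those, so the test measures generalisation, not memory.
# A number stands in for an image; the labels are invented.
import random

random.seed(0)

data = [(x / 100, "malignant" if x > 50 else "benign") for x in range(100)]
random.shuffle(data)

split = int(0.8 * len(data))
train, test = data[:split], data[split:]   # the model never sees `test`

# "Train": place a decision threshold between the two classes,
# using the training portion only.
benign_max = max(x for x, label in train if label == "benign")
malignant_min = min(x for x, label in train if label == "malignant")
threshold = (benign_max + malignant_min) / 2

def predict(x):
    return "malignant" if x > threshold else "benign"

# Evaluate on the held-out test set.
accuracy = sum(predict(x) == label for x, label in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Because the test images were never used in training, a high score here is evidence the model generalises rather than memorises.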

Andre - The algorithms that we developed got a really good sense of which ones are more concerning and which ones are less, and with that we were then able to fine tune it to work specifically well on skin cancers.

Chris - And not only could the AI distinguish a cancerous lesion from a normal one, it could even diagnose multiple lesions at once.

Andre - We actually built an AI that could take such a patch of skin with many lesions, and automatically zoom in on the ones that were most concerning.

Chris - This is just one example of how AI can help doctors with their work. Around the world, researchers are training algorithms to analyse scans and other medical information, including the DNA of cancers to track how the disease behaves and make predictions about the best treatments. Critically, in each of these examples, AI isn't replacing a doctor so much as helping a doctor with the heavy lifting.

Andre - I often describe AI as having a precocious resident following you around in clinic, being able to provide second opinions and surface questions which you might not have considered.

AI IS SPECIFIC

Phil - With all these achievements, it's tempting to imagine robot doctors of the future. But according to Mike Wooldridge, Oxford University computer scientist and author of the book The Road to Conscious Machines, that's unlikely...

Mike - In the last decade, we have seen breakthroughs in artificial intelligence, but you need to be very careful when you talk about a breakthrough. Those breakthroughs are in tiny, narrow little areas.

Colin - So a system that's built and trained to do, say, medical diagnosis won't be the same artificial intelligence system that's, say, playing a game of chess.

Phil - That's Colin Johnson again. Andre Esteva's skin cancer AI, for example, won't become sentient - in fact it can't even do what Kris' blood flow algorithm can do.

Colin - Current AI systems are very specific and they don't have motivations. They're doing exactly what they are told to do.

Mike - It can't explain what it's doing. It can't generalise its strategies and explain them to you or me. It can't tie its shoe laces or cook an omelette or ride a bicycle. We can do all of those things. Human beings have a much, much richer, much more general intelligence and capability than anything we can build now or anything that we're likely able to build in the near future.

Mike - I think it is extremely unlikely that there will be some kind of intelligence explosion as happens in the Terminator films - you know, the idea that intelligence suddenly multiplies overnight, machines become sentient, and it's out of our control. Why isn't it very likely? Because we've been trying to build intelligent machines for the last 70 years and frankly, despite the fact that they can do some very narrow tasks very well, they are actually not that smart.

GLOBAL HEALTH AND PREDICTING THE FUTURE

Chris - So AI is not without its limitations, but there are some truly massive problems that it can help us to tackle. Mike Wooldridge again.

Mike - If you look at what makes healthcare expensive, one of the key challenges is expertise. Training up a doctor takes a long time. There aren't very many people who can do it. It requires a very special set of skills. It's very, very expensive, very, very time consuming. What we can do with AI is we can capture that expertise and we can get that expertise out to people in places where, at the moment, it's just impossible.

Chris - Crucially, the poorer parts of the world - where medical care is in short supply - might really benefit from software that eases some of the doctors' burdens.

Mike - A nice example from here in Oxford is in a company called Ultromics. They do ultrasound scans for hearts. Now, if you've ever looked at those ultrasound scans, it's impossible to figure out what's going on. The people that have the ability to interpret those ultrasound scans and detect abnormalities, that skill is very, very scarce. What Ultromics have done is they've taken records of ultrasound scans over a decade long period, and they've basically given that information to AI programs and they built systems that can detect abnormalities on these ultrasound scans. And they've got FDA approval - the Food and Drug Administration in the United States - so they can go live with this technology. And what that means is that a doctor in a remote part of the world with a handheld ultrasound scanner connected to their smartphone, they can do an ultrasound scan and they don't have to have that expertise themselves. That scan can be uploaded securely to a repository in Oxford, automatically analysed and they get that information back. So what that means is we'll be able to get out healthcare to huge numbers of people that just don't have it at the moment.

Chris - We're in very early days here, because a lot of these technologies are right now getting off the ground. That's partly because they rely on a) a certain amount of IT infrastructure, and b) a good supply of data that applies to the patients.

Mike - And I know a lot of people are concerned about the idea of an AI program doing healthcare for them. That is I think, a rather first-world concern. I think for a lot of people in the world, the choice isn't between a person looking at your ultrasound scan or an AI program looking at your ultrasound scan. It's the AI program or nothing. And that I think is a real huge potential win for AI technologies in the decades ahead.

Chris - And moving beyond diagnosis; some are starting to use AI to predict the future. Carolyn McGregor from Ontario Tech University is doing groundbreaking work here in paediatrics.

Carolyn - We can monitor premature infants, and those born ill at term by monitoring their breathing, their heart rate, and their oxygen levels in their tiny bodies. We use AI to detect and predict when the behaviors of these are changing, and we classify the changes into the likely set of conditions causing the change. This has great potential to save lives and reduce medical complications.

Chris - The project is called Artemis, and it's particularly important because of how vulnerable these babies are.

Carolyn - The challenge for these preterm infants in particular is that they're trying to complete their development outside of the womb, and doing that presents them with many challenges. It means that they're susceptible to many different conditions that they can develop, and many challenges in the development of various organs.

Chris - Artemis is designed to run in real-time, to help doctors with information that a human would find difficult to process - which, like the example of ultrasound scans earlier, could ease the burden off of doctors in poorer countries.

Carolyn - What we're looking to do currently is deploy a version of Artemis for a hospital in India. Now this is interesting because we're demonstrating how we can use the same techniques to support infants in low-income settings. This is very important, as the health outcomes for preterm infants in countries like India and areas of Africa are much worse than in Western countries.

Chris - AI seems to do a pretty good job of predicting medical futures in many different ways - as long as it has the right data. Which, according to Mike Wooldridge, we're beginning to give it.

Mike - We will be able to monitor our physiology on a 24 hour a day, seven day a week basis and that information is going to enable us to manage our health on a much better basis than we can do now. I have colleagues who think that you will be able to detect the onset of dementia just by the way that you use your smartphone. Just by looking at the pattern of usage, by the way that you search for a contact in your contact list or the way that you scan your email. As those patterns change, as you start to get the very, very early signs of dementia, it could be the smart phone is going to be able to detect that on your behalf, long before there would be any sort of formal diagnosis.

Phil - Coming up after the break - AI that can invent new medicines, and peering inside the black box.

DATA AND BLACK BOXES

Phil - After all this talk about predicting your medical future with huge amounts of your personal data, it's worth briefly taking a step back. Cambridge University's Beth Singler researches the implications of the machine learning revolution.

Beth - AI also doesn't work unless you have large amounts of data, so it cannot progress in particular directions unless it has access to human subject data. Large companies are probably less of a concern than some of the user-chosen apps - there are something like 320,000 medical apps available through app stores, and that's a concern as well. We need to be protective of our data going forward.

Phil - And not only do you need to trust who has your data, but once the data goes in, it's often a complete mystery what the algorithms will do with it. Colin Johnson again, followed by Beth.

Colin - One concern is that they are a black box. You don't understand what's going on within them. The understanding is very distributed across thousands or millions of little mathematical formulae and little pieces of data, and this is potentially problematic because if you're using these systems for something important, like making medical diagnoses or making decisions about job applications, it can't explain necessarily why it's made the decision it has.

Beth - They don't always learn the things you want them to learn. So for example, in looking at cases of pneumonia, a deduction was made using an AI system that actually, people with asthma shouldn't be treated as much because looking at the historical data, people with asthma seem to do better overall when they caught pneumonia. But actually what was happening was humans were triaging them more, giving them more attention because they had asthma.

Phil - A human would probably have made that link - but a computer just sees the data in black and white.

Beth - Every piece of data that we would want to put into these systems is either short-cuttable in that way, or comes laden with its own human-inputted biases. So for example, in the case of women seeking treatment for pain: historically, women are less likely to receive painkilling medicine in response to pain than men are, and they're more likely to have to go back and back, back again to the GP. So all that data gets into the system, to the extent that any kind of machine learning system is going to say, if you're female, you don't need treatment in the same way as if you're male. So this kind of algorithmic bias is something we need to be really careful about.

DRUG DISCOVERY

Chris - When you give an algorithm data about people, any biases in that data can affect a person's health outcomes. But there's a whole other area of medical science where the relevant data isn't about individual people, but where AI could go on to save lives on a massive scale. We're talking about drug discovery - inventing brand new medicines. Mike Wooldridge.

Mike - The pharmaceutical industry, although it's ultimately about designing and building new drugs, more than anything I think it's the quintessential knowledge-based industry. It relies heavily on processing large amounts of data and being able to make extrapolations from that data. And so I think it's very, very well positioned to be able to make use of new artificial intelligence techniques and machine learning techniques in designing those drugs and understanding their consequences.

Chris - This area in particular has recently become a massive, multi-billion dollar industry. Every big pharma company is getting in on the action. And it's starting to pay off, because recently a company called Exscientia announced a world first.

Andrew - This is the first time a drug designed by AI will be tested in humans: DSP-1181, just starting phase one clinical trials, for the treatment of obsessive compulsive disorder.

Chris - That's Exscientia CEO Andrew Hopkins. To create their drug, they used complex machine learning techniques inspired by the way evolution works in nature.

Andrew - We can generate millions of potential ideas, inside the computer. And then we can use all of the data that we can collect from patterns, from published scientific articles - we can take all that data, and we can build predictive models. But actually, one of the real challenges we also face is that whenever we're starting a new project, it's actually just on the boundary, or sometimes just outside, the limits of our ability to predict with machine-learning models. So therefore we need a different set of algorithms to help us in this learning phase. It's a set of maths we call active learning. And active learning actually, it's not about just picking the fittest compound, it's actually about selecting the most informative compounds to then make and test, and improve our models, and improve our predictions. And this is actually why we've seen the frankly revolutionary productivity now emerging from these AI approaches in drug design. We discovered the drug candidate molecule that's now going into the clinic in about 12 months - a fifth of the time it normally takes.
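Andrew's distinction - pick the most informative compound, not the fittest one - can be sketched with a toy ensemble whose disagreement stands in for model uncertainty. All the compound names and scores below are invented for illustration:

```python
# Toy active learning: test next the compound the models disagree on,
# rather than the one that currently scores best.
import statistics

# Invented: each "compound" gets a predicted potency from several models
# trained on different subsets of the data (an ensemble).
predictions = {
    "compound_A": [0.90, 0.91, 0.89],   # models agree: little to learn here
    "compound_B": [0.20, 0.85, 0.55],   # models disagree: informative to test
    "compound_C": [0.40, 0.42, 0.41],
}

def most_informative(preds):
    """Active learning: choose the compound with the most disagreement."""
    return max(preds, key=lambda c: statistics.stdev(preds[c]))

def fittest(preds):
    """Greedy alternative: just pick the best-scoring compound."""
    return max(preds, key=lambda c: statistics.mean(preds[c]))

print(most_informative(predictions))  # the disputed compound
print(fittest(predictions))           # the top-scoring compound
```

Testing the disputed compound teaches the models the most, which is why it shortens the overall discovery loop even though it is not the current best candidate.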

Chris - Part of the reason drug design normally takes so much longer is because making a drug isn't just about helping the body in a specific way - it's also crucial to simultaneously avoid harming the body by hitting the wrong target. Essentially, it's about designing a key that fits only one lock and doesn't accidentally open any others...

Andrew - It's not just about designing a specific key to fit a specific lock. We also need to design that key so it avoids fitting maybe 21,000 other locks, which is effectively the number of proteins expressed by the human genome. Because, by hitting those other proteins, it actually potentially causes side effects. So what we have then is a very difficult design problem, which potentially runs into a very large number of dimensions. This is exactly the type of problem where we believe artificial intelligence can be used to satisfy this large number of design objectives.

Chris - Other objectives include making sure the drug can actually be manufactured easily, and that it can be taken up by the body. With so many potential pitfalls, it was particularly important that Exscientia's algorithms were not a complete black box.

Andrew - The beauty of the algorithms is that we can then trace the contribution that every atom is making to all the design objectives which we are designing against.

Chris - Their new drug, DSP-1181, isn't ready for the shelves yet - clinical trials take many years, and this is a part that the algorithms definitely should not be doing.

Andrew - How a drug is designed - whether it's by humans or artificial intelligence or a combination of the two - that does not change how we want to then test for safety, and test for efficacy. One thing that's important is to know which are the really important battles that AI can make a difference to. And we can make a difference to how we can rapidly discover compounds, the cost it may take to discover a new medicine, and the speed of bringing it to the clinic. But also we must remember that human biology is incredibly complex. It would be a mistake for people to think that AI can allow us to predict all the possibilities of how a medicine may interact with the human body.

CHEMPUTER

Chris - In the next few years, we might see more and more drugs designed using this kind of evolution-inspired AI. And soon after, there might be some basic manufacture and testing by AI as well - thanks to devices like Lee Cronin's Chemputer.

Lee - The chemputer is the world's first general purpose programmable robot that makes molecules on demand. The reason I set out to make this was actually to make a chemical internet that would help me search for the origin of life, believe it or not. We couldn't get funding for that on its own, and I figured that the same technology we use to search for biology would also be very good in drug discovery and making molecules and personalising medicine.

Chris - Like the AI that works in medical diagnosis, the chemputer was originally designed to take the grunt work out of chemistry so the chemist could be free to do the interesting parts. It consists of both software and hardware.

Lee - It looks like a normal chemistry set actually, round bottom flasks, conical flasks, test tubes, pipes and things.

Lee - We have to feed in some chemicals, like putting ink into a printer, and also we put in a code and that code has two parts to it. One is a graph which is literally understanding where those chemicals have to be moved to. And the other is like a recipe - like cooking a souffle - what temperature, for how long, and what ingredients must be added together in what order. So we can make the perfect chemical souffle, if you like, every time, correctly.

Chris - The result works like a 3D printer for molecules - but Lee started to apply AI to help the chemputer course-correct.

Lee - A bit like how an automated car works, the chemputer can drive perfectly when all the instructions are correct, but what about if something goes wrong or something is not quite as expected? Because we've put some sensors into the chemputer, it can feed back and say, "Oh, there's something a bit wrong here with the heating" or "we don't need to stay at this temperature for quite as long as we thought. Let's make another decision." And so what we've been doing in the last few years is integrating AI into the chemputer.

Chris - This combination of sensors and machine learning meant that the chemputer could start learning from, and experimenting on its own recipes.

Lee - Now we don't tell the robot to make molecules. We tell it to make molecules that have properties. Say we want a blue thing or a nano thing. We're able to dial this in and make a sensor for a blue nano thing and then the chemputer is able, if you like, to search chemical space randomly to start with, and then use a series of algorithms to focus in to say, is that bluer? Make it bluer, more nano, more blue. Yes, hit stop. And it's literally the ability to make a closed loop system where you have molecular discovery, synthesis and testing in a continuous workflow.
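Lee's closed loop - measure, compare, adjust, stop when the target is hit - is a feedback-driven search. A toy sketch, with an invented "blueness sensor" standing in for the chemputer's real instrumentation:

```python
# Toy closed-loop optimisation: propose a recipe, measure it, keep it if
# it is "bluer", stop once the target is reached. The sensor function is
# invented; in the chemputer it would be a real physical measurement.
import random

random.seed(1)

TARGET = 0.95  # desired blueness reading

def measure_blueness(recipe):
    """Invented sensor response: maximal when the recipe value is 0.7."""
    return 1.0 - abs(recipe - 0.7)

# Start from a random point in "chemical space".
best_recipe = random.random()
best_score = measure_blueness(best_recipe)

for _ in range(200):
    if best_score >= TARGET:                 # "Yes, hit stop."
        break
    candidate = best_recipe + random.uniform(-0.1, 0.1)
    score = measure_blueness(candidate)      # "is that bluer?"
    if score > best_score:                   # keep only improvements
        best_recipe, best_score = candidate, score

print(f"recipe {best_recipe:.2f} -> blueness {best_score:.2f}")
```

The loop starts as a random search and narrows in on recipes that score better on the sensor - discovery, synthesis, and testing as one continuous workflow, in miniature.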

Chris - At this point, the Chemputer not only does the grunt work of a chemist; it does the chemist's full job. Lee is even looking into teaching it to look through research papers and pick up new techniques, by translating them into its own chemical language.

Lee - That was the vision for our initial paper that it would literally be able to play the literature. Almost like taking vinyl records, digitising them, putting them onto Spotify.

Chris - And if the machine can do the full job of a chemist, that includes trying to synthesise new medicines. Lee already has one working on short biological molecules called peptides.

Lee - Now peptides are a good example because peptides are made by robots already, but our chemputer not only makes peptides but it can do any other type of chemistry on the peptide that you want. And that's getting the biochemists really excited, because we can start to dream up new types of drug molecules that maybe can look at the iron pumping system in the cell, or certain receptors at the membranes in the cell.

THE FUTURE

Visit link:
Artificial Intelligence (AI) and medicine | Interviews - The Naked Scientists