The Pentagon’s AI Shop Takes A Venture Capital Approach to Funding Tech – Defense One

The Joint Artificial Intelligence Center will take a Series A, Series B approach to building tech for customers, with product managers and mission teams.

By Patrick Tucker

Military leaders who long to copy the way Silicon Valley funds projects should know: the Valley isn't the hit machine people think it is, says Nand Mulchandani, chief technical officer of the Pentagon's Joint Artificial Intelligence Center. The key is to follow the right venture capital model.

Mulchandani, a veteran of several successful startups, aims to ensure JAIC's investments in AI software and tools actually work out. So he is bringing a very specific venture-capital approach to the Pentagon.

Here's the plan: when a DoD agency or military branch asks JAIC for help with some mission or activity, the Center will assign a mission team of, essentially, customer representatives to figure out what agency data might be relevant to the problem.


Next, the JAIC will assign a product manager: not DoD's customary program manager, but a role imported from the tech industry.

He or she handles the actual building of the product, not the administrative logistics of running a program. "The product manager will gather customer needs, make those into product features, work with the program manager, ask, 'What does the product do? How is it priced?'" Mulchandani told Defense One in a phone conversation on Thursday.

The mission team and product manager will take a small part of the agency's data to the software vendors or programs that they hire to solve the problem. These vendors will need to prove their solution works before scaling up to take on all available data.

"We're going to have a Series A, a seed amount of money. You [the vendor] get half a million bucks to curate the data, which tends to be the problem. Do the problem x in a very tiny way, taking sample data, seeing if an algorithm applies to it, and then scale it," Mulchandani said on Wednesday at an event hosted by the Intelligence and National Security Alliance, or INSA.

"In the venture capital industry, you take a large project, identify core risk factors, like team risk, customer risk, etc. You fund enough to take care of these risks and see if you can overcome the risks through a prototype or simulation, before you try to scale," he added later.

The customer must also plan to turn the product into a program of record or give it some other life outside of the JAIC.

That's very different from the way the Defense Department pays for tech today, he said. "The unit of currency in the DoD seems to be, 'Well, this was a great idea; let's stick a couple million bucks on it, see what happens.' We're not doing it that way anymore," he said on Wednesday.

The JAIC is working with the General Services Administration Centers of Excellence to create product manager roles in DoD and to figure out how to scale small solutions up. Recently, some members of the JAIC and the Centers of Excellence participated in a series of human-centered design workshops to determine essential roles and responsibilities for managing data assets across the areas where the JAIC will be developing products, such as cybersecurity, healthcare, predictive maintenance, and business automation, according to a statement.

Mulchandani urges the Pentagon not to make a fetish of Silicon Valley. Without the right business and funding processes, many venture-backed startups fail just as badly as poorly thought-out government projects. You just don't hear about them.

"When you end up in a situation where there's too much capital chasing too few good ideas that are real, you end up in a situation where you are funding a lot of junk. What ends up happening [in Silicon Valley] is many of those companies just fail," he said Wednesday. "The problem in DOD is similar. How do you apply discipline up front, on a venture model, to fund the good stuff as opposed to funding a lot of junk and then seeing two or three products that become successful?"


New research on adoption of Artificial intelligence within IoT ecosystem – ELE Times

element14, the Development Distributor, has published new research on the Internet of Things (IoT) which confirms strong adoption of Artificial Intelligence (AI) within IoT devices, alongside new insights on key markets, enablers and concerns for design engineers working in IoT.

AIoT, the combination of AI and IoT, is the major emerging trend from the survey, demonstrating that a true IoT ecosystem is beginning to take shape. The research showed that almost half (49%) of respondents already use AI in their IoT applications, with Machine Learning (ML) the most used technology (28%), followed by cloud-based AI (19%). This adoption of AI within IoT design is coupled with a growing confidence to take the lead on IoT development and an increasing number of respondents seeing themselves as innovators. However, some engineers (51%) remain hesitant to adopt AI because they are new to the technology or require specialized expertise in how to implement AI in IoT applications.

Other results from element14's second Global IoT Survey show that security continues to be the biggest concern designers consider in IoT implementation. Although the share citing security as their biggest concern fell from 40% in 2018 to 35% in 2019, it is still ranked significantly higher than connectivity and interoperability, because the data collected from things (machines) and humans can be very sensitive and personal. Businesses initiating new IoT projects treat IoT security as a top priority, implementing hardware and software security to protect against any kind of potential threat. Ownership of collected data is another important aspect of security, with 70% of respondents preferring to own the data collected by an edge device rather than have it owned by the IoT solution provider.

The survey also shows that although many engineers (46%) still prefer to design a complete edge-to-cloud and security solution themselves, openness to integrating production-ready solutions, such as the SmartEdge Agile and SmartEdge IIoT Gateway, which offer a complete end-to-end IoT solution, has increased. 12% more respondents said they would consider third-party devices in 2019 than in 2018, particularly if in-house expertise is limited or time to market is critical.

A key trend from last year's survey results has continued in 2019: the growing range of hardware available to support IoT development continues to present new opportunities. More respondents than ever are seeing innovation coming from start-ups (33%, up from 26%), who benefit from the wide availability of modular solutions and single-board computers on the market. The number of respondents adopting off-the-shelf hardware has also increased, to 54% from 50% in 2018.

Cliff Ortmeyer, Global Head of Technical Marketing for Farnell and element14, says: "Opportunities within the Internet of Things and AI continue to grow, fueled by access to an increasing number of hardware and software solutions which enable developers to bring products to market more quickly than ever before, and without the need for specialized expertise. This is opening up IoT to new entrants, and giving more developers the opportunity to innovate to improve lives. element14 provides access to an extensive range of development tools for IoT and AI which provide off-the-shelf solutions to common challenges."

Despite the swift integration of smart devices such as Amazon's Alexa and Google Home into daily life, evidencing widespread adoption of IoT in the consumer space, in 2019 we saw a slight shift in focus away from home automation: the number of respondents who considered it the most impactful IoT application for the next 5 years fell from 27% to 22%. Industrial automation and smart cities both gained, at 22% and 16% respectively, underpinned by a growing understanding of the value that IoT data can bring to operations (rising from 44% in 2018 to 50% in 2019). This trend is visible in industry, where more manufacturing facilities are converting to fully or semi-automated robotic manufacturing and increasing investment in predictive maintenance to reduce production downtime.

The survey was conducted between September and December 2019, with 2,015 respondents participating from 67 countries across Europe, North America and APAC. Responses came predominantly from engineers working on IoT solutions (59%), as well as buyers of components related to IoT solutions, hobbyists and makers.

element14 provides a broad range of products and support materials to assist developers designing IoT solutions and integrating Artificial Intelligence. Products are available from leading manufacturers such as Raspberry Pi, Arduino and Beagleboard. element14's IoT hub and AI pages also provide access to the latest products for development, along with insights and white papers to support the design journey. Readers can view an infographic covering the full results of the element14 Global IoT Survey at Farnell in EMEA, Newark in North America and element14 in APAC.

For more information, visit http://www.element14.com


HKMA’s paper on Artificial Intelligence in the banking industry – Lexology

Last year, the HKMA commissioned a study into the application of Artificial Intelligence (AI) technology in the Hong Kong banking industry. The report, published on 23 December 2019, summarises insights from academics and industry experts on AI. One key finding was that almost 90% of the surveyed retail banks had adopted or planned to adopt AI applications. 95% of the banks that had adopted AI expressed an intention to use AI to shape their corporate strategy, mainly prompted by the need to improve customer experience, stay cost-effective and better manage risk.

To help the banking industry understand the risk and potential of AI, the report covered the latest development trends, potential use cases, status of AI development in banking, challenges and considerations in designing and deploying the technology, as well as the market outlook.

This report is the first in a series of AI-related publications produced by the HKMA. The full report can be accessed here.


What Is Artificial Intelligence (AI)? | PCMag

In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans.

But true artificial intelligence, as McCarthy conceived it, continues to elude us.

A great challenge with artificial intelligence is that it's a broad term, and there's no clear agreement on its definition.

As mentioned, McCarthy proposed AI would solve problems the way humans do: "The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans," McCarthy said.

Andrew Moore, Dean of Computer Science at Carnegie Mellon University, provided a more modern definition of the term in a 2017 interview with Forbes: "Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence."

But our understanding of "human intelligence" and our expectations of technology are constantly evolving. Zachary Lipton, the editor of Approximately Correct, describes the term AI as "aspirational, a moving target based on those capabilities that humans possess but which machines do not." In other words, the things we ask of AI change over time.

For instance, in the 1950s, scientists viewed chess and checkers as great challenges for artificial intelligence. But today, very few would consider chess-playing machines to be AI. Computers are already tackling much more complicated problems, including detecting cancer, driving cars, and processing voice commands.

The first generation of AI scientists and visionaries believed we would eventually be able to create human-level intelligence.

But several decades of AI research have shown that replicating the complex problem-solving and abstract thinking of the human brain is supremely difficult. For one thing, we humans are very good at generalizing knowledge and applying concepts we learn in one field to another. We can also make relatively reliable decisions based on intuition and with little information. Over the years, human-level AI has become known as artificial general intelligence (AGI) or strong AI.

The initial hype and excitement surrounding AI drew interest and funding from government agencies and large companies. But it soon became evident that contrary to early perceptions, human-level intelligence was not right around the corner, and scientists were hard-pressed to reproduce the most basic functionalities of the human mind. In the 1970s, unfulfilled promises and expectations eventually led to the "AI winter," a long period during which public interest and funding in AI dampened.

It took many years of innovation and a revolution in deep-learning technology to revive interest in AI. But even now, despite enormous advances in artificial intelligence, none of the current approaches to AI can solve problems in the same way the human mind does, and most experts believe AGI is at least decades away.

On the flip side, narrow or weak AI doesn't aim to reproduce the functionality of the human brain and instead focuses on optimizing a single task. Narrow AI has already found many real-world applications, such as recognizing faces, transforming audio to text, recommending videos on YouTube, and displaying personalized content in the Facebook News Feed.

Many scientists believe that we will eventually create AGI, but some have a dystopian vision of the age of thinking machines. In 2014, renowned English physicist Stephen Hawking described AI as an existential threat to mankind, warning that "full artificial intelligence could spell the end of the human race."

In 2015, Y Combinator President Sam Altman and Tesla CEO Elon Musk, two other believers in AGI, co-founded OpenAI, a nonprofit research lab that aims to create artificial general intelligence in a manner that benefits all of humankind. (Musk has since departed.)

Others believe that artificial general intelligence is a pointless goal. "We don't need to duplicate humans. That's why I focus on having tools to help us rather than duplicate what we already know how to do. We want humans and machines to partner and do something that they cannot do on their own," says Peter Norvig, Director of Research at Google.

Scientists such as Norvig believe that narrow AI can help automate repetitive and laborious tasks and help humans become more productive. For instance, doctors can use AI algorithms to examine X-ray scans at high speeds, allowing them to see more patients. Another example of narrow AI is fighting cyberthreats: Security analysts can use AI to find signals of data breaches in the gigabytes of data being transferred through their companies' networks.

Early AI-creation efforts were focused on transforming human knowledge and intelligence into static rules. Programmers had to meticulously write code (if-then statements) for every rule that defined the behavior of the AI. The advantage of rule-based AI, which later became known as "good old-fashioned artificial intelligence" (GOFAI), is that humans have full control over the design and behavior of the system they develop.

Rule-based AI is still very popular in fields where the rules are clear-cut. One example is video games, in which developers want the AI to deliver a predictable user experience.
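
A minimal sketch of what such hand-written rules look like in practice (the guard NPC, its thresholds, and its actions are all invented for illustration):

```python
# A minimal sketch of rule-based (GOFAI-style) game AI: every behavior is an
# explicit, hand-written if-then rule, so the designer fully controls the NPC.
def guard_behavior(distance_to_player: float, guard_health: int) -> str:
    """Return the guard's next action based on hand-coded rules."""
    if guard_health < 20:
        return "flee"            # survival rule takes priority
    if distance_to_player < 2.0:
        return "attack"          # player is within melee range
    if distance_to_player < 10.0:
        return "chase"           # player spotted nearby
    return "patrol"              # default behavior

print(guard_behavior(1.5, 80))   # attack
print(guard_behavior(25.0, 80))  # patrol
```

Because every rule is explicit, the behavior is fully predictable, which is exactly the property game developers want, and exactly what breaks down for tasks like vision or speech.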

The problem with GOFAI is that, contrary to McCarthy's initial premise, we can't precisely describe every aspect of learning and behavior in ways that can be transformed into computer rules. For instance, defining logical rules for recognizing voices and images, a complex feat that humans accomplish instinctively, is one area where classic AI has historically struggled.

An alternative approach to creating artificial intelligence is machine learning. Instead of developing rules for AI manually, machine-learning engineers "train" their models by providing them with a massive amount of samples. The machine-learning algorithm analyzes and finds patterns in the training data, then develops its own behavior. For instance, a machine-learning model can train on large volumes of historical sales data for a company and then make sales forecasts.
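
As a toy illustration of that last point (the sales figures are invented, and real systems use far richer models than a straight line), here is a sketch that "trains" on historical monthly sales by fitting a least-squares line, then forecasts the next month:

```python
# A hypothetical sales-forecast sketch: find the pattern (a linear trend) in
# historical data by ordinary least squares, then extrapolate one month ahead.
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

months = [1, 2, 3, 4, 5, 6]             # invented historical data
sales = [100, 110, 125, 130, 145, 150]
slope, intercept = fit_line(months, sales)
forecast_month_7 = slope * 7 + intercept
print(round(forecast_month_7, 1))        # 162.7
```

The point is the workflow, not the model: the program's behavior (the slope and intercept) comes from the data rather than from hand-written rules.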

Deep learning, a subset of machine learning, has become very popular in the past few years. It's especially good at processing unstructured data such as images, video, audio, and text documents. For instance, you can create a deep-learning image classifier and train it on millions of available labeled photos, such as the ImageNet dataset. The trained AI model will be able to recognize objects in images with accuracy that often surpasses humans. Advances in deep learning have pushed AI into many complicated and critical domains, such as medicine, self-driving cars, and education.

One of the challenges with deep-learning models is that they develop their own behavior based on training data, which makes them complex and opaque. Often, even deep-learning experts have a hard time explaining the decisions and inner workings of the AI models they create.

Here are some of the ways AI is bringing tremendous changes to different domains.

Self-driving cars: Advances in artificial intelligence have brought us very close to making the decades-long dream of autonomous driving a reality. AI algorithms are one of the main components that enable self-driving cars to make sense of their surroundings, taking in feeds from cameras installed around the vehicle and detecting objects such as roads, traffic signs, other cars, and people.

Digital assistants and smart speakers: Siri, Alexa, Cortana, and Google Assistant use artificial intelligence to transform spoken words to text and map the text to specific commands. AI helps digital assistants make sense of different nuances in spoken language and synthesize human-like voices.

Translation: For many decades, translating text between different languages was a pain point for computers. But deep learning has helped create a revolution in services such as Google Translate. To be clear, AI still has a long way to go before it masters human language, but so far, advances are spectacular.

Facial recognition: Facial recognition is one of the most popular applications of artificial intelligence. It has many uses, including unlocking your phone, paying with your face, and detecting intruders in your home. But the increasing availability of facial-recognition technology has also given rise to concerns regarding privacy, security, and civil liberties.

Medicine: From detecting skin cancer and analyzing X-rays and MRI scans to providing personalized health tips and managing entire healthcare systems, artificial intelligence is becoming a key enabler in healthcare and medicine. AI won't replace your doctor, but it could help to bring about better health services, especially in underprivileged areas, where AI-powered health assistants can take some of the load off the shoulders of the few general practitioners who have to serve large populations.

In our quest to crack the code of AI and create thinking machines, we've learned a lot about the meaning of intelligence and reasoning. And thanks to advances in AI, we are accomplishing tasks alongside our computers that were once considered the exclusive domain of the human brain.

Some of the emerging fields where AI is making inroads include music and arts, where AI algorithms are manifesting their own unique kind of creativity. There's also hope AI will help fight climate change, care for the elderly, and eventually create a utopian future where humans don't need to work at all.

There's also fear that AI will cause mass unemployment, disrupt the economic balance, trigger another world war, and eventually drive humans into slavery.

We still don't know which direction AI will take. But as the science and technology of artificial intelligence continues to improve at a steady pace, our expectations and definition of AI will shift, and what we consider AI today might become the mundane functions of tomorrow's computers.


What is AI? Everything you need to know about Artificial …


It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.

At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, the list goes on and on.

Artificial general intelligence is very different, and is the type of adaptable intellect found in humans: a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or of reasoning about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but it doesn't exist today, and AI experts are fiercely divided over how soon it will become a reality.


A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50 percent chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. The group went even further, predicting that so-called 'superintelligence' -- which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" -- was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and believe that AGI is still centuries away.

There is a broad body of research in AI, with many strands that feed into and complement each other.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have 'learned' how to carry out a particular task.
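
The training loop described above can be sketched in miniature. This hypothetical example (the target function, data, and learning rate are all assumptions for illustration) trains a single "neuron" with two weights, nudging each weight until the output matches the desired targets:

```python
# A minimal sketch of neural-network training: one "neuron" with two weights,
# adjusted by gradient descent until its output is very close to the targets.
# It learns the relationship y = 2*x1 + 3*x2 from four invented examples.
data = [((1, 0), 2), ((0, 1), 3), ((1, 1), 5), ((2, 1), 7)]
w1, w2 = 0.0, 0.0            # initial weights, before any learning
lr = 0.05                    # learning rate: size of each adjustment

for _ in range(2000):        # repeated passes over the training data
    for (x1, x2), target in data:
        output = w1 * x1 + w2 * x2
        error = output - target
        # vary each weight in proportion to its contribution to the error
        w1 -= lr * error * x1
        w2 -= lr * error * x2

print(round(w1, 2), round(w2, 2))   # converges toward 2.0 and 3.0
```

Real networks have millions of weights arranged in layers rather than two, but the principle is the same: keep varying the weights until the output is close to what is desired.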

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.


There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

The structure and training of deep neural networks.

Another area of AI research is evolutionary computation, which borrows from Darwin's famous theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.
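
As a toy sketch of the evolutionary approach (not Uber's actual method; the fitness function, population size, and other parameters are invented for illustration), here is a genetic algorithm that evolves a bit-string toward all ones through selection, crossover, and random mutation:

```python
import random

# A toy genetic algorithm: evolve a bit-string toward the maximum number of
# 1 bits via selection (keep the fittest), crossover, and random mutation.
random.seed(0)                              # fixed seed for reproducibility
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(genome):
    return sum(genome)                      # count of 1 bits

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]       # selection: fittest half survives
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, LENGTH)   # single-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(LENGTH)        # mutation: flip one random bit
        child[i] ^= 1
        children.append(child)
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best))                        # approaches the maximum of 20
```

In neuroevolution, the genome would encode a neural network's weights or architecture instead of raw bits, but the evolve-select-mutate loop is the same.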

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing the machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google and Microsoft, have moved to using specialized chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end graphics processing units (GPUs).

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is training them on a very large number of labeled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labeled to indicate whether they contain a dog, or written sentences footnoted to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labeling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.
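
A minimal sketch of learning from labeled examples (the features, values, and animals are all hypothetical): a 1-nearest-neighbor classifier that labels a new point with the label of the closest training example:

```python
# A minimal supervised-learning sketch: classify a new point by finding the
# nearest labeled training example. Features are invented for illustration:
# (weight in kg, ear length in cm).
training_data = [
    ((30.0, 8.0), "dog"),
    ((25.0, 9.0), "dog"),
    ((4.0, 6.0), "cat"),
    ((5.0, 7.0), "cat"),
]

def classify(point):
    def distance(example):
        (x, y), _label = example
        # squared Euclidean distance to the query point
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(training_data, key=distance)[1]

print(classify((28.0, 8.5)))   # dog
print(classify((4.5, 6.5)))    # cat
```

Industrial systems use millions of examples and far more sophisticated models, but the contract is the same: labeled data in, labels for new data out.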


Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size -- Google's Open Images Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through Amazon Mechanical Turk -- who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

Unsupervised learning takes a different approach: algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, for example Google News grouping together stories on similar topics each day.
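
A toy sketch of this idea is one-dimensional k-means clustering, which groups numbers (say, fruit weights in grams) purely by similarity, with no labels involved. The data values here are invented for illustration.

```python
# Unsupervised learning sketch: two-cluster k-means on one-dimensional data.
# No labels are given; the algorithm groups values purely by similarity.

def kmeans_two_clusters(values, iterations=20):
    """Split scalar values into two groups by iteratively refining centroids."""
    centroids = [min(values), max(values)]  # simple starting guesses
    clusters = ([], [])
    for _ in range(iterations):
        clusters = ([], [])
        for v in values:
            # Assign each value to its nearest centroid
            idx = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[idx].append(v)
        # Move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Hypothetical fruit weights in grams: the algorithm separates the light
# fruit from the heavy fruit without ever being told what either group is.
light, heavy = kmeans_two_clusters([110, 120, 115, 1200, 1150, 1300])
print(sorted(light), sorted(heavy))  # → [110, 115, 120] [1150, 1200, 1300]
```

Note that the algorithm never learns what an "apple" or a "watermelon" is; it only discovers that the data falls into two natural groups, which is exactly the clustering behaviour described above.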

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game the system builds a model of which action will maximize the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
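
The trial-and-error loop described above can be illustrated with tabular Q-learning on a toy environment. This is a hand-rolled sketch, not DeepMind's Deep Q-network: the DQN replaces the lookup table below with a neural network fed raw pixels, but the reward-maximising update is the same idea. The corridor environment and all hyperparameters are invented for illustration.

```python
import random

# Reinforcement learning sketch: tabular Q-learning on a 5-state corridor.
# A reward of 1 is given only for reaching the rightmost state; the agent
# learns by trial and error which action maximises that reward.

N_STATES = 5
ACTIONS = (-1, +1)                        # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                      # 500 episodes of trial and error
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:     # occasionally explore at random
            action = random.choice(ACTIONS)
        else:                             # otherwise exploit current estimates
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate toward reward + discounted best future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned policy: the best action in each non-terminal state
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy moves right in every state, since rightward moves lead soonest to the reward: the same "maximize the score" behaviour the Deep Q-network exhibits in Breakout, just on a far smaller problem.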

Many AI-related technologies are approaching, or have already reached, the 'peak of inflated expectations' in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait.

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google with its DeepMind AlphaGo AI that has made the biggest impact on public awareness of AI.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let customers use its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires no machine-learning expertise from the user.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don't want to build their own machine learning models but instead want to consume AI-powered, on-demand services -- such as voice, vision, and language recognition -- Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile, IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella -- and recently investing $2bn in buying The Weather Company to unlock a trove of data to augment its AI services.

Internally, each of the tech giants -- and others such as Facebook -- use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam -- the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft's Cortana.

The Amazon Echo Plus is a smart speaker with access to Amazon's Alexa virtual assistant built in.

These assistants rely heavily on voice recognition and natural-language processing, and need an immense corpus to draw upon to answer queries, so a huge amount of tech goes into developing them.

But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space -- Google Assistant with its ability to answer a wide range of queries, and Amazon's Alexa with the massive number of 'Skills' that third-party developers have created to add to its capabilities.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion that major PC makers will build Alexa into laptops adding to speculation about whether Cortana's days are numbered, although Microsoft was quick to reject this.

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. China itself is pursuing a three-step plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu's self-driving car, a modified BMW 3 series.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China's favor.

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.

There are too many milestones to put together a comprehensive list, but some highlights include: in 2009, Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each -- setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win, Watson used natural language processing and analytics across vast repositories of data to answer human-posed questions, often in a fraction of a second.

IBM Watson competes on Jeopardy! on January 14, 2011.

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision, with Google training a system to recognise an internet favorite, pictures of cats.

Since Watson's win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 possible moves per turn, compared to about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was taught how to play the game by taking some 30 million moves played by human experts and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.
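
That iterative refinement can be sketched with the simplest possible case: gradient descent fitting a one-parameter model y = w * x to toy data. Each pass nudges the weight to reduce the error, just as deep networks do across millions of parameters and examples. The data and learning rate below are invented for illustration.

```python
# Training as iterative refinement: gradient descent on a one-parameter
# linear model. Real deep-learning training is this loop at vast scale.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # hypothetical (x, y) pairs, roughly y = 2x

w = 0.0                                       # initial guess for the weight
lr = 0.05                                     # learning rate
for step in range(200):                       # repeated passes over the data
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                            # refine the model slightly

print(round(w, 2))  # converges close to the underlying slope of 2
```

Each of the 200 steps improves the model only slightly; it is the sheer number of such updates, over far more data and parameters, that makes training deep networks so time-consuming.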

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself and then learnt from the results. At last year's prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed that a generalised version, AlphaZero, had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year, a system trained by OpenAI defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, it is helping robots move into new areas such as self-driving cars and delivery robots, as well as helping them learn new skills. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone's voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people's image, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost 95 percent. Recently Microsoft's Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, provided the face is clear enough in the video. While police forces in western countries have generally only trialled facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior; they are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it's likely this more intrusive use of AI technology -- including AI that can recognize emotions -- will gradually become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM's Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK's National Health Service, where it will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Is AI dangerous?

Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a "fundamental risk to the existence of human civilization". As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI, he co-founded OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking warned that once a sufficiently advanced AI is created, it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft's director of research in Cambridge, England, stresses how different the narrow intelligence of today's AI is from the general intelligence of humans. Asked about people worrying about "Terminator and the rise of the machines and so on", he replied: "Utter nonsense, yes. At best, such discussions are decades away."

Will AI take your job?

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future concern.

While AI won't replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn't have the potential to impact. As AI expert Andrew Ng puts it: "many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work". He sees a "significant risk of technological unemployment over the next few decades".

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers simply take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon is again leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers, who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it's not a given that manual and robotic labor will continue to grow hand-in-hand.

Amazon bought Kiva Systems in 2012 and today uses its robots throughout its warehouses.

Artificial intelligence | MIT News

Weather's a problem for autonomous cars. MIT's new system shows promise by using ground-penetrating radar instead of cameras or lasers.

Tech-based solutions sought for challenges in work environments, education for girls and women, maternal and newborn health, and sustainable food.

MIT duo uses music, videos, and real-world examples to teach students the foundations of artificial intelligence.

PatternEx merges human and machine expertise to spot and respond to hacks.

In a Starr Forum talk, Luis Videgaray, director of MIT's AI Policy for the World Project, outlines key facets of regulating new technologies.

A deep-learning model identifies a powerful new drug that can kill many species of antibiotic-resistant bacteria.

MIT graduate student is assessing the impacts of artificial intelligence on military power, with a focus on the US and China.

The mission of SENSE.nano is to foster the development and use of novel sensors, sensing systems, and sensing solutions.

By organizing performance data and predicting problems, Tagup helps energy companies keep their equipment running.

Researchers develop a more robust machine-vision architecture by studying how human vision responds to changing viewpoints of objects.

Three-day hackathon explores methods for making artificial intelligence faster and more sustainable.

MIT's new system TextFooler can trick the types of natural-language-processing systems that Google uses to help power its search results, including audio for Google Home.

Starting with higher-value niche markets and then expanding could help perovskite-based solar panels become competitive with silicon.

With the initial organizational structure in place, the MIT Schwarzman College of Computing moves forward with implementation.

Doctoral candidate Natalie Lao wants to show that anyone can learn to use AI to make a better world.

Device developed within the Department of Civil and Environmental Engineering has the potential to replace damaged organs with lab-grown ones.

Computer scientists' new method could help doctors avoid ineffective or unnecessarily risky treatments.

Model tags road features based on satellite images, to improve GPS navigation in places with limited map data.

A new method determines whether circuits are accurately executing complex operations that classical computers can't tackle.

MIT researchers and collaborators have developed an open-source curriculum to teach young students about ethics and artificial intelligence.

A 5-Year Vision for Artificial Intelligence in Higher Ed – EdTech Magazine: Focus on Higher Education

The Historical Hype Cycle of AI

Before talking about the current and projected impact of AI in education and other industries, Ramsey explained the concept of the AI winter.

He showed a graph on the historical hype cycle of AI that featured peaks and drops over a 70-year period.

There was a big peak in the mid-1960s, with the emergence of symbolic AI research and new insights into the possibility of training two-layer neural networks. A resurgence came in the 1980s with the invention of algorithms for training neural networks with three or more layers.

The graph showed a drop in the mid-1990s, as the computational horsepower and data did not exist to develop real-world applications for AI, a situation he calls an "AI winter." We are in the middle of another resurgence today, he said.

"There has been a huge increase in the amount of data and computer power that we have available, sparking research," Ramsey said. "People have been able to start inventing algorithms and training not just three-layer neural networks but a 100-layer one."

The question now is where we will go next, he said. His answer? We will sustain progress, leading to "true" or "strong" AI: the point at which a machine's intellectual capability is functionally equal to a human's.

"The number of researchers working on this, the amount of money that's being spent on this and the amount of research publications: it's all growing," he said. "And where Google is right now is on a plateau of productivity, because we're using AI in everything that we do, at scale."

During his presentation, Ramsey showed an infographic that featured what machine learning could look like across a students journey through higher education, starting from their college search and ending with employment.

For example, he said, colleges and universities can apply machine learning when targeting quality prospective students to attend their schools. They can even automate call center operations to make contacting prospective students more efficient and deploy AI-driven assistants to engage with applicants in a personalized way, he said.

Once students are enrolled, they can also use AI chatbots to improve student support services, assisting new students in their adjustment to college. They can leverage adaptive learning technology to predict performance as they choose a path through school, and they can tailor material to their knowledge levels and learning styles.

For example, a machine learning algorithm helped educators at Ivy Tech Community College in Indianapolis identify at-risk students and provide early intervention, Ramsey said.

Ivy Tech shifted to Google Cloud Platform, which allowed the school to manage 12 million data points from student interactions and develop a flexible AI engine to analyze student engagement and success. For instance, a student who stops logging in to their learning management system or showing up to class would be flagged as needing assistance.

The predictions were 83 percent accurate, Ramsey said. "It worked quite well, and they were actually able to save students from dropping out, which makes a big difference because their funding is based on how many students they have," he said.
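
The kind of engagement flagging described above can be sketched very simply. Everything here (student IDs, field names, thresholds) is hypothetical; Ivy Tech's actual engine is far richer, drawing on some 12 million data points rather than two hand-picked signals.

```python
# A deliberately simple sketch of at-risk flagging: students whose recent
# LMS logins or class attendance fall off are flagged for early intervention.
# All data, fields, and thresholds are hypothetical illustrations.

def flag_at_risk(students, min_logins=2, min_attendance=0.6):
    """Return ids of students whose recent engagement suggests they need help."""
    flagged = []
    for s in students:
        if s["logins_last_week"] < min_logins or s["attendance_rate"] < min_attendance:
            flagged.append(s["id"])
    return flagged

roster = [
    {"id": "s01", "logins_last_week": 5, "attendance_rate": 0.9},
    {"id": "s02", "logins_last_week": 0, "attendance_rate": 0.8},  # stopped logging in
    {"id": "s03", "logins_last_week": 4, "attendance_rate": 0.3},  # stopped attending
]

print(flag_at_risk(roster))  # → ['s02', 's03']
```

A real system would learn these thresholds (and many more signals) from historical outcomes rather than hard-coding them, which is where the machine learning comes in.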

As students near graduation and start their job searches, schools can also use AI to understand career trends and match them to a student's competencies and skills. Machine learning can be used to better understand job listings and a jobseeker's intent, matching candidates to their ideal jobs more quickly.

"At the end of the day, what we're doing with these technologies is trying to understand who we are and how our minds work," Ramsey said. "Once we fully understand that, we can build machines that function in the same way, and the possibilities are endless."

32 Artificial Intelligence Companies You Should Know | Built In

From Google and Amazon to Apple and Microsoft, every major tech company is dedicating resources to breakthroughs in artificial intelligence. Personal assistants like Siri and Alexa have made AI a part of our daily lives. Meanwhile, revolutionary breakthroughs like self-driving cars may not be the norm, but are certainly within reach.

As the big guys scramble to infuse their products with artificial intelligence, other companies are hard at work developing their own intelligent technology and services. Here are 32 artificial intelligence companies and AI startups you may not know today, but you will tomorrow.

Industry: Healthtech, Biotech, Big Data

Location: Chicago, Illinois

What it does: Tempus uses AI to gather and analyze massive pools of medical and clinical data at scale. With the assistance of AI, the company provides precision medicine that personalizes and optimizes treatments to each individual's specific health needs, relying on everything from genetic makeup to past medical history to diagnose and treat. Tempus is currently focusing on using AI to create breakthroughs in cancer research.

Industry: Big Data, Software

Location: Boston, Massachusetts

What it does: DataRobot provides data scientists with a platform for building and deploying machine learning models. The software helps companies solve challenges by finding the best predictive model for their data. DataRobot's tech is used in healthcare, fintech, insurance, manufacturing and even sports analytics.

Industry: Big Data, Software

Location: Chicago, Illinois

What it does: Narrative Science creates natural language generation (NLG) technology that can translate data into stories. By highlighting only the most relevant and interesting information, businesses can make quicker decisions regardless of the staff's experience with data or analytics.

Industry: Fintech

Location: New York, New York

What it does: AlphaSense is an AI-powered search engine designed to help investment firms, banks and Fortune 500 companies find important information within transcripts, filings, news and research. The technology uses artificial intelligence to expand keyword searches for relevant content.

Industry: Software

Location: New York, New York

What it does: Clarifai is an image recognition platform that helps users organize, curate, filter and search their media. Within the platform, images and videos are tagged, teaching the intelligent technology to learn which objects are displayed in a piece of media.

Industry: Machine Learning, Software

Location: Boston, Massachusetts

What it does: Neurala is developing "The Neurala Brain," a deep learning neural network software that makes devices like cameras, phones and drones smarter and easier to use. Neurala's solutions are currently used on more than a million devices. Additionally, companies and organizations like NASA, Huawei, Motorola and the Defense Advanced Research Projects Agency (DARPA) are using the technology.

Industry: Automotive, Transportation

Location: Boston, Massachusetts

What it does: With a mission to provide safe, efficient driverless vehicles, nuTonomy is developing software that powers autonomous vehicles in cities around the world. The company uses AI to combine mapping, perception, motion planning, control and decision making into software designed to eliminate driver-error accidents.


Industry: Adtech, Software

Location: New York, New York

What it does: Persado is a marketing language cloud that uses AI-generated language to craft advertising for targeted audiences. With functionality across all channels, Persado helps businesses increase acquisitions, boost retention and build better relationships with their customers.

Industry: Machine Learning

Location: New York, New York

What it does: x.ai creates autonomous personal assistants powered by intelligent technology. The assistants, simply named Amy and Andrew Ingram, integrate with programs like Outlook, Google, Office 365 and Slack, schedule or update meetings, and continually learn from every interaction.

Industry: Software, Cloud

Location: Austin, Texas

What it does: CognitiveScale builds augmented intelligence for the healthcare, insurance, financial services and digital commerce industries. Its technology helps businesses increase customer acquisition and engagement, while improving processes like billing and claims. CognitiveScale's products are used by such heavy hitters as P&G, Exxon, JPMorgan Chase, Macy's and NBC.

Industry: Biotech, Healthtech

Location: San Francisco, California

What it does: Freenome uses artificial intelligence to conduct innovative cancer screenings and diagnostic tests. Using non-invasive blood tests, the company's AI technology recognizes disease-associated patterns, providing earlier cancer detection and better treatment options.

Industry: Robotics

Location: Pleasanton, California

What it does: AEye builds the vision algorithms, software and hardware that ultimately become the eyes of autonomous vehicles. Its LiDAR technology focuses on the most important information in a vehicle's sightline, such as people, other cars and animals, while putting less emphasis on things like the sky, buildings and surrounding vegetation.

Industry: Machine Learning, Robotics

Location: Menlo Park, California

What it does: AIBrain is working to create fully autonomous artificial intelligence. By fusing problem solving, learning and memory technologies together, the company can build systems that learn and adapt without human assistance.

Industry: Agriculture, Robotics, Software

Location: Sunnyvale, California

What it does: Blue River Tech combines artificial intelligence and computer vision to build smarter farm tech. The company's See & Spray machine learning technology, for example, can detect individual plants and apply herbicide to the weeds only. The solution not only combats herbicide-resistant weeds but also reduces the volume of chemicals currently sprayed by 90%.

Industry: Software

Location: Oakland, California

What it does: Vidado can pull data from virtually any channel, including handwritten documents, dramatically increasing paper-to-digital workflow speeds and accuracy. The cloud-based platform is utilized by leading organizations and companies like New York Life, the FDA, MetLife and MassMutual.

Industry: Legal, Software

Location: San Francisco

What it does: Casetext is an AI-powered legal search engine with a database of more than 10 million statutes, cases and regulations. Called CARA A.I., the company's tech can search within the language, jurisdiction and citations of a user's uploaded documents and return relevant results from the database.

Industry: Cloud, Robotics

Location: Santa Clara, California

What it does: CloudMinds provides cloud robot services for the finance, healthcare, manufacturing, power utilities, public sector and enterprise mobility industries. Its cloud-based AI uses advanced algorithms, large-scale neural networks and training data to make smarter robots for image and object recognition, natural language processing, speech recognition and more.

Industry: Software

Location: San Francisco, California

What it does: Figure Eight provides AI training software to machine learning and data science teams. The company's "human-in-the-loop" platform uses human intelligence to train and test machine learning, and has powered AI projects for major companies like Oracle, eBay, SAP and Adobe.

Industry: Big Data, Software

Location: Mountain View, California

What it does: H2O.ai is the creator of H2O, an open source platform for data science and machine learning that is utilized by thousands of organizations worldwide. H2O.ai supplies companies in a variety of industries with predictive analytics and machine-learning tools that aid in solving critical business challenges.

Industry: Biotech

Location: Bethesda, Maryland

What it does: Insilico Medicine is using artificial intelligence for anti-aging and drug discovery research. The company's drug discovery engine contains millions of samples for finding disease identifiers. Insilico is used by academic institutions, pharmaceutical and cosmetic companies.

Industry: Software, Automotive

Parasoft introduces Artificial Intelligence (AI) and Machine Learning (ML) into Software Test Automation for the Safety-critical Market at Embedded…

Parasoft C/C++test's new functionality offers teams the ability to link test cases to requirements, along with code coverage enhancements, improving productivity instantly

MONROVIA, Calif. and NUREMBERG, Germany, Feb. 27, 2020 /PRNewswire/ -- Parasoft, the global automated software testing authority since 1987, announced today at Embedded World the new release of Parasoft C/C++test, a unified C and C++ development testing solution for real-time, safety- and security-critical embedded applications and enterprise IT. With this new release, Parasoft applies a new approach to expedite the review of code analysis findings and increase the productivity of automated software testing, allowing teams to achieve industry compliance standards more easily.

Parasoft introduces AI and Machine Learning into Software Test Automation

To learn more about Parasoft C/C++test, visit: https://www.parasoft.com/products/ctest.

"With the new release of C/C++test,we are bringing unique AIandMLcapabilities to help organizations with the adoption of static analysis forsecure safety-critical applicationsdevelopment.With these innovations, organizations can immediately reduce manual effort in their software quality processes,"statedMiroslawZielinski,ParasoftProduct Manager."Organization serious intheirapproach to safety, security, and quality of software, will soon need to include AI-based tools into their development process to keep pace with competition and stay relevantinthe market. This is only our first step in the application of AIandML to the safety-critical market."

Embedded devices are complex, and with increasing safety and security concerns, it is crucial that automated software testing solutions keep up with ever-expanding compliance standards. Parasoft therefore continues to lead in enforcing the latest guidelines. Additionally, because industry standards require traceability from software requirements to test cases, Parasoft has built integrations with some of the most popular application lifecycle management (ALM) solutions to establish that traceability.
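The requirement-to-test-case traceability that ALM integrations establish can be modeled as a simple bidirectional mapping. The sketch below is a toy model of the concept, not any vendor's implementation; the requirement IDs and test names are hypothetical.

```python
# Toy bi-directional traceability: link tests to requirements, invert the
# mapping, and flag requirements with no linked test (a compliance gap).
def invert(links):
    """Flip a test -> requirements mapping into requirement -> tests."""
    by_req = {}
    for test, reqs in links.items():
        for req in reqs:
            by_req.setdefault(req, []).append(test)
    return by_req

def uncovered(requirements, links):
    """Requirements with no linked test case."""
    covered = {r for reqs in links.values() for r in reqs}
    return sorted(set(requirements) - covered)

requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_links = {
    "test_brake_signal": ["REQ-1"],
    "test_watchdog": ["REQ-1", "REQ-2"],
}

print(invert(test_links)["REQ-1"])        # ['test_brake_signal', 'test_watchdog']
print(uncovered(requirements, test_links))  # ['REQ-3']
```

Certification audits under functional safety standards hinge on exactly this kind of gap report.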

"The market for functional safety (FuSa)Testtoolssales will grow at the quickest CAGR of 9.3% to reach $539.6M in revenue in 2023. The need to establish bi-directional traceability to meet FUSA certification requirements is fueling interest in using integrated application lifecycle management (ALM)and product lifecycle management (PLM) solutions to manage the entire product development process,"states Chris Rommel, EVP,VDC ResearchGroup.

What's new?

AboutParasoft

Parasoft, the global automated software testing authority for more than 30 years, provides innovative tools that automate time-consuming testing tasks and provide management with the intelligent analytics necessary to focus on what matters. Parasoft supports software organizations as they develop and deploy applications for the embedded, enterprise, and IoT markets. Parasoft's technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating static and runtime analysis; unit, functional, and API testing; and service virtualization. With its developer testing tools, manager reporting/analytics, and executive dashboarding, Parasoft enables organizations to succeed in today's most strategic ecosystems and development initiatives: real-time, safety-critical, secure, agile, continuous testing, and DevOps. www.parasoft.com; https://www.parasoft.com/products/ctest


Here is the original post:
Parasoft introduces Artificial Intelligence (AI) and Machine Learning (ML) into Software Test Automation for the Safety-critical Market at Embedded...

Artificial Intelligence is Starting to Shape the Future of the Workplace – JD Supra


Read the rest here:
Artificial Intelligence is Starting to Shape the Future of the Workplace - JD Supra