Washington state governor green-lights facial-recog law championed by… guess who: Yep, hometown hero Microsoft – The Register

Roundup Here's your quick-fire summary of recent artificial intelligence news.

DeepMind has built a reinforcement-learning bot capable of playing 57 classic Atari 2600 games about as well as the average human.

Why 57, you may ask? The Atari 2600 console was launched in 1977 and has a library of hundreds of games. In 2012, a group of computer scientists came up with The Arcade Learning Environment (ALE), a toolkit consisting of 57 old Atari games to test reinforcement-learning agents.

AI researchers have been using this collection to benchmark the progress of their game-playing bots ever since. The average score reached on all 57 games has steadily increased with the development of more complex machine-learning systems, but most models have struggled to play the most difficult ones, such as Montezuma's Revenge, Pitfall, Solaris, and Skiing.

Reinforcement learning attempts to teach AI bots how to complete a specific task, such as playing a game, without explicitly telling them the rules. The agents thus have to learn through trial and error, guided by rewards. Reaching high scores means more delicious rewards, and over time, the computer learns to make good moves to play the game well.
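To make that trial-and-error loop concrete, here is a minimal tabular Q-learning sketch in Python. The toy corridor environment, reward values, and hyperparameters are invented for illustration; Agent57 itself relies on deep neural networks and far more sophisticated exploration bonuses.

```python
import random
from collections import defaultdict

# Toy environment: states 0..4 along a corridor, a reward only at the far end.
N_STATES, ACTIONS = 5, [1, -1]            # move right or left
q_table = defaultdict(float)              # Q[(state, action)]: learned value estimate
alpha, gamma, epsilon = 0.1, 0.99, 0.2    # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore randomly some of the time; otherwise exploit the best-known action
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # sparse reward signal
        # Nudge the estimate toward the reward plus the discounted future value
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (
            reward + gamma * best_next - q_table[(state, action)]
        )
        state = next_state
```

Even in this five-state corridor, the agent only learns once exploration stumbles onto the single rewarding state; scaled up, that is exactly why sparse-reward games like Montezuma's Revenge are so much harder than Pong.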

The researchers have improved their system by employing different types of algorithms and tricks. The bot, dubbed Agent57, is better equipped to deal with the most difficult games because it's been programmed to explore its environment efficiently even when the rewards are sparse.

A number of steps have to be executed in those games before a reward is given, so it's not immediately obvious how to play Montezuma's Revenge, Pitfall, Solaris, and Skiing, compared to games like Pong, which give more immediate reward feedback.

The boffins reckon that mastering games in the ALE dataset is a good sign that a system is more generally intelligent and robust, and might therefore be applied in the real world.

"The ultimate goal is not to develop systems that excel at games, but rather to use games as a stepping stone for developing systems that learn to excel at a broad set of challenges," Deepmind wrote.

You can read about the numerous nifty techniques used to improve Agent57 in more detail here [PDF].

The governor of the US state of Washington, Jay Inslee, has signed into law a piece of legislation that regulates the use of facial-recognition systems.

While the likes of San Francisco and Oakland in California, and Somerville in Massachusetts, have banned law enforcement from using facial-recognition technology, Washington has gone for a softer approach. That's not too much of a surprise, considering the bill [PDF] was sponsored by Microsoft, and the US state is the home of the Windows giant. Microsoft is keen for organizations to use its machine-learning services for things like facial and object recognition.

"This legislation represents a significant breakthrough the first time a state or nation has passed a new law devoted exclusively to putting guardrails in place for the use of facial recognition technology," Redmond's president, Brad Smith, said.

Law enforcement agencies in Washington will be allowed to deploy facial-recognition systems, but will have to be more transparent about using them. First, they have to file a "notice of intent", a report that details the service the cops want to use from a particular vendor and what it's being used for. The document also has to show what kind of data is collected and generated, what decisions the software makes, and where it will be deployed. The notice has to be given to a "legislative authority" and will be made public.

On the vendor side of things, companies will have to provide an application programming interface (API) to enable an independent party to audit the algorithm's performance. They must also report "any complaints or reports of bias regarding the service".

Smith gushed: "Through some of the new law's most important provisions, Washington state has become the first jurisdiction to enact specific facial recognition rules to protect civil liberties and fundamental human rights. While the public will rightly assess ways to improve upon this approach over time, it's worth recognizing at the outset the thorough approach the Washington state legislature has adopted."

Meanwhile, the American Civil Liberties Union has been fighting for a moratorium on facial recognition, demanding a temporary ban on the technology until Congress passes stricter laws that protect an individual's rights.

The Washington law is due to go into effect next year.

Remember Amazon's little AI music-generating keyboard DeepComposer that was touted at its annual re:Invent developer conference last year?

Well, now you can finally play with it. Don't worry if you don't have an actual physical keyboard: Amazon has released a digital version alongside the software needed to create music via machine learning.

DeepComposer trains generative adversarial networks (GANs) to create new jingles based on a particular style of music. The software is designed to help enthusiasts who don't necessarily have a deep knowledge of machine learning or music to learn about GANs in more detail.

It gives step-by-step instructions on how to build, train, and test GANs without having to write any code. Users create a little melody on the digital keyboard and pick a genre, and the GAN fills in the blanks, transforming the simple tune into computer-generated music. The physical keyboard is available too, but only in the US.
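For the curious, here is what an adversarial training loop looks like in broad strokes, written as a PyTorch sketch. The 16-step "melody" vectors, network sizes, and random stand-in data are assumptions for illustration; this is not DeepComposer's actual architecture.

```python
import torch
import torch.nn as nn

# Generator maps random noise to a 16-step "melody" vector; the discriminator
# judges whether a melody looks real. Both are deliberately tiny.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))
D = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real_melodies = torch.rand(64, 16)   # stand-in for real melodies in a chosen genre

for step in range(200):
    # 1) Train the discriminator to separate real melodies from generated ones
    fake = G(torch.randn(64, 8)).detach()
    d_loss = loss_fn(D(real_melodies), torch.ones(64, 1)) + \
             loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two networks improve in tandem: as the discriminator gets pickier, the generator's output drifts toward the style of the training data, which is the "fills in the blanks" behavior described above.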

You can find out more about that here.


Read the rest here:
Washington state governor green-lights facial-recog law championed by... guess who: Yep, hometown hero Microsoft - The Register

The quantum computing market valued $507.1 million in 2019, from where it is projected to grow at a CAGR of 56.0% during 2020-2030 (forecast period),…

NEW YORK, April 6, 2020 /PRNewswire/ -- Quantum Computing Market Research Report: By Offering (Hardware, Software, Service), Deployment Type (On-Premises, Cloud-Based), Application (Optimization, Simulation and Data Problems, Sampling, Machine Learning), Technology (Quantum Dots, Trapped Ions, Quantum Annealing), Industry (BFSI, Aerospace & Defense, Manufacturing, Healthcare, IT & Telecom, Energy & Utilities) Industry Share, Growth, Drivers, Trends and Demand Forecast to 2030

Read the full report: https://www.reportlinker.com/p05879070/?utm_source=PRN

The quantum computing market was valued at $507.1 million in 2019, from where it is projected to grow at a CAGR of 56.0% during 2020-2030 (the forecast period), to ultimately reach $64,988.3 million by 2030. Machine learning (ML) is expected to progress at the highest CAGR among all application categories during the forecast period, owing to the fact that quantum computing is being integrated into ML to improve the latter's use cases.
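As a quick sanity check, the two headline figures are mutually consistent once the rounding of the growth rate is accounted for, as this small calculation shows:

```python
# The press release's figures: $507.1M in 2019 growing to $64,988.3M by 2030,
# i.e. 11 compounding years.
start, end, years = 507.1, 64988.3, 11
implied_cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {implied_cagr:.1%}")                      # ~55.5%, reported as 56.0%
print(f"value at exactly 56.0%: {start * 1.56 ** years:,.0f}")  # ~67,525 ($M)
```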

Government support for the development and deployment of the technology is a prominent trend in the quantum computing market, with companies as well as public bodies realizing the importance of a coordinated funding strategy. For instance, the National Quantum Initiative Act, which became law in December 2018, included $1.2 billion in funding from the U.S. House of Representatives for the National Quantum Initiative Program. The aim of the funding was to facilitate the development of technology applications and quantum information science over a 10-year period by setting priorities and goals.

Moreover, efforts are being made to come up with standards for quantum computing technology. Among the numerous standards being developed by the IEEE Standards Association Quantum Computing Working Group are benchmarks and performance metrics, which would help in comparing the performance of quantum computers against that of conventional computers. Other noteworthy standards relate to nomenclature and definitions, in order to create a common language for quantum computers.

In 2019, the quantum computing market was dominated by the quantum annealing category on the basis of technology, because the physical challenges in its development have been overcome and it is now being deployed in larger systems. That year, the banking, financial services, and insurance (BFSI) division held the largest share of the market, on account of the industry's rapid expansion. Additionally, banks and other financial institutions are quickly deploying the technology to streamline their business processes and to secure their data.

By 2030, Europe and North America are expected to account for more than 78.0% of the quantum computing market, as Canada, the U.S., the U.K., Germany, and Russia are witnessing heavy investment in the field. For instance, the National Security Agency (NSA), National Aeronautics and Space Administration (NASA), and Los Alamos National Laboratory are engaged in quantum computing technology development. Additionally, an increasing number of collaborations and partnerships are being witnessed in these regions, along with the entry of several startups.

The major players operating in the highly competitive quantum computing market are Telstra Corporation Limited, International Business Machines (IBM) Corporation, Silicon Quantum Computing, IonQ Inc., Alphabet Inc., Huawei Investment & Holding Co. Ltd., Microsoft Corporation, Rigetti & Co. Inc., Zapata Computing Inc., D-Wave Systems Inc., and Intel Corporation. Google LLC, the main operating subsidiary of Alphabet Inc., established its Quantum AI Laboratory in collaboration with NASA, wherein quantum computers developed by D-Wave Systems Inc. are used.


About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

Contact Clare: clare@reportlinker.com US: (339)-368-6001 Intl: +1 339-368-6001

View original content: http://www.prnewswire.com/news-releases/the-quantum-computing-market-valued-507-1-million-in-2019--from-where-it-is-projected-to-grow-at-a-cagr-of-56-0-during-20202030-forecast-period-to-ultimately-reach-64-988-3-million-by-2030--301036177.html

SOURCE Reportlinker

View original post here:
The quantum computing market valued $507.1 million in 2019, from where it is projected to grow at a CAGR of 56.0% during 2020-2030 (forecast period),...

DeepMind's AI models transition of glass from a liquid to a solid – VentureBeat

In a paper published in the journal Nature Physics, DeepMind researchers describe an AI system that can predict the movement of glass molecules as they transition between liquid and solid states. The techniques and trained models, which have been made available in open source, could be used to predict other qualities of interest in glass, DeepMind says.

Beyond glass, the researchers assert the work yields insights into general substance and biological transitions, and that it could lead to advances in industries like manufacturing and medicine. "Machine learning is well placed to investigate the nature of fundamental problems in a range of fields," a DeepMind spokesperson told VentureBeat. "We will apply some of the learnings and techniques proven and developed through modeling glassy dynamics to other central questions in science, with the aim of revealing new things about the world around us."

Glass is produced by cooling a mixture of high-temperature melted sand and minerals. It acts like a solid once cooled past its crystallization point, resisting tension from pulling or stretching. But at the microscopic level, its molecular structure resembles that of an amorphous liquid.

Solving glass's physical mysteries motivated an annual conference by the Simons Foundation, which last year hosted a group of 92 researchers from the U.S., Europe, Japan, Brazil, and India in New York. In the three years since the inaugural meeting, they've managed breakthroughs like supercooled liquid simulation algorithms, but they've yet to develop a complete description of the glass transition and a predictive theory of glass dynamics.

That's because there are countless unknowns about the nature of the glass formation process, like whether it corresponds to a structural phase transition (akin to water freezing) and why viscosity increases by a factor of a trillion during cooling. It's well understood that modeling the glass transition is a worthwhile pursuit: the physics behind it underlies behavior modeling, drug delivery methods, materials science, and food processing. But the complexities involved make it a hard nut to crack.

Fortunately, there exist structural markers that help identify and classify phase transitions of matter, and glasses are relatively easy to simulate and input into particle-based models. As it happens, glasses can be modeled as particles interacting via a short-range repulsive potential, and this potential is relational (because only pairs of particles interact) and local (because only nearby particles interact with each other).

The DeepMind team leveraged this to train a graph neural network, a type of AI model that operates directly on a graph (a non-linear data structure consisting of nodes, or vertices, and edges, the lines or arcs that connect any two nodes), to predict glassy dynamics. They first created an input graph where the nodes and edges represented particles and interactions between particles, respectively, such that a particle was connected to its neighboring particles within a certain radius. Two encoder models then embedded the labels (i.e., translated them into mathematical objects the AI system could understand). Next, the edge embeddings were iteratively updated, at first based on their previous embeddings and the embeddings of the two nodes to which they were connected.

After all of the graph's edges were updated in parallel using the same model, another model refreshed the nodes based on the sum of their neighboring edge embeddings and their previous embeddings. This process repeated several times to allow local information to propagate through the graph, after which a decoder model extracted mobilities (measures of how much a particle typically moves) for each particle from the final embeddings of the corresponding node.
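A rough NumPy sketch of that message-passing scheme may help. The graph construction by cutoff radius follows the description above, but the embedding sizes, random weights, and update functions are placeholders, not DeepMind's trained, open-sourced model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 20, 8
positions = rng.uniform(0, 10, size=(n_particles, 3))

# Connect each particle to its neighbors within a cutoff radius
edges = [(i, j) for i in range(n_particles) for j in range(n_particles)
         if i != j and np.linalg.norm(positions[i] - positions[j]) < 4.0]

h_node = rng.normal(size=(n_particles, dim))        # encoder output (stubbed)
h_edge = {e: rng.normal(size=dim) for e in edges}
W_edge = 0.1 * rng.normal(size=(3 * dim, dim))      # edge-update weights
W_node = 0.1 * rng.normal(size=(2 * dim, dim))      # node-update weights
w_dec = 0.1 * rng.normal(size=dim)                  # decoder weights

for _ in range(3):   # repeated rounds let local information propagate
    # Edge update: previous edge embedding plus its two endpoint embeddings
    h_edge = {(i, j): np.tanh(
                  np.concatenate([h_edge[(i, j)], h_node[i], h_node[j]]) @ W_edge)
              for (i, j) in edges}
    # Node update: previous node embedding plus the sum of incident edge embeddings
    agg = np.zeros((n_particles, dim))
    for (i, j), h in h_edge.items():
        agg[j] += h
    h_node = np.tanh(np.concatenate([h_node, agg], axis=1) @ W_node)

mobility = h_node @ w_dec    # decoder: one predicted mobility per particle
print(mobility[:5])
```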

The team validated the model by constructing several data sets corresponding to mobility predictions on different time horizons at different temperatures. After applying graph networks to the simulated 3D glasses, they found the system strongly outperformed both existing physics-inspired baselines and state-of-the-art AI models.

They say the network was extremely good on short timescales and remained well matched up to the relaxation time of the glass (which would be up to thousands of years for actual glass), achieving a 96% correlation with the ground truth for short times and a 64% correlation at the relaxation time of the glass. In the latter case, that's an improvement of 40% compared with the previous state of the art.

In a separate experiment, to better understand the graph model, the team explored which factors were important to its success. They measured the sensitivity of the prediction for the central particle when another particle was modified, enabling them to judge how large an area the network used to extract its prediction. This provided an estimate of the distance over which particles influence each other in the system.

They report there's compelling evidence that growing spatial correlations are present upon approaching the glass transition, and that the network learned to extract them. "These findings are consistent with a physical picture where a correlation length grows upon approaching the glass transition," wrote DeepMind in a blog post. "The definition and study of correlation lengths is a cornerstone of the study of phase transitions in physics."

DeepMind claims the insights gleaned could be useful in predicting other qualities of glass; as alluded to earlier, the glass transition phenomenon manifests in more than window (silica) glasses. The related jamming transition can be found in ice cream (a colloidal suspension), piles of sand (granular materials), and cell migration during embryonic development, as well as social behaviors such as traffic jams.

Glasses are archetypal of these kinds of complex systems, which operate under constraints where the position of elements inhibits the motion of others. It's believed that a better understanding of them will have implications across many research areas. For instance, imagine a new type of stable yet dissolvable glass structure that could be used for drug delivery or building renewable polymers.

"Graph networks may not only help us make better predictions for a range of systems," wrote DeepMind, "but indicate what physical correlates are important for modeling them." The suggestion is that machine learning systems might eventually assist researchers in deriving fundamental physical theories, ultimately helping to augment, rather than replace, human understanding.

Follow this link:
DeepMind's AI models transition of glass from a liquid to a solid - VentureBeat

Keeping Up With Encryption in 2020 – Security Boulevard

Encryption has become key to many cyber defense strategies, with organizations looking to more securely protect their data and privacy, as well as meet stricter compliance regulations including Europe's GDPR and the California Consumer Privacy Act. Its use is unsurprisingly on the rise, with Gartner estimating that over 80% of enterprise web traffic was encrypted in 2019 and Google currently offering the HTTPS protocol as standard to 94% of its customers, putting the company well on its way to its goal of 100% encryption this year.

From WhatsApp's end-to-end encrypted messages to secure online banking, encryption is everywhere. The cryptographic protocols Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), ensure organizations protect the important data on their networks while remaining compliant. Though some authorities believe they should have backdoor access to this content, tech giants and whistleblowers alike have condemned the idea, with Facebook stating it would undermine "the privacy and security of people everywhere", and Edward Snowden claiming it would be "the largest [...] violation of privacy in history".

However, for all its privacy and data protection benefits, encryption has unintentionally created a new threat: encrypted malware. Cybercriminals are using the very aspects that make encryption so appealing for their own means and increasingly leveraging cryptographic protocols to provide cover for their attacks. As more companies adopt encryption, hackers will have even more places to hide.

Many organizations have had firsthand experience of encrypted malware attacks. Here are just some of 2019's higher-profile attacks that hid among encrypted traffic flows between compromised network servers and command-and-control centers as a way to avoid being detected by IDS and other anti-malware solutions:

Emotet, TrickBot and Ryuk have also been dubbed a triple-threat, with Emotet and TrickBot trojans being used to deliver Ryuk ransomware, causing even more damage to the affected organizations.

The biggest issue with encrypted malware attacks, and the primary reason the above examples were so successful, is that they are nearly impossible to detect, with many commonly deployed solutions offering woefully inadequate protection.

The challenge for organizations looking to spot and stop encrypted malware attacks is being able to see inside their encrypted data flows. To achieve this, many organizations decrypt the traffic entering and leaving their networks, before scanning it for threats and then re-encrypting it. While in principle this technique should work, the decryption approach comes with a whole host of issues.

First, it raises concerns around compliance. Since all encrypted traffic has to be decrypted to be inspected, there is a very real risk that some sensitive information will, for a brief time at least, be visible in plaintext. Secondly, there are huge financial costs and latency issues to consider, with costs growing and network performance being severely impacted by the amount of data that has to be processed, a problem that will only grow in step with the increase in encrypted data.

A more recent, and potentially bigger, problem is that decryption will no longer be possible thanks to the introduction of TLS 1.3. This cryptographic protocol, ratified by the IETF in 2018, includes stronger encryption and streamlined authentication processes, but also flags any decryption attempt as a man-in-the-middle attack, immediately terminating the session and preventing malicious traffic from being detected. Even the NSA has warned of the problems associated with TLS inspection, issuing a cyber advisory on the subject.

This inability to see inside encrypted traffic traversing an organization's network is worrying, to say the least, with 87% of CIOs believing their security defenses are less effective because they cannot inspect encrypted network traffic for attacks, according to Venafi. As a new decade begins, organizations need to be wary of relying on traditional methods of detecting this new attack vector and should not depend on decryption alone to solve the problem. If 2019 is any indication, hidden malware isn't going anywhere.

Gartner predicts that over 70% of malware campaigns in 2020 will use some type of encryption. Whether this includes new strains of Emotet or Ryuk, or completely new threats, organizations need to be prepared.

In particular, they must look at alternative methods of protecting their networks and consider more modern solutions. Rather than rely on anti-malware scanners that are unable to see inside encrypted traffic, or count on decryption to sort the bad data from the good, organizations should look at AI and machine learning techniques that analyze encrypted traffic at the metadata level. These methods don't require decryption, so they sidestep the compliance issues of inspecting traffic content, and there are no problems with latency or with navigating TLS 1.3.
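A minimal sketch of what metadata-level analysis can look like, using scikit-learn: flow-level features such as packet sizes and timing are examined without ever touching the payload. The features and labels below are synthetic stand-ins; a real deployment would train on labeled flow records from network telemetry, and vendors' actual feature sets vary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic flow metadata: none of these features requires decryption to observe.
rng = np.random.default_rng(42)
n_flows = 1000
X = np.column_stack([
    rng.normal(800, 200, n_flows),    # mean packet size (bytes)
    rng.exponential(0.05, n_flows),   # mean inter-arrival time (seconds)
    rng.integers(2, 500, n_flows),    # packets per flow
    rng.uniform(0, 300, n_flows),     # flow duration (seconds)
])
y = rng.integers(0, 2, n_flows)       # 0 = benign, 1 = malicious (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # ~0.5 here: labels are random
```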

This proactive and neater approach to malware detection will be an essential tool as encrypted malware becomes an even greater threat.

More:
Keeping Up With Encryption in 2020 - Security Boulevard

The Coronavirus Crisis: ‘Global Surveillance in Response to COVID-19 Surpassing 9/11’ – Byline Times

Campaigners warn that it would be short-sighted for governments to allow efforts to save lives in the COVID-19 outbreak to destroy fundamental rights in societies.

Around the world, journalists are being gagged and imprisoned, the location of citizens is being tracked and some are being named and shamed on Government websites. This dystopian crackdown on human rights is all taking place under the pretext of keeping people safe from an invisible killer.

COVID-19 has forced governments to introduce emergency legislation that would be unthinkable in any other situation. In many cases, the emergency powers are helping to keep people safe but, in others, they are beginning to look more like power grabs by quasi-dictators who have seen an opportunity.

A stark example of this can be seen in the centre of Europe, where Hungarian Prime Minister Viktor Orbán has passed legislation allowing him to continue to rule by decree for as long as there is a state of emergency, a state which has been declared but has no clear time limit. The legislation paves the way for citizens to be jailed for up to five years for spreading what the state considers to be misinformation.

Pavol Szalai, head of the European Union and Balkans desk at Reporters Without Borders, branded it an "Orwellian law" that introduces "a full-blown information police state in the heart of Europe".

In Bulgaria, Prime Minister Boyko Borissov has proposed a law that allows jail terms for those spreading "fake news" about infectious diseases, and police have been given the authority to request and obtain metadata from citizens' private communications. Meanwhile, in Poland, Coronavirus patients are being told to download a new app that will require them to take selfies to prove that they are quarantining properly.

The UK Government has also sparked controversy with its Coronavirus Bill, labelled "the most draconian powers in peacetime" by UK campaign group Big Brother Watch because it allows police to detain anyone they believe could be infectious, restrict public events and gatherings, and impose travel restrictions. The Government is also reportedly in negotiations with mobile network operators such as O2 and EE, asking them to hand over customer data that could allow people to be tracked through their phones, in the UK and abroad.

Edin Omanovic, advocacy director of Privacy International, warned in a statement that the growing use of invasive surveillance is surpassing even how Governments across the world responded to 9/11.

"The laws, powers, and technologies being deployed around the world pose a grave and long-term threat to human freedom," he said. "Some measures are based on public health measures with significant protections, while others amount to little more than opportunistic power grabs. This extraordinary crisis requires extraordinary measures, but it also demands extraordinary protections. It would be incredibly short-sighted to allow efforts to save lives to instead destroy our societies. Even now, Governments can choose to deploy measures in ways that are lawful, build public trust and respect people's wellbeing. Now, more than ever, Governments must choose to protect their citizens rather than their own tools of control."

Privacy International is one of more than 100 civil society groups to sign an open letter urging Governments not to respond to the Coronavirus with an increase in digital surveillance if it comes at a cost to human rights. "An increase in state digital surveillance powers, such as obtaining access to mobile phone location data, threatens privacy, freedom of expression and freedom of association, in ways that could violate rights and degrade trust in public authorities, undermining the effectiveness of any public health response," it states.

"These are extraordinary times, but human rights law still applies. Indeed, the human rights framework is designed to ensure that different rights can be carefully balanced to protect individuals and wider societies. States cannot simply disregard rights such as privacy and freedom of expression in the name of tackling a public health crisis."

Another signatory of the statement is Amnesty International. Rasha Abdul Rahim, deputy director of Amnesty Tech, acknowledged that technology does play an important role in combatting COVID-19 but said that it should not give governments carte blanche to expand digital surveillance.

"The recent past has shown governments are reluctant to relinquish temporary surveillance powers," she said. "We must not sleepwalk into a permanent expanded surveillance state. Increased digital surveillance to tackle this public health emergency can only be used if certain strict conditions are met. Authorities cannot simply disregard the right to privacy and must ensure any new measures have robust human rights safeguards."

In the years following the 9/11 terror attacks, the UK and US implemented major new surveillance programmes under the pretext of tackling terrorism. These included almost all US mobile phone companies providing the US National Security Agency (NSA) with all of their customers' phone records, and the UK's Government Communications Headquarters (GCHQ) intercepting fibre optic cables around the world to capture data flowing through the internet.

These programmes and many more were revealed by NSA whistleblower Edward Snowden. In a video conference interview for the Copenhagen Documentary Film Festival, Snowden spoke of the dangers that the virus now presents to civil liberties.

On governments taking health data from devices such as fitness trackers to monitor heart rhythms, he said: "Five years later, the Coronavirus is gone, this data's still available to them, they start looking for new things. They already know what you're looking at on the internet, they already know where your phone is moving, now they know what your heart rate is. What happens when they start to intermix these and apply artificial intelligence to them?"

Here is the original post:
The Coronavirus Crisis: 'Global Surveillance in Response to COVID-19 Surpassing 9/11' - Byline Times

4 ONLINE THEATRE Wild, Hampstead Theatre London – Morning Star Online

MIKE BARTLETT'S opening play in Hampstead Theatre's short season of free weekly online productions owes much to Pinter's comedies of menace, with their characteristic mixture of humour, mystery and lurking fear.

Like The Dumb Waiter, originally planned for Hampstead's main theatre programme, now postponed, Wild is set initially in a recognisable social context, with the plot progressively leaving the target character bewildered and unhinged.

Michael, played by Jack Farthing, is a somewhat naive Edward Snowden-type whistleblower who, having leaked a massive stash of incriminating Pentagon documents, is on the run.

He's trapped in a Moscow hotel room with Caoilfhionn Dunne's zany minder pressing him to join her unidentified resistance movement. In the background there is apparently an unnamed leader holed up in a nearby foreign embassy. Julian Assange, perhaps?

She progressively strips the nervous Michael of his wavering self-confidence: "If you want to know anything about yourself, just ask."

When he fights back, insisting he had acted in the hope of creating a freer world and demanding to know what his tormentor believes in, she answers: "Progress." Her evidence? Wi-fi.

She is replaced by an equally enigmatic protector with a more threatening approach, leading to a final surrealist climax which both mirrors the increasingly tragi-farcical nature of our contemporary world and, in James Macdonald's production, cleverly plays with and merges the very artifice of theatre and video.

Available online until April 5, hampsteadtheatre.com

View original post here:
4 ONLINE THEATRE Wild, Hampstead Theatre London - Morning Star Online

Artificial Intelligence News: Latest Advancements in AI …

How does Artificial Intelligence work?

Artificial Intelligence is a complex field with many components and methodologies used to achieve the final result: an intelligent machine. AI was developed by studying the way the human brain thinks, learns and decides, then applying those biological mechanisms to computers.

As opposed to classical computing, where coders provide the exact inputs, outputs, and logic, artificial intelligence is based on giving a machine the inputs and a desired outcome and letting the machine develop its own path to achieve that goal. This frequently allows computers to optimize a situation better than humans can, for example in supply chain logistics or streamlining financial processes.
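A toy contrast may make this concrete; the shipping-cost scenario and data below are invented for illustration.

```python
from sklearn.linear_model import LinearRegression

# Classical computing: the programmer states the exact logic up front.
def shipping_cost_rule(weight_kg: float) -> float:
    return 5.0 + 2.0 * weight_kg          # hand-coded formula

# Machine learning: we supply inputs and desired outcomes (historical data)
# and let the algorithm find its own mapping between them.
weights = [[1.0], [2.0], [4.0], [8.0]]    # inputs
observed_costs = [7.1, 8.9, 13.2, 20.8]   # desired outcomes
model = LinearRegression().fit(weights, observed_costs)

print(shipping_cost_rule(3.0))     # the path the coder chose
print(model.predict([[3.0]]))      # the path the machine developed itself
```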

There are four types of AI that differ in the complexity of their abilities: reactive machines, limited-memory systems, theory-of-mind AI, and self-aware AI.

Artificial intelligence is used in virtually all businesses; in fact, you likely interact with it in some capacity on a daily basis. Chatbots, smart cars, IoT devices, healthcare, banking, and logistics all use artificial intelligence to provide a superior experience.

One AI that is quickly finding its way into most consumers' homes is the voice assistant, such as Apple's Siri, Amazon's Alexa, Google's Assistant, and Microsoft's Cortana. Once simply considered part of a smart speaker, AI-equipped voice assistants are now powerful tools deeply integrated across entire ecosystems of channels and devices to provide an almost human-like virtual assistant experience.

Don't worry, we are still far from a Skynet-like scenario. AI is as safe as the technology it is built upon. But keep in mind that any device that uses AI is likely connected to the internet, and given that internet-connected device security isn't perfect and we continue to see large company data breaches, there could be AI vulnerabilities if the devices are not properly secured.

Startups and legacy players alike are investing in AI technology. Some of the leaders include household names like:

As well as newcomers such as:

APEX Technologies was also ranked as the top artificial intelligence company in China last year.

You can read our full list of most innovative AI startups to learn more.

Artificial intelligence can help reduce human error, create more precise analytics, and turn data-collecting devices into powerful diagnostic tools. One example of this is wearable devices such as smartwatches and fitness trackers, which put data in the hands of consumers to empower them to play a more active role in managing their health.

Learn more about how tech startups are using AI to transform industries like digital health and transportation.

Then-Dartmouth College professor John McCarthy coined the term "artificial intelligence" and is widely known as the father of AI. In the summer of 1956, McCarthy, along with nine other scientists and mathematicians from Harvard, Bell Labs, and IBM, developed the concept of programming machines to use language and solve problems while improving over time.

McCarthy went on to teach at Stanford for nearly 40 years and received the Turing Award in 1971 for his work in AI. He passed away in 2011.

Open application programming interfaces (APIs) are publicly available governing requirements on how an application can communicate and interact. Open APIs provide developers access to proprietary software or web services so they can integrate them into their own programs. For example, you can create your own chatbot using this framework.
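A sketch of what integrating such a service can look like; the endpoint, authentication scheme, and response fields below are entirely hypothetical, since every open API defines its own contract in its documentation.

```python
import requests

API_URL = "https://api.example.com/v1/chat"   # hypothetical open API endpoint

def ask_bot(message: str, api_key: str) -> str:
    """Send one chat message and return the bot's reply (assumed response shape)."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"message": message},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["reply"]   # field name assumed for this sketch

# Example call (needs a real key and endpoint to actually run):
# print(ask_bot("What are your opening hours?", api_key="YOUR_KEY"))
```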

As you can imagine, artificial intelligence technology is evolving daily, and Business Insider Intelligence is keeping its finger on the pulse of how artificial intelligence will shape the future of a variety of industries, such as the Internet of Things (IoT), transportation and logistics, digital health, and multiple branches of fintech including insurtech and life insurance.

See the original post here:
Artificial Intelligence News: Latest Advancements in AI ...

Benefits and Risks of Artificial Intelligence

We might still be decades away from superhuman artificial intelligence (AI), like the sentient HAL 9000 from 2001: A Space Odyssey, but our fear of robots having a mind of their own, acting at their own (free) will and using it against humankind, is nonetheless present. Even some of the greatest minds of our time, such as Elon Musk and Stephen Hawking, have been talking about this possibility.

On a more down-to-earth and practical level, artificial intelligence has already sneaked into our lives. We've grown so accustomed to some of the best AI apps, such as Cortana, Alexa or Siri, that we already think of them as our trusted companions that help us run our everyday tasks easily and smoothly.

However, while a catastrophic sci-fi movie scenario is not a thing we should be worried about (at least not at the moment), there are some risks related to AI implementation which are far more tangible and plausible.

Read on to find out more about some real-life benefits and risks of AI implementation.

By now, virtually all industries have opened their doors to the various advancements AI brings. Here are some of the most prominent uses we're witnessing, and will be seeing more of in the years to come, in the digital marketing, healthcare, and finance industries.

If you've recently used a chat window to reach customer service, chances are high you've been talking to a chatbot, maybe even without realizing it. It may come as a surprise that 40% of customers are fine with either option, as long as they get their issues solved.

Chatbots embody many benefits AI brings to businesses, and are a great example of how it may improve a sensitive and time-consuming matter such as customer service.

Some of the crucial points where you can see the advantages of AI-powered chatbots are:

Besides chatbots, AI can benefit digital marketing in many different ways, as it can be used to automate many different tasks, such as email and paid ad campaigns. It can also help marketers create more precise buyer personas, predict customers' behavior, give sales forecasts, help with content creation, and more. These benefits to the e-commerce industry can hardly be measured, as businesses can now always be there for their online customers, assisting them in making their purchasing decisions and helping them navigate their customer journey.

Another noticeable way AI benefits our lives is through its usage in healthcare.

We've recently witnessed a win for trained AI over human experts, as AI outperformed six radiologists in reading mammograms and recognizing breast cancer. Images can now be analyzed in a few seconds by a computer algorithm, so the use of AI can significantly improve the speed of diagnosis.

Beyond radiology, AI is widely used in digital consultations, on platforms such as the Buoy and Isabel symptom checkers, offering remote medical assistance and suggesting where to see a professional based on the user's location.

The advantages of AI were recognized early by the finance and banking sectors, and the technology is now implemented in ways beneficial to both institutions and customers.

One of the best examples of how beneficial AI in this industry can be is Erica, the virtual assistant of Bank of America. Erica has by now served over 7 million customers and handled over 50 million of their requests, helping them with their transactions and budgeting, tracking their spending habits and giving useful advice.

As for the actual potential risks of AI today, the one that raises the most concern is job loss, which in some industries seems inevitable.

AI-powered employees have quite a few advantages over their human colleagues. As they have no personal and emotional responses, they're never exhausted, bored or distracted, not to mention more productive and efficient. Furthermore, their propensity to make errors is significantly reduced.

Such qualities make AI the most likely cause of layoffs where many tasks can be automated, such as in the trucking, food service and retail industries, leading to millions of unemployed workers and even higher income inequality.

Another rising concern has been an invasion of privacy. This has already taken place in China, where AI-powered technologies are used for the purposes of mass surveillance, impacting the so-called social credit system.

The system tracks users' behavior everywhere it has access: their social media profiles, financial reports, health records and so on. Data collected this way, including records of jaywalking or failing to correctly sort personal waste, can negatively influence a citizen's credit score, while donating blood or volunteering can increase it. Negative credit can, for example, ban you from buying plane tickets or enrolling your kids in certain schools. Finally, the possibility of using AI capacities for military purposes shouldn't be neglected, as the idea of having this kind of power concentrated in the hands of any of the world's leaders seems like a genuine threat to the world as we know it.

And while we think about all the benefits and risks artificial intelligence brings, let's not forget one crucial point: AI doesn't set its own goals. The power it has is the power we delegate to it to achieve the things we are trying to accomplish, meaning that we're responsible for both its benefits and its risks.

Visit link:
Benefits and Risks of Artificial Intelligence

What Skills Do I Need to Get a Job in Artificial Intelligence?

Automation, robotics and the use of sophisticated computer software and programs characterize a career in artificial intelligence (AI). Candidates interested in pursuing jobs in this field require specific education based on foundations of math, technology, logic, and engineering perspectives. Written and verbal communication skills are also important to convey how AI tools and services are effectively employed within industry settings. To acquire these skills, those with an interest in an AI career should investigate the various career choices available within the field.

The most successful AI professionals often share common characteristics that enable them to succeed and advance in their careers. Working with artificial intelligence requires an analytical thought process and the ability to solve problems with cost-effective, efficient solutions. It also requires foresight about technological innovations that translate to state-of-the-art programs that allow businesses to remain competitive. Additionally, AI specialists need technical skills to design, maintain and repair technology and software programs. Finally, AI professionals must learn how to translate highly technical information in ways that others can understand in order to carry out their jobs. This requires good communication and the ability to work with colleagues on a team.

Basic computer technology and math backgrounds form the backbone of most artificial intelligence programs. Entry-level positions require at least a bachelor's degree, while positions entailing supervision, leadership or administrative roles frequently require master's or doctoral degrees. Typical coursework involves study of:

Candidates can find degree programs that offer specific majors in AI or pursue an AI specialization from within majors such as computer science, health informatics, graphic design, information technology or engineering.

A career in artificial intelligence can be realized within a variety of settings including private companies, public organizations, education, the arts, healthcare facilities, government agencies and the military. Some positions may require security clearance prior to hiring depending on the sensitivity of information employees may be expected to handle. Examples of specific jobs held by AI professionals include:

From its inception in the 1950s through the present day, artificial intelligence continues to advance and improve the quality of life across multiple industry settings. As a result, those with the skills to translate digital bits of information into meaningful human experiences will find a career in artificial intelligence to be sustaining and rewarding.

See original here:
What Skills Do I Need to Get a Job in Artificial Intelligence?

What's the Difference Between Artificial Intelligence …

This is the first of a multi-part series explaining the fundamentals of deep learning by long-time tech journalist Michael Copeland.

Artificial intelligence is the future. Artificial intelligence is science fiction. Artificial intelligence is already part of our everyday lives. All those statements are true; it just depends on what flavor of AI you are referring to.

For example, when Google DeepMind's AlphaGo program defeated South Korean Master Lee Se-dol in the board game Go earlier this year, the terms AI, machine learning, and deep learning were all used in the media to describe how DeepMind won. And all three are part of the reason why AlphaGo trounced Lee Se-dol. But they are not the same things.

The easiest way to think of their relationship is to visualize them as concentric circles: AI, the idea that came first, is the largest; then machine learning, which blossomed later; and finally deep learning, which is driving today's AI explosion, fitting inside both.

AI has been part of our imaginations and simmering in research labs since a handful of computer scientists rallied around the term at the Dartmouth Conferences in 1956 and birthed the field of AI. In the decades since, AI has alternately been heralded as the key to our civilization's brightest future, and tossed on technology's trash heap as a harebrained notion of over-reaching propellerheads. Frankly, until 2012, it was a bit of both.

Over the past few years AI has exploded, especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement): images, text, transactions, mapping data, you name it.

Let's walk through how computer scientists have moved from something of a bust until 2012 to a boom that has unleashed applications used by hundreds of millions of people every day.

Back at that summer of '56 conference, the dream of those AI pioneers was to construct complex machines, enabled by emerging computers, that possessed the same characteristics as human intelligence. This is the concept we think of as General AI: fabulous machines that have all our senses (maybe even more), all our reason, and think just like we do. You've seen these machines endlessly in movies, as friend (C-3PO) and foe (The Terminator). General AI machines have remained in the movies and science fiction novels for good reason; we can't pull it off, at least not yet.

What we can do falls into the concept of Narrow AI: technologies that are able to perform specific tasks as well as, or better than, we humans can. Examples of narrow AI are things such as image classification on a service like Pinterest and face recognition on Facebook.

Those are examples of Narrow AI in practice. These technologies exhibit some facets of human intelligence. But how? Where does that intelligence come from? That gets us to the next circle, machine learning.

Machine learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is trained using large amounts of data and algorithms that give it the ability to learn how to perform the task.
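In other words, the developer supplies labeled examples instead of rules. A minimal scikit-learn sketch, with a deliberately tiny invented dataset, shows the pattern:

```python
from sklearn.tree import DecisionTreeClassifier

# Each example pairs features (hours of daylight, temperature in Celsius)
# with a label; the dataset is invented for illustration.
X = [[15, 28], [14, 25], [9, 2], [8, -1], [16, 30], [7, 0]]
y = ["summer", "summer", "winter", "winter", "summer", "winter"]

model = DecisionTreeClassifier().fit(X, y)   # the machine infers the rule from data
print(model.predict([[10, 5]]))              # a prediction for an unseen input
```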

Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others. As we know, none achieved the ultimate goal of General AI, and even Narrow AI was mostly out of reach with early machine learning approaches.

To learn more about deep learning, listen to the 100th episode of our AI Podcast with NVIDIA's Ian Buck.

As it turned out, one of the very best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters S-T-O-P. From all those hand-coded classifiers they would develop algorithms to make sense of the image and learn to determine whether it was a stop sign.

Good, but not mind-bendingly great. Especially on a foggy day when the sign isn't perfectly visible, or a tree obscures part of it. There's a reason computer vision and image detection didn't come close to rivaling humans until very recently: they were too brittle and too prone to error.
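For flavor, here is the kind of hand-coded building block the article describes: a Sobel edge-detection filter applied by explicit convolution. The tiny "image" is invented; the point is that every number here was chosen by a programmer rather than learned.

```python
import numpy as np

image = np.zeros((8, 8))
image[:, 4:] = 1.0    # a toy image: dark left half, bright right half

# A classic hand-designed edge detector: responds to horizontal intensity changes
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = np.zeros((6, 6))
for r in range(6):
    for c in range(6):
        edges[r, c] = np.sum(image[r:r + 3, c:c + 3] * sobel_x)

print(edges)   # strong responses only near the vertical boundary
```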

Time, and the right learning algorithms made all the difference.

Another algorithmic approach from the early machine-learning crowd, artificial neural networks, came and mostly went over the decades. Neural networks are inspired by our understanding of the biology of our brains: all those interconnections between the neurons. But, unlike a biological brain, where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.

You might, for example, take an image and chop it up into a bunch of tiles that are inputted into the first layer of the neural network. Individual neurons in the first layer then pass the data to a second layer. The second layer of neurons does its task, and so on, until the final layer produces the final output.

Each neuron assigns a weighting to its input, rating how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and examined by the neurons: its octagonal shape, its fire-engine-red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network's task is to conclude whether this is a stop sign or not. It comes up with a probability vector, really a highly educated guess, based on the weightings. In our example the system might be 86% confident the image is a stop sign, 7% confident it's a speed limit sign, 5% confident it's a kite stuck in a tree, and so on. The network architecture then tells the neural network whether it is right or not.
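Here is a toy forward pass mirroring that description in NumPy. The weights are random and untrained, so the printed probabilities are illustrative only; the three class names come from the example above.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(64)    # stand-in for image tiles flattened into input features

W1, b1 = 0.1 * rng.normal(size=(64, 16)), np.zeros(16)   # first-layer weightings
W2, b2 = 0.1 * rng.normal(size=(16, 3)), np.zeros(3)     # output-layer weightings

h = np.maximum(0, x @ W1 + b1)                  # each neuron weights its inputs (ReLU)
logits = h @ W2 + b2
probs = np.exp(logits) / np.exp(logits).sum()   # softmax: the probability vector

for label, p in zip(["stop sign", "speed limit sign", "kite in a tree"], probs):
    print(f"{label}: {p:.0%}")
```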

Even this example is getting ahead of itself, because until recently neural networks were all but shunned by the AI research community. They had been around since the earliest days of AI and had produced very little in the way of intelligence. The problem was that even the most basic neural networks were very computationally intensive; it just wasn't a practical approach. Still, a small heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn't until GPUs were deployed in the effort that the promise was realized.

If we go back again to our stop sign example, chances are very good that as the network is getting tuned, or "trained", it's coming up with wrong answers a lot. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time, fog or no fog, sun or rain. It's at that point that the neural network has taught itself what a stop sign looks like; or your mother's face, in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google.

Ng's breakthrough was to take these neural networks and essentially make them huge, increasing the layers and the neurons, and then run massive amounts of data through the system to train them. In Ng's case it was images from 10 million YouTube videos. Ng put the "deep" in deep learning, which describes all the layers in these neural networks.

Today, image recognition by machines trained via deep learning is in some scenarios better than that of humans, ranging from identifying cats to spotting indicators for cancer in blood and tumors in MRI scans. Google's AlphaGo learned the game and trained for its Go match by tuning its neural network, playing against itself over and over and over.

Deep learning has enabled many practical applications of machine learning and, by extension, the overall field of AI. Deep learning breaks down tasks in ways that make all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations are all here today or on the horizon. AI is the present and the future. With deep learning's help, AI may even get to that science fiction state we've so long imagined. You have a C-3PO? I'll take it. You can keep your Terminator.

Original post:
What's the Difference Between Artificial Intelligence ...