Microsoft AI’s next leap forward: Helping you play video games – CNET

Could you be playing the next big video game with your voice?

Voice assistants can seem supersmart. Ask my Amazon Alexa why the sky is blue, and you'll get a lesson in how light scatters in the atmosphere.

Ask it what CNET is and things start to break down.

"In addition CNET currently has region-specific and language-specific editions."

Well, sure. Then I asked Alexa when the Super Bowl was, right before Sunday night's game. It responded:

"Super Bowl 50's winner is Denver Broncos."

That's one of the biggest paradoxes of voice assistants. They can control your lights, play music and even tell you silly jokes. But despite their growing presence in our lives, their capabilities are still very limited.

So far, the way many companies have made them better is to hand-code each response. For example, someone at Amazon could go into Alexa's code and teach it what CNET is and when the next Super Bowl will take place.

Microsoft thinks it's found a different way. It's inviting app developers and companies to use its technology, feeding questions, giving responses and learning what needs to be fixed along the way.

The software giant isn't the only one looking for new uses for artificial intelligence, which, in shorthand, is essentially software that can learn, adapt and act in more subtle, sophisticated ways. Facebook is training its AI with all sorts of software tools, including one in its Oregon data center that's trying to teach a computer to create an original piece of art after looking at a series of pictures. Google, meanwhile, is teaching its AI to play board games. And IBM is refining its AI, called Watson, by feeding it data from all manner of businesses.

Microsoft has had its share of public AI efforts too. It offers a voice assistant in its Windows PC and phone software called Cortana, which will happily jot down reminders and answer trivia questions.

It has also released experiments like Tay, a Twitter chatbot that learned from conversations with people. The experiment, however, was quickly taken offline after people taught it to hate feminists, praise Adolf Hitler and solicit sex.

This time around, Microsoft is taking a more measured approach by offering its AI tools to developers. So far, the results have been encouraging.

A security footage startup called Prism has started using Microsoft's tools to help organize playback video. Prism identifies when there's an object in the video that wasn't there before. Then it sends an image from that clip to Microsoft to identify what's in the picture and gets responses back like "dog" or "package."

This could take hours for a person to do, but combining Prism's technology with Microsoft's AI means a search to see how many packages came to the front desk that day takes mere moments. "It's unfathomable to think about how much data there is," said Adam Planas, a creative director at Prism.
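Neither Prism's system nor the cloud API it calls is spelled out in the article, so the pipeline it describes can only be sketched hypothetically: detect a new object in a clip, send a frame to an image-tagging service, and count the labels that come back. Here `label_frame` is a placeholder for the real cloud call; everything else is ordinary bookkeeping.

```python
def label_frame(frame):
    """Placeholder for the cloud image-tagging call; a real version would
    upload the frame and return the tags the service detects."""
    raise NotImplementedError("wire up a real vision API here")

def count_label(clips, label, classify=label_frame):
    """Tally how many motion clips contain a given tag, e.g. 'package'."""
    return sum(1 for clip in clips if label in classify(clip))
```

With the cloud call in place, a query like "how many packages came to the front desk today" reduces to a label count over the day's motion clips, which is why it takes moments rather than hours.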

Microsoft's doing the same with voice commands, offering apps not just transcriptions of what I say, but an estimation of what it means, too. That is, if a video game is expecting to hear me say "how old are you" and I say "you look really young," it'll know I basically mean the same thing.
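The paraphrase matching described here is done by a trained language-understanding model on Microsoft's side. As a toy stand-in, even simple token overlap shows the core idea of mapping different phrasings onto one intent; the intent names and example utterances below are invented for illustration.

```python
def jaccard(a, b):
    """Token-overlap similarity between two phrases (0.0 to 1.0)."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

# Invented intents, each with a few example phrasings.
INTENTS = {
    "ask_age": ["how old are you", "what is your age", "you look really young"],
    "ask_name": ["what is your name", "who are you"],
}

def best_intent(utterance, intents=INTENTS, threshold=0.3):
    """Return the intent whose examples best match the utterance."""
    scored = [(max(jaccard(utterance, ex) for ex in exs), name)
              for name, exs in intents.items()]
    score, name = max(scored)
    return name if score >= threshold else None
```

A real service replaces the overlap score with a learned model, but the contract is the same: "you look really young" and "how old are you" both resolve to one intent the game can act on.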

That's a big improvement over the voice command software Alexander Mejia and his team at Human Interact were using before they turned to Microsoft. Their project, Starship Commander, is a new virtual reality game entirely controlled by the player's voice.

"When people put on the headset, they start role-playing, they get into character," he said. "They want to be the starship commander and go forth and have an adventure."

The goal, he said, is to make players feel completely natural talking to the game. Part of that is by creating a slick-looking game that immerses the player to the point that they feel as though they are on a starship. Then, the game has to coax the player into talking enough that after a while, it's just natural. The only downside is that the game will require an internet connection to send your voice commands to Microsoft for processing.

But the upside is that process is "crazy fast," said Sophie Wright, vice president of business development at Human Interact (who also doubles as a character in the game).

Microsoft believes that by inviting developers to use its technology, they can help train its AI. Aside from the 5,000 engineers Microsoft has working on artificial intelligence, more than 424,000 outside developers have signed up to try it out too.

"I think we're on the cusp of a breakthrough," said Andrew Shuman, a corporate vice president at Microsoft who leads the company's AI research group. Once AI is able to understand us better, they can start truly helping in our daily lives. Imagine being able to ask a security camera where you left your car keys.

"You can set up for real user delight," Shuman said.



Intel will speed up Mukesh Ambani's 5G run, power him with AI and possibly a Jio laptop all in return for ma – Business Insider India

"This move is more strategic rather than just an investment. It's more of a pull rather than a push strategy," Counterpoint analyst Tushar Pathak told Business Insider.

Experts believe that Intel's investment in Jio can help Reliance across three main verticals: artificial intelligence, the 5G push, and a possible debut of a Jio laptop.


Using Intel's AI expertise to manage data from over 500 million subscribers

Bernstein's report from June estimated that Jio would capture nearly half the Indian market by 2025, calling the company the new king of Indian telecommunications. It forecasts that Jio's subscriber base will jump from its current 388 million subscribers to cross 500 million by 2023 and hit 609 million by 2025.


Most of Intel Capital's investments in the past have focussed on artificial intelligence (AI), including edge computing, cloud technology, and the network transformation from 4G to 5G.

"When 500 million subscribers interact and create a massive database, the role of AI is going to be huge for a company like Jio. A lot of their users will have a cross-platform approach," Pathak noted.


According to Pathak, Intel's impetus is to keep an eye on companies which have the ability to make the technological shift that happens only once every decade. In 2020, it's the shift to 5G.

A Jio executive wrote on LinkedIn that Jio's and Intel's leadership in O-RAN and OpenRAN will benefit the growth and transformation of next-generation networks. O-RAN and OpenRAN are open-source 5G software initiatives. "Intel has advanced edge computing offerings across processors, analytics, and AI, and access to this technology can help Jio Platforms' engineering teams make significant pace with their 5G technology and IoT ecosystem rollouts," said a report by Greyhound Research.

Intel's expertise in consumer electronics could also come in handy for the telecom giant. Jio has already marked its foray into the consumer electronics segment with mobiles, and the partnership with Intel can help mark its debut in another segment: laptops, where Intel has a majority market share.

"Intel can help Jio launch computing devices (laptops and tablets) and accessories like cameras. It might help to note that Jio is already aggressively pursuing its ambitions to become a smart city vendor, in which cameras are a key ask. Also, the company has already launched IoT cameras for homes, for both smart TV and surveillance purposes," said the Greyhound report.




Mcubed: More speakers join machine learning and AI extravaganza – The Register

The speaker lineup for Mcubed, our three-day dive into machine learning, AI and advanced analytics, is virtually complete, meaning now would be a really good time to snap up a cut-price early-bird ticket.

Latest additions include Expero Inc's Steve Purves, who'll be discussing graph representations in machine learning, and ASOS's Ben Chamberlain, who will explain how the mega fashion e-tailer combines ML and social media information.

Steve and Ben join a lineup of experts who aren't just looking to the future, but are actually applying ML and AI principles to real business problems right now, at companies like Ocado and OpenTable.

Our aim is to show you how you can apply tools and methodologies to allow your business or organisation to take advantage of ML, AI and advanced analytics to solve the problems you face today, as well as prepare for tomorrow.

At the same time, we'll be looking at the organisational, legal and ethical implications of AI and ML, as well as taking a look at some of the most interesting applications, including autonomous vehicles and robotics.

And our keynote speakers, professor Mark Bishop of Goldsmiths, University of London, and Google's Melanie Warrick, will be grappling with the big issues and setting the tone for the event as a whole.

This all takes place at 30 Euston Square. As well as being easy to get to, this is simply a really pleasant environment in which to absorb some mind-expanding ideas, and discuss them on the sidelines with your fellow attendees and the speakers.

Of course, we'll ensure there's plenty of top-notch food and drink to fuel you through the formal and less formal parts of the programme.

Tickets will be limited, so if you want to ensure your place, head over to our website and snap up your early-bird ticket now.


Facebook Killed an AI After It Came Up With Its Own Language – Nerdist

For decades, humanity has feared that the rise of artificial intelligence could cause unintended and even harmful side effects in the real world. While there are some who have predicted a robo-apocalypse, few would have suspected that the English language would be the first victim in the war between man and machine!

According to a report by Digital Journal, Facebook was experimenting with an artificial intelligence system that essentially gave up on using English in favor of creating its own more efficient language. The researchers on the project reportedly shut down the A.I. once they realized they could no longer understand its language. One of the reasons that the communication gap is significant is that it could theoretically mean that machines will be able to write their own languages and lock users out of their own systems. And you know where that leads…

Well, we're reasonably sure it won't come down to Terminators and the end of the world (probably). But Elon Musk has recently been offering warnings about letting AI run amok. And we don't entirely disagree with him. It's something that should be handled delicately. And killer robots on a battlefield will always be a bad idea.

As for Facebook's linguistic AI, it turns out that the bot may have been on to something. The sentences "I can i i everything else" and "balls have zero to me to me to me" sound like nonsense to us, but they demonstrate how two of the AI bots negotiated with each other. The repeated words and letters apparently indicated a back-and-forth over the amounts that each bot should take in their negotiations. Essentially, it was shorthand.

Somehow, we doubt that use of language will catch on with humanity. But it is fascinating to see what the machines will come up with on their own, provided that they don't kill us all in the process.

What do you think about Facebook's language-altering AI? Download your thoughts to the comment section below!



How AI is shaping the new life in life sciences and pharmaceutical industry – YourStory

The pharma and life sciences industry is faced with increasing regulatory oversight, decreasing R&D productivity, challenges to growth and profitability, and the impact of artificial intelligence (AI) in the value chain. The regulatory changes led by the far-reaching Patient Protection and Affordable Care Act (PPACA) in the US are forcing the pharma and life sciences industry to change its status quo.

Besides the increasing cost of regulatory compliance, the industry is facing rising R&D costs, even as health outcomes deteriorate and new epidemics emerge. Led by the regulatory changes, customer demographics are also shifting, with growth driven by the emerging geographies of APAC and Latin America.

Pharmaceutical organisations can leverage AI in a big way to drive insightful decisions across all aspects of their business, from product planning and design to manufacturing and clinical trials, to enhance collaboration in the ecosystem, information sharing, process efficiency, and cost optimisation, and to drive competitive advantage.

AI enables data mining, engineering, and real-time, algorithm-driven decision-making solutions, which help in responding to the key business value chain disruptions in the pharmaceutical industry:

Though genomics currently hogs the spotlight, there are plenty of other biotechnology fields wrestling with AI. In fact, when it comes to human microbes, the bacteria, fungi, and viruses that live on or inside us, we are talking about astronomical amounts of data. Scientists with the NIH's Human Microbiome Project have counted more than 100 trillion microbes in the human body.

To determine which microbes are most important to our well-being, researchers at the Harvard School of Public Health used unique computational methods to identify around 350 of the most important organisms in their microbial communities. With the help of DNA sequencing, they sorted through 3.5 terabytes of genomic data and pinpointed genetic name tags, sequences specific to those key bacteria. They could then identify where and how often these markers occurred throughout a healthy population. This gave them the opportunity to catalogue over 100 opportunistic pathogens and understand where in the microbiome these organisms normally occur. As in genomics, there are plenty of startups, including Libra Biosciences, Vedanta Biosciences, Seres Health, and Onsel, looking to leverage new discoveries.
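The "genetic name tag" idea, a sequence found in the target organism but in none of the others, can be illustrated with a minimal k-mer set difference. Real marker discovery works at terabyte scale with far more careful statistics; this sketch just shows the shape of the computation, with made-up toy sequences.

```python
def kmers(seq, k=4):
    """All length-k substrings (k-mers) of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def marker_kmers(target, background, k=4):
    """k-mers present in the target organism but in none of the background
    genomes -- a toy version of a sequence-specific 'genetic name tag'."""
    seen = set().union(*(kmers(g, k) for g in background))
    return kmers(target, k) - seen
```

Markers found this way can then be counted across samples from a healthy population to see where, and how often, the tagged organism occurs.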

Perhaps the biggest data challenge for biotechnologists is synthesis. How can scientists integrate large quantities and diverse sets of data (genomic, proteomic, phenotypic, clinical, semantic, social, and so on) into a coherent whole?

Many AI researchers are working to provide plausible responses:

Cambridge Semantics has developed semantic web technologies that help pharmaceutical companies sort and select which businesses to acquire and which drug compounds to license.

Data scientists at the Broad Institute of MIT and Harvard have developed the Integrative Genomics Viewer (IGV), open source software that allows for the interactive exploration of large, integrated genomic datasets.

GNS Healthcare is using proprietary causal Bayesian network modeling and simulation software to analyse diverse sets of data and create predictive models and biomarker signatures.

Numbers-wise, each human genome comprises 20,000-25,000 genes built from three billion base pairs; that's around three gigabytes of data. Consider genomics and the role of AI in personalising the healthcare experience:

Sequencing millions of human genomes would add up to hundreds of petabytes of data.

Analysis of gene interactions multiplies this data even further.

In addition to sequencing, massive amounts of information on structure/function annotations, disease correlations, population variations (the list goes on) are being entered into databanks. Software companies are furiously developing tools and products to analyse this treasure trove.

For example, using Google frameworks as a starting point, the AI team at NextBio has created a platform that allows biotechnologists to search life-science information, share data, and collaborate with other researchers. The computing resources needed to handle genome data "will soon exceed those of Twitter and YouTube," says a team of biologists and computer scientists who are worried that their discipline is not geared to cope with the coming genomics flood.

By 2025, between 100 million and 2 billion human genomes could have been sequenced, according to an analysis published in the journal PLoS Biology. The data-storage demands for this alone could run to as much as 240 exabytes (1 exabyte is 10^18 bytes), because the amount of data that must be stored for a single genome is around 30 times larger than the genome itself, to account for errors incurred during sequencing and preliminary analysis.
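The storage arithmetic can be checked directly. Using the article's upper-bound figures (2 billion genomes, roughly 3 GB each, 30x storage overhead), a back-of-envelope total lands around 180 exabytes, the same order of magnitude as the 240 EB cited:

```python
GENOME_BYTES = 3e9   # ~3 gigabytes per assembled genome (from the article)
OVERHEAD = 30        # stored data per genome is ~30x the genome itself
N_GENOMES = 2e9      # upper-bound 2025 estimate

total_bytes = N_GENOMES * GENOME_BYTES * OVERHEAD
total_exabytes = total_bytes / 1e18   # 1 exabyte = 10^18 bytes
```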

The extensive data generation in pharma, genomics, and the microbiome serves as a clarion call that these fields are going to pose some severe challenges. Astronomers and high-energy physicists process much of their raw data soon after collection and then discard it, which simplifies later steps such as distribution and analysis. But fields like genomics do not yet have standards for converting raw sequence data into processed data.

"The variety of analyses that biologists want to perform in genomics is also uniquely large," the authors write, and current methods for performing these analyses will not necessarily translate well as the volume of such data rises. For instance, comparing two genomes requires comparing two sets of genetic variants. If you have a million genomes, you're talking about a million-squared pairwise comparisons. Delivering algorithms that can do this at scale will require strong data engineering capabilities.
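The "million-squared" remark is the usual quadratic blow-up of pairwise comparison; counting distinct pairs makes the scale concrete:

```python
from math import comb

def pairwise_comparisons(n_genomes: int) -> int:
    """Number of distinct genome pairs to compare -- grows as n^2 / 2."""
    return comb(n_genomes, 2)
```

A million genomes means roughly half a trillion distinct pairwise comparisons, which is why the authors stress data engineering rather than any single clever algorithm.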

There's a massive opportunity for AI to transform the life sciences and pharmaceutical industry. The above-mentioned disruptions in business value chains have already started making inroads, and CXOs in the life sciences industry have realised the virtues of an innovation and transformation regime led by AI. Brace up for more AI-led interventions in the life sciences industry.

(Edited by Evelyn Ratnakumar)

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)


AI Engineers Need to Think Beyond Engineering – Harvard Business Review

Executive Summary

It is very, very easy for a well-intentioned AI practitioner to inadvertently do harm when they set out to do good: AI has the power to amplify unfair biases, making innate biases exponentially more harmful. Because AI often interacts with complex social systems, where correlation and causation might not be immediately clear or even easily discernible, AI practitioners need to build partnerships with community members, stakeholders, and experts to help them better understand the world they're interacting with and the implications of making mistakes. Community-based system dynamics (CBSD) is a promising participatory approach to understanding complex social systems that does just that.

Artificial Intelligence (AI) has become one of the biggest drivers of technological change, impacting industries and creating entirely new opportunities. From an engineering standpoint, AI is just a more advanced form of data engineering. Most good AI projects function more like muddy pickup trucks than spotless race cars: they are a workhorse technology that humbly makes a production line 5% safer or movie recommendations a little more on point. However, more so than with many other technologies, it is very, very easy for a well-intentioned AI practitioner to inadvertently do harm when they set out to do good. AI has the power to amplify unfair biases, making innate biases exponentially more harmful.

As Google AI practitioners, we understand that how AI technology is developed and used will have a significant impact on society for many years to come. As such, it's crucial to formulate best practices. This starts with the responsible development of the technology and mitigating any potential unfair bias which may exist, both of which require technologists to look more than one step ahead: not "Will this delivery automation save 15% on the delivery cost?" but "How will this change affect the cities where we operate, and the people (at-risk populations in particular) who live there?"

This has to be done the old-fashioned way: by human data scientists understanding the process that generates the variables that end up in datasets and models. What's more, that understanding can only be achieved in partnership with the people represented by and impacted by these variables: community members and stakeholders, such as experts who understand the complex systems that AI will ultimately interact with.

How do we actually implement this goal of building fairness into these new technologies, especially when they often work in ways we might not expect? As a first step, computer scientists need to do more to understand the contexts in which their technologies are being developed and deployed.

Despite our advances in measuring and detecting unfair bias, causation mistakes can still lead to harmful outcomes for marginalized communities. What's a causation mistake? Take, for example, the observation during the Middle Ages that sick people attracted fewer lice, which led to the assumption that lice were good for you. In actual fact, lice don't like living on people with fevers. Causation mistakes like this, where a correlation is wrongly thought to signal cause and effect, can be extremely harmful in high-stakes domains such as health care and criminal justice. AI system developers, who usually do not have social science backgrounds, typically do not understand the underlying societal systems and structures that generate the problems their systems are intended to solve. This lack of understanding can lead to designs based on oversimplified, incorrect causal assumptions that exclude critical societal factors and can lead to unintended and harmful outcomes.

For instance, the researchers who discovered that a medical algorithm widely used in U.S. health care was racially biased against Black patients identified the root cause as a mistaken causal assumption, made by the algorithm designers: that people with more complex health needs will have spent more money on health care. This assumption ignores critical factors, such as lack of trust in the health care system and lack of access to affordable care, that tend to decrease spending on health care by Black patients regardless of the complexity of their health care needs.
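The spending-as-proxy failure is easy to reproduce in a toy simulation (the numbers below are invented, not taken from the study): two groups have identical medical need, but one group's spending is suppressed by access barriers, so ranking patients by spending systematically under-selects that group.

```python
def select_top(patients, key, k):
    """Pick the k patients ranked highest by the given key."""
    return sorted(patients, key=key, reverse=True)[:k]

# Two groups with identical need; group B's spending is halved, standing in
# for access barriers and distrust. All values are invented for illustration.
patients = (
    [{"group": "A", "need": n, "spending": n * 1.0} for n in range(10)]
    + [{"group": "B", "need": n, "spending": n * 0.5} for n in range(10)]
)

by_need = select_top(patients, lambda p: p["need"], k=10)
by_spending = select_top(patients, lambda p: p["spending"], k=10)

b_selected_by_need = sum(p["group"] == "B" for p in by_need)
b_selected_by_spending = sum(p["group"] == "B" for p in by_spending)
```

Ranking by true need selects the two groups equally; ranking by the spending proxy admits fewer group-B patients even though their needs are identical.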

Researchers make this kind of causation/correlation mistake all the time. But things are worse for a deep learning system, which searches billions of possible correlations in order to find the most accurate way to predict data, and thus has billions of opportunities to make causal mistakes. Complicating the issue further, it is very hard, even with modern tools such as Shapley analysis, to understand why such a mistake was made; a human data scientist sitting in a lab with their supercomputer can never deduce from the data itself what the causation mistakes may be. This is why, among scientists, it is never acceptable to claim to have found a causal relationship in nature just by passively looking at data. You must formulate a hypothesis and then conduct an experiment in order to tease out the causation.
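The "billions of opportunities" problem can be demonstrated at a small scale: among enough purely random candidate features, some will correlate strongly with any target just by chance, which is exactly the trap a model searching for predictive signals can fall into.

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

random.seed(0)
n_samples, n_features = 100, 1000
target = [random.gauss(0, 1) for _ in range(n_samples)]

# Every candidate feature is pure noise, yet the best of 1,000 of them
# still shows a respectable-looking correlation with the target.
best = max(
    abs(pearson([random.gauss(0, 1) for _ in range(n_samples)], target))
    for _ in range(n_features)
)
```

With 1,000 noise features and only 100 samples, the strongest chance correlation typically lands in the 0.3-0.4 range; a system scanning billions of candidates has vastly more room to mistake such noise for causation.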

Addressing these causal mistakes requires taking a step back. Computer scientists need to do more to understand and account for the underlying societal contexts in which these technologies are developed and deployed.

Here at Google, we have started to lay the foundations for what this approach might look like. In a recent paper co-written by DeepMind, Google AI, and our Trust & Safety team, we argue that considering these societal contexts requires embracing the fact that they are dynamic, complex, non-linear, adaptive systems governed by hard-to-see feedback mechanisms. We all participate in these systems, but no individual person or algorithm can see them in their entirety or fully understand them. So, to account for these inevitable blind spots and innovate responsibly, technologists must collaborate with stakeholders (representatives from sociology, behavioral science, and the humanities, as well as from vulnerable communities) to form a shared hypothesis of how those systems work. This process should happen at the earliest stages of product development, even before product design starts, and be done in full partnership with the communities most vulnerable to algorithmic bias.

This participatory approach to understanding complex social systems, called community-based system dynamics (CBSD), requires building new networks to bring these stakeholders into the process. CBSD is grounded in systems thinking and incorporates rigorous qualitative and quantitative methods for collaboratively describing and understanding complex problem domains, and we've identified it as a promising practice in our research. Building the capacity to partner with communities in fair and ethical ways that provide benefits to all participants needs to be a top priority. It won't be easy. But the societal insights gained from a deep understanding of the problems that matter most to the most vulnerable in society can lead to technological innovations that are safer and more beneficial for everyone.

When communities are underrepresented in the product development and design process, they are underserved by the products that result. Right now, we're designing what the future of AI will look like. Will it be inclusive and equitable? Or will it reflect the most unfair and unjust elements of our society? The more just option isn't a foregone conclusion; we have to work towards it. Our vision for the technology is one where a full range of perspectives, experiences and structural inequities is accounted for. We work to seek out and include these perspectives in a range of ways, including human rights diligence processes, research sprints, direct input from vulnerable communities, and organizations focused on inclusion, diversity, and equity, such as WiML (Women in ML) and Latinx in AI; many such organizations, including Black in AI and Queer in AI, were also co-founded and are co-led by Google researchers.

If we, as a field, want this technology to live up to our ideals, then we need to change how we think about what we're building: to shift our mindset from "building because we can" to "building what we should." This means fundamentally shifting our focus to understanding deep problems and working to ethically partner and collaborate with marginalized communities. This will give us a more reliable view of both the data that fuels our algorithms and the problems we seek to solve. That deeper understanding could allow organizations in every sector to unlock new possibilities of what they have to offer while being inclusive, equitable and socially beneficial.


Shield AI Recognized As One of the Most Promising AI Companies – AiThority

Forbes includes emerging defense tech startup in its annual AI 50 list of companies using artificial intelligence in meaningful ways

Shield AI, the technology company focused on developing innovative AI technology to safeguard the lives of military service members and first responders, expressed its gratitude to Forbes for naming the company to the "AI 50: America's Most Promising Artificial Intelligence Companies" list for 2020. The five-year-old company has developed AI technology that enables unmanned systems to interpret signals and react autonomously in dynamic environments, including on the battlefield. Shield AI's products are already being utilized by the US Department of Defense to augment and extend service members' ability to execute complex missions.

Shield AI co-founder Brandon Tseng, who served in the U.S. Navy for seven years, including as a SEAL, said, "Following my last deployment, I came home with the strong conviction that artificial intelligence could make a profound positive impact for our service members. This was the idea that Shield AI was founded upon, and a half-decade later, we are elated to have Forbes recognize our innovation of AI technology as both promising and meaningful."


Shield AI has grown from fewer than 30 employees at the end of 2017 to nearly 150 today, while producing revenue metrics on pace with the growth trajectory of the most promising venture-backed start-ups, including doubling its revenue between 2018 and 2019. In an adjoining profile, Forbes noted that Shield AI is "in prime position to capitalize on the nascent market consisting of autonomous technology linked to national security issues."


Shield AI has developed three cutting-edge products for its range of customers, spanning both software and systems. Its Nova quadcopter is an unmanned, artificially intelligent robotic system which can autonomously explore and map complex real-world environments without reliance on GPS or a human operator. Nova is powered by Hivemind Edge, the company's intelligent software stack that enables machines to execute complex, unscripted tasks in denied and dynamic environments without direct operator inputs. The application is edge-deployed, with all processing and computation occurring without relying on a central intelligence hub, a critical need in environments lacking stable communications. The second software product, Hivemind Core, integrates data management and analysis, scalable simulation, and self-directed learning in order to radically accelerate product development workflows.

In the coming months, Shield AI will unveil a second generation Nova quadcopter aimed at bringing the power of resilient AI systems to an even wider array of mission sets, coupled with the ability to partner in real-time with operators to navigate tunnels beneath the earth and multi-level structures.


Why AI Is The Perfect Drinking Buddy For The Alcoholic Beverage Industry – Analytics India Magazine

The use of AI-driven processes to increase efficiency in the F&B market is no longer an anomaly. A host of breweries and distilleries have incorporated the technology not only to develop flavour profiles faster, but also for other functions, including packaging, marketing, and ensuring they meet all food-safety regulations.

Although the intention is not to replace the brewmaster or distiller, the technology becomes a thrilling learning experiment that equips them with multiple data points that could help them come up with innovative ideas.

Here is a list of companies that have successfully blended technology into their beverages to make a heady cocktail:

IntelligentX claims to be the world's first company to use AI algorithms and machine learning to create innovative beers that adapt to users' taste preferences. Based on customer feedback, the recipe for its brews goes through multiple iterations to generate various combinations. IntelligentX currently has four varieties: Black AI, Golden AI, Pale AI, and Amber AI.

How does it work?

Codes printed on the cans direct customers to the Facebook Messenger app, where they are asked to give feedback on the beer they tried by answering a series of 10 questions. The data points gathered are then fed into an AI algorithm to spot trends and inform the overall brewing process. Using the feedback, the AI also learns to ask better questions each time to get better outcomes.
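The loop described above, survey scores flowing into an algorithm that nudges the next recipe, can be sketched as follows. This is a minimal illustration with made-up question names, parameters, and a simple update rule; it is not IntelligentX's actual system.

```python
# Hypothetical sketch of a feedback-driven recipe update loop.
from statistics import mean

def aggregate_feedback(responses):
    """Average each question's 1-5 scores across all customer surveys."""
    questions = responses[0].keys()
    return {q: mean(r[q] for r in responses) for q in questions}

def adjust_recipe(recipe, scores, learning_rate=0.1):
    """Nudge recipe parameters toward what customers rated highly.

    Scores below the neutral midpoint (3) pull a parameter down;
    scores above it push the parameter up.
    """
    adjusted = dict(recipe)
    adjusted["hops_g_per_l"] *= 1 + learning_rate * (scores["bitterness"] - 3) / 2
    adjusted["malt_g_per_l"] *= 1 + learning_rate * (scores["sweetness"] - 3) / 2
    return adjusted

responses = [
    {"bitterness": 2, "sweetness": 4},   # one customer's survey answers
    {"bitterness": 1, "sweetness": 5},
]
scores = aggregate_feedback(responses)
new_recipe = adjust_recipe({"hops_g_per_l": 4.0, "malt_g_per_l": 180.0}, scores)
# Low bitterness ratings reduce the hops; high sweetness ratings raise the malt.
```

The point of the sketch is only that each batch's feedback moves the parameters a small step, so the recipe iterates toward customer preferences over many brews.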

Although the insights gathered give brewmasters a better window into customer preferences, the final decision on whether to heed the AI's recommendations for a fresh brew rests with them. What is certain is that without technological intervention, such a large collection of data would be both difficult and extremely time-consuming to process.

Multi-award-winning Swedish distillery Mackmyra Whisky collaborated with Microsoft and Finnish tech company Fourkind to create the world's first AI-generated whisky. Fourkind's AI solution, built with Microsoft Azure and Machine Learning Studio, was fed Mackmyra's existing recipes and customer feedback data to create thousands of different recipes.

Following this, the distillery's master blender Angela D'Orazio used her experience to review which ingredients would work well together, filtering the recipes down to more desirable combinations. As this process was repeated, the AI algorithm learned which combinations worked best and, using machine learning, began producing more desirable mixes. Eventually, D'Orazio narrowed the field to five recipes, finally arriving at recipe number 36, which became the world's first AI-generated whisky to go into production.

This AI-generated but human-curated whisky has opened the doors to new and innovative combinations that might otherwise never have been discovered. Dubbed Intelligens, the first batch of the blend was launched in September 2019.

Copenhagen-based brewery Carlsberg started a multimillion-dollar project in 2017 to analyse the different flavours in its beer using AI. Unlike IntelligentX, which uses customer feedback to improve its brew, Carlsberg has developed a taste-sensing platform that identifies the differential elements of the flavours.

Under the ongoing Beer Fingerprinting Project, 1,000 different beer samples are created each day. With the help of advanced sensors, the flavour fingerprint of each sample is determined. Different yeasts are then analysed to map the flavours and make distinctions between them. The data collected by this AI-powered system could ultimately be used to develop new varieties of brews.
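One simple way to picture a flavour "fingerprint" is as a vector of sensor readings, with samples assigned to whichever known strain's fingerprint they sit closest to. The sketch below uses invented sensor dimensions and numbers purely for illustration; Carlsberg's actual platform is not public.

```python
# Toy nearest-centroid classification of flavour fingerprints
# (hypothetical sensor dimensions and values).
import math

def distance(a, b):
    """Euclidean distance between two fingerprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample, centroids):
    """Assign a sample to the nearest known strain fingerprint."""
    return min(centroids, key=lambda strain: distance(sample, centroids[strain]))

# Assumed (ester, sulphur, phenol) intensity profiles per yeast strain:
centroids = {
    "lager_yeast": [0.2, 0.8, 0.1],
    "ale_yeast":   [0.9, 0.1, 0.3],
}
sample = [0.85, 0.15, 0.25]          # a new sample off the sensor line
print(classify(sample, centroids))   # -> ale_yeast
```

With thousands of samples a day, the same idea scales up: cluster the fingerprints, and samples that land far from every known cluster flag either a process fault or a genuinely new flavour profile worth exploring.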

Launched in collaboration with Microsoft, Aarhus University and the Technical University of Denmark, the project marked a shift from conventional practices that did not involve any technology.

AB InBev, the brewer of Budweiser and Corona, has also jumped on the AI bandwagon to shake up its business, investing in a slew of initiatives to improve how it brews beer. The Beer Garage is one such initiative: sitting at the intersection of the startup ecosystem and the AB InBev business, it focuses on developing technology-driven solutions. ZX Ventures, another offshoot of the larger business, was launched in 2015 with the objective of creating new products that address consumer needs.

Anchored around these enterprises, AB InBev is using machine learning to stay ahead of the curve across three broad areas of its business.

Sugar Creek Brewing (SCB), a maker of Belgian-inspired ales, has begun integrating AI and IoT into its brewing process to improve both the quality of its beer and its manufacturing process. It started when a significant problem came to light at the packaging stage.

When the beer was loaded into bottles, the fill level was inconsistent. Another problem was excessive foaming inside the bottles, which spiked the oxygen levels in the beer, a known cause of ruined flavour and reduced shelf life.

SCB partnered with IBM, which installed a camera at the brewery's warehouse to photograph the beer as it crossed the bottling line. The images, combined with other data collected during the packaging operations, were uploaded to the cloud by IBM's team of engineers. Brewers at SCB also provided specific criteria they found useful, and Watson algorithms were then left to interpret the large amount of data quickly and solve the problem. From losing more than $30,000 a month in beer spillage, SCB found a solution by building AI and IoT into its brewing processes.
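The core of the vision check is conceptually simple: scan a vertical strip of pixels in each bottle image from the top down, and the first dark pixel marks the liquid line. The sketch below uses assumed brightness values and tolerances; the actual IBM/Watson pipeline is far more involved.

```python
# Simplified fill-level check from a vertical strip of pixel intensities
# (hypothetical thresholds and spec values).
def fill_level(column, dark_threshold=80):
    """Return the filled fraction of the bottle: scan top-down until the
    bright glass/headspace gives way to dark liquid."""
    for i, intensity in enumerate(column):
        if intensity < dark_threshold:     # first dark pixel = liquid line
            return 1 - i / len(column)
    return 0.0                             # no liquid found

def within_spec(column, target=0.9, tolerance=0.03):
    """Flag bottles whose fill level drifts outside the allowed band."""
    return abs(fill_level(column) - target) <= tolerance

# 10-pixel strip: one bright pixel of headspace, then beer all the way down.
column = [200, 60, 55, 50, 50, 50, 48, 45, 44, 40]
print(fill_level(column))    # -> 0.9
print(within_spec(column))   # -> True
```

In production the thresholds would be calibrated per bottle and lighting setup, and out-of-spec bottles would trigger an adjustment upstream rather than just a flag.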

See the original post:

Why AI Is The Perfect Drinking Buddy For The Alcoholic Beverage Industry - Analytics India Magazine

Don't leave it up to the EU to decide how we regulate AI – City A.M.

The war of words between Britain and the EU has begun ahead of next month's trade talks.

But as Britain sets its own course on everything from immigration to fishing, there is one area where the battle for influence is only just kicking off: the future regulation of artificial intelligence.

As AI becomes a part of our everyday lives, from facial recognition software to the use of black-box algorithms, the need for regulation has become more apparent. But around the world, there is vigorous disagreement about how to do it.

Last Wednesday, the EU set out its approach in a white paper, proposing regulations on AI in line with European values, ethics and rules. It outlined a tough legal regime, including pre-vetting and human oversight, for high-risk AI applications in sectors such as medicine and a voluntary labelling scheme for the rest.

In contrast, across the Atlantic, Donald Trump's White House has so far taken a light-touch approach, publishing 10 principles for public bodies designed to ensure that regulation of AI doesn't needlessly get in the way of innovation.

Britain has yet to set out its own approach, and we must not be too late to the party. If we are, we may lose the opportunity to shape rules that will impact our own industry for decades to come.

This matters because AI firms, the growth generators of the future, can choose where to locate and which market to target, and will do so partly based on the regulations which apply there.

Put simply, the regulation of AI is too important for Britain's future prosperity to leave it up to the EU, or anyone else.

That doesn't mean a race to the bottom. Regulation is meaningless if it is so lax that it doesn't prevent harm. But if we get it right, Britain will be able to maintain its position as the technology capital of Europe, as well as setting thoughtful standards that guide the rest of the western world.

So what should a British approach to AI regulation look like?

It is tempting for our legislators to simply give legal force to some of the many vague ethical codes currently floating around the industry. But the lack of specificity of these codes means that they would result in heavy-handed blanket regulation, which could have a chilling effect on innovation.

Instead, the aim must be to ensure that AI works effectively and safely, while giving companies space to innovate. With that in mind, we have created four principles which we believe a British approach to AI regulation should be designed around.

The first is that regulations should be context-specific. AI is not one technology, and it cannot be governed as such. Medical algorithms and recommender algorithms, for example, are both likely to be regulated, but to differing extents because of the impact of their outcomes: the consequences of a diagnostic error are far greater than those of an algorithm pushing an irrelevant product advert into your social media feed.

Our second principle is that regulation must be precise; it should not be left up to tech companies themselves to interpret.

Fortunately, the latest developments in AI research, including some which we are pioneering at Faculty, allow for analysis of an algorithm's performance across a range of important dimensions: accuracy (how good is an AI tool at doing its job?); fairness (does it have implicit biases?); privacy (does it leak people's data?); robustness (does it fail unexpectedly?); and explainability (do we know how it is working?).

Regulators should set out precise thresholds for each of these according to the context in which the AI tool is deployed. For instance, an algorithm which hands out supermarket loyalty points might be measured only on whether it is fair and protects personal data, whereas one making clinical decisions in a hospital would be required to reach better-than-human-average standards in every area.
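The two examples above, a light bar for loyalty schemes and a demanding one for clinical tools, amount to a per-context table of thresholds plus a compliance check. The sketch below encodes that idea with illustrative numbers; it is not a real regulatory schema.

```python
# Context-specific compliance thresholds (illustrative values only).
THRESHOLDS = {
    "loyalty_points": {"fairness": 0.80, "privacy": 0.90},
    "clinical":       {"accuracy": 0.99, "fairness": 0.95, "privacy": 0.99,
                       "robustness": 0.95, "explainability": 0.90},
}

def compliant(context, measured):
    """Compare a tool's measured scores against its context's thresholds.

    Returns (passed, list_of_failing_dimensions). Dimensions the context
    does not regulate are simply not checked.
    """
    required = THRESHOLDS[context]
    failures = [dim for dim, threshold in required.items()
                if measured.get(dim, 0.0) < threshold]
    return (len(failures) == 0, failures)

ok, failures = compliant("loyalty_points", {"fairness": 0.85, "privacy": 0.95})
print(ok)        # -> True: only fairness and privacy are checked

ok, failures = compliant("clinical", {"accuracy": 0.97, "fairness": 0.96,
                                      "privacy": 0.99, "robustness": 0.96,
                                      "explainability": 0.92})
print(failures)  # -> ['accuracy']: every dimension is checked, one falls short
```

The design point is that precision lives in the numbers, not in vague principles: a regulator publishes the table, and compliance becomes a measurable property rather than a matter of interpretation.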

The third principle is that regulators must balance transparency with trust. For example, they might publish one set of standards for supermarket loyalty programmes, and another for radiology algorithms. Each would be subject to different licensing regimes: a light-touch one for supermarkets, and a much tougher inspection regime for hospitals.

Finally, regulators will need to equip themselves with the skills and know-how needed to design and manage this regime. That means having data scientists and engineers who can look under the bonnet of an AI tool, as well as ethicists and economists. They will also need the powers to investigate any algorithm's performance.

These four principles offer the basis for a regulatory regime precise enough to be meaningful, nuanced enough to permit innovation, and robust enough to retain public trust.

We believe they offer a pragmatic guide for the UK to chart its own path and lead the debate about the future of the AI industry.

Main image credit: Getty

Read more:

Don't leave it up to the EU to decide how we regulate AI - City A.M.

How to Keep Your AI From Turning Into a Racist Monster – WIRED

Image: Getty Images

Working on a new product launch? Debuting a new mobile site? Announcing a new feature? If you're not sure whether algorithmic bias could derail your plan, you should be.

About

Megan Garcia (@meganegarcia) is a senior fellow and director of New America California, where she studies cybersecurity, AI, and diversity in technology.

Algorithmic bias, when seemingly innocuous programming takes on the prejudices of its creators or of the data it is fed, causes everything from warped Google searches to qualified women being barred from medical school. It doesn't take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for.

It took one little Twitter bot to make the point to Microsoft last year. Tay was designed to engage with people ages 18 to 24, and it burst onto social media with an upbeat "hellllooooo world!!" (the "o" in "world" was a planet Earth emoji). But within 12 hours, Tay morphed into a foul-mouthed racist Holocaust denier that said feminists "should all die and burn in hell." Tay, which was quickly removed from Twitter, was programmed to learn from the behaviors of other Twitter users, and in that regard, the bot was a success. Its embrace of humanity's worst attributes is algorithmic bias in action.

Tay represents just one example of algorithmic bias tarnishing tech companies and some of their marquee products. In 2015, Google Photos tagged several African-American users as gorillas, and the images lit up social media. Yonatan Zunger, Google's chief social architect and head of infrastructure for Google Assistant, quickly took to Twitter to announce that Google was scrambling a team to address the issue. And then there was the embarrassing revelation that Siri didn't know how to respond to a host of health questions that affect women, including, "I was raped. What do I do?" Apple took action to handle that as well, after a nationwide petition from the American Civil Liberties Union and a host of cringe-worthy media attention.

One of the trickiest parts about algorithmic bias is that engineers don't have to be actively racist or sexist to create it. In an era when we increasingly trust technology to be more neutral than we are, this is a dangerous situation. As Laura Weidman Powers, founder of Code2040, which brings more African Americans and Latinos into tech, told me, "We are running the risk of seeding self-teaching AI with the discriminatory undertones of our society in ways that will be hard to rein in, because of the often self-reinforcing nature of machine learning."

As the tech industry builds artificial intelligence, it risks inserting racism and other prejudices into code that will make decisions for years to come. And as deep learning means that code, not humans, will write code, there's an even greater need to root out algorithmic bias. Here are four things tech companies can do to keep their developers from unintentionally writing biased code or using biased data.

The first is lifted from gaming. League of Legends used to be besieged by claims of harassment until a few small changes caused complaints to drop sharply. The game's creator empowered players to vote on reported cases of harassment and decide whether a player should be suspended. Players who are banned for bad behavior are also now told why they were banned. Not only have incidents of bullying dramatically decreased, but players report that they previously had no idea how their online actions affected others. Now, instead of coming back and saying the same horrible things again and again, their behavior improves. The lesson is that tech companies can use these community-policing models to attack discrimination: build creative ways for users to find it and root it out.

Second, hire the people who can spot the problem before launching a new product, site, or feature. Put women, people of color, and others who tend to be affected by bias and are generally underrepresented on tech companies' development teams. They'll be more likely to feed algorithms a wider variety of data and to spot code that is unintentionally biased. Plus, there is a trove of research showing that diverse teams create better products and generate more profit.

Third, allow algorithmic auditing. Recently, a Carnegie Mellon research team unearthed algorithmic bias in online ads. When they simulated people searching for jobs online, Google ads showed listings for high-income jobs to men nearly six times as often as to equivalent women. The Carnegie Mellon team believes internal auditing to beef up companies' ability to reduce bias would help.
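An audit of the kind the Carnegie Mellon team ran boils down to simulating equivalent profiles that differ only in the protected attribute, counting who is shown the ad, and comparing the rates. The counts below are illustrative, chosen to echo the roughly six-fold disparity reported; they are not the actual study data.

```python
# Minimal ad-audit sketch: compare ad-impression rates across two groups
# of otherwise-equivalent simulated profiles (illustrative counts).
def ad_rate(shown, total):
    """Fraction of simulated profiles that were shown the ad."""
    return shown / total

male_rate = ad_rate(shown=1852, total=10000)
female_rate = ad_rate(shown=318, total=10000)

# The disparity ratio is the headline number an auditor would report.
ratio = male_rate / female_rate
print(round(ratio, 1))  # -> 5.8, i.e. roughly "six times as often"
```

A real audit would add a statistical significance test and control for every other profile attribute, but the skeleton is the same: equivalent inputs, differing outcomes, a measured ratio.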

Fourth, support the development of tools and standards that could get all companies on the same page. In the next few years, there may be a certification for companies actively and thoughtfully working to reduce algorithmic discrimination. Now we know that water is safe to drink because the EPA monitors how well utilities keep it contaminant-free. One day we may know which tech companies are working to keep bias at bay. Tech companies should support the development of such a certification and work to get it when it exists. Having one standard will both ensure sectors sustain their attention to the issue and give credit to the companies using commonsense practices to reduce unintended algorithmic bias.

Companies shouldn't wait for algorithmic bias to derail their projects. Rather than clinging to the belief that technology is impartial, engineers and developers should take steps to ensure they don't accidentally create something that is just as racist, sexist, and xenophobic as humanity has shown itself to be.

Read the original:

How to Keep Your AI From Turning Into a Racist Monster - WIRED

Blue-Collar Revenge: The Rise Of AI Will Create A New Professional Class – Forbes


New, more-modern manufacturing processes, including the use of robots, have gutted the number of high-paying factory jobs in the U.S. and caused economic angst in large portions of the country. The movement of manufacturing plants overseas has ...

Here is the original post:

Blue-Collar Revenge: The Rise Of AI Will Create A New Professional Class - Forbes

Google crams machine learning into smartwatches in AI push – CIO

Google is bringing artificial intelligence to a whole new set of devices, including Android Wear 2.0 smartwatches and the Raspberry Pi board, later this year.

Notably, these devices don't require powerful CPUs and GPUs to carry out machine-learning tasks.

Google researchers are instead trying to lighten the hardware load needed to carry out basic AI tasks, as exhibited by last week's release of the Android Wear 2.0 operating system for wearables.


Google has added some basic AI features to smartwatches with Android Wear 2.0, and those features can work within the limited memory and CPU constraints of wearables.

Android Wear 2.0 has a "smart reply" feature, which provides basic responses to conversations. It works much like a predictive dictionary, but it can auto-reply to messages based on the context of the conversation.

Google uses a new way to analyze data on the fly without bogging down a smartwatch. In conventional machine-learning models, a lot of data needs to be classified and labeled to provide accurate answers. Instead, Android Wear 2.0 uses a "semi-supervised" learning technique to provide approximate answers.

"We're quite surprised and excited about how well it works even on Android wearable devices with very limited computation and memory resources," Sujith Ravi, staff research scientist at Google said in a blog entry.

For example, the slimmed-down machine-learning model can classify a few words, based on sentiment and other clues, and create an answer. The model uses a streaming algorithm to process data, and it provides trained responses that also factor in previous interactions, word relationships, and vector analysis.

The process is faster because the data is analyzed and compared based on bit arrays, or in the form of 1s and 0s. That helps analyze data on the fly, which tremendously reduces the memory footprint. It doesn't go through the conventional process of referring to rich vocabulary models, which require a lot of hardware. The AI feature is not intended for sophisticated answers or analysis of a large set of complex words.
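The bit-array idea can be illustrated with generic feature hashing: project a message's words onto a small fixed-size bit pattern, then compare patterns by Hamming distance instead of consulting a large vocabulary model. This is a standard hashing sketch under assumed details (a 64-bit array, word-level MD5 hashing, canned replies); it is not Google's actual algorithm.

```python
# Generic feature-hashing sketch of on-device reply selection:
# messages become small bit arrays, and the closest stored pattern wins.
import hashlib

BITS = 64

def to_bits(text):
    """Project a message onto a fixed-size bit array via word hashing."""
    pattern = 0
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        pattern |= 1 << (h % BITS)
    return pattern

def hamming(a, b):
    """Number of differing bits between two patterns."""
    return bin(a ^ b).count("1")

# Hypothetical canned replies keyed by the bit pattern of a typical prompt.
replies = {
    "sounds good, see you then": to_bits("dinner tonight at seven"),
    "congratulations!":          to_bits("we got the new job offer"),
}

def smart_reply(message):
    """Pick the canned reply whose stored pattern is nearest the message."""
    m = to_bits(message)
    return min(replies, key=lambda r: hamming(m, replies[r]))

print(smart_reply("want dinner tonight"))  # -> sounds good, see you then
```

The memory savings come from the representation: comparing 64-bit integers with XOR and a popcount is cheap enough for a watch, whereas a full vocabulary model is not.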

The feature can be used with third-party messaging apps, the researchers noted. It is loosely based on the same smart-reply technology in Google's Allo messaging app, which is built from the company's Expander set of semi-supervised learning tools.

The Android Wear team originally reached out to Google's researchers and expressed an interest in implementing the "smart reply" technology directly in smart devices, Ravi said.

AI is becoming pervasive in smartphones, PCs, and electronics like Amazon's Echo Dot, but it largely relies on machine learning taking place in the cloud. There, models are trained to recognize images or speech, and conventional machine learning relies on algorithms, superfast hardware, and a huge amount of data to deliver accurate answers.

Google's technology differs from Qualcomm's rough implementation of machine learning in mobile devices, which hooks up algorithms to digital signal processors (DSPs) for image recognition or natural language processing. Qualcomm has tuned the DSPs in its upcoming Snapdragon 835 to process speech and images at higher speeds, so AI tasks are carried out faster.

Google has an ambitious plan to apply machine learning across its entire business. The Google Assistant, which is also in Android Wear 2.0, is its most visible AI, spanning smartphones, TVs, and other consumer devices. The company also offers TensorFlow, an open-source machine-learning framework, and has built its own inferencing chip, the Tensor Processing Unit.

Originally posted here:

Google crams machine learning into smartwatches in AI push - CIO

Researchers have figured out how to fake news video with AI – Quartz

If you thought the rampant spread of text-based fake news was as bad as it could get, think again. Generating fake news videos that are indistinguishable from real ones is growing easier by the day.

A team of computer scientists at the University of Washington has used artificial intelligence to render visually convincing videos of Barack Obama saying things he's said before, but in a totally new context.

In a paper published this month, the researchers explained their methodology: using a neural network trained on 17 hours of footage of the former US president's weekly addresses, they were able to generate mouth shapes from arbitrary audio clips of Obama's voice. The shapes were then textured to photorealistic quality and overlaid onto Obama's face in a different target video. Finally, the researchers retimed the target video to move Obama's body naturally to the rhythm of the new audio track.

This isn't the first study to demonstrate the modification of a talking head in a video. As Quartz's Dave Gershgorn previously reported, in June of last year Stanford researchers published a similar methodology for altering a person's pre-recorded facial expressions in real time to mimic the expressions of another person making faces into a webcam. The new study, however, adds the ability to synthesize video directly from audio, effectively generating a higher dimension from a lower one.

In their paper, the researchers pointed to several practical applications of being able to generate high quality video from audio, including helping hearing-impaired people lip-read audio during a phone call or creating realistic digital characters in the film and gaming industries. But the more disturbing consequence of such a technology is its potential to proliferate video-based fake news. Though the researchers used only real audio for the study, they were able to skip and reorder Obamas sentences seamlessly and even use audio from an Obama impersonator to achieve near-perfect results. The rapid advancement of voice-synthesis software also provides easy, off-the-shelf solutions for compelling, falsified audio.

There is some good news. Right now, the effectiveness of this video synthesis technique is limited by the amount and quality of footage available for a given person. Currently, the paper noted, the AI algorithms require at least several hours of footage and cannot handle certain edge cases, like facial profiles. The researchers chose Obama as their first case study because his weekly addresses provide an abundance of publicly available high-definition footage of him looking directly at the camera and adopting a consistent tone of voice. Synthesizing videos of other public figures that dont fulfill those conditions would be more challenging and require further technological advancement. This buys time for technologies that detect fake video to develop in parallel. As The Economist reported earlier this month, one solution could be to demand that recordings come with their metadata, which show when, where and how they were captured. Knowing such things makes it possible to eliminate a photograph as a fake on the basis, for example, of a mismatch with known local conditions at the time.

But as the doors for new forms of fake media continue to fling open, it will ultimately be left to consumers to tread carefully.

Read more from the original source:

Researchers have figured out how to fake news video with AI - Quartz

Exclusive: Eshoo On AI, Cybersecurity And Kicking America Off The China Drug Habit – Forbes

WASHINGTON, DC - Chairman Rep. Anna Eshoo (D-Calif.) is seen during a House Energy and Commerce Subcommittee on Health hearing to discuss protecting scientific integrity in response to the coronavirus outbreak on Thursday, May 14, 2020, in Washington, DC. (Photo by Greg Nash-Pool/Getty Images)

Congresswoman Anna Eshoo (CA-18) was first elected to Congress in 1992. She has served on the Energy and Commerce Committee since 1995 with a focus on health and technology. Last year she became the first woman ever to serve as Chair of the Health Subcommittee. She has authored 41 bills signed into law by four presidents. I was able to speak to Congresswoman Eshoo about her recent accomplishment and her agenda for AI, cybersecurity, and medical supply chains.

RL: Representative Eshoo, thank you for your bipartisan leadership in creating a national strategy to end dependence on foreign manufacturing of lifesaving drugs. What is the status of this bill? What can your experience with pharmaceutical supply-chain security teach us about other critical areas for supply-chain security, such as information technology?

I've championed the need to address our nation's overreliance on the foreign production of critical drugs in Congress. Last September, I co-authored a Washington Post op-ed about our dangerous and troubling reliance on China for the manufacturing of drugs and their ingredients. Soon after, I held a hearing in my Health Subcommittee about the consequences and complications of our global drug supply chain. On May 1st I introduced bipartisan legislation, the Prescription for American Drug Independence Act, which requires the National Academies of Sciences, Engineering, and Medicine to convene a committee of experts to analyze the impact of U.S. dependence on the manufacturing of lifesaving drugs and make recommendations to Congress within 90 days to ensure the U.S. has a diverse drug supply chain that adequately protects our country from natural or hostile occurrences. The legislation was included in the House-passed Heroes Act, and I look forward to the Senate taking it up.

You are correct to note that an overreliance on China is not unique to the drug supply chain. For a decade I've raised how the vulnerabilities in our telecommunications infrastructure directly impact our national security. On November 2, 2010, I wrote to the FCC expressing grave concerns about Huawei and ZTE, which have opaque entanglements with the Chinese government. Sadly, in the intervening decade Huawei and ZTE equipment has proliferated across our country because it's cheap, due to Chinese government subsidies. We've passed several important measures this Congress that I'm proud to support, including measures to create a mechanism for the federal government to exclude Huawei and ZTE equipment from our networks and to establish a program to rip and replace existing equipment made by the companies.

RL: It was great to see bipartisan and bicameral support for the National AI Research Resource Task Force Act under your leadership. These are much-needed policy measures. What else do we need to do on this front? What are your objectives in this area for the next Congress?

I'm very proud of the smart tech-related provisions in the House-passed H.R. 6395, the William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, or the NDAA.

The Global AI Index indicates that the U.S. is ahead of China in the global AI race today, but experts predict China will overtake the U.S. in just five to 10 years. I'm pleased that the NDAA includes several important AI efforts, including my bipartisan and bicameral legislation, H.R. 7096, the National AI Research Resource Task Force Act, which establishes a task force to develop a roadmap for a national AI research cloud to make available the high-powered computing, large data sets, and educational resources necessary for AI research.

You ask what else is needed in addition to these provisions. In AI, the answer is federal R&D funding. Earlier this year, I wrote to the House Appropriations Committee urging them to allocate robust funding for nondefense AI R&D, and seventeen of my House colleagues joined my letter. This funding is an important investment in our country's future and must be a priority.

On cybersecurity, I'm pleased the NDAA included a number of recommendations from the Cyberspace Solarium Commission, which Congress established in last year's NDAA. Cybersecurity must be a top priority for every company and for government. It is a domain that works best when companies, researchers, and government work hand in hand. Unfortunately, cybersecurity efforts operate in silos across the private sector and within government. We need coordination. It's for this reason I cosponsored legislation to establish a centralized cybersecurity coordinator, the National Cyber Director, in the White House.

A gap I see is the cybersecurity of what I call small organizations: small businesses, nonprofits, and local governments that are too small to ever employ a cybersecurity professional and may never have the budget to pay for security services. While 50-page technical and legalistic government documents are critical for cybersecurity teams within large organizations, they are too dense for small business owners, executive directors of nonprofits, and city managers of small municipalities. I'm currently drafting legislation to address this issue that should be ready to introduce shortly.

I was also pleased the House adopted an amendment I cosponsored that is based on the CHIPS for America Act, which will restore American leadership in semiconductor manufacturing. In the House, I represent much of Silicon Valley, a region that gets its name from the material used to make semiconductors. While the technology sector has evolved to include much more than semiconductor manufacturing, it remains the foundation of one of the most vibrant parts of our economy. Our military's dependence on semiconductor manufacturing is why it's a national security priority, and I'm hopeful that the CHIPS for America Act will be enacted into law as soon as possible.

RL: In my own research, I have uncovered that California's state government itself has set up purchasing agreements with Chinese-government-owned firms like Lenovo, Lexmark, and others. As you well know, the Chinese government asserts its right to collect any data on any Chinese-made device anywhere for any reason. China has been building a database on Americans since 2015. Having Chinese-owned equipment in state government is a risk, particularly around elections. In any event, it appears that these contracts have been set up by procurement officers who are not aware of the security risks. I attribute this to the lack of communication between the federal government and the states themselves. How could Congress engage constructively with states to help them improve their privacy and security practices in this regard?

You raise a number of highly important points. When it comes to evolving technologies, thinking about privacy and security is critical at every step of policymaking and at every level of government. Laws and regulations need to require privacy and security. Vendor selection should always consider privacy and cybersecurity, especially when issues intersect with national security. And governmental oversight needs to review privacy and security issues.

The federal government must share threat and vulnerability information more reliably. We can't expect every procurement manager in every municipal government to be aware of the national security concerns related to routers, modems, printers, and the myriad other internet-connected devices and electronics. National security is the domain of the federal government. In addition to protecting individual Americans, the federal government's responsibility includes protecting our governmental (federal, state, and local) and economic interests.

RL: Thank you, Congresswoman Eshoo.

See original here:

Exclusive: Eshoo On AI, Cybersecurity And Kicking America Off The China Drug Habit - Forbes

AR, VR, Autonomy, Automation, Healthcare: Whats Hot In AI Right Now – Forbes

A city at night

AI is in the social network you chat on, the engine you search with, the word processor you write with, and the camera you take pictures with. But what's growing fastest in artificial intelligence?

One clue is where the Fortune 50 are placing their AI bets.

And one big tell is what they need training data for.

"Training data is really the basis for AI," Wendy Gonzalez, the president and CEO of Samasource, told me in a recent TechFirst podcast. "At the end of the day, machines need to learn how to speak, see, and hear. And they do so much like a human learns how to speak, see, and hear."

Samasource creates training data (the labeled, structured data that teaches a machine or a computer how to do these things) for a quarter of the Fortune 50, including top global tech giants like Google and Microsoft. Walmart and GE are customers, as is Nvidia, which makes AI chips that power much of the world's artificial intelligence. So are automotive giants like Volkswagen and Ford.

That training ranges from as simple as "this shape is a car" to "this is a Louis Vuitton Deauville Mini handbag." But it's vital for multiple fields.

So what are the hottest areas that Samasource is getting training data requests for?

"We see a lot of growth in AR/VR," Gonzalez says. "And this could really include everything, it's everything from faces, shirts, shoes, you name it, furniture ... we're also seeing a lot of really interesting growth ... in e-commerce. So a lot of things I would describe as visual search: how do you actually look up something and detect whether it's a plaid shirt, as an example."

Delivery robots also need to know what a sidewalk is, what people and pets look like, how to navigate getting down off a sidewalk and onto a road, and what grass, trees, and bushes look like. Autonomous vehicles need to not just know what a road looks like, and what white or yellow painted lines mean, but also how to recognize a parking space, an upside-down car that might have been involved in an accident ... and all of it in various moderate to extreme weather conditions.

And they have to be able to recognize those objects both with visible light and LIDAR or radar.

The challenge for most AI training data is edge cases, Gonzalez says.

"Imagine if you had a representation of hundreds of thousands of vehicles, but only like 10 motorcycles," she told me. "Then you've got immediately kind of an inherent bias, and so you have to really worry about not just to get the quality data, but do you have the right and most comprehensive representative data."
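Gonzalez's motorcycle example can be made concrete: a label set's class distribution reveals under-represented classes before a model is ever trained on them. A minimal sketch (not Samasource's tooling; the 5 percent threshold is an arbitrary choice for illustration):

```python
from collections import Counter

def class_balance(labels):
    """Return each label's share of the dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

def underrepresented(labels, threshold=0.05):
    """Labels whose share falls below the threshold -- candidates for more data collection."""
    return [label for label, share in class_balance(labels).items() if share < threshold]

# A toy dataset mirroring the example above: many cars, very few motorcycles.
labels = ["car"] * 990 + ["motorcycle"] * 10
print(underrepresented(labels))  # ['motorcycle'] -- only 1% of the labels
```

A real pipeline would run this kind of audit per attribute (vehicle type, weather, lighting) rather than on a single flat label set.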

Interestingly, Samasource isn't just working on the AI projects that you'd think. Self-driving cars and delivery robots are fairly obvious applications, after all.

But Gonzalez says AI is getting pervasive across many more domains.

"We've worked on everything from sustainable fishing, to reducing elephant poaching, to financial services classification," Gonzalez says. "We definitely see a lot in healthcare. There's an incredible amount that can be done in healthcare and life sciences."

Get a full transcript of our conversation here.

More funding for AI cybersecurity: Darktrace raises $75M at an $825M valuation – TechCrunch

With cybercrime projected to reap some $6 trillion in damages by 2021, and businesses likely to invest around $1 trillion over the next five years to try to mitigate that, we're seeing a rise of startups that are building innovative ways to combat malicious hackers.

In the latest development, Darktrace, a cybersecurity firm that uses machine learning to detect and stop attacks, has raised $75 million, giving the startup a post-money valuation of $825 million, on the back of a strong business: the company said it has a total contract value of $200 million and 3,000 global customers, and has grown 140 percent in the last year.

The funding will be used to expand the company's business operations into more markets. Notably, Darktrace also separately announced today that it is now in a strategic partnership with Hong Kong-based CITIC Telecom CPC, a telecoms firm serving China and other parts of Asia, to bring "next-generation cyber defense" to businesses across Asia Pacific.

We have confirmed that CITIC, which owns the strategic partner, is not investing as part of this partnership. "CITIC CPC is not an investor," a spokesperson for Darktrace confirmed. "It was a Darktrace customer and, impressed by the fundamental power of the AI technology, decided to enter into a strategic partnership to expand its reach." Other telcos that work with Darktrace include BT in the UK and Australia's Telstra.

This latest round, a Series D, was led by Insight Venture Partners, with existing investors Summit Partners, KKR and TenEleven Ventures also participating. Darktrace, which is also backed by Autonomy's Mike Lynch, was founded in the UK and is now co-based in Cambridge and San Francisco. This round of funding brings the total raised by Darktrace to just under $180 million.

IT security has been around for as long as we have even had a concept of IT, but a wave of new threats, such as polymorphic malware that changes its profile as it attacks, plus the ubiquity of networked and cloud-based services, has rendered many of the legacy antivirus and other systems obsolete, simply unable to cope with what's being thrown at organisations and the individuals that are a part of them.

Darktrace is part of the new guard of firms built around the concept of using artificial intelligence both to help security specialists identify and stop malicious attacks and to act on its own to automatically detect and stop threats.

Other security startups built on using AI include Hexadite, acquired by Microsoft for around $100 million last month, which, like Darktrace, works in the area of remediation, both identifying and relaying information about attacks to specialists and stopping some itself; Crowdstrike, which raised a large round of funding in May at a billion-dollar valuation; Cylance, also valued at more than $1 billion; Harvest AI, which Amazon quietly acquired last year; and Illumio, a provider of segmented security solutions that raised $125 million earlier this year.

Darktrace's system is based on an appliance it calls the Enterprise Immune System that, as we have noted before, sits on a company's network and listens to what's going on. The "immune system" in its name is a reference to the human immune system, which (when healthy) develops immunity to viruses by being exposed to them in small doses. Darktrace's system is designed to identify malicious activity on a network and alert IT managers when there is suspicious behavior. It is also designed to take immediate action to stop, or at least slow down, an attack until more help is at hand.
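The "immune system" idea maps loosely onto classic anomaly detection: learn a baseline of normal behavior per host, then flag large deviations from it. A deliberately simplified, stdlib-only sketch of that concept (not Darktrace's actual, unpublished models; hosts and traffic figures are invented):

```python
import statistics

class BehaviorBaseline:
    """Learn a per-host baseline of a network metric and flag large deviations."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # deviations (in stdevs) beyond this are anomalous
        self.history = {}           # host -> list of observed values

    def observe(self, host, value):
        """Record one observation of normal traffic for a host."""
        self.history.setdefault(host, []).append(value)

    def is_anomalous(self, host, value):
        """True if `value` strays more than `threshold` stdevs from the host's baseline."""
        baseline = self.history.get(host, [])
        if len(baseline) < 2:
            return False  # not enough data to judge
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid division by zero
        return abs(value - mean) / stdev > self.threshold

detector = BehaviorBaseline()
for mb in [10, 12, 11, 9, 10, 11]:        # normal outbound traffic, in MB
    detector.observe("host-a", mb)
print(detector.is_anomalous("host-a", 11))   # within the baseline
print(detector.is_anomalous("host-a", 500))  # an exfiltration-sized spike
```

Real systems model many correlated signals at once (connections, ports, timing, peers) rather than a single metric, but the learn-then-flag loop is the same.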

That has proven to be an attractive idea to investors, as seen by the hundreds of millions that have been ploughed into this area already.

"Insight Venture Partners has a proven record of partnering with tech-focused firms, and its backing of Darktrace is another strong validation of the fundamental and differentiated technology that the Enterprise Immune System represents," said Nicole Eagan, CEO at Darktrace, in a statement. "It marks another critical milestone for the company as we experience unprecedented growth in the U.S. market and are rapidly expanding across Latin America and Asia Pacific in particular, as organizations are increasingly turning to our AI approach to enhance their resilience to cyber-attackers."

"In just four years, Darktrace has established itself as a world leader in AI-powered security," said Jeff Horing, Managing Director at Insight Venture Partners. "Insight is proud to partner with Darktrace to continue to drive its strong growth and superior product-market fit."

It's interesting to see Darktrace moving into China: the country has been identified numerous times as one of the main origination points of cyberattacks on Western firms, but what doesn't get reported much is that enterprises in China are also subject to the same problems.

CITIC Telecom CPC said that Asia Pacific businesses are battling fierce attacks on a daily basis.

"As we have seen from the headlines, humans are consistently outpaced by increasingly automated threats; organizations increasingly recognize that traditional defenses focused on past threats only provide the most essential protection," said Daniel Kwong, Senior Vice President, Information Technology and Security Services at CITIC Telecom CPC. "Companies in Asia Pacific need a new approach to remain resilient in the face of brazen, never-seen-before advanced attacks."

With Darktrace's machine learning approach, having a presence in China and working with a network provider in the region could see the company gain new kinds of insights into the larger global threat, subsequently passing on that benefit to other Darktrace users globally.

Updated with more investment detail from Darktrace.

2020 predictions for AI in business – TechTalks

Image credit: Depositphotos

Artificial intelligence is already capable of some pretty wondrous things. From powering driverless vehicles and automated manufacturing systems to delivering web search results via human-like voice assistants, it grows more prominent every day.

While it is almost impossible to predict what AI will look like 10 or even five years from now, we can certainly speculate about the coming year. What can we expect to see in 2020 with AI and business applications?

As businesses look to capitalize on the vast benefits that AI has to offer, adoption will rise considerably. Not everyone will have direct access to the necessary technology and processing power, however. That's where AI as a service comes into play. The tech world has yet to decide on a standard acronym for it, though some have gone with AIaaS.

IBM's Watson, Azure AI, and Google Cloud AI are just a few examples of how the technology applies in service-like settings. The servers carry the bulk of the processing power, allowing clients to tap into a remote solution. Quantum computing will further boost the remote service model's rise to power.

Because a third party handles everything, businesses gain access to a low-cost AI solution with almost no risk and no buy-in.

Only a handful of markets aren't suffering from a talent shortage. To make matters worse, as much as 85 percent of employees are either uninterested or actively disengaged at work.

Machine learning and other types of AI can help change that. Tools like Vibe or Keen allow managers to see how their employees are feeling, good or bad. Communication analysis tools powered by AI can help home in on morale problems and even suggest ways to improve the situation.

In the field of marketing, businesses and media managers are always looking for new ways to push the envelope. It's not just about getting eyes on marketing materials and content. It's about sharing a message that resonates with customers' interests and lifestyles.

With the help of AI, marketers can spice up their social media and email campaigns just enough to captivate audiences. Analysts expect AI to make considerable waves in the social media market, with 28.3 percent growth projected by 2023.

Imagine AI-developed ads and commercials designed from the ground up to target a specific audience or demographic. Big data and analytics solutions provide the necessary information to build such campaigns, while the AI solutions drive them forward.

AI can also power sales forecasting, customer insights, digital advertising, and even customer service. One of the more prominent uses of AI right now is as a customer support solution, answering queries and helping solve customer experience issues. Over the coming year, that will evolve to include many more facets of marketing.

One of the most lucrative uses of AI is the option to predict future events through data analysis.

With enough information flowing in, people can use machine learning algorithms to accurately predict or even pinpoint future changes coming down the pipeline. This approach allows businesses to adequately prepare for market and demand changes, supply shortages, growing competition and much more.

As more and more data amasses, and subsequently flows through AI-powered solutions, they will become smarter and more reliable, leading to frighteningly accurate prediction models. Complexity and sophistication will balloon, too, making AI the go-to for businesses that want to stay competitive and remain afloat.
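Production forecasting systems use far richer models than this, but the shape of the problem can be shown with the simplest possible baseline: predict the next value from recent history, and improve as data accumulates. A minimal stdlib sketch (the demand figures are invented):

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    if len(series) < window:
        raise ValueError("need at least `window` observations")
    return sum(series[-window:]) / window

# Invented monthly demand figures trending gently upward.
monthly_demand = [100, 104, 110, 108, 115, 121]
print(moving_average_forecast(monthly_demand))  # (108 + 115 + 121) / 3
```

Any real predictive-analytics pipeline would measure forecast error against held-out data and swap in a better model (exponential smoothing, gradient boosting, a neural network) when the baseline underperforms.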

In general, risk assessment is a broad field. Cybersecurity, third-party and vendor risks, and investment risk are all part of the broader spectrum. All of them, however, share one thing in common: to be of any use, they must run as continual, sustained processes. Mitigation is possible, but risk elimination is not.

AI solutions can take over for human eyes, not just providing 24/7 active monitoring, but also a system that learns and improves over time. Machine learning and deep neural network solutions offer some of the best and most versatile support.

Whether you're talking about customer- or employee-oriented personalization, manual processes are too cumbersome to make it work. Injecting AI and automation solutions streamlines the entire system.

Netflix, for instance, uses machine learning to deliver targeted media and content suggestions to its users. Sesame Workshop developed an AI-powered vocabulary learning app. Initially, it observes a child's reading and vocabulary level and then delivers content and experiences to match their needs. It adjusts as they grow to stay in pace with their development.

Ultimately, personalization allows businesses to give customers exactly what they want, when they want it, without soliciting them all the time. The AI solutions monitor their usage, habits and other performance data to discern what kinds of experiences, products, recommendations, and even marketing content match their lifestyle.
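Netflix's recommender is a large proprietary system, but the underlying idea of collaborative filtering (match a user to similar users, then surface what those users liked) fits in a few lines. A toy sketch with invented users, items, and ratings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Rows are users, columns are items; 0 means "not yet watched".
ratings = {
    "alice": [5, 4, 0, 0],
    "bob":   [5, 5, 3, 1],
    "carol": [1, 0, 5, 4],
}

def recommend(user, items):
    """Suggest the unwatched item best liked by the most similar other user."""
    me = ratings[user]
    neighbor = max((u for u in ratings if u != user),
                   key=lambda u: cosine(me, ratings[u]))
    candidates = [i for i, r in enumerate(me) if r == 0]
    return items[max(candidates, key=lambda i: ratings[neighbor][i])]

items = ["Drama A", "Comedy B", "Sci-Fi C", "Documentary D"]
print(recommend("alice", items))  # alice's tastes match bob, who liked Sci-Fi C
```

Production systems replace the explicit rating matrix with implicit signals (watch time, skips, re-watches) and learned embeddings, but the similarity-then-rank structure is the same.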

AI is moving the entire world of business forward, from marketing and customer service to innovative employee experiences.

It's sure to become even more prevalent as the technology evolves and becomes smarter, more accurate and more responsive.

Diffbot attempts to create smarter AI that can discern between fact and misinformation – The Financial Express

The better part of the early 2000s was spent creating artificial intelligence (AI) systems that could pass the Turing Test; the test is designed to determine whether an AI can trick a human into believing that it is a human. Now, companies are in a race to create a smarter AI that is more knowledgeable and trustworthy. A few months ago, OpenAI showcased GPT-3, a much smarter version of its AI bot, and now, as per a report in MIT Technology Review, Diffbot is working on a system that can surpass the capabilities of GPT-3.

Diffbot is expected to be a smarter system, as it works by reading a page as a human does. Using this technology, it can create knowledge graphs, which will contain verifiable facts. One of the problems that constant testing of GPT-3 reveals is that you still need a human to cross-verify the information it is collecting. Diffbot is trying to make the process more autonomous. The use of knowledge graphs is not unique to Diffbot; Google also uses them. The success of Diffbot will depend on how accurately it can differentiate between information and misinformation.

Given that it will apply natural language processing and image recognition to virtually billions of web pages, the knowledge graph it builds will be galactic in scale. It will join Google and Microsoft in crawling nearly the entire web. Its non-stop crawling of the web means it rebuilds its knowledge graph periodically, incorporating new information. If it can sift through data to verify information, it will indeed be a victory for internet companies looking to make their platforms more reliable.
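A knowledge graph of the kind described here stores facts as subject-predicate-object triples that can be queried and cross-checked. A minimal illustration (not Diffbot's implementation; the sample facts are for demonstration only):

```python
from collections import defaultdict

class KnowledgeGraph:
    """A toy triple store: facts are (subject, predicate, object) tuples.

    Production knowledge graphs add provenance, confidence scores, and
    entity resolution; this sketch only stores and queries triples.
    """

    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)  # subject -> {(predicate, object)}

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))
        self.by_subject[subject].add((predicate, obj))

    def query(self, subject, predicate):
        """All objects asserted for a (subject, predicate) pair."""
        return {o for p, o in self.by_subject[subject] if p == predicate}

kg = KnowledgeGraph()
kg.add("Diffbot", "builds", "knowledge graph")     # illustrative facts only
kg.add("Diffbot", "headquartered_in", "California")
print(kg.query("Diffbot", "builds"))
```

Verifying a claim against such a store then reduces to checking whether a candidate triple is present, or conflicts with ones that are, which is one way a system could flag misinformation.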

Scientists are working with AI to measure chronic pain – Axios

Scientists are working on a way to use AI to create quantitative measurements for chronic pain.

Why it matters: Chronic pain is an epidemic in the U.S., but doctors can't measure discomfort as they can other vital signs. Building methods that can objectively measure pain can help ensure that the millions in need of palliative care aren't left to suffer.

What's happening: Late last month, scientists from IBM and Boston Scientific presented new research outlining a framework that uses machine learning and activity monitoring devices to capture and analyze biometric data that can correspond to the perception of pain.

What they're saying: "We want to use all the tools of predictive analytics and get to the point where we can predict where people's pain is going to be in the future, with enough time to give doctors the chance to intervene," says Jeff Rogers, senior manager for digital health at IBM Research.

Background: According to one estimate, more than 100 million Americans struggle with chronic pain, at an annual cost of as much as $635 billion in painkillers and lost productivity.

What's next: Rogers hopes the research can lead to medical devices that could predict chronic pain signals ahead of suffering and adjust their response accordingly.

AI will be smarter than humans within 5 years, says Elon Musk – Express Computer

Tesla and SpaceX CEO Elon Musk has claimed that artificial intelligence will be vastly smarter than any human and will overtake us by 2025.

"We are headed toward a situation where AI is vastly smarter than humans. I think that time frame is less than five years from now. But that doesn't mean that everything goes to hell in five years. It just means that things get unstable or weird," Musk said in an interview with The New York Times over the weekend.

This is not the first time that Musk has shown concern related to AI. Back in 2016, Musk said that humans risk being treated like house pets by AI unless technology is developed that can connect brains to computers.

He even described AI as an existential threat to humanity.

"I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that," he said.

However, Musk helped found the artificial intelligence research lab OpenAI in 2015 with the goal of developing artificial general intelligence (AGI) that can learn and master several disciplines.

Recently, OpenAI released its first commercial product, a programme built on a text-generation tool that it once called "too dangerous."

It has the potential to spare people from writing long texts: once an application is developed on the basis of the programme, all users need to provide is a prompt.

OpenAI earlier desisted from revealing more about the software, fearing bad actors might misuse it to produce misleading articles, impersonate others or even automate phishing content.

IANS

If you have an interesting article / experience / case study to share, please get in touch with us at [emailprotected]
