This startup is building AI to bet on soccer games – The Verge

Listen to Andreas Koukorinis, founder of UK sports betting company Stratagem, and you'd be forgiven for thinking that soccer games are some of the most predictable events on Earth. "They're short duration, repeatable, with fixed rules," Koukorinis tells The Verge. "So if you observe 100,000 games, there are patterns there you can take out."

The mission of Koukorinis' company is simple: find these patterns and make money off them. Stratagem does this either by selling the data it collects to professional gamblers and bookmakers, or by keeping it and making its own wagers. To fund these wagers, the firm is raising money for a £25 million ($32 million) sports betting fund that it's positioning as an investment alternative to traditional hedge funds. In other words, Stratagem hopes rich people will give Stratagem their money. The company will gamble with it using its proprietary data, and, if all goes to plan, everyone ends up just that little bit richer.

It's a familiar story, but Stratagem is adding a little something extra to sweeten the pot: artificial intelligence.

At the moment, the company uses teams of human analysts spread out around the globe to report back on the various sporting leagues it bets on. This information is combined with detailed data about the odds available from various bookmakers to give Stratagem an edge over the average punter. But, in the future, it wants computers to do the analysis for it. It already uses machine learning to analyze some of its data (working out the best time to place a bet, for example), but it's also developing AI tools that can analyze sporting events in real time, drawing out data that will help predict which team will win.

Stratagem is using deep neural networks to achieve this task, the same technology that's enchanted Silicon Valley's biggest firms. It's a good fit, since this is a tool that's well-suited for analyzing vast pots of data. As Koukorinis points out, when analyzing sports, there's a hell of a lot of data to learn from. The company's software is currently absorbing thousands of hours of sporting fixtures to teach it patterns of failure and success, and the end goal is to create an AI that can watch a half-dozen different sporting events simultaneously on live TV, extracting insights as it does.

Stratagem's AI identifies players to make a 2D map of the game

At the moment, though, Stratagem is starting small. It's focusing on just a few sports (soccer, basketball, and tennis) and a few metrics (like goal chances in soccer). At the company's London offices, home to around 30 employees including ex-bankers and programmers, we're shown the fledgling neural nets for soccer games in action. On-screen, the output is similar to what you might see from the live feed of a self-driving car. But instead of the computer highlighting stop signs and pedestrians as it scans the road ahead, it's drawing a box around Zlatan Ibrahimović as he charges at the goal, dragging defenders in his wake.

Stratagem's AI makes its calculations by watching a standard broadcast feed of the match. (Pro: it's readily accessible. Con: it has to learn not to analyze the replays.) It tracks the ball and the players, identifying which team they're on based on the color of their kits. The lines of the pitch are also highlighted, and all this data is transformed into a 2D map of the whole game. From this viewpoint, the software studies matches like an armchair general: it identifies what it thinks are goal-scoring chances, or the moments where the configuration of players looks right for someone to take a shot and score.
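
To make that pipeline concrete, here is a minimal sketch of one standard way to do the broadcast-to-pitch mapping described above, assuming player detections and four known pitch landmarks have already been located in the frame. All coordinates and names are illustrative, not Stratagem's actual code.

```python
import numpy as np
import cv2

# Pixel positions of four known pitch landmarks in the broadcast frame
# (assumed to come from the line-detection step described in the article).
frame_pts = np.float32([[312, 410], [905, 398], [1180, 640], [140, 655]])
# The same landmarks in real pitch coordinates (meters, on a 105 x 68 m pitch).
pitch_pts = np.float32([[16.5, 13.84], [16.5, 54.16], [0.0, 68.0], [0.0, 0.0]])

# Homography that maps broadcast pixels onto the 2D pitch map.
H, _ = cv2.findHomography(frame_pts, pitch_pts)

def to_pitch_coords(foot_point_xy):
    """Project a detected player's foot position into pitch coordinates."""
    p = np.array([[foot_point_xy]], dtype=np.float32)
    return cv2.perspectiveTransform(p, H)[0][0]

print(to_pitch_coords((620, 520)))  # a player's (x, y) position in meters
```

Once every detection is expressed in pitch coordinates, questions like "how many defenders stand between the ball and the goal?" become simple geometry.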

"Football is such a low-scoring game that you need to focus on these sorts of metrics to make predictions," says Koukorinis. "If there's a shot on target from 30 yards with 11 people in front of the striker and that ends in a goal, yes, it looks spectacular on TV, but it's not exciting for us. Because if you repeat it 100 times the outcomes won't be the same. But if you have Lionel Messi running down the pitch and he's one-on-one with the goalie, the conversion rate on that is 80 percent. We look at what created that situation. We try to take the randomness out, and look at how good the teams are at what they're trying to do, which is generate goal-scoring opportunities."
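
Koukorinis' argument is essentially an expected-goals calculation. A toy version, using his 80 percent one-on-one figure and invented conversion rates for the other chance types:

```python
# Weight each chance by an assumed historical conversion rate instead of
# counting raw shots. Only the 0.80 figure comes from the article.
conversion_rates = {
    "one_on_one": 0.80,           # Messi alone against the goalie
    "long_range_crowded": 0.03,   # 30-yard shot through 11 defenders (invented)
    "open_play_in_box": 0.15,     # (invented)
}

def expected_goals(chances):
    """Sum the conversion probabilities of a team's chances."""
    return sum(conversion_rates[c] for c in chances)

print(expected_goals(["one_on_one", "long_range_crowded"]))  # 0.83
print(expected_goals(["open_play_in_box"] * 5))              # 0.75
```

On this view, a team that manufactures one clean one-on-one has done more than a team that fires five hopeful shots from distance, even if the scoreline says otherwise on the day.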

Whether or not counting goal-scoring opportunities is the best way to rank teams is difficult to say. Stratagem says it's a metric that's popular with professional gamblers, but they and the company weigh it with a lot of other factors before deciding how to bet. Stratagem also notes that the opportunities identified by its AI don't consistently line up with those spotted by humans. Right now, the computer gets it correct about 50 percent of the time. Despite this, the company says its current betting models (which it develops for soccer, but also basketball and tennis) are right more than enough times for it to make a steady return, though it won't share precise figures.

A team of 65 analysts collects data around the world

At the moment, Stratagem generates most of its data about goal-scoring opportunities and other metrics the old-fashioned way: using a team of 65 human analysts who write detailed match reports. The company's AI would automate some of this process and speed it up significantly. (Each match report takes about three hours to write.) Some forms of data-gathering would still rely on humans, however.

A key task for the company's agents is finding out a team's starting lineup before it's formally announced. (This is a major driver of pre-game betting odds, says Koukorinis, and knowing in advance helps you beat the market.) Acquiring this sort of information isn't easy. It means finding sources at a club, building up a relationship, and knowing the right people to call on match day. Chatbots just aren't up to the job yet.

Machine vision, though, is really just one element of Stratagem's AI business plan. It already applies machine learning to more mundane facets of betting, like working out the best time to place a bet in any particular market. In this regard, what the company is doing is no different from many other hedge funds, which for decades have been using machine learning to come up with new ways to trade. Most funds blend human analysis with computer expertise, but at least one is run completely by decisions generated by artificial intelligence.

However, simply adding more computers to the mix isn't always a recipe for success. There's data showing that if you want to make the most out of your money, it's better to just invest in the top-performing stocks of the S&P 500 rather than sign up for an AI hedge fund. That's not the best sign that Stratagem's sports-betting fund will offer good returns, especially when such funds are already controversial.

In 2012, a sports-betting fund set up by UK firm Centaur Holdings collapsed just two years after it launched. It lost $2.5 million after promising investors returns of 15 to 20 percent. To critics, operations like this are just borrowing the trappings of traditional funds to make gambling look more like investing.

"I don't doubt it's great fun... but don't qualify it with the term investment."

David Stevenson, director of finance research company AltFi, told The Verge that there's nothing essentially wrong with these funds, but they need to be thought of as their own category. "I don't particularly doubt it's great fun [to invest in one] if you like sports and a bit of betting," said Stevenson. "But don't qualify it with the term investment, because investment, by its nature, has to be something you can predict over the long run."

Stevenson also notes that AI hedge funds that are successful (those that torture the math within an inch of its life to eke out small but predictable profits) tend not to seek outside investment at all. They prefer keeping the money to themselves. "I treat most things that combine the acronym AI and the word investing with an enormous dessert spoon of salt," he said.

Whether or not Stratagem's AI can deliver insights that make sporting events as predictable as the tides remains to be seen, but the company's investment in artificial intelligence does have other uses. For starters, it can attract investors and customers looking for an edge in the world of gambling. It can also automate work that's currently done by the company's human employees and make it cheaper. As with other businesses that are using AI, it's these smaller gains that might prove to be most reliable. After all, small, reliable gains make for a good investment.

How Much Dark Matter in the Universe? AI May Have the Answer – Technology Networks

Understanding how our universe came to be what it is today, and what its final destiny will be, is one of the biggest challenges in science. The awe-inspiring display of countless stars on a clear night gives us some idea of the magnitude of the problem, and yet that is only part of the story. The deeper riddle lies in what we cannot see, at least not directly: dark matter and dark energy. With dark matter pulling the universe together and dark energy causing it to expand faster, cosmologists need to know exactly how much of those two is out there in order to refine their models.

At ETH Zurich, scientists from the Department of Physics and the Department of Computer Science have now joined forces to improve on standard methods for estimating the dark matter content of the universe through artificial intelligence. They used cutting-edge machine learning algorithms for cosmological data analysis that have a lot in common with those used for facial recognition by Facebook and other social media. Their results have recently been published in the scientific journal Physical Review D.

While there are no faces to be recognized in pictures taken of the night sky, cosmologists still look for something rather similar, as Tomasz Kacprzak, a researcher in the group of Alexandre Refregier at the Institute of Particle Physics and Astrophysics, explains: "Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy." As dark matter cannot be seen directly in telescope images, physicists rely on the fact that all matter - including the dark variety - slightly bends the path of light rays arriving at the Earth from distant galaxies. This effect, known as "weak gravitational lensing", distorts the images of those galaxies very subtly, much like far-away objects appear blurred on a hot day as light passes through layers of air at different temperatures.
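
For readers who want the textbook version of this "working backwards": weak lensing is usually described by a convergence field (isotropic magnification) and a shear field (stretching), both derived from a lensing potential; surveys measure the shear from galaxy shapes and invert it to map the convergence, which traces the projected mass. These are the standard relations, not equations specific to this study:

```latex
% Standard weak-lensing relations: convergence \kappa and shear
% components \gamma_1, \gamma_2 in terms of the lensing potential \psi.
\kappa   = \tfrac{1}{2}\left(\partial_1^2 + \partial_2^2\right)\psi, \qquad
\gamma_1 = \tfrac{1}{2}\left(\partial_1^2 - \partial_2^2\right)\psi, \qquad
\gamma_2 = \partial_1 \partial_2\, \psi
```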

Cosmologists can use that distortion to work backwards and create mass maps of the sky showing where dark matter is located. Next, they compare those dark matter maps to theoretical predictions in order to find which cosmological model most closely matches the data. Traditionally, this is done using human-designed statistics such as so-called correlation functions that describe how different parts of the maps are related to each other. Such statistics, however, are limited as to how well they can find complex patterns in the matter maps.

"In our recent work, we have used a completely new methodology", says Alexandre Refregier. "Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job." This is where Aurelien Lucchi and his colleagues from the Data Analytics Lab at the Department of Computer Science come in. Together with Janis Fluri, a PhD student in Refregier's group and lead author of the study, they used machine learning algorithms called deep artificial neural networks and taught them to extract the largest possible amount of information from the dark matter maps.

In a first step, the scientists trained the neural networks by feeding them computer-generated data that simulates the universe. That way, they knew what the correct answer for a given cosmological parameter - for instance, the ratio between the total amount of dark matter and dark energy - should be for each simulated dark matter map. By repeatedly analysing the dark matter maps, the neural network taught itself to look for the right kind of features in them and to extract more and more of the desired information. In the Facebook analogy, it got better at distinguishing random oval shapes from eyes or mouths.
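
A minimal sketch of that training setup, assuming PyTorch and toy dimensions: a small convolutional network regresses cosmological parameters from simulated mass maps whose true parameters are known. The actual network in the study is deeper and carefully tuned; everything below is illustrative.

```python
import torch
import torch.nn as nn

class MapRegressor(nn.Module):
    """Tiny CNN mapping a 128x128 mass map to cosmological parameters."""
    def __init__(self, n_params=2):  # e.g., matter density and clustering amplitude
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 32 * 32, n_params)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = MapRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for simulated maps and the parameters used to generate them.
maps, labels = torch.randn(8, 1, 128, 128), torch.rand(8, 2)
for _ in range(10):  # real training repeats this over many simulated maps
    optimizer.zero_grad()
    loss = loss_fn(model(maps), labels)
    loss.backward()
    optimizer.step()
```

Once trained on simulations, the same network can be pointed at real survey maps, which is exactly the KiDS-450 step described below.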

The results of that training were encouraging: the neural networks came up with values that were 30% more accurate than those obtained by traditional methods based on human-made statistical analysis. For cosmologists, that is a huge improvement as reaching the same accuracy by increasing the number of telescope images would require twice as much observation time - which is expensive.

Finally, the scientists used their fully trained neural network to analyse actual dark matter maps from the KiDS-450 dataset. "This is the first time such machine learning tools have been used in this context," says Fluri, "and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications."

As a next step, he and his colleagues are planning to apply their method to bigger image sets such as the Dark Energy Survey. Also, more cosmological parameters and refinements such as details about the nature of dark energy will be fed to the neural networks.

Reference: Fluri J, Kacprzak T, Lucchi A, Refregier A, Amara A, Hofmann T, Schneider A. Cosmological constraints with deep learning from KiDS-450 weak lensing maps. Physical Review D. 2019;100:063514. doi:10.1103/PhysRevD.100.063514

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.

AI learns to write its own code by stealing from other programs – New Scientist

Set a machine to program a machine

iunewind/Alamy Stock Photo

By Matt Reynolds

"Out of the way, human, I've got this covered." A machine learning system has gained the ability to write its own code.

Created by researchers at Microsoft and the University of Cambridge, the system, called DeepCoder, solved basic challenges of the kind set by programming competitions. This kind of approach could make it much easier for people to build simple programs without knowing how to write code.

"All of a sudden people could be so much more productive," says Armando Solar-Lezama at the Massachusetts Institute of Technology, who was not involved in the work. "They could build systems that it [would be] impossible to build before."

Ultimately, the approach could allow non-coders to simply describe an idea for a program and let the system build it, says Marc Brockschmidt, one of DeepCoder's creators at Microsoft Research in Cambridge, UK.

DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software, just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.
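
A toy illustration of that search, with invented primitives and invented "learned" scores standing in for DeepCoder's neural predictions: try compositions of functions, most-promising primitives first, until one reproduces the given input-output examples.

```python
from itertools import product

PRIMITIVES = {
    "double":   lambda xs: [2 * x for x in xs],
    "sort":     sorted,
    "reverse":  lambda xs: list(reversed(xs)),
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}
# In DeepCoder, a neural net predicts which primitives a task likely needs;
# here we fake those scores.
scores = {"sort": 0.9, "double": 0.7, "reverse": 0.2, "drop_neg": 0.1}

def synthesize(examples, max_len=2):
    """Enumerate programs (sequences of primitives) in score order."""
    names = sorted(PRIMITIVES, key=scores.get, reverse=True)
    for length in range(1, max_len + 1):
        for combo in product(names, repeat=length):
            def run(xs, combo=combo):
                for name in combo:
                    xs = PRIMITIVES[name](xs)
                return xs
            if all(run(i) == o for i, o in examples):
                return combo
    return None

# Find a program turning [3, -1, 2] into [-2, 4, 6].
print(synthesize([([3, -1, 2], [-2, 4, 6])]))  # ('sort', 'double')
```

The predicted scores matter because they shrink the search: with thousands of primitives, trying the likely ones first is the difference between milliseconds and minutes, which is the speedup described below.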

It could allow non-coders to simply describe an idea for a program and let the system build it

One advantage of letting an AI loose in this way is that it can search more thoroughly and widely than a human coder, so could piece together source code in a way humans may not have thought of. What's more, DeepCoder uses machine learning to scour databases of source code and sort the fragments according to its view of their probable usefulness.

All this makes the system much faster than its predecessors. DeepCoder created working programs in fractions of a second, whereas older systems take minutes to trial many different combinations of lines of code before piecing together something that can do the job. And because DeepCoder learns which combinations of source code work and which ones don't as it goes along, it improves every time it tries a new problem.

The technology could have many applications. In 2015, researchers at MIT created a program that automatically fixed software bugs by replacing faulty lines of code with working lines from other programs. Brockschmidt says that future versions could make it very easy to build routine programs that scrape information from websites, or automatically categorise Facebook photos, for example, without human coders having to lift a finger.

"The potential for automation that this kind of technology offers could really signify an enormous [reduction] in the amount of effort it takes to develop code," says Solar-Lezama.

But he doesn't think these systems will put programmers out of a job. With program synthesis automating some of the most tedious parts of programming, he says, coders will be able to devote their time to more sophisticated work.

At the moment, DeepCoder is only capable of solving programming challenges that involve around five lines of code. But in the right coding language, a few lines are all that's needed for fairly complicated programs.

"Generating a really big piece of code in one shot is hard, and potentially unrealistic," says Solar-Lezama. "But really big pieces of code are built by putting together lots of little pieces of code."

This article appeared in print under the headline "Computers are learning to code for themselves"

Prof. Hilbert: ‘The merger between biological and AI has already crossed beyond any point of return’ – AI in Healthcare

Commenting on what name historians might settle on to label our interesting times, Hilbert notes that this most recent period of the "ancient and incessant logic of societal transformation" took a series of titles between the 1970s and the year 2000.

In chronological order, he recounts, these have included "post-industrial society," "information economy," "information society," "fifth Kondratieff," "information technology revolution," "digital age" and "information age."

"While only time will provide the required empirical evidence to set any categorization of this current period on a solid footing, recent developments have suggested that we are living through different long waves within the continuously evolving information age," writes Hilbert, who holds separate doctorates in communication and economics/social sciences.

Considering the outlook for AI, he sees some already achieved advancements (cancer diagnostics, speech recognition) as dazzling.

And in case it's been stealthy enough in its march forward to escape the attention it ought to be getting, note well that AI has become "an indispensable pillar of the most crucial building blocks of society."

Google wants to make sure AI advances don’t leave anyone behind – The Verge

For every exciting opportunity promised by artificial intelligence, there's a potential downside that is its bleak mirror image. We hope that AI will allow us to make smarter decisions, but what if it ends up reinforcing the prejudices of society? We dream that technology might free us from work, but what if only the rich benefit, while the poor are dispossessed?

It's issues like these that keep artificial intelligence researchers up at night, and they're also the reason that Google is launching an AI initiative today to tackle some of these same problems. The new project is named PAIR (it stands for People + AI Research) and its aim is to study and redesign the ways people interact with AI systems and try to ensure that the technology benefits and empowers everyone.

Google wants to help everyone from coders to users

It's a broad remit, and an ambitious one. Google says PAIR will look at a number of different issues affecting everyone in the AI supply chain, from the researchers who code algorithms to the professionals like doctors and farmers who are (or soon will be) using specialized AI tools. The tech giant says it wants to make AI user-friendly, and that means not only making the technology easy to understand (getting AI to explain itself is a known and challenging problem) but also ensuring that it treats its users equally.

It's been noted time and time again that the prejudices and inequalities of society often become hard-coded in AI. This might mean facial recognition software that doesn't recognize dark-skinned users, or a language processing program that assumes doctors are always male and nurses are always female.

Usually this sort of issue is caused by the data that artificial intelligence is trained on. Either the information it has is incomplete, or it's prejudiced in some way. That's why PAIR's first real news is the announcement of two new open-source tools called Facets Overview and Facets Dive, which make it easier for programmers to examine datasets.

In the screenshot above, Facets Dive is being used to test a facial recognition system. The program is sorting the testers by their country of origin and comparing errors with successful identifications. This allows a coder to quickly see where their dataset is falling short, and make the relevant adjustments.
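
The underlying check is simple even without the visualization. A hedged sketch of the same slicing done by hand, with invented column names and data rather than the Facets API:

```python
import pandas as pd

# Toy test results: one row per identification attempt.
results = pd.DataFrame({
    "country": ["US", "US", "NG", "NG", "IN", "IN"],
    "correct": [1, 1, 0, 1, 0, 0],
})

# Error rate per group; a group with a much higher rate points at a
# gap in the training data.
error_by_group = 1 - results.groupby("country")["correct"].mean()
print(error_by_group.sort_values(ascending=False))
```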

Currently, PAIR has 12 full-time staff. It's a bit of a small figure considering the scale of the problem, but Google says PAIR is really a company-wide initiative, one that will draw in expertise from the firm's various departments.

More open-source tools like Facets will be released in the future, and Google will also be setting up new grants and residencies to sponsor related research. It's not the only big organization taking these issues seriously (see also: the Ethics and Governance of Artificial Intelligence Fund and Elon Musk-funded OpenAI), but it's good to see Google join the fight for a fairer future.

4 Ways That You Can Prove ROI From AI – Forbes

How To Prove ROI From AI

Your use of AI is probably succeeding in countless ways; however, AI also has the potential to fail you, and in a big way: by sealing the fate of your business and career. In fact, you might not even be able to prove that AI is driving you or your stakeholders to profit at all. Failures in the world of AI today can be small or enormous. Take, for example, IBM's Watson for Oncology. The initiative had to be cancelled after $62 million in spending led to unsafe treatment recommendations.

According to VentureBeat, an estimated 87% of data science projects never make it to the production stage, and TechRepublic claims that 56% of global CEOs expect it to take 3-5 years to see any real ROI on their AI investment. Long story short: you are not alone in your quest for returns. Never mind that; you can take solace in the reality that you want to be a leader, not a laggard, and you need to be the one who can prove that your use of AI is contributing to ROI via expansion and growth.

Start Back at the Beginning

AI has taken over nearly every facet of business in the 21st century. Every major player in every industry has AI at the root of nearly every project. In retail, Domino's Pizza has used AI to reduce delivery times and to predict them more accurately, improving prediction accuracy from 75% to 95%.

In mining, there are companies in Australia using autonomous trucks and drilling technology to cut mining costs, improve worker safety and boost productivity by 20%. It is also predicted that 77% of jobs in the country's mining sector will be altered by technological innovations, increasing productivity by up to 23%.

Then in banking, Barclays is using AI to detect and prevent fraud. Barclays is also using similar tech to improve customer experience through chatbots, leveraging the vast amount of data it has accumulated. However, Barclays still faces challenges: it struggles with the implementation of faster payment options for its customers.

Challenges That You'll Face

You will have to conquer several hills on your journey to returns on AI investments. One challenge has to do with the American AI Initiative. Even though this policy implementation is a good start, we are still behind some of our global competitors when it comes to direct government funding of AI. Therefore, if you don't have the capital to internally implement, monitor and optimize your AI, you will have to seek out funding.

You will also need a well-planned and executed initiative for retraining, reskilling and repurposing your employees. A recent study by McKinsey predicts that in the U.S. up to 33.3% of the 2030 workforce may need to learn new skills and find new work. By now, you have already spent countless hours and substantial revenue on recruiting, hiring, training and building your team and company culture. Don't allow that money to go to waste by allowing your workforce to become irrelevant. Invest more in your people now to save your business in the long run.

In addition, your access to data and your use of it are critical. The AI you implement is only as good as the fuel you give it, and that fuel is data. The Pistoia Alliance released a survey in 2019 showing that 52% of respondents cited insufficient access to data as one of the biggest barriers to the adoption of AI.

How can you mirror the success of the previously mentioned players? In order to replicate their triumphs, you must start by asking yourself variations of the following questions:

What are the specific business goals or challenges that you're looking to address with AI solutions?

Buying AI is not buying a one-size-fits-all, off-the-shelf solution for your business. Business leaders must treat AI like any other technology investment: it should have a specific purpose to solve a specific goal. It must be tracked with benchmarks and KPIs. You must then hold yourself and your teams accountable for those numbers.

Is this the right technology to solve your business problem?

It's important that an organization approaches AI from the starting point of "What problem do we need to solve?" rather than "Let's do something with AI." And it should be the right problem, one for which AI can have a substantial impact. Many companies have not answered basic questions on what business problems can be addressed with AI, which leads to unrealistic expectations.

Do you have internal expertise to maintain AI integration, and a team committed to training and improving the technology across your organization?

How are companies creating a people-focused practice around the operationalization of AI on a company-wide scale? Some have dedicated AI teams. Some have virtual teams where, two days out of the week, data scientists are embedded with the operations team (this is analogous to DBAs who train non-technical colleagues to understand databases' role within company operations). Breaking down organizational silos, and allowing various groups to interact and collaborate, is a critical enabler of an AI project.

How will you measure the success of an AI deployment?

You should create your own AI KPIs and maintain a working knowledge of how you will measure them prior to deployment. There should be no guesswork or maybes involved in the process. If you want to prove returns, you need concrete benchmarks to get you there.

Now what? You've worked diligently and answered all of the questions. You're ready for implementation, but how do you execute? There are multiple factors to keep in mind in order to ensure your deployment is a success.

Growth and Expansion Are Greater Than Savings

While AI has the potential to cut expenses, the primary focus should be on growth and expansion in order to maximize outcomes. This includes innovation in products and services, efficiency for productivity, and gaining market share. AI is optimized when it is adopted at every level of technology, from value chains to pricing, and when companies understand the AI-related preferences of their customers. Your best bet is to stay focused on growth by innovating new products and fine-tuning your business model. To better capitalize on the technological benefits of AI, stay on the offense.

Investment is Needed in Both Human Resources and Technology: You cannot fully benefit from AI in technology if your employees aren't prepared. Consider that 69% of enterprises are facing a moderate, major or extreme skills gap when it comes to AI. Management and staff must be educated and trained in cross-functional teams in all processes and operations. Finding the right people for new jobs (and the recruitment of new employees for the requisite technical job categories) is essential.

ROI For Business as a Whole: Consider the potential ROI for the entire business. If there is a bottleneck operation in the automation process, you need to increase throughput, not just in one area, but throughout the entire organization. With business process automation platforms growing by 63% in 2019, you may be tempted to just dive in and throw caution to the wind. But there is an effective playbook out there. Look at, for example, companies like Bosch, which is saving around $500,000 a year by automating some management operations regarding its thousands-strong network of suppliers. Find a similar story in your industry and study their playbook.

Continue to Cultivate and Develop AI: The business world is trending in the correct direction when it comes to workplace culture and AI. According to this publication, Forbes, "65% of workers are optimistic...about having robot co-workers" and "64% of workers would trust a robot more than their manager." Take steps to ensure that you have an abundant AI environment for success, an increase in knowledgeable talent/workers, and a heightened corporate awareness of general AI knowledge and related benefits. Make certain these are fully embedded in your organization from top to bottom.

Effectively measuring ROI on AI is a universal challenge

Everyone faces the challenge of creating their standards, KPIs and goals for their AI. There have been several successful methods deployed. Here are a few to guide you on your path:

Determine What It Will Cost Versus What It Will Save: Focus on use-case goals around savings instead of potential revenue growth. This includes reduced employee hours, reduced headcount and less time spent on processes. How much you invest in AI should be based on these savings forecasts and not on revenue uplift. This calculation determines how much you should be willing to invest and the break-even point for AI deployment. If the deployment is not successful, the organization will have risked only what it expected to save, rather than risking what it expected to add in revenue.

Focus on Soft Dollar Benefits: On top of cost savings and additional revenue, companies must also calculate soft dollar benefits, such as fewer errors, reduced turnover, and faster access to information and service. AI will improve employee productivity and customer satisfaction, and it will reveal areas where a company may have been unknowingly struggling to achieve maximum value.

Know When the Break-Even Point Will Be: The break-even point is when the cumulative cost savings of an AI project equal the investment (a toy calculation appears after this list). Once an organization has calculated what it hopes to save by implementing AI, only then should it begin to consider how much to invest. Many organizations struggle with predicting the break-even point for AI deployments. By allowing cost savings to dictate the initial AI investment, companies can begin to estimate when the break-even point should be reached.

New Product or Service = New Revenue Streams: This point is about maximizing ROI. Once an organization has become fluent in AI deployment, it's an ideal time to determine ways in which AI can help deliver new products or services to customers. Businesses that attempt this level of deployment should invest with both product development and AI piloting principles in mind. New products and services require additional investments beyond the AI technology itself (including marketing, sales, product management, etc.). As a result of these additional investments, organizations should not use the previous equations for determining ROI.
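
As promised, a toy version of the break-even arithmetic from the list above, with invented numbers:

```python
# Size the AI investment off forecast savings, then see when cumulative
# savings repay it. Both figures below are assumptions for illustration.
annual_savings = 400_000     # forecast cost savings per year (assumed)
investment     = 1_000_000   # total AI deployment cost (assumed)

break_even_years = investment / annual_savings
print(f"Break-even after {break_even_years:.1f} years")  # 2.5 years
```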

Stay committed to digitalization and automation as if your business and career depend on it, because they do. Stay committed to maximizing efficiency and return on your investment, and most importantly, to being able to demonstrate ROI. If you answer the questions above and execute the four steps of implementation outlined, in conjunction with a well-worn path to success, you will set yourself and your business up for triumph and cast yourself vastly ahead of any competition. Refuse to take these facts into consideration, and you will be left in the dustbin of history.

How AI and mosquito sex parties can save the world – VentureBeat

Diptera.ai has raised a $3 million seed round to fight mosquitoes with mosquitoes and AI-based sex sorting.

Jerusalem-based Diptera.ai has figured out a way to use AI to fight the growing threat of mosquitoes, which are spreading malaria and viruses like Zika, dengue, and yellow fever. While the method for fighting mosquitoes has been around for decades, AI can take it to a new level and democratize what was otherwise a very costly and localized abatement effort.

We'll get to the sex parties in a bit.

Diptera.ai is using computer vision and eco-friendly technology to make it easier to control mosquito populations using the sterile insect technique, which sends sterilized male mosquitoes to mate with female mosquitoes, said Diptera.ai CEO Vic Levitin in an interview with VentureBeat.

"We think we can disrupt the $100 billion pest control market," Levitin said, noting that many other pest control methods are toxic to both humans and the environment.

Above: Mosquito larvae.

Image Credit: Diptera.ai

The company could help mitigate the death toll from mosquitoes. More than just a nuisance, they are the deadliest creatures on Earth, as they kill more than 700,000 people a year and infect hundreds of millions more with diseases. A recent book, The Mosquito by Timothy Winegard, cites estimates that mosquitoes have killed 52 billion people, nearly half of the humans who have ever lived.

Diptera.ai's technology works for a host of insects, including household and agricultural pests. The company is starting with mosquitoes, a rapidly growing problem with no effective solution to date. Due to climate change, by 2050 half of the world's population (including the U.S. and Europe) will be living among disease-spreading mosquitoes.

With its technology in the testing stage now, Diptera.ai plans to offer an affordable subscription service to what it calls a highly effective and eco-friendly biological pest control method. Most pest control methods are based on insecticides that are toxic to both humans and the environment. Despite its high effectiveness, sterilization has thus far been limited to a handful of pests because of the prohibitive costs of implementing it.

Standard control methods are losing effectiveness as mosquitoes rapidly become resistant to existing pesticides. Moreover, public opinion and regulation limit the use of toxic insecticides. As a result, people increasingly find themselves unable to enjoy the outdoors without being at risk from emerging and potentially devastating diseases.

Above: Ariel Livne, CTO of Diptera.ai, at a lab in Israel.

Image Credit: Diptera.ai

Levitin believes his company can stop mosquitoes by the billions, mainly by releasing sterile males to mate with females. "We create mosquito sex parties," he said.

Trust Ventures led the funding round, with participation from existing investors IndieBio and Fresh.fund, as well as new investors who joined the round.

Diptera.ai was started by Ariel Livne, Elly Ordan, and Levitin. In October 2020, the team graduated from the IndieBio Accelerator, and it now has 10 employees. The seed round should enable the company to finish its pilot, which could grow into a product launch.

"We've raised enough money to prove the concept," Levitin said.

At some point, the Environmental Protection Agency will likely have to approve the Diptera.ai solution.

Above: Elly Ordan of Diptera.ai inspects mosquito larvae.

Image Credit: Diptera.ai

The sterile insect technique (SIT) is a biological pest control method in which mostly government-run entities release overwhelming numbers of sterile male insects into the wild. These sterile males mate with female mosquitoes, which are the only mosquitoes that bite humans and animals. The female mosquitoes only mate once in their lifetimes, but they each lay hundreds of eggs. If they can be tricked into mating with sterile males, then they wont create offspring.

"The sterile insect technique is the most effective," Levitin said. "Mosquitoes mate once as females in their lives. If they mate with sterile males, then it suppresses the population."

This technique has been used in the U.S. to control the spread of the Mediterranean fruit fly, with billions a month being released into the wild. But it is expensive due to high production and distribution costs, and is often limited to localized control efforts.

The technique started in the 1950s in Russia and the U.S., when it was used to control the tsetse fly in Africa.

In 2018, the Debug project saw Google's Verily unit release millions of sterile mosquitoes into the area of Fresno, California, resulting in a temporary 93% suppression of the population during mosquito season, which runs from around March through October.

Above: Vic Levitin (center) is cofounder and CEO of Diptera.ai.

Image Credit: Diptera.ai

Diptera.ai's market research has shown its solution is 20 times less expensive than existing SIT methods.

For most insects, the bottleneck for SIT is sex separation. Currently, mosquitoes are sex-sorted late in their development, when the mosquitoes are fragile and have a limited remaining lifespan of a few days. "Shipping them is impractical," Levitin said.

Normally, implementing SIT requires building and maintaining a local mosquito factory near every release site. Diptera.ai combines computer vision, deep biology, and automation to sex-sort mosquitoes (and other insects) at the larval stage, which was previously considered impossible. This allows for a centralized mass production of sterile male mosquitoes that can then be shipped to the end customers for release.

"We can sex-sort them at the larva stage," said Levitin. "Larvae used to be considered asexual. Nobody tried to sex-sort them. This is where we are innovative. We can tell the sex when they are larvae. That's two weeks before they become adults. So we can produce them in mass production and then ship them across the country. This gives us economies of scale where we can offer it as a service."
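
A hedged sketch of what the sorting step might look like in code; the classifier, threshold, and routing here are stand-ins, not Diptera.ai's system:

```python
import random

def prob_male(larva_image) -> float:
    """Score from a trained vision model; a random stand-in here."""
    return random.random()

def sort_stream(larva_images, threshold=0.95):
    """Keep only confidently male larvae for sterilization and shipping."""
    males = []
    for img in larva_images:
        if prob_male(img) >= threshold:
            males.append(img)  # -> on to irradiation and shipping
        # Low-confidence or female larvae are diverted out of the pipeline.
    return males

print(len(sort_stream(range(1000))))  # roughly 50 pass a 0.95 cutoff
```

The strict threshold reflects the asymmetry of the problem: shipping a biting, egg-laying female by mistake is far more costly than discarding a male.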

Mosquitoes exist as larvae for a lot longer than they live as adults. If you can identify the males and females at this stage, then there is a lot more time to ship them to the right place in the country, and then the whole U.S. could be served by a mass-production factory that churns out sterilized mosquitoes by the billions.

Once it separates the males, Diptera.ai sterilizes them with radiation, using the equivalent of a microwave oven, except one used for sterilization purposes. The oven is about the size of a pizza oven, and it's not dangerous to humans, Levitin said.

Most of the mosquitoes in the U.S. are of the Asian tiger variety (Aedes albopictus), and these mosquitoes don't travel far, making it easier to take down populations with localized efforts. By contrast, mosquitoes in Africa can fly long distances, and that makes it harder to control the population, Levitin said.

"Just like the cloud disrupted the computing industry with affordable, on-demand computing power, Diptera.ai disrupts pest control with an affordable SIT-as-a-service," Levitin said. "Instead of building and maintaining insect production factories, customers will subscribe to our service to receive shipments of sterile males ready for release."

With Diptera.ai's service, luxury resorts, residential complexes, or even homeowners should be able to afford the eradication service. It has to be a subscription because the mosquitoes will come back, year after year, if you don't take them out regularly.

"It's like the Mafia," Levitin said. "You are paying protection money to us."

By the way, this is the second Israeli startup that I've seen take up the fight against mosquitoes. Bzigo uses computer vision to find where a mosquito lands in your home, then it shines a laser on it so you can zap the mosquito yourself. No matter how much Diptera.ai succeeds, I imagine there will always be a need for Bzigo's product.

Taser bought two computer vision AI companies – Engadget

The Axon AI group will include about 20 programmers and engineers. They'll be tasked with developing AI capabilities specifically for public safety and law enforcement. The backbone of the Axon AI platform comes from Dextro Inc. Their computer-vision and deep learning system can search the visual contents of a video feed in real time. Technology from the Fossil Group, which Taser also acquired, will support Dextro's search capability by "improving the accuracy, efficiency and speed of processing images and video," according to the company's press release.

The AI platform is the latest addition to Taser's Axon ecosystem, which includes everything from body and dash cameras to evidence and interview logging. Altogether, the Axon system handles 5.2 petabytes of data from more than half of the nation's major city police departments.

With the new AI system in place, law enforcement could finally get a handle on all that footage. "Axon AI will greatly reduce the time spent preparing videos for public information requests or court submission," Taser CEO Rick Smith said in a statement. "This will lay the foundation for a future system where records are seamlessly recorded by sensors rather than arduously written by police officers overburdened by paperwork."

What Would an AI Doomsday Actually Look Like? – Futurism

Imagining AI's Doomsday

Artificial intelligence (AI) is going to transform the world, but whether it will be a force for good or evil is still subject to debate. To that end, a team of experts gathered for Arizona State University's (ASU) Envisioning and Addressing Adverse AI Outcomes workshop to talk about the worst-case scenarios that we could face if AI veers towards becoming a serious threat to humanity.

"There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology," says AI scientist Eric Horvitz.

As an optimistic supporter of everything AI has to offer, Horvitz has a very positive outlook about the future of AI. But he's also pragmatic enough to recognize that for the technology to consistently advance and move forward, it has to earn the trust of the public. For that to happen, all possible concerns surrounding the technology have to be discussed.

That conversation, specifically, was what the workshop hoped to tackle. Forty scientists, cyber-security experts, and policy-makers were divided into two teams to hash out the numerous ways AI can cause trouble for the world. The red team was tasked with imagining all the cataclysmic scenarios AI could incite, and the blue team was asked to devise solutions to defend against such attacks.

These situations had to be realistic rather than purely hypothetical, anchored in what's possible given our current technology and what we expect to come from AI over the next few decades.

Among the scenarios described were automated cyber attacks (wherein a cyber weapon is intelligent enough to hide itself after an attack and prevent all efforts to destroy it), stock markets being manipulated by machines, self-driving technology failing to recognize critical road signs, and AI being used to rig or sway elections.

Not all scenarios were given sufficient solutions either, illustrating just how unprepared we are at present to face the worst possible situations that AI could bring. For example, in the case of intelligent, automated cyber attacks, it would apparently be quite easy for attackers to use unsuspecting internet gamers to cover their tracks, using something like an online game to obscure the attacks themselves.

As entertaining as it may be to think up all of these wild doomsday scenarios, it's actually a deliberate first step towards real conversations and awareness about the threat that AI could pose. John Launchbury, from the US Defense Advanced Research Projects Agency, hopes it will lead to concrete agreements on rules of engagement for cyber war, automated weapons, and robot troops.

The purpose of the workshop, after all, isn't to incite fear, but to realistically anticipate the various ways technology can be misused and, hopefully, get a head start on defending ourselves against it.

Beyond Limits to Expand Industrial AI in Energy with NVIDIA – GlobeNewswire

LOS ANGELES, Dec. 16, 2020 (GLOBE NEWSWIRE) -- Beyond Limits, an industrial and enterprise-grade AI technology company built for the most demanding sectors, today announced it is working with NVIDIA to advance its initiative for bringing digital transformation to the energy sector.

Beyond Limits will collaborate with NVIDIA experts on joint go-to-market strategies for Beyond Limits products and solutions in the energy sector. The company will also take advantage of NVIDIA technical support and GPU-optimized AI software such as containers, models and application frameworks from the NVIDIA NGC catalog to improve the performance and efficiency of its software development cycle.

"AI has the potential to make a major impact on problems facing the heart of the global energy business, but the technology requires high levels of computing power to operate on the level and scale required by many of today's global producers," said AJ Abdallat, CEO of Beyond Limits. "That's why we're so excited to collaborate with NVIDIA, a leading provider of AI computing platforms. With NVIDIA technology support and expertise, Beyond Limits is better positioned to offer faster, more intelligent and efficient AI-based solutions for maximizing energy production and profitability."

Breakthroughs in novel high-performance AI solutions are projected to have significant impacts throughout the energy industry. One key challenge facing the upstream oil and gas sector is the resource requirement for optimizing well deployments, especially when data on a region's geological properties is highly uncertain. To overcome this problem, Beyond Limits developed a novel deep reinforcement learning (DRL) framework trained using NVIDIA A100 Tensor Core GPUs, capable of running 167,000 complex scenario simulations in 36 hours. Following initial tests, the DRL framework yielded a 208% increase in net present value (NPV) by predicting and recommending well placements, based on the number of actions explored and the expected financial return from reservoir production over time.
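
For intuition, here is a bandit-style toy of the learning loop described, with a stubbed "reservoir simulator"; Beyond Limits' actual DRL framework is far richer, and every number below is invented.

```python
import random

GRID = [(x, y) for x in range(10) for y in range(10)]  # candidate well sites
q = {cell: 0.0 for cell in GRID}                       # estimated NPV per site

def simulate_npv(cell):
    """Stand-in for a reservoir simulation returning a noisy NPV."""
    return -((cell[0] - 7) ** 2 + (cell[1] - 3) ** 2) + random.gauss(0, 2)

for _ in range(5000):
    # Mostly exploit the best-known site, sometimes explore a random one.
    cell = random.choice(GRID) if random.random() < 0.2 else max(q, key=q.get)
    reward = simulate_npv(cell)
    q[cell] += 0.1 * (reward - q[cell])  # incremental value update

print(max(q, key=q.get))  # tends to converge near the hidden optimum, (7, 3)
```

The real system replaces the toy simulator with physics-based reservoir models and the lookup table with a deep network, but the explore-simulate-update loop is the same shape.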

"The NVIDIA A100 offers the performance and reliability required to meet the demands of the modern-day energy sector," said Marc Spieler, Global Energy Director at NVIDIA. "The ability to process hundreds of thousands of AI simulations in real time provides the insight required for Beyond Limits to develop scalable applications that advance energy technologies."

Beyond Limits' Cognitive AI applies human-like reasoning to solve problems, combining encoded human knowledge with machine learning techniques and allowing systems to adapt and continue to operate even when data is in short supply or uncertain. As a result, Beyond Limits customers are able to elevate operational insights, improve operating conditions, enhance performance at every level, and ultimately increase profits. For more information, please visit https://www.beyond.ai/solutions/beyond-energy.

About Beyond Limits

Beyond Limits is an industrial and enterprise-grade artificial intelligence company built for the most demanding sectors including energy, utilities, and healthcare.

Beyond traditional artificial intelligence, Beyond Limits' unique Cognitive AI technology combines numeric techniques like machine learning with knowledge-based reasoning to produce actionable intelligence. Customers implement Beyond Limits AI to boost operational insights, improve operating conditions, enhance performance at every level, and ultimately increase profits as a result.

Founded in 2014, Beyond Limits leverages a significant investment portfolio of advanced technology developed at Caltech's Jet Propulsion Laboratory for NASA space missions. The company was recently honored by CB Insights on its 2020 AI 100 list of the most innovative artificial intelligence startups and by Frost & Sullivan with its North American Technology Innovation Award.

For more information, please visit http://www.beyond.ai.

How soft law is used in AI governance – Brookings Institution

As an emerging technology, artificial intelligence is pushing regulatory and social boundaries in every corner of the globe. The pace of these changes will stress the ability of public governing institutions at all levels to respond effectively. Their traditional toolkit, in the form of the creation or modification of regulations (also known as hard law), requires ample time and bureaucratic procedures to function properly. As a result, governments are unable to swiftly address the issues created by AI. An alternative to manage these effects is soft law, defined as a program that creates substantial expectations that are not directly enforceable by the government. As soft law grows in popularity as a tool to govern AI systems, it is imperative that organizations gain a better understanding of their current deployments and best practices, a goal we aim to facilitate with the launch of a new database documenting these tools.

The governance of emerging technologies has relied on soft law for decades. Entities such as governments, private sector firms, and non-governmental organizations have all attempted to address emerging technology issues through principles, guidelines, recommendations, private standards, and best practices, among others. Compared to their hard law counterparts, soft law programs are more flexible and adaptable, and any organization can create or adopt a program. Once programs are created, they can be adapted to reactively or proactively address new conditions. Moreover, they are not legally tied to specific jurisdictions, so they can easily apply internationally. Soft law can serve a variety of objectives: it can complement or substitute for hard law, operate as a main governance tool, or serve as a back-up option. For all these reasons, soft law has become the most common form of AI governance.

The main weakness of soft law governance tools is their lack of enforcement. In place of enforcement mechanisms, the proper implementation of soft law governance mechanisms relies on aligning the incentives of a program's stakeholders. Unless these incentives are clearly defined and well understood, the effectiveness and credibility of soft law will be questioned. To prevent the creation of soft law programs incapable of managing the risks of AI, it is important that stakeholders consider the inclusion of implementation mechanisms and appropriate incentives.

As AI methods and applications have proliferated, so too have soft law governance mechanisms to oversee them. To build on efforts to document soft law AI governance, the Center for Law, Science and Innovation at Arizona State University is launching a database with the largest compilation, to date, of soft law programs governing this technology. The data, available here, offer organizations and individuals interested in the soft law governance of AI with a reference library to compare and contrast original initiatives or draw inspiration for the creation of new ones.

Using a scoping review, the project identified 634 AI soft law programs published between 2001 and 2019 and labeled them using up to 107 variables and themes. Our data revealed several interesting trends. Among them, we found that AI soft law is a relatively recent phenomenon, with about 90% of programs created between 2017 and 2019. In terms of origin, higher-income regions and countries, such as the United States, United Kingdom, and Europe, were most likely to serve as a host to the creation of these instruments.

In the process of identifying stakeholders responsible for generating AI soft law, we found that government institutions have a prominent role in employing these programs. Specifically, more than a third (36%) were created by the public sector, which is evidence that usage of these tools is not confined to the private sector and that they can behave as a complement to traditional hard law in guiding AI governance. Multi-stakeholder alliances involving government, private sector, and non-profits and non-profit/private sector alliances followed with a 21% and 12% share of the programs, respectively.

We also looked at soft law's reliance on the alignment of incentives for implementation. Because government cannot levy a fee or penalty through these programs, stakeholders participating in soft law have to voluntarily agree to participate. Considering this, about 30% of programs in the database publicly mention enforcement or implementation mechanisms. We analyzed these measures and found that they can be divided into four quadrants: internal vs. external and levers vs. roles. The first dimension represents the location of the resources necessary for a mechanism's operation, whether it uses those located within an organization or externally through third parties. Meanwhile, levers are the toolkit of actions or mechanisms (e.g. committees, indicators, commitments, and internal procedures) that an organization can employ to implement or enforce a program. Its counterpart is characterized as roles. It describes how individuals, the most important resource of any organization, are arranged to execute the toolkit of levers.

Finally, in addition to identifying a program's characteristics, we labeled the text of programs. This was done by creating 15 thematic categories divided into 78 sub-themes that touch upon a wide variety of issues and make it possible to scrutinize how organizations interpret different aspects of AI. The three most labeled themes are education and displacement of labor, transparency and explainability, and ethics. Similarly, the most prevalent sub-themes were general transparency, general mentions of discrimination and bias, and AI literacy.

As AI proliferates and its governance challenges grow, soft law will become an increasingly important part of this technologys governance toolkit. An empirical understanding of the strengths and weaknesses of AI soft law will therefore be crucial for policymakers, technology companies, and civil society as they grapple with how to govern AI in a way that best harnesses its benefits, while managing its risks.

By creating the largest compilation of AI soft law programs, our aim is to provide a critical resource for policymakers in all sectors focused on responding to AI governance challenges. Its intent is to aid decision-makers in their pursuit of balancing the advantages and disadvantages of this tool and facilitate a deeper understanding of how and when these programs work best. To that end, we hope that the AI soft law database's initial findings can suggest mechanisms for improving the effectiveness and credibility of AI soft law, or even catalyze the creation of new kinds of soft law altogether. After all, the future of AI governance, and by extension AI soft law, is too important not to get right.

Carlos Ignacio Gutierrez is a governance of artificial intelligence fellow at Arizona State University. He completed his Ph.D. in Policy Analysis at the Pardee RAND Graduate School.

Gary Marchant is Regents Professor and Faculty Director of the Center for Law, Science & Innovation, Arizona State University.

Continue reading here:

How soft law is used in AI governance - Brookings Institution

AI Won’t Change Companies Without Great UX – Harvard Business Review

Executive Summary

As with the adoption of all technology, user experience trumps technical refinements. Many organizations implementing AI initiatives are making a mistake by focusing on smarter algorithms over compelling use cases. Use cases where people's jobs become simpler and more productive are essential to AI workplace adoption. Focusing on clearer, crisper use cases means better and more productive relationships between machines and humans. This article offers five use case categories (assistant, guide, consultant, colleague, boss) that emerge when companies choose AI-empowered people and processes over autonomous systems. Each describes how intelligent entities work together to get the job done and how, depending on the process, AI makes the human element matter even more.

As artificial intelligence algorithms infiltrate the enterprise, organizational learning matters as much as machine learning. How should smart management teams maximize the economic value of smarter systems?

Business process redesign and better training are important, but better use cases, the real-world tasks and interactions that determine everyday business outcomes, offer the biggest payoffs. Privileging smarter algorithms over thoughtful use cases is the most pernicious mistake I see in current enterprise AI initiatives. Something's wrong when optimizing process technologies takes precedence over how work actually gets done.

Unless we're actually automating a process (that is, taking humans out of the loop), AI algorithms should make people's jobs simpler, easier, and more productive. Identifying use cases where AI adds as much value to people's performance as to process efficiencies is essential to successful enterprise adoption. By contrast, companies committed to giving smart machines greater autonomy and control focus on governance and decision rights.

Strategically speaking, a brilliant data-driven algorithm typically matters less than thoughtful UX design. Thoughtful UX designs can better train machine learning systems to become even smarter. The most effective data scientists I know learn from use-case and UX-driven insights. At one industrial controls company, for example, the data scientists discovered that users of one of their smart systems informally used a dataset to help prioritize customer responses. That unexpected use case led to a retraining of the original algorithm.

Focusing on clearer, cleaner use cases means better and more productive relationships between AI and its humans. The division of labor becomes a source of design inspiration and exploration. The quest for better outcomes shifts from training smarter algorithms to figuring out how the use case should evolve. That drives machine learning and organizational learning alike.

Five dominant use case categories emerge when organizations pick AI-empowered people and processes over autonomous systems. Unsurprisingly, these categories describe how intelligent entities work together to get the job done and highlight that a personal touch still matters. Depending on the person, process, and desired outcome, AI can make the human element matter more.

Assistants

Alexa, Siri, and Cortana already embody real-world use cases for AI-assistantship. In Amazon's felicitous phrasing, assistants have skills enabling them to perform moderately complex tasks. Whether mediated by voice or chatbot, simple and straightforward interfaces make assistants fast and easy to use. Their effectiveness is predicated as much on people knowing exactly what they need as on algorithmic sophistication. As digital assistants become smarter and more knowledgeable, their task range and repertoire expands. The most effective assistants learn to prompt their users with timely questions and key words to improve both interactions and outcomes.

Guide

Where assistants perform requested tasks, guides help users navigate task complexity to achieve desired outcomes. Using Waze to drive through cross-town traffic troubled by construction is one example; using an augmented-reality tool to diagnose and repair a mobile device or HVAC system would be another. Guides digitally show and tell their humans what their next steps should be and, should missteps occur, suggest alternate paths to success. Guides are smart software sherpas whose domain expertise is dedicated to getting their users to desired destinations.

Consultant

In contrast to guides, consultants go well beyond navigation and destination expertise. AI consultants span use cases where workers need either just-in-time expertise or bespoke advice to solve problems. Consultants, like their human counterparts, offer options and explanations, as well as reasons and rationales. A software development project manager needs to evaluate scheduling trade-offs; AI consultants ask questions and elicit information, allowing specific next-step recommendations. AI consultants can include relevant links, project histories, and reports for context. More sophisticated consultants offer strategic advice to complement their tactical recommendations.

Consultants customize their functional knowledge (scheduling, budgeting, resource allocation, procurement, purchasing, graphic design, etc.) to their human clients' use case needs. They are robo-advisers dispassionately dispensing their domain expertise.

Colleague

A colleague is like a consultant but with a data-driven and analytic grasp of the local situation. That is, a colleague's domain expertise is the organization itself. Colleagues have access to the relevant workplace analytics, enterprise budgets, schedules, plans, priorities, and presentations to offer organizational advice to colleagues. Colleague use cases revolve around the advice managers and workers need to work more efficiently and effectively in the enterprise. An AI colleague might recommend referencing and/or attaching a presentation in an email; which project leaders to ask for advice; what budget template is appropriate for a requisition; what client contacts need an early warning; etc. Colleagues are more collaborator than tool; they offer data-driven organizational insight and awareness. Like their human counterparts, they serve as sounding boards that help clarify communications, aspirations, and risk.

Boss

Where colleagues and consultants advise, bosses direct. Boss AI tells its humans what to do next. Boss use cases eliminate options, choices and ambiguity in favor of dictates, decrees and directives to be obeyed. Start doing this; stop doing that; change this schedule; shrink that budget; send this memo to your team.

Boss AI is designed for obedience and compliance; the human in the loop must yield to the algorithm in the system. Boss AI represents the slippery slope to autonomy, the workplace counterpart to an autopilot taking over an airplane cockpit or an automotive collision avoidance system slamming on the brakes. Specific use cases and circumstances trigger human subordination to software. But bossware's true test is human: if humans aren't sanctioned or fired for disobedience, then the software really isn't a boss.

As the last example illustrates, these distinct categories can swiftly blur into each other. It's easy to conceive of scenarios and use cases where guides become assistants, assistants situationally escalate into colleagues, and consultants transform into bosses. But the fundamental differences and distinctions these five categories present should inject real rigor and discipline into imagining their futures.

Trust is implicit in all five categories. Do workers trust their assistants to do what they've been told, or guides to get them where they want to go? Do managers trust the competence of bossware, or that their colleagues won't betray them? Trust and transparency issues persist regardless of how smart AI software becomes, and they become even more important as the reasons for decisions become overwhelmingly complex and sophisticated. One risk: these artificial intelligences evolve, or devolve, into frenemies; that is, software that is simultaneously friend and rival to its human complement. Consequently, use cases become essential to identifying what kinds of interfaces and interactions facilitate human/machine trust.

Use cases may prove vital to empowering smart human/smart machine productivity. But reality suggests their ultimate value may come from how thoughtfully they accelerate the organization's advance to greater automation and autonomy. The true organizational impact and influence of these categories may be that they prove to be the best way for humans to train their successors.

Read this article:

AI Won't Change Companies Without Great UX - Harvard Business Review

Stressed on the job? An AI teammate may know how to help – MIT News

Humans have been teaming up with machines throughout history to achieve goals, be it by using simple machines to move materials or complex machines to travel in space. But advances in artificial intelligence today bring possibilities for even more sophisticated teamwork: true human-machine teams that cooperate to solve complex problems.

Much of the development of these human-machine teams focuses on the machine, tackling the technology challenges of training AI algorithms to perform their role in a mission effectively. But less focus, MIT Lincoln Laboratory researchers say, has been given to the human side of the team. What if the machine works perfectly, but the human is struggling?

"In the area of human-machine teaming, we often think about the technology for example, how do we monitor it, understand it, make sure it's working right. But teamwork is a two-way street, and these considerations aren't happening both ways. What we're doing is looking at the flip side, where the machine is monitoring and enhancing the other side the human," says Michael Pietrucha, a tactical systems specialist at the laboratory.

Pietrucha is among a team of laboratory researchers that aims to develop AI systems that can sense when a person's cognitive fatigue is interfering with their performance. The system would then suggest interventions, or even take action in dire scenarios, to help the individual recover or to prevent harm.

"Throughout history, we see human error leading to mishaps, missed opportunities, and sometimes disastrous consequences," says Megan Blackwell, former deputy lead of internally funded biological science and technology research at the laboratory. "Today, neuromonitoring is becoming more specific and portable. We envision using technology to monitor for fatigue or cognitive overload. Is this person attending to too much? Will they run out of gas, so to speak? If you can monitor the human, you could intervene before something bad happens."

This vision has its roots in decades-long research at the laboratory in using technology to "read" a person's cognitive or emotional state. By collecting biometric data such as video and audio recordings of a person speaking and processing these data with advanced AI algorithms, researchers have uncovered biomarkers of various psychological and neurobehavioral conditions. These biomarkers have been used to train models that can accurately estimate the level of a person's depression, for example.

In this work, the team will apply their biomarker research to AI that can analyze an individual's cognitive state, encapsulating how fatigued, stressed, or overloaded a person is feeling. The system will use biomarkers derived from physiological data such as vocal and facial recordings, heart rate, EEG and optical indications of brain activity, and eye movement to gain these insights.

The first step will be to build a cognitive model of an individual. "The cognitive model will integrate the physiological inputs and monitor the inputs to see how they change as a person performs particular fatiguing tasks," says Thomas Quatieri, who leads several neurobehavioral biomarker research efforts at the laboratory. "Through this process, the system can establish patterns of activity and learn a person's baseline cognitive state involving basic task-related functions needed to avoid injury or undesirable outcomes, such as auditory and visual attention and response time."

Once this individualized baseline is established, the system can start to recognize deviations from normal and predict if those deviations will lead to mistakes or poor performance.

"Building a model is hard. You know you got it right when it predicts performance," says William Streilein, principal staff in the Lincoln Lab's Homeland Protection and Air Traffic Control Division. "We've done well if the system can identify a deviation, and then actually predict that the deviation is going to interfere with the person's performance on a task. Humans are complex; we compensate naturally to stress or fatigue. What's important is building a system that can predict when that deviation won't be compensated for, and to only intervene then."

The possibilities for interventions are wide-ranging. On one end of the spectrum are minor adjustments a human can make to restore performance: drink coffee, change the lighting, get fresh air. Other interventions could suggest a shift change or transfer of a task to a machine or other teammate. Another possibility is using transcranial direct current stimulation, a performance-restoring technique that uses electrodes to stimulate parts of the brain and has been shown to be more effective than caffeine in countering fatigue, with fewer side effects.

On the other end of the spectrum, the machine might take actions necessary to ensure the survival of the human team member when the human is incapable of doing so. For example, an AI teammate could make the "ejection decision" for a fighter pilot who has lost consciousness or the physical ability to eject themselves. Pietrucha, a retired colonel in the U.S. Air Force who has had many flight hours as a fighter/attack aviator, sees the promise of such a system that "goes beyond the mere analysis of flight parameters and includes analysis of the cognitive state of the aircrew, intervening only when the aircrew can't or won't," he says.

Determining the most helpful intervention, and its effectiveness, depends on a number of factors related to the task at hand, dosage of the intervention, and even a user's demographic background. "There's a lot of work to be done still in understanding the effects of different interventions and validating their safety," Streilein says. "Eventually, we want to introduce personalized cognitive interventions and assess their effectiveness on mission performance."

Beyond its use in combat aviation, the technology could benefit other demanding or dangerous jobs, such as those related to air traffic control, combat operations, disaster response, or emergency medicine. "There are scenarios where combat medics are vastly outnumbered, are in taxing situations, and are every bit as tired as everyone else. Having this kind of over-the-shoulder help, something to help monitor their mental status and fatigue, could help prevent medical errors or even alert others to their level of fatigue," Blackwell says.

Today, the team is pursuing sponsorship to help develop the technology further. The coming year will be focused on collecting data to train their algorithms. The first subjects will be intelligence analysts, outfitted with sensors as they play a serious game that simulates the demands of their job. "Intelligence analysts are often overwhelmed by data and could benefit from this type of system," Streilein says. "The fact that they usually do their job in a 'normal' room environment, on a computer, allows us to easily instrument them to collect physiological data and start training."

"We'll be working on a basis set of capabilities in the near term," Quatieri says, "but an ultimate goal would be to leverage those capabilities so that, while the system is still individualized, it could be a more turnkey capability that could be deployed widely, similar to how Siri, for example, is universal but adapts quickly to an individual." In the long view, the team sees the promise of a universal background model that could represent anyone and be adapted for a specific use.

Such a capability may be key to advancing human-machine teams of the future. As AI progresses to achieve more human-like capabilities, while being immune from the human condition of mental stress, it's possible that humans may present the greatest risk to mission success. An AI teammate may know just how to lift their partner up.

View post:

Stressed on the job? An AI teammate may know how to help - MIT News

Building up its AI operations, GSK opens a $13M London hub with plans to woo talent now trekking to Silicon Valley – Endpoints News

Continuing its efforts to ramp up global AI operations, GlaxoSmithKline has opened a £10 million ($13 million-plus) research base in King's Cross, London.

The AI hotspot is already home to Google's DeepMind, and the Francis Crick and Alan Turing research institutes. GSK said it hopes to tap into the huge London tech talent pool and attract candidates who might otherwise head to Silicon Valley.

It's a vibrant ecosystem that has everything from outstanding medicine to being a big tech corridor. DeepMind is there. Google is there. It's near the Crick Institute, and of course modern computing was born, basically, with Alan Turing and the Turing Institute, GSK R&D president Hal Barron said at a London Tech Week fireside chat. So we are quite convinced that both the talent and the ecosystem will enable us to build a very vibrant hub in London, getting the top talent, the best thinkers and people to be able to interact with us in GSK to take technology and help us turn it into medicines.

The company believes AI has the power to vastly improve its drug discovery process. It claims that genetically validated drugs are twice as likely to be successful. And GSK has lots of genetic data to work with. The new workspace, located in the Stanley Building, has already lured in 30 scientists, 10 of whom are in the company's AI fellow program.

In fact, many biotechs are now turning to AI, which they believe can speed up successful development by analyzing hundreds of genes at once or rapidly screening billions of molecules.

GSK is focused on finding better medicines and vaccines not just better products, but finding them in better ways, so we are using functional genomics, human genetics and artificial intelligence and machine learning, the company said in a statement.

It also has AI researchers based in San Francisco and Boston, and aims to reach 100 AI-focused employees by mid-2021. Our goal is to have the best and brightest people in the world to join us, Barron said.

In AI, we are scouring the planet for the best people. These folks are very rare to find. Competition is high and there aren't a large number of them, Tony Wood, GSK's SVP of medicinal science and technology, told The Guardian in December.

The new London hub has the capacity for 60 to 80 staff members. Now all that's left to do is fill it.

Continued here:

Building up its AI operations, GSK opens a $13M London hub with plans to woo talent now trekking to Silicon Valley - Endpoints News

How the Army plans to revolutionize tanks with artificial intelligence – C4ISRNet

Even as the U.S. Army attempts to integrate cutting edge technologies into its operations, many of its platforms remain fundamentally in the 20th century.

Take tanks, for example.

The way tank crews operate their machine has gone essentially unchanged over the last 40 years. At a time when the military is enamored with robotics, artificial intelligence and next generation networks, operating a tank relies entirely on manual inputs from highly trained operators.

Currently, tank crews use a very manual process to detect, identify and engage targets, explained Abrams Master Gunner Sgt. 1st Class Dustin Harris. Tank commanders and gunners are manually slewing, trying to detect targets using their sensors. Once they come across a target they have to manually select the ammunition that they're going to use to service that target, lase the target to get an accurate range to it, and a few other factors.

The process has to be repeated for each target.

That can take time, he added. Everything is done manually still.

On the 21st century battlefield, it's an anachronism.

Army senior leaders recognize that the way the crews in the tank operate is largely analogous to how these things were done 30, 45 years ago, said Richard Nabors, acting principal deputy for systems and modeling at the DEVCOM C5ISR Center.

These senior leaders, many of them with extensive technical expertise, recognized that there were opportunities to improve the way that these crews operate, he added. So they challenged the Combat Capabilities Development Command, the Armaments Center and the C5ISR Center to look at the problem.

On Oct. 28, the Army invited reporters to Aberdeen Proving Ground to see their solution: the Advanced Targeting and Lethality Aided System, or ATLAS.

ATLAS uses advanced sensors, machine learning algorithms and a new touchscreen display to automate the process of finding and firing on targets, allowing crews to respond to threats faster than ever before.

The assistance that we're providing to the soldiers will speed up those engagement times [and] allow them to execute multiple targets in the same time that they currently take to execute a single target, said Dawne Deaver, C5ISR project lead for ATLAS.

At first glance, the ATLAS prototype the Army had set up looked like something out of a Star Wars film, albeit with treads and not easily harpooned legs. The system was installed on a mishmash of platforms: a sleek black General Dynamics Griffin I chassis with the Army's Advanced Lethality and Accuracy System for Medium Caliber (ALAS-MC) auto-loading 50mm turret stacked on top.

And mounted on top of the turret was a small round Aided Target Recognition (AiTR) sensor, a mid-wave infrared imaging sensor to be more exact. Constantly rotating to scan the battlefield, the sensor almost had a life of its own, not unlike an R2 unit on the back of an X-Wing.

Trailing behind the tank and connected via a series of long black cables was a black M113. For this demonstration, the crew station was located inside the M113, not the tank itself. Cavernous compared to the inside of an Abrams tank, the M113 had three short seats lined up. At the forward-most seat was a touchscreen display and a video game-like controller for operating the tank, while further back computer monitors displayed ATLAS' internal processes.

Of course, ATLAS isn't the tank itself, or even the M113 connected to it. The chassis served as a surrogate for either a future tank, fighting vehicle or even a retrofit of current vehicles, while the turret was an available program being developed by the Armaments Center. The M113 is not really meant to be involved at all, but the Army decided to remotely locate the crew station inside of it for safety concerns during a live fire demonstration expected to take place in the coming weeks. ATLAS, Army officials reminded observers again and again, is agnostic to the chassis or turret it's installed on.

So if ATLAS isn't the tank, what is it?

Roughly speaking, ATLAS is the mounted sensor collecting data, the machine learning algorithm processing that data, and the display/controller that the crew uses to operate the tank.

Here's how it works:

ATLAS starts with the optical sensor mounted on top of the tank. Once activated, the sensor continuously scans the battlefield, feeding that data into a machine learning algorithm that automatically detects threats.

Images of those threats are then sent to a new touchscreen display, the graphical user interface for the tank's intelligent fire control system. The images are lined up vertically on the left side of the screen, with the main part of the display showing what the gun is currently aimed at. Around the edges are a number of different controls for selecting ammunition, fire type, camera settings and more.

By simply touching one of the targets on the left with your finger, the tank automatically swivels its gun, training its sights on the dead center of the selected object. As it does that, the fire control system automatically recommends the appropriate ammo and setting, such as burst or single shot, to respond with, though the user can adjust these as needed.

So with the target in its sights and weapon selected, the operator has a choice: approve the AI's recommendations and pull the trigger, adjust the settings before responding, or disengage. The entire process from target detection to the pull of the trigger can take just seconds. Once the target is destroyed, the operator can simply touch the screen to select the next target picked up by ATLAS.
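For readers who think in code, the workflow above reduces to a short human-in-the-loop sketch. This is purely illustrative, since the Army has not published ATLAS internals; every name, rule, and value below is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of the ATLAS-style engagement flow described above:
# the algorithm detects and recommends, the human always decides.

@dataclass
class Threat:
    label: str        # e.g. "light armor", from the target-recognition model
    range_m: float

def recommend(threat: Threat) -> dict:
    # Stand-in for the intelligent fire control system: map a detected
    # threat to an ammunition and fire-mode recommendation.
    ammo = "airburst" if threat.label == "infantry" else "armor_piercing"
    mode = "burst" if threat.range_m < 800 else "single"
    return {"ammo": ammo, "mode": mode}

def engage(threats, operator_approves):
    for threat in threats:
        rec = recommend(threat)
        # The operator can approve, adjust, or disengage; the human
        # makes the final decision on every target.
        if operator_approves(threat, rec):
            print(f"engaging {threat.label} at {threat.range_m:.0f} m with {rec}")
        else:
            print(f"operator declined: {threat.label}")

engage([Threat("light armor", 1200.0), Threat("infantry", 400.0)],
       operator_approves=lambda t, r: t.label != "infantry")
```

The design point the article emphasizes is visible in the structure: the recommendation function never fires on its own; the approval callback, the human, sits between detection and action.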

In automating what are now manual tasks, the aim of ATLAS is to reduce end-to-end engagement times. Army officials declined to characterize how much faster ATLAS is than a traditional tank crew. However, a demo video shown at Aberdeen Proving Ground claimed ATLAS allows the operator to engage three targets in the time it now takes to just engage one.

ATLAS is essentially a marriage between technologies developed by the Armys C5ISR Center and the Armaments Center.

We are integrating, experimenting and prototyping with technology from the C5ISR Center, things like advanced EO/IR targeting sensors and aided target recognition algorithms. We're taking those technology products and integrating them with intelligent fire control systems from the Armaments Center to explore efficiencies between those technologies that can basically buy back time for tank crews, explained Ground Combat Systems Division Deputy Director Jami Davis.

Starting in August, the Army began bringing in small groups of tank operators to test out the new system, mostly using a new virtual reality setup that replicates the ATLAS display and controller. By gathering soldier feedback early, the Army hopes that they can improve the system quickly and make it ready for fielding that much faster. Already, the Army has brought in 40 soldiers. More soldier touchpoints and a live fire demonstration are anticipated to help the Army mature its product.

In some ways, ATLAS replicates the AI capabilities demonstrated at Project Convergence in miniature. Project Convergence is the Army's new campaign of learning, designed to integrate new sensor, AI and network capabilities to transform the battlefield. In September, the Army hauled many of its most advanced technologies to the desert at Yuma Proving Ground, then tried to connect them in new ways. In short, at Project Convergence the Army tried to create an environment where it could connect any sensor to the best shooter.

The Army demonstrated two types of AI at Project Convergence. First were the automatic target recognition AIs. These machine learning algorithms processed the massive amount of data picked up by the Army's sensors to detect and identify threats on the battlefield, producing targeting data for weapon systems to utilize.

The second type of AI was used for fire control, and is represented by FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM. Taking in the targeting data from the other AI systems, FIRESTORM automatically looks at the weapons at the Armys disposal and recommends the best one to respond to any given threat.

While ATLAS does not yet have the networking components that tied Project Convergence together across domains, it essentially performs those two tasks: its AI automatically detects threats and recommends the best response to the human operators. Although the full ATLAS system wasn't hauled out to Project Convergence this year, the Army was able to bring the virtual prototyping setup to Yuma Proving Ground, and there is hope that ATLAS itself could be involved next year.

To be clear: ATLAS is not meant to replace tank crews. It's meant to make their jobs easier, and in the process, much faster. Even if ATLAS is widely adopted, crews will still need to be trained for manual operations in case the system breaks down. And they'll still need to rely on their training to verify the algorithm's recommendations.

We can assist the soldier and reduce the number of manual tasks that they have to do while still retaining the soldiers' ability to always override the system, to always make the final decision of whether or not the target is a threat, whether or not the firing solution is correct, and that they can make that decision to pull the trigger and engage targets, explained Deaver.

Read more:

How the Army plans to revolutionize tanks with artificial intelligence - C4ISRNet

Physicists Teach AI to Identify Exotic States of Matter | WIRED – WIRED

Put a tray of water in the freezer. For a while, it's liquid. And then, boom: the molecules stack into little hexagons, and you've got ice. Pour supercold liquid nitrogen onto a wafer of yttrium barium copper oxide, and suddenly electricity flows through the compound with less resistance than beer down a college student's throat. You've got a superconductor.

Those drastic alterations in physical properties are called phase transitions, and physicists love them. It's as if they could spot the exact instant Dr. Jekyll morphs into Mr. Hyde. If they could just figure out exactly how the upstanding doctor's body metabolized the secret formula, maybe physicists could understand how it turns him evil. Or make more Mr. Hydes.

A human physicist might never have the neural wetware to see a phase transition, but now computers can. In two papers published in Nature Physics today, two independent groups of physicists, one based at Canada's Perimeter Institute, the other at the Swiss Federal Institute of Technology in Zurich, show that they can train neural networks to look at snapshots of just hundreds of atoms and figure out what phase of matter they're in.

And it works pretty much like Facebook's auto-tags. We kind of repurposed the technology they use for image recognition, says physicist Juan Carrasquilla, who co-authored the Canadian paper and now works for quantum computing company D-Wave.

Of course, facial recognition, water turning to ice, and Jekylls turning into Hydes aren't really the scientists' bag. They want to use artificial intelligence to understand fringey phenomena with potential commercial applications, like why some materials become superconductors only near absolute zero but others transition at a balmy -150 degrees Celsius. The high-temperature superconductors that might be useful for technology, we actually understand them very poorly, says physicist Sebastian Huber, who co-wrote the Swiss paper.

They also want to better understand exotic phases of matter called topological states, in which quantum particles act even weirder than usual. (The physicists who discovered these new phases nabbed the Nobel Prize last October.) Quantum particles like photons or atoms change their physical states relatively easily, but topological states are sturdy. That means they might be useful for building data storage for quantum computers, if you were a company like, say, Microsoft.

The research was about more than identifying phases; it was about understanding transitions. The Canadian group trained their computer to find the temperature at which a phase transition occurred to 0.3 percent accuracy. The Swiss group showed an even trickier move, because they got their neural network to understand something without training it ahead of time. Typically in machine learning, you give the neural network a goal: Figure out what a dog looks like. You train the network with 100,000 pictures, Huber says. Whenever a dog is in one, you tell it. Whenever there isn't, you tell it.

But the physicists didn't tell their network about phase transitions at all: They just showed the network collections of particles. The phases were different enough that the computer could identify each one. That's a level of skill acquisition that Huber thinks will eventually allow neural networks to discover entirely new phases of matter.
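The supervised version of this idea is easy to mimic in miniature. Below is a toy sketch, assuming numpy and scikit-learn rather than the papers' actual convolutional networks: label synthetic lattice snapshots as ordered or disordered at the extremes, train a classifier, then scan the middle to see where its vote flips, loosely analogous to how the Canadian group located the transition temperature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in, not the papers' method: classify "spin" snapshots and
# locate the crossover by scanning between the two clear-cut regimes.

rng = np.random.default_rng(0)
L = 20  # 20x20 lattice of +/-1 spins

def snapshots(flip_prob, n):
    # Start fully ordered, then flip each spin with probability flip_prob.
    # Low flip_prob mimics the ordered phase, high mimics the disordered one.
    flips = rng.random((n, L * L)) < flip_prob
    return np.where(flips, -1.0, 1.0)

# Labeled training data only from the two extremes
x = np.vstack([snapshots(0.05, 500), snapshots(0.5, 500)])
y = np.array([0] * 500 + [1] * 500)  # 0 = ordered, 1 = disordered
clf = LogisticRegression(max_iter=1000).fit(x, y)

# Scan intermediate noise levels; the "transition" sits roughly where
# the classifier's average vote crosses 0.5
for p in np.linspace(0.1, 0.45, 8):
    vote = clf.predict_proba(snapshots(p, 200))[:, 1].mean()
    print(f"flip_prob={p:.2f}  disordered_vote={vote:.2f}")
```

The real experiments used Monte Carlo samples of physical models and deep networks, but the methodology, train on clearly labeled phases and read the transition off the classifier's output, is the same shape.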

These new successes aren't just academic. In the hunt for stronger, cheaper, or otherwise better materials, researchers have been using machine learning for a while. In 2004, a collaboration that included NASA and GE developed a strong, durable alloy for aircraft engines using neural networks by simulating the materials before troubleshooting them in the lab. And machine learning is way faster than, say, simulating the properties of a material on a supercomputer.

Still, the phase transition simulations that the physicists studied were simple compared to the real world. Before these speculative materials end up in your new gadgets, the physicists will need to figure out how to make neural networks parse 10²³ particles at a time, not just hundreds, but 100 sextillion. But Carrasquilla already wants to show real experimental data to his neural network, to see if it can find phase changes. The computer of the future might be smart enough to tag your grandma's face in photos, and discover the next wonder material.

Read this article:

Physicists Teach AI to Identify Exotic States of Matter | WIRED - WIRED

Google Has Started Adding Imagination to Its DeepMind AI – Futurism

Researchers have started developing artificial intelligence with imagination: AI that can reason through decisions and make plans for the future, without being bound by human instructions.

Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.

The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven't been specifically programmed for. Insert your usual fears of a robot uprising here.

When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall, explain the researchers in a blog post. On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking.

If our algorithms are to develop equally sophisticated behaviours, they too must have the capability to imagine and reason about the future. Beyond that they must be able to construct a plan using this knowledge.

We've already seen a version of this forward planning in the Go victories that DeepMind's bots have scored over human opponents recently, as the AI works out the future outcomes that will result from its current actions.

The rules of the real world are much more varied and complex than the rules of Go though, which is why the team has been working on a system that operates on another level.

To do this, the researchers combined several existing AI approaches together, including reinforcement learning (learning through trial and error) and deep learning (learning through processing vast amounts of data in a similar way to the human brain).

What they ended up with is a system that mixes trial-and-error with simulation capabilities, so bots can learn about their environment then think before they act.

One of the ways they tested the new algorithms was with a 1980s video game called Sokoban, in which players have to push crates around to solve puzzles. Some moves can make the level unsolvable, so advanced planning is needed, and the AI wasn't given the rules of the game beforehand.

The researchers found their new imaginative AI solved 85 percent of the levels it was given, compared with 60 percent for AI agents using older approaches.
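The think-before-acting loop itself is simple to sketch, assuming a perfect environment model (which the real system has to learn rather than being given). The toy below plans in a one-dimensional world with an unrecoverable state, loosely echoing Sokoban's unsolvable positions; it is not DeepMind's architecture, which combines imagined rollouts with model-free deep reinforcement learning.

```python
# Minimal sketch of "imagining before acting": evaluate each candidate
# action by rolling an environment model forward, then act on the best
# imagined outcome. Goal: reach position 5 without dropping below 0,
# which is irreversible, like pushing a crate into a corner.

def step(state, action):          # the (assumed-perfect) environment model
    return state + action         # actions: -1 or +1

def reward(state):
    if state == 5:
        return 1.0                # reached the goal
    if state < 0:
        return -1.0               # unrecoverable mistake
    return 0.0

def imagine(state, depth=6, gamma=0.9):
    # Best discounted reward achievable within `depth` imagined steps;
    # the discount makes sooner successes preferable, so the planner
    # doesn't dither between equally reachable futures.
    r = reward(state)
    if r != 0.0 or depth == 0:
        return r
    return gamma * max(imagine(step(state, a), depth - 1, gamma)
                       for a in (-1, +1))

def plan(state):
    # Pick the action whose imagined future looks best.
    return max((-1, +1), key=lambda a: imagine(step(state, a)))

s = 0
for _ in range(5):
    s = step(s, plan(s))
print(s)  # -> 5: the planner avoids the irreversible region and reaches the goal
```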

The imagination-augmented agents outperform the imagination-less baselines considerably, say the researchers. They learn with less experience and are able to deal with the imperfections in modelling the environment.

The team noted a number of improvements in the new bots: they could handle gaps in their knowledge better, they were better at picking out useful information for their simulations, and they could learn different strategies to make plans with.

It's not just advance planning; it's advance planning with extra creativity, so potential future actions can be combined together or mixed up in different ways in order to identify the most promising routes forward.

Despite the success of DeepMind's testing, it's still early days for the technology, and these games are still a long way from representing the complexity of the real world. Still, it's a promising start in developing AI that won't put a glass of water on a table if it's likely to spill over, plus all kinds of other, more useful scenarios.

Further analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about and plan for the future, conclude the researchers.

The researchers also created a video of the AI in action, which you can see below:

You can read the two papers published to the pre-print website arXiv.org here and here.

Read this article:

Google Has Started Adding Imagination to Its DeepMind AI - Futurism

Delving Into the Weaponization of AI – Infosecurity Magazine

Digital transformation continues to multiply the potential attack surface exponentially, bringing new opportunities for the cyber-criminal community. In addition to their expanding arsenal of sophisticated malware and zero day threats, AI and machine learning are new tools being added to their toolbox. To the surprise of almost no-one, AI is being weaponized by cyber adversaries.

Leveraging AI and automation enables bad actors to commit more attacks at a faster rate, and that means security teams are going to have to likewise quicken their speed to keep up. Adding fuel to the fire, this is happening in real time, and we're seeing rapid development, so there is little time for deciding whether to deploy your own AI countermeasures.

AI offers cyber actors more bang for the buck

Just like their victims, cyber actors are subject to economic realities: zero day threats can cost upwards of six figures to identify and exploit; developing new threats and malware takes time and can be expensive, as can renting Malware-as-a-Service tools off the dark web. Like anyone else, they are looking to get the most bang for their buck; that means getting the most ROI with the least amount of overhead expenditure, including money, time, and effort, while maximizing the efficiency and efficacy of the tools they're using.

Using AI and ML enables cyber-criminals to create malware that can seek out vulnerabilities on its own and then autonomously determine which payloads will be the most successful, without exposing itself through constant communications back to its C2 server.

We have already seen multi-vector attacks combined with advanced persistent threats (APTs) or an array of payloads. AI accelerates the effectiveness of these tools by autonomously learning about targeted systems so attacks can be laser focused rather than taking the usual slower, scattershot approach that can alert a victim that they are under attack.

AI reduces time to breach

We can all expect attacks to become faster than ever before, especially as technologies such as 5G connections are added to networks. 5G also enables edge devices to communicate faster, creating ad hoc networks that are harder to secure and easier to exploit. This can lead to swarm-based attacks where individual elements perform a specific function as part of a larger, coordinated attack.

When you incorporate AI into a network of connected devices that can communicate at 5G speeds, you create a scenario where those devices can not only launch an attack on their own, but customize that attack at digital speeds based on what it learns during the attack process.

With swarm technology, intelligent swarms of bots can share information and learn from each other in real time. By incorporating self-learning technologies, cyber-criminals can create attacks capable of quickly assessing vulnerabilities and then applying methods to counter efforts to stop them.

AI-based cyber-attacks will be more affordable

Traditional cyber weapons are complex and expensive for humans to build. Because of this, they can sell for a lot of money on the dark web. With AI in place, bad actors will be able to build weapons far more quickly, in greater quantity, and with more flexibility than ever before.

This will decrease their black market value, while at the same time, these AI-based weapons will be more plentiful and readily available to a greater number of people. In the age-old battle of quality versus quantity, threat actors will no longer need to choose: quantity will increase while quality will improve as well.

AI is AI's greatest enemy

Solutions that use AI-based strategies are the only effective defense against AI-enhanced attack strategies. However, AI takes time, often years, and specialized skills to develop and train; it is far more than the specialized scripts many vendors label as AI. Because not everyone understands what goes into a legitimate AI solution, enterprises looking to fight fire with fire can be left in a quandary as to which solutions they should select.

This decision is critical, as future cyber battles may evolve into Flash Wars where interactions between defensive and adversarial AI systems become so fast that the attack/defense cycle is over in microseconds. Like traditional stock traders trying to compete against systems that can bid for stocks using algorithms and AI/ML models, network security professionals do not want to have to compete without having the right tools in place.

Preparing now for the coming challenges

Swarm-based network attacks are still likely a couple of years away, but the impact of AI-enhanced threats is right around the corner. Enterprises need to start preparing now for this reality, and it starts with basic cybersecurity hygiene. This is about more than just having a patching and updating program in place; it also includes having proper security architectures and segmentation in place to reduce a company's attack surface and prevent hackers from gaining access to the wider system.

Collaboration is another key component to combatting the weaponization of AI. Security solutions need to be able to see and share threat intelligence, and participate in a unified and coordinated response to a detected threat, even across different network ecosystems such as multi-cloud environments.

Deception is another important tool to add to your arsenal, one that will increase in importance as attacks become faster and more sophisticated. It's essentially counterintelligence: deploying decoys across the network to lure in attackers and unmask them, because they're unable to tell which assets are real and which are fake.
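A minimal version of that idea can be sketched in a few lines: a decoy listener that no legitimate workload ever touches, so any connection is itself a high-confidence alert. The port choice and log format here are invented, and real deception platforms go much further, deploying whole networks of convincing fake assets.

```python
import socket
from datetime import datetime, timezone

# Minimal illustration of the deception idea: a decoy "service" that does
# nothing except log whoever touches it. Port and log format are invented.

DECOY_PORT = 2222  # looks like a forgotten SSH service; hypothetical choice

def run_decoy():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", DECOY_PORT))
        srv.listen()
        while True:
            conn, (addr, port) = srv.accept()
            # No legitimate system ever uses this port, so any touch
            # is a signal worth alerting on.
            print(f"{datetime.now(timezone.utc).isoformat()} decoy hit from {addr}:{port}")
            conn.close()

if __name__ == "__main__":
    run_decoy()
```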

AI gives security teams the upper hand in the cyber arms race

As threat actors gain lower-latency and more intelligent attack resources, security teams will have to respond with even greater speed and intelligence. Humans alone cannot respond to these coming threats, and neither can the traditional security solutions they have in place. Instead, defensive strategies will have to incorporate advanced automation technology, including ML and AI.

Ultimately, enterprises have far more resources available to them than cyber-criminals do. Teams that can incorporate technologies like machine learning and AI into their cyber defenses will be able to build the quintessential security system that will not only enable them to survive but, for the first time ever, gain the upper hand in the escalating cyber war.

Go here to see the original:

Delving Into the Weaponization of AI - Infosecurity Magazine

What investment trends reveal about the global AI landscape – Brookings Institution

We aren't what we were in the 50s and 60s and 70s, former Secretary of Defense Ash Carter recently reflected. In those days, all technology of consequence for protecting our people, and all technology of any consequence at all, came from the United States and came from within the walls of government. Those days are irrevocably lost. To get that technology now, I've got to go outside the Pentagon no matter what, Carter added.

The former Pentagon chief may be overstating the case, but when it comes to artificial intelligence, there's no doubt that the private sector is in command. Around the world, nations and their governments rely on private companies to build their AI software, furnish their AI talent, and produce the AI advances that underpin economic and military competitiveness. The United States is no exception.

With Big Tech's titans and endless machine-learning startups racing ahead on AI, it's easy to imagine that the public sector has little to contribute. But the federal government's choices on R&D policy, immigration, antitrust, and government contracting could spell the difference between growth and stagnation for America's AI industry in the coming years. Meanwhile, as AI booms in other countries, diplomacy and trade policy can help the United States and its private sector take greatest advantage of advances abroad, and protective measures against industrial espionage and unfair competition can help keep America ahead of its adversaries.

Smart policy starts with situational awareness. To achieve the outcomes they intend and avoid unwanted distortions and side effects in the market, American policymakers need to understand where commercial AI activity takes place, who funds it and carries it out, which real-world problems AI companies are trying to solve, and how these facets are changing over time. Our latest research focuses on venture capital, private equity, and M&A deals from 2015 through 2019, a period of rapid growth and differentiation for the global AI industry.

Although the COVID-19 pandemic has since disrupted the market, with implications for AI that are still unfolding, studying this period helps us understand the foundations of today's AI sector, and where it may be headed.

America leads, but doesn't dominate

Contrary to narratives that Beijing is outpacing Washington in this field, the United States remains the leading destination for global AI investments. China is making meaningful investments in AI, but in a diverse, global playing field it is one player among many.

As of the end of 2019, the United States had the world's largest investment market in privately held AI companies, including startups as well as large companies that aren't traded on stock exchanges. We estimate AI companies attracted nearly $40 billion globally in disclosed investment in 2019 alone, as shown in Figure 1. American companies attracted the lion's share of that investment: $25.2 billion in disclosed value (64% of the global total) across 1,412 transactions. (These disclosed totals significantly understate U.S. and global investment, since many deals and deal values are undisclosed, so total transaction values were probably much higher.)

Around the world, private-market AI investment grew tremendously from 2015 to 2019, especially outside China. Notwithstanding occasional claims in the media that China is outstripping U.S. investment in AI, we find that Chinese investment levels in fact continue to lag behind the United States. Consistent with broader trends in China's tech sector, the Chinese AI market saw a dramatic boom from 2015 to 2017, prompting many of those media claims. But over the following two years investment declined sharply, resulting in little net growth in the annual level of investment from 2015 to 2019.

Figure 1: Total disclosed value of equity investments in privately held AI companies, by target region

Although America's nearest rival for AI supremacy may not have taken the lead, our data suggest the United States shouldn't grow complacent. America's AI companies remain ahead in overall transaction value, but they account for a steadily shrinking percentage of global transactions. And by our estimates, investment outside the United States and China is quickly expanding, with Israel, India, Japan, Singapore, and many European countries growing faster than their larger competitors by some or all metrics.

Figure 2: Investment activity and growth in the top 10 target countries (ranked by disclosed value)

Chinese investors play a meaningful but limited role

China's investments abroad are attracting mounting scrutiny, but in the American AI investment market, Chinese investors are relatively minor players. In 2019, we estimate that disclosed Chinese investors participated in 2% of investments into American AI companies, down from a peak of only 5% in 2016. As Figure 3 makes clear, the Chinese investors in our dataset generally seem to invest in Chinese AI companies instead.

Figure 3: Investment events with at least one Chinese investor participant, by target region

There was also little evidence in our data that disclosed Chinese investors seek out especially sensitive companies or technologies, such as defense-related AI, when they invest outside China. That said, our data are limited; some Chinese investors may be undisclosed or operate through foreign subsidiaries that obscure their interests. And aggregate trends are of course only one part of the picture. Some China-based investors clearly invest abroad in order to extract security-sensitive information or technology. These efforts deserve scrutiny. But overall, it seems that disclosed Chinese investors, and any bad actors among them, are a relatively small piece of a larger and more diverse AI investment market.

Few AI companies focus on public-sector needs

When it comes to specific applications, we found that most AI companies are focused on transportation, business services, or general-purpose applications. There are some differences across borders: Compared to the rest of the world, investment into Chinese AI companies is concentrated in transportation, security and biometrics (including facial recognition), and arts and leisure, while in the United States and other countries, companies focused on business uses, general-purpose applications, and medicine and life sciences attract more capital.

Across all countries, though, relatively few private-market investments seem to be flowing to companies that focus squarely on military and government AI applications. Even the related category of security and biometrics is relatively small, though materially larger in China. Governments can and do adapt commercial AI tools for their own purposes, but for the time being, relatively few AI startups seem to be working and raising funds with public-sector clients in mind, especially outside China.

Figure 4: Regional investment targets by application area

The bottom line on global AI

The world's AI landscape is changing fast, and a plethora of unpredictable geopolitical factors, from U.S.-China decoupling to COVID-related disruptions, counsel against confident claims about where the global AI landscape is headed next. Still, our estimates of investment around the world point to fundamental, longer-term trends unlikely to vanish anytime soon. These trends have important implications for policy:

Go here to read the rest:

What investment trends reveal about the global AI landscape - Brookings Institution

This AI Will Tell You Who The Next Great Football Player Will Be – Interesting Engineering

Computer scientists at Loughborough University have engineered artificial intelligence (AI) algorithms that can analyze football (that's soccer for you fellow Americans) players' abilities on the field. Dr. Baihua Li, the project lead, says the novel technology could revolutionize the sport by effectively enabling teams to properly identify the right talent to recruit.

Currently, player performance analysis is a long and labor-intensive process that sees an individual watch many video recordings of a player's performances. This process is time-consuming and could be faulty as it relies on human judgment which is often influenced by bias.

Although some automated technologies exist today, they are only able to track players on the pitch. To resolve this issue, Li and her team developed a hybrid system where human data entry can be supplemented by camera-based automated methods.

The team has made use of the latest advances in computer vision, deep learning, and AI to achieve three outcomes:

1. Detecting body pose and limbs to identify actions

2. Tracking players to get individual performance data

3. Camera stitching: using two low-cost consumer-grade cameras, such as GoPros, each recording half of the football field, to get a full picture (a code sketch of this step follows below)
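The stitching step referenced in outcome 3 can be sketched with off-the-shelf tools. The following is an illustration assuming OpenCV's high-level Stitcher from the opencv-python package; the file names are hypothetical, and the Loughborough team's actual pipeline has not been published.

```python
import cv2

# Hedged sketch of the camera-stitching step: two consumer cameras each
# record half of the pitch, and OpenCV's Stitcher finds matching features
# in the overlap and warps the frames into one full-field panorama.

left = cv2.imread("left_half.jpg")    # hypothetical file names
right = cv2.imread("right_half.jpg")

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, full_pitch = stitcher.stitch([left, right])

if status == cv2.Stitcher_OK:
    cv2.imwrite("full_pitch.jpg", full_pitch)  # full-field view for tracking
else:
    # Typical failure mode: too little overlap between the two views
    print(f"stitching failed with status {status}")
```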

Li believes her new system will aid in getting the data needed for accurate player performance analysis and talent identification. There is also the potential to adapt the technology to be used in other sports.

Performance data and match analysis in football is an essential part of the sport and can have a huge impact on player and team performance.

The developed technology will allow a much more objective interpretation of the game as it highlights the skills of players and team cooperation.

This innovation will have a positive impact on the football industry and further advance sports technology while providing value to the players, coaches, and recruiters that use the data," Li concluded.

Here is the original post:

This AI Will Tell You Who The Next Great Football Player Will Be - Interesting Engineering