A US Air Force pilot is taking on AI in a virtual dogfight – here's how to watch it – The Next Web

An AI-controlled fighter jet will battle a US Air Force pilot in a simulated dogfight next week, and you can watch the action online.

The clash is the culmination of DARPA's AlphaDogfight competition, which the Pentagon's mad-science wing launched to increase trust in AI-assisted combat. DARPA hopes this will raise support for using algorithms in simpler aerial operations, so pilots can focus on more challenging tasks, such as organizing teams of unmanned aircraft across the battlespace.

The three-day event was scheduled to take place in-person in Las Vegas from August 18-20, but the COVID-19 pandemic led DARPA to move the event online.

Before the teams take on the Air Force on August 20, the eight finalists will test their algorithms against five enemy AIs developed by Johns Hopkins Applied Physics Laboratory. Their mission is to recognize and exploit the weaknesses and mistakes of their rivals, and maneuver to a position of control beyond the enemy's weapon employment zone.

The next day, the teams will compete against each other in a round-robin tournament. The top four will then enter a single-elimination tournament for the AlphaDogfight Trials Championship. Finally, the winner will take on a US Air Force fighter pilot flying a virtual-reality F-16 simulator, to test whether their system can vanquish the military's elite.

The contest aims to develop a base of AI developers for DARPA's Air Combat Evolution (ACE) program, which is trying to further automate aerial combat. Dogfighting is expected to be a rare part of this in the future, but the duels will provide evidence that AI can handle a high-end fight.

"Regardless of whether the human or machine wins the final dogfight, the AlphaDogfight Trials is all about increasing trust in AI," said Colonel Dan "Animal" Javorsek, program manager in DARPA's Strategic Technology Office. "If the champion AI earns the respect of an F-16 pilot, we'll have come one step closer to achieving effective human-machine teaming in air combat, which is the goal of the ACE program."

You can watch the battles unfold by signing up online. Registration closes on August 17 for US citizens, and on August 11 for everyone else. If you're not a US citizen, you'll also need to submit one of DARPA's visit request forms.

Published August 10, 2020 12:28 UTC

How AI and ML Applications Will Benefit from Vector Processing – EnterpriseAI

As expected, artificial intelligence (AI) and machine learning (ML) applications are already having an impact on society. Many industries that we tap into dailysuch as banking, financial services and insurance (BFSI), and digitized health carecan benefit from AI and ML applications to help them optimize mission-critical operations and execute functions in real time.

The BFSI sector is an early adopter of AI and ML capabilities. Natural language processing (NLP) is being implemented for personally identifiable information (PII) privacy compliance, chatbots and sentiment analysis, for example mining social media data for underwriting and credit scoring, as well as investment research. Predictive analytics assess which assets will yield the highest returns. Other AI and ML applications include digitizing paper documents and searching through massive document databases. Additionally, anomaly detection and prescriptive analytics are becoming critical tools for the cybersecurity side of BFSI, used for fraud detection and anti-money laundering (AML).

Scientists searching for solutions to the COVID-19 pandemic rely heavily on data acquisition, processing and management in health care applications. They are turning to AI, ML and NLP to track and contain the coronavirus, as well as to gain a more comprehensive understanding of the disease. Applications of AI and ML include medical research for developing a vaccine, tracking the spread of the disease, evaluating the effects of COVID-19 interventions, applying natural language processing to social media to understand the impact on society, and more.

Processing a Data Avalanche

The fuel for BFSI applications like fraud detection, AML and chatbots, or health applications such as tracking the COVID-19 pandemic, is decision support systems (DSSs) containing vast amounts of structured and unstructured data. Overall, experts predict that by 2025, 79 trillion GB of data will have been generated globally. This avalanche of data makes it difficult for scalar-based high-performance computers to run a DSS effectively and efficiently for its intended applications. More powerful accelerator cards, such as vector processing engines supported by optimized middleware, are proving able to efficiently process enterprise data lakes to populate and update data warehouses, from which meaningful insights can be presented to the intended decision makers.

Resurgence of Vector Processors

There is currently a resurgence in vector processing, which, due to its cost, was previously reserved for the most powerful supercomputers in the world. Vector processing architectures are evolving to provide supercomputer performance in a smaller, less expensive form factor using less power, and they are beginning to outpace scalar processing for mainstream AI and ML applications. This is leading to their adoption as the primary compute engine in high-performance computing applications, freeing up scalar processors for other mission-critical processing roles.

Vector processing has unique advantages over scalar processing when operating on certain types of large datasets. In fact, a vector processor can be more than 100 times faster than a scalar processor, especially when operating on the large amounts of statistical data and attribute values typical for ML applications, such as sparse matrix operations.

While both scalar and vector processors rely on instruction pipelining, a vector processor pipelines not only the instructions but also the data, which reduces the number of fetch-then-decode steps and, in turn, the number of cycles spent decoding. To illustrate this, consider the simple operation shown in Figure 1, in which two groups of 10 numbers are added together. In a standard programming language, this is performed by writing a loop that takes each pair of numbers in sequence and adds them together (Figure 1a).

Figure 1: Executing the task defined above, the scalar processor (a) must perform more steps than the vector processor (b).

When performed by a vector processor, this task requires only two address translations, and fetch and decode is performed only once (Figure 1b), rather than the 10 times required by a scalar processor (Figure 1a). And because the vector processor's code is smaller, memory is used more efficiently. Modern vector processors also allow different types of operations to be performed simultaneously, further increasing efficiency.
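The Figure 1 contrast can be sketched in software. Below is a minimal illustration in Python, with NumPy standing in for vector hardware; the data values are invented for the example, and the point is the shape of the code, not actual hardware timing:

```python
# Adding two groups of 10 numbers, as in Figure 1.
import numpy as np

a = list(range(10))      # first group of 10 numbers
b = list(range(10, 20))  # second group of 10 numbers

# (a) Scalar style: the loop issues one add per element pair, so the
# add instruction is fetched and decoded 10 separate times.
result_scalar = []
for x, y in zip(a, b):
    result_scalar.append(x + y)

# (b) Vector style: a single operation is applied across whole arrays,
# mirroring a vector instruction that is decoded once.
result_vector = np.array(a) + np.array(b)

print(result_scalar)           # [10, 12, 14, 16, 18, 20, 22, 24, 26, 28]
print(result_vector.tolist())  # the same values
```

Both forms compute the same sums; the vector form simply expresses the whole operation at once, which is what lets vector hardware amortize the fetch and decode cost.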

To bring vector processing capabilities into applications less esoteric than scientific ones, it is possible to combine vector processors with scalar CPUs to produce a vector parallel computer. Such a system comprises a scalar host processor, a vector host running Linux, and one or more vector processor accelerator cards (or vector engines), creating a heterogeneous compute server that is ideal for broad AI and ML workloads and data analytics applications. In this scenario, the primary computational components are the vector engines, rather than the host processor. These vector engines also have self-contained memory subsystems for increased system efficiency, rather than relying on the host processor's direct memory access (DMA) to route packets of data through the accelerator card's I/O pins.

Software Matters

Processors perform only as well as the compilers and software instructions delivered to them. Ideally, these should be based on industry-standard programming languages such as C/C++. For AI and ML application development, several frameworks are available, with more emerging. A well-designed vector engine compiler should support both industry-standard programming languages and open-source AI and ML frameworks such as TensorFlow and PyTorch. A similar approach should be taken for database management and data analytics, using proven frameworks such as Apache Spark and scikit-learn. This software strategy allows seamless migration of legacy code to vector engine accelerator cards. Additionally, by using the Message Passing Interface (MPI) to implement distributed processing, configuration and initialization become transparent to the user.

Conclusion

AI and ML are driving the future of computing and will continue to permeate more applications and services. Many of these deployments will run in smaller server clusters, perhaps even a single chassis. Accomplishing such a feat requires revisiting the entire spectrum of AI technologies and heterogeneous computing. The vector processor, with advanced pipelining, is a technology that proved itself long ago. Vector processing paired with middleware optimized for parallel pipelining is lowering the entry barriers for new AI and ML applications, solving challenges, both today and in the future, that were once within reach only of the hyperscale cloud providers.

About the Author

Robbert Emery is responsible for commercializing NEC Corporation's advanced technologies in HPC and AI/ML platform solutions. His role includes discovering ways to lower the entry point and initial investment for enterprises to realize the benefits of big data analytics in their operations. Robbert has built a career of over 20 years in the ICT industry's emerging technologies, including mobile network communications, embedded technologies and high-volume manufacturing. Prior to joining NEC's technology commercialization accelerator, NEC X Inc., in Palo Alto, California, Robbert led the product and business plan for an embedded solutions company that resulted in a leadership position in terms of both volume and revenue. He has an MBA from SJSU's Lucas College and Graduate School of Business, as well as a bachelor's degree in electrical engineering from California Polytechnic State University.

DGWorld Brings the Future of AI and Digitization with WIZO – Business Wire

DUBAI, United Arab Emirates--(BUSINESS WIRE)--DGWorld, the leading AI and Digitalization Company, has launched the new version of its humanoid robot at the Ai Everything Summer Conference held on July 16, 2020 at the Dubai World Trade Center.

WIZO serves a multi-functional purpose in pushing beyond the boundaries and limitations of a service robot with big data analysis & management, centralized control system, safe & secure database, multimodal interaction, flexible movement, facial recognition, speech engine, proprietary SLAM technology for autonomous navigation, and automatic docking & recharging.

Eng. Bilal Al-Zoubi, Founder and CEO at DGWorld, said: "DGWorld aims to create sustainable solutions by utilizing the power of AI. The new and improved WIZO will make a positive impact on the way business is done and optimize the human experience. By delegating standard tasks to the robot, employees can focus on human qualities that allow them to excel and flourish."

Hardware and software features can be added to the humanoid robot to meet business needs: temperature checking, payment systems and integration with mobile apps, to name a few. This flexibility in customization makes WIZO versatile across a wide variety of industries, such as retail, healthcare, education, transportation, hospitality, entertainment, and government services.

"As life is slowly going back to normal with the current pandemic, some of the precautions taken are here to stay. Limiting human contact will limit the spread of any virus. Deploying WIZO at entry checkpoints or reception desks, or to take care of specific tasks, will result in work efficiency and, more importantly, help people stay safe. We all know robots are part of the future. The future has already started," Eng. Bilal Al-Zoubi continued.

This is the third version of humanoid robot that DGWorld has developed. The first two versions were 3D printed and manufactured entirely by the company, including all hardware and software. With the latest WIZO, DGWorld wanted to reduce costs, save manufacturing time and increase quality, so the company chose a reliable vendor to work with to enable the intelligent features.

For more information:

Website http://www.dgworld.com

YouTube youtube.com/dgworld

Facebook facebook.com/DGWorlds/

Instagram instagram.com/dgworlds/

LinkedIn linkedin.com/company/digiroboticstechnologies

Twitter twitter.com/DGWorld3

*Source: AETOSWire

Nvidia’s Next Big Thing: The HGX-1 AI Platform – Seeking Alpha

Over the past three months, Nvidia's (NASDAQ:NVDA) stock has been upgraded by several financial services firms including Goldman Sachs (NYSE:GS), Citigroup (NYSE:C), and Bernstein, while some others have downgraded the stock, such as Pacific Crest. In an article published in December, last year, I said Nvidia's stock could scale new highs if the company's revenue continues to grow at a CAGR of 20% plus in the foreseeable future. At that time, the stock created a new high around $120, before correcting almost 20% afterwards.

I also cautioned investors that the stock could go through spine-chilling volatility, and that's exactly what is happening now. The commentaries of several sell-side analyst firms fueled the extent of volatility beyond what happens under normal circumstances. My medium-term target for the stock is $200+, albeit with continued volatility. The new catalyst for the stock will be the availability of its HGX-1 platform at the hyperscale space.

Nvidia: Revisiting the Bull Thesis

My investment thesis was based on the expectation that Nvidia's revenue will grow at a CAGR of 20% plus in the next three years. The primary revenue growth driver will be the expanding application of artificial intelligence (AI) and/or deep learning (DL) across various industries. Let's focus on Nvidia's competitive advantage in AI/DL.

Nvidia's DGX-1 platform is its trump card for leading the next AI wave, leveraging the TensorFlow software library originally developed by Google (NASDAQ:GOOGL) (NASDAQ:GOOG). Nvidia recently launched a partner program with hyperscale vendors, such as Foxconn, Inventec, Quanta and Wistron, for its HGX-1 AI reference platform. What's the difference between DGX-1 and HGX-1? HGX-1 is the hyperscale version of the DGX-1 platform. The launch of the partner program will have a strong and sustainable impact on the HPC (high-performance computing) space via hyperscale players.

Investors need to understand how the advantages of TensorFlow coupled with Nvidia's HGX-1 reference architecture will boost its revenue in the foreseeable future, which Nvidia's stock at today's price around $150 hasn't factored in.

CUDA, HGX and TensorFlow

TensorFlow has become extremely popular since its release by the Google Brain Team in late 2015. Its appeal lies in its use of data flow graphs for solving complex mathematical problems related to building efficient neural networks, plus its support for multiple GPUs and CPUs via a single API. This is the secret sauce behind TensorFlow's immense success in out-competing other AI-related software libraries, such as Torch, Caffe and Theano, in speeding up the training of neural networks.
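The dataflow-graph idea can be illustrated with a toy evaluator. This is a sketch of the concept only, not TensorFlow's actual API; the `Node` class and the example expression are invented for illustration:

```python
# A toy dataflow graph: nodes are operations, edges carry values.
# Frameworks like TensorFlow build a similar (far richer) graph, then
# schedule independent subgraphs across CPUs and GPUs.

class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def eval(self):
        # Recursively evaluate upstream nodes, then apply this node's op.
        return self.op(*(n.eval() for n in self.inputs))

def const(value):
    # A leaf node that simply produces a constant.
    return Node(lambda: value)

# Build the graph y = (a + b) * c once...
a, b, c = const(2.0), const(3.0), const(4.0)
y = Node(lambda x, z: x * z, Node(lambda x, z: x + z, a, b), c)

# ...then execute it. Separating graph construction from execution is
# what makes it possible to optimize and distribute the computation.
print(y.eval())  # 20.0
```

The separation of "describe the computation" from "run the computation" is the property the article credits for TensorFlow's multi-device support via a single API.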

Engineers at Nvidia were quick to grasp TensorFlow's competitive advantage over other software libraries. While Intel (NASDAQ:INTC) was busy making its MKL (math kernel library) more versatile by incorporating the Neon deep learning framework from Nervana, the startup Intel acquired almost a year ago, Nvidia quietly made its CUDA parallel computing software platform compatible with TensorFlow.

The DGX-1 platform, which is basically a parallel processing-based hardware accelerator with eight Nvidia Tesla P100 GPUs, is the platform that gave birth to HGX-1. Both DGX-1 and HGX-1 connect the eight Tesla P100 GPUs via NVLink interconnect. With the introduction of HGX-1 earlier this year and then making it available to end-users via different hyperscale vendors, Nvidia has opened the door for running different kinds of AI/DL workloads for various industries.

How would HGX-1 expand the scope of DGX-1 by taking the latter to the hyperscale space? Before addressing that question, let's delve a bit deeper into how Nvidia's upcoming Tesla V100, i.e., Tesla powered by the Volta GPU architecture, will help DGX-1 deliver much faster performance than other GPU-based systems. The Volta architecture will support tensor-based AI/DL operations, which implies that HGX-1 will also deliver a tensor-based performance boost in the hyperscale space.

HGX At Hyperscale: What Does It Really Mean?

What does HGX in the hyperscale space mean for investors? Simple: more revenue for Nvidia. Since hyperscale computing enables datacenters to deploy distributed computing environments across the globe by scaling from only a few servers to many, Nvidia's GPUs will see a steady rise in adoption. However, that doesn't necessarily mean every datacenter in the world will need to deploy the high-end Tesla P100 or the upcoming Tesla V100 GPUs.

Only the hyperscale datacenters will need them. For the traditional datacenters, Nvidia's Pascal-powered midrange Quadro workstation GPUs will be enough. This essentially means HGX has the ability to boost the overall demand for Nvidia's server-grade GPUs. Nvidia's decision to launch the HGX partner program will help it grab a significant portion of the AI/DL market share from archrivals Intel and AMD (NASDAQ:AMD).

Intel is also trying to make MKL compatible with TensorFlow. However, the process isn't complete yet. I believe Intel has potential to compete with Nvidia in the parallel processing space. But due to its FPGA-dominated business model and late entrance in the AI/DL marketplace, it won't be able to become a real competitor of Nvidia just yet.

Unfortunately for Nvidia, though, AMD has made significant progress in the last six months. AMD, with its upcoming Radeon Instinct line of GPUs along with its upcoming Zen-based Naples CPUs (bundled with the SenseMI technology), could be Nvidia's real competitor, but for that to happen the OpenCL (open computing language) software library (which is the CUDA equivalent for AMD, and to some extent Intel) needs to support TensorFlow. Please refer to my article mentioned at the beginning to learn more about CUDA and OpenCL.

Valuation

Nvidia's trailing 12-month revenue is $7.5 billion. A CAGR of 20%, which is quite possible given its AI/DL leadership as analyzed above, will catapult its revenue to nearly $13 billion in 2020. And since such incredible growth is almost a given, I won't be surprised if the Street assigns a P/S multiple of 17-18x to the stock in the next six months. A year out, Nvidia's revenue will be around $9 billion at the CAGR mentioned above, and at a ~15x P/S multiple the stock will get close to, or even slightly beyond, the $200 level. So there is a strong possibility that the stock will cross the $200 mark in the next six to twelve months.
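The compounding behind those revenue figures can be checked directly. The inputs are the article's own numbers; this is an arithmetic illustration, not investment analysis:

```python
# Compound the trailing-twelve-month revenue at the assumed growth rate.
ttm_revenue = 7.5  # $ billions, trailing twelve months (from the article)
cagr = 0.20        # assumed 20% compound annual growth rate

one_year = ttm_revenue * (1 + cagr) ** 1
three_years = ttm_revenue * (1 + cagr) ** 3

print(round(one_year, 2))     # 9.0  -> "around $9 billion" after a year
print(round(three_years, 2))  # 12.96 -> "closer to $13 billion" in 2020
```

So the article's two revenue milestones are consistent with each other under the single 20% CAGR assumption.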

NVDA Revenue (TTM) data by YCharts

Final Words

Nvidia is a secular bull story. However, you need nerves of steel to be a part of it. AI/DL was just a concept a few years ago and has only now started to take the shape of an industry. I would recommend that risk-tolerant investors hold the stock as a long-term investment.

Disclosure: I am/we are long NVDA, GOOGL.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Industry Voices: AI doesn't have to replace doctors to produce better health outcomes – FierceHealthcare

Americans encounter some form of artificial intelligence and machine learning technology in nearly every aspect of daily life: we accept Netflix's recommendations on what movie to stream next, enjoy Spotify's curated playlists and take a detour when Waze tells us we can shave eight minutes off our commute.

And it turns out that we're fairly comfortable with this new normal: a survey released last year by Innovative Technology Solutions found that, on a scale of 1 to 10, Americans give their GPS systems an 8.1 trust and satisfaction score, followed closely by a 7.5 for TV and movie streaming services.

But when the stakes are higher, we're not so trusting. When asked whether they would trust an AI doctor to diagnose or treat a medical issue, respondents scored it just a 5.4.

Overall skepticism about medical AI and ML is nothing new. In 2012, we were told that IBM's AI-powered Watson was being trained to recommend treatments for cancer patients. There were claims that the advanced technology could make medicine personalized and tailored to millions of people living with cancer. But in 2018, reports surfaced indicating that the research and technology had fallen short of expectations, leaving users to question the accuracy of Watson's predictive analytics.

Patients have been reluctant to trust medical AI and ML out of fear that the technology would not offer a unique or personalized recommendation based on individual needs. A piece in Harvard Business Review in 2019 referenced a survey in which 200 business students were offered a free health assessment: 40% of the students signed up when told their doctor would perform the diagnosis, while only 26% signed up when told a computer would perform it.

These concerns are not without basis. Many of the AI and ML approaches used in healthcare today, chosen for simplicity and ease of implementation, strive for performance at the population level by fitting to the characteristics most common among patients. They aim to do well in the general case, failing to serve large groups of patients and individuals with unique health needs. However, this is a limitation of how AI and ML are being applied, not a limitation of the technology.

If anything, what makes AI and ML exceptional, if done right, is the ability to process huge datasets comprising a diversity of patients, providers, diseases and outcomes, and to model the fine-grained trends that could have a lasting impact on a patient's diagnosis or treatment options. This ability to use data "in the large" for representative populations and to obtain inferences "in the small" for individual-level decision support is the promise of AI and ML. The whole process might sound impersonal or cookie-cutter, but the reality is that advancements in precision medicine and delivery will make care decisions more data-driven and thus more exact.

Consider a patient choosing a specialist. It's anything but data-driven: they'll search for a provider who is in-network, or maybe one that is conveniently located, without understanding the potential health outcomes of their choice. The issue is that patients lack the data and information they need to make informed choices.

That's where machine intelligence comes into play: an AI/ML model that can accurately predict the right treatment, at the right time, by the right provider for a patient, which could drastically help reduce the rate of hospitalizations and emergency room visits.

As an example, research published last month in AJMC looked at claims data from 2 million Medicare beneficiaries between 2017 and 2019 to evaluate the utility of ML in the management of severe respiratory infections in community and post-acute settings. The researchers found that machine intelligence for precision navigation could be used to mitigate infection rates in the post-acute care setting.

Specifically, at-risk individuals who received care at skilled nursing facilities (SNFs) that the technology predicted would be the best choice for them had a relative reduction of 37% for emergent care and 36% for inpatient hospitalizations due to respiratory infections compared to those who received care at non-recommended SNFs.

This advanced technology can comb through and analyze an individual's treatment needs and medical history so that the most accurate recommendations can be made based on that individual's personalized needs and the doctors or facilities available to them. In turn, matching a patient to the optimal provider can drastically improve health outcomes while also lowering the cost of care.

We now have the technology where we can use machine intelligence to optimize some of the most important decisions in healthcare. The data show results we can trust.

Zeeshan Syed is the CEO and Zahoor Elahi is the COO of Health at Scale.

How 4 Companies Are Using AI To Solve Waste Issues On Earth & In Space – Forbes

Artificial intelligence is a tool used by people all over the world to empower humans to make informed decisions. With a responsible use of AI, humans are able to solve problems faster because artificial intelligence can process more information at a time than a human can.

As climate change, sustainability, and corporate social responsibility become increasingly important for brands across the globe, business and communications professionals should keep an eye on how emerging technologies like AI are being used to solve real-world problems.

Some of the problems AI is solving today are right out of science fiction movies. Waste recognition, space junk, and sustainability are a few of the challenges that artificial intelligence is helping humanity tackle.

Many dystopian movies depict Earth covered in garbage, with people living in slums surrounded by trash that has nowhere else to go. Thankfully, with the help of companies like Greyparrot, that dystopian predicament doesn't have to be humanity's future. Greyparrot is an artificial intelligence company devoted to using AI-powered computer vision software to increase transparency and automation in recycling.

As consumers grow more concerned about product sustainability, and government regulation increases, companies are finding new ways to create a circular economy. The circular economy is a framework for an economy that is restorative and regenerative by design: a process of designing waste out of an economic system so that a product has a continued life, even after its main use. This is key for the waste and recycling industry, which has low recycling rates of 14% globally because 60% of the solid waste produced each year ends up in open dumps and landfills.

Greyparrot AI helps companies work in a circular economy by highlighting inefficiencies in sorting and waste facilities. Its computer vision can spot items on a conveyor belt faster and with better accuracy than a human. The system automatically identifies different types of waste, providing composition information and analytics to help facilities increase recycling rates. AI is easy to scale and provides real-time insight into and analysis of facilities, and it can turn the 1% of waste that is monitored and analyzed today into 100%. Not to mention the opportunities to partner with smart bins and sorting robots. Greyparrot AI does more than bring waste management into the 21st century; it opens the door to AI applications in less well-known sectors of the economy, making our future circular instead of dystopian. The startup was recently named the hottest Climate Tech/Green Tech startup of 2020 at The Europas Tech Startup Awards.

Other sci-fi movies depict humans escaping Earth when it is no longer habitable: they blast off into orbit while leaving robots like Wall-E to clean up the mess. But what if we can't launch into the final frontier because there's too much junk in the way for humans to leave Earth's orbit? Space junk is a serious worry for many scientists. According to the European Space Agency, 129 million pieces of debris hurtle around Earth. Space debris poses a danger to astronauts navigating orbit, to the network of communication and weather satellites, and to future space missions.

That's where AI and technology companies are coming together to tackle space's biggest problem. NASA created the Deep Asteroid challenge, inviting participants to use machine learning to help humans avoid the same fate as the dinosaurs. Asteroids aren't the only near-Earth objects that can come in contact with Earth: approximately 1,000 comets, along with space debris, pose a real danger to life on Earth.

Detecting space junk is one thing but removing it from orbit is another. The startup StartRocket has a plan to remove debris using foam. StartRocket wants to deploy a satellite called Foam Debris Catcher. When the satellite reaches a debris cloud, it would splay out a web of space-grade polymeric foam arms which, according to StartRocket, can capture as much as a ton of space junk. The orbital drag of the foamed junk would pull the mass towards Earth, burning up in the atmosphere.

Artificial intelligence is more than a Good Samaritan; it offers a big ROI for the companies that use it. In the food industry, AI could help generate up to $127 billion a year by 2030. Food waste is a problem before food even reaches consumers' refrigerators: about one third (roughly 1.3 billion tons) of food is lost or wasted each year between farm and refrigerator.

One company, Winnow Solutions, uses artificial intelligence to identify and weigh food waste in commercial kitchens. Winnow Vision, the company's AI, automatically assigns a dollar value to each scrapped plate that's dumped into its smart waste bin. Winnow Vision identifies wasted food correctly more than 80 percent of the time and improves as it learns, making it up to 10% more accurate than kitchen staff (who are usually busy with other work).

A worker throws away expired food in a local supermarket in Brussels.

AI isn't just for the kitchen. Israeli startup Wasteless developed a dynamic pricing algorithm for perishable products. Wasteless tracks an item's price in real time and adjusts it based on the expiration date, integrating with a store's inventory management system to automatically discount items with shorter shelf lives. By using Wasteless, one retailer reduced food waste by 39 percent while boosting revenue by 110 percent and still maintaining a positive net margin.

The bad news is that water is a finite resource; happily, it also happens to be renewable. Unfortunately, more than 25% of water is wasted due to leaks, and that's in commercial buildings alone. Water intelligence company WINT developed an AI to detect and stop leaks at the source, using pattern matching to detect water leaks. WINT's AI continually learns and adapts to a company's water network to optimize leak and waste detection. That 25% of water wasted due to leaks? Not for companies that use artificial intelligence: one company reported that WINT reduced its water consumption by 24%. That's a technology that's good for both the budget and the environment.
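To make the pattern-matching idea concrete, here is a deliberately simple sketch of leak detection: flag a sustained deviation from a learned baseline. WINT's actual algorithms are proprietary; the window size, threshold, and flow readings below are all invented for illustration:

```python
# Toy leak detector: learn a baseline flow, then flag any window where
# flow stays well above it. Real systems learn far richer usage patterns.

def detect_leak(flow_readings, window=4, threshold=1.5):
    """Return the index where a sustained anomaly begins, or None.

    The baseline is the average of the first `window` readings; a leak
    is declared when `window` consecutive readings all exceed
    `threshold` times that baseline.
    """
    baseline = sum(flow_readings[:window]) / window
    for i in range(window, len(flow_readings) - window + 1):
        recent = flow_readings[i:i + window]
        if all(r > threshold * baseline for r in recent):
            return i
    return None

# Normal overnight flow, then a stuck valve more than doubles consumption.
readings = [10, 11, 9, 10, 10, 24, 25, 26, 25, 24]
print(detect_leak(readings))  # 5
```

Requiring the deviation to persist for a full window is what separates a leak from a momentary spike, such as a toilet flush; this is the basic trade-off any pattern-based detector has to tune.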

Artificial intelligence can analyze, predict, and recommend action. However, it's up to people to make the final call and change the systems in place to get the most out of the way we produce. Space junk, food waste, trash, and water leaks are all opportunities to rethink an industry's purpose. Economics can evolve along with technologies like artificial intelligence to create sustainable solutions that can increase ROI while also demonstrating a brand's goodwill and helping the company find its purpose.

The rest is here:

How 4 Companies Are Using AI To Solve Waste Issues On Earth & In Space - Forbes

Report: Top AI Developments That Took Place in 2019 – Analytics Insight

Advancements in AI have been incredible in 2019, whether in machine learning, neural networks, computer vision, natural language processing (NLP), or other fields. The year saw technological progress that opened new doors for improvements one couldn't have imagined a few years back. Today so much more is possible owing to the potential of artificial intelligence across almost every sector, industry, country, and region.

So, let's turn back a few pages and explore the significant developments that took place in the field of artificial intelligence in 2019.

As advancements in artificial intelligence surge, barriers to organic growth naturally emerge for ambitious companies. Acquisitions and mergers offer a way to overcome them; for some organizations they are a core business strategy for growing and accelerating the business. In technology, mastering a single field is rarely enough — wait, rather: companies need to advance and attain leadership in adjacent technologies as well. Because AI underpins most industries, AI companies often need to acquire or merge with significant players to develop new channels for growth and sustainability.

The year 2019 witnessed a number of AI acquisitions that made top headlines. Such expansions not only boost revenue but also strengthen the companies' technological capabilities. Here is a list of significant AI acquisitions that took place this year.

McDonald's Dynamic Yield

McDonald's has acquired the personalization company Dynamic Yield, its largest acquisition in the past 20 years. Dynamic Yield works with brands across e-commerce, travel, finance, and media. According to McDonald's, it will use Dynamic Yield's technology to create a drive-thru menu that can be tailored to things like the weather, current restaurant traffic, and trending menu items. The menu display can also recommend additional items based on people's preferences once they start ordering.

HP MapR

Hewlett Packard Enterprise (HPE) acquired MapR's business assets. According to HPE, the transaction includes MapR's technology, intellectual property, and domain expertise in AI, machine learning, and analytics data management. Phil Davis, president of Hybrid IT at Hewlett Packard Enterprise, stated, "MapR's enterprise-grade file system and cloud-native storage services complement HPE's BlueData container platform strategy and will allow us to provide a unique value proposition for customers. We are pleased to welcome MapR's world-class team to the HPE family."

Nike Celect

Nike has acquired the Boston-based predictive analytics company Celect, its latest acquisition aimed at bolstering its direct-to-consumer strategy. By integrating Celect's technology into Nike's mobile apps and website, the company can better predict what styles of sneakers and apparel customers want, when they want them, and where they want to buy them. Chief Operating Officer Eric Sprunk explained, "Our goal is to serve consumers more personally at scale. We have to anticipate demand. We don't have six months to do it. We have 30 minutes."

Uber Mighty AI

Uber acquired Mighty AI, a Seattle startup that develops training data for computer vision models. Uber took over Mighty AI's intellectual property, tooling, tech talent, and labeling community, and more than 40 employees from Mighty AI joined Uber at its Seattle engineering office. Mighty AI's clients, including Samsung, Microsoft, Intel, Accenture, and Siemens, have been notified that the company's operations are shutting down.

Cognex Sualab

Cognex Corporation, a leader in machine vision for factory automation and industrial barcode reading, acquired Sualab, a leading Korea-based developer of deep learning vision software for industrial applications. The deal is worth US$195 million. The acquisition is expected to extend Cognex's deep learning capabilities and to increase opportunities to automate difficult visual inspection tasks in industrial markets.

In a world where every tech-savvy company is racing for an AI lead, partnerships offer a great opportunity to gain a competitive edge over others. For AI companies, strategic partnerships open up the chance to share success in a way that benefits all parties involved.

Here is the list of some momentous partnerships signed in the field of AI in 2019.

Novartis Microsoft

Novartis, in October, announced an important step in reimagining medicine by founding the Novartis AI Innovation Lab and by selecting Microsoft Corp. as its strategic AI and data-science partner for this effort. The new lab aims to enhance Novartis' AI capabilities from research through commercialization and to help accelerate the discovery and development of transformative medicines for patients worldwide. The strategic alliance will focus on AI Empowerment and AI Exploration.

WeBank Mila Tencent

Recently, China's WeBank announced two strategic partnerships, with Mila and with the leading cloud computing platform Tencent Cloud. The cooperation will focus on further developing federated learning, based on WeBank's real-world experience in finance and fintech, and adhering to Mila's core philosophy of "AI for Humanity," Tencent's "AI for Good," and WeBank's "Make Banking Better for All" to create safe, inclusive AI applications.

Nuance Communications Mila: Quebec AI Institute

Nuance Communications, Inc., a leading provider of Conversational AI and Ambient Intelligence solutions, announced its partnership with Mila, an academic research institution dedicated to advancing machine learning with an intensive focus on deep learning for language and image processing. This partnership will enable Nuance and Mila to collaborate on research, advance cutting-edge work in machine learning, and meaningfully enhance AI applications.

Big Data BizViz (BDB) L&T Technology Services

Big Data BizViz (BDB) announced a multi-faceted partnership with Larsen & Toubro Technology Services (LTTS) to enable next-gen data-driven engineering solutions. The partnership will help LTTS integrate the BDB platform into its AI-based solutions to provide enterprises with seamless analytics in a manner that is simple, affordable, scalable, and sustainable.

Wipro IISc Bangalore

In August-September, Wipro announced a partnership with the Indian Institute of Science (IISc), Bangalore, to conduct advanced applied research in autonomous systems, robotics, and the 5G space. The collaboration is intended to develop an electric autonomous/driverless car for Indian roads: the two organizations plan to build their own electric vehicle and then fit all the autonomous systems into it.

Any technological progress requires investment in new equipment and structures embodying the latest technology in order to realize its benefits. As companies march toward invention and innovation, investment plays a basic yet crucial role in realizing their ambitions. Here we have compiled some of the significant AI investments of 2019.

OpenAI

Amount Funded: US$1 billion

Transaction Name: Investment

Lead Investors: Microsoft Corporation

The artificial intelligence organization OpenAI raised US$1 billion from the tech giant Microsoft Corporation. Beyond the funding, OpenAI formed an exclusive computing partnership with Microsoft to build new Azure AI supercomputing technologies.

Recogni Inc.

Amount Funded: US$25 million

Transaction Name: Series A

Lead Investors: GreatPoint Ventures

Recogni Inc., designer of a vision-oriented AI for autonomous vehicles, announced US$25 million in Series A financing led by GreatPoint Ventures, with participation from Toyota AI Ventures, BMW i Ventures, Faurecia (one of the world's leading automotive technology companies), Fluxunit (the VC arm of lighting and photonics company OSRAM), and DNS Capital.

iFlytek

Amount Funded: US$407 million

Transaction Name: Investment

Lead Investors: China Asset Management

The Chinese artificial intelligence (AI) company iFlytek has raised US$407 million from a state-sponsored industry fund and several provincial government funds via a private placement, according to a filing on the Shenzhen Stock Exchange.

Fetch Robotics

Amount Funded: US$46 million

Transaction Name: Series C

Lead Investors: Fort Ross Ventures

Fetch Robotics announced a US$46 million Series C round. The round, led by Fort Ross Ventures, brings the San Jose-based warehouse automation company's total funding to US$94 million. New financial investors CEAS Investments, Redwood Technologies, TransLink Capital, and Zebra Ventures joined the round, alongside existing investors O'Reilly AlphaTech Ventures, Shasta Ventures, SoftBank Capital, and Sway Ventures.

Beta Bionics

Amount Funded: US$126 million

Transaction Name: Series B and B2

Lead Investors: Perceptive Advisors and Soleus Capital

Beta Bionics, a startup using machine learning and AI to develop the world's first autonomous bionic pancreas, closed its Series B and B2 rounds of funding, bringing the company's total capital raised to US$126 million. The rounds were co-led by Perceptive Advisors and Soleus Capital, with participation from Farallon Capital, RTW Investments, and ArrowMark Partners.

Compared with other fields, AI showed some remarkable advancements over the past year. As we are about to enter 2020, let's look back at some trendsetting innovations brought in by artificial intelligence and its subsets.

Artificial Deep Neural Network Brain

As a deep neural network learns to recognize cyber threats, its predictions become increasingly instinctive. This is a significant innovation for enterprises: malware, both known and unknown, can be predicted and prevented with unmatched accuracy and speed. The deep learning network enables the analysis of files pre-execution, blocking malicious files pre-emptively, before they ever run.
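As an illustration only of the pre-execution idea, analyzing a file's raw bytes before it ever runs, here is a toy scorer that compares a file's byte distribution to a known-benign profile. Real deep-learning detectors use learned features rather than this hand-rolled histogram distance; everything here is an assumption for demonstration.

```python
# Toy "pre-execution" file analysis: score a file's byte distribution
# against a known-benign profile before the file ever runs. Purely
# illustrative; real detectors learn far richer features.

def byte_histogram(data: bytes):
    """Normalized frequency of each of the 256 possible byte values."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = max(len(data), 1)
    return [c / total for c in counts]

def anomaly_score(data: bytes, benign_profile) -> float:
    """L1 distance between the file's byte distribution and a benign profile."""
    hist = byte_histogram(data)
    return sum(abs(h - p) for h, p in zip(hist, benign_profile))

# Benign profile: mostly printable ASCII text.
benign = byte_histogram(b"normal configuration text " * 100)
print(anomaly_score(b"normal configuration text ", benign) < 0.1)  # True
print(anomaly_score(bytes(range(256)) * 10, benign) > 0.5)         # True
```

The second file, with a near-uniform byte spread typical of packed or encrypted payloads, scores far from the benign profile — the kind of pre-execution signal a learned model would pick up with much greater subtlety.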

Deep Learning Models Expansion

During 2019, the size of deep learning models kept growing at an accelerating pace, which means that larger datasets of greater complexity can now be processed. This growth in the depth of deep learning models is predicted to continue at an exponential pace, and it also highlights the growing computational effort and cost required to train state-of-the-art models.

Generative Adversarial Networks

2019 saw particularly interesting advances in Generative Adversarial Networks (GANs). In May 2019, researchers at Samsung demonstrated a GAN-based system that produced videos of a person speaking from only a single photo of that person. Later, in August, a large dataset of 12,197 MIDI songs, each with its own lyrics and melody, was created through neural melody generation from lyrics using a conditional GAN-LSTM.

Upside Down Reinforcement Learning

A team at the Swiss AI Lab introduced a new methodology known as upside-down reinforcement learning. The team recast reinforcement learning as a form of supervised learning in which rewards are provided as inputs rather than as training signals, the opposite of how traditional reinforcement learning works. The approach also allowed them to train a robot to carry out hard tasks simply through imitation, by demonstrating the task in front of the machine.
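The core data trick behind "rewards as input" can be shown in a few lines: past episodes are relabeled into supervised training pairs in which the observed return-to-go and remaining horizon become inputs alongside the state, and the action actually taken becomes the target. This sketch shows only the relabeling step, not the full training loop, and the states and actions are toy placeholders.

```python
# Upside-down RL's relabeling step: turn logged episodes into supervised
# pairs ((state, desired_return, horizon), action). A policy trained on
# these pairs can later be *commanded*: "achieve return R in H steps."

def relabel_episode(episode):
    """episode: list of (state, action, reward) steps.
    Returns supervised pairs: ((state, return_to_go, horizon), action)."""
    pairs = []
    n = len(episode)
    for t, (state, action, _) in enumerate(episode):
        return_to_go = sum(r for _, _, r in episode[t:])
        horizon = n - t
        pairs.append(((state, return_to_go, horizon), action))
    return pairs

episode = [("s0", "left", 0.0), ("s1", "right", 1.0), ("s2", "left", 2.0)]
for inputs, target in relabel_episode(episode):
    print(inputs, "->", target)
# ('s0', 3.0, 3) -> left
# ('s1', 3.0, 2) -> right
# ('s2', 2.0, 1) -> left
```

Because the reward appears on the input side, ordinary supervised learning machinery (losses, minibatches, regularization) applies directly, which is the point of the upside-down formulation.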

Explainable AI

To help users understand the methodologies inside the black box, several companies released services that enable businesses to highlight the prime factors that lead to the outcomes of their machine learning models. Today, for the first time, companies are able to clear the clouds and gain insight into the way the black box works.
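One widely used model-agnostic technique of this kind is permutation importance: permute one feature's column and measure how much the model's accuracy drops. The sketch below uses a synthetic model and, for determinism, a single cyclic shift in place of the usual repeated random shuffles; none of it reflects any particular vendor's service.

```python
# Permutation importance sketch: a feature the model relies on loses
# accuracy when its column is permuted; an ignored feature does not.
# Synthetic model and data; a fixed cyclic shift stands in for the usual
# averaged random shuffles, purely for reproducibility.

def accuracy(model, X, y):
    return sum(model(x) == yi for x, yi in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature):
    """Accuracy drop when `feature`'s column is permuted (cyclic shift)."""
    col = [x[feature] for x in X]
    col = col[1:] + col[:1]  # the permutation
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return round(accuracy(model, X, y) - accuracy(model, X_perm, y), 3)

# Toy model that only looks at feature 0; feature 1 is ignored.
model = lambda x: int(x[0] > 0.5)
X = [[i / 10, (i % 3) / 3] for i in range(10)]
y = [model(x) for x in X]
print(permutation_importance(model, X, y, 0))  # 0.2 -- feature 0 matters
print(permutation_importance(model, X, y, 1))  # 0.0 -- feature 1 is ignored
```

The appeal of the technique is exactly what the passage describes: it needs no access to the model's internals, only the ability to query it.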

Every person in a company affects its functions and operations, but leadership plays a special role in driving success and value; without its intellect and guiding light, an organization becomes directionless. In the field of AI, leadership matters even more: the technology is rising and advancing at a fast pace and needs to be channeled in a useful direction. The year 2019 saw some great visionaries taking charge of next-gen AI innovations across the world. Here are some notable AI leadership appointments:

Publicis Sapient elevated Nigel Vaz to global CEO.

Apple hired Ian Goodfellow as the director of Machine Learning for its Special Projects Group.

Experian, a leading global information services company, appointed Sathya Kalyanasundaram as Country Managing Director, Experian India.

Google CEO Sundar Pichai has replaced Larry Page as the head of parent company Alphabet Inc, as founders Page and Sergey Brin step down from active roles.

Vishal Sikka, founder and CEO of the AI company Vianai Systems, was named to Oracle's Board of Directors.

Read more:

Report: Top AI Developments That Took Place in 2019 - Analytics Insight

Ford To Invest $1B In Self-Driving AI – Forbes


Autonomous cars are still mostly in test-drive where our streets are concerned, but the race for this technology now has many major firms kicking their AI investment into high gear. Ford Motor Company announced Saturday that it intends to invest $1 ...

Excerpt from:

Ford To Invest $1B In Self-Driving AI - Forbes

Squirrel AI Learning Attends the Web Summit to Talk About the Application and Breakthrough of Artificial Intelligence in the Field of Education -…

Squirrel AI Learning is not only a global leader among artificial intelligence education enterprises, but also the only Chinese high-tech education enterprise invited to participate in this event. Derek Li, Founder and Chief Educational Technology Scientist of Squirrel AI Learning, shared the hall with big names such as Tony Blair, former British Prime Minister; Brad Smith, President of Microsoft; Ping Guo, Vice Chairman and Rotating Chairman of Huawei; and Marc Raibert, Founder and CEO of Boston Dynamics, delivering presentations and demonstrations to the audience.

Web Summit has been held annually since 2009. After ten years of development it has become a world-renowned, large-scale technology event, and the 2019 summit attracted attention from all walks of life. The event brought together more than 70,000 technology-company leaders, startup founders, and policy makers from more than 160 countries, and more than 2,600 media outlets from around the world attended, giving the summit a powerful global influence.

Gathering of Big Names to Discuss the Changes Brought by the Latest Technology

Although the concept of artificial intelligence is hot, its specific empowerment of every industry cannot be accomplished in one stroke. At this "Davos Forum of tech geeks," many guests shared their perspectives on transportation technology, artificial intelligence, financial technology, earth technology, future technology, wearable devices, big data, front-end design, content creation, and fashion and music industry technology, among other fields.

Ping Guo, Vice Chairman and Rotating Chairman of Huawei, explained the golden opportunities that 5G may bring to development from the perspective of 5G technology. He said that "5G + X" will bring an "Age of Wisdom," where X can be artificial intelligence, big data, augmented reality, virtual reality, or other technologies. He also predicted that about sixty 5G commercial networks will be put into use by the end of this year, and that the 5G era will come earlier than expected.

Marc Raibert, Founder and CEO of Boston Dynamics, showed the audience the first commercial intelligent robot dog, Spot, a four-legged mobile robot that can identify its environment, avoid obstacles, and perform complex tasks such as exploration, patrol, and logistics transportation.

AI is Applied to Education, and Teaching Students According to Their Aptitude Promotes Educational Equality

With the development of AI technology, many industries around the world are facing new changes. Education is the foundation of a nation, and how technology can empower the traditional education industry has always received much attention. On the day of the summit, Derek Li, Founder and Chief Educational Technology Scientist of Squirrel AI Learning, delivered a speech that became a highlight of the event.

Squirrel AI Learning was the first company in China to develop an artificial intelligence adaptive learning engine with fully independent intellectual property rights and advanced algorithms at its core. Over the past few years of practice, it has used a variety of AI technologies, such as evolutionary algorithms, neural networks, machine learning, graph theory, and Bayesian networks, to recommend personalized learning solutions to students. The continued deep application of the technology and the real-time improvement of the product are closely tied to the educational present and future of hundreds of millions of students. At the summit, Derek Li first introduced the overall architecture of Squirrel AI Learning.

The Squirrel AI Learning intelligent adaptive learning system provides student-centered, intelligent, personalized education, applying artificial intelligence throughout teaching, learning, assessment, testing, and training, with the aim of surpassing live human instruction by simulating excellent teachers.

Squirrel AI Learning uses more than ten algorithms, deep learning, and other technologies. Its globally pioneering AI applications include the MCM capability training system (Model of Thinking, Capacity, and Methodology), cause-of-mistakes knowledge map reconstruction, nanoscale knowledge point decomposition, association probabilities between non-associated knowledge points, and MIBA. The system can accurately give each child the most suitable learning path, drive learning with interest and encouragement, and improve learning efficiency. In addition, Squirrel AI Learning adopts a mode of artificial intelligence plus human teachers to address the high class costs, scarce top teachers, and low learning efficiency of traditional education, and so promote educational equality.
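Squirrel AI's engine is proprietary, but adaptive tutors of this kind commonly track, per knowledge point, the probability that a student has mastered it, for example with Bayesian knowledge tracing (BKT). The sketch below is a generic BKT update, not Squirrel AI's method, and the slip/guess/learn parameters are illustrative defaults.

```python
# Generic Bayesian knowledge tracing (BKT) update, a standard building
# block of adaptive tutoring systems (illustrative of the genre only,
# NOT Squirrel AI's proprietary algorithm).

def bkt_update(p_mastery, correct, slip=0.1, guess=0.2, learn=0.3):
    """One BKT step: Bayesian posterior after an answer, then a learning
    transition (the student may acquire the skill during this step)."""
    if correct:
        evidence = p_mastery * (1 - slip)
        posterior = evidence / (evidence + (1 - p_mastery) * guess)
    else:
        evidence = p_mastery * slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - guess))
    return posterior + (1 - posterior) * learn

p = 0.3  # prior mastery of one fine-grained knowledge point
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 2))  # mastery estimate after four answers
```

A tutor maintains one such estimate per knowledge point and routes the student toward the points with the lowest mastery, which is the mechanism behind a "most suitable learning path."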

Later, Derek Li shared three real stories with the audience, letting them feel more directly Squirrel AI Learning's achievements in teaching students according to their aptitude and promoting educational equality.

The first story concerns the daughter of Derek Li's driver, who scored only 25 points despite various other types of tutoring. After studying with Squirrel AI's adaptive learning engine, she was admitted to the best school within her reach: the top Boeing aircraft maintenance program at a vocational high school. The personalization and precision of the AI teacher changed the fate of a so-called "poor student" of traditional education.

The second story is about Derek Li's own twin boys, excellent students since childhood whose overall personal skills nonetheless improved greatly after using the MCM system. "Education is not about the learning of knowledge points and test scores; quality education means that after you have forgotten all the knowledge, your ability still allows you to face any problem in your life," Derek Li concluded. This year, his eldest son, in the second grade of primary school, was able to give a speech to 2,500 people without any stage fright. This is the success of MCM, which lets high-achieving students gain a great improvement in their overall quality beyond exam-oriented education.

The third story took place in Qingtai County, a poverty-stricken county in China, where Squirrel AI Learning spent two months helping children in mountain areas using its "tracing the source" method for student learning. Within two months, the achievement level of these rural children not only exceeded that of children in the county seat; some children far exceeded the average level of students in Wuhan (the provincial capital of Hubei, China). High-quality educational resources are scarce in China, unevenly distributed not only across second-, third-, and fourth-tier cities but even within first-tier cities. If everyone had a supremely knowledgeable AI teacher at hand, education equity would not be just a slogan; every poor child could fully realize his or her own dream.

Derek Li also said that his ultimate wish is to build Squirrel AI Learning into an omniscient, omnipotent teacher, a combination of Confucius, Da Vinci, and Einstein, and to use artificial intelligence to truly change the course of human education.

Conclusion

Using technological innovation to leverage the personalized education market, Squirrel AI Learning is working to give every child an AI super teacher that combines Confucius, Da Vinci, and Einstein.

In the past five years, Squirrel AI Learning has opened more than 2,300 learning centers in more than 700 cities and counties across more than 20 provinces in China. With a business model connecting online and offline, Squirrel AI Learning has built its core AI technology into a K12 full-course extracurricular tutoring intelligent system that has cumulatively taught nearly 2 million registered students. With the continued efforts of Squirrel AI Learning by Yixue Group, artificial intelligence may yet break through the limitations of the traditional education model and bring personalized education to every child.

SOURCE Squirrel AI Learning

View post:

Squirrel AI Learning Attends the Web Summit to Talk About the Application and Breakthrough of Artificial Intelligence in the Field of Education -...

We May Be Losing The Race For AI With China: Bob Work – Breaking Defense

Robert Work, former DoD deputy secretary

UPDATED with further Work remarks. WASHINGTON: The former deputy secretary of defense who launched Project Maven and jumpstarted the Pentagon's push for artificial intelligence says the Defense Department is not doing enough. Bob Work made the case that the Pentagon needs to adopt AI with the same bureaucracy-busting urgency with which the Navy seized on nuclear power in the 1950s, with the Joint Artificial Intelligence Center acting as the whip the way Adm. Hyman Rickover did during the Cold War.

"There has to be this top-down sense of urgency," Work told the AFCEA AI+ML conference today. "One thousand flowers blooming will work over time, but it won't [work] as fast as we need to go."

Work, now vice-chair of the congressionally chartered National Security Commission on Artificial Intelligence, told the conference yesterday that China, and to a lesser extent Russia, could overtake the US in military AI and automation. To keep them at bay, he said, the US needs to undertake three major reforms.

Work added Wednesday that the US should also consider replicating the Chinese model of a single unified Strategic Support Force overseeing satellites, cyberspace, electronic warfare, and information warfare, functions that the US splits between Space Command, Cyber Command, and other agencies. Given how interdependent these functions are in the modern world, he said, "I think the unified Strategic Support Force is a better way to go, but this is something that would need to be analyzed, wargamed, experimented with."

Adm. Hyman Rickover in civilian clothes.

Rickover, Reprise?

"We're all saying the right things: AI is absolutely important. It's going to give us an advantage on the battlefield for years to come," Work said. "But the key thing is, where is our sense of urgency? We may be losing the race, due to our own lack of urgency."

For the US to keep up requires not only funding, Work said, but also a new sense of urgency and new forms of organization: "I would recommend that we adopt a Naval Reactors-type model."

At the dawn of the nuclear era, Congress promoted Hyman Rickover over the heads of more tradition-minded admirals and empowered him as chief of Naval Reactors, which set strict technical standards for the training of nuclear personnel and the construction of nuclear vessels. The remit of NR extended not only across the Navy but into the Energy Department, giving it extraordinary independence from both military and civilian oversight.

How would this model apply to AI? Work proposes giving the Joint Artificial Intelligence Center, to be renamed the Joint Autonomy & AI Center, the role of "systems architect" for "human-machine collaborative battle networks," the most important AI-enabled applications. To unpack the jargon: the JAIC/JAAIC would effectively set the technical standards for most military AI projects and control how they fit together into an all-encompassing Joint All-Domain Command & Control (JADC2) system sharing data across land, sea, air, space, and cyberspace.

Cross-Functional Teams run by the JAIC for different aspects of AI would have to certify that any specific program had significant joint impact for it to be eligible for a share of the added $7 billion in AI funding, Work said. In some cases, he said, the JAIC could compel the services to invest in AI upgrades that they might not want to find room for in their budgets, but which would work best if everyone adopted them, much as then-Defense Secretary Bill Perry forced the services to install early versions of GPS on vehicles, ships, and aircraft.

"I'm recommending a much more muscular JAIC," Work said Wednesday. "You have to tell the JAIC, you're the whip. You're going to be the one recommending to the senior leaders, what are the applications and the algorithms that we need to pursue now to gain near-term military advantage?"

These proposals would upset rice bowls in the Defense Department and industry alike, making reform an uphill battle both politically and bureaucratically. "What I'm proposing, all of the services would fight against," Work admitted.

But Work remains highly respected in the defense policy world, and if Joe Biden becomes president in November, Work could well be back in the Pentagon again.

Then-Deputy Defense Secretary Robert Work (center) settles in for a congressional hearing, flanked by Adm. James Winnefeld (left) and comptroller Mike McCord (right).

Work has a lot of credibility in this area. A retired Marine Corps artillery officer, he spent years in government and think-tank positions, rising to Deputy Defense Secretary during the Obama administration. Work warned that China and Russia had advanced their military technology dramatically while the US waged guerrilla warfare in Afghanistan and Iraq, and he convinced Secretary Chuck Hagel to launch the Third Offset Strategy to regain America's high-tech edge. While the name died out after the Trump administration took power, the emphasis on great power competition in technology, especially AI, has only grown at the Trump Pentagon, despite the president's own ambivalence about containing China and his outright refusal to confront Russia's Vladimir Putin.

Just yesterday, the Defense Department released an alarming new report saying China has pulled ahead of the US in shipbuilding, missile defense, and offensive missiles. The People's Republic in large part owes its rapid advance to government-mandated collaboration between military and industry, the report says, with China harvesting foreign technology through both international collaboration and outright theft. There is "not a clear line between the PRC's civilian and military economies, raising due diligence costs for U.S. and global entities that do not desire to contribute to the PRC's military modernization," the report states.

Chinese weapons ranges (CSBA graphic)

Work likewise prioritizes China as the most dangerous competitor, although he notes that Russia is remarkably advanced in military robotics. But robotic vehicles and drones are just one aspect of AI and automation, Work says. Equally important, and a major strength for China, is the intangible autonomy of software systems and communications networks that comb through vast amounts of sensor data to spot targets, route supplies, schedule maintenance, and offer options to commanders.

Since algorithms are invisible, it's much harder to calculate the correlation of forces today than in the Cold War, when spyplanes and satellites could count tanks, planes, and ships. For example, Work said, the right automation package could convert obsolete fighter jets from scrapyard relics into lethal but expendable drones able to out-maneuver human pilots, a possibility hinted at in DARPA's recent AlphaDogfight simulation, where an AI beat a human pilot 5-0.

An AI-driven world will be rife with surprise, Work warned: "If you put a great new AI into a junky old MiG-21 or MiG-19, and you come up against it, it's going to surprise the heck out of a [US] pilot, because it's going to be a lot more capable than the platform itself might indicate."

Here is the original post:

We May Be Losing The Race For AI With China: Bob Work - Breaking Defense

Ai | Define Ai at Dictionary.com

Contemporary Examples

On Jan. 11, Ai was startled to learn authorities were razing his studio in Shanghai, weeks before the slated demolition date.

With characteristic wit, Ai turned the joke back on his captors.

"Girl, it ain't no less exciting," Weaver tells me as table mates egg her on.

He called him a mean word in that there book I ain't actually read!

But then again, they didn't really have ai, surrogacy and cloning to contend with back then, did they?

Historical Examples

I says to myself, I can't prevent her, ain't it best for me to help her?

I ain't felt so young in years as I have since Oscar and I had that clearing up.

I ain't fit to run this shebang, so we need you, and need you bad.

"I ain't got nothing to sell, and don't want to buy nohow," said Bart, violently.

You ain't got much to talk about, with a stummick like yours.

British Dictionary definitions for ai

Word Origin

C17: from Portuguese, from Tupi

artificial insemination

artificial intelligence

ai in Medicine

AI abbr. artificial insemination

ai in Science

Abbreviation of artificial insemination

Abbreviation of artificial intelligence

Related Abbreviations for ai

artificial insemination

artificial intelligence

Associate Investigator

ai in the Bible

ruins. (1.) One of the royal cities of the Canaanites (Josh. 10:1; Gen. 12:8; 13:3). It was the scene of Joshua's defeat, and afterwards of his victory. It was the second Canaanite city taken by Israel (Josh. 7:2-5; 8:1-29). It was rebuilt and inhabited by the Benjamites (Ezra 2:28; Neh. 7:32; 11:31). It lay to the east of Bethel, "beside Beth-aven." The spot which is most probably the site of this ancient city is Haiyan, 2 miles east from Bethel. It lay up the Wady Suweinit, a steep, rugged valley, extending from the Jordan valley to Bethel. (2.) A city in the Ammonite territory (Jer. 49:3). Some have thought that the proper reading of the word is Ar (Isa. 15:1).


No More Playing Games: AlphaGo AI to Tackle Some Real World Challenges – Singularity Hub

Humankind lost another important battle with artificial intelligence (AI) last month, when AlphaGo beat the world's leading Go player, Ke Jie, by three games to zero.

AlphaGo is an AI program developed by DeepMind, part of Google's parent company Alphabet. Last year it beat another leading player, Lee Se-dol, by four games to one, but since then AlphaGo has substantially improved.

Ke Jie described AlphaGo's skill as "like a god of Go."

AlphaGo will now retire from playing Go, leaving behind a legacy of games played against itself. They've been described by one Go expert as "like games from far in the future," which humans will study for years to improve their own play.

Go is an ancient game that essentially pits two players, one playing black pieces and the other white, against each other for dominance on a board usually marked with 19 horizontal and 19 vertical lines.

Go is a far more difficult game for computers to play than chess, because the number of possible moves in each position is much larger. This makes searching many moves ahead, which is feasible for computers in chess, very difficult in Go.
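The scale gap can be made concrete with a little arithmetic. In this sketch the branching factors (roughly 35 legal moves per chess position, roughly 250 per Go position) are commonly cited approximations, and the function name is ours:

```python
# Naive game-tree search examines about b**d positions at depth d,
# where b is the average branching factor. Approximate figures:
# chess ~35 moves per position, Go ~250.
def nodes_searched(branching_factor: int, depth: int) -> int:
    return branching_factor ** depth

for depth in (2, 4, 6):
    print(f"depth {depth}: chess ~{nodes_searched(35, depth):,}, "
          f"Go ~{nodes_searched(250, depth):,}")
```

Even at depth 6 the Go tree is already five orders of magnitude larger than the chess tree, which is why brute-force lookahead stops being an option.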

DeepMind's breakthrough was the development of general-purpose learning algorithms that can, in principle, be trained in domains of greater societal relevance than Go.

DeepMind says the research team behind AlphaGo is looking to pursue other complex problems, such as finding new cures for diseases, dramatically reducing energy consumption or inventing revolutionary new materials. It adds:

"If AI systems prove they are able to unearth significant new knowledge and strategies in these domains too, the breakthroughs could be truly remarkable. We cant wait to see what comes next."

This does open up many opportunities for the future, but challenges still remain.

AlphaGo combines the two most powerful ideas about learning to emerge from the past few decades: deep learning and reinforcement learning. Remarkably, both were originally inspired by how biological brains learn from experience.

In the human brain, sensory information is processed in a series of layers. For instance, visual information is first transformed in the retina, then in the midbrain, and then through many different areas of the cerebral cortex.

This creates a hierarchy of representations where simple, local features are extracted first, and then more complex, global features are built from these.

The AI equivalent is called deep learning; deep because it involves many layers of processing in simple neuron-like computing units.
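A minimal sketch of that layered idea, with made-up weights purely for illustration: each layer's outputs become the next layer's inputs, so later layers compute features of features.

```python
import math

# Each layer: weighted sums followed by a nonlinearity (tanh here).
# Outputs of one layer feed the next, building features of features.
def layer(inputs, weights):
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)))
            for ws in weights]

x = [0.5, -1.0, 0.25]                                # raw "sensory" input
h = layer(x, [[0.2, -0.4, 0.1], [0.7, 0.3, -0.2]])   # simple, local features
y = layer(h, [[1.0, -1.0]])                          # more global feature
print(y)
```

A real deep network differs only in scale (millions of weights, learned rather than hand-picked) and in the choice of nonlinearity.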

But to survive in the world, animals need to not only recognize sensory information, but also act on it. Generations of scientists and psychologists have studied how animals learn to take a series of actions that maximize their reward.

This has led to mathematical theories of reinforcement learning that can now be implemented in AI systems. The most powerful of these is temporal difference learning, which improves actions by maximizing expectation of future reward.
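As a concrete (if toy) illustration of temporal difference learning, here is TD(0) estimating state values on the classic five-state random walk; the environment and constants below are our own illustration, not from the article:

```python
import random

# TD(0) on a toy chain: states 0..6, terminals at 0 and 6, start at 3,
# step left/right at random, reward 1 only on reaching the right end.
# V then converges toward the probability of finishing on the right
# (true values for states 1..5 are 1/6 .. 5/6).
random.seed(0)
V = [0.0] * 7
alpha, gamma = 0.1, 1.0
for _ in range(5000):
    s = 3
    while s not in (0, 6):
        s2 = s + random.choice((-1, 1))
        r = 1.0 if s2 == 6 else 0.0
        target = r + (0.0 if s2 in (0, 6) else gamma * V[s2])
        V[s] += alpha * (target - V[s])   # the TD error drives the update
        s = s2
print([round(v, 2) for v in V[1:6]])
```

The update "improves expectation of future reward" exactly as described: each state's value is nudged toward the reward received plus the estimated value of the state that followed.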

By combining deep learning and reinforcement learning in a series of artificial neural networks, AlphaGo first learned human expert-level play in Go from 30 million moves drawn from human games.

But then it started playing against itself, using the outcome of each game to relentlessly refine its decisions about the best move in each board position. A value network learned to predict the likely outcome given any position, while a policy network learned the best action to take in each situation.

Although it couldn't sample every possible board position, AlphaGo's neural networks extracted key ideas about strategies that work well in any position. It is these countless hours of self-play that led to AlphaGo's improvement over the past year.

Unfortunately, as yet there is no known way to interrogate the network to directly read out what these key ideas are. Instead, we can only study its games and hope to learn from these.

This is one of the problems with using such neural network algorithms to help make decisions in, for instance, the legal system: they can't explain their reasoning.

We still understand relatively little about how biological brains actually learn, and neuroscience will continue to provide new inspiration for improvements in AI.

Humans can learn to become expert Go players based on far less experience than AlphaGo needed to reach that level, so there is clearly room for further developing the algorithms.

Also, much of AlphaGo's power is based on a technique called back-propagation learning that helps it correct errors. But the relationship between this and learning in real brains is still unclear.

The game of Go provided a nicely constrained development platform for optimizing these learning algorithms. But many real-world problems are messier than this and have less opportunity for the equivalent of self-play (for instance self-driving cars).

So, are there problems to which the current algorithms can be fairly immediately applied?

One example may be optimization in controlled industrial settings. Here the goal is often to complete a complex series of tasks while satisfying multiple constraints and minimizing cost.

As long as the possibilities can be accurately simulated, these algorithms can explore and learn from a vastly larger space of outcomes than will ever be possible for humans. Thus DeepMind's bold claims seem likely to be realized, and as the company says, we can't wait to see what comes next.

This article was originally published on The Conversation. Read the original article.


Researchers examine the ethical implications of AI in surgical settings – VentureBeat

A new whitepaper coauthored by researchers at the Vector Institute for Artificial Intelligence examines the ethics of AI in surgery, making the case that surgery and AI carry similar expectations but diverge with respect to ethical understanding. Surgeons are faced with moral and ethical dilemmas as a matter of course, the paper points out, whereas ethical frameworks in AI have arguably only begun to take shape.

In surgery, AI applications are largely confined to machines performing tasks controlled entirely by surgeons. AI might also be used in a clinical decision support system, and in these circumstances, the burden of responsibility falls on the human designers of the machine or AI system, the coauthors argue.

Privacy is a foremost ethical concern. AI learns to make predictions from large data sets (patient data, in the case of surgical systems), and it's often described as being at odds with privacy-preserving practices. The Royal Free London NHS Foundation Trust, a division of the U.K.'s National Health Service based in London, provided Alphabet's DeepMind with data on 1.6 million patients without their consent. Separately, Google, whose health data-sharing partnership with Ascension became the subject of scrutiny last November, abandoned plans to publish scans of chest X-rays over concerns that they contained personally identifiable information.

Laws at the state, local, and federal levels aim to make privacy a mandatory part of compliance management. Hundreds of bills that address privacy, cybersecurity, and data breaches are pending or have already been passed in 50 U.S. states, territories, and the District of Columbia. Arguably the most comprehensive of them all, the California Consumer Privacy Act, was signed into law roughly two years ago. That's not to mention the national Health Insurance Portability and Accountability Act (HIPAA), which requires companies to seek authorization before disclosing individual health information. And international frameworks like the EU's General Data Protection Regulation (GDPR) aim to give consumers greater control over personal data collection and use.

But the whitepaper coauthors argue measures adopted to date are limited by jurisdictional interpretations and offer incomplete models of ethics. For instance, HIPAA focuses on health care data from patient records but doesn't cover sources of data generated outside of covered entities, like life insurance companies or fitness band apps. Moreover, while the duty of patient autonomy alludes to a right to explanations of decisions made by AI, frameworks like GDPR only mandate a right to be informed and appear to lack language stating well-defined safeguards against AI decision making.

Beyond this, the coauthors sound the alarm about the potential effects of bias on AI surgical systems. Training data bias, which concerns the quality and representativeness of data used to train an AI system, could dramatically affect preoperative risk stratification. Underrepresentation of demographics might also cause inaccurate assessments, driving flawed decisions such as whether a patient is treated first or offered extensive ICU resources. And contextual bias, which occurs when an algorithm is employed outside the context of its training, could result in a system ignoring nontrivial caveats like whether a surgeon is right- or left-handed.

Methods to mitigate this bias exist, including ensuring variance in the data set, applying sensitivity to overfitting on training data, and having humans in the loop to examine new data as it's deployed. The coauthors advocate the use of these measures, and of transparency broadly, to prevent patient autonomy from being undermined. "Already, an increasing reliance on automated decision-making tools has reduced the opportunity of meaningful dialogue between the healthcare provider and patient," they wrote. "If machine learning is in its infancy, then the subfield tasked with making its inner workings explainable is so embryonic that even its terminology has yet to recognizably form." However, several fundamental properties of explainability have started to emerge, arguing that machine learning should be [simulatable], decomposable, and algorithmically transparent.
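The first of those mitigation steps, checking that the training set actually mirrors the target population, can be sketched in a few lines. The group labels, reference shares, and tolerance below are illustrative assumptions, not clinical guidance:

```python
from collections import Counter

# Sketch of a representativeness audit: compare the demographic mix of a
# training set against a reference population and flag large gaps.
def representation_gaps(train_groups, reference_shares, tolerance=0.10):
    counts = Counter(train_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (round(observed, 2), expected)
    return gaps

train = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
print(representation_gaps(train, {"A": 0.5, "B": 0.3, "C": 0.2}))
```

Checks like this catch only the crudest imbalances; contextual bias of the kind described above requires validating the model in the setting where it will actually be used.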

Despite AI's shortcomings, particularly in the context of surgery, the coauthors argue the harms AI can prevent outweigh the cons of adoption. For example, in thyroidectomy, there's a risk of permanent hypoparathyroidism and recurrent nerve injury. It might take thousands of procedures with a new method to observe statistically significant changes, which an individual surgeon might never observe, at least not in a short time frame. However, a repository of AI-based analytics aggregating these thousands of cases from hundreds of sites would be able to discern and communicate those significant patterns.

"The continued technological advancement in AI will sow rapid increases in the breadths and depths of their duties. Extrapolating from the progress curve, we can predict that machines will become more autonomous," the coauthors wrote. "The rise in autonomy necessitates an increased focus on the ethical horizon that we need to scrutinize ... Like ethical decision-making in current practice, machine learning will not be effective if it is merely designed carefully by committee; it requires exposure to the real world."


Fraud Detection with AI and Machine Learning – How Can It Work to Protect Your Business – TechFunnel

Fraud Detection Machine Learning Solution for Business Security

In fact, "business security" is just the tip of the iceberg, or rather an umbrella term. Machine learning systems can give your business more than you think.

Machine learning in itself is a very powerful tool for enhancing the user experience. Smart systems learn to understand users based on their actions, predict, customize, and hit the target. They also protect users from fraudulent attempts.

The simplest example is credit card fraud detection. Advanced online banking systems will not allow access to a client's personal account, or any money transfers, if the behavior pattern indicates possible fraud. In this case, improved user experience means your users' confidence that they are protected from fraudulent attempts as much as possible.
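As a toy sketch of behavior-pattern checking (our illustration, not any real bank's system), one can keep running statistics of a user's transaction amounts and flag sharp deviations before any money moves; the threshold and minimum history below are arbitrary assumptions:

```python
# Streaming anomaly score: maintain a running mean/variance of a user's
# transaction amounts (Welford's algorithm) and flag amounts that deviate
# sharply from the established pattern.
class StreamingDetector:
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold

    def observe(self, amount):
        """Return True if `amount` looks anomalous, then update the stats."""
        flagged = False
        if self.n >= 5:  # require some history before judging
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(amount - self.mean) / std > self.threshold:
                flagged = True
        self.n += 1
        delta = amount - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (amount - self.mean)
        return flagged

d = StreamingDetector()
history = [12.0, 15.0, 11.0, 14.0, 13.0, 12.5, 14.5]
flags = [d.observe(a) for a in history + [900.0]]
print(flags)
```

Production systems score many signals at once (device, location, merchant, timing) with learned models, but the shape is the same: judge each event against the user's established pattern in real time.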

According to a Harvard Business Review study, 90% of users surveyed said that a company's attentiveness to its customers' personal data shows its real attitude toward those customers. In other words, if you want to win user loyalty, careful handling of data and its comprehensive protection can help. Machine learning systems are able to track how data is stored, collected, and used, and, more generally, how well your procedures comply with GDPR. If potentially fraudulent or abnormal actions involving user data are detected, the system sends an alarm.

Fraudsters are, a priori, smart people; otherwise, they would not be able to come up with schemes that work. As for retail, this is a very attractive industry, since it is always possible to pretend to be a respectable buyer in order to deceive the seller. Machine learning systems are able to stop these attempts even at the stage of intent, for example, when users begin to place an order from a suspicious IP address that has already been noticed in fraudulent schemes.

Any successful fraudulent attempt means loss of money and reputation. It is much easier to recover money than reputation, which is exactly what you should not risk. Paradoxically, some companies refuse to confront fraud because they are afraid that doing so will damage their reputation, although in fact the opposite is true. Lacking a fraud-response strategy damages your reputation the most. And this is the opinion of most modern users.

So, how do machine learning systems work to provide a high level of protection against illegal attacks?

Rule-based systems detected fraud only once money had already been stolen. Modern systems work with constantly changing data in real time, so they are able to catch a fraudulent attempt even at the intention stage. Here's how it works.


Six Ways Marketers Need to Adapt in the AI Age – AdAge.com


Malcolm Gladwell, in "The Tipping Point," said, "If you want to bring a fundamental change in people's belief and behavior ... you need to create a community around them, where those new beliefs can be practiced and expressed and nurtured."

Our behaviors are changing: we ask Alexa to take our shopping orders; we rely on Maps to recompute our driving route due to traffic; we rely on Netflix to suggest the next show or movie; and we even let our cars detect the probability of collision and steer or brake as needed.

There's an evolving community around us, one that relies heavily on intelligent machines. Whether we're at an AI tipping point or not, it's clear that artificially instilled intellect is growing. As AI continues to change consumer behavior, brands must recognize how it can impact marketing initiatives.

1. Moving from keywords to context. Is the long-tail really a tail anymore? As search becomes conversational, particularly through the explosion of voice (Alexa, Google, Siri, Cortana), extracting context is more relevant, as possible permutations of keywords will rise exponentially (e.g. 4 core keywords = 24 queries; 8 keywords = 40,320 queries). While there are tactical ways to manage this, the imperative is that context becomes more important and AI can be used to extract it.

2. Using structured and unstructured data together. Marketing channels are intricately interconnected, and not all data points are structured. There's been a surge in brands trying to understand this connection through various attribution solutions. While a step in the right direction, attribution approaches are still mostly linear and largely use structured data. As AI-based solutions evolve, structured and unstructured data (signals, images, emoticons, sound, video, etc.) will become nodes in a neural network, dynamically determining exposure to each touchpoint. This will cause a shift in how channel performance is managed, measured and reported.

3. Creating shorter search funnels, with more content optimizations. Cognitive AI solutions aim to connect users with the information they're seeking, faster and with a higher degree of relevance. As these systems get more predictive, it's plausible that drill-down searches will be reduced, and the responsibility of delivering the right information, in the context of the search, will fall on the systems that render the content. This implies that content optimization will not only need to be connected tightly with the ad delivery system, but will also rely more on AI and decisioning algorithms.

4. Training the AI system. The most challenging task in any good AI application is training the AI algorithm. This requires large amounts of good data (and content) for training, as well as validation, which can be an intensive and costly effort. As brands look to use AI systems, they'll need to account for these costs.

5. Rethinking segmentation. Segmentation is about creating "segments" that are more likely to convert. As AI systems evolve, a user's interaction with the "next" stage is always maximized to the highest probability. As such, preformed segments lose their meaning. Accurate understanding of context will drive conversion. Thus, we'll need to rethink when and where we use classic audience segments vs. AI.

6. Assessing your data science capabilities. The boundaries around what machines can do are being redefined. Therefore, brands must also redefine their data science capabilities. Cognitive AI systems, neural networks, predictive analytics using AI, self-learning systems -- all these things require a heavy dependence on data science. To adequately consider these factors, a robust analytics team is a requirement.
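The keyword arithmetic in point 1 is just factorial growth in orderings, easy to verify:

```python
from math import factorial

# Orderings (permutations) of n keywords grow as n!, which is why
# conversational queries balloon: 4 keywords -> 24, 8 -> 40,320.
for n in (4, 8):
    print(f"{n} keywords -> {factorial(n):,} possible orderings")
```

That growth is the practical argument for matching on extracted context rather than enumerating keyword variants.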

Brands should take this plunge, as leading this evolution of intellect is quickly becoming a key lever in driving performance.


$9m raised to develop process-based AI solution – Foodprocessing

Based in Israel, Seebo has announced the completion of a $9 million funding round, led by Ofek Ventures, with the participation of Vertex Ventures, and existing investors Viola Ventures and TPY Capital.

The company will use the funding to further expand its global reach and continue enhancing its process-based artificial intelligence (AI) solution. The solution enables manufacturers to predict and prevent production losses and master complex production processes, and is designed to save costs for users.

There is a growing demand for the company's AI solution, as manufacturers seek new ways to prevent losses and optimise their processes. Current users include Nestlé, Barilla, Mondelēz, Allnex and the Volkswagen Group.

Ofek Ventures partner Itay Rand said he was excited to be investing in the company. "Manufacturers today understand that in order to compete successfully they must adopt effective process optimisation capabilities, and there is a clear recognition that industrial artificial intelligence and a data-driven approach are fundamental to achieving that goal," he said.

According to Lior Akavia, Seebo CEO and Co-Founder, in order to prevent losses and continuously master complex production processes, manufacturers need a technological solution that understands the unique complexity of the production lines, is scalable across various manufacturing lines and is easy for production teams to use.

The coronavirus (COVID-19) pandemic has also changed the face of manufacturing, with companies having to adapt to shifting customer demand. Manufacturers have faced supply chain disruptions, implemented new regulations for employees and moved towards optimisation of remote processes.

"The coronavirus pandemic has spurred a search for more efficient, effective ways to identify and prevent process inefficiencies overall, which lies at the heart of what Seebo does. Data-driven decision-making is critical in our new reality, as manufacturers must adapt quickly and implement changes effectively. Those manufacturers who meet this challenge today will gain a competitive edge in tomorrow's marketplace," Akavia said.

Image credit: stock.adobe.com/au/Aleksandr Matveev


Revolution AI: How Edmonton is gaining ground as a research and innovation hub for AI – Financial Post

The AI community in Edmonton may have gotten off to a quiet start, but it is now viewed as a world leader in the field. This status is driving efforts to build a startup ecosystem to attract VCs and corporations hungry for AI solutions.

The community's renown can be attributed in large part to the work of Richard Sutton, a computing science professor at the University of Alberta who heads up the Reinforcement Learning and Artificial Intelligence research program (RLAI).

Sutton is considered the pre-eminent researcher in reinforcement learning, a subset of AI he describes as "learning by trial and error."

Deep (or machine) learning is more supervised: you are given a training set with many examples of how systems should behave. Reinforcement learning systems, by contrast, interact and figure out for themselves the best things to do to achieve their goals.

Reinforcement learning is being used in ad and article placement on websites, and in schedule and resource management. It's often used in tandem with machine learning techniques (autonomous vehicles being a case in point).

Sutton's latest mission is to work with various groups to develop a stronger economy for potential AI companies. It's a timely goal, given that the demand for AI is estimated to reach $45 trillion by 2025, he says. "We now have major companies coming into Edmonton, and there's a lot of excitement around this activity in the startup community."

When we started, you couldn't get attention unless you were talking about oil.

Sutton is also head advisor for the university's Alberta Machine Intelligence Institute (AMII), established 15 years ago to promote research in AI and machine learning.

"Now there is a commercial piece that goes with that research," says executive director Schuler.

There needs to be a demand-driven component to this. Professors come up with things that can apply to industry. We find industry problems we can create startups around.

Last fall, AMII announced its first official startup: PFM Scheduling Services, an AI solution for automating schedule-building processes in healthcare. Plans are to launch three to four additional startups over the next 12 to 18 months, or possibly more, Schuler says.

Another step forward has been the recent RBC announcement establishing an AI research lab in Edmonton.

Gabriel Woo, VP of innovation at RBC Research in Toronto, says while Toronto's and Montreal's AI ecosystems are further along, "you have a comparable academic lab at AMII, and it is home to Sutton, who literally wrote the textbook on reinforcement learning that is being read around the world. Because of that, we are partnering with them to create and fuel opportunities to help that talent stay in Edmonton."

Woo believes the community can expect to see more investors and startups in the near future. If we are able to provide opportunities for them to apply their research, it will attract more attention from VCs and others and increase the opportunities for commercialization.

Cam Linke, co-founder of Startup Edmonton, says the city is starting to see more startups take advantage of this newfound interest in AI. "When we started, you couldn't get attention unless you were talking about pulling fermented dinosaurs out of the ground. Now it's great to see attention is on more sectors than oil."

Startups are working with AI in two ways, he explains. They either have it as a core part of what they are doing; or they are amplifying what is being done already using machine learning to target the right people and product offerings.

Edmonton is a perfect venue for this, Linke says. We have a large number of industries (oil and gas, finance, healthcare) with big problems to solve, and the data and ability to use it. A lot of startups are being created because they can now combine AI techniques with these industry ties to create a company. Within that mix are the researchers at the university.

The only problem, Linke says, is that Edmonton has been too quiet about what it has been doing. Whether by accident or not, we have ended up with a fantastic group of AI researchers. Now were dealing with a good core of startups and connecting great talent to great companies, and multi-national companies are noticing what we are doing.

One particular point of pride for Sutton is that they are now keeping talent at home. AI is a global world and Canada is a world leader. Canada is punching above its weight and we're trying to keep it that way. To do that we have to ensure there are business opportunities here.

As Schuler notes: "If you want to drill for oil and gas, you would do it in Alberta, not Washington State. The argument is the same for AI. We have one of the best groups in the world, so how do we capitalize on this? By building industries, attracting companies and reinvesting."

It should be natural for people to want to come here because of the asset we have.


Texas Hold’em AI Bot Taps Deep Learning to Demolish Humans – IEEE Spectrum

A fresh Texas Hold'em-playing AI terror has emerged barely a month after a supercomputer-powered bot claimed victory over four professional poker players. But instead of relying on a supercomputer's hardware, the DeepStack AI has shown how it too can decisively defeat human poker pros while running on a GPU chip equivalent to those found in gaming laptops.

The success of any poker-playing computer algorithm in heads-up, no-limit Texas Hold'em is no small feat. This version of two-player poker with unrestricted bet sizes has 10^160 possible plays at different stages of the game, more than the number of atoms in the entire universe. But the Canadian and Czech researchers who developed the new DeepStack algorithm leveraged deep learning technology to create the computer equivalent of intuition and reduce the possible future plays that needed to be calculated at any point in the game to just 10^7. That enabled DeepStack's fairly humble computer chip to figure out its best move for each play within five seconds and handily beat poker professionals from all over the world.

"To make this practical, we only look ahead a few moves deep," says Michael Bowling, a computer scientist and head of the Computer Poker Research Group at the University of Alberta in Edmonton, Canada. "Instead of playing from there, we use intuition to decide how to play."

This is a huge deal beyond just bragging rights for an AI's ability to beat the best human poker pros. AI that can handle complex poker games such as heads-up, no-limit Texas Hold'em could also tackle similarly complex real-world situations by making the best decisions in the midst of uncertainty. DeepStack's poker-playing success while running on fairly standard computer hardware could make it much more practical for AI to tackle many other imperfect-information situations involving business negotiations, medical diagnoses and treatments, or even guiding military robots on patrol. Full details of the research are published in the 2 March 2017 online issue of the journal Science.

Imperfect-information games have represented daunting challenges for AI until recently because of the seemingly impossible computing resources required to crunch all the possible decisions. To avoid the computing bottleneck, most poker-playing AIs have used abstraction techniques that combine similar plays and outcomes in an attempt to reduce the number of overall calculations needed. They solved for a simplified version of heads-up, no-limit Texas Hold'em instead of actually running through all the possible plays.

Such an approach has enabled AI to play complex games from a practical computing standpoint, but at the cost of huge weaknesses in their abstracted strategies that human players can exploit. An analysis showed that four of the top AI competitors in the Annual Computer Poker Competition were beatable by more than 3,000 milli-big-blinds per game, in poker parlance. That performance is four times worse than if the AI simply folded and gave up the pot at the start of every game.

DeepStack takes a very different approach that combines both old and new techniques. The older technique is an algorithm developed by University of Alberta researchers that previously helped come up with a solution for heads-up, limit Texas Hold'em (a simpler version of poker with restricted bet sizes). This counterfactual regret minimization algorithm, called CFR+ by its creators, comes up with the best possible play in a given situation by comparing different possible outcomes using game theory.
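The full CFR+ machinery is beyond a news piece, but its core update, regret matching, fits in a short sketch. This toy version plays rock-paper-scissors against itself; everything here (payoffs, iteration count) is our illustration, not DeepStack's actual code:

```python
import random

# Regret matching, the building block of CFR-style solvers: play each action
# in proportion to its accumulated positive regret. In self-play on
# rock-paper-scissors, the average strategy tends toward the uniform
# equilibrium.
random.seed(1)
ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[me][opp]

def strategy(regret):
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / ACTIONS] * ACTIONS

regret = [0.0] * ACTIONS
strategy_sum = [0.0] * ACTIONS
for _ in range(20000):
    s = strategy(regret)
    strategy_sum = [a + b for a, b in zip(strategy_sum, s)]
    me = random.choices(range(ACTIONS), weights=s)[0]
    opp = random.choices(range(ACTIONS), weights=s)[0]  # self-play
    for a in range(ACTIONS):  # regret = what a would have earned vs. what me earned
        regret[a] += PAYOFF[a][opp] - PAYOFF[me][opp]

avg = [x / sum(strategy_sum) for x in strategy_sum]
print([round(p, 2) for p in avg])
```

CFR extends this update to every decision point of a game tree, weighting regrets by the probability of reaching each point, which is where the "counterfactual" comes from.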

By itself, CFR+ would still run into the same problem of the computing bottleneck in trying to calculate all possible plays. But DeepStack gets around this by having the CFR+ algorithm solve only for a few moves ahead instead of all possible moves until the end of the game. For all the other possible moves, DeepStack turns to its own version of intuition: the equivalent of a gut feeling about the value of the hidden cards held by both poker players. To train DeepStack's intuition, researchers turned to deep learning.

Deep learning enables AI to learn from example by filtering huge amounts of data through multiple layers of artificial neural networks. In this case, the DeepStack team trained their AI on the best solutions of the CFR+ algorithm for random poker situations. That allowed DeepStacks intuition to become a fast approximate estimate of its best solution for the rest of the game without having to actually calculate all the possible moves.

"DeepStack presents the right marriage between imperfect-information solvers and deep learning," Bowling says.

But the success of the deep learning component surprised Bowling. He thought the challenge would prove too tough even for deep learning. His colleagues Martin Schmid and Matej Moravcik, both first authors on the DeepStack paper, were convinced that the deep learning approach would work. They ended up making a private bet on whether or not the approach would succeed. ("I owe them a beer," Bowling says.)

DeepStack proved its poker-playing prowess in 44,852 games played against 33 poker pros recruited by the International Federation of Poker from 17 countries. Typically researchers would need to have their computer algorithms play a huge number of poker hands to ensure that the results are statistically significant and not simply due to chance. But the DeepStack team used a low-variance technique called AIVAT that filters out much of the chance factor and enabled them to come up with statistically significant results with as few as 3,000 games.

"We have a history in [our] group of doing variance reduction techniques," Bowling explains. "This new technique was pioneered in our work to help separate skill and luck."
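AIVAT's details are in the paper, but the underlying idea, subtracting a luck term whose expectation is known, is a standard control-variate trick. A generic toy (our numbers, not AIVAT itself):

```python
import random
import statistics

# Control variate, the idea behind luck-correction: subtract a "luck" term
# with known expectation (zero here) from each observed result, leaving an
# unbiased but lower-variance estimate of skill.
random.seed(2)

def play_hand():
    luck = random.gauss(0, 10)       # chance factor, e.g. the cards dealt
    play_noise = random.gauss(0, 1)  # residual variation in play
    winnings = 0.5 + luck + play_noise   # true per-hand edge is 0.5
    return winnings, luck

raw, corrected = [], []
for _ in range(3000):
    winnings, luck = play_hand()
    raw.append(winnings)
    corrected.append(winnings - luck)    # luck has known expectation 0

print(round(statistics.stdev(raw), 2), round(statistics.stdev(corrected), 2))
```

Because the subtracted term averages to zero, the corrected estimate of skill stays unbiased while its variance collapses, which is how far fewer games can still yield statistical significance.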

Of all the players, 11 poker pros completed the requested 3,000 games over a period of four weeks, from November 7 to December 12, 2016. DeepStack handily beat 10 of the 11 with a statistically significant victory margin, and still technically beat the 11th player. DeepStack's victory margin, as analyzed by AIVAT, was 486 milli-big-blinds per game (mbb/g). That's quite a showing, given that 50 mbb/g is considered a sizable margin of victory among poker pros. This victory margin also amounted to more than 20 standard deviations from zero in statistical terms.
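Why variance reduction lets a study reach significance with fewer games can be sketched with a standard-error calculation. The per-game standard deviations used below are hypothetical illustrations (the article does not report them); AIVAT's contribution is to shrink that standard deviation, so the same observed win rate sits many more standard errors above zero for the same number of hands.

```python
import math

def z_score(mean_mbb, per_game_sd_mbb, n_games):
    """Number of standard errors an observed win rate (in mbb/g)
    sits above zero, given a per-game standard deviation."""
    standard_error = per_game_sd_mbb / math.sqrt(n_games)
    return mean_mbb / standard_error

# Hypothetical per-game standard deviations, for illustration only:
raw = z_score(486, 5000, 3000)      # luck-heavy, unadjusted results
reduced = z_score(486, 1500, 3000)  # after AIVAT-style variance reduction
```

With these illustrative numbers, the unadjusted z-score is about 5.3, while the variance-reduced one is about 17.7: the same 486 mbb/g margin becomes far more convincingly non-random once much of the luck is filtered out.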

News of DeepStack's success is just the latest blow to human poker-playing egos. A Carnegie Mellon University AI called Libratus achieved its own statistically significant victory against four poker pros during a marathon tournament of 120,000 games played in January 2017. That heavily publicized event led some online poker fans to fret about the possible death of the game at the hands of unbeatable poker bots. But to achieve victory, Libratus still calculated its main poker-playing strategy ahead of time based on abstracted game solving, a computer- and time-intensive process that required 15 million processor-core hours on a new supercomputer called Bridges.

Worried poker fans may have even greater cause for concern with the success of DeepStack. Unlike Libratus, DeepStack's remarkably effective forward-looking intuition means it does not have to do any extra computing beforehand. Instead, it always looks forward by solving for actual possible plays several moves ahead, and then relies on its intuition to approximate the rest of the game.

This continual re-solving approach, which can take place at any given point in a game, is a step beyond the endgame solver that Libratus used only during the last betting rounds of each game. And the fact that DeepStack's approach works on the hardware equivalent of a gaming laptop could mean the world will see the rise of many more capable AI bots tackling a wide variety of challenges beyond poker in the near future.

"It does feel like a breakthrough of the sort that changes the types of problems we can apply this to," Bowling says. "Most of the work of applying this to other problems becomes whether we can get a neural network to apply this to other situations, and I think we have experience with using deep learning in a whole variety of tasks."



Continued here:

Texas Hold'em AI Bot Taps Deep Learning to Demolish Humans - IEEE Spectrum

Every business will rely on AI in five years and most people are worried they're being left behind – Evening Standard

Microsoft believes that every business will be an AI business in the next five years, but there are concerns that people don't fully understand the technology and will be left behind in the AI revolution.

Ahead of Future Decoded, the tech giant's annual conference at the ExCel Centre in London, it released a new report, named "Accelerating Competitive Advantage with AI," covering how businesses across the UK are using the technology.

The report shows more awareness and adoption of AI among businesses overall, with 56 per cent of businesses adopting AI. However, less than a quarter of these organisations (24 per cent) have an AI strategy, and 96 per cent of employees surveyed report that their bosses are adding AI without consulting them on the technology. This is fuelling anxiety around the technology, as well as concerns over job security.

"Based on the progress we're seeing, we believe that every company will be an AI company in five years," Microsoft's UK COO Clare Barclay told the Standard. "As organisations start to use or think about using [AI], we want to encourage more open dialogue on this topic."


"Open communication is absolutely critical."

To encourage discussion and education around AI, Microsoft is launching a new AI Business School in the UK. It has been running as a pilot for the past 12 months, and focuses on explaining the technology of AI, how it can inform strategies, and the culture around it. For instance, one aspect of the programme will focus on the ethical decisions leaders have to make when it comes to AI, including how to construct and implement an ethical AI framework.

But it's not just for business leaders; while it will function as a physical space, there are also plans to implement online workshops for anyone to access. "We want to make sure we're driving broad skills development. We've made a commitment to train 30,000 public sector employees and 500,000 UK citizens as part of that," adds Barclay.

Microsoft isn't the only tech company examining the role of AI in the UK. Recently, Samsung launched its new FAIR Future Initiative, which aims to educate the public on AI in order to involve everyone in the deployment of the tech. According to research carried out by Samsung, which surveyed 5,250 people in the UK and Ireland, 51 per cent feel AI will have a positive impact on society as a whole, while around 90 per cent of people feel it is too complex to understand.

"That's a significant challenge," Teg Dosanjh, director of Connected Living at Samsung UK and Ireland, told the Standard. "We want to tackle that by having an online hub, so people can understand what AI is today and the terminology, demystifying it and raising awareness around that."

As well as the FAIR Future online hub, Samsung is taking its FAIR Future work on the road to encourage people across the UK to get hands-on with the tech. Its first stop will be at the Norwich Science Festival later this month.


Samsung has been in the AI space for a while, particularly when it comes to smart devices and appliances in the home with its Connected Things platform. So why is now the right time for it to investigate attitudes to AI in real life? Dosanjh says it comes from the way the technology is accelerating.

"We're not talking about just algorithms anymore, typically consistent calculations; you've got these neural networks. And the technology and the capabilities of neural networks and machine learning have developed significantly over the last five years."

He believes the onus is on tech companies to explain what's going on in the industry. The survey found that most people gained their knowledge of AI from the media, word of mouth, and fiction, leaving governments and tech companies languishing behind. "We haven't done a very good job of bringing people on the journey with us on AI. So we have to start at some point and involve everyone in that discussion."

"More of our lives and society are going to be impacted by AI, and we've got to be very conscious of how we capitalise and bring those opportunities to life."


AI’s $37 billion market is creating new industries – VentureBeat

Artificial intelligence (AI) is already trending, and it's still heating up.

There are countless applications for AI: everything from procuring better search results to diagnosing complex medical conditions.

Developers and engineers are pooling their resources to create the best AI algorithms they can, bringing the technology to new industries and pushing the limits of what machine learning can accomplish. The market for AI is projected to hit $36.8 billion by 2025, and may only grow from there as general AI gets closer and closer to human-level functionality.

But the growing wave of AI is about more than just the AI industry. In fact, there are dozens of secondary tech industries that are developing or growing in response to AI's growing needs, and they're worth considering if you're looking for promising investments, or for a new career path that can support AI without getting into the thick of machine learning.

AI algorithms typically rely on multiple moving parts at once, which puts a heavy demand on processing power. IBM's Watson, for example, notorious for its victory over human Jeopardy champions back in 2011, drew power from 90 interlinked IBM Power 750 servers. Each of those used a 3.5 GHz POWER7 8-core processor, with 4 threads per core. Overall, that's 16 TB of RAM; compare that to gaming PCs, even the most advanced of which rely on only 64 GB of RAM. Oh, and don't forget that Watson, while still impressively complicated, is six years old at this point. Needs are only going to increase from here.
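The hardware figures above multiply out as follows. This is just back-of-the-envelope arithmetic on the numbers reported in the text; the 16 TB total is taken from the article itself rather than derived from the per-server specs.

```python
# Back-of-the-envelope arithmetic for the Watson cluster described above.
servers = 90
cores_per_server = 8
threads_per_core = 4
total_hardware_threads = servers * cores_per_server * threads_per_core

watson_ram_gb = 16 * 1024   # 16 TB, as reported in the article
gaming_pc_ram_gb = 64       # a high-end gaming PC
ram_ratio = watson_ram_gb // gaming_pc_ram_gb

print(total_hardware_threads)  # 2880 hardware threads across the cluster
print(ram_ratio)               # Watson held 256x the RAM of a gaming PC
```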

In response, processing chip companies like Nvidia are scrambling to try and produce processors that are specialized to support deep learning algorithms. Nvidia recently announced a Tesla V100 chip that provides more power for less energy and could hypothetically increase the power of a data center many times over.

Ex-Google employees have also come together to found a company called Groq, also racing to produce a better chip to support machine learning.

Next, AI algorithms need massive amounts of data storage. These machine learning algorithms need to be fed copious amounts of data if they're going to succeed in learning what they're programmed to learn.

Watson, for example, reviewed the entire text of Wikipedia, while Google's DeepMind played and stored countless Go matches to ready itself to beat the world champion. Self-driving cars, which will collect data on their environments to get better and safer at driving humans to their destinations, are estimated to create a whopping 4 TB of data every day, and that's per car.
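That per-car figure scales up quickly. Here is a rough projection for a hypothetical fleet; the fleet size below is an illustrative assumption, not a number from the article, and only the 4 TB/day estimate comes from the text above.

```python
# Rough projection of fleet-wide data volume from the per-car estimate.
tb_per_car_per_day = 4   # estimate quoted above
fleet_size = 1000        # hypothetical fleet size, for illustration
days_per_year = 365

total_tb_per_year = tb_per_car_per_day * fleet_size * days_per_year
total_pb_per_year = total_tb_per_year / 1024  # convert TB to (binary) PB

print(total_pb_per_year)  # ~1426 PB per year for a 1,000-car fleet
```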

We need a cheap, reliable way to store our data. Thankfully, we already have access to some impressive forms of data storage, but technological futurists are moving ahead to create even better methods of storage. For example, researchers from the University of Southampton have created a method to store data in five dimensions (rather than two), embedded in glass that could last, well, practically forever. Finding a way to roll out new technologies like these for use with advanced AI could be a lucrative economic opportunity.

We're approaching the age of superintelligence, the hypothetical point at which AI becomes far more intellectually capable than its human creators. There are many ethical concerns (and existential concerns) associated with this, from the problem of defining consciousness to the socioeconomic repercussions of distributing that power, many of which have been consolidated by philosopher Nick Bostrom (and followed closely by industry figureheads like Elon Musk, Stephen Hawking, and Bill Gates).

The race is on to see not just who can develop AI, but who can develop security measures, ethical standards, and political frameworks that allow that AI to exist without threatening our way of life (or our future). That includes beefing up cybersecurity, modeling different ways to control AI, and figuring out the best open source frameworks to ensure the entire world has equal, responsible access to these powerful tools.

AI is attracting thousands of people who want to build the machines that may take us into the next great age of humanity. However, we can't forget about the important secondary tech industries that AI both needs and supports. Only with them can AI continue to grow at its already astounding rate.

Larry Alton is a contributing writer at VentureBeat covering artificial intelligence.
