Legaltech 2017: Announcements, AI, And The Future Of Law – Above the Law

I spent most of last week in the Midtown Hilton in New York City attending Legaltech 2017, or Legalweek: The Experience, or some sort of variation of the two. For the most part, it pretty much had the same feel as every other Legaltech I've attended. But I agree with my fellow Above the Law tech columnist, Bob Ambrogi, that ALM deserves kudos for trying to change the focus a bit. It may take a year or two of experimentation to get it right, but at least they're trying.

This year, one of the topics that popped up over and over throughout the conference was artificial intelligence and its potential impact on the practice of law. In part the AI focus was attributable to the keynote speaker on the opening day of the conference, Andrew McAfee, author of The Second Machine Age (affiliate link). His talk focused on ways that AI would disrupt business as usual in the years to come. His predictions were in part premised on his assertion that key technologies had improved greatly in recent years and that, as a result, we are in the midst of a convergence of these technologies such that AI is finally coming of age.

I was particularly excited about this keynote since I'd started reading McAfee's book in mid-December after Klaus Schauser, the CTO of AppFolio, MyCase's parent company, recommended it to me. As McAfee explains in his book, it's abundantly clear that AI is already having an incredible impact on other industries.

But what about the legal industry? I started mulling over this issue last September after attending ILTA in D.C. and writing about a few different legal software platforms grounded in AI concepts. Because I find this topic to be so interesting, I decided to hone in on it during my interviews at Legaltech as well, which I livestreamed via Periscope.

First I met with Mark Noel, managing director of professional services at Catalyst Repository Systems. After he shared the news of Catalyst's latest release, Insight Enterprise, a platform for corporate general counsel designed to centralize and streamline discovery processes, we turned to AI and his thoughts on how it will affect the legal industry over the next year. He believes that AI will eventually manage the more tedious parts of practicing law, thus allowing lawyers to focus on the analytical aspects that tend to be more interesting: "Some of the types of tasks lawyers are best at, I don't see AI taking over anytime soon. A lot of what lawyers work with is justice, fairness, and equity, which are more abstract. The ultimate goal of legal practice the human practitioner is going to have to do, but the grunt work and repeatable stuff, like discovery, which is becoming more onerous because of growing data volumes, those are the kinds of things these tools can take over for us." You can watch the full interview here.

Next I spoke with AJ Shankar, the founder of Everlaw, an ediscovery platform that recently rolled out an integrated litigation case management tool as well, which I wrote about here. According to AJ, AI is undergoing a renaissance across many different industries. But when it comes to the legal space, it's a different story: "AI is not ready to make the tough judgments that lawyers make, but it is ready to augment human processes. AI will become a very important assistant for you. It will work hand in hand with humans who will then provide the valuable context." You can watch the full interview here.

I also met with Jack Grow, the president of LawToolBox, which provides calendaring and docketing software, and he talked to me about their latest integration with DocuSign. Then we moved on to AI, and Jack suggested that in the short term, the focus would be on aggregating the data needed to build useful AI platforms for the legal industry: "Over the next year software vendors will figure out how to collect better data that can be consumed for analysis later on, so it can be put into an algorithm to make better use of it. They'll be building the foundation and infrastructure so that they can later take advantage of artificial intelligence." You can watch the full interview here.

And last but certainly not least, I spoke with Jeremiah Kelman, the president of Everchron, a company that I've covered previously, which provides a collaborative case management platform for litigators. Jeremiah predicts that AI will provide very targeted and specific improvements for lawyers: "Replacement of lawyers sounds interesting, but it's more about leveraging the information you have and the data that is out there and using it to provide insights and give direction to lawyers as they do their tasks and speed up what they do. From research, ediscovery, case management, and things across the spectrum, we'll see it in targeted areas and you'll get the most impact from leveraging and improving within the existing framework." You can watch the full interview here.

Nicole Black is a Rochester, New York attorney and the Legal Technology Evangelist at MyCase, web-based law practice management software. She's been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, and is a co-author of Social Media for Lawyers: the Next Frontier and Criminal Law in New York. She's easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter @nikiblack and she can be reached at niki.black@mycase.com.


Surge in Remote Working Leads iManage to Launch Virtual AI University for Companies that Want to Harness the Power of the RAVN AI Engine -…

CHICAGO, April 09, 2020 (GLOBE NEWSWIRE) -- iManage, the company dedicated to transforming how professionals work, today announced that it has rolled out a virtual Artificial Intelligence University (AIU), as an adjunct to its customer on-site model. With the virtual offering, legal and financial services professionals can actively participate in project-driven, best-practice remote AI workshops that use their own, real-world data to address specific business issues even amidst the disruption caused by the COVID-19 outbreak.

AIU helps clients to quickly and efficiently learn to apply machine learning and rules-based modeling to classify, find, extract and analyze data within contracts and other legal documents for further action, often automating time-consuming manual processes. In addition to delivering increases in speed and accuracy of data search results, AI frees practitioners to focus on other high-value work. Driven both by the need of organizations to reduce operational costs and to adapt to fundamental shifts toward remote work practices, virtual AIU is playing an important role in helping iManage clients continue to work and collaborate productively. The curriculum empowers end users with all the skills they need to quickly ramp up the efficiency and breadth of their AI projects using the iManage RAVN AI engine.

"Participating in AIU was a huge win for us. We immediately saw the impact AI would have in surfacing information we need and allowing us to action it to save time, money and frustration," said Nikki Shaver, Managing Director, Innovation and Knowledge, Paul Hastings. "The workshop gave us deep insight into how to train the algorithm effectively for the best possible effect. And, very quickly, more opportunities came to light as to how AI could augment our business in the longer term," continued Shaver.

"AI is a transformation technology that's continuing to gain momentum in the legal, financial and professional services sectors. But many firms don't yet have the internal knowledge or training to deliver on its promise. iManage is committed to helping firms establish AI Centers of Excellence, not just sell them a kit and walk away," said Nick Thomson, General Manager, iManage RAVN. "We've found the best way to ensure client success is to educate and build up experience inside the firm about how AI works and how to apply it to a broad spectrum of business problems."

Deep Training Delivers Powerful Results

iManage AIU's targeted, hands-on training starts with the fundamentals but delves much deeper, enabling organizations to put the flexibility and speed of the technology to work across myriad scenarios. RAVN easily helps facilitate actions like due diligence, compliance reviews or contract repapering, as well as more sophisticated modeling that taps customized rule development to address more unique use cases.

The advanced combination of machine learning and rules-based extraction capabilities in RAVN makes it the most trainable platform on the market. Users can teach the software what to look for, where to find it and then how to analyze it using the RAVN AI engine.
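
To make the phrase "rules-based extraction" concrete, here is a generic, minimal sketch of a hand-written rule that pulls a governing-law clause out of contract text. It is not the RAVN engine's actual API or logic; the regular expression and example clause are illustrative assumptions only.

```python
# Generic illustration of rules-based clause extraction from contract text.
# This is NOT the RAVN engine; it only demonstrates the idea of "teach it
# what to look for and where to find it" with a hand-written rule.
import re

GOVERNING_LAW = re.compile(
    r"governed by(?: and construed in accordance with)? the laws of ([A-Z][\w\s]+?)[.,]",
    re.IGNORECASE,
)

contract = (
    "This Agreement shall be governed by and construed in accordance with "
    "the laws of the State of New York, without regard to conflict of law principles."
)

match = GOVERNING_LAW.search(contract)
if match:
    print("Governing law:", match.group(1).strip())  # -> "the State of New York"
```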

Armed with the tools and training to put AI to work across their data stores and documents, AIU graduates can help their organizations unlock critical knowledge and insights in a repeatable way across the enterprise.

Interactive Curriculum Builds Strong Skillsets

The personalized, interactive course is delivered over three half-day sessions, via video conferencing, to a small team of customer stakeholders. Such teams may include data scientists, knowledge managers, lawyers, partners, contract specialists, and trained legal staff. AIU is also available to firms that are considering integrating the RAVN engine and would like to see AI in action as they assess the potential impact of the solution on their businesses.

Expert iManage AI instructors, with deep technology and legal expertise, work with clients in advance to help identify use cases for the virtual AIU. The iManage team fully explores client use cases prior to the training to facilitate the most effective approach to extraction techniques for client projects.

The daily curriculum includes demonstrations with user data and individual and group exercises to evaluate and deepen user skills. Virtual breakout rooms for project drill down and feedback mechanisms, such as polls and surveys, help solidify learning and make the sessions more interactive. Recordings and transcripts allow customers to revisit AIU sessions at any time.

For more information on iManage virtual AIU or on-site training read our AI blog post or contact us at AIU@imanage.com.

Follow iManage via: Twitter: https://twitter.com/imanageinc LinkedIn: https://www.linkedin.com/company/imanage

About iManage
iManage transforms how professionals in legal, accounting and financial services get work done by combining artificial intelligence, security and risk mitigation with market leading document and email management. iManage automates routine cognitive tasks, provides powerful insights and streamlines how professionals work, while maintaining the highest level of security and governance over critical client and corporate data. Over one million professionals at over 3,500 organizations in over 65 countries, including more than 2,500 law firms and 1,200 corporate legal departments and professional services firms, rely on iManage to deliver great client work securely.

Press Contact: Anastasia Bullinger, iManage, +1.312.868.8411, press@imanage.com


True AI cannot be developed until the ‘brain code’ has been cracked: Starmind – ZDNet

Marc Vontobel, CTO & Pascal Kaufmann, CEO, Starmind

Artificial intelligence is stuck today because companies are likening the human brain to a computer, according to Swiss neuroscientist and co-founder of Starmind Pascal Kaufmann. However, the brain does not process information, retrieve knowledge, or store memories like a computer does.

When companies claim to be using AI to power "the next generation" of their products, what they are unknowingly referring to is the intersection of big data, analytics, and automation, Kaufmann told ZDNet.

"Today, so called AI is often just the human intelligence of programmers condensed into source code," said Kaufmann, who worked on cyborgs previously at DARPA.

"We shouldn't need 300 million pictures of cats to be able to say whether something is a cat, cow, or dog. Intelligence is not related to big data; it's related to small data. If you can look at a cat, extract the principles of a cat like children do, then forever understand what a cat is, that's intelligence."

He even said that it's not "true AI" that led to AlphaGo -- a creation of Google subsidiary DeepMind -- mastering what is revered as the world's most demanding strategy game, Go.

The technology behind AlphaGo was able to look at 10 to 20 potential future moves and lay out the highest statistics for success, Kaufmann said, and so the test was one of rule-based strategy rather than artificial intelligence.

The ability for a machine to strategise outside the context of a rule-based game would reflect true AI, according to Kaufmann, who believes that AI will cheat without being programmed not to do so.

Additionally, the ability to automate human behaviour or labour is not necessarily a reflection of machines getting smarter, Kaufmann insisted.

"Take a pump, for example. Instead of collecting water from the river, you can just use a pump. But that is not artificial intelligence; it is the automation of manual work ... Human-level AI would be able to apply insights to new situations," Kaufmann added.

While Facebook's plans to build a brain-computer interface and Elon Musk's plans to merge the human brain with AI have left people wondering how close we are to developing true AI, Kaufmann believes the "brain code" needs to be cracked before we can really advance the field. He said this can only be achieved through neuroscientific research.

Earlier this year, founder of DeepMind Demis Hassabis communicated a similar sentiment in a paper, saying the fields of AI and neuroscience need to be reconnected, and that it's only by understanding natural intelligence that we can develop the artificial kind.

"Many companies are investing their resources in building faster computers ... we need to focus more on [figuring out] the principles of the brain, understand how it works ... rather than just copy/paste information," Kaufmann said.

Kaufmann admitted he doesn't have all the answers, but finds it "interesting" that high-profile entrepreneurs such as Musk and Mark Zuckerberg, neither of whom has an AI or neuroscience background, have such strong and opposing views on AI.

Musk and Zuckerberg slung mud at each other in July, with the former warning of "evil AI" destroying humankind if not properly monitored and regulated, while the latter spoke optimistically about AI contributing to the greater good, such as diagnosing diseases before they become fatal.

"One is an AI alarmist and the other makes AI look charming ... AI, like any other technology, can be used for good or used for bad," said Kaufmann, who believes AI needs to be assessed objectively.

In the interim, Kaufmann believes systems need to be designed so that humans and machines can work together, not against each other. For example, Kaufmann envisions a future where humans wear smart lenses -- comparable to the Google Glass -- that act as "the third half of the brain" and pull up relevant information based on conversations they are having.

"Humans don't need to learn stuff like which Roman killed the other Roman ... humans just need to be able to ask the right questions," he said.

"The key difference between human and machine is the ability to ask questions. Machines are more for solutions."

Kaufmann admitted, however, that humans don't know how to ask the right questions a lot of the time, because we are taught to remember facts in school, and those who remember the most facts are the ones who receive the best grades.

He believes humans need to be educated to ask the right questions, adding that the question is 50 percent of the solution. The right questions will not only allow humans to understand the principles of the brain and develop true AI, but will also keep us relevant even when AI systems proliferate, according to Kaufmann.

If we want to slow down job loss, AI systems need to be designed so that humans are at their centre, Kaufmann said.

"While many companies want to fully automate human work, we at Starmind want to build a symbiosis between humans and machines. We want to enhance human intelligence. If humans don't embrace the latest technology, they will become irrelevant," he added.

The company claims its self-learning system autonomously connects and maps the internal know-how of large groups of people, allowing employees to tap into their organisation's knowledge base or "corporate brain" when they have queries.

Starmind platform

Starmind is integrated into existing communication channels -- such as Skype for Business or a corporate browser -- eliminating the need to change employee behaviour, Kaufmann said.

Questions typed in the question window are answered instantly if an expert's answer is already stored in Starmind, and new questions are automatically routed to the right expert within the organisation, based on skills, availability patterns, and willingness to share know-how. All answers enhance the corporate knowledge base.
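
As a rough illustration of what such skill-based routing could look like, the sketch below combines topic overlap with availability and willingness signals. The fields, weights and scoring are hypothetical; Starmind has not published its algorithm.

```python
# Hypothetical sketch of expert routing; field names and weights are
# illustrative only and do not reflect Starmind's actual implementation.
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    skills: set          # topics this person has answered well before
    availability: float  # 0.0 (rarely online) .. 1.0 (always reachable)
    willingness: float   # 0.0 (rarely answers) .. 1.0 (answers eagerly)

def route_question(question_topics: set, experts: list, top_n: int = 3) -> list:
    """Return the top_n experts ranked by a weighted match score."""
    def score(e: Expert) -> float:
        skill_match = len(question_topics & e.skills) / max(len(question_topics), 1)
        return 0.6 * skill_match + 0.2 * e.availability + 0.2 * e.willingness
    return sorted(experts, key=score, reverse=True)[:top_n]

experts = [
    Expert("Ana", {"gdpr", "privacy"}, availability=0.8, willingness=0.9),
    Expert("Ben", {"kubernetes", "networking"}, availability=0.5, willingness=0.7),
]
print([e.name for e in route_question({"privacy", "gdpr"}, experts, top_n=1)])
```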

"Our vision is if you connect thousands of human brains in a smart way, you can outsmart any machine," Kaufmann said.

On how this is different to asking a search engine a question, Kaufmann said Google is basically "a big data machine" and mines answers to questions that have been already asked, but is not able to answer brand new questions.

"The future of Starmind is we actually anticipate questions before they're even asked because we know so much about the employee. For example, we can say if you are a new hire and you consume a certain piece of content, there will be a 90 percent probability that you will ask the following three questions within the next three minutes, and so here are the solutions."

Starmind is currently being used across more than 40 countries by organisations such as Accenture, Bayer, Nestlé, and Telefonica Deutschland.

While Kaufmann thinks it is important at this point in time to enhance human intelligence rather than replicate it artificially, he does believe AI will eventually substitute humans in the workplace. But unlike the grim picture painted by critics, he doesn't think it's a bad thing.

"Why do humans need to work at all? I look forward to all my leisure time. I do not need to work in order to feel like a human," Kaufmann said.

When asked how people would make money and sustain themselves, Kaufmann said society does not need to be ruled by money.

"In many science fiction scenarios, they do not have money. When you look at the ant colonies or other animals, they do not have cash," Kaufmann said.

Additionally, if humans had continuous access to intelligent machines, Kaufmann said "the acceleration of human development will pick up" and "it will give rise to new species".

"AI is the ultimate tool for human advancement," he firmly stated.


Why AI is now at the heart of our innovation economy | TechCrunch – TechCrunch

Andrew Keen is the author of three books: Cult of the Amateur, Digital Vertigo and The Internet Is Not The Answer. He produces Futurecast, and is the host of Keen On.

There are few more credible authorities on artificial intelligence (AI) than Hilary Mason, the New York-based founder and chief executive of the data science and machine learning consultancy Fast Forward Labs.

So, I asked Mason, who is also the Data Scientist in Residence at Accel Partners and the former Chief Scientist at Bitly, whether today's AI revolution is for real. Or is it, I wondered, just another catch-all phrase used by entrepreneurs and investors to describe the latest Silicon Valley mania?

Mason, who sees AI as the umbrella term to describe machine learning and big data, acknowledges that it has become a "very trendy" area of start-up activity. That said, she says, there has been such rapid technological progress in machine learning over the last five years that the field is "legitimately exciting." This progress has been so profound, Mason insists, that it is bringing AI "close to the heart of our new innovation economy."

But in contrast with the fears of prominent technologists like Elon Musk, Mason doesn't worry about the threat to the human species of superintelligent machines. We humans, she says, use machines as tools and the advent of AI doesn't change this. Machines aren't rational, she thus argues, implying that there are many more important things for us to worry about than an imminent singularity.

What does concern Mason, however, are questions about the role of women in tech. That's a question interviewers like myself should be asking men rather than women, she insists. It just creates an extra burden for female technologists and thus isn't something that she wants to publicly discuss.

Many thanks to the folks at the Greater Providence Chamber of Commerce for their help in producing this interview.


Implication Of AI And IoT Enabled Electric Scooters For Smart Delivery Services – Inc42 Media

Many electric vehicle companies are enabling modern technologies like Artificial Intelligence and IoT in their vehicles

AI and IoT have transformed the entire delivery services especially with the electric vehicles

The implementation of AI and IoT in electric vehicles ensures efficiency and safety

Urban logistics and delivery services are among the main challenges of every big and small city. From groceries to food items to everything else, the delivery market has grown rapidly with the growth of technology and the internet, and it puts vehicles on the road during rush hours and on streets that are already congested by private traffic.

According to data from MDS Transmodal Limited, delivery services represent between 8% and 18% of urban traffic flows and reduce road capacity by 30% because of pick-up and delivery operations, and this share will continue to grow in the coming years. Delivery operations have a high impact on congestion and urban environmental quality, and they are responsible for about 25% of mobility-related CO2 emissions in urban areas.

A new entrant in delivery services is the electric vehicle. The electric vehicle industry is growing rapidly to combat pollution, and electric vehicles (EVs) are seen as a catalyst for reducing CO2 emissions and for more intelligent transportation systems. The Government of India is also pushing for a shift towards electric vehicles for every purpose, and has claimed that India will move to 30% electric vehicles by 2030.

The Government of India has a vision of making the country electrically mobile. It has encouraged mainstream electric mobility by dedicating INR 10,000 Cr to boost EV usage under the Faster Adoption and Manufacturing of Hybrid and Electric Vehicles (FAME) II scheme and by reducing GST on electric vehicles to 5%.

As the technology is growing and many industries are adopting the changes, many electric vehicle companies are enabling modern technologies like Artificial Intelligence and IoT in their vehicles. They are providing these e-scooters for many purposes, from personal use to now in the smart delivery ecosystem.

Use of e-bikes, e-cargo bikes and e-scooters is extremely positive for Corporate Social Responsibility (CSR), for visibility and a green image among customers and clients, and for cost savings, because these vehicles consume little energy, need little maintenance and perform very well. It is easy to reach any location in urban areas, and reliability is very high with these e-vehicles. This is the reason more delivery giants are now opting for e-scooters instead of petrol or diesel scooters.

Some problems are associated with electric vehicles, such as the lack of adequate charging stations, limited range (especially in hilly areas) and occasional technical malfunctions of motors and batteries. But AI and IoT technologies offer solutions to these problems as well.

AI and IoT have transformed delivery services, especially with electric vehicles (EVs). The electric scooters used by delivery executives are now AI and IoT enabled, so that driver behaviour can be monitored for safe and timely delivery of goods. Companies have started using telematics devices to track and monitor vehicle movement during delivery. These technologies not only monitor the movement of vehicles but also help ensure the safety of drivers in case of road accidents.

Using AI and IoT, it is easy to contact the driver and the consumer in case of an emergency. These scooters can be controlled through a mobile application, while GPS units installed on the vehicles and an accelerometer can tell the company about every movement of a scooter during the delivery of goods.

E-scooters equipped with cellular, GPS and accelerometer technology use machine learning to interpret the habits of their riders and either notify drivers of dangerous habits or adjust the machines to produce safer conditions. Artificial intelligence now makes it possible for a driver to look at the app after a delivery and see where they went, how fast they drove and whether they made any dangerous moves, and to get tips for a safer delivery next time.

Attaching an accelerometer to a scooter with AI and IoT also makes it possible for the company or the consumer to see when a rider accelerates too quickly or brakes too sharply. Electric vehicles also come with features like navigation assist, ride statistics, remote diagnostics, a voice-enabled app, an anti-theft alarm and lock, speedometer call alerts and ride-behaviour-based artificial intelligence suggestions, which can be used in case of emergency. AI and IoT allow the electric scooter to connect to the driver's smartphone and store all vehicle-related data in the cloud.
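
A minimal sketch of how accelerometer readings might be turned into harsh-acceleration and harsh-braking alerts is shown below; the thresholds, units and data format are assumptions for illustration only, not any particular vendor's telematics logic.

```python
# Illustrative only: flag harsh acceleration/braking events from accelerometer
# samples streamed by an e-scooter's telematics unit. Thresholds are assumed.
HARSH_ACCEL_MS2 = 3.5   # assumed threshold for harsh acceleration (m/s^2)
HARSH_BRAKE_MS2 = -4.0  # assumed threshold for harsh braking (m/s^2)

def detect_harsh_events(samples):
    """samples: list of (timestamp_s, longitudinal_accel_m_s2) tuples."""
    events = []
    for ts, accel in samples:
        if accel >= HARSH_ACCEL_MS2:
            events.append((ts, "harsh_acceleration", accel))
        elif accel <= HARSH_BRAKE_MS2:
            events.append((ts, "harsh_braking", accel))
    return events

ride = [(0.0, 1.2), (0.5, 3.8), (1.0, -4.5), (1.5, 0.3)]
for ts, kind, value in detect_harsh_events(ride):
    print(f"t={ts}s {kind} ({value} m/s^2)")  # could feed the rider's post-delivery report
```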

The next level of the tech revolution can be seen in the electric vehicle sector. There is 24/7 connectivity to a cloud server, which allows a user to monitor the performance of the vehicle even when the driver is not around. Data analytics algorithms employed by the server analyse the data and notify the user about possible service needs.

Modern technologies like AI and IoT have also improved the battery-charging technology of electric vehicles (EVs) and reduced the time spent stopping to recharge. This is because electric vehicle companies are using artificial intelligence to monitor the state of the battery as it is charging. This improvement in battery technology has made delivery services not only faster but also safer for consumers and delivery companies.


Clearview AI Wants To Sell Its Facial Recognition Software To Authoritarian Regimes Around The World – BuzzFeed News

Facebook confirmed to BuzzFeed News that it has sent a cease-and-desist letter to Clearview AI, asking the company to stop using information from Facebook and Instagram.

Last updated on February 5, 2020, at 8:51 p.m. ET

Posted on February 5, 2020, at 6:09 p.m. ET

As legal pressures and US lawmaker scrutiny mounts, Clearview AI, the facial recognition company that claims to have a database of more than 3 billion photos scraped from websites and social media, is looking to grow around the world.

A document obtained via a public records request reveals that Clearview has been touting a rapid international expansion to prospective clients using a map that highlights how it either has expanded, or plans to expand, to at least 22 more countries, some of which have committed human rights abuses.

The document, part of a presentation given to the North Miami Beach Police Department in November 2019, includes the United Arab Emirates, a country historically hostile to political dissidents, and Qatar and Singapore, the penal codes of which criminalize homosexuality.

Clearview CEO Hoan Ton-That declined to explain whether Clearview is currently working in these countries or hopes to work in them. He did confirm that the company, which had previously claimed that it was working with 600 law enforcement agencies, has relationships with two countries on the map.

"It's deeply alarming that they would sell this technology in countries with such a terrible human rights track record."

"Clearview is focused on doing business in USA and Canada," Ton-That said. "Many countries from around the world have expressed interest in Clearview."

Albert Fox Cahn, a fellow at New York University and the executive director of the Surveillance Technology Oversight Project, told BuzzFeed News that he was disturbed by the possibility that Clearview may be taking its technology abroad.

"It's deeply alarming that they would sell this technology in countries with such a terrible human rights track record, enabling potentially authoritarian behavior by other nations," he said.

Clearview has made headlines in past weeks for a facial recognition technology that it claims includes a growing database of some 3 billion photos scraped from social media sites like Instagram, Twitter, YouTube, and Facebook, and for misrepresenting its work with law enforcement by falsely claiming a role in the arrest of a terrorism suspect. The company, which has received cease-and-desist orders from Twitter, YouTube, and Facebook, argues that it has a First Amendment right to harvest data from social media.

"There is also a First Amendment right to public information," Ton-That told CBS News Wednesday. "So the way we have built our system is to only take publicly available information and index it that way."

Cahn dismissed Ton-That's argument, describing it as "more about public relations than it is about the law."

"No court has ever found the First Amendment gives a constitutional right to use publicly available information for facial recognition," Cahn said. "Just because Clearview may have a right to scrape some of this data, that doesn't mean that they have an immunity from lawsuits from those of us whose information is being sold without our consent."

Scott Drury, a lawyer representing a plaintiff suing Clearview in Illinois for violating a state law on biometric data collection, agreed. "Clearview's conduct violates citizens' constitutional rights in numerous ways, including by interfering with citizens' right to access the courts," he told BuzzFeed News. "The issue is not limited to scraping records, but rather whether a private company may scrape records with the intent of performing biometric scans and selling that data to the government."

"Clearview's conduct violates citizens' constitutional rights in numerous ways."

Potentially more problematic is Clearview's inclusion of nine European Union countries, among them Italy, Greece, and the Netherlands, on its expansion map. These countries have strict privacy protections under the General Data Protection Regulation (GDPR), a 2016 law that requires businesses to protect the personal data and privacy of EU citizens. Joseph Jerome, a policy counsel for the Center for Democracy and Technology, said it was unclear whether Clearview AI's technology would violate the GDPR.

Jerome said that GDPR protects any information that could be used to identify a person, biometric data included, but that the EU made exceptions for law enforcement and national security. Clearview also highlighted other non-EU European countries on its map that it hoped to do business with, including the United Kingdom and Ukraine.

Beyond the map, which also points to plans to expand to Brazil, Colombia, and Nigeria, Clearview has boasted about its exploits abroad. Its website has a large testimonial from a detective constable in the sex crimes unit in Canadian law enforcement who claims that Clearview is "hands-down the best thing that has happened to victim identification in the last 10 years." When asked, Ton-That declined to identify the detective or the agency they serve.

Clearview and Ton-That have on occasion exaggerated the company's business relationships, and the presentation sent to North Miami Beach has a few misrepresentations, including two examples in which it suggested that it was used in the investigation of crimes in New York. An NYPD spokesperson previously denied that the department has any relationship with the company and said that the software was not used in either investigation.

Clearview AI has also encouraged law enforcement to test its facial recognition tool in unusual situations, such as identifying dead bodies. The presentation shows graphic images of a dead man and mugshots of a person whom Clearview claimed matched the deceased victim.

Clearview AI has been aggressively promoting its service to US law enforcement. It has suggested that police officers run wild with the tool, encouraging them to test it on friends, family, and celebrities. Emails obtained via a public record request show the company challenging police in Appleton, Wisconsin, to run 100 searches a week.

"Investigators who do 100+ Clearview searches have the best chances of successfully solving crimes with Clearview in our experience," the email said. "It's the best way to thoroughly test the technology. You never know when a search will turn up a match."

There are currently no federal laws that restrict facial recognition or scraping biometric data from the internet. On Thursday, the House Committee on Homeland Security will hold a hearing to examine the Department of Homeland Security's use of facial recognition technology. Ton-That has previously said Clearview is working with DHS.

On Wednesday, Facebook told BuzzFeed News that it had sent multiple letters to Clearview AI to clarify the social network's policies and request information about what the startup was doing. In those letters, Facebook, which owns Instagram, asked that Clearview cease and desist from using any data, images, or media from its social networking sites. Facebook board member Peter Thiel is an investor in Clearview.

"Scraping people's information violates our policies, which is why we've demanded that Clearview stop accessing or using information from Facebook or Instagram," a Facebook spokesperson said. A spokesperson for Thiel did not immediately respond to a request for comment.

Feb. 06, 2020, at 00:28 AM

The House Committee on Homeland Security will hold the hearing on facial recognition. An earlier version of this post misstated the committee.


Security Think Tank: Artificial intelligence will be no silver bullet for security – ComputerWeekly.com

By Ivana Bartoletti

Published: 03 Jul 2020

Undoubtedly, artificial intelligence (AI) is able to support organisations in tackling their threat landscape and the widening of vulnerabilities as criminals have become more sophisticated. However, AI is no silver bullet when it comes to protecting assets and organisations should be thinking about cyber augmentation, rather than just the automation of cyber security alone.

Areas where AI can currently be deployed include the training of a system to identify even the smallest behaviours of ransomware and malware attacks before it enters the system and then isolate them from that system.

Other examples include automated phishing and data theft detection which are extremely helpful as they involve a real-time response. Context-aware behavioural analytics are also interesting, offering the possibility to immediately spot a change in user behaviour which could signal an attack.
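
One common way to implement such context-aware behavioural analytics is unsupervised anomaly detection over per-user activity features. The sketch below, which uses scikit-learn's IsolationForest on invented login features, is only an illustration of the idea, not a production security control.

```python
# Illustrative anomaly detection over user-behaviour features (hour of login,
# MB downloaded, distinct hosts accessed). All data here is invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline behaviour: office-hours logins, modest downloads, few hosts.
normal = np.column_stack([
    rng.normal(10, 1.5, 500),   # login hour
    rng.normal(50, 15, 500),    # MB downloaded per session
    rng.normal(3, 1, 500),      # distinct hosts accessed
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. session pulling 5 GB from 40 hosts should score as anomalous.
suspicious = np.array([[3, 5000, 40]])
print(model.predict(suspicious))  # -1 means "anomaly" in scikit-learn's convention
```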

The above are all examples of where machine learning and AI can be useful. However, over-reliance and false assurance could present another problem: As AI improves at safeguarding assets, so too does it improve attacking them. As cutting-edge technologies are applied to improve security, cyber criminals are using the same innovations to get an edge over these defences.

Typical attacks can involve the gathering of information about a system or sabotaging an AI system by flooding it with requests.

Elsewhere, so-called deepfakes are proving to be a relatively new area of fraud that poses unprecedented challenges. We already know that cyber criminals can litter the web with fakes that make it almost impossible to distinguish real news from fake.

The consequences are such that many legislators and regulators are contemplating establishing rules and laws to govern this phenomenon. For organisations, this means that deepfakes could lead to much more complex phishing in future, targeting employees by mimicking corporate writing styles or even individual writing styles.

In a nutshell, AI can augment cyber security so long as organisations know its limitations and have a clear strategy focusing on the present while constantly looking at the evolving threat landscape.

Ivana Bartoletti is a cyber risk technical director at Deloitte and a founder of Women Leading in AI.


Businesses are finding AI hard to adopt – The Economist

Jun 13th 2020

FACEBOOK: THE INSIDE STORY, Steven Levy's recent book about the American social-media giant, paints a vivid picture of the firm's size, not in terms of revenues or share price but in the sheer amount of human activity that thrums through its servers. 1.73bn people use Facebook every day, writing comments and uploading videos. An operation on that scale is so big, writes Mr Levy, that it can only be policed "by algorithms or armies."

In fact, Facebook uses both. Human moderators work alongside algorithms trained to spot posts that violate either an individual countrys laws or the sites own policies. But algorithms have many advantages over their human counterparts. They do not sleep, or take holidays, or complain about their performance reviews. They are quick, scanning thousands of messages a second, and untiring. And, of course, they do not need to be paid.

And it is not just Facebook. Google uses machine learning to refine search results, and target advertisements; Amazon and Netflix use it to recommend products and television shows to watch; Twitter and TikTok to suggest new users to follow. The ability to provide all these services with minimal human intervention is one reason why tech firms' dizzying valuations have been achieved with comparatively small workforces.

Firms in other industries would love that kind of efficiency. Yet the magic is proving elusive. A survey carried out by Boston Consulting Group and MIT polled almost 2,500 bosses and found that seven out of ten said their AI projects had generated little impact so far. Two-fifths of those with significant investments in AI had yet to report any benefits at all.

Perhaps as a result, bosses seem to be cooling on the idea more generally. Another survey, this one by PwC, found that the number of bosses planning to deploy AI across their firms was 4% in 2020, down from 20% the year before. The number saying they had already implemented AI in multiple areas fell from 27% to 18%. Euan Cameron at PwC says that rushed trials may have been abandoned or rethought, and that the irrational exuberance that has dominated boardrooms for the past few years is fading.

There are several reasons for the reality check. One is prosaic: businesses, particularly big ones, often find change difficult. One parallel from history is with the electrification of factories. Electricity offers big advantages over steam power in terms of both efficiency and convenience. Most of the fundamental technologies had been invented by the end of the 19th century. But electric power nonetheless took more than 30 years to become widely adopted in the rich world.

Reasons specific to AI exist, too. Firms may have been misled by the success of the internet giants, which were perfectly placed to adopt the new technology. They were already staffed by programmers, and were already sitting on huge piles of user-generated data. The uses to which they put AI, at least at first (improving search results, displaying adverts, recommending new products and the like), were straightforward and easy to measure.

Not everyone is so lucky. Finding staff can be tricky for many firms. AI experts are scarce, and command luxuriant salaries. "Only the tech giants and the hedge funds can afford to employ these people," grumbles one senior manager at an organisation that is neither. Academia has been a fertile recruiting ground.

A more subtle problem is that of deciding what to use AI for. Machine intelligence is very different from the biological sort. That means that gauging how difficult machines will find a task can be counter-intuitive. AI researchers call the problem Moravec's paradox, after Hans Moravec, a Canadian roboticist, who noted that, though machines find complex arithmetic and formal logic easy, they struggle with tasks like co-ordinated movement and locomotion which humans take completely for granted.

For example, almost any human can staff a customer-support helpline. Very few can play Go at grandmaster level. Yet Paul Henninger, an AI expert at KPMG, an accountancy firm, says that building a customer-service chatbot is in some ways harder than building a superhuman Go machine. Go has only two possible outcomes, win or lose, and both can be easily identified. Individual games can play out in zillions of unique ways, but the underlying rules are few and clearly specified. Such well-defined problems are a good fit for AI. By contrast, says Mr Henninger, a single customer call after a cancelled flight has "many, many more ways it could go."

What to do? One piece of advice, says James Gralton, engineering director at Ocado, a British warehouse-automation and food-delivery firm, is to start small, and pick projects that can quickly deliver obvious benefits. Ocado's warehouses are full of thousands of robots that look like little filing cabinets on wheels. Swarms of them zip around a grid of rails, picking up food to fulfil orders from online shoppers.

Ocado's engineers used simple data from the robots, like electricity consumption or torque readings from their wheel motors, to train a machine-learning model to predict when a damaged or worn robot was likely to fail. Since broken-down robots get in the way, removing them for pre-emptive maintenance saves time and money. And implementing the system was comparatively easy.
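
A toy version of that kind of predictive-maintenance model might look like the sketch below; the sensor features, labels and classifier choice are assumptions made for illustration and are not Ocado's actual pipeline.

```python
# Illustrative predictive-maintenance classifier: predict whether a warehouse
# robot is likely to fail soon from simple telemetry. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
power = rng.normal(120, 10, n)          # watts drawn by the robot (assumed feature)
torque = rng.normal(2.0, 0.3, n)        # wheel-motor torque reading (assumed feature)
# Synthetic rule: worn robots tend to draw more power and show higher torque.
fail_soon = ((power > 130) & (torque > 2.2)).astype(int)

X = np.column_stack([power, torque])
X_train, X_test, y_train, y_test = train_test_split(X, fail_soon, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
# Robots whose predicted probability of failure is high get pulled for maintenance.
```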

The robots, warehouses and data all existed already. And the outcome is clear, too, which makes it easy to tell how well the AI model is working: either the system reduces breakdowns and saves money, or it does not. That kind of predictive maintenance, along with things like back-office automation, is a good example of what PwC approvingly calls "boring AI" (though Mr Gralton would surely object).

There is more to building an AI system than its accuracy in a vacuum. It must also do something that can be integrated into a firm's work. During the late 1990s Mr Henninger worked on Fair Isaac Corporation's (FICO) Falcon, a credit-card fraud-detection system aimed at banks and credit-card companies that was, he says, one of the first real-world uses for machine learning. As with predictive maintenance, fraud detection was a good fit: the data (in the form of credit-card transaction records) were clean and readily available, and decisions were usefully binary (either a transaction was fraudulent or it wasn't).

But although Falcon was much better at spotting dodgy transactions than banks' existing systems, he says, it did not enjoy success as a product until FICO worked out how to help banks do something with the information the model was generating. Falcon was limited by the same thing that holds a lot of AI projects back today: going from a working model to a useful system. In the end, says Mr Henninger, it was the much more mundane task of creating a case-management system (flagging up potential frauds to bank workers, then allowing them to block the transaction, wave it through, or phone clients to double-check) that persuaded banks that the system was worth buying.
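
In practice, "doing something with the information" often amounts to a thin routing layer on top of the model's score. The sketch below is a hypothetical illustration of that idea; the scoring rules, thresholds and actions are invented and are not FICO's.

```python
# Illustrative case-management routing around a fraud score. The score function,
# thresholds and actions are invented for the example.
def fraud_score(txn: dict) -> float:
    """Stand-in for the model: returns a score in 0..1, higher = more suspicious."""
    score = 0.0
    if txn["amount"] > 2000:
        score += 0.4
    if txn["country"] != txn["home_country"]:
        score += 0.3
    if txn["merchant_category"] == "gambling":
        score += 0.2
    return min(score, 1.0)

def route(txn: dict) -> str:
    s = fraud_score(txn)
    if s >= 0.8:
        return "block"            # decline immediately
    if s >= 0.5:
        return "phone_customer"   # queue for a bank worker to double-check
    return "approve"              # wave it through

txn = {"amount": 3500, "country": "BR", "home_country": "GB",
       "merchant_category": "electronics"}
print(route(txn))  # -> "phone_customer"
```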

Because they are complicated and open-ended, few problems in the real world are likely to be completely solvable by AI, says Mr Gralton. Managers should therefore plan for how their systems will fail. Often that will mean throwing difficult cases to human beings to judge. That can limit the expected cost savings, especially if a model is poorly tuned and makes frequent wrong decisions.

The tech giants' experience of the covid-19 pandemic, which has been accompanied by a deluge of online conspiracy theories, disinformation and nonsense, demonstrates the benefits of always keeping humans in the loop. Because human moderators see sensitive, private data, they typically work in offices with strict security policies (bringing smartphones to work, for instance, is usually prohibited).

In early March, as the disease spread, tech firms sent their content moderators home, where such security is tough to enforce. That meant an increased reliance on the algorithms. The firms were frank about the impact. More videos would end up being removed, said YouTube, "including some that may not violate [our] policies." Facebook admitted that less human supervision would likely mean "longer response times and more mistakes." AI can do a lot. But it works best when humans are there to hold its hand.

This article appeared in the Technology Quarterly section of the print edition under the headline "Algorithms and armies"


What an artificial intelligence researcher fears about AI – CBS News – CBS News

Arend Hintze is assistant professor of Integrative Biology & Computer Science and Engineering at Michigan State University.

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, "Matrix"-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in "2001: A Space Odyssey," is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant) engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia) a set of relatively small failures combined together to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

[Video: Five years after beating humans on "Jeopardy!" an IBM technology known as Watson is becoming a tool for doctors treating cancer, the head of IBM ...]

Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on "Jeopardy!" or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that "to err is human," so it is likely impossible for us to create a truly safe system.

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
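
The loop described here (evaluate a population, select the best performers, reproduce with mutation) can be sketched in a few lines. The toy task below, evolving a bit string toward a fixed target, is only an assumed stand-in used to show the shape of the algorithm, not the author's actual experiments.

```python
# Toy evolutionary loop in the spirit of neuroevolution: evaluate a population,
# let the fittest reproduce with mutation, repeat. The task is deliberately trivial.
import random

TARGET = [1] * 20                       # stand-in for "solves the task perfectly"
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 100, 0.05

def fitness(genome):
    """Count how many bits match the target (higher is better)."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)      # evaluate and rank
    if fitness(population[0]) == len(TARGET):
        print(f"perfect genome found at generation {generation}")
        break
    parents = population[: POP_SIZE // 5]           # selection: keep the top 20%
    # reproduction: the next generation is made of mutated copies of the parents
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

population.sort(key=fitness, reverse=True)
print("best fitness:", fitness(population[0]), "out of", len(TARGET))
```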

[Video: On 60 Minutes Overtime, Charlie Rose explores the labs at Carnegie Mellon on the cutting edge of A.I. See robots learning to go where humans can'...]

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

[Video: Business leaders weigh in on the possibility of artificial intelligence replacing jobs]

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady "hand." Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self together with the rest of humanity may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?


The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time, somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.

This article was originally published on The Conversation.


AWS vs. Microsoft Azure will be about sales scale, AI, multi-cloud realities – ZDNet

Amazon Web Services is ramping its sales and marketing investments amid signs that the battle with Microsoft Azure is accelerating. The big question is whether the law of large numbers is catching up with both cloud titans in terms of growth.

With both Amazon and Microsoft earnings out of the way this week, there is a bit more color on how the cloud wars are playing out. The storyline here is pretty straightforward: AWS reports as a separate unit within Amazon, while Microsoft's Azure growth is broken out in the software giant's report but still tucked away within its commercial cloud segment.

The upshot:

So here we are. Microsoft Azure vs. AWS and the cloud market has matured enough to where we're seeing a sales ground war. Let's just say Microsoft gets that sales thing well and a partnership with SAP to co-sell isn't going to hurt.

For now, there is enough cloud growth to go around, but there are signs that IT spending is slowing. Cloud providers are likely to see some of that slowdown. That reality means that the battle between AWS and Azure is going to get interesting.

Wedbush analyst Daniel Ives summed up Microsoft's incursion on AWS following the company's most recent report:

This quarter was a major positive data point for Redmond as well as overall cloud spending, which has been a concern among investors given some cracks in the armor of cloud plays such as Workday and ServiceNow and fears that IT spending is hitting a speed bump heading into 2020. On the contrary, Microsoft delivered strength across the board with no blemishes and importantly gave stronger than expected December quarter guidance which speaks to an inflection point in deal flow as more enterprises pick Redmond for the cloud and thus further narrowing the competitive gap vs. Bezos and AWS.

Indeed, Microsoft CFO Amy Hood said: "In our commercial business, we again saw increased customer commitment across our cloud platform. In Azure, we had material growth in the number of $10 million-plus contracts."

Hood added that Azure gross margins improved as commercial cloud delivered gross margins of 66%. That tally includes Office 365. It appears that the Microsoft 365 strategy is going to pull along Azure sales too. That approach is hard for other cloud providers to replicate.

Meanwhile, Brian Olsavsky, CFO of Amazon, explained that the company is investing in AWS and has banked savings from infrastructure investments made in 2017. He said:

We continue to feel really good about not only the top line but also the bottom line in that business, but we are investing a lot more this year in sales force and marketing personnel mainly to handle a wider group of customers, an increasingly wide group of products. We continue to add thousands of new products and features a year, and we continue to expand geographically.

So the biggest impact that we saw in Q3 year-over-year in the AWS segment was tied to costs related to sales and marketing year-over-year and also, to secondary extent, infrastructure, which, if you look at our capital leases or equipment leases line, it grew 30% on trailing 12-month basis in Q3 of this year, and that was 9% last year. So there's been a step-up in infrastructure cost to support the higher usage demand. So we see those trends continuing into Q4, and that's essentially probably the other element of operating income year-over-year that's shorter than in prior quarters.

Olsavsky said that AWS margins will likely be under pressure.

We will price competitively and continue to pass along pricing reductions to customers both in the form of absolute price reductions and also in the form of new products that will, in effect, cannibalize the old ones. What we're doing is renegotiating or negotiating incremental price decreases for customers who didn't commit to us long term. And if you look in our disclosure on our 10-Q, it shows that we have $27 billion in future commitments for AWS -- from AWS customers, and that's up 54% year-over-year.

Now we'd love to give you that tech zero-sum storyline because it's easy. But AWS vs. Azure is way more complicated. Here are the moving parts that'll determine how this battle plays out going forward.

The sales ground war. AWS is ramping its sales team, but there has to be a talent shortage. Google Cloud Platform is hiring aggressively. Microsoft Azure is drafting off its parent's sales team and enterprise footprint already. And then there are other cloud providers that'll retool sales teams. Rest assured new ServiceNow CEO Bill McDermott is going to be recruiting heavily. It's a good time to be a cloud sales person.

Artificial intelligence. Microsoft CEO Satya Nadella mentioned AI and Azure a bevy of times. Compute, storage, and infrastructure are frequently just a precursor to the AI and machine learning upsell. Azure, AWS and Google Cloud are all betting AI and machine learning will differentiate them.

Multi-cloud realities. The dream is that enterprises will all mix and match the public cloud providers based on needs and pricing. The reality in the short term is going to be that enterprises are likely to bet on one cloud provider with others being involved as leverage. The battle between AWS and Azure will be about which vendor is preferred in the enterprise.

Read more here:

AWS vs. Microsoft Azure will be about sales scale, AI, multi-cloud realities - ZDNet

Sorting Lego sucks, so here’s an AI that does it for you – Engadget

You see, Mattheij decided he wanted in on the profitable cottage industry of online Lego reselling, and after placing a bunch of bids for the colorful little blocks on eBay, he came into possession of 2 tons (4,400 pounds) of Lego -- enough to fill his entire garage.

As Mattheij explains in his blog post, resellers can make up to €40 ($45) per kilogram for Lego sets, and rare parts and Lego Technic can fetch up to €100 ($112) per kg. If you really want to rake in the cash, however, you have to go through the exhaustive process of manually sorting through your bulk Lego before selling it in smaller groupings online. Instead of spending an eternity sifting through his own, intimidatingly large collection, Mattheij set to work on building an automated Lego sorter powered by a neural network that could classify the little building blocks. In case you were wondering, Lego comes in more than 38,000 shapes and over 100 shades of color, which amounts to a lot of sorting even with the aid of AI.

Starting with a proof of concept (built using Lego, naturally), Mattheij spent the following six months improving upon his prototype with a lot of DIY handiwork. In his own words, he describes his present setup as a "hodge-podge of re-purposed industrial gear" stuck together using "copious quantities of crazy glue" and a "heavily modified" home treadmill.

The current incarnation uses conveyor belts to carry the Lego past a web camera that is set up to take images of the blocks. These are then fed to the neural network as part of its classification training, and all Mattheij has to do is spot the errors in its judgement.

"As the neural net learns, there are fewer mistakes, and the labeling workload decreases," he states. "By the end of two weeks I had a training data set of 20,000 correctly labeled images."

With his prototype up and running, Mattheij claims he is just waiting for the machine learning software to reliably classify all of the images itself, and then he can start selling off the lucrative toy. If Mattheij manages to get the system working, he could then rechannel those profits into new expensive Lego projects.

Visit link:

Sorting Lego sucks, so here's an AI that does it for you - Engadget

AI Could Target Autism Before It Even Emerges, But It’s No … – Wired

Read the original:

AI Could Target Autism Before It Even Emerges, But It's No ... - Wired

Facebook says AI helped reduce hate speech on its platform last quarter – The Hindu

Facebook said nearly 97% of the hate speech and harassment content taken down in the final three months of last year was detected by automated systems before any human flagged it. In the July to September quarter, AI helped detect 94% of hate content, compared with about 80% in late 2019.

The social network, in its Community Standards Enforcement Report, noted that in the fourth quarter ending December 2020, hate speech prevalence dropped to about 0.08% of total content from nearly 0.11%.

This means there were about seven to eight views of hate speech for every 10,000 views of content in Q4, Facebook said in a statement.
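
The prevalence figure and the views-per-10,000 figure are the same number in different units, so the quoted range is easy to sanity-check:

```python
# Convert Facebook's reported hate-speech prevalence into views per 10,000.
def views_per_10k(prevalence_percent: float) -> float:
    return prevalence_percent / 100 * 10_000

print(views_per_10k(0.08))  # Q4 2020: 8.0 views per 10,000
print(views_per_10k(0.11))  # prior quarter: ~11 views per 10,000
```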

The California-based technology company introduced several artificial intelligence-powered systems last year to help detect misinformation. It started using AI technologies to identify hateful online content in 2016, and has since added several updates to its systems, which now extend to images and other forms of media.

The company said its multilingual systems helped moderate content in several languages including Arabic and Spanish, targeting nearly 27 million pieces of hateful content last quarter.

Facebook has faced criticism previously for its inability to curb hate speech on the platform. Most recently, the social network said it would reduce the distribution of all content and profiles run by Myanmar's military after it seized power and detained civilian leaders in a coup earlier in February.

Facebook also said last year that it would undertake an independent third-party audit of its content moderation systems to validate the numbers it publishes.

More here:

Facebook says AI helped reduce hate speech on its platform last quarter - The Hindu

Heres how AI can help you sleep – The Next Web

Modern life is turning us into sleep-deprived zombies.

The traditional distractions of jobs, family, and friends have been exacerbated in recent years by irregular work, long commutes, smartphones, and all-night benders, leaving us with little time to snooze. And that's without mentioning what keeps us up at night, whether it's drunken revelers in the street, existential angst, or the horrifying screams next door.

It's therefore unsurprising that two-thirds of adults in developed nations don't get the nightly eight hours of kip recommended by the World Health Organization, which doctors warn is leading us down a cheery path towards chronic diseases, mental health disorders, and dysfunctional relationships.

But don't worry, my fellow insomniacs: restful nights may soon be on the way. And it's all thanks to AI, of course, the digital age's panacea/snake oil.

That's according to the boffins at the American Academy of Sleep Medicine, who believe AI can improve the treatment of sleep disorders.

In a statement published yesterday, they explain that the vast volumes of data collected through sleep studies are ripe for algorithmic analysis.

The first application they suggest is in polysomnogram tests, which diagnose sleep disorders by analyzing brain waves, oxygen levels in the blood, heart rates, respiration, and eye and leg movements. Adding AI could both streamline the process and unearth new insights that can predict health outcomes.
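
As a rough illustration of what "adding AI" to polysomnography means in practice, the sketch below trains an off-the-shelf classifier on per-epoch signal features. The synthetic data, feature list, and stage labels are placeholder assumptions, not the Academy's or Stanford's actual method.

```python
# Illustrative sketch: score sleep epochs from per-epoch signal features.
# Synthetic data stands in for real polysomnogram recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_epochs = 2000
# Hypothetical features per 30-second epoch: EEG band power, SpO2, heart rate, movement.
X = rng.normal(size=(n_epochs, 4))
# Hypothetical stage labels: 0=wake, 1=light, 2=deep, 3=REM.
y = rng.integers(0, 4, size=n_epochs)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```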

But they also envision AI transcending the sleep lab to develop personalized treatments.

The American Academy of Sleep Medicine isn't the first group of academics to support using AI to help you sleep. In 2018, researchers from Stanford University found that a neural network could detect sleep issues more accurately than a human technician.

"Right now [sleep test scoring] is done by technicians, and clearly, there is no reason why it couldn't be done by a computer," Dr. Emmanuel Mignot, an author of the study and the director of the Stanford Center for Sleep Sciences and Medicine, told the Sleep Review journal last year.

These scientific endorsements will help legitimize the growing list of products using AI to help you sleep.

They include SleepScore, an app that tracks your breathing rate and movements through your smartphone's microphone and speaker; DREEM, a headband that sends you soporific sounds through bone conduction; HEKA, a smart mattress that adjusts its position when you toss and turn; and Sleep.ai, an armband that detects snoring and then emits a vibration that pushes you onto your side.

Their combined efforts show that there's a vast range of ways that AI could help you sleep, even if it can't yet mute the drunks shouting outside your window.

Published March 3, 2020 15:43 UTC

See the original post:

Heres how AI can help you sleep - The Next Web

Have We Reached Peak AI Hysteria? – Niskanen Center (press release) (blog)

July 21, 2017 by Ryan Hagemann

At the recent annual meeting of the National Governors Association, Elon Musk spoke with his usual cavalier optimism on the future of technology and innovation. From solar power to our place among the stars, humanity's future looks pretty bright, according to Musk. But he was particularly dour on one emerging technology that supposedly poses an existential threat to humankind: artificial intelligence.

Musk called for strict, preemptive regulations on developments in AI, referencing numerous hypothetical doomsaying scenarios that might emerge if we go too far too fast. It's not the first time Musk has said that AI could portend a Terminator-style future, but it does seem to be the first time he's called for such stringent controls on the technology. And he's not alone.

In the preface to his book Superintelligence, Nick Bostrom contends that developing AI is "quite possibly the most important and most daunting challenge humanity has ever faced. And, whether we succeed or fail, it is probably the last challenge we will ever face." Even Stephen Hawking has jumped on the panic wagon.

These concerns aren't uniquely held by innovators, scientists, and academics. A Morning Consult poll found that a significant majority of Americans supported both domestic and international regulations on AI.

All of this suggests that we are in the midst of a full-blown AI techno-panic. Fear of mass unemployment from automation and public safety concerns over autonomous vehicles have only exacerbated the growing tensions between man and machine.

Luckily, if history is any guide, the height of this hysteria means we're probably on the cusp of a period of deflating dread. New emerging technologies often stoke frenzied fears over worst-case scenarios, at least at the beginning. These concerns eventually rise to the point of peak alarm, followed by a gradual hollowing out of panic. Eventually, the technologies that were once seen as harbingers of the end times become mundane, common, and indispensable parts of our daily lives. Look no further than the early days of the automobile, RFID chips, and the Internet; so too will it be with AI.

Of course, detractors will argue that we should hedge against worst-possible outcomes, especially if the costs are potentially civilization-ending. After all, if there's something the government could do to minimize the costs while maximizing the benefits of AI, then policymakers should be all over that. So what's the solution?

Gov. Doug Ducey (R-AZ) asked that very question: "You've given some of these examples of how AI can be an existential threat, but I still don't understand, as policymakers, what type of regulations, beyond 'slow down,' which typically policymakers don't get in front of entrepreneurs or innovators, should be enacted." Musk's response? First, government needs to gain insight by standing up an agency to make sure the situation is understood. Then put in place regulations to protect public safety. That's it. Well, not quite.

The government has, in fact, already taken a stab at whether or not such an approach would be an ideal treatment of this technology. Last year, the Obama administration's Office of Science and Technology Policy released a report on the future of AI, derived from hundreds of comments from industry, civil society, technical experts, academics, and researchers.

While the report recognized the need for government to be privy to ongoing developments, its recommendations were largely benign, and it certainly didn't call for preemptive bans and regulatory approvals for AI. In fact, it concluded that it was very unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years.

In short, put off those end-of-the-world parties, because AI isn't going to snuff out civilization any time soon. Instead, embracing preemptive regulations could just smother domestic innovation in this field.

Despite Musk's claims, firms will actually outsource research and development elsewhere. Global innovation arbitrage is a very real phenomenon in an age of abundant interconnectivity and capital that can move like quicksilver across national boundaries. AI research is even less constrained by those artificial barriers than most technologies, especially in an era of cloud computing and diminishing costs to computer processing speeds, to say nothing of the rise of quantum computing.

Musk's solution to AI is uncharacteristically underwhelming. New federal agencies that impose precautionary regulations on AI aren't going to chart a better course to the future, any more than preemptive regulations for Google would have paved the way to our current age of information abundance.

Musk of all people should know the future is always rife with uncertainty; after all, he helps construct it with each new revolutionary undertaking. Imagine if there had been just a few additional regulatory barriers for SpaceX or Tesla to overcome. Would the world have been a better place if the public good demanded even more stringent regulations for commercial space launch or autopilot features? That's unlikely, and, notwithstanding Musk's apprehensions, the same is probably true for AI.

Original post:

Have We Reached Peak AI Hysteria? - Niskanen Center (press release) (blog)

The Era of AI Computing – FedScoop

At GTC, we unveiled Volta, our greatest generational leap since the invention of CUDA. It incorporates 21 billion transistors. It's built on a 12nm NVIDIA-optimized TSMC process. It includes the fastest HBM memories from Samsung. Volta features a new numeric format and CUDA instruction that perform 4x4 matrix operations, an elemental deep learning operation, at super-high speeds.

Each Volta GPU is 120 teraflops. And our DGX-1 AI supercomputer interconnects eight Tesla V100 GPUs to generate nearly one petaflops of deep learning performance.
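
That DGX-1 figure is just the per-GPU number multiplied out (interconnect overhead aside):

```python
# Eight 120-teraflop Volta GPUs per DGX-1, expressed in petaflops.
gpus_per_dgx1 = 8
tflops_per_gpu = 120
print(gpus_per_dgx1 * tflops_per_gpu / 1000, "petaflops")  # 0.96, i.e. "nearly one petaflop"
```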

Google's TPU

Also last week, Google announced its TPU2 chip at its I/O conference, with 45 teraflops of performance.

It's great to see the two leading teams in AI computing race while we collaborate deeply across the board, tuning TensorFlow performance and accelerating the Google cloud with NVIDIA CUDA GPUs. AI is the greatest technology force in human history. Efforts to democratize AI and enable its rapid adoption are great to see.

Powering Through the End of Moore's Law

As Moore's law slows down, GPU computing performance, powered by improvements in everything from silicon to software, surges.

The AI revolution has arrived despite the fact that Moore's law, the combined effect of Dennard scaling and CPU architecture advances, began slowing nearly a decade ago. Dennard scaling, whereby reducing transistor size and voltage allowed designers to increase transistor density and speed while maintaining power density, is now limited by device physics.

CPU architects can harvest only modest ILP (instruction-level parallelism), and only with large increases in circuitry and energy. So, in the post-Moore's-law era, a large increase in CPU transistors and energy results in a small increase in application performance. Performance recently has increased by only 10 percent a year, versus 50 percent a year in the past.
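
Those growth rates compound, which is what makes the gap so stark over even a single decade:

```python
# Compounded CPU performance growth: historical ~50%/year vs. recent ~10%/year.
years = 10
print("50%/yr over a decade:", round(1.5 ** years, 1), "x")  # ~57.7x
print("10%/yr over a decade:", round(1.1 ** years, 1), "x")  # ~2.6x
```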

The accelerated computing approach we pioneered targets specific domains of algorithms; adds a specialized processor to offload the CPU; and engages developers in each industry to accelerate their application by optimizing for our architecture. We work across the entire stack of algorithms, solvers and applications to eliminate all bottlenecks and achieve the speed of light.

That's why Volta unleashes incredible speedups for AI workloads. It provides a 5X improvement over Pascal, the current-generation NVIDIA GPU architecture, in peak teraflops, and 15X over the Maxwell architecture, launched just two years ago, well beyond what Moore's law would have predicted.

Accelerate Every Approach to AI

A sprawling ecosystem has grown up around the AI revolution.

Such leaps in performance have drawn innovators from every industry, with the number of startups building GPU-driven AI services growing more than 4x over the past year to 1,300.

No one wants to miss the next breakthrough. Software is eating the world, as Marc Andreessen said, but AI is eating software.

The number of software developers following the leading AI frameworks on the GitHub open-source software repository has grown to more than 75,000 from fewer than 5,000 over the past two years.

The latest frameworks can harness the performance of Volta to deliver dramatically faster training times and higher multi-node training performance.

Deep learning is a strategic imperative for every major tech company. It increasingly permeates every aspect of work, from infrastructure and tools to how products are made. We partner with every framework maker to wring out the last drop of performance. By optimizing each framework for our GPU, we can improve engineer productivity by hours and days for each of the hundreds of iterations needed to train a model. Every framework (Caffe2, Chainer, Microsoft Cognitive Toolkit, MXNet, PyTorch, TensorFlow) will be meticulously optimized for Volta.

The NVIDIA GPU Cloud platform gives AI developers access to our comprehensive deep learning software stack wherever they want it: on PCs, in the data center, or via the cloud.

We want to create an environment that lets developers do their work anywhere, and with any framework. For companies that want to keep their data in-house, we introduced powerful new workstations and servers at GTC.

Perhaps the most vibrant environment is the $247 billion market for public cloud services. Alibaba, Amazon, Baidu, Facebook, Google, IBM, Microsoft and Tencent all use NVIDIA GPUs in their data centers.

To help innovators move seamlessly to cloud services such as these, at GTC we launched the NVIDIA GPU Cloud platform, which contains a registry of pre-configured and optimized stacks of every framework. Each layer of software and all of the combinations have been tuned, tested and packaged up into an NVDocker container. We will continuously enhance and maintain it. We fix every bug that comes up. It all just works.

A Cambrian Explosion of Autonomous Machines

Deep learning's ability to detect features from raw data has created the conditions for a Cambrian explosion of autonomous machines: IoT with AI. There will be billions, perhaps trillions, of devices powered by AI.

At GTC, we announced that one of the 10 largest companies in the world, and one of the most admired, Toyota, has selected NVIDIA for their autonomous car.

We also announced Isaac, a virtual robot that helps make robots. Today's robots are hand-programmed, and do exactly and only what they were programmed to do. Just as convolutional neural networks gave us the computer vision breakthrough needed to tackle self-driving cars, reinforcement learning and imitation learning may be the breakthroughs we need to tackle robotics.

Once trained, the brain of the robot would be downloaded into Jetson, our AI supercomputer in a module. The robot would stand, adapt to any differences between the virtual and real world. A new robot is born. For GTC, Isaac learned how to play hockey and golf.

Finally, we're open-sourcing the DLA, or Deep Learning Accelerator, our version of a dedicated inferencing TPU, designed into our Xavier superchip for AI cars. We want to see the fastest possible adoption of AI everywhere. No one else needs to invest in building an inferencing TPU. We have one for free, designed by some of the best chip designers in the world.

Enabling the Einsteins and Da Vincis of Our Era

These are just the latest examples of how NVIDIA GPU computing has become the essential tool of the da Vincis and Einsteins of our time. For them, we've built the equivalent of a time machine. Building on the insatiable technology demand of 3D graphics and the market scale of gaming, NVIDIA has evolved the GPU into the computer brain that has opened a floodgate of innovation at the exciting intersection of virtual reality and artificial intelligence.

Read the rest here:

The Era of AI Computing - FedScoop

Foghorn’s Ramya Ravichandar | Ensuring Value with Edge AI in IIoT Applications – IoT For All

In this episode of the IoT For All Podcast, we sat down with Ramya Ravichandar, VP of Products at Foghorn to talk about edge AI and how it ensures value for IIoT and commercial IoT deployments. We cover some of the use cases where edge AI really shines, how machine learning and edge computing enable real-time analytics, and how companies can ensure that their IoT deployments create real value on install.

Ramya has a decade's experience in IoT and started in the industry at Cisco, where she headed its streaming analytics platform. She has a rare combination of technical expertise in real-time analytics, machine learning, and AI, combined with a wealth of experience in Industrial IoT.

To start the episode, Ramya gave us some background on FogHorn. FogHorn was founded in 2014 to address the IoT data deluge at the edge, empowering industrial and commercial sectors to achieve transformational business outcomes through AI and ML capabilities at the edge.

Ramya also shared a couple of use cases to illustrate the power of edge AI when applied in an industrial setting, including the real-time identification of defects on the manufacturing floor, enabling operators to take action immediately to prevent product loss. Ramya said that this represents the fundamental premise of all of the solutions FogHorn is involved with.

One of the big differences over the past several years, Ramya said, was the level of education of customers. The customer journey has evolved alongside technology. "Customers used to find it hard to find the use case," Ramya said. "Today, our customers are more savvy and knowledgeable. When they come to us they know exactly the problems they have and how they want to use IoT to address them." But the key to success, according to Ramya, was embracing the concept of a proof of value rather than a proof of concept. "If you don't have that spark in your first few deployments, you're probably working on the wrong use cases," Ramya said.

Ramya walked us through edge AI at its core and how it enables some of the key features that customers need. At its core, Ramya said that edge AI is about taking a step beyond data collection and applying models to incoming data to gain new insights. FogHorn seeks to be the bridge between the data science expertise companies already have and bringing that data into practice on the manufacturing floor.

She also spoke to the continued importance of the cloud and how it works together with edge computing and edge AI to create more powerful models. As an example, Ramya used a drilling rig. A drilling rig, she said, can generate up to a terabyte of data daily, but less than 1% of that data may end up being analyzed. Moving all of that data could take days, so being able to sort and parse that data at the edge is imperative to putting that data to work in real-time. And while edge computing and edge AI are imperative to that fast turnaround, the only place those models can be trained is in the cloud, so you have a model being trained and retrained in the cloud and pushed to each of those edge devices.
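
A toy sketch of that edge/cloud split: score each window of readings locally and forward only the interesting ones, while the scoring logic itself would periodically be refreshed from the cloud. The function names and thresholds below are illustrative assumptions, not FogHorn's actual platform or API.

```python
# Illustrative edge-filtering loop: keep only anomalous sensor windows for upload.
# Names and thresholds are hypothetical, not FogHorn's platform API.
from statistics import mean, stdev

def is_anomalous(window, threshold=3.0):
    """Flag a window whose last reading deviates strongly from the window mean."""
    mu, sigma = mean(window), stdev(window)
    return sigma > 0 and abs(window[-1] - mu) / sigma > threshold

def filter_at_edge(sensor_stream, window_size=60):
    """Yield only windows worth sending to the cloud for training or inspection."""
    buffer = []
    for reading in sensor_stream:
        buffer.append(reading)
        if len(buffer) == window_size:
            if is_anomalous(buffer):
                yield list(buffer)
            buffer.clear()

# Example: a flat signal with one spike produces a single uploaded window.
stream = [1.0] * 59 + [50.0] + [1.0] * 60
print(len(list(filter_at_edge(stream))))  # 1
```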

To wrap up the episode, Ramya walked us through some of the challenges FogHorn has faced while building its platform as well as what we can expect on the horizon for FogHorn.

Interested in connecting with Ramya? Reach out to her on Linkedin!

About FogHorn: FogHorn is a leading developer of edge intelligence software for industrial and commercial IoT application solutions. FogHorn's software platform brings the power of advanced analytics and machine learning to the on-premises edge environment, enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance, and operational intelligence use cases. FogHorn's technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, retail, as well as smart grid, smart city, smart building, and connected vehicle applications.

(02:01) Intro to Ramya

(02:54) Intro to Foghorn

(04:34) Do you have any use cases or customer journey experiences you can share?

(06:49) How does edge computing help organizations move their IIoT projects toward full deployment?

(08:32) How do edge computing and AI play into delivering ROI to these use cases?

(11:04) What role does edge AI play in enabling an IIoT solution? What are the benefits?

(13:05) How does your platform integrate into the cloud structure?

(16:46) How does edge computing help with real-time functionality and accelerating automation?

(20:20) As youve been developing this platform, what are some of the challenges you and your clients have encountered?

(23:06) What stage are your customers usually coming to you in?

(24:32) Is there a stage thats too early to get a company like FogHorn involved?

(26:00) How do you handle IoT devices or deployments that have a smaller footprint?

Read the original:

Foghorn's Ramya Ravichandar | Ensuring Value with Edge AI in IIoT Applications - IoT For All

Follow the Money: Cash for AI Models, Oncology, Hematology Therapies – Bio-IT World

August 5, 2020 | Sema4 gets $121M to build dynamic models of human health and define optimal, individualized health trajectories. Glioblastoma, hematology, and acute pancreatitis all see new funding for therapy development. And AI-powered models net cash.

$257M: Series B for Liquid Biopsy for Multiple Cancers

Thrive Earlier Detection, Cambridge, Mass., closed $257 million in Series B financing. Funds will help advance CancerSEEK, a liquid biopsy test designed to detect multiple cancers at earlier stages of disease, into a registrational trial. The round was led by Casdin Capital and Section 32, with participation from new investors Bain Capital Life Sciences, Brown Advisory, Driehaus Capital Management, Intermountain Ventures, Janus Henderson Investors, Lux Capital, and more.

$121M: Series C for Data-Driven Health Intelligence

Sema4, Stamford, Conn., closed a Series C round led by BlackRock with additional new investors including Deerfield Management Company and Moore Strategic Ventures. Sema4 is dedicated to transforming healthcare by building dynamic models of human health and defining optimal, individualized health trajectories. The company began with an emphasis on reproductive health and recently launched Sema4 Signal, a family of products and services providing data-driven precision oncology solutions. Over the last several months, Sema4 has also joined the fight against COVID-19. Sema4 has integrated its premier clinical and scientific expertise with its cutting-edge digital capabilities to deliver a holistic testing program that enables organizations to make fast, informed decisions as they navigate COVID-19. The company has also launched Centrellis, an innovative health intelligence platform designed to provide a more complete understanding of disease and wellness and to offer physicians deeper insight into the patient populations they serve.

$112M: Series C for Phase 2 for Glioblastoma Treatment

Imvax, Philadelphia, raised $112 million in Series C financing from existing investors HP WILD Holding AG, Ziff Capital Partners, Magnetar Capital, and TLP Investment Partners, and new institutional investor Invus. The funds will support Phase 2 clinical development of IGV-001 for treatment of glioblastoma multiforme and Phase 1 research into additional solid tumor indications, and will help build out corporate and manufacturing capabilities.

$97M: Series C for Hematology, Oncology Therapies

Antengene Corporation, Shanghai, has closed $97 million in Series C financing led by Fidelity Management & Research Company with additional support from new investors including GL Ventures (an affiliate of Hillhouse Capital) and GIC. Existing investors including Qiming Venture Partners and Boyu Capital also participated. Proceeds from the Series C financing will be primarily used to fund the continuing clinical development of Antengene's robust pipeline of hematology and oncology therapies, expanding in-house research and development capabilities and strengthening the commercial infrastructures in APAC markets.

$71M: Accelerate Finger-Prick Blood Analyzer

Sight Diagnostics, Tel Aviv, has raised $71 million from Koch Disruptive Technologies, Longliv Ventures (a member of CK Hutchison Holdings), and OurCrowd. The investment is meant to fuel Sight's R&D into the automated identification and detection of diseases through its FDA-cleared direct-from-fingerstick Complete Blood Count (CBC) analyzer. The new investment will enable Sight to substantially expand its U.S. footprint.

$50M: Series B Extension for Enzymatic DNA Synthesis

DNA Script, Paris, announced a $50 million extension to its Series B financing, bringing the total investment of this round to $89 million. This oversubscribed round is led by Casdin Capital and joined by Danaher Life Sciences, Agilent Technologies, and Merck KGaA, Darmstadt, Germany (through its corporate venture arm, M Ventures), three of the world's leaders in oligo synthesis, as well as LSP, the Bpifrance Large Venture Fund and Illumina Ventures. Funding from this investment round will enable DNA Script to accelerate the development of its suite of enzymatic DNA synthesis (EDS) technologies, in particular to support the commercial launch of the company's SYNTAX DNA benchtop printer.

$25M: Series A for Targeted Exosome Vehicles

Mantra Bio, San Francisco, has raised $25 million in a Series A financing to advance development of next-generation precision therapeutics based on its proprietary platform for engineering Targeted Exosome Vehicles (TEVs). 8VC and Viking Global Investors led the round, which also included Box Group and Allen & Company LLC. Mantra Bio's REVEAL is an automated high throughput platform that rapidly designs, tests and optimizes TEVs for specific therapeutic applications. The platform integrates computational approaches, wet biology, and robotics, to leverage the diversity of exosomes and enable the rational design of therapeutics directed at a wide range of tissue and cellular targets.

$23.7M: Shared Grant for Biologically Based Polymers

The National Science Foundation has named the University of California, Los Angeles and the University of California, Santa Barbara, partners in a collaboration called BioPACIFIC MIP (BioPolymers, Automated Cellular Infrastructure, Flow, and Integrated Chemistry: Materials Innovation Platform) and has funded the effort with a five-year, $23.7 million grant. The initiative is part of the NSF Materials Innovation Platforms program, and its scientific methodology reflects the broad goals of the federal government's Materials Genome Initiative, which aims to develop new materials twice as fast at a fraction of the cost. The collaboration aims to advance the use of microbes for sustainable production of new plastics.

$17M: Series A to Scale At-Home Blood Collection

Tasso, Seattle, secured a $17 million Series A financing round led by Hambrecht Ducera Growth Ventures, which included Foresite Capital, Merck Global Health Innovation Fund, Vertical Venture Partners, Techstars, and Cedars-Sinai. The company will use the proceeds to scale manufacturing and operations to meet the increased demand for its line of innovative Tasso OnDemand devices, which enable people to collect their own blood using a virtually painless process from anywhere at any time. These fast and easy-to-use products are being adopted by leading academic medical institutions, government agencies, comprehensive cancer centers, and pharmaceutical organizations around the world.

$12M: Molecular Data, AI Build Therapeutic Models

Endpoint Health, Palo Alto, Calif., emerged from stealth mode in mid-July with $12 million in debt and equity financing led by Mayfield to make targeted therapies for patients with critical illnesses including sepsis and acute respiratory distress syndrome (ARDS). Endpoint Health is led by an experienced executive team including the co-founders of GeneWEAVE, an infection detection and therapy guidance company that was acquired by Roche in 2015. Endpoint Health's approach combines molecular and digital patient data with AI to create comprehensive therapeutic models: tools that identify distinct patient subgroups and treatment patterns in order to highlight unmet therapeutic needs. These models are used to identify late-stage and on-market therapies, often created for other indications, that Endpoint can develop into targeted therapies, which will include the required tests and software to guide their use.

$12M: Start of an NIH Contract For COVID-19 Microfluidics

Fluidigm Corporation, South San Francisco, Calif., announced execution of a letter contract with the National Institutes of Health, National Institute of Biomedical Imaging and Bioengineering, for a proposed project under the agency's Rapid Acceleration of Diagnostics (RADx) program. The project, with a total proposed budget of up to $37 million, contemplates expanding production capacity and throughput capabilities for COVID-19 testing with Fluidigm microfluidics technology. The letter contract provides Fluidigm with access to up to $12 million of initial funding based on completion and delivery of certain validation milestones prior to execution of the definitive contract. A goal of the RADx initiative is to enable approximately 6 million daily tests in the United States by December 2020.

$6.5M: Series A for AI-Powered Precision Oncology

Nucleai, Tel Aviv, a computational biology company providing an AI-powered precision oncology platform for research and treatment decisions, secured a $6.5M Series A initial closing. Debiopharm's strategic corporate venture capital fund led the round, joined by existing investors Vertex Ventures and Grove Ventures. Nucleai's core technology analyzes large and unique datasets of tissue images using computer vision and machine learning methods to model the spatial characteristics of both the tumor and the patient's immune system, creating unique signatures that are predictive of patient response.

$5M: Pharma Grant for Rural Lung Cancer

Stand Up To Cancer, New York, received a new $5 million grant from Bristol Myers Squibb to fund research and education efforts aimed at achieving health equity for underserved lung cancer patients, including Black people and people living in rural communities. The research efforts funded by the three-year grant will consist of supplemental grants to current Stand Up To Cancer research teams. The supplemental grants will focus on identifying new and innovative diagnostic and treatment methods for lung cancer patients in need. These supplemental grants will be designed to jumpstart pilot projects at the intersection of lung cancers, health disparities and rural healthcare, for instance increasing clinical trial enrollment among historically under-represented groups. Since 2014, Bristol Myers Squibb has provided funding for important Stand Up To Cancer research initiatives.

$2.5M: Cloud-Based XR Platform

Grid Raster, Mountain View, Calif., secured $2.5 million led by Blackhorn Ventures with participation from other existing investors MaC Venture Capital and Exfinity Venture Partners. This infusion of additional capital enables Grid Raster to continue developing its XR solutions, powered by cloud-based remote rendering and 3D vision-based AI, in key customer markets that include Aerospace, Defense, Automotive and Telecommunications.

$1.5M: SBIR for Acute Pancreatitis

Lamassu Pharma has received $1.5 million in Small Business Innovation Research (SBIR) grant funding from the National Institutes of Health (NIH). This will be used for further development of its lead therapeutic compound, RABI-767, a novel small molecule lipase inhibitor licensed from the Mayo Foundation for Medical Education and Research. Lamassu is developing RABI-767 to fill a critical, unmet clinical need for a treatment for acute pancreatitis (AP). Lamassu's proposed treatment is designed to mitigate the systemic toxicity and organ failure associated with acute pancreatitis that causes lengthy hospitalization, organ failure, and death, thus saving both lives and healthcare system resources. Funding from the NIH will enable Lamassu to further its translational research, to bring RABI-767 to human trials, and to partner with clinical and commercial development partners.

$800K: Protein Interaction Platform

A-Alpha Bio, Seattle, has been awarded an $800,000 grant to optimize therapeutics for infectious diseases. Awarded by the Bill & Melinda Gates Foundation, the grant work will be carried out by A-Alpha Bio in partnership with Lumen Bioscience using machine learning models built from data generated by A-Alpha Bio's proprietary AlphaSeq platform. A-Alpha Bio has already completed a pilot study in partnership with Lumen Biosciences and supported by the Gates Foundation. This pilot study successfully demonstrated the AlphaSeq platform's ability to characterize binding of therapeutic antibodies against multiple pathogen strains simultaneously. With the latest grant, the companies will use AlphaSeq data to train machine learning models for the development of potent and cross-reactive therapeutics against intestinal and respiratory pathogens.

$620K: Grant for Gas-Sensing Ingestible

Atmo Biosciences, Melbourne and Sydney, Australia, has been awarded a $620,000 Australian Government grant through the BioMedTech Horizons (BMTH) program. Atmo addresses the unmet clinical need to interrogate and monitor the function of the gut microbiota, allowing better diagnosis and development of personalized therapies for gastrointestinal disorders, resulting in earlier and more successful relief of symptoms, and reduced healthcare costs. Atmo's platform is underpinned by the Atmo Gas Capsule, a world-first ingestible gas-sensing capsule that senses clinically important gaseous biomarkers produced by the microbiome in the gastrointestinal system. This data is wirelessly transmitted to the cloud for aggregation and analysis.

Excerpt from:

Follow the Money: Cash for AI Models, Oncology, Hematology Therapies - Bio-IT World

Red Hat and IBM Research Advance IT Automation with AI-Powered Capabilities for Ansible – Business Wire

CHICAGO ANSIBLEFEST--(BUSINESS WIRE)--Red Hat, Inc., the world's leading provider of open source solutions, and IBM Research today announced Project Wisdom, the first community project to create an intelligent, natural language processing capability for Ansible and the IT automation industry. Using an artificial intelligence (AI) model, the project aims to boost the productivity of IT automation developers and make IT automation more achievable and understandable for diverse IT professionals with varied skills and backgrounds.

According to a 2021 IDC prediction1, by 2026, 85% of enterprises will combine human expertise with AI, ML, NLP, and pattern recognition to augment foresight across the organization, making workers 25% more productive and effective. Technologies such as machine learning, deep learning, natural language processing, pattern recognition, and knowledge graphs are producing increasingly accurate and context-aware insights, predictions, and recommendations.

Project Wisdom, underpinned by AI foundation models derived from IBM's AI for Code efforts, works by enabling a user to input a command as a straightforward English sentence. It then parses the sentence and builds the requested automation workflow, delivered as an Ansible Playbook, which can be used to automate any number of IT tasks. Unlike other AI-driven coding tools, Project Wisdom does not focus on application development; instead, the project centers on addressing the rise of complexity in enterprise IT as hybrid cloud adoption grows.

From human readable to human interactive

Becoming an automation expert demands significant effort and resources over time, with a learning curve to navigate varying domains. Project Wisdom intends to bridge the gap between Ansible YAML code and human language, so users can use plain English to generate syntactically correct and functional automation content.

It could enable a system administrator who typically delivers on-premises services to reach across domains to build, configure, and operate in other environments using natural language to generate playbook instructions. A developer who knows how to build an application, but not the skillset to provision it in a new cloud platform, could use Project Wisdom to expand proficiencies in these new areas to help transform the business. Novices across departments could generate content right away while still building foundational knowledge, without the dependencies of traditional teaching models.
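
For readers who have not seen one, the artifact such a plain-English request has to produce is a short YAML playbook. The sketch below builds a generic example in Python (using PyYAML) purely to show the target shape; the prompt, play, and task contents are assumptions for illustration, not Project Wisdom's actual output.

```python
# Illustrative only: the rough shape of an Ansible Playbook that a plain-English
# request like "install and start nginx on my web servers" maps onto.
# This is not Project Wisdom's output format, just a generic playbook skeleton.
import yaml  # PyYAML

playbook = [
    {
        "name": "Install and start nginx",  # hypothetical play derived from the prompt
        "hosts": "webservers",
        "become": True,
        "tasks": [
            {"name": "Install nginx",
             "ansible.builtin.package": {"name": "nginx", "state": "present"}},
            {"name": "Start nginx",
             "ansible.builtin.service": {"name": "nginx", "state": "started"}},
        ],
    }
]

print(yaml.safe_dump(playbook, sort_keys=False))
```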

Driving open source innovation with collaboration

While the power of AI in enterprise IT cannot be denied, community collaboration, along with insights from Red Hat and IBM, will be key in delivering an AI/ML model that aligns to the key tenets of open source technology. Red Hat has more than two decades of experience in collaborating on community projects and protecting open source licenses in defense of free software. Project Wisdom, and its underlying AI model, are an extension of this commitment to keeping all aspects of the code base open and transparent to the community.

As hybrid cloud operations at scale become a key focus for organizations, Red Hat is committed to building the next wave of innovation on open source technology. As IBM Research and Ansible specialists at Red Hat work to fine tune the AI model, the Ansible community will play a crucial role as subject matter experts and beta testers to push the boundaries of what can be achieved together. While community participation is still being worked through, those interested can stay up to date on progress here.

Supporting Quotes

Chris Wright, CTO and SVP of Global Engineering, Red Hat: "This project exemplifies how artificial intelligence has the power to fundamentally shift how businesses innovate, expanding capabilities that typically reside within operations teams to other corners of the business. With intelligent solutions, enterprises can decrease the barrier to entry, address burgeoning skills gaps, and break down organization-wide siloes to reimagine work in the enterprise world."

Ruchir Puri, chief scientist, IBM Research; IBM Fellow; vice president, IBM Technical Community: "Project Wisdom is proof of the significant opportunities that can be achieved across technology and the enterprise when we combine the latest in artificial intelligence and software. It's truly an exciting time as we continue advancing how today's AI and hybrid cloud technologies are building the computers and systems of tomorrow."

1IDC FutureScape: Worldwide Artificial Intelligence and Automation 2022 Predictions, Doc # US48298421, Oct 2021

About Red Hat, Inc.

Red Hat is the worlds leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

About IBM

IBM is a leading global hybrid cloud and AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 4,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to affect their digital transformations quickly, efficiently and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity and service. For more information, visit https://research.ibm.com.

Forward-Looking Statements

Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the companys current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.

Red Hat, the Red Hat logo and Ansible are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

The rest is here:

Red Hat and IBM Research Advance IT Automation with AI-Powered Capabilities for Ansible - Business Wire