Elon Musk criticizes AI research organization he helped found – Business Insider

OpenAI, one of the world's leading artificial intelligence labs, is on a mission to build a machine with human intelligence while prioritizing transparency and safety.

Elon Musk, one of the company's founders, isn't confident in its ability to do so.

Musk took to Twitter Monday to criticize OpenAI, arguing that the company "should be more open" and stating that his confidence that it will prioritize safety "is not high." He specifically called out Dario Amodei, a former Google engineer who now leads OpenAI's strategy.

Musk's criticism came in response to a report by MIT Technology Review's Karen Hao, who revealed a culture of secrecy at OpenAI that runs counter to the nonprofit's purported commitment to transparency.

OpenAI was founded in 2015 with the mission of building artificial intelligence that could rival human intelligence, raising billions from donors including Musk, Peter Thiel, and Microsoft. Early on, it set itself apart from other AI labs by pledging transparency, but Hao's report suggests that the organization gradually receded from this promise, opting instead to hide its research from competitors and the general public.

An OpenAI spokesperson declined to comment. A representative for Musk did not immediately respond to Business Insider's request for comment.

Musk himself was a founder of OpenAI and an early cheerleader of its ostensible focus on transparency, but he stepped away in February 2019, stating that he "didn't agree" with its direction and that Tesla's AI teams were in direct competition with OpenAI.

Follow this link:
Elon Musk criticizes AI research organization he helped found - Business Insider

How to Fix Bias against Women and Latinos in Artificial Intelligence Algorithms – AL DIA News

Biases in artificial intelligence and machine learning programs are well established and closely mirror the biases we see in the world today.

Researchers from North Carolina State University and Pennsylvania State University propose that artificial intelligence (AI) developers incorporate the concept of "feminist design thinking," according to the article "Algorithmic Equity in the Recruitment of Underrepresented IT Job Candidates." The research proposes that building this concept into new AI programs can improve equity, particularly in the development of software used in recruitment processes.

"There are countless stories about the ways bias manifests itself in artificial intelligence, and there are many pieces of thinking about what contributes to this bias," Fay Payton, professor of information systems/technology at North Carolina State University, said in a news release.

For researchers at these universities, the goal is to propose guidelines that can help develop viable solutions to eliminate bias in algorithms against women, African Americans, and Latinos who are part of the workforce in information technology companies.

"Too many existing hiring algorithms de facto incorporate identity markers that exclude qualified candidates based on gender, race, ethnicity, age, etc.," says Payton, who is the lead co-author of the research. "We are simply looking for equity: that candidates can participate in the recruitment process on an equal basis."

Payton and her collaborators argue that a feminist design thinking approach could serve as a valuable framework for developing software that significantly reduces algorithmic bias. In this context, applying this thinking would mean incorporating the idea of equity into the design of the algorithm itself.

"The effects of algorithmic bias are compounded by the historical under-representation of women and African-American and Latino software engineers who bring new ideas to equitable design approaches based on their life experiences," says Lynette Yarger, associate professor of information science and technology at Penn State.

The rest is here:
How to Fix Bias against Women and Latinos in Artificial Intelligence Algorithms - AL DIA News

Artificial intelligence and digital initiatives to be scrutinised by MEPs | News – EU News

Commissioner Breton will present to and debate with MEPs the initiatives that the Commission will put forward on 19 February:

When: Wednesday, 19 February, 16.00 to 18.00

Where: European Parliament, Spaak building, room 3C050, Brussels

Live streaming: You can also follow the debate on EP Live

A Strategy for Europe Fit for the Digital Age

The Commission has announced in its 2020 Work Programme that it will put forward a Strategy for Europe Fit for the Digital Age, setting out its vision on how to address the challenges and opportunities brought about by digitalisation.

Boosting the single market for digital services and introducing regulatory rules for the digital economy should be addressed in this strategy. It is expected to build on issues covered by the e-commerce directive and the platform-to-business regulation.

White Paper on Artificial Intelligence

The White Paper on Artificial Intelligence (AI) will aim to support its development and uptake in the EU, as well as to ensure that European values are fully respected. It should identify key opportunities and challenges, analyse regulatory options and put forward proposals and policy actions related to, e.g. ethics, transparency, safety and liability.

European Strategy for Data

The purpose of the Data Strategy would be to explore how to make the most of the enormous value of non-personal data as an ever-expanding and re-usable asset in the digital economy. It will build in part on the free flow of non-personal data regulation.

Read the original post:
Artificial intelligence and digital initiatives to be scrutinised by MEPs | News - EU News

Bringing artificial intelligence into the classroom, research lab, and beyond – MIT News

Artificial intelligence is reshaping how we live, learn, and work, and this past fall, MIT undergraduates got to explore and build on some of the tools and techniques coming out of research labs at MIT. Through the Undergraduate Research Opportunities Program (UROP), students worked with researchers at the MIT Quest for Intelligence and elsewhere on projects to improve AI literacy and K-12 education, understand face recognition and how the brain forms new memories, and speed up tedious tasks like cataloging new library material. Six projects are featured below.

Programming Jibo to forge an emotional bond with kids

Nicole Thumma met her first robot when she was 5, at a museum. "It was incredible that I could have a conversation, even a simple conversation, with this machine," she says. "It made me think robots are the most complicated manmade thing, which made me want to learn more about them."

Now a senior at MIT, Thumma spent last fall writing dialogue for the social robot Jibo, the brainchild of MIT Media Lab Associate Professor Cynthia Breazeal. In a UROP project co-advised by Breazeal and researcher Hae Won Park, Thumma scripted mood-appropriate dialogue to help Jibo bond with students while playing learning exercises together.

Because emotions are complicated, Thumma riffed on a set of basic feelings in her dialogue: happy/sad, energized/tired, curious/bored. If Jibo was feeling sad, but energetic and curious, she might program it to say, "I'm feeling blue today, but something that always cheers me up is talking with my friends, so I'm glad I'm playing with you." A tired, sad, and bored Jibo might say, with a tilt of its head, "I don't feel very good. It's like my wires are all mixed up today. I think this activity will help me feel better."
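The mood-conditioned lines above lend themselves to a simple lookup-table sketch. The two scripted lines are quoted from the article; the function name, data structure, and fallback line are illustrative assumptions, not Jibo's actual dialogue engine.

```python
# Sketch of mood-conditioned dialogue selection along the happy/sad,
# energized/tired, curious/bored axes described in the article.
# Names and the fallback line are illustrative, not from the Jibo codebase.

DIALOGUE = {
    # (mood, energy, interest) -> a mood-appropriate line
    ("sad", "energized", "curious"):
        "I'm feeling blue today, but something that always cheers me up "
        "is talking with my friends, so I'm glad I'm playing with you.",
    ("sad", "tired", "bored"):
        "I don't feel very good. It's like my wires are all mixed up today. "
        "I think this activity will help me feel better.",
}

def pick_line(mood: str, energy: str, interest: str) -> str:
    """Return a mood-appropriate line, falling back to a neutral greeting."""
    return DIALOGUE.get((mood, energy, interest), "Let's play together!")

print(pick_line("sad", "tired", "bored"))
```

A real system would cover every combination of the three axes (and vary the wording), but the structure, a mapping from an emotional state to candidate lines, is the same.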

In these brief interactions, Jibo models its vulnerable side and teaches kids how to express their emotions. At the end of an interaction, kids can give Jibo a virtual token to pick up its mood or energy level. "They can see what impact they have on others," says Thumma. In all, she wrote 80 lines of dialogue, an experience that led her to stay on at MIT for an MEng in robotics. The Jibos she helped build are now in kindergarten classrooms in Georgia, offering emotional and intellectual support as they read stories and play word games with their human companions.

Understanding why familiar faces stand out

With a quick glance, the faces of friends and acquaintances jump out from those of strangers. How does the brain do it? Nancy Kanwisher's lab in the Department of Brain and Cognitive Sciences (BCS) is building computational models to understand the face-recognition process. Two key findings: the brain starts to register the gender and age of a face before recognizing its identity, and face perception is more robust for familiar faces.

This fall, second-year student Joanne Yuan worked with postdoc Katharina Dobs to understand why this is so. In earlier experiments, subjects were shown multiple photographs of familiar faces of American celebrities and unfamiliar faces of German celebrities while their brain activity was measured with magnetoencephalography. Dobs found that subjects processed age and gender before the celebrities' identity regardless of whether the face was familiar. But they were much better at unpacking the gender and identity of faces they knew, like Scarlett Johansson, for example. Dobs suggests that the improved gender and identity recognition for familiar faces is due to a feed-forward mechanism rather than top-down retrieval of information from memory.

Yuan has explored both hypotheses with a type of model, convolutional neural networks (CNNs), now widely used in face-recognition tools. She trained a CNN on the face images and studied its layers to understand its processing steps. She found that the model, like Dobs' human subjects, appeared to process gender and age before identity, suggesting that both CNNs and the brain are primed for face recognition in similar ways. In another experiment, Yuan trained two CNNs on familiar and unfamiliar faces and found that the CNNs, again like humans, were better at identifying the familiar faces.
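The layered processing Yuan examined can be illustrated with a toy convolution-nonlinearity-pooling stack in NumPy. This is not the CNN used in the study; the image, filter, and shapes are stand-ins, chosen only to show how a CNN's early layers compute generic features such as edges before later layers get to identity.

```python
import numpy as np

# Toy illustration of one CNN stage: convolution -> ReLU -> max pooling.
# Early layers compute generic features (here, a vertical edge), which is
# consistent with coarse attributes like age and gender being available
# before identity. Shapes and filters are illustrative only.

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    h, w = x.shape
    return x[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.zeros((8, 8))
image[:, 4:] = 1.0                               # a vertical light/dark boundary
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)   # vertical-edge detector

features = max_pool(relu(conv2d(image, edge_kernel)))
print(features.shape)  # (3, 3): a downsampled map of edge responses
```

A real face-recognition CNN stacks dozens of such stages with learned filters; inspecting the intermediate feature maps, as Yuan did, is what reveals which attributes emerge at which depth.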

Yuan says she enjoyed exploring two fields, machine learning and neuroscience, while gaining an appreciation for the simple act of recognizing faces. "It's pretty complicated and there's so much more to learn," she says.

Exploring memory formation

Protruding from the branching dendrites of brain cells are microscopic nubs that grow and change shape as memories form. Improved imaging techniques have allowed researchers to move closer to these nubs, or spines, deep in the brain to learn more about their role in creating and consolidating memories.

Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, has pioneered a technique for labeling clusters of brain cells, called engram cells, that are linked to specific memories in mice. Through conditioning, researchers train a mouse, for example, to recognize an environment. By tracking the evolution of dendritic spines in cells linked to a single memory trace, before and after the learning episode, researchers can estimate where memories may be physically stored.

But it takes time. Hand-labeling spines in a stack of 100 images can take hours, and longer if the researcher needs to consult images from previous days to verify that a spine-like nub really is one, says Timothy O'Connor, a software engineer in BCS helping with the project. With 400 images taken in a typical session, annotating the images can take longer than collecting them, he adds.

O'Connor contacted the Quest Bridge to see if the process could be automated. Last fall, undergraduates Julian Viera and Peter Hart began work with Bridge AI engineer Katherine Gallagher to train a neural network to automatically pick out the spines. Because spines vary widely in shape and size, teaching the computer what to look for is one big challenge facing the team as the work continues. If successful, the tool could be useful to a hundred other labs across the country.

"It's exciting to work on a project that could have a huge amount of impact," says Viera. "It's also cool to be learning something new in computer science and neuroscience."

Speeding up the archival process

Each year, Distinctive Collections at the MIT Libraries receives a large volume of personal letters, lecture notes, and other materials from donors inside and outside of MIT that tell MIT's story and document the history of science and technology. Each of these unique items must be organized and described, with a typical box of material taking up to 20 hours to process and make available to users.

To make the work go faster, Andrei Dumitrescu and Efua Akonor, undergraduates at MIT and Wellesley College respectively, are working with Quest Bridge's Katherine Gallagher to develop an automated system for processing archival material donated to MIT. Their goal: to develop a machine-learning pipeline that can categorize and extract information from scanned images of the records. To accomplish this task, they turned to the U.S. Library of Congress (LOC), which has digitized much of its extensive holdings.

This past fall, the students pulled images of about 70,000 documents, including correspondence, speeches, lecture notes, photographs, and books housed at the LOC, and trained a classifier to distinguish a letter from, say, a speech. They are now using optical character recognition and a text-analysis tool to extract key details like the date, author, and recipient of a letter, or the date and topic of a lecture. They will soon incorporate object recognition to describe the content of a photograph, and are looking forward to testing their system on the MIT Libraries' own digitized data.
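As an illustration of the kind of detail extraction described, a rule-based pass over OCR output might look like the sketch below. The regular expressions, field names, and sample letter are assumptions for illustration, not the MIT Libraries pipeline.

```python
import re

# Illustrative sketch of rule-based metadata extraction after OCR:
# pull a date and an addressee out of the raw text of a letter.
# Patterns, field names, and the sample text are assumptions.

MONTHS = ("January|February|March|April|May|June|July|August|"
          "September|October|November|December")
DATE_RE = re.compile(rf"\b(?:{MONTHS}) \d{{1,2}}, \d{{4}}\b")
RECIPIENT_RE = re.compile(r"^Dear\s+([^,\n]+),", re.MULTILINE)

def extract_metadata(ocr_text: str) -> dict:
    """Pull a date and recipient from OCR'd letter text, if present."""
    date = DATE_RE.search(ocr_text)
    recipient = RECIPIENT_RE.search(ocr_text)
    return {
        "date": date.group(0) if date else None,
        "recipient": recipient.group(1).strip() if recipient else None,
    }

letter = "Cambridge, March 3, 1952\n\nDear Professor Wiener,\n\nThank you."
print(extract_metadata(letter))
# {'date': 'March 3, 1952', 'recipient': 'Professor Wiener'}
```

In practice OCR noise makes fixed patterns brittle, which is why projects like this pair them with learned classifiers and named-entity models rather than rules alone.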

One highlight of the project was learning to use Google Cloud. "This is the real world, where there are no directions," says Dumitrescu. "It was fun to figure things out for ourselves."

Inspiring the next generation of robot engineers

From smartphones to smart speakers, a growing number of devices live in the background of our daily lives, hoovering up data. What we lose in privacy we gain in time-saving personalized recommendations and services. It's one of AI's defining tradeoffs that kids should understand, says third-year student Pablo Alejo-Aguirre. "AI brings us beautiful and elegant solutions, but it also has its limitations and biases," he says.

Last year, Alejo-Aguirre worked on an AI literacy project co-advised by Cynthia Breazeal and graduate student Randi Williams. In collaboration with the nonprofit i2 Learning, Breazeal's lab has developed an AI curriculum around a robot named Gizmo that teaches kids how to train their own robot with an Arduino micro-controller and a user interface based on Scratch-X, a drag-and-drop programming language for children.

To make Gizmo accessible for third-graders, Alejo-Aguirre developed specialized programming blocks that give the robot simple commands like "turn left for one second" or "move forward for one second." He added Bluetooth to control Gizmo remotely and simplified its assembly, replacing screws with acrylic plates that slide and click into place. He also gave kids the choice of rabbit- and frog-themed Gizmo faces. "The new design is a lot sleeker and cleaner, and the edges are more kid-friendly," he says.

After building and testing several prototypes, Alejo-Aguirre and Williams demoed their creation last summer at a robotics camp. This past fall, Alejo-Aguirre manufactured 100 robots that are now in two schools in Boston and a third in western Massachusetts. "I'm proud of the technical breakthroughs I made through designing, programming, and building the robot, but I'm equally proud of the knowledge that will be shared through this curriculum," he says.

Predicting stock prices with machine learning

In search of a practical machine-learning application to learn more about the field, sophomores Dolapo Adedokun and Daniel Adebi hit on stock picking. "We all know buy, sell, or hold," says Adedokun. "We wanted to find an easy challenge that anyone could relate to, and develop a guide for how to use machine learning in that context."

The two friends approached the Quest Bridge with their own idea for a UROP project after they were turned away by several labs because of their limited programming experience, says Adedokun. Bridge engineer Katherine Gallagher, however, was willing to take on novices. "We're building machine-learning tools for non-AI specialists," she says. "I was curious to see how Daniel and Dolapo would approach the problem and reason through the questions they encountered."

Adebi wanted to learn more about reinforcement learning, the trial-and-error AI technique that has allowed computers to surpass humans at chess, Go, and a growing list of video games. So, he and Adedokun worked with Gallagher to structure an experiment to see how reinforcement learning would fare against another AI technique, supervised learning, in predicting stock prices.

In reinforcement learning, an agent is turned loose in an unstructured environment with one objective: to maximize a specific outcome (in this case, profits) without being told explicitly how to do so. Supervised learning, by contrast, uses labeled data to accomplish a goal, much like a problem set with the correct answers included.

Adedokun and Adebi trained both models on seven years of stock-price data, from 2010-17, for Amazon, Microsoft, and Google. They then compared profits generated by the reinforcement learning model and a trading algorithm based on the supervised model's price predictions for the following 18 months; they found that their reinforcement learning model produced higher returns.
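The reinforcement-learning setup described above can be sketched in miniature: with no labeled answers, an agent learns by trial and error when to hold cash (action 0) or hold the stock (action 1) so as to maximize profit. Tabular Q-learning on a toy cyclic price series; the series, reward, and hyperparameters are all illustrative assumptions, not the students' actual model.

```python
import random

# Toy Q-learning trader on a deterministic, repeating price series.
# The agent is never told the "right" action; it only sees the profit
# (reward) from holding the stock over each step.

random.seed(0)
prices = [10, 11, 12, 11, 10, 9] * 50        # a cyclic stand-in "market"
alpha, gamma, epsilon = 0.2, 0.9, 0.1        # learning rate, discount, exploration
q = {}                                       # (state, action) -> estimated value

def greedy(state):
    """Best known action in this state (0 = hold cash, 1 = hold stock)."""
    return max((0, 1), key=lambda a: q.get((state, a), 0.0))

for _ in range(200):                         # episodes of experience
    for t in range(len(prices) - 2):
        state = t % 6                        # position within the price cycle
        if random.random() < epsilon:        # occasionally explore
            action = random.choice((0, 1))
        else:
            action = greedy(state)
        reward = prices[t + 1] - prices[t] if action == 1 else 0.0
        nxt = (t + 1) % 6
        best_next = max(q.get((nxt, a), 0.0) for a in (0, 1))
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

policy = [greedy(s) for s in range(6)]       # 1 exactly where the price is about to rise
print(policy)
```

On real, non-cyclic price data the state space and reward signal are far noisier, which is much of what makes the supervised-vs-reinforcement comparison interesting.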

They developed a Jupyter notebook to share what they learned and explain how they built and tested their models. "It was a valuable exercise for all of us," says Gallagher. "Daniel and Dolapo got hands-on experience with machine-learning fundamentals, and I got insight into the types of obstacles users with their background might face when trying to use the tools we're building at the Bridge."

Go here to see the original:
Bringing artificial intelligence into the classroom, research lab, and beyond - MIT News

The Supply Side: Artificial intelligence is slowly shaping the future of retail – talkbusiness.net

Artificial intelligence (AI), most often in the form of machine learning, is slowly reshaping retail, from optimizing back-end supply chain operations to in-store execution. It is also impacting marketing, customer service engagement and anti-fraud activities, according to a report from New York-based information technology industry analyst firm 451 Research.

While AI is far from the mainstream, researchers said plenty of retailers are experimenting with how machine learning can be applied in many areas of retail. The report states retailers won't be the only ones needing to adapt to the disruption of machine learning as customers will also face changes in how they view and experience shopping.

For AI to work to its full potential, researchers said customers will need to be comfortable with increased data sharing if they want to benefit from personalized shopping experiences via machine learning. There will also be those who will struggle to weigh the benefits of convenience against potentially increased privacy risks.

A recent study by KPMG reviewed the state of AI deployment across retail and other industries. The Capgemini Research Institute estimates AI could add as much as $300 billion in value for the retail sector. As of late 2018, 28% of retailers surveyed by Capgemini were testing AI, up from just 4% in 2016. Capgemini also found AI was creating more jobs than it was replacing.

The majority of use cases focus on customer relations and sales, but Capgemini said there is also a $144 billion savings opportunity from the supply chain through improved efficiency in routing, warehousing, returns management and procurement.

Walmart is using machine learning to automate price markdowns. All clearance markdowns are now automated at the retail giant. The goal is for each store to sell through its product just before the new inventory arrives. In the test stores where machine learning has taken over the inventory management, Walmart said it has increased the sell-through rate by 14% in the first couple of months.

Walmart also recently showcased its Alphabot robotic system in Salem, N.H., by using autonomous carts to retrieve products. Robots assemble orders, then send them to a human employee to check the accuracy, bag them and complete the delivery. Alphabot manages all shelf-stable, refrigerated and frozen products, but fresh products continue to be selected and picked by human employees, the retailer said. Walmart has been testing the Alphabot system for nearly a year, saying the benefits include increased picking speeds of 1,700 picks per hour and storing orders for several hours at appropriate temperatures.

Tom Ward, senior vice president of digital operations at Walmart U.S., said the standard online grocery orders are picked by personal shoppers who fill eight orders at one time, but that is only a fraction of the efficiency achieved with the Alphabot system. Walmart has planned two new Alphabot-enabled warehouses that will serve several store pickup locations. The warehouses will be smaller than the test location in Salem. Given the expense of intuitive technology systems, Walmart officials said it will use them where they make the most sense.

Walmart is also using Bossa Nova robots to scan inventory, a test that was recently expanded to 800 stores in addition to another robotic system being used to scrub floors in hundreds of stores. Machine learning is being used to track inventory, and customer interfaces with chatbots (personal shopping assistants) are being used via the retail giant's mobile app.

The National Retail Federation (NRF) recently held its annual conference in New York, and some of the biggest topics discussed were the impact of human-robot interactions and how retailers of all sizes are taking advantage of AI and machine learning. Several retailers highlighted ways they were using both across their businesses.

Belk Inc., a department store with nearly 300 stores across 16 states, said it is using AI to help master inventory management. Belk executives said the company is integrating machine learning into ordering, replenishment and allocation systems, including calculating demand for specific sizes by store. Belk said virtual assistants do the heavy lifting, but they are not replacing humans.

Dick's Sporting Goods is also using machine learning to identify patterns and make estimated delivery dates more accurate, according to David Lanners, the company's vice president of retail technology.

Starbucks is also using AI in a process it calls Deep Brew, which leverages AI and machine learning to more accurately manage inventory and ensure adequate staffing for busy periods. The company reports that as employees have more time to connect with customers, the average ticket has risen.

David's Bridal is also betting on AI to help power its new concierge service designed to help drive more customers into stores. The specialty retailer emerged from bankruptcy in January 2019 and has been working to improve the in-store experience and elevate online customer engagement.

David's recently launched an AI-powered concierge bot through Apple Business Chat. It connects brands and human customer service agents via bots. Customers use the chatbot to ask questions or seek insights that are shared with stylists. Customers book their appointments online and their questions and online conversation are relayed to the stylist who will assist them in-store.

MKM Partners executive Roxanne Meyer recently said AI may finally be nearing a tipping point as many retailers are exploring the possibilities, but only a few are leveraging it in a meaningful way.

Editor's note: The Supply Side section of Talk Business & Politics focuses on the companies, organizations, issues and individuals engaged in providing products and services to retailers. The Supply Side is managed by Talk Business & Politics and sponsored by Propak Logistics.


Go here to see the original:
The Supply Side: Artificial intelligence is slowly shaping the future of retail - talkbusiness.net

IESE Business School Launches Artificial Intelligence and the Future of Management Initiative – Yahoo Finance

IESE Business School has launched a new Artificial Intelligence and the Future of Management Initiative, a multidisciplinary project that will look at how artificial intelligence is impacting management, and prepare executives to put AI to use in their companies in an ethical and socially responsible way.

Artificial intelligence, like electricity a century ago, is a general purpose technology that will touch every sphere of economic activity. That places new demands on managers to adapt to the changing competitive landscape, to transform their organizations, and to ensure that employees, and they themselves, have the skills required. IESE's new Artificial Intelligence and the Future of Management Initiative will meet those needs through research and education efforts.

"AI is as much a management challenge as it is a technological challenge," said Dean Franz Heukamp. "With this initiative we want to help current and future managers, as well as policy makers, face the challenges AI presents, enabling them to shape the ways AI is used and ensure that it's a force for good in society."

The initiative, led by Professor Sampsa Samila, will bring together the work of IESE professors across a range of departments. The initiative's current research areas include the use of AI in companies, the impact of industrial automation, and changing skill demands in the labor market. IESE also now offers the program Artificial Intelligence for Executives, and students in many of the school's programs can opt to take courses related to AI.

About IESE Business School

IESE Business School is the graduate business school of the University of Navarra. Founded in 1958, the school is one of the world's most international business schools, with campuses in Barcelona, Madrid, Munich, New York and São Paulo. Consistently ranked within the top 10 worldwide, IESE Business School has pioneered business education in Europe since its founding. For more than 60 years, IESE has sought to develop business leaders with solid business skills, a global mindset and a desire to make a positive impact on society. The school distinguishes itself in its general-management approach, extensive use of the case method, international outreach, and emphasis on placing people at the heart of managerial decision-making. In the last five years, IESE has been ranked number 1 in the world for Executive Education programs by the Financial Times. http://www.iese.edu

View source version on businesswire.com: https://www.businesswire.com/news/home/20200217005365/en/

Contacts

Mallory Dees, IESE Business School, mdees@iese.edu, +34 91 211 3197

See the original post here:
IESE Business School Launches Artificial Intelligence and the Future of Management Initiative - Yahoo Finance

Why Bill Gates thinks gene editing and artificial intelligence could save the world – Yahoo News

Microsoft co-founder Bill Gates has been working to improve the state of global health through his nonprofit foundation for 20 years, and today he told the nation's premier scientific gathering that advances in artificial intelligence and gene editing could accelerate those improvements exponentially in the years ahead.

"We have an opportunity with the advance of tools like artificial intelligence and gene-based editing technologies to build this new generation of health solutions so that they are available to everyone on the planet. And I'm very excited about this," Gates said in Seattle during a keynote address at the annual meeting of the American Association for the Advancement of Science.

Such tools promise to have a dramatic impact on several of the biggest challenges on the agenda for the Bill & Melinda Gates Foundation, created by the tech guru and his wife in 2000.

When it comes to fighting malaria and other mosquito-borne diseases, for example, CRISPR-Cas9 and other gene-editing tools are being used to change the insects' genome to ensure that they can't pass along the parasites that cause those diseases. The Gates Foundation is investing tens of millions of dollars in technologies to spread those genomic changes rapidly through mosquito populations.

Millions more are being spent to find new ways of fighting sickle-cell disease and HIV in humans. Gates said techniques now in development could leapfrog beyond the current state of the art for immunological treatments, which require the costly extraction of cells for genetic engineering, followed by the re-infusion of those modified cells in hopes that they'll take hold.

"For sickle-cell disease, the vision is to have in-vivo gene editing techniques, that you just do a single injection using vectors that target and edit these blood-forming cells which are down in the bone marrow, with very high efficiency and very few off-target edits," Gates said. A similar in-vivo therapy could provide a functional cure for HIV patients, he said.

Bill Gates shows how the rise of computational power available for artificial intelligence is outpacing Moore's Law. (GeekWire Photo / Todd Bishop)

The rapid rise of artificial intelligence gives Gates further cause for hope. He noted that the computational power available for AI applications has been doubling every three and a half months on average, dramatically improving on the two-year doubling rate for chip density that's described by Moore's Law.
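The gap between those two doubling rates compounds dramatically. A quick back-of-the-envelope calculation makes the point; the 3.5-month and 24-month figures are from the talk, while the two-year window is an arbitrary choice for illustration.

```python
# Back-of-the-envelope comparison of the two growth rates mentioned above:
# AI compute doubling every 3.5 months versus chip density doubling every
# 24 months under Moore's Law, compounded over the same two-year window.

months = 24
ai_growth = 2 ** (months / 3.5)       # roughly a 116-fold increase
moore_growth = 2 ** (months / 24)     # a 2-fold increase
print(round(ai_growth), round(moore_growth))
```

That is, over a single Moore's Law doubling period, compute applied to AI grows by roughly two orders of magnitude more.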

One project is using AI to look for links between maternal nutrition and infant birth weight. Other projects focus on measuring the balance of different types of microbes in the human gut, using high-throughput gene sequencing. The gut microbiome is thought to play a role in health issues ranging from digestive problems to autoimmune diseases to neurological conditions.

"This is an area that needed these sequencing tools and the high-scale data processing, including AI, to be able to find the patterns," Gates said. "There's just too much going on there if you had to do it, say, with paper and pencil to understand the 100 trillion organisms and the large amount of genetic material there. This is a fantastic application for the latest AI technology."

Similarly, organs on a chip could accelerate the pace of biomedical research without putting human experimental subjects at risk.

In simple terms, the technology allows in-vitro modeling of human organs in a way that mimics how they work in the human body, Gates said. "There's some degree of simplification. Most of these systems are single-organ systems. They don't reproduce everything, but some of the key elements we do see there, including some of the disease states. For example, with the intestine, the liver, the kidney. It lets us understand drug kinetics and drug activity."

Bill Gates explains how gene-drive technology can cause genetic changes to spread rapidly in mosquito populations. (GeekWire Photo / Todd Bishop)


The Gates Foundation has backed a number of organ-on-a-chip projects over the years, including one experiment that's using lymph-node organoids to evaluate the safety and efficacy of vaccines. At least one organ-on-a-chip venture based in the Seattle area, Nortis, has gone commercial thanks in part to Gates' support.

High-tech health research tends to come at a high cost, but Gates argues that these technologies will eventually drive down the cost of biomedical innovation.

He also argues that funding from governments and nonprofits will have to play a role in the world's poorer countries, where those who need advanced medical technologies essentially have no voice in the marketplace.

"If the solution of the rich country doesn't scale down, then there's this awful thing where it might never happen," Gates said during a Q&A with Margaret Hamburg, who chairs the AAAS board of directors.

But if the acceleration of medical technologies does manage to happen around the world, Gates insists that could have repercussions on the world's other great challenges, including the growing inequality between rich and poor.

"Disease is not only a symptom of inequality," he said, "but it's a huge cause."

Other tidbits from Gates' talk:

Read Gates' prepared remarks in a posting to his Gates Notes blog, or watch the video on AAAS' YouTube channel.

Read more here:
Why Bill Gates thinks gene editing and artificial intelligence could save the world - Yahoo News

Strategies For Patenting Artificial Intelligence Innovations In The Life Sciences – Mondaq News Alerts

18 February 2020

Wolf, Greenfield & Sacks, P.C.

Today, companies are developing artificial intelligence (AI) systems to meaningfully analyze the deluge of biomedical data. A substantial investment in building and deploying machine learning (ML) technology, the most active area of AI technology being developed today, warrants carefully considering how to protect the resulting intellectual property (IP), but there are challenges to doing so. In this article, we explore strategies for protecting IP for ML technology, including what aspects to consider patenting given current and ongoing changes to U.S. patent law, and when to consider trade secret protection.

Generally, developing an ML system involves creating and deploying a computer program having a model whose performance on some task improves as additional data is used to train the model. In the life sciences, such data can include medical images, genomic data, and electronic health records.

For example, an ML model may be trained on magnetic resonance (MR) images to recognize whether a previously unseen MR image of a patient's brain shows a hemorrhage. As another example, an ML model may be trained on genomic data for individuals with a particular cancer to predict whether a patient's genome has features indicative of the cancer.

Today, neural networks are a popular and widely used class of ML models, often referred to as "deep learning" in a nod to their multi-layer (deep) structure. Other ML models include Bayesian models, decision trees, random forests, and graphical models. Indeed, rapid development of various ML tools has led to an explosion of activity in applying them to new problems across diverse fields.

Deploying an ML system typically involves: (1) selecting/designing an ML model, (2) training the ML model using data, and (3) deploying and using the trained ML model in an application. Valuable IP may be generated at each of these stages, and it's worth considering protecting it through patents. There are, however, a number of challenges in patenting ML systems.
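
The three-stage workflow described above can be sketched in a few lines. The example below is purely illustrative: it uses scikit-learn on synthetic numeric features as a stand-in for the medical images or genomic data discussed in the article, and none of the names refer to any actual patented system.

```python
# Illustrative sketch of the three stages of deploying an ML system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stage 1: select/design a model (here, a simple linear classifier).
model = LogisticRegression()

# Stage 2: train the model on labeled data. The synthetic features
# stand in for real inputs such as MR images or genomic features.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)

# Stage 3: deploy the trained model to score previously unseen inputs.
new_sample = rng.normal(size=(1, 5))
prediction = model.predict(new_sample)  # one label per input row
```

As the article notes, patentable subject matter can arise at any of these stages: in how the model is designed, in how the training data is prepared, or in how the trained model is integrated into an application.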

An invention must be new and non-obvious to be patented. This makes it difficult to patent the use of off-the-shelf ML technology, even if in the context of a new application. Simply downloading freely available ML software, providing it with data, and displaying the results (e.g., to a doctor or researcher) may be viewed by the U.S. Patent and Trademark Office (USPTO) as failing to clear the non-obviousness hurdle. After all, the freely available ML software is distributed precisely so that people can perform this exact process; why, then, would it not be obvious to do so?

But in reality, building and deploying ML systems requires more work beyond simply downloading and running software. Focusing patent claims on the results of such efforts will lead to greater success. Here are three examples of potentially patentable aspects of an ML system:

To see the full article click here

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

See more here:
Strategies For Patenting Artificial Intelligence Innovations In The Life Sciences - Mondaq News Alerts

535Media unveils AI startup MeSearch, with focus on reader-tailored experience – TribLIVE

A Pittsburgh artificial intelligence startup built on decades of search technology launched Monday with an aim to change the way news is published, delivered and consumed online.

MeSearch, founded as a joint venture with 535Media, will start the first tests of its technology with Trib Total Media to provide neighborhood news, said Joe Lawrence, the company's CEO and general counsel for Trib Total Media.

The company will move into space inside the D.L. Clark Building on Pittsburgh's North Shore.

MeSearch uses artificial intelligence to identify content that readers want from a wide range of sources and put it in front of them, Lawrence said. That could be a piece of journalism, user-generated content uploaded into the system, an ad or a piece of sponsored content created specifically for that reader.

"That's what MeSearch does best," Lawrence said. "It finds the most relevant information, and it supplies it to the person who needs it the most."

MeSearch's AI will learn what type of information a reader wants and what days of the week or times of day readers want particular types of information. The website will then be tailored to show content to the reader based on what the AI has learned.

"Instead of going to a site, you are going to a personal experience," said Arthur Crivella, founder of Crivella Technologies Ltd., the Pittsburgh firm that developed the technology underlying MeSearch. "What we're doing is learning. We're using artificial intelligence to characterize a person's interests."

The technology is based on search algorithms developed by Crivella. His company has provided artificial intelligence, machine learning, statistical and data analysis, and other tools to lawyers to assist them with combing through thousands of documents in complex litigation. The algorithms can search for keywords in documents and then search for similarities between all the documents with those keywords and deliver results that the searcher may not have thought relevant when the query was first made.
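
The keyword-and-similarity behavior described above can be illustrated with standard, off-the-shelf techniques. The sketch below uses TF-IDF vectors and cosine similarity, a common baseline for ranking documents against a query; it is a generic stand-in for illustration only, not Crivella's proprietary algorithm, and the sample documents are invented.

```python
# Generic keyword-to-document similarity ranking with TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "contract dispute over software licensing terms",
    "licensing agreement for patented software",
    "quarterly sales figures for the retail division",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

# Treat the keyword query as a tiny document and rank all documents
# by their cosine similarity to it.
query_vector = vectorizer.transform(["software licensing"])
scores = cosine_similarity(query_vector, doc_vectors)[0]
ranked = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
# The two licensing documents outrank the unrelated sales document.
```

A production system would go further, for example by expanding the query with terms drawn from the top-ranked documents, so that results the searcher "may not have thought relevant" when the query was first made can still surface.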

The company has mapped over 2,500 human emotions based on the language we use, and its technology can reveal what's behind what is said in emails or documents, Lawrence said.

Lawrence said he met Crivella through work with a law firm. As Crivella described his company's search technology, Lawrence saw applications for it outside the legal profession.

The tech platform allows publishers to draw in content from many different sites or sources, users to generate content and be paid for it, and readers to see exactly what they were looking for, even if they didn't know what they were looking for in the first place, Lawrence said.

"It endeavors to match the user with the best content," Lawrence said. "It's constantly searching, constantly learning, and the humans provide feedback."

Lawrence and Crivella said MeSearch will empower content creators to participate in the content creation process.

"The creators get to share equally, and they are incentivized to bring their creative talent to the ecosystem, and that will change a lot of things," Lawrence said.

Lawrence said the details of how users will share in the value created by their content are being determined by beta testing, but it will be through a share of advertising revenue and eCommerce commissions.

Visit link:
535Media unveils AI startup MeSearch, with focus on reader-tailored experience - TribLIVE

Challenges of Artificial Intelligence Adoption in Healthcare – HITInfrastructure.com

February 14, 2020 - Artificial Intelligence (AI) adoption is gradually becoming more prominent in health systems, but 75 percent of healthcare insiders are concerned that AI could threaten the security and privacy of patient data, according to a recent survey from KPMG.

Ninety-one percent of healthcare respondents believe that AI implementation is increasing patient access to care, according to the survey of 751 US business decision makers. The survey explored the barriers and challenges that have the potential to hamper the integration of AI technologies in healthcare organizations.

Healthcare security is a top concern for insiders, with 75 percent responding that they believe AI could threaten patient data privacy. But 86 percent of respondents said their organizations are taking steps to protect patient privacy as they implement AI.

Organizations believe that a broad understanding of AI and talent in the space are musts to ensure success, but many insiders reported major challenges in these areas.

Despite this, only 47 percent of healthcare insiders responded that their organizations offer AI training courses to employees, and only 67 percent said their employees support AI adoption, the lowest ranking of any industry.

"Comprehending the full range of AI technology, and how best to apply it in a healthcare setting, is a learned skill that grows out of pilots and tests. Building an AI-ready workforce requires a wholesale change in the approach to training and how to acquire talent. Having people who understand how AI can solve big, complex problems is critical," Melissa Edwards, a managing director of digital enablement at KPMG, said in the survey.

Cost is a major barrier for organizations as well. Successful AI implementation requires a large investment, which means that organizations that are already feeling budget-burdened may be slower to fund AI.

Thirty-seven percent of healthcare industry executives reported that the pace at which they are implementing AI is too slow.

But Edwards highlighted that the pace has actually greatly increased in the past few years.

"The pace with which hospital systems have adopted AI and automation programs has dramatically increased since 2017," she said. "Virtually all major healthcare providers are moving ahead with pilots or programs in these areas. The medical literature is showing support of AI's power as a tool to help clinicians."

Fifty-four percent of executives voiced that, to date, AI has increased the overall cost of healthcare. "The question is, 'Where do I put my AI efforts to get the greatest gain for the business?' Trying to assess what ROI will look like is a very relevant point as they embark on their AI journey," Edwards said.

Last year, the White House called for more transparency and explainability in healthcare AI through the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update.

The plan identified eight strategic priorities for federally-funded AI research including to prioritize investments in the next generation of AI that will drive discovery and insight and enable the US to remain a leader in AI and develop effective methods for human-AI collaboration.

The plan also included:

"AI technologies are critical for addressing a range of long-term challenges, such as constructing advanced healthcare systems, a robust intelligent transportation system, and resilient energy and telecommunication networks," the plan concluded.

View original post here:
Challenges of Artificial Intelligence Adoption in Healthcare - HITInfrastructure.com