Top Five Data Privacy Issues that Artificial Intelligence and Machine Learning Startups Need to Know – insideBIGDATA

In this special guest feature, Joseph E. Mutschelknaus, a director in Sterne Kessler's Electronics Practice Group, addresses some of the top data privacy compliance issues that startups dealing with AI and ML applications face. Joseph prosecutes post-issuance proceedings and patent applications before the United States Patent & Trademark Office. He also assists with district court litigation and licensing issues. Based in Washington, D.C., and renowned for more than four decades of dedication to the protection, transfer, and enforcement of intellectual property rights, Sterne, Kessler, Goldstein & Fox is one of the most highly regarded intellectual property specialty law firms in the world.

Last year, the Federal Trade Commission (FTC) hit both Facebook and Google with record fines relating to their handling of personal data. The California Consumer Privacy Act (CCPA), widely viewed as the toughest privacy law in the U.S., came online this year. Nearly every U.S. state has its own data breach notification law. And the limits of the EU's General Data Protection Regulation (GDPR), which impacts companies around the world, are being tested in European courts.

For artificial intelligence (AI) startups, data is king. Data is needed to train machine learning algorithms, and in many cases it is the key differentiator from competitors. Yet personal data, that is, data relating to an individual, is also subject to an increasing array of regulations.

As last year's $5 billion fine on Facebook demonstrates, the penalties for noncompliance with privacy laws can be severe. In this article, I review the top five privacy compliance issues that every AI or machine learning startup needs to be aware of and have a plan to address.

1. Consider how and when data can be anonymized

Privacy laws are concerned with regulating personally identifiable information. If an individual's data can be anonymized, most of the privacy issues evaporate. That said, the usefulness of data is often premised on being able to identify the individual it is associated with, or at least being able to correlate different data sets that are about the same individual.

Computer scientists may recognize a technique called a one-way hash as a way to anonymize data used to train machine learning algorithms. Hash operations work by converting data into a number in a manner such that the original data cannot be derived from the number alone. For example, if a data record has the name John Smith associated with it, a hash operation may convert the name John Smith into a numerical form from which it is mathematically difficult or impossible to derive the individual's name. This anonymization technique is widely used, but it is not foolproof. The European data protection authorities have released detailed guidance on how hashes can and cannot be used to anonymize data.
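As a rough illustration of the technique (not legal advice, and the key name below is hypothetical), a keyed hash such as HMAC-SHA-256 maps an identifier to a stable pseudonym. The same input always yields the same output, so records about one person can still be correlated, but a plain unsalted hash of a common name is vulnerable to dictionary attacks, which is one reason regulators treat hashing as pseudonymization rather than true anonymization:

```python
import hashlib
import hmac

# Secret key held separately from the data set; without it, pseudonyms
# cannot be regenerated by brute-forcing a list of common names.
SECRET_KEY = b"example-key-stored-in-a-vault"  # hypothetical key

def pseudonymize(identifier: str) -> str:
    """Map an identifier to a stable pseudonym via a keyed one-way hash."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Identical inputs produce identical pseudonyms, so records for the same
# person can still be joined across data sets that share the key.
a = pseudonymize("John Smith")
b = pseudonymize("John Smith")
assert a == b and "John Smith" not in a
```

Note that anyone holding the key can regenerate the mapping, so key management is part of the compliance story, not an afterthought.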

Another factor to consider is that many of these privacy regulations, including the GDPR, cover not just data where an individual is identified, but also data where an individual is identifiable. There is an inherent conflict here. Data scientists want a data set that is as rich as possible. Yet, the richer the data set is, the more likely an individual can be identified from it.

For example, The New York Times wrote an investigative piece on location data. Although the data was anonymized, the Times was able to identify the data record describing the movements of New York City Mayor Bill de Blasio simply by cross-referencing the data with his known whereabouts at Gracie Mansion. This example illustrates the inherent limits of anonymization in dealing with privacy compliance.

2. What is needed in a compliant privacy policy

If anonymization is not possible in the context of your business, the next step is obtaining the consent of the data subjects. This can be tricky, particularly in cases where the underlying data is gathered surreptitiously.

Many companies rely on privacy policies as a way of getting data subjects' consent to collect and process personal information. For this to be effective, the privacy policy must explicitly and particularly state how the data is to be used. Generally stating that the data may be used to train algorithms is usually insufficient. If your data scientists find a new use for the data you've collected, you must return to the data subjects and get them to agree to an updated privacy policy. The FTC regards a company's noncompliance with its own privacy policy as an unfair or deceptive trade practice subject to investigation and possible penalty. This sort of noncompliance was the basis for the $5 billion fine assessed against Facebook last year.

3. How to provide a right to be forgotten

To comply with many of these regulations, including the GDPR and CCPA, you must provide not only a way for a data subject to refuse consent, but also a way for a data subject to withdraw consent already given. This is sometimes called a right to erasure or a right to be forgotten. In some cases, a company must provide a way for subjects to restrict uses of data, offering data subjects a menu of ways the company can and cannot use collected data.

In the context of machine learning, this can be very tricky. Some algorithms, once trained, are difficult to untrain. The ability to remove personal information has to be baked into the system design at the outset.
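One hedged sketch of what "baking in" erasure can mean in practice: keep the raw training records indexed by data-subject ID, so that a withdrawal request deletes every record tied to that person and triggers retraining from the remaining data. Retraining from scratch is the blunt but reliable approach; the class and method names here are hypothetical, purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ErasableTrainingStore:
    """Training data indexed by data-subject ID so that a consent
    withdrawal can remove every record tied to one person."""
    records: dict = field(default_factory=dict)  # subject_id -> list of examples

    def add(self, subject_id: str, example: tuple) -> None:
        self.records.setdefault(subject_id, []).append(example)

    def forget(self, subject_id: str) -> bool:
        """Honor a right-to-be-forgotten request; True if data existed."""
        return self.records.pop(subject_id, None) is not None

    def training_set(self) -> list:
        """Flatten the remaining records; retrain the model on this
        output after each erasure so the model reflects the deletion."""
        return [ex for exs in self.records.values() for ex in exs]

store = ErasableTrainingStore()
store.add("subject-1", ("feature_vector_1", "label_a"))
store.add("subject-2", ("feature_vector_2", "label_b"))
store.forget("subject-1")              # a withdrawal request arrives
assert len(store.training_set()) == 1  # retrain on what remains
```

A design like this is only workable if it exists from day one; bolting subject-level indexing onto an already-trained pipeline is exactly the trap the article warns about.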

4. What processes and safeguards need to be in place to properly handle personal data

Privacy compliance attorneys need to be directly involved in the product design effort. Even in big, sophisticated companies, compliance issues usually arise when those responsible for privacy compliance aren't aware of or don't understand the underlying technology.

The GDPR requires certain companies to designate data protection officers who are responsible for compliance. There are also record-keeping and auditing obligations in many of these regulations.

5. How to ensure that data security practices are legally adequate

Having collected personal data, you are under an obligation to keep it secure. The FTC regularly brings enforcement actions against companies with unreasonably bad security practices and has detailed guidelines on what practices it considers appropriate.

If a data breach does occur, you should immediately contact a lawyer. Every U.S. state has its own laws governing data breach notification, and each imposes different requirements in terms of notification and possibly remuneration.

Collecting personal data is an essential part of many machine learning startups. The lack of a well-constructed compliance program can be an Achilles heel for any business plan: it is a recipe for an expensive lawsuit or government investigation that could be fatal to a young startup. A comprehensive compliance program therefore has to be an essential part of any AI/ML startup's business plan.



A Robot Factory or Digital Workers Powered by Machine Learning to Be Introduced by Russia's VTB Bank to Read, Comprehend, Process Financial Documents -…

VTB Bank, a leading Russian universal bank with over $6 billion in annual revenue, has introduced a robot factory. The bank, which has more than 77,000 workers, says it's planning to roll 250 digital employees off the production line in the coming years.

VTB Bank's management noted that the new robots would help reduce the amount of manual labor and routine operations that need to be performed. The robots are expected to help with document processing and data entry. The bank has estimated that the digital employees will cut processing costs by as much as a factor of four.

The first robot may be introduced in the next two to three months, after the bank has discussed how to develop the digital workers with the company's human employees. The consultations with VTB's employees will aim to identify which processes should be automated in order to improve efficiency and streamline operations.

VTB Bank said around 60 bots should be operating by the end of 2020. These digital employees will reportedly use machine learning and optical character recognition (OCR) to fully automate routine tasks, including reading, comprehending, and even processing bank documents.

Vadim Kulik, deputy president and chairman at VTB Bank, stated:

"The robotization program in its operating function will allow customers to increase the speed of processes, including, for example, working with credit applications. We will also identify suboptimal processes, improve them, optimize costs and reduce operational risks."

VTB has introduced a special robot that is tasked with verifying and processing data related to loan applications for SMEs that may qualify for various employment support programs. These initiatives are being offered to businesses that have struggled to maintain operations due to COVID-19.

VTB claims that the robots were able to process applications about five times faster than regular methods. The bank said it's now processing as many as 1,500 applications in one business day.


MediaValet Recognized as the Winner of the 2020 Microsoft Canada AI & Machine Learning Impact Award – Yahoo Finance

Vancouver, British Columbia--(Newsfile Corp. - July 22, 2020) - MediaValet Inc. (TSXV: MVP) ("MediaValet" or the "Company"), a leading provider of enterprise digital asset management ("DAM") and creative operations software, is proud to announce it has won the 2020 Microsoft Canada AI & Machine Learning IMPACT Award. These annual Canadian awards recognize Microsoft partners that have focused on bettering the lives of Canadians, aligned their efforts with customer excellence, and created innovative solutions leveraging Microsoft products, services and technology.


"We're extremely proud to be recognized by Microsoft for the advancements we've made with artificial intelligence in the digital asset management industry," commented David MacLaren, founder and CEO of MediaValet. "With Microsoft's advanced AI and machine learning capabilities, we've been able to dramatically improve asset discoverability and ROI for our customers - especially those who require customized AI models unique to their businesses - and we've only just begun."

Continued MacLaren, "Over the next few years, AI is going to significantly change the DAM industry and the role it will play within organizations. We look forward to leveraging the new AI advancements on the horizon to continue helping our customers achieve their business objectives."

Microsoft Canada presented these awards in 18 categories on July 22, 2020 at the first-ever virtual Microsoft Inspire conference. Winners were selected based on the outstanding work the companies provided to their customers and community.

"We are honoured to recognize MediaValet for the AI & Machine Learning Award at this year's IMPACT awards," said Suzanne Gagliese, Vice President, One Commercial Partner, Microsoft Canada. "Even throughout a challenging year, MediaValet has proven to be an outstanding partner committed to the highest levels of innovation and customer excellence empowering organizations across Canada with industry-leading solutions to achieve more."

About MediaValet Inc.

MediaValet stands at the forefront of the enterprise, cloud-based digital asset management and creative operations industries. Built exclusively on Microsoft Azure and available across 61 Microsoft data center regions around the world, MediaValet delivers unparalleled enterprise-class security, reliability, redundancy and scalability while offering the largest global footprint of any DAM solution. In addition to providing all core enterprise DAM capabilities, local desktop-to-server support for creative teams, and overall cloud redundancy and management for all source, WIP and final assets, MediaValet offers industry-leading integrations into Slack, Adobe Creative Suite, Microsoft Office 365, WorkFront, Wrike, Drupal 8, WordPress and many other best-in-class 3rd party applications.

Follow MediaValet: Blog, Twitter and LinkedIn
Surf: http://www.mediavalet.com

For additional information:
David MacLaren
MediaValet
Tel: (604) 688-2321
david.maclaren@mediavalet.com

Babak Pedram
MediaValet
Tel: (416) 644-5081
babak.pedram@mediavalet.com

About Microsoft Inspire

Microsoft Inspire provides Microsoft's partner community with access to key marketing and business strategies, leadership, and information regarding specific customer solutions designed to help partners succeed in the marketplace. Microsoft Inspire provides partners with informative learning opportunities covering sales, marketing, services and technology. More information can be found at https://partner.microsoft.com/en-us/inspire.

Veronica Langvee
Microsoft Canada
velangve@microsoft.com
289-305-0247

Product or service names mentioned herein may be the trademarks of their respective owners.

"Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release."

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/60331


Future Growth of Machine Learning Chips Market 2019-2025 with Prominent Players Google, Intel Corporation, NVIDIA, Baidu, Bitmain Technologies,…

Machine learning is essentially a vertical of computer science concerned with building data models that learn automatically from large volumes of information. Machine learning chips for various needs differ in their functionality and design, and their makers are focusing on developing chips that are more efficient and responsive. The Machine Learning Chips Market is predicted to grow at a CAGR of 35.3% during the forecast period of 2018-2026. The research report presents a complete assessment of the market and contains future trends, current growth factors, attentive opinions, facts, and validated market data.

Request for sample copy:

https://www.absolutemarketsinsights.com/request_sample.php?id=152

Some of the players operating in the machine learning chips market are AMD (Advanced Micro Devices, Inc.), Google, Inc., Intel Corporation, NVIDIA Corporation, Baidu, Bitmain Technologies (BitMain Technologies Holding Company), Qualcomm (Qualcomm Technologies, Inc.), Amazon (Amazon.com, Inc.), Xilinx (XILINX INC.), and Samsung, among others.

Demand for machine learning chips has been growing at a fast pace. The primary reason is the increasing integration of machine learning technology with the big data volumes generated by business enterprises. At present, demand for machine learning chips is highest in the North America region, which accounts for 53% of worldwide demand. This global analysis study was produced using a combination of effective information-gathering techniques, including primary and secondary research. The global Machine Learning Chips Market report examines the current state of various business frameworks through a complete analysis of the assorted regions.

Key Questions Answered in Report:

Enquiry Before Buying This Report:

https://www.absolutemarketsinsights.com/enquiry_before_buying.php?id=152

Machine Learning Chips Market By Chip Type

By Technology

Machine Learning Chips Market By Industry

Machine Learning Chips Market By Geography

Contact Us:

Company: Absolute Markets Insights
Email Id: [emailprotected]
Phone: +91-740-024-2424
Contact Name: Shreyas Tanna
Website: www.absolutemarketsinsights.com/

Follow Us on Social Media:
https://www.facebook.com/AbsoluteMarketsInsights/
https://twitter.com/AbsoluteMI
https://www.linkedin.com/company/absolute-markets-insights/
https://plus.google.com/117657938830005040584


Teradata is Selected by Brinker International to Enhance Advanced Analytics, Machine Learning and Data Science Capabilities – Business Wire

SAN DIEGO--(BUSINESS WIRE)--Teradata (NYSE: TDC), the cloud data and analytics company, today announced that after an evaluation of other cloud analytics offerings on the market, Brinker International, Inc. (NYSE: EAT) has reinvested with Teradata, leveraging the Teradata Vantage platform delivered as-a-service, on Amazon Web Services (AWS) as the core of its data foundation to facilitate advanced analytics, machine learning and data science across the organization.

Brinker is one of the world's leading casual dining restaurant companies and has been a Teradata customer for more than two decades. Founded in 1975 and based in Dallas, Texas, Brinker owns, operates, or franchises more than 1,600 restaurants under the names Chili's Grill & Bar and Maggiano's Little Italy. Over the past year, Brinker has been working to further increase its capabilities in advanced analytics and data science.

"Being a data-driven organization allows us to make informed decisions to create a better Guest and Team Member experience," said Pankaj Patra, senior vice president and chief information officer at Brinker International. "As we looked for more flexible and cost-effective ways to manage and access our data, we evaluated quite a few cloud-native providers. After careful consideration, we decided the best course of action would be to migrate to Teradata Vantage in the cloud and take advantage of its as-a-service offerings to support our analytic goals."

With Teradata Vantage delivered as-a-service, in the cloud, enterprises such as Brinker can focus on mining their data for insights that drive business decisions, rather than on managing infrastructure. By integrating Vantage's machine learning capabilities, Brinker can now apply advanced analytics and predictive modeling to its business processes, enabling more accurate sales forecasting, demand and traffic forecasting, team member management, recommendation engines for customers and more.

"We're proud of our ongoing relationship with Brinker and its long-standing position as a leader in the restaurant industry, a position due in large part to its culture of innovation in using data and analytics to streamline business processes, facilitate rapid decision-making and turn insights into answers," said Ashish Yajnik, vice president of Vantage Cloud at Teradata. "Our collaboration with AWS and participation in the AWS Independent Software Vendor (ISV) Workload Migration Program has helped Brinker successfully move their mission-critical data infrastructure to the cloud. We look forward to expanding our relationship by powering their advanced analytics and data science capabilities through the scalable, clean and trusted data foundation that the Vantage platform provides."

Teradata is an Advanced Technology and Consulting Partner in the AWS Partner Network (APN). The company brings proven processes and tools to make migrations to Vantage on AWS low risk and the fastest path to customer value through the AWS ISV Workload Migration Program, an APN Partner program that helps customers migrate ISV workloads to AWS to achieve their business goals and accelerate their cloud journey.

"Through the AWS ISV Workload Migration Program, Teradata was able to help Brinker migrate to Vantage on AWS securely and cost effectively. We are pleased to collaborate with Teradata and its long-standing customer Brinker to enhance their cloud practices," said Sabina Joseph, director, Americas ISVs, Amazon Web Services, Inc.

Teradata Vantage is the leading hybrid cloud data analytics software platform that enables ecosystem simplification by unifying analytics, data lakes and data warehouses. With Vantage delivered as-a-service, enterprise-scale companies can eliminate silos and cost-effectively query all their data, all the time, regardless of where the data resides: in the cloud using low-cost object stores, on multiple clouds, on-premises or anywhere in between, to get a complete view of their business. And by combining Vantage with first-party cloud services, Teradata enables customers to expand their cloud ecosystem with deep integration of cloud-specific, cloud-native services.

Webinar

Join Teradata for a live webinar on July 29th, 8:00 to 9:00 a.m. PT, featuring Mark Abramson, lead architect, BI and analytics at Brinker International, and William McKnight, president of McKnight Consulting Group. The session will be moderated by Ed White, vice president, portfolio marketing and competitive intelligence at Teradata. Details below:

Webinar: Brinker's Journey Back to Teradata

Tuesday, July 29th
8:00 a.m. to 9:00 a.m. PT / 11:00 a.m. to 12:00 p.m. ET

Registration is required and is open to Teradata prospects, customers, analysts, partners and Teradata employees.

This interactive webinar will highlight:

About Brinker International, Inc.

Hi, welcome to Brinker International, Inc. (NYSE: EAT)! We're one of the world's leading casual dining restaurant companies. Founded in 1975 in Dallas, Texas, we stay true to our roots, but also enjoy exploring outside of our hometown. As of March 25, 2020, we owned, operated or franchised 1,675 restaurants in 29 countries and two territories under the names Chili's Grill & Bar (1,622 restaurants) and Maggiano's Little Italy (53 restaurants). Our passion is making people feel special, and we hope you feel that passion each time you visit one of our restaurants or our home office. Find more information about us at http://www.brinker.com, follow us on LinkedIn or review us on Glassdoor.

About Teradata

Teradata transforms how businesses work and people live through the power of data. Teradata leverages all of the data, all of the time, so you can analyze anything, deploy anywhere and deliver analytics that matter. We call this pervasive data intelligence, powered by the cloud. And it's the answer to the complexity, cost and inadequacy of today's approach to analytics. Get the answer at teradata.com.

The Teradata logo is a trademark, and Teradata is a registered trademark of Teradata Corporation and/or its affiliates in the U.S. and worldwide.


Library of Congress Wants To Try Adding Humans to Automated Processes – Nextgov

The Library of Congress, the biggest physical repository of information in the world, has been digitizing its resources, expanding its digital library and developing automation tools to manage its collection. As those tools bear fruit, Library officials now want to reintroduce humans to the process.

The Library's Digital Innovation Labs Section has undertaken a range of programs "aimed at maximizing the use of digital collections and supporting emerging research methods," including using machine learning and crowdsourcing prototypes, according to a solicitation posted Tuesday to beta.SAM.gov. Now, the Library of Congress "seeks to build on these initial experiments to further examine models that expand access to digital collections by combining digitally-enabled human participation with computational methods, otherwise known as human-in-the-loop approaches."

As the Library's digital collection expands, it needs help properly tagging and verifying the metadata attached to the content. Machine learning tools have been plugging along at this task through the pilot programs, but Library officials want humans to help verify that the work is being done correctly, as well as ethically.

Through the contract, the Library wants to procure at least two experimental prototypes using human-in-the-loop workflows to model, test, and evaluate different ethical approaches to applying crowdsourcing and machine learning methods to Library digital collections that enhance collection usability, utility, discoverability and user engagement.

Per the solicitation, at least one of the two prototypes will focus on improving metadata through crowdsourcing methods. As users add feedback on the accuracy of the derived metadata, those results will in turn be funneled back into the automation tools to serve as training data for a machine learning algorithm that shall be applied to Library digital collections to create enhanced metadata.
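The feedback loop the solicitation describes can be sketched in a few lines. This is an illustrative human-in-the-loop pattern, not the Library's actual system; every name below is hypothetical. Machine-suggested tags are shown to human reviewers, and their verdicts become labeled examples for the next training run:

```python
def human_in_the_loop_round(suggestions, review_fn):
    """One review round: each machine-suggested tag is shown to a human,
    and the verdicts become labeled training examples for the next run.

    `suggestions` is a list of (item_id, tag) pairs produced by the model;
    `review_fn(item_id, tag)` stands in for the crowdsourcing interface
    and returns True when the human confirms the tag.
    """
    training_examples = []
    for item_id, tag in suggestions:
        verdict = review_fn(item_id, tag)
        training_examples.append((item_id, tag, verdict))
    return training_examples

# Toy usage: a reviewer who only accepts tags starting with "map".
suggested = [("item-1", "map"), ("item-2", "portrait")]
labels = human_in_the_loop_round(suggested,
                                 lambda i, t: t.startswith("map"))
# `labels` now holds confirmed and rejected tags to retrain the tagger on.
```

The point of the pattern is that humans never label from scratch; they only correct the model, which is both cheaper for volunteers and a direct source of hard training examples.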

Library officials stressed the need to incorporate user-centered design in the process or risk alienating the volunteers and researchers contributing to the project. The solicitation pushes this concept beyond basic interface design and requires the contractors to consider user motivations and needs in the workflow.

"Workflows shall be adaptive to different use cases and user profiles, to ultimately facilitate meaningful and productive user interactions with the Library's digital collections," according to the statement of work.

Bids are due by noon August 5. The contract will run from September 1 through June 30, 2021.


Here's one way to make daily covid-19 testing feasible on a mass scale – MIT Technology Review

It's impossible to contain covid-19 without knowing who's infected: until a safe and effective vaccine is widely available, stopping transmission is the name of the game. While testing capacity has increased, it's nowhere near what's needed to screen patients without symptoms, who account for nearly half of the virus's transmission.

Our research points to a compelling opportunity for data science to effectively multiply today's testing capacity: if we combine machine learning with test pooling, large populations can be tested weekly or even daily, for as little as $3 to $5 per person per day.

In other words, for the price per test of a cup of coffee, governments can safely reopen the economy and halt ongoing covid-19 transmissionall without building new labs and without new drugs or vaccines.

Most people get tested for the coronavirus because they experienced symptoms or came in close contact with someone who did. But as offices and schools come under pressure to reopen, organizations will need to grapple with an unpleasant truth: relying on symptoms to guide testing will miss asymptomatic and pre-symptomatic cases, and put everyone at risk.

The current alternatives, though, are not appealing. Infrequent testing (monthly seems to be the default in many proposals) or haphazard screening allows active cases to spread the virus for weeks before they are caught. And the price is still high, at $100 to $200 or more per test.

Pooled testing, guided by machine-learning algorithms, can fundamentally change this calculus. In pooled testing, many people's samples are combined into one. If no virus is detected in the combined sample, no one in the pool is infected, and the entire pool can be cleared with just one test.

But there's a catch: if anyone in the pool is infected, the test will be positive and more testing will be required to figure out who has the virus.
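The arithmetic behind that catch can be sketched with the classic two-stage (Dorfman) scheme: pool k samples; with probability (1-p)^k the pool clears on one test, otherwise all k members are retested individually. Expected tests per person are then 1/k + 1 - (1-p)^k. This is a deliberate simplification that assumes everyone has the same, independent infection risk p, which is exactly the assumption the machine-learning step relaxes:

```python
def expected_tests_per_person(p: float, k: int) -> float:
    """Dorfman two-stage pooling: one test for the pool of k, plus k
    individual follow-up tests whenever the pool comes back positive."""
    prob_pool_positive = 1 - (1 - p) ** k
    return 1 / k + prob_pool_positive

# At 1% prevalence, pools of 10 need roughly 0.196 tests per person,
# about a 5x saving over testing everyone individually.
best_k = min(range(2, 40), key=lambda k: expected_tests_per_person(0.01, k))
```

The formula also shows why efficiency rises as prevalence falls: the smaller p is, the larger the pool that still clears in one test most of the time.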

So a key part of knowing how to pool is knowing the likelihood that certain people in the group will test positive, and separating them from the rest. How do we know that risk? That's where machine learning comes in.

The risk of infection is evolving rapidly in the United States: the relative odds in New York and Florida have reversed in a matter of weeks. Risk also differs significantly between people: compare a health-care worker with an employee working remotely. Estimating this risk for each person is a perfect job for machine learning.

Using publicly available data from employers and schools, epidemiological data on local infection and testing rates, and more sophisticated data on travel patterns, social contacts, or sewage (pdf), if available, modelers can predict anyone's risk of having covid-19 on a day-by-day basis. This allows highly flexible approaches to pooling that drive huge efficiency gains.
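A minimal, purely illustrative sketch of that prediction step: score each person's infection risk from a few features, then route high-risk individuals to individual tests while low-risk individuals share large pools. The features, weights, and threshold below are invented for illustration; a real system would fit a model to outcome data rather than hand-pick coefficients:

```python
import math

def risk_score(local_positivity: float, contacts_per_day: float,
               frontline_worker: bool) -> float:
    """Toy logistic model of P(infected); weights are illustrative only."""
    z = (-5.0 + 8.0 * local_positivity + 0.05 * contacts_per_day
         + 1.5 * float(frontline_worker))
    return 1 / (1 + math.exp(-z))

def split_for_pooling(people: dict, threshold: float = 0.05):
    """Send high-risk people to individual tests; pool the rest."""
    individual = [p for p, r in people.items() if r >= threshold]
    pooled = [p for p, r in people.items() if r < threshold]
    return individual, pooled

population = {
    "health-care worker": risk_score(0.03, 40, True),
    "remote employee":    risk_score(0.03, 3, False),
}
individual, pooled = split_for_pooling(population)
# The health-care worker scores well above the threshold and is tested
# individually; the remote employee joins a large pool.
```

Keeping the likely positives out of the pools is what lets the remaining pools grow large, which is where the efficiency gains come from.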

Another advantage: pooled testing gets more efficient when disease prevalence is lower. If a population, say all students at a university, is tested daily, the risk of infection is dramatically lowered for everyone in the group, simply because testers remove positives from tomorrow's pool when they diagnose them today. That means tomorrow's pool can be even larger, which reduces the number of tests needed and thus the cost of testing the population. And with more frequent testing, people who are infected but don't have symptoms can stay home, further reducing spread and making pooled testing even more efficient.

As a result, high-frequency pooled testing with machine learning costs far less than you might think. According to our analysis, testing daily costs only twice as much as testing monthly. And daily testing can actively suppress the virus, whereas monthly testing really only allows us to see how badly things have gone.

This effect can be so powerful, in fact, that under some conditions, such as in meatpacking plants or nursing homes, increasing frequency can actually lower the number of tests needed, and thus the cost of testing a population, in a given time period. You read that right: testing more often can actually be less expensive for the health-care system.

The last pillar of prevention through testing requires accounting for the virus's spread between people and, therefore, for risk that is correlated. Using machine learning to model social networks has been a growing focus for researchers in computer science, economics, and other fields. Such algorithms, combined with data on jobs, classrooms, university dorms, and many other settings, allow machine-learning tools to estimate the potential that different people will interact. Knowing this likelihood can make group testing even more powerful.

Is high-frequency pooled testing feasible in the real world? While we don't want to minimize the logistical challenges, they are just that: challenges, not deal-breakers. The US Food and Drug Administration has just approved the first use of pooled testing, and research increasingly shows that this technique is sensitive enough to detect positive cases. So as long as labs are willing, testers can start pooling today.

Though some have called into question the feasibility of pooling given the scale of the current outbreak, this is only a challenge because we traditionally rely on coarse (and, as we show in our paper, potentially inaccurate) estimates of virus prevalence in large populations. Instead, machine learning can give us the precise individual-level estimates we need to make pooling work even at high prevalences, by identifying those likely to test positive and keeping them out of large pools.

Frequency also pays huge dividends when virus prevalence is high. Before pooled testing is implemented, say at a factory or school, the entire population could complete a one-time screening. Infected people would stay home until they recovered, and high-frequency pooled testing would keep prevalence low by catching disease early.

The logistics of sample collection and pooling in different settings must also be addressed. We're encouraged by the increasing evidence for products, some approved by the FDA, that allow people to collect and submit their own test samples. One is based on saliva, which means collection costs can be kept low even at large scale.

It's high time for high-frequency testing to become a core part of the US strategy to combat covid-19 and reopen the economy. Pooled testing that harnesses the power of machine learning makes paying the associated costs not only viable but, when weighed against the alternative of prolonged closures, a tremendous deal.

Ned Augenblick, Jonathan Kolstad, and Ziad Obermeyer are associate professors at the University of California, Berkeley. They are also cofounders of Berkeley Data Ventures, a consultancy that applies machine learning to health-care problems.

Original post:
Here's one way to make daily covid-19 testing feasible on a mass scale - MIT Technology Review

Global Machine Learning as a Service Market 2020 Analysis, Types, Applications, Forecast and COVID-19 Impact Analysis 2025 – Bandera County Courier

The most recent report titled Global Machine Learning as a Service Market 2020 by Company, Type and Application, Forecast to 2025, issued by MarketsandResearch.biz, presents a structured analysis of the market covering historical data and revenue forecasts. The report sheds light on numerous aspects of the current market scenario, such as supply chain operations, new product development, and other activities. It discusses major players and regions, recent developments, and the competitive landscape of the global Machine Learning as a Service market. Our team of analysts continuously watches market movement and market drivers, offering real-time analysis of growth, decline, hurdles, opportunities, and challenges faced by the major players in the global market.

Key Players:

The high growth potential of the market has encouraged several players to enter it and create a niche for themselves. The report includes an accurate analysis of key players with market value, company profiles, and SWOT analysis. It also includes manufacturing cost analysis, covering raw materials analysis, product price trends, mergers & acquisitions, expansions, key suppliers of the product, the concentration rate of the Machine Learning as a Service market, and manufacturing process analysis. Almost all of the companies listed or profiled are upgrading their applications to improve the end-user experience and consolidating their positions in 2020.

DOWNLOAD FREE SAMPLE REPORT: https://www.marketsandresearch.biz/sample-request/75285

NOTE: Our analysts monitoring the situation across the globe explain that the market will generate remunerative prospects for producers after the COVID-19 crisis. The report aims to provide an up-to-date illustration of the latest scenario, the economic slowdown, and the impact of COVID-19 on the overall industry.

This report focuses on the following companies: Amazon, Alibaba, Microsoft, Oracle, Tencent, IBM, Baidu, Salesforce, Google, UCloud, Heroku, Rackspace, Clustrix, CSC (Computer Sciences Corporation), SAP AG, Xeround, CenturyLink Inc.

Furthermore, the research provides an in-depth overview of the regional-level break-up, categorized by likely leading-growth-rate territories and the countries with the highest market share in past and current scenarios. The geographical break-up covered in the study includes: North America (United States, Canada and Mexico), Europe (Germany, France, United Kingdom, Russia and Italy), Asia-Pacific (China, Japan, Korea, India, Southeast Asia and Australia), South America (Brazil, Argentina), Middle East & Africa (Saudi Arabia, UAE, Egypt and South Africa)

Market segment by product type, split into Private clouds, Public clouds, Hybrid cloud, etc. along with their consumption (sales), market share and growth rate

Market segment by application, split into Personal, Business, etc. along with their consumption (sales), market share and growth rate

ACCESS FULL REPORT: https://www.marketsandresearch.biz/report/75285/global-machine-learning-as-a-service-market-2020-by-company-type-and-application-forecast-to-2025

Research Objectives and Purpose:

Moreover, the market is segmented on the basis of product type, application, and end-user. The report gives a current and attentive analysis of the size, patterns, production, and supply of Machine Learning as a Service. It aims to help companies strategize their decisions in a better way and ultimately attain their business goals. The analysts also identify significant trends, drivers, and influencing factors globally and by region.

Customization of the Report:

This report can be customized to meet the client's requirements. Please connect with our sales team (sales@marketsandresearch.biz), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.

About Us

Marketsandresearch.biz is a leading global market research agency providing expert research solutions, trusted by the best. We understand the importance of knowing what global consumers watch and buy, and we use that knowledge to produce our distinguished research reports. Marketsandresearch.biz has a worldwide presence to facilitate real market intelligence, using the latest methodology, best-in-class research techniques, and cost-effective measures for the world's leading research professionals and agencies. We study consumers in more than 100 countries to give you the most complete view of trends and habits worldwide. Marketsandresearch.biz is a leading provider of Full-Service Research, Global Project Management, Market Research Operations and Online Panel Services.

Contact Us
Mark Stone
Head of Business Development
Phone: +1-201-465-4211
Email: sales@marketsandresearch.biz
Web: http://www.marketsandresearch.biz


What Are The Limitations Of AutoML? – Analytics India Magazine

AutoML is no longer a new term. Since Google released its first AutoML product in 2018, discussions around this technology have been quite prominent. Some regard it as a weapon to achieve general artificial intelligence, while others deem it to be exaggerated. But what everyone agrees on is that AutoML does have extraordinary significance in realising AI advancements.

Using machine learning to process data has increased revenue and efficiency for enterprises. Big tech companies hold hundreds of millions of data points, which are impossible to process manually. Machine learning is, therefore, being used to build complex models.

Over the past few years, automatic machine learning algorithms have been used in areas including image recognition, natural language processing, speech recognition, interactive AutoML optimisation, semi-supervised learning, reinforcement learning and more. But it comes with its own set of challenges.

Business Challenges For AutoML

AutoML faces problems in both data and model application. For example, high-quality labelled data is far from sufficient, and data inconsistencies during offline data analysis can degrade results. In addition, teams need to apply automated machine learning to unstructured and semi-structured data, which is technically difficult.

Furthermore, current AutoML systems have fixed optimization objectives. Realistic problems are often a combination of multiple objectives, such as the need to trade decision quality off against cost. With this kind of multi-objective exploration, people have limited ways to judge effectiveness before the results are obtained.

Such situations are difficult to support in current AutoML. An actual business may place customised requirements on the machine learning process, for example, allowing only a certain type of data-processing tool. Such requirements cannot be met by current black-box AutoML solutions. In both effectiveness and efficiency, AutoML has a lot of room for improvement.
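The multi-objective gap described above can be made concrete with a toy example. The candidate models, numbers, and weighted-scalarization score below are all hypothetical, sketching one simple way to trade accuracy off against prediction cost:

```python
# Hypothetical candidates: (name, validation_accuracy, prediction_cost
# in arbitrary units). Current AutoML typically optimizes a single fixed
# metric; weighted scalarization is one simple multi-objective workaround.
candidates = [
    ("small_tree",   0.86, 1.0),
    ("large_forest", 0.93, 8.0),
    ("linear_model", 0.82, 0.5),
]

def scalarized_score(accuracy: float, cost: float, cost_weight: float) -> float:
    """Higher is better: accuracy penalized by a user-chosen cost weight."""
    return accuracy - cost_weight * cost

def select(cost_weight: float) -> str:
    """Pick the candidate with the best scalarized score."""
    return max(candidates,
               key=lambda c: scalarized_score(c[1], c[2], cost_weight))[0]

print(select(cost_weight=0.0))   # accuracy only -> "large_forest"
print(select(cost_weight=0.05))  # cost now matters -> "small_tree"
```

The point of the sketch is that the "right" model changes with the cost weight, and today's black-box AutoML gives users little control over that trade-off before results come back.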

Explainability

While automated machine learning can find solutions, they may not necessarily be what the user wants. The user may want an explainable model. In fact, experts say explainability itself carries great uncertainty, because everyone's understanding differs and it depends heavily on personal judgment. Making the model explainable is even more challenging.

Organisations have to work on advancing the development of standards for explainable machine learning. AutoML can give the results, and experts can then judge whether those results meet their own standards of interpretability, explainability, and consistency.

AutoML In Dynamic Environment

Another point worth attention is that performing AutoML in a dynamic environment is more difficult than in a static one, because the environment keeps changing. Dealing effectively with dynamic environments is an open issue in the academic community, and researchers are continually exploring the field. For dynamic feature learning, companies will need to adapt to changes in data faster, detect distribution changes, and automatically adapt across different types of models.

The current mainstream computing frameworks (such as TensorFlow and PyTorch) are optimised only for training a single machine learning model. Companies need to work on redesigning the underlying computing architecture for automated machine learning, providing configuration evaluation and optimisation across multiple model-learning runs. For dynamic-environment learning, the system needs to automatically adapt models based on changes in the data distribution.
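As an illustration of the distribution-change detection mentioned above, here is a minimal, hypothetical sketch that flags drift when a live window's mean departs from a reference window by more than a few standard errors (real systems would use richer per-feature tests and trigger model retraining):

```python
# Minimal drift-detection sketch on synthetic data (an illustrative
# assumption, not a production monitoring design).
import random
import statistics

def drifted(reference: list, live: list, threshold: float = 4.0) -> bool:
    """Flag drift when the live mean departs from the reference mean
    by more than `threshold` standard errors of the live window."""
    ref_mean = statistics.mean(reference)
    ref_sd = statistics.stdev(reference)
    standard_error = ref_sd / len(live) ** 0.5
    return abs(statistics.mean(live) - ref_mean) > threshold * standard_error

random.seed(0)
reference = [random.gauss(0.0, 1.0) for _ in range(1000)]  # training-time data
stable    = [random.gauss(0.0, 1.0) for _ in range(200)]   # same distribution
shifted   = [random.gauss(0.8, 1.0) for _ in range(200)]   # distribution change

print(drifted(reference, stable))   # False: no retraining needed
print(drifted(reference, shifted))  # True: adapt the model
```

In a dynamic-environment AutoML pipeline, a positive drift signal like this would be the trigger for re-running configuration evaluation and model adaptation.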


Security and Privacy

Security is the other hot research topic of AutoML. In terms of the security of AutoML, organisations are exploring different technical solutions for different scenarios, such as automatic machine learning for privacy protection, automatic multi-party machine learning, automatic federation, etc.

But the current implementation lacks the support of laws, regulations and industry standards. Organisations need to promote the establishment and improvement of standards such as IEEE standards for federated learning and secure multi-party computing.

Final Thoughts

AutoML technology has now been deployed in many scenarios, but the challenge is to implement it at large scale and in more industries. The obstacle is that technological breakthroughs in AutoML require deeper research at the theoretical and algorithmic levels.

Automated machine learning has gone through many iterations. It has evolved from the earliest two-class classification to multi-class classification and regression, and from structured data to unstructured data such as images and videos; it is used in automated supervised learning that handles low-quality data and in automated multi-party machine learning that protects privacy. On the theoretical side, researchers are exploring the boundaries of AutoML algorithms, because no general algorithm can solve all problems.



Artificial Intelligence (AI) in Education Sector Market Insights and Forecast 2020-2025 | Emphasis on Technology (Machine Learning, Deep Learning,…

The Asia-Pacific AI market in the education sector stood at US$ 97.5 million in 2018 and is expected to grow at a CAGR of 53.2% during the forecast period 2019-2025. The education sector is becoming more tailored and convenient for students, and the credit for this can be given to the increased application of AI technology in the sector. The technology has plentiful applications that are changing the methodology of the learning process. Apart from the learning aspect, AI is also helping to automate and speed up administrative tasks, helping institutions reduce the time spent on tedious tasks and increase the time spent on each individual student.

As per the research, the use of AI in the education industry will grow by 47.5% through 2025 as the citizens of the world become more adaptive and open to technology tools. Educational institutes are adopting AI to offer personalized learning experiences and enhance tutoring methods. The integration of intelligent algorithms through AI in learning platforms has shown a positive impact on students' learning, which is promoting the growth of AI in the education sector. The technology has transformed classrooms and changed the role of educators in the learning process by providing more user-friendly and sophisticated tools. Educational institutes are utilising AI capabilities for content development, curriculum design, online learning platforms, and administrative operations. Moreover, the increasing adoption of web-based services and smartphones has encouraged educational institutes to move toward online learning solutions to increase their student base and provide high-quality service.

Download Sample of This Strategic Report https://univdatos.com/request_form/form/178

Natural language processing technology dominated the market in APAC region in 2018 and is expected to maintain its dominance throughout 2025

Based on technology, the AI in education market is segmented into machine learning, deep learning, neural network and natural language processing. Out of all, in 2018, natural language processing dominated the market and is expected to maintain its dominance throughout 2025. Machine learning promises to deliver custom in-class teaching by providing real-time feedback based on individual student behaviour and other factors. This improves the chances of better learning. Machine learning also plays an important role in assessments or evaluations by removing biases.

Amongst components, software held the major share in 2018 and is anticipated to maintain its lead till 2025.

Based on component, the AI in education market is segmented into software, services and hardware. In 2018, software dominated the component segment of the Asia-Pacific AI in education sector market, generating revenue of US$57.2 million, followed by services and hardware.

Browse Complete Summary of This Report https://univdatos.com/report/asia-pacific-market-insights-on-artificial-intelligence-in-education-sector-insights-and-forecast-2019-2025

Higher education institutions were the major adopters of AI in education in the Asia-Pacific region and are expected to witness the highest market growth during the forecast period

Artificial intelligence in the education sector has been on an upward trend due to high adoption of AI in countries such as China and Japan, paired with the development and growth of research & development activities that combine investments by government and private institutions.

Competitive Landscape-Top 10 Market Players

The leading market players operating in the Asia-Pacific AI in education sector include Google, Microsoft Corp, Intel Corporation, IBM, Qualcomm, General Electric, Next IT, Siemens, Samsung and SAP SE. Among them, Google dominates the current market. Given the growth potential, other players are also investing heavily in AI technology to uplift the education sector in the Asia-Pacific region.

Feel free to contact us for any queries https://univdatos.com/request_form/form/178

Reasons to buy (The research report presents):

Current and future market size from 2018 to 2025 in terms of value (US$)

Combined analysis of deep dive secondary research and input from primary research through Key Opinion Leader of the industry

Country level details of the overall adoption rate of AI in education sector in the Asia-Pacific region

Analysis of technological advancement in the education sector in countries such as India, China, Singapore, Australia etc.

A quick review of overall industry performance at a glance

An in-depth analysis of key industry players, with major focus on their key financials, product portfolio, expansion strategies, and recent developments

A detailed analysis of regulatory framework, drivers, restraints, key trends and opportunities prevailing in the market

Examination of industry attractiveness with the help of Porter's Five Forces analysis

The study comprehensively covers the market across different segments and sub-segments of the technology in the Asia-Pacific education sector

Countries covered:China, India, Japan, Singapore, Australia and others

Customization Options:

The market report on AI in education sector in the Asia-Pacific region can be customized for other Asian countries as well, which are not covered in the report. Besides this, UMI understands that you may have your own business need, hence we also provide fully customized solutions to clients.


About Us:

UnivDatos Market Insights (UMI) is a passionate market research firm and a subsidiary of Universal Data Solutions. Rigorous secondary and primary research on the market is our USP; hence, the information presented in our reports is based on facts and realistic assumptions. We have worked with 200+ global clients, including some of the Fortune 500 companies. Our clientele praises us for the quality of our insights, in-depth analysis, custom research abilities, and detailed market segmentation.

Contact us:

UnivDatos Market Insights (UMI)

Email: [emailprotected]

Web: https://univdatos.com

Ph: +91 7838604911
