Global Machine Learning as a Service (MLaaS) Market Expected to Reach Highest CAGR by 2025: Microsoft, International Business Machines, Amazon Web…

The study on the Global Machine Learning as a Service (MLaaS) Market offers deep insights into the MLaaS market, covering all of its crucial aspects. The report provides historical information together with a forecast over the forecast period. Some of the important aspects analyzed in the report include market share, production, key regions, revenue rate, and key players. The report also provides readers with detailed figures for the value of the MLaaS market in the historical year and its expected growth in upcoming years. In addition, the analysis forecasts the CAGR at which the MLaaS market is expected to grow and the major factors driving the market's growth. The MLaaS market was valued at USD xxx million in the historical year and is estimated to reach USD xxx million by the end of 2025.
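The report's actual valuations are redacted above ("USD xxx million"), but the CAGR it refers to is a standard quantity; as a minimal sketch with made-up numbers, it can be computed like this:

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate: the constant yearly growth rate
    that takes start_value to end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical figures for illustration only; the report's valuations
# are not disclosed in this excerpt.
print(f"{cagr(1_000, 4_300, 7) * 100:.1f}%")  # ~23.2% CAGR over 7 years
```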

This study covers the following key players: Microsoft, International Business Machines, Amazon Web Services, Google, BigML, FICO, Hewlett-Packard Enterprise Development, AT&T.

Request a sample of this report @ https://www.orbismarketreports.com/sample-request/83250?utm_source=Pooja

To analyze the global Machine Learning as a Service (MLaaS) market, the analysis methods used are SWOT analysis and PESTEL analysis. Marketers use SWOT analysis to identify what makes a business stand out and to gain advantage from those findings, whereas PESTEL analysis is the study of political, economic, social, technological, environmental, and legal matters. These techniques are helpful for analyzing the market in terms of research strategies. The report consists of a detailed study of current market trends along with past statistics; the past years are used as a reference to derive the predicted data for the forecast period. Various important factors such as market trends, revenue growth patterns, market shares, and demand and supply are included in almost every market research report for every industry. It is very important for vendors to provide customers with new and improved products and services in order to gain their loyalty; up-to-date, complete knowledge of products, end users, and industry growth will drive profitability and revenue. The Machine Learning as a Service (MLaaS) report studies the current state of the market to analyze future opportunities and risks.

Access Complete Report @ https://www.orbismarketreports.com/global-machine-learning-as-a-service-mlaas-market-growth-analysis-by-trends-and-forecast-2019-2025?utm_source=Pooja

Market segment by Type, the product can be split into: Special Service, Management Services

Market segment by Application, split into: Banking, Financial Services, Insurance, Automobile, Health Care, Defense, Retail, Media & Entertainment, Communication, Other

For the study of the Machine Learning as a Service (MLaaS) market, past statistics are very important; the report uses past data to predict future data. The MLaaS market has an impact all over the globe. At the global level, the MLaaS industry is segmented on the basis of product type, applications, and regions. The report also focuses on market dynamics, MLaaS growth drivers, and developing market segments, and the market growth curve is offered based on past, present, and future market data. Industry plans, news, and policies are presented at a global and regional level.

Some Major TOC Points:
1 Report Overview
2 Global Growth Trends
3 Market Share by Key Players
4 Breakdown Data by Type and Application
(Continued)

For Enquiry before buying report @ https://www.orbismarketreports.com/enquiry-before-buying/83250?utm_source=Pooja

About Us: With unfailing market gauging skills, Orbis Market Reports has been excelling in curating tailored business intelligence data across industry verticals. Constantly striving to expand our skill development, our strength lies in dedicated intellectuals with dynamic problem-solving intent, ever willing to mold boundaries to scale heights in market interpretation.

Contact Us:
Hector Costello
Senior Manager, Client Engagements
4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A.
Phone No.: USA: +1 (972)-362-8199 | IND: +91 895 659 5155


Udacity partners with AWS to offer scholarships on machine learning for working professionals – Business Insider India

All applicants will be able to join the AWS Machine Learning Foundations Course. Applications are currently open, and enrollment for the course begins on May 19.

This course will provide an understanding of software engineering and AWS machine learning concepts, including production-level coding and practice in object-oriented programming. Participants will also learn about deep learning techniques and their applications using AWS DeepComposer.

"A major reason behind the increasing uptake of such niche courses among the modern-age learners has to do with the growing relevance of technology across all spheres the world over. In its wake, many high-value job roles are coming up that require a person to possess immense technical proficiency and knowledge in order to assume them. And machine learning is one of the key components of the ongoing AI revolution driving digital transformation worldwide," said Gabriel Dalporto, CEO of Udacity.

The top 325 performers in the foundation course will be awarded a scholarship to join Udacity's Machine Learning Engineer Nanodegree program. In this advanced course, students will work with ML tools from AWS, including real-time projects focused on specific machine learning skills.


The Nanodegree program scholarship will begin on August 19.



Quantzig Launches New Article Series on COVID-19’s Impact – ‘Understanding Why Online Food Delivery Companies Are Betting Big on AI and Machine…

LONDON--(BUSINESS WIRE)--As a part of its new article series that analyzes COVID-19's impact across industries, Quantzig, a premier analytics services provider, today announced the completion of its recent article, "Why Online Food Delivery Companies Are Betting Big on AI and Machine Learning."

The article also offers comprehensive insights on:

Human activity has slowed down due to the pandemic, but its impact on business operations has not. We offer transformative analytics solutions that can help you explore new opportunities and ensure business stability to thrive in the post-crisis world. Request a FREE proposal to gauge COVID-19's impact on your business.

"With machine learning, you don't need to babysit your project every step of the way. Since it means giving machines the ability to learn, it lets them make predictions and also improve the algorithms on their own," says a machine learning expert at Quantzig.

After several years of being confined to technology labs and the pages of sci-fi books, artificial intelligence (AI) and big data have today become the dominant focal point for businesses across industries. Barely a day passes without new magazine and newspaper articles, blog entries, and tweets about advancements in the field of AI and machine learning. Given that, it's not very surprising that AI and machine learning in the food and beverage industry have played a crucial role in the rapid developments that have taken place over the past few years.

Talk to us to learn how our advanced analytics capabilities combined with proprietary algorithms can support your business initiatives and help you thrive in today's competitive environment.

Benefits of AI and Machine Learning

Want comprehensive solution insights from an expert who decodes data? You're just a click away! Request a FREE demo to discover how our seasoned analytics experts can help you.

As cognitive technologies transform the way people use online services to order food, it becomes imperative for online food delivery companies to comprehend customer needs, identify the dents, and bridge gaps by offering what has been missing in the online food delivery business. The combination of big data, AI, and machine learning is driving real innovation in the food and beverage industry. Such technologies have been proven to deliver fact-based results to online food delivery companies that possess the data and the required analytics expertise.

At Quantzig, we analyze the current business scenario using real-time dashboards to help global enterprises operate more efficiently. Our ability to help performance-driven organizations realize their strategic and operational goals within a short span using data-driven insights has helped us gain a leading edge in the analytics industry. To help businesses ensure business continuity amid the crisis, we've curated a portfolio of advanced COVID-19 impact analytics solutions that not only focus on improving profitability but also help enhance stakeholder value, boost customer satisfaction, and achieve financial objectives.

Request more information to learn more about our analytics capabilities and solution offerings.

About Quantzig

Quantzig is a global analytics and advisory firm with offices in the US, UK, Canada, China, and India. For more than 15 years, we have assisted our clients across the globe with end-to-end data modeling capabilities to leverage analytics for prudent decision making. Today, our firm consists of 120+ clients, including 45 Fortune 500 companies. For more information on our engagement policies and pricing plans, visit: https://www.quantzig.com/request-for-proposal


Facebook, AWS team up to produce open-source PyTorch AI libraries, grad student says he successfully used GPT-2 to write his homework…. – The…

Roundup Hello, El Reg readers. If you're stuck inside and need some AI news to soothe your soul, here's our weekly machine-learning roundup.

Nvidia GTC virtual keynote coming to YouTube: Nvidia cancelled its annual GPU Technology Conference in Silicon Valley in March over the ongoing coronavirus pandemic. The keynote speech was promised to be screened virtually, and then that got canned, too. Now, it's back.

CEO Jensen Huang will present his talk on May 14 on YouTube at 0600 PT (1300 UTC). Yes, that's early for people on the US West Coast. And no, Jensen isn't doing it live at that hour: the video is prerecorded.

Still, graphics hardware and AI fans will probably want to keep an eye on the presentation. Huang is expected to unveil specs for a new GPU architecture, reportedly named the A100, which is expected to be more powerful than Nvidia's Tesla V100 chips. You'll be able to watch the keynote when it comes out on Nvidia's YouTube channel, here.

Also, Nvidia has partnered up with academics at King's College London to release MONAI, an open-source AI framework for medical imaging.

The framework packages together tools to help researchers and medical practitioners process image data for computer vision models built with PyTorch. These include things like segmenting features in 3D scans or classifying objects in 2D.

"Researchers need a flexible, powerful and composable framework that allows them to do innovative medical AI research, while providing the robustness, testing and documentation necessary for safe hospital deployment," said Jorge Cardoso, chief technology officer of the London Medical Imaging & AI Centre for Value-based Healthcare. "Such a tool was missing prior to Project MONAI."

You can play with MONAI on GitHub here, or read more about it here.
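MONAI's own documentation is the authoritative reference; as a hedged sketch of the kind of 3D segmentation setup the framework packages up, a single training step might look like the following (class names and arguments follow recent MONAI releases and may differ across versions):

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# A small 3D U-Net for binary segmentation of volumetric scans.
model = UNet(
    spatial_dims=3,  # 3D volumes; older MONAI versions used `dimensions`
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

volume = torch.randn(1, 1, 96, 96, 96)            # dummy CT/MRI volume
labels = torch.randint(0, 2, (1, 1, 96, 96, 96))  # dummy voxel labels
loss = loss_fn(model(volume), labels)
loss.backward()  # one illustrative training step
```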

New PyTorch libraries for ML production: Speaking of PyTorch, Facebook and AWS have collaborated to release a couple of open-source goodies for deploying machine-learning models.

There are now two new libraries: TorchServe and TorchElastic. TorchServe provides tools to manage and perform inference with PyTorch models. It can be used in any cloud service, and you can find the instructions on how to install and use it here.
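The install instructions linked above are the authoritative guide; as a hedged client-side sketch, once a model archive has been registered and `torchserve` started, inference is a plain HTTP call (the model name and image file here are placeholders, and port 8080 is TorchServe's default inference port):

```python
import requests

# "densenet161" stands in for whatever model name was registered at startup.
with open("kitten.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:8080/predictions/densenet161",
        data=f.read(),
    )
print(resp.json())  # predictions returned by the model's handler
```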

TorchElastic allows users to train large models over a cluster of compute nodes with Kubernetes. The distributed training means that even if some servers go down for maintenance or random network issues, the service isn't completely interrupted. It can be used on any cloud provider that supports Kubernetes. You can read how to use the library here.

"These libraries enable the community to efficiently productionize AI models at scale and push the state of the art on model exploration as model architectures continue to increase in size and complexity," Facebook said this week.

MIT stops working with blacklisted AI company: MIT has discontinued its five-year research collaboration with iFlyTek, a Chinese AI company the US government flagged as being involved in the ongoing persecution of Uyghur Muslims in China.

Academics at the American university made the decision to cut ties with the controversial startup in February. iFlyTek is among 27 other names on the US Bureau of Industry and Security's Entity List, which forbids American organizations from doing business with them without Uncle Sam's permission. Breaking the rules will result in sanctions.

"We take very seriously concerns about national security and economic security threats from China and other countries, and human rights issues," Maria Zuber, vice president of research at MIT, said, Wired first reported.

MIT entered a five-year deal with iFlyTek in 2018 to collaborate on AI research focused on human-computer interaction, speech recognition, and computer vision.

The relationship soured when it was revealed iFlyTek was helping the Chinese government build a mass automated voice recognition and monitoring system, according to the non-profit Human Rights Watch. That technology was sold to police bureaus in the provinces of Xinjiang and Anhui, where the majority of the Uyghur population in China resides.

OpenAI's GPT-2 writes university papers: A cheeky master's degree student admitted this week to using OpenAI's giant language model GPT-2 to help write his essays.

The graduate student, named only as Tiago, was interviewed by Futurism. We're told that although he passed his assignments using the machine-learning software, he said the achievement was down to failings within the business school rather than to the prowess of state-of-the-art AI technology.

In other words, his science homework wasn't too rigorously marked at this particular unnamed school, allowing him to successfully pass off machine-generated write-ups of varying quality as his own work. And GPT-2's output does vary in quality, depending on how you use it.

"You couldn't write an essay on science that could be anywhere near convincing using the methods that I used," he said. "Many of the courses that I take in business school wouldn't make it possible as well.

"However, some particular courses are less information-dense, and so if you can manage to write a few pages with some kind of structure and some kind of argument, you can get through. Its not that great of an achievement, I would say, for GPT-2.

Thanks to the Talk to Transformer tool, anyone can use GPT-2 on a web browser. Tiago would feed opening sentences to the model, and copy and paste the machine-generated responses to put in his essay.

GPT-2 is pretty convincing at first: it has a good grasp of grammar, and there is some level of coherency in its opening paragraphs when responding to a statement or question. Its output quality begins to fall apart, becoming incoherent or absurd, as it rambles in subsequent paragraphs. It also doesn't care about facts, which is why it won't be good as a collaborator for subjects such as history and science.
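Talk to Transformer was a hosted front end, but the same prompt-and-continue loop can be reproduced locally with the open-source GPT-2 weights; here is a minimal sketch using Hugging Face's transformers library (the model size and sampling settings are illustrative, not what Tiago used):

```python
from transformers import pipeline

# The smallest GPT-2 checkpoint; "gpt2-large" or "gpt2-xl" are closer
# to what hosted demos served, at the cost of speed and memory.
generator = pipeline("text-generation", model="gpt2")

prompt = "The role of machine learning in modern business is"
result = generator(prompt, max_length=80, do_sample=True, top_k=50)
print(result[0]["generated_text"])
```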



Machine Learning Engineers Will Not Exist In 10 Years – Machine Learning Times – machine learning & data science news – The Predictive Analytics…

Originally published in Medium, April 28, 2020

The landscape is evolving quickly. Machine Learning will transition to a commonplace part of every Software Engineer's toolkit.

In every field we get specialized roles in the early days, replaced by the commonplace role over time. It seems like this is another case of just that.

Let's unpack.

Machine Learning Engineer as a role is a consequence of the massive hype fueling buzzwords like AI and Data Science in the enterprise. In the early days of Machine Learning, it was a very necessary role. And it commanded a nice little pay bump for many! But Machine Learning Engineer has taken on many different personalities depending on who you ask.

The purists among us say a Machine Learning Engineer is someone who takes models out of the lab and into production. They scale Machine Learning systems, turn reference implementations into production-ready software, and oftentimes cross over into Data Engineering. They're typically strong programmers who also have some fundamental knowledge of the models they work with.

But this sounds a lot like a normal software engineer.

Ask some of the top tech companies what Machine Learning Engineer means to them and you might get 10 different answers from 10 survey participants. This should be unsurprising. This is a relatively young role, and the folks posting these jobs are managers, oftentimes with many decades of experience, who don't have the time (or will) to understand the space.

To continue reading this article, click here.


The Dell EMC PowerEdge R7525 Saved Time During Machine Learning Preparation Tasks and Achieved Faster Image Processing Than a HPE ProLiant DL380…

Principled Technologies (PT) ran analytics and synthetic, containerized workloads on a ~$40K Dell EMC PowerEdge R7525 and a similarly priced HPE ProLiant DL380 Gen10 to gauge performance and performance/cost ratio.

To explore the performance on certain machine learning tasks of a ~$40K Dell EMC PowerEdge R7525 server powered by AMD EPYC 7502 processors, the experts at PT set up two testbeds and compared its performance results to those of a similarly priced HPE ProLiant DL380 Gen10 powered by Intel Xeon Gold 6240 processors.

The first study, Finish machine learning preparation tasks on Kubernetes containers in less time with the Dell EMC PowerEdge R7525, utilizes a workload that emulates simple image processing tasks that a company might run in the preparation phase of machine learning.

According to the first study, we found that the Dell EMC server:
- Processed 3.3 million images in 55.8% less time
- Processed 2.26x the images each second
- Had 2.32x the value in terms of image processing rate vs. hardware cost

The second study, Get better k-means analytics workload performance for your money with the Dell EMC PowerEdge R7525, utilizes a learning algorithm used to mimic data mining that a company might use to improve the customer experience or prevent fraud.

According to the second study, we found that the Dell EMC solution:
- Completed a k-means clustering workload in 40 percent less time
- Processed 67 percent more data per second
- Carried a 74 percent better performance/cost ratio in terms of data processing performance vs. hardware price
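PT's exact harness isn't reproduced in this excerpt; as a rough sketch of the kind of k-means workload being timed, scikit-learn's implementation can be benchmarked like this (the data shape and cluster count are placeholders, not PT's test parameters):

```python
import time
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for the data-mining workload.
rng = np.random.default_rng(0)
X = rng.random((1_000_000, 16))

start = time.perf_counter()
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
elapsed = time.perf_counter() - start
print(f"{X.shape[0] / elapsed:,.0f} rows/sec, inertia={km.inertia_:.1f}")
```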

To explore the results PT found when comparing the two current-gen ~$40K server solutions, read the Kubernetes study here facts.pt/rfcwex2 and the k-means study here facts.pt/0jyo64h.

About Principled Technologies, Inc.
Principled Technologies, Inc. is the leading provider of technology marketing and learning & development services.

Principled Technologies, Inc. is located in Durham, North Carolina, USA. For more information, please visit http://www.principledtechnologies.com.

Company Contact
Principled Technologies, Inc.
1007 Slater Road, Suite #300
Durham, NC 27703
press@principledtechnologies.com


‘Err On The Side Of Patient Care’: Doctors Turn To Untested Machine Learning To Monitor Virus – Kaiser Health News

Physicians are prematurely relying on Epic's deterioration index, saying they're unable to wait for a validation process that can take months to years. The artificial intelligence gives them a snapshot of a patient's illness and helps them determine who needs more careful monitoring. News on technology is from Verily, Google, MIT, Livongo and more, as well.

Stat: AI Used To Predict Covid-19 Patients' Decline Before Proven To Work
Dozens of hospitals across the country are using an artificial intelligence system created by Epic, the big electronic health record vendor, to predict which Covid-19 patients will become critically ill, even as many are struggling to validate the tool's effectiveness on those with the new disease. The rapid uptake of Epic's deterioration index is a sign of the challenges imposed by the pandemic: Normally hospitals would take time to test the tool on hundreds of patients, refine the algorithm underlying it, and then adjust care practices to implement it in their clinics. Covid-19 is not giving them that luxury. (Ross, 4/24)

Modern Healthcare: Verily, Google Cloud Develop COVID-19 Chatbot For Hospitals
Google's sister company Verily Life Sciences has joined the mix of companies offering COVID-19 screening tools that hospitals can add to their websites. The screener, called the COVID-19 Pathfinder, takes the form of a chatbot or voicebot: essentially personified computer programs that can instant-message or speak to human users in plain English. (Cohen, 4/23)

Boston Globe: Tech From MIT May Allow Caregivers To Monitor Coronavirus Patients From A Distance
A product developed at the Massachusetts Institute of Technology is being used to remotely monitor patients with COVID-19, using wireless signals to detect breathing patterns of people who do not require hospitalization but who must be watched closely to ensure their conditions remain stable. The device, developed at MIT's Computer Science and Artificial Intelligence Laboratory by professor Dina Katabi and her colleagues, could in some situations lower the risk of caregivers becoming infected while treating patients with the coronavirus. (Rosen, 4/23)

Stat: A Gulf Emerges In Health Tech: Some Companies Surge, Others Have Layoffs
You might expect them to be pandemic-proof: They're the companies offering glimpses of the future in which you don't have to go to the doctor's office, ones that would seem to be insulated from a crisis in which people aren't leaving their homes. Yet there's a stark divide emerging among the companies providing high-demand virtual health care, triage, and testing services. While some are hiring up and seeing their stock prices soar, others are furloughing and laying off their workers. (Robbins and Brodwin, 4/24)


SAP Makes Support Experience Even Smarter With ML and AI – AiThority

SAP SE announced several updates, including the Schedule a Manager and Ask an Expert Peer services, to its Next-Generation Support approach focused on the customer support experience and enabling customer success. Based on artificial intelligence (AI) and machine learning technologies, SAP has further developed existing functionalities with new, automated capabilities such as the Incident Solution Matching service and automatic translation.

"When it comes to customer support, we've seen great success in flipping the customer engagement model by leveraging AI and machine learning technologies across our product support functionalities and solutions," said Andreas Heckmann, head of Customer Solution Support and Innovation and executive vice president, SAP. "To simplify and enhance the customer experience through our award-winning support channels, we're making huge steps towards our goal of meeting customers' needs by anticipating what they may need before it even occurs."


AI and machine learning technologies are key to improving and simplifying the customer support experience. They continue to play an important role in expanding Next-Generation Support to help SAP deliver maximum business outcomes for customers. SAP has expanded its offerings by adding new features to existing services, for example:


Customers expect their issues to be resolved quickly, and SAP strives toward a consistent line of communication across all support channels, including real-time options. SAP continues to improve, innovate and extend live support for technical issues by connecting directly with customers to provide a personal customer experience. Building on top of live support services, such as Expert Chat and Schedule an Expert, SAP has made significant strides in upgrading its real-time support channels. For example, it now offers the Schedule a Manager service and a peer-to-peer collaboration channel through the Ask an Expert Peer service.

By continuing to invest in AI and machine learning-based technologies, SAP enables more efficient support processes for customers while providing the foundation for predictive support functionalities and superior customer support experiences.

Customers can learn more about the Next-Generation Support approach through the Product Support Accreditation program, available to SAP customers and partners at no additional cost. Customers can be empowered to get the best out of SAP's product support tools and the Next-Generation Support approach.



Microsoft: Our AI can spot security flaws from just the titles of developers’ bug reports – ZDNet

Microsoft has revealed how it's applying machine learning to the challenge of correctly identifying which bug reports are actually security-related.

Its goal is to correctly identify security bugs at scale using a machine-learning model to analyze just the label of bug reports.

According to Microsoft, its 47,000 developers generate about 30,000 bugs a month, but only some of the flaws have security implications that need to be addressed during the development cycle.

Microsoft says its machine-learning model correctly distinguishes between security and non-security bugs 99% of the time. It can also accurately identify critical security bugs 97% of the time.


The model allows Microsoft to label and prioritize bugs without necessarily throwing more human resources at the challenge. Fortunately for Microsoft, it has a trove of 13 million work items and bugs it's collected since 2001 to train its machine-learning model on.

Microsoft used a supervised learning approach to teach a machine-learning model how to classify data from pre-labeled data and then used that model to label data that wasn't already classified.

Importantly, the classifier is able to classify bug reports just from the title of the bug report, allowing it to get around the problem of handling sensitive information within bug reports such as passwords or personal information.

"We train classifiers for the identification of security bug reports (SBRs) based solely on the title of the reports," explain Mayana Pereira, a Microsoft data scientist, and Scott Christiansen from Microsoft's Customer Security and Trust division in a new paper titled Identifying Security Bug Reports Based Solely on Report Titles and Noisy Data.

"To the best of our knowledge this is the first work to do so. Previous works either used the complete bug report or enhanced the bug report with additional complementary features," they write.

"Classifying bugs based solely on the tile is particularly relevant when the complete bug reports cannot be made available due to privacy concerns. For example, it is notorious the case of bug reports that contain passwords and other sensitive data."


Microsoft still relies on security experts who are involved in training, retraining, and evaluating the model, as well as approving training data that its data scientists fed into the machine-learning model.

"By applying machine learning to our data, we accurately classify which work items are security bugs 99% of the time. The model is also 97% accurate at labeling critical and non-critical security bugs. This level of accuracy gives us confidence that we are catching more security vulnerabilities before they are exploited," Pereira and Christiansen said in a blogpost.

Microsoft plans to share its methodology on GitHub in the coming months.


Research Team Uses Machine Learning to Track COVID-19 Spread in Communities and Predict Patient Outcomes – The Ritz Herald

The COVID-19 pandemic is raising critical questions regarding the dynamics of the disease, its risk factors, and the best approach to address it in healthcare systems. MIT Sloan School of Management Prof. Dimitris Bertsimas and nearly two dozen doctoral students are using machine learning and optimization to find answers. Their effort is summarized in the COVIDanalytics platform, where their models are generating accurate real-time insight into the pandemic. The group is focusing on four main directions: predicting disease progression, optimizing resource allocation, uncovering clinically important insights, and assisting in the development of COVID-19 testing.

"The backbone for each of these analytics projects is data, which we've extracted from public registries, clinical Electronic Health Records, as well as over 120 research papers that we compiled in a new database. We're testing our models against incoming data to determine if it makes good predictions, and we continue to add new data and use machine learning to make the models more accurate," says Bertsimas.

The first project addresses dilemmas at the front line, such as the need for more supplies and equipment. Protective gear must go to healthcare workers and ventilators to critically ill patients. The researchers developed an epidemiological model to track the progression of COVID-19 in a community, so hospitals can predict surges and determine how to allocate resources.

The team quickly realized that the dynamics of the pandemic differ from one state to another, creating opportunities to mitigate shortages by pooling some of the ventilator supply across states. Thus, they employed optimization to see how ventilators could be shared among the states and created an interactive application that can help both the federal and state governments.
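The interactive application embodies the team's full formulation; as a toy sketch of the underlying idea, pooling ventilators across states can be framed as a small linear program that ships surplus to shortage at minimum cost (all numbers are invented, and SciPy stands in for whatever solver the team actually used):

```python
import numpy as np
from scipy.optimize import linprog

# cost[i][j]: cost of moving one ventilator from surplus state i
# to shortage state j (made-up figures).
cost = np.array([[1.0, 4.0],
                 [3.0, 1.5]])
surplus = [80, 50]    # spare ventilators available in states A, B
shortage = [60, 40]   # ventilators needed in states C, D

c = cost.ravel()                      # decision vars x_ij >= 0, row-major
A_ub = [[1, 1, 0, 0], [0, 0, 1, 1]]   # each state ships at most its surplus
A_eq = [[1, 0, 1, 0], [0, 1, 0, 1]]   # each shortage must be fully met

res = linprog(c, A_ub=A_ub, b_ub=surplus, A_eq=A_eq, b_eq=shortage)
print(res.x.reshape(2, 2))  # optimal shipment matrix
```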

"Different regions will hit their peak number of cases at different times, meaning their need for supplies will fluctuate over the course of weeks. This model could be helpful in shaping future public policy," notes Bertsimas.

Recently, the researchers connected with long-time collaborators at Hartford HealthCare to deploy the model, helping the network of seven campuses to assess their needs. Coupling county-level data with the patient records, they are rethinking the way resources are allocated across the different clinics to minimize potential shortages.

The third project focuses on building a mortality and disease progression calculator to predict whether someone has the virus and whether they need hospitalization or even more intensive care. Bertsimas points out that current advice for patients is at best based on age, and perhaps some symptoms. As data about individual patients is limited, their model uses machine learning based on symptoms, demographics, comorbidities, and lab test results, as well as a simulation model to generate patient data. Data from new studies is continually added to the model as it becomes available.
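The calculator's actual features and fitted model aren't given in this excerpt; as a hedged illustration of the general pattern, a tabular risk model over symptoms, demographics, and labs could be prototyped like this (every feature and value below is invented):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Invented feature columns: [age, fever, cough, diabetes, CRP level]
X = np.array([
    [72, 1, 1, 1, 110.0],
    [34, 1, 0, 0, 12.0],
    [58, 0, 1, 1, 65.0],
    [25, 0, 0, 0, 5.0],
])
y = np.array([1, 0, 1, 0])  # 1 = went on to need intensive care

model = GradientBoostingClassifier(random_state=0).fit(X, y)
print(model.predict_proba([[67, 1, 1, 0, 90.0]])[:, 1])  # toy risk score
```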

"We started with data published in Wuhan, Italy, and the U.S., including infection and death rate as well as data coming from patients in the ICU and the effects of social isolation. We enriched them with clinical records from a major hospital in Lombardy which was severely impacted by the spread of the virus. Through that process, we created a new model that is quite accurate. Its power comes from its ability to learn from the data," says Bertsimas.

"By probing the severity of the disease in a patient, it can actually guide clinicians in congested areas in a much better way," says Bertsimas.

Their fourth project involves creating a convenient test for COVID-19. Using data from about 100 samples from Morocco, the group is using machine learning to augment a test previously designed at the Mohammed VI Polytechnic University to come up with more precise results. The model can accurately detect the virus in patients around 90% of the time, while false positives are low.

The team is currently working on expanding the epidemiological model to a global scale, creating more accurate and informed clinical risk calculators, and identifying potential ways to go back to normality.

"We have released all our source code and made the public database available for other people too. We will continue to do our own analysis, but if other people have better ideas, we welcome them," says Bertsimas.


New AI improves itself through Darwinian-style evolution – Big Think

Machine learning has fundamentally changed how we engage with technology. Today, it's able to curate social media feeds, recognize complex images, drive cars down the interstate, and even diagnose medical conditions, to name a few tasks.

But while machine learning technology can do some things automatically, it still requires a lot of input from human engineers to set it up, and point it in the right direction. Inevitably, that means human biases and limitations are baked into the technology.

So, what if scientists could minimize their influence on the process by creating a system that generates its own machine-learning algorithms? Could it discover new solutions that humans never considered?

To answer these questions, a team of computer scientists at Google developed a project called AutoML-Zero, which is described in a preprint paper published on arXiv.

"Human-designed components bias the search results in favor of human-designed algorithms, possibly reducing the innovation potential of AutoML," the paper states. "Innovation is also limited by having fewer options: you cannot discover what you cannot search for."

Automatic machine learning (AutoML) is a fast-growing area of deep learning. In simple terms, AutoML seeks to automate the end-to-end process of applying machine learning to real-world problems. Unlike other machine-learning techniques, AutoML requires relatively little human effort, which means companies might soon be able to utilize it without having to hire a team of data scientists.

AutoML-Zero is unique because it uses simple mathematical concepts to generate algorithms "from scratch," as the paper states. Then, it selects the best ones, and mutates them through a process that's similar to Darwinian evolution.

AutoML-Zero first randomly generates 100 candidate algorithms, each of which then performs a task, like recognizing an image. The performance of these algorithms is compared to hand-designed algorithms. AutoML-Zero then selects the top-performing algorithm to be the "parent."

"This parent is then copied and mutated to produce a child algorithm that is added to the population, while the oldest algorithm in the population is removed," the paper states.

The system can create thousands of populations at once, which are mutated through random procedures. Over enough cycles, these self-generated algorithms get better at performing tasks.
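Here is a heavily simplified sketch of that evolutionary loop, with a coefficient vector standing in for an evolved program (AutoML-Zero evolves actual instruction sequences, so this only mirrors the select-copy-mutate-retire mechanics):

```python
import random

TARGET = [0.5, -0.2, 0.8]  # toy "task": recover these coefficients

def random_algorithm():
    # Stand-in for a randomly generated program.
    return [random.uniform(-1, 1) for _ in range(3)]

def fitness(algo):
    return -sum((a - t) ** 2 for a, t in zip(algo, TARGET))

def mutate(algo):
    child = list(algo)
    child[random.randrange(len(child))] += random.gauss(0, 0.1)
    return child

population = [random_algorithm() for _ in range(100)]
for _ in range(5_000):
    parent = max(random.sample(population, 10), key=fitness)  # tournament
    population.append(mutate(parent))  # copy + mutate -> child joins
    population.pop(0)                  # oldest individual is removed

print(max(population, key=fitness))  # best evolved "algorithm"
```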

"The nice thing about this kind of AI is that it can be left to its own devices without any pre-defined parameters, and is able to plug away 24/7 working on developing new algorithms," Ray Walsh, a computer expert and digital researcher at ProPrivacy, told Newsweek.

If computer scientists can scale up this kind of automated machine-learning to complete more complex tasks, it could usher in a new era of machine learning where systems are designed by machines instead of humans. This would likely make it much cheaper to reap the benefits of deep learning, while also leading to novel solutions to real-world problems.

Still, the recent paper was a small-scale proof of concept, and the researchers note that much more research is needed.

"Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent... multiplicative interactions. These results are promising, but there is still much work to be done," the scientists' preprint paper noted.



Create Symbiotic Relationships with AI in Business – ReadWrite

Knowingly or unknowingly we are all using artificial intelligence or AI. There is a combination of always-on devices, cloud and edge computing, and APIs in our everyday lives and business practices bringing AI into practice. Here is how to create symbiotic relationships with AI in business.

Even though the relationship between humans and machines is growing ever closer, it's much too early to describe many of these collaborations as symbiotic.

When humans have specific types of problems, we've built and trained machines to solve those problems.

Examples include machine learning (ML) algorithms that can identify cancer in brain images or determine the best placements and designs for online ads, and deep learning systems that can predict customer churn in business.

At the moment, we can only imagine how much more productive we will become as we form symbiotic relationships with AI. Routine tasks that currently take hours or days could be abbreviated to 10 or 15 minutes with the aid of a digital partner.

From simple exercises like finding a new restaurant to more expert tasks such as cancer detection, we will increasingly rely on machines for everyday tasks. Dependence on machines might begin as a second pair of eyes or a second opinion, but our commitment to machines (and AI) will evolve into full-on digital collaborators.

Machine learning could bring about a revolution in how we solve problems to which the principle of optimal stopping applies.

Research in mathematics and computer science regarding these problems has shown that the optimal time to stop searching and make a decision is after 37% of the time has been spent, the options have been reviewed, or the parking spaces have been passed.

Examples of these sorts of traditional problems include hiring the right person, making the right amount of R&D investment, and buying or selling a home. Humans tend to stop searching and considering data at about 31%, well before they could have found the best option.
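A quick simulation of the classic secretary problem shows where those figures come from: skip an initial fraction of candidates, then take the first one better than everything seen so far (a sketch of the textbook rule, not the cited research code):

```python
import random

def p_best(n=100, skip_fraction=0.37, trials=20_000):
    """Probability of picking the single best candidate under the
    skip-then-commit rule."""
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)  # rank 0 is the best candidate
        cutoff = max(1, int(n * skip_fraction))
        benchmark = min(ranks[:cutoff])
        chosen = next((r for r in ranks[cutoff:] if r < benchmark), ranks[-1])
        wins += chosen == 0
    return wins / trials

print(p_best(skip_fraction=0.37))  # ~0.37, the optimal stopping point
print(p_best(skip_fraction=0.31))  # slightly lower, as noted above
```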

Forming symbiotic relationships with machines will free up time for us to focus on honing soft skills such as empathy, management, and strategy. It is not unreasonable to conclude that this symbiotic relationship will even present a new factor in the simple ability to enjoy life outside of work.

Very soon, AI could help us review enough options to find the right homebuyer, apartment tenant, job applicant, and perhaps even the right spouse.

For businesses and organizations with knowledge work as their output, employees will benefit in several ways by applying machine learning to their advantage. Employees will use applications that cut across a variety of industries.

Some industry-agnostic roles, such as project managers, will be able to offload routine tasks.

Tech will benefit substantially. Similar to how content creators benefit from writing agents such as Grammarly, software developers will benefit from a pair programming agent. The agent will suggest not only the right code syntax, but also the most appropriate framework, library, or API.

These agents will also have the opportunity to improve code quality and user experience drastically.

For industries like construction, AI could take advantage of the increased digitization of blueprints. AI will automate tasks that are routine but critical, such as project estimation. Depending on the size of the project, a human estimator can take up to four weeks to estimate a project.

Effortlessly, a digital agent could determine the materials needed for the project and set the number of workers necessary to staff the project.

More dramatic still, the AI digital agent could be connected to a supply store and incorporate real-time pricing into the final quote.

Medicine is another prime example of an industry ripe for disruption through human-AI symbiosis.

Pharmaceutical companies are leveraging machine learning to determine the optimal levels of research and development, using factors such as projected market size, revenue, and lifetime value of potential drugs.

Many doctors and hospitals have begun to incorporate AI recommendations into their processes. Increasing successes are seen, with 35% of doctors in a 2019 survey stating they use AI in their practices.

Some approaches in medicine have leveraged AI to provide potential options to doctors. Others analyze a doctor's recommendation to predict the probability of success.

The dynamic symbiotic relationship between doctors and AI will also likely alter how malpractice risk is assessed for insurance.

As AI becomes more commonplace in healthcare and is proven to improve outcomes for patients and decrease costs for hospitals, malpractice insurance will evolve to see AI as a way to reduce overall risk.

Similarly, doctors and hospitals that invest in AI solutions will see an improved return on investment in the form of lower insurance costs, improved outcomes, and increased efficiency.

Organizations that want to embrace the advances in AI and ML to produce symbiotic relationships between machines and themselves can take these steps.

The first step is to assess how artificial intelligence stands to impact your business as well as your industry and value chain. Examine whether you can add AI to your services.

Will AI change your product entirely, or can AI open new possibilities for entirely new products and services?

Once you complete your assessment and identify your options, break down your potential financial value to the organization. The assessment will uncover both potential risks you could incur and opportunities for new revenue streams you could open once you achieve AI-human symbiosis.

Every organization needs to learn where its data is stored and used. Proactively make this data available across the organization for experimentation, proofs of concepts, and other innovation projects.

Gain a firm understanding of what data you have and who owns it and share the information across the organization safely and democratically. The open network and feeling you are creating with this action are crucial to enabling machines to work for you, and sowing the seeds of innovation.

Assess your workforce to determine the roles that will most likely benefit from AI and machine learning solutions. The assessments can be divided into varying styles across individual employees or teams. These assessments include:

Data-driven thinkers versus big-picture thinkers.

Strengths in strategy versus problem-solving strengths.

Skill sets in software development versus the risk assessment skill set.

Talent expertise in surgery versus expertise in research and development.

Machines are forging new opportunities for human work throughout the value chain as humans and machines collaborate to create more meaningful human jobs.

An organization must align its approach to building symbiotic relationships with its overarching purpose, and that begins with leadership.

Leaders must excite their workforces about the ultimate goal of integrating AI, provide a clear vision for the organization's goals, and assure their workers that machines will enhance and alter (but not replace) their roles.

It's important to create near- and long-term plans and then share those timelines across the organization, and connect those benchmarks to your greater purpose.

Organizations won't be able to take advantage of the value of these symbiotic relationships without carefully appraising the opportunities and risks.

Businesses must get their data houses in order and encourage innovation that enhances their talent and their organizations purpose. Only then will humans use AI to its full potential.

Image Credit: franck-v, Unsplash

Daniel Williams is a principal with Pariveda Solutions, specializing in digital strategy, implementation, and analytics. With B.S. and M.S. degrees in Computer Science and Technology Management, he has become an expert in digital transformation and AI/ML.


Windows 10 news recap: Halo 2 Anniversary beta invites being sent out, machine learning utilised to identify security bugs, and more – OnMSFT

Welcome back to our Windows 10 news recap, where we go over the top stories of the past week in the world of Microsoft's flagship operating system.

Microsoft to introduce PowerToys launcher for Windows 10 in May

A new report suggests that a new update for PowerToys is being prepared that includes a Mac OS style Spotlight launcher, making it easier to find apps and files on a Windows 10 PC.

[Image: concept design for PowerToys Launcher UX]

Microsoft starts sending invites for first Halo 2 Anniversary beta on PC

Invites for the Halo 2 Anniversary beta on PC have started to be sent out this week. Members of the Halo Insider program who have opted into PC flighting will receive an email with the invite.

Microsoft is using machine learning to identify security bugs during software development

To help identify and resolve security bugs before the public release of software, the company is employing machine learning during development.

That's it for this week. We'll be back next week with more Windows 10 news.


Model quantifies the impact of quarantine measures on Covid-19’s spread – MIT News

The research described in this article has been published on a preprint server but has not yet been peer-reviewed by scientific or medical experts.

Every day for the past few weeks, charts and graphs plotting the projected apex of Covid-19 infections have been splashed across newspapers and cable news. Many of these models have been built using data from studies on previous outbreaks like SARS or MERS. Now, a team of engineers at MIT has developed a model that uses data from the Covid-19 pandemic in conjunction with a neural network to determine the efficacy of quarantine measures and better predict the spread of the virus.

"Our model is the first which uses data from the coronavirus itself and integrates two fields: machine learning and standard epidemiology," explains Raj Dandekar, a PhD candidate studying civil and environmental engineering. Together with George Barbastathis, professor of mechanical engineering, Dandekar has spent the past few months developing the model as part of the final project in class 2.168 (Learning Machines).

Most models used to predict the spread of a disease follow what is known as the SEIR model, which groups people into susceptible, exposed, infected, and recovered. Dandekar and Barbastathis enhanced the SEIR model by training a neural network to capture the number of infected individuals who are under quarantine, and therefore no longer spreading the infection to others.
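As a hedged sketch of the baseline the researchers started from (before the neural-network augmentation), the standard SEIR equations can be integrated with SciPy; the parameters below are illustrative, not the paper's fitted values:

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N             # susceptible become exposed
    dE = beta * S * I / N - sigma * E  # exposed become infectious
    dI = sigma * E - gamma * I         # infectious recover
    dR = gamma * I                     # (the MIT model adds a learned
    return dS, dE, dI, dR              #  quarantine term on top of this)

beta, sigma, gamma = 0.5, 1 / 5.2, 1 / 10  # illustrative rates
y0 = (999_000, 500, 500, 0)                # S, E, I, R at day 0
t = np.linspace(0, 180, 181)
S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma)).T
print(f"peak infections: {I.max():,.0f} on day {I.argmax()}")
```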

The model finds that in places like South Korea, where there was immediate government intervention in implementing strong quarantine measures, the virus spread plateaued more quickly. In places that were slower to implement government interventions, like Italy and the United States, the effective reproduction number of Covid-19 remains greater than one, meaning the virus has continued to spread exponentially.

The machine learning algorithm shows that with the current quarantine measures in place, the plateau for both Italy and the United States will arrive somewhere between April 15-20. This prediction is similar to other projections like that of the Institute for Health Metrics and Evaluation.

"Our model shows that quarantine restrictions are successful in getting the effective reproduction number from larger than one to smaller than one," says Barbastathis. "That corresponds to the point where we can flatten the curve and start seeing fewer infections."

Quantifying the impact of quarantine

In early February, as news of the virus's troubling infection rate started dominating headlines, Barbastathis proposed a project to students in class 2.168. At the end of each semester, students in the class are tasked with developing a physical model for a problem in the real world and developing a machine learning algorithm to address it. He proposed that a team of students work on mapping the spread of what was then simply known as the coronavirus.

"Students jumped at the opportunity to work on the coronavirus, immediately wanting to tackle a topical problem in typical MIT fashion," adds Barbastathis.

One of those students was Dandekar. "The project really interested me because I got to apply this new field of scientific machine learning to a very pressing problem," he says.

As Covid-19 started to spread across the globe, the scope of the project expanded. What had originally started as a project looking just at spread within Wuhan, China, grew to also include the spread in Italy, South Korea, and the United States.

The duo started modeling the spread of the virus in each of these four regions after the 500th case was recorded. That milestone marked a clear delineation in how different governments implemented quarantine orders.

Armed with precise data from each of these countries, the research team took the standard SEIR model and augmented it with a neural network that learns how infected individuals under quarantine impact the rate of infection. They trained the neural network through 500 iterations so it could then teach itself how to predict patterns in the infection spread.

Using this model, the research team was able to draw a direct correlation between quarantine measures and a reduction in the effective reproduction number of the virus.

"The neural network is learning what we are calling the quarantine control strength function," explains Dandekar. In South Korea, where strong measures were implemented quickly, the quarantine control strength function has been effective in reducing the number of new infections. In the United States, where quarantine measures have been slowly rolled out since mid-March, it has been more difficult to stop the spread of the virus.

Predicting the plateau

As the number of cases in a particular country decreases, the forecasting model transitions from an exponential regime to a linear one. Italy began entering this linear regime in early April, with the U.S. not far behind it.

The machine learning algorithm Dandekar and Barbastathis have developed predicted that the United States will start to shift from an exponential regime to a linear regime in the first week of April, with a stagnation in the infected case count likely between April 15 and April 20. It also suggests that the infection count will reach 600,000 in the United States before the rate of infection starts to stagnate.

"This is a really crucial moment of time. If we relax quarantine measures, it could lead to disaster," says Barbastathis.

According to Barbastathis, one only has to look to Singapore to see the dangers that could stem from relaxing quarantine measures too quickly. While the team didn't study Singapore's Covid-19 cases in their research, the second wave of infection this country is currently experiencing reflects their model's finding about the correlation between quarantine measures and infection rate.

"If the U.S. were to follow the same policy of relaxing quarantine measures too soon, we have predicted that the consequences would be far more catastrophic," Barbastathis adds.

The team plans to share the model with other researchers in the hopes that it can help inform Covid-19 quarantine strategies that can successfully slow the rate of infection.


Machine Learning as a Service (MLaaS) Market | Outlook and Opportunities in Grooming Regions with Forecast to 2029 – Jewish Life News

Documenting the industry development of the Machine Learning as a Service (MLaaS) market, this report concentrates on the segments that hold a massive market share in 2020, both in volume and value, with top-country data, manufacturers, suppliers, in-depth research on market dynamics, export analysis, and a forecast to 2029.

As per the report, the Machine Learning as a Service (MLaaS) market is anticipated to gain substantial returns while registering a profitable annual growth rate during the forecast period. The global MLaaS market research report takes a chapter-wise approach in explaining the dynamics and trends in the MLaaS industry. The report also provides the industry's projected growth, with CAGR, in the forecast to 2029.

A deep analysis of the microeconomic and macroeconomic factors affecting the growth of the market is also discussed in this report, along with information related to ongoing demand and supply forecasts. The report offers a wide stage with numerous opportunities for different businesses, firms, associations, and start-ups, and contains authenticated estimates to help them grow globally by competing among themselves and providing better and more agreeable services to their clients. In-depth future innovations of the MLaaS market are covered with a SWOT analysis on the basis of type, application, and region to understand the strengths, weaknesses, opportunities, and threats facing these businesses.

Get a Sample Report for More Insightful Information(Use official eMail ID to Get Higher Priority):https://market.us/report/machine-learning-as-a-service-mlaas-market/request-sample/

***[Note: Our complimentary sample report accommodates a brief introduction to the synopsis, TOC, list of tables and figures, and competitive landscape and geographic segmentation; innovation and future developments based on the research methodology are also included.]

An Evaluation of the Machine Learning as a Service (MLaaS) Market:

The report is a detailed competitive outlook including the Machine Learning as a Service (MLaaS) market updates, future growth, business prospects, forthcoming developments, and future investments in the forecast to 2029. The region-wise analysis of the MLaaS market is done in the report, covering revenue, volume, size, value, and other valuable data. The report mentions a brief overview of the manufacturer base of this industry, which is comprised of companies such as Google, IBM Corporation, Microsoft Corporation, Amazon Web Services, BigML, FICO, Yottamine Analytics, Ersatz Labs, Predictron Labs, H2O.ai, AT&T, and Sift Science.

Segmentation Overview:

Product Type Segmentation:

Software Tools, Cloud and Web-based Application Programming Interface (APIs), Other

Application Segmentation:

Manufacturing, Retail, Healthcare and Life Sciences, Telecom, BFSI, Other (Energy and Utilities, Education, Government)

To know more about how the report uncovers exhaustive insights | Enquire here: https://market.us/report/machine-learning-as-a-service-mlaas-market/#inquiry

Key Highlights of the Machine Learning as a Service (MLaaS) Market:

The fundamental details of the Machine Learning as a Service (MLaaS) industry, such as the product definition, product segmentation, pricing, and demand and supply statistics, are covered in this report.

The comprehensive study of the MLaaS market based on development opportunities, growth-restraining factors, and the probability of investment anticipates the market's growth.

The study of emerging Machine Learning as a Service (MLaaS) market segments and the existing market segments will help readers prepare their marketing strategies.

The study presents the major market drivers that will augment the MLaaS market's commercialization landscape.

The study gives a complete analysis of the propellers that will positively impact the profit matrix of this industry.

The study exhibits information about the pivotal challenges restraining market expansion.

The market review for the global market covers region, share, and size.

The important tactics of top players in the market are outlined.

Other points comprised in the Machine Learning as a Service (MLaaS) report are driving factors, limiting factors, upcoming opportunities, encountered challenges, technological advancements, flourishing segments, and major trends of the market.

Check the Table of Contents of This Report @ https://market.us/report/machine-learning-as-a-service-mlaas-market//#toc

Get in Touch with Us:

Mr. Benni Johnson

Market.us (Powered By Prudour Pvt. Ltd.)

Send Email: [emailprotected]

Address: 420 Lexington Avenue, Suite 300, New York City, NY 10170, United States

Tel: +1 718 618 4351

Website: https://market.us

Our Trending Blog: https://foodnbeveragesmarket.com/

Link:
Machine Learning as a Service (MLaaS) Market | Outlook and Opportunities in Grooming Regions with Forecast to 2029 - Jewish Life News

Podcast of the Week: TWIML AI Podcast – 9to5Mac

During the COVID-19 pandemic, I decided that I wanted to use the time at home to invest in myself. One of the things I was challenged by in a recent episode of Business Casual was when Mark Cuban discussed the role of Artificial Intelligence in the future and recommended some tools to learn more. He mentioned some Coursera courses, so I am currently working my way through some of their AI training, but he also mentioned an AI-focused podcast called the TWIML AI Podcast that I added to my podcast subscription list.

9to5Mac's Podcast of the Week is a weekly recommendation of a podcast you should add to your subscription list.

TWIML (This Week in Machine Learning and AI) is a perfect way to hear from industry experts about how Machine Learning and AI will change our world. I plan to work through the back catalog soon, but the newest episodes have been informative. I particularly enjoyed this episode with Cathy Wu, Gilbert W. Winslow Career Development Assistant Professor in the Department of Civil and Environmental Engineering at MIT, where they discussed simulating the future of traffic.

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. By sharing and amplifying the voices of a broad and diverse spectrum of machine learning and AI researchers, practitioners, and innovators, our programs help make ML and AI more accessible, and enhance the lives of our audience and their communities.

TWIML has its origins in This Week in Machine Learning & AI, a podcast Sam launched in mid-2016 to a small but enthusiastic reception. Fast forward three years, and the TWIML AI Podcast is now a leading voice in the field, with over five million downloads and a large and engaged community following. Our offerings now include online meetups and study groups, conferences, and a variety of educational content.

Subscribe to the TWIML AI Podcast on Apple Podcasts, Spotify, Castro, Overcast, Pocket Casts, and RSS.

Don't forget about the great lineup of podcasts on the 9to5 Network.

Read more here:
Podcast of the Week: TWIML AI Podcast - 9to5Mac

IBM Joins SCTE-ISBE Explorer Initiative To Help Shape Future Of AI And ML – AiThority

IBM has joined the SCTE-ISBE Explorer Initiative as a member of the artificial intelligence (AI) and machine learning (ML) working group. IBM is the first company from outside the cable telecommunications industry to join Explorer.

IBM will collaborate with subject matter experts from across industries to develop AI and ML standards and best practices. By sharing expertise and insights fostered within their organizations, members will help shape the standards that will enable the widespread availability of AI and ML applications.

"Integrating advancements in AI and machine learning with the deployment of agile, open, and secure software-defined networks will help usher in new innovations, many of which will transform the way we connect," said Steve Canepa, global industry managing director, telecommunications, media & entertainment for IBM. "The industry is going through a dramatic transformation as it prepares for a different marketplace with different demands, and we are energized by this collaboration. As the network becomes a cloud platform, it will help drive innovative data-driven services and applications to bring value to both enterprises and consumers."

SCTE-ISBE announced the expansion of its award-winning Standards program in late March 2020 with the introduction of the Explorer Initiative. As part of the initiative seven new working groups will bring together leaders with diverse backgrounds to develop standards for AI and ML, smart cities, aging in place and telehealth, telemedicine, autonomous transport, extended spectrum (up to 3.0 GHz), and human factors affecting network reliability. Explorer working groups were chosen for their potential to impact telecommunications infrastructure, take advantage of the benefits of cable's 10G platform, and improve society's ability to cope with natural disasters and health crises like COVID-19.

"The COVID-19 pandemic has demonstrated the importance of technology and connectivity to modern society and by many accounts, increased the speed of digital transformation across industries," said Chris Bastian, SCTE-ISBE senior vice president and CTIO. "Explorer will help us turn innovative concepts into reality by giving industry leaders the opportunity to learn from each other, reduce development costs, ensure their connectivity needs are met, and ultimately get to market faster."

Here is the original post:
IBM Joins SCTE-ISBE Explorer Initiative To Help Shape Future Of AI And ML - AiThority

My Invisalign app uses machine learning and facial recognition to sell the benefits of dental work – TechRepublic

Align Technology uses DevSecOps tactics to keep complex projects on track and align business and IT goals.

Align Technology's Chief Digital Officer Sreelakshmi Kolli is using machine learning and DevOps tactics to power the company's digital transformation.

Kolli led the cross-functional team that developed the latest version of the company's My Invisalign app. The app combines several technologies into one product including virtual reality, facial recognition, and machine learning. Kolli said that using a DevOps approach helped to keep this complex work on track.

"The feasibility and proof of concept phase gives us an understanding of how the technology drives revenue and/or customer experience," she said. "Modular architecture and microservices allows incremental feature delivery that reduces risk and allows for continuous delivery of innovation."

The customer-facing app accomplishes several goals at once, the company said.

More than 7.5 million people have used the clear plastic molds to straighten their teeth, the company said. Align Technology has used data from these patients to train a machine learning algorithm that powers the visualization feature in the mobile app. The SmileView feature uses machine learning to predict what a person's smile will look like when the braces come off.
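What such a pipeline might look like in miniature: the sketch below is purely hypothetical (Align has not published its method here) and uses scikit-learn to map pre-treatment facial-landmark features to predicted post-treatment ones, the general shape of a supervised prediction task like SmileView.

# Hypothetical sketch only -- not Align's actual pipeline.
# A regressor learns to map pre-treatment smile features to
# post-treatment ones from historical patient pairs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 16))             # stand-in pre-treatment landmark features
y = X + 0.1 * rng.random((500, 16))   # stand-in post-treatment landmarks

model = RandomForestRegressor(n_estimators=50).fit(X, y)
predicted_smile = model.predict(X[:1])   # predict a post-braces smile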

Kolli started with Align Technology as a software engineer in 2003. Now she leads an integrated software engineering group focused on product technology strategy and development of global consumer, customer and enterprise applications and infrastructure. This includes end user and cloud computing, voice and data networks and storage. She also led the company's global business transformation initiative to deliver platforms to support customer experience and to simplify business processes.

Kolli used the development process of the My Invisalign app as an opportunity to move the dev team to DevSecOps practices. Kolli said that this shift represents a cultural change, and making the transition requires a common understanding among all teams on what the approach means to the engineering lifecycle.

"Teams can make small incremental changes to get on the DevSecOps journey (instead of a large transformation initiative)," she said. "Investing in automation is also a must for continuous integration, continuous testing, continuous code analysis and vulnerability scans." To build the machine learning expertise required to improve and support the My Invisalign app, she has hired team members with that skill set and built up expertise internally.

"We continue to integrate data science to all applications to deliver great visualization experiences and quality outcomes," she said.

Align Technology uses AWS to run its workloads.

In addition to keeping patients connected with orthodontists, the My Invisalign app is a marketing tool to convince families to opt for the transparent but expensive alternative to metal braces.

Kolli said that IT leaders should work closely with business leaders to make sure initiatives support business goals such as revenue growth, improved customer experience, or operational efficiencies, and modernize the IT operation as well.

"Making the line of connection between the technology tasks and agility to go to market helps build shared accountability to keep technical debt in control," she said.

Align Technology released the revamped app in late 2019. In May of this year, the company released a digital tool for doctors that combines a photo of the patient's face with their 3D Invisalign treatment plan.

This ClinCheck "In-Face" Visualization is designed to help doctors manage patient treatment plans.

The visualization workflow combines three components of Align's digital treatment platform: Invisalign Photo Uploader for patient photos, the iTero intraoral scanner to capture data needed for the 3D model of the patient's teeth, and ClinCheck Pro 6.0. ClinCheck Pro 6.0 allows doctors to modify treatment plans through 3D controls.

"These new product releases are the first in a series of innovations to reimagine the digital treatment planning process for doctors," Raj Pudipeddi, Align's chief innovation, product, and marketing officer and senior vice president, said in a press release about the product.

Read more from the original source:
My Invisalign app uses machine learning and facial recognition to sell the benefits of dental work - TechRepublic

Big data and machine learning are growing at massive rates. This training explains why – The Next Web

TLDR: The Complete 2020 Big Data and Machine Learning Bundle breaks down understanding and getting started in two of the tech era's biggest new growth sectors.

It's instructive to know just how big Big Data really is. And the reality is that it's now so big that the word big doesn't even effectively do it justice anymore. Right now, humankind is creating 2.5 quintillion bytes of data every day. And it's growing exponentially, with 90 percent of all data created in just the past two years. By 2023, the big data industry will be worth about $77 billion, and that's despite the fact that unstructured data is identified as a problem by 95 percent of all businesses.

Meanwhile, data analysis is also the backbone of other emerging fields, like the explosion of machine learning projects that have companies like Apple scooping up machine learning upstarts.

The bottom line is that if you understand Big Data, you can effectively write your own ticket salary-wise. You can jump into this fascinating field the right way with the training in The Complete 2020 Big Data and Machine Learning Bundle, on sale now for $39.90, over 90 percent off, from TNW Deals.

This collection includes 10 courses featuring 68 hours of instruction covering the basics of big data, the tools data analysts need to know, how machines are being taught to think for themselves, and the career applications for learning all this cutting-edge technology.

Everything starts with getting a handle on how data scientists corral mountains of raw information. Six of these courses focus on big data training, including close exploration of the essential industry-leading tools that make it possible. If you don't know what Hadoop, Scala, or Elasticsearch do, or that Spark Streaming is a quickly developing technology for processing mass data sets in real-time, you will after these courses.

Meanwhile, the remaining four courses center on machine learning, starting with a Machine Learning for Absolute Beginners Level 1 course that helps first-timers get a grasp on the foundations of machine learning, artificial intelligence and deep learning. Students also learn about the Python coding language's role in machine learning as well as how tools like TensorFlow and Keras impact that learning.
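As an illustration of the level involved, here is a minimal sketch of the sort of first exercise a beginner machine learning course typically assigns; the bundle's actual exercises are not described in the article, so this is an assumption. It builds a small Keras classifier for handwritten digits.

# A beginner-level Keras exercise: classify the MNIST digits.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))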

Valued at almost $1,300, this training package lets you start turning Big Data and machine learning into a career for just $39.90.

Prices are subject to change.

See the article here:
Big data and machine learning are growing at massive rates. This training explains why - The Next Web

The cost of training machines is becoming a problem – The Economist

Jun 11th 2020

THE FUNDAMENTAL assumption of the computing industry is that number-crunching gets cheaper all the time. Moore's law, the industry's master metronome, predicts that the number of components that can be squeezed onto a microchip of a given size (and thus, loosely, the amount of computational power available at a given cost) doubles every two years.

"For many comparatively simple AI applications, that means that the cost of training a computer is falling," says Christopher Manning, the director of Stanford University's AI Lab. But that is not true everywhere. A combination of ballooning complexity and competition means costs at the cutting edge are rising sharply.

Dr Manning gives the example of BERT, an AI language model built by Google in 2018 and used in the firm's search engine. It had more than 350m internal parameters and a prodigious appetite for data. It was trained using 3.3bn words of text culled mostly from Wikipedia, an online encyclopedia. These days, says Dr Manning, Wikipedia is not such a large data-set. "If you can train a system on 30bn words it's going to perform better than one trained on 3bn." And more data means more computing power to crunch it all.
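To get a feel for that scale, the sketch below loads a pretrained BERT and counts its weights. It assumes the open-source Hugging Face transformers library, which distributes pretrained BERT checkpoints; the article does not say how Google's internal version is accessed.

# Count the parameters of a pretrained BERT (assumes the
# "transformers" and "torch" packages are installed).
from transformers import BertModel

model = BertModel.from_pretrained("bert-large-uncased")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")   # on the order of 340m for the large variant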

OpenAI, a research firm based in California, says demand for processing power took off in 2012, as excitement around machine learning was starting to build. It has accelerated sharply. By 2018, the computer power used to train big models had risen 300,000-fold, and was doubling every three and a half months (see chart). It should know: to train its own OpenAI Five system, designed to beat humans at Defense of the Ancients 2, a popular video game, it scaled machine learning to unprecedented levels, running thousands of chips non-stop for more than ten months.
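Those two figures are roughly consistent with each other, as a quick back-of-the-envelope check shows:

# A 300,000-fold rise implies log2(300,000) ~= 18.2 doublings;
# at one doubling every 3.5 months that is about 64 months,
# close to the 2012-2018 window the article describes.
import math
doublings = math.log2(300_000)
print(doublings * 3.5 / 12)   # ~5.3 years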

Exact figures on how much this all costs are scarce. But a paper published in 2019 by researchers at the University of Massachusetts Amherst estimated that training one version of Transformer, another big language model, could cost as much as $3m. Jerome Pesenti, Facebook's head of AI, says that one round of training for the biggest models can cost millions of dollars in electricity consumption.

Facebook, which turned a profit of $18.5bn in 2019, can afford those bills. Those less flush with cash are feeling the pinch. Andreessen Horowitz, an influential American venture-capital firm, has pointed out that many AI startups rent their processing power from cloud-computing firms like Amazon and Microsoft. The resulting bills (sometimes 25% of revenue or more) are one reason, it says, that AI startups may make for less attractive investments than old-style software companies. In March Dr Manning's colleagues at Stanford, including Fei-Fei Li, an AI luminary, launched the National Research Cloud, a cloud-computing initiative to help American AI researchers keep up with spiralling bills.

The growing demand for computing power has fuelled a boom in chip design and specialised devices that can perform the calculations used in AI efficiently. The first wave of specialist chips were graphics processing units (GPUs), designed in the 1990s to boost video-game graphics. As luck would have it, GPUs are also fairly well-suited to the sort of mathematics found in AI.

Further specialisation is possible, and companies are piling in to provide it. In December, Intel, a giant chipmaker, bought Habana Labs, an Israeli firm, for $2bn. Graphcore, a British firm founded in 2016, was valued at $2bn in 2019. Incumbents such as Nvidia, the biggest GPU-maker, have reworked their designs to accommodate AI. Google has designed its own tensor-processing unit (TPU) chips in-house. Baidu, a Chinese tech giant, has done the same with its own Kunlun chips. Alfonso Marone at KPMG reckons the market for specialised AI chips is already worth around $10bn, and could reach $80bn by 2025.

"Computer architectures need to follow the structure of the data they're processing," says Nigel Toon, one of Graphcore's co-founders. The most basic feature of AI workloads is that they are "embarrassingly parallel", which means they can be cut into thousands of chunks which can all be worked on at the same time. Graphcore's chips, for instance, have more than 1,200 individual number-crunching cores, and can be linked together to provide still more power. Cerebras, a Californian startup, has taken an extreme approach. Chips are usually made in batches, with dozens or hundreds etched onto standard silicon wafers 300mm in diameter. Each of Cerebras's chips takes up an entire wafer by itself. That lets the firm cram 400,000 cores onto each.
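In miniature, an embarrassingly parallel workload looks like the sketch below: the job is cut into independent chunks that run simultaneously, with no chunk waiting on another. (Illustrative only; real AI accelerators parallelise across thousands of on-chip cores rather than operating-system processes.)

# Embarrassingly parallel work: independent chunks, no coordination.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    return chunk @ chunk.T   # each chunk is processed on its own

if __name__ == "__main__":
    data = np.random.rand(8, 256, 256)        # eight independent chunks
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(work, data))  # all chunks run concurrently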

Other optimisations are important, too. Andrew Feldman, one of Cerebras's founders, points out that AI models spend a lot of their time multiplying numbers by zero. Since those calculations always yield zero, each one is unnecessary, and Cerebras's chips are designed to avoid performing them. Unlike many tasks, says Mr Toon at Graphcore, ultra-precise calculations are not needed in AI. That means chip designers can save energy by reducing the fidelity of the numbers their creations are juggling. (Exactly how fuzzy the calculations can get remains an open question.)
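Both ideas can be shown in a few lines. The sketch below is illustrative only, not Cerebras's or Graphcore's actual implementation: it skips multiplications by zero, then repeats a dot product at reduced precision.

# 1) Skip multiplications by zero: only non-zero inputs contribute.
import numpy as np

x = np.array([0.0, 1.5, 0.0, 2.0])
w = np.array([0.3, 0.7, 0.9, 0.1])

nz = x != 0
dot = np.dot(x[nz], w[nz])   # same answer, half the multiplies here

# 2) Reduce numeric fidelity: half-precision floats use less energy
#    and memory, at the cost of some accuracy.
dot_fp16 = np.dot(x.astype(np.float16), w.astype(np.float16))
print(dot, dot_fp16)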

All that can add up to big gains. Mr Toon reckons that Graphcore's current chips are anywhere between ten and 50 times more efficient than GPUs. They have already found their way into specialised computers sold by Dell, as well as into Azure, Microsoft's cloud-computing service. Cerebras has delivered equipment to two big American government laboratories.

Moore's law isn't possible any more

Such innovations will be increasingly important, for the AI-fuelled explosion in demand for computer power comes just as Moore's law is running out of steam. Shrinking chips is getting harder, and the benefits of doing so are not what they were. Last year Jensen Huang, Nvidia's founder, opined bluntly that "Moore's law isn't possible any more".

Other researchers are therefore looking at more exotic ideas. One is quantum computing, which uses the counter-intuitive properties of quantum mechanics to provide big speed-ups for some sorts of computation. One way to think about machine learning is as an optimisation problem, in which a computer adjusts millions of variables, searching for trade-offs that minimise a measure of error. A quantum-computing technique called Grover's algorithm offers big potential speed-ups, says Krysta Svore, who leads the Quantum Architectures and Computation Group at Microsoft Research.
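A toy version of that optimisation framing, with a single variable standing in for the millions in a real model, minimised by plain gradient descent:

# Machine learning as optimisation: nudge a parameter downhill
# until the loss (here, squared error with a minimum at 3) is small.
def loss(theta):
    return (theta - 3.0) ** 2

def grad(theta):
    return 2.0 * (theta - 3.0)

theta = 0.0
for _ in range(100):
    theta -= 0.1 * grad(theta)   # one gradient-descent step
print(theta, loss(theta))        # theta converges near 3, loss near 0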

Another idea is to take inspiration from biology, which proves that current brute-force approaches are not the only way. Cerebras's chips consume around 15kW when running flat-out, enough to power dozens of houses (an equivalent number of GPUs consumes many times more). A human brain, by contrast, uses about 20W of energy, about a thousandth as much, and is in many ways cleverer than its silicon counterpart. Firms such as Intel and IBM are therefore investigating neuromorphic chips, which contain components designed to mimic more closely the electrical behaviour of the neurons that make up biological brains.

For now, though, all that is far off. Quantum computers are relatively well-understood in theory, but despite billions of dollars in funding from tech giants such as Google, Microsoft and IBM, actually building them remains an engineering challenge. Neuromorphic chips have been built with existing technologies, but their designers are hamstrung by the fact that neuroscientists still do not understand what exactly brains do, or how they do it.

That means that, for the foreseeable future, AI researchers will have to squeeze every drop of performance from existing computing technologies. Mr Toon is bullish, arguing that there are plenty of gains to be had from more specialised hardware and from tweaking existing software to run faster. To quantify the nascent field's progress, he offers an analogy with video games: "We're past Pong," he says. "We're maybe at Pac-Man by now." All those without millions to spend will be hoping he is right.

This article appeared in the Technology Quarterly section of the print edition under the headline "Machine, learning"

Read more here:
The cost of training machines is becoming a problem - The Economist