The remaking of war; Part 2: Machine-learning set to usher in a whole new era of intelligent warfare – Firstpost

Editor's note: This is the second part of a series on the evolution of war and warfare across decades. Over the course of these articles, the relationship between technology, politics and war will be put under the magnifying glass.

Effective war-fighting demands that militaries should be able to peek into the future. As such, victory in battle is often linked with clairvoyance.

Let me explain. Suppose you are leading a convoy in battle and expect to meet resistance at some point soon. If you could see precisely where and when this is going to happen, you could call in an airstrike to vastly diminish the enemy's forces, thereby increasing your chances of victory when you finally meet them.

While modern satellites and sensors linked with battle units provide such a capability (first demonstrated by the US military to striking effect in the 1991 Gulf War), the quest for it has been around as long as wars, which is to say, forever. Watchtowers on castles manned by sentries, for example, are also sensors, albeit highly imperfect ones. They sought to render the battlefield "transparent", to use modern terminology, in the face of enemy cavalry charges.

At the heart of this quest for battlefield transparency lies intelligence, the first key attribute of warfare. Our colloquial understanding of the word and its use in the context of war can appear disconnected, but the two are not. If "intelligence refers to an individual's or entity's ability to make sense of the environment", as security-studies scholar Robert Jervis defined it, then intelligent behaviour in war and in everyday life are identical. It is, to continue the Jervis quote, the consequent ability "to understand the capabilities and intentions of others and the vulnerabilities and opportunities that result". The demands of modern warfare require that militaries augment this ability using a wide array of technologies.

The goal of intelligent warfare is very simple: see the enemy and prepare (that is, observe and orient) and feed this information to the war fighters (who then decide what to do, and finally act through the deployment of firepower). This cycle, endlessly repeated across many weapon systems, is the famous OODA loop pioneered by the maverick American fighter pilot, progenitor of the F-16 jet, and military theorist John Boyd beginning in 1987. It is an elegant reimagination of war. As one scholar of Boyd's theory put it, "war can be construed as a collision of organisations going through their respective OODA loops".

To wit, the faster you can complete these loops perfectly (while your enemy tries to stop you from doing so as it runs its own OODA loops), the better off you are in battle. Modern militaries seek this advantage by gathering as much information as possible about the enemy's forces and disposition, through space-based satellites, electronic and acoustic sensors and, increasingly, unmanned drones. Put simply, the basic idea is to have a rich 'information web' in the form of battlefield networks that link war fighters with machines that help identify their targets ahead. A good network mediated by fast communication channels shrinks time, as it were, by bringing future enemy action closer.
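The premium on loop speed can be made concrete with a toy calculation (all timings below are hypothetical, invented for illustration): two forces cycle through observe, orient, decide and act, and the one with the shorter loop simply completes more decision cycles, and hence gets more chances to act first, within the same engagement window.

```python
def ooda_duration(observe, orient, decide, act):
    """Time for one full pass through the OODA loop (arbitrary units)."""
    return observe + orient + decide + act

# Hypothetical timings: a well-networked force ("blue") completes each
# phase faster than an adversary ("red") with degraded sensors and comms.
blue = ooda_duration(observe=1.0, orient=1.0, decide=0.5, act=0.5)  # 3.0
red = ooda_duration(observe=2.0, orient=2.0, decide=1.0, act=1.0)   # 6.0

# Over a fixed engagement window, the faster loop yields twice as many
# completed decision cycles, i.e. twice as many opportunities to act first.
window = 30.0
blue_cycles = int(window // blue)
red_cycles = int(window // red)
print(blue_cycles, red_cycles)  # 10 5
```

The numbers are arbitrary; the point is the structure: halving loop time doubles the number of completed cycles, which is exactly the advantage Boyd's theory describes.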


The modern search for the decisive advantage that secret information about enemy forces often brings came to the fore with the Cold War, driven by the fear of nuclear annihilation at the hands of the enemy. In the mid-1950s, the United States Central Intelligence Agency's U-2 spy planes flew over large swathes of Soviet territory in order to assess enemy capabilities; its Corona satellite programme, launched at roughly the same time, marked the beginning of space-based reconnaissance. Both were among the most closely guarded secrets of the early Cold War.

But the United States also used other, more exotic methods to keep an eye on Soviet facilities and gain the upper hand, should war break out. For example, it sought to detect enemy radar facilities by looking for the faint radio waves they bounced off the moon.

The problem with having (sophisticated) cameras alone as sensors (as was the case with the U-2 planes as well as the Corona satellite) is that one is at the mercy of weather conditions such as cloud cover over the area of interest. Contemporary airborne or space-based radars, which build composite images of the ground using pulses of radio waves, overcome this problem: in general, radar performance does not depend on the weather, a famous claim to the contrary notwithstanding. That said, these 'synthetic aperture radars' (SAR) are often unable to pick up very fine-resolution details, unlike optical cameras.

The use of sensors is hardly limited to land warfare. Increasingly, underwater 'nets' of sensors are being conceived to detect enemy ships. It is speculated that China has already made considerable progress in this direction, by deploying underwater gliders that can transmit their detections to other military units in real time. The People's Liberation Army has also sought to use space-based LIDARs (radar-like instruments which use pulsed lasers instead of radio waves) to detect submarines 1,600 feet below the water surface.

Means of detection, of course, are a small (but significant) part of the solution in battlefield transparency. A large part of one's ability to wage intelligent war depends on the ability to integrate the acquired information with battle units and weapon systems for final decision and action. But remember: the first thing your enemy is likely to do is prevent you from doing so, by jamming electronic communications or even targeting your communications satellites with a missile of the kind India tested last March. In a future war, major militaries will operate in such contested environments, where a major goal of the adversary will be to disrupt the flow of information.

Artificial intelligence (AI) may eventually come to the rescue of OODA loops, but in a manner whose political and ethical costs are still unknown. Note that AI too obeys the definition Jervis set for intelligence, the holy grail being the design of all-purpose computers that can learn about the environment on their own and make decisions autonomously based on circumstances.

Such computers are still some way in the future. What we do have is a narrower form of AI, in which algorithms deployed on large computers learn certain tasks by teaching themselves from human-supplied data. These machine-learning algorithms have made stupendous progress in recent years. In 2016, Google's AlphaGo (a machine-learning program) defeated a reigning world champion of Go, a notoriously difficult East Asian board game, setting a new benchmark for AI.

Programmes like AlphaGo are designed after how networks of neurons in the human brain (in particular, in the part of the brain responsible for processing visual images) are arranged and known to function biologically. It is therefore no surprise that the problem of image recognition has served as a benchmark of sorts for such programmes.

Recall that militaries are naturally interested not only in gathering images of adversary forces but also in recognising what they see in them, a challenge with often-grainy SAR images, for example. (In fact, the simplest of machine-learning algorithms modelled on neurons, the Perceptron, was invented by Frank Rosenblatt in 1958 with US Navy funds.) While machine-learning programmes have until now made breakthroughs mainly with optical images (last year, in a demonstration by private defence giant Lockheed Martin, one such algorithm scanned the entire American state of Pennsylvania and correctly identified all petroleum fracking sites), radar images are not out of sight.
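Rosenblatt's Perceptron is simple enough to sketch in a few lines. The toy task below, learning the logical AND function as a stand-in for any linearly separable classification, is invented for illustration and is obviously far removed from SAR imagery, but the update rule is the historical one: nudge the weights whenever the prediction is wrong.

```python
# A minimal Rosenblatt-style perceptron on a toy, linearly separable
# problem (logical AND). Hypothetical example, for illustration only.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, target in samples:
            # Threshold activation: fire if the weighted sum is positive.
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            # Perceptron rule: adjust weights in proportion to the error.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

A single perceptron can only draw one straight separating line, which is why modern image recognisers stack many such units into deep networks.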

Should AI programmes become able to process images from all wavelengths, one way to bypass the 'contested environment' problem is to let weapons armed with them observe, orient, decide and act, all without the need for humans. In a seminal book on lethal autonomous weapons, American defence strategist Paul Scharre describes this as taking people off the OODA loop. As he notes, while the United States officially does not subscribe to the idea of weapons deciding what to hit, research agencies in that country have continued to make significant progress on automated target recognition.

Other forces have not been as circumspect about deploying weapon systems without humans playing a significant role in OODA loops. The Russian military has repeatedly claimed that it has the ability to deploy AI-based nuclear weapons. This has been interpreted to include nuclear-armed cruise missiles moving at more than five times the speed of sound.

How can India potentially leverage such intelligent weapons? Consider the issue of a nuclear counterforce strike against Pakistan where New Delhi destroys Rawalpindi's nukes before they can be used against Indian targets. While India's plans to do so are a subject of considerable analytical debate, one can perhaps wildly speculate about the following scenario.

Given Pakistan's mountainous topography, including the Northern Highlands and the Balochistan plateau, it is quite likely that it will seek to conceal its nuclear weapons there, inside cave-like structures or hardened silos, at sites that are otherwise very hard to recognise. Machine-learning programmes dedicated to the task of image recognition from satellite surveillance data could improve India's ability to identify many more such sites than is currently possible. This ability, coupled with precision-strike missiles, would vastly improve India's counterforce posture should it officially adopt one.

All this is not to say that the era of omniscient intelligent weapons is firmly upon us. Machine-learning algorithms for pattern recognition are in many cases still works in progress, and far from fool-proof. (For example, one such programme had considerable difficulty telling the difference between a turtle and a rifle.) But if current trends in the evolution of machine learning continue, a whole new era of intelligent warfare may not be far.

Read the first part of the series here: Risk of 19th Century international politics being pursued using 21st Century military means looms large



All the companies from Y Combinator's W20 Demo Day, Part III: Hardware, Robots, AI and Developer Tools – TechCrunch

Y Combinator's Demo Day was a bit different this time around.

As concerns grew over the spread of COVID-19, Y Combinator shifted the event format away from the two-day gathering in San Francisco we've gotten used to, instead opting to have its entire class debut to invited investors and media via YC's Demo Day website.

In a bit of a surprise twist, YC also moved Demo Day forward one week, citing accelerated pacing from investors. Alas, this meant switching up its plan for each company to have a recorded pitch on the Demo Day website; instead, each company pitched via slides, a few paragraphs outlining what they're doing and the traction they're seeing, and team bios. It's unclear so far how this new format, in combination with the rapidly evolving investment climate, will impact this class.

As we do with each class, we've collected our notes on each company based on information gathered from their pitches, websites and, in some cases, our earlier coverage of them.

To make things a bit easier to read, we've split things up by category rather than have it be one huge wall of text. These are the companies that are working on hardware, robotics, AI, machine learning or tools for developers. You can find the other categories (such as biotech, consumer and fintech) here.

Datasaur: A tool meant to help humans label machine-learning data sets more accurately and efficiently through things like auto-correct, auto-suggest and keyboard hotkeys. It's free for individual labelers and $100 per month for teams of up to 20 labelers, with custom pricing for larger teams.

1build: Automatic, data-driven job cost estimates for construction companies. You upload your plans, and 1build says it can prepare accurate bids in minutes. The company projects a revenue run rate of over $600,000, and says it has completed estimates for mega companies like Amazon, Starbucks and 7-Eleven.

Handl: An API for turning paper documents (including handwritten ones) into structured data ready to be plunked into a database or CRM. While the company says that around 85% of its processing is handled by its AI, it's backed by humans who validate data when the AI's confidence is low. Nine months after launch, the company is seeing an ARR of $0.9 million.

Zumo Labs: Uses game engines to generate pre-labeled training data for computer vision systems. By synthesizing the data rather than collecting it from photos/videos of the real world, the company says it can create massive data sets faster, cheaper and without privacy issues.

Teleo: Retrofits existing construction equipment to allow operators to control it remotely. The company says it has built a fully functional teleoperated loader since being founded three months ago, and plans to charge construction companies a flat monthly fee per vehicle. The company's co-founders were previously head of Hardware Engineering and director of Product Management at Lyft, with both having worked on Google's Street View team.

Menten AI: Menten AI says it's using quantum computing and machine learning, combined with synthetic biology, to design new protein-based drugs.

Turing Labs Inc.: Automated, simulated testing of different formulas for consumer goods like soaps and deodorants. Home products and cosmetics can take months of work for R&D labs. Turing has built an AI engine that helps with this process (much like the AI engines used in drug discovery), cutting the time down to days. It's already working with some of the biggest CPG companies in the world. You can find our previous coverage of Turing here.

Segmed: Segmed is building data sets for AI-driven medical research. Rather than requiring each and every researcher to individually partner with hospitals and imaging facilities, Segmed partners with these organizations (currently over 50) and standardizes, labels and anonymizes the data.

Ardis AI: Ardis AI wants to build the foundation of artificial general intelligence: technology that reads and comprehends text like a human. By combining neural networks, symbolic reasoning and new natural language processing techniques, Ardis AI can serve companies that don't want to hire teams to do data extraction and labeling.

Agnoris: Agnoris analyzes a restaurant's point-of-sale data to recommend changes to pricing, delivery menus and staffing. For $3,600 per year per restaurant location, Agnoris claims to be able to raise profits by 20%. The company started after the founder opened a restaurant that was packed yet losing money, built machine-learning tools to improve its margins, and is now selling that software to all eateries.

Froglabs: Froglabs provides weather-forecasting AI to businesses for predicting solar and wind energy production, delivery delays, staffing shortages, sales demand and food availability. By ingesting petabytes of weather data, it can save companies money by ensuring their logistics aren't disrupted. Founded by a long-time Googler who started its Project Loon internet-beaming weather balloons, it's now signing up e-commerce, retail, rideshare, restaurant and event businesses.

PillarPlus: PillarPlus is a platform that automates the blueprint-designing phase of a building project. It takes a design from an architect or contractor and maps out mechanical, fire, electrical and plumbing details, and estimates the bill of materials and project cost, steps that otherwise take months of work.

Glisten: Glisten uses computer vision and machine learning technologies to develop better, more consistent data sets for e-commerce companies. Its first product is an AI-based tool to populate and enrich sparse product data. Find our previous coverage of Glisten here.

nextmv: Nextmv gives its customers the ability to create their own logistics algorithms automatically, allowing businesses to optimize fleets and manage routes internally.

Visual One: Movement-detecting security cameras can bring up a lot of false positives: there's motion, yes, but not necessarily anything harmful. Visual One has built an AI platform that integrates with home security cameras to read the specific movements they detect. Owners can create customised alerts so they get notifications only for what they care about. The company's software can check for furniture-destroying pets, package-lifting thieves, the death-defying antics of toddlers and more. Find our previous coverage of Visual One here.

PostEra: Medicinal chemistry-as-a-service is the idea here: PostEra's platform can design and synthesize molecules faster and at a lower cost than the typical R&D lab, speeding up the time it takes to test new combinations in the drug discovery process.

Cyberdontics: Robotics has already revolutionized surgery, courtesy of companies like da Vinci maker Intuitive. Cyberdontics aims to do the same for oral surgery, beginning with crowns, one of the more expensive and time-intensive procedures. The company says its robot is capable of performing the generally two-hour procedure in 15 minutes, charging a mere $140 for the job.

Avion: Focused on inhabitants of hard-to-reach areas in Africa, Avion is building a drone-based delivery system. The plans consist of medium- and long-range medical drones tied to a centralized hub. The drones are hybrid and autonomous with vertical take-off capabilities, able to carry 5-kg payloads as far as 150 km.

SOMATIC: Industrial bathroom cleaning is a prime dull/dirty candidate to be replaced by automation. Somatic builds large robots that are trained via VR to clean restrooms. The system sprays and wipes down surfaces and is capable of opening doors and riding elevators up and down. Find our previous coverage of SOMATIC here.

RoboTire: Anyone who's ever sat in a service shop waiting room knows how time-intensive the process can be. RoboTire promises to cut the wait time for a set of four tires from 60 minutes down to 10. The company has begun piloting the technology in locations around the U.S. Find our previous coverage of RoboTire here.

Morphle: Designed to replace outdated analog microscopes, Morphle's system uses robotic automation to improve imaging. The startup produces higher-resolution images than far pricier systems, with a much smaller failure rate. Morphle has begun selling its system to labs in India.

Daedalus: Founded by an early engineer at OpenAI, Daedalus is building autonomous software to allow industrial robots to operate without human programming, beginning with CNC machines. The company projects that it can improve productivity in the metal machining market by 5x.

Exosonic, Inc.: Exosonic makes supersonic commercial aircraft that don't have to produce a loud sonic boom, so they can be flown over land. Its goal is a plane that can fly from SF to NYC in three hours. The CEO worked on NASA's low-boom X-59 aircraft while at Lockheed Martin. Exosonic now has letters of intent from a major airline and two Department of Defense groups, plus a $300,000 U.S. Air Force contract.

Nimbus: Founded by a serial entrepreneur and based in Ann Arbor, Mich., Nimbus is developing the next-generation vehicle platform for urban transportation. Founder Lihang Nong previously launched the fuel-injection systems developer PicoSpray and is now looking to answer the question: can a vehicle be several times more space- and energy-efficient than today's cars while actually being more comfortable to ride in?

UrbanKisaan: UrbanKisaan is a vertical farming operation based in India that delivers fresh produce subscriptions to households. Its farms of stacked-up hydroponic tables can be located near cities with just 1% of the land usage of traditional agriculture, and there are no pesticides necessary. In a market with a growing middle class seeking healthy foods, delivering from farm-to-door could let UrbanKisaan control quality and its margins.

Talyn Air: Two former SpaceX engineers have developed a long-range electric vertical take-off and landing (eVTOL) aircraft for passengers and cargo. The startup has created an electric fixed-wing aircraft that is caught mid-air by a custom winged drone during take-offs and landings, an approach that its founders say gives the aircraft three times the range of its competitors, at 350 miles.

BuildBuddy: Two ex-Googlers want to provide a Google-style development environment to all by building an open-source UI and feature set on top of Google's Bazel software. The company says its solution speeds up build times by up to 10x. It's free for independent developers, with the price scaling from $4 to $49 per user depending on the size of the team and the features required.

Dataline: Meant to let websites gather analytics data from users who use ad-blocking tools. Claiming that most ad-blocker users care mostly about display ads and cross-site tracking, the company says that first-party analytics gets hit as collateral damage. By acting as a smart proxy that runs on a sub-domain, Dataline avoids most ad-blocking systems (for now, presumably).

Cortex: Many modern online software applications are powered by countless independent, purpose-focused tools, or microservices. Cortex monitors your app's microservices to automatically flag the right person (hooking into Datadog, Slack, PagerDuty, etc.) when one breaks.

apitracker: Even if your website seems to be loading fine, the APIs you use to make it work might be having trouble, breaking things in not-so-obvious ways. Apitracker monitors the APIs you use, alerting you when one of them starts to fail and providing insights into their overall performance.

Freshpaint: Freshpaint's autotrack system collects all pageviews, clicks, etc. across your site, allowing you to push the data into tools like Google Analytics and Facebook Pixel retroactively, without requiring your dev team to write manual trackers for each event. The base plan is free for sites with fewer than 3,000 users and $300 for sites with up to 50,000 monthly users, after which the pricing shifts to custom packaging.

Datree: Datree allows companies to set up rules and security policies for their codebase, and ensures those rules are followed before any code is merged. Charging $28 per developer (it's free for independent/open-source projects), they've pulled in ~$230K in revenue to date. Find our previous coverage of Datree here.

fly.io: Deploys your app on servers that are physically closer to your users, decreasing latency and improving the user experience. If your app grows more popular in a certain city, Fly detects that and scales resources accordingly.

Sweeps: Sweeps claims it can make your website 40% faster with one line of code by more intelligently loading all of the third-party tools the site uses. The team says its tech not only improves speed but does so while improving SEO.

Orbiter: Orbiter is an automatic real-time monitoring and alert system integrated with Slack to ensure better customer service and revenue management.

Release: Product releases can be tricky. Release provides a staging-management toolkit: it builds a staging environment each time there's a pull request, allowing for faster, more collaborative development cycles.

Signadot: Signadot is monitoring and management software for the microservices that modern startups rely on to power their own applications and services, hopefully flagging issues before they become apparent to the end user.

Raycast: Raycast is a universal command bar for developers and many of the tools they use. Users can integrate apps including Jira, GitHub or Slack and take a Superhuman-like approach to completing forms and tasks. The team is pitching the tool as a way to help engineers get their non-engineering work done quickly.

Cotter: Cotter is building a phone-number-based login platform that authenticates a user's device in a workflow that the company's founders say has the convenience of SMS-based OTP without the security issues. The startup is targeting customers in developing countries, where email is less utilized and less convenient as a login.

ditto: Ditto's founders are hoping to create the "Figma for words", helping teams plan out more thoughtfully the copy they use to describe their products and workflows. The collaboration tool, created by Stanford roommates Jolena Ma and Jessica Ouyang, currently has 80+ companies represented among its users.

Scout: A continuous integration and deployment toolkit for machine learning experiments inside a GitHub workflow.

ToDesktop: ToDesktop has designed a service to automate all of your desktop application publishing needs. It works with Windows, Mac and Linux and provides native installers, auto-updates, code signing and crash reports without the need for any infrastructure or configurations for developers.

DeepSource: DeepSource is a code review tool that allows developers to check for bug risks, anti-patterns, performance issues and security flaws in Python and Go.

Flowbot: Flowbot is a natural-language autocomplete search tool for coding in Python. It lets Python developers type plain English when they can't remember the exact function they're thinking of, with Flowbot digging through documentation and considering the context to find the code it thinks you're looking for.

PostHog: PostHog is a software service that lets developers understand how their users are actually working with their products. It's a product analytics toolkit for open-source programmers.


Rapid Industrialization to Boost Machine Learning Courses Growth by 2019-2025 – Keep Reading

The global Machine Learning Courses market reached ~US$ xx Mn in 2018 and is anticipated to grow at a CAGR of xx% over the forecast period 2019-2029. In this Machine Learning Courses market study, the following years are considered to predict the market footprint:

The business intelligence study of the Machine Learning Courses market covers the estimated size of the market both in terms of value (Mn/Bn USD) and volume (x units). In a bid to recognize the growth prospects in the Machine Learning Courses market, the study has been geographically segmented into important regions that are progressing faster than the overall market. Each segment of the Machine Learning Courses market has been individually analyzed on the basis of pricing, distribution and demand prospects for the following regions:

Each market player encompassed in the Machine Learning Courses market study is assessed according to its market share, production footprint, recent launches, agreements, ongoing R&D projects, and business tactics. In addition, the Machine Learning Courses market study includes a strengths, weaknesses, opportunities and threats (SWOT) analysis.


On the basis of age group, the global Machine Learning Courses market report covers the footprint and adoption pattern of segments including:

The key players covered in this study

EdX

Ivy Professional School

NobleProg

Udacity

Edvancer

Udemy

Simplilearn

Jigsaw Academy

BitBootCamp

Metis

DataCamp

Market segment by Type, the product can be split into

Rote Learning

Learning From Instruction

Learning By Deduction

Learning By Analogy

Explanation-Based Learning

Learning From Induction

Market segment by Application, split into

Data Mining

Computer Vision

Natural Language Processing

Biometrics Recognition

Search Engines

Medical Diagnostics

Detection Of Credit Card Fraud

Securities Market Analysis

DNA Sequencing

Market segment by Regions/Countries, this report covers

United States

Europe

China

Japan

Southeast Asia

India

Central & South America

The study objectives of this report are:

To analyze global Machine Learning Courses status, future forecast, growth opportunity, key market and key players.

To present the Machine Learning Courses development in United States, Europe and China.

To strategically profile the key players and comprehensively analyze their development plan and strategies.

To define, describe and forecast the market by product type, market and key regions.

In this study, the years considered to estimate the market size of Machine Learning Courses are as follows:

History Year: 2014-2018

Base Year: 2018

Estimated Year: 2019

Forecast Year 2019 to 2025

For data by region, company, type and application, 2018 is considered the base year. Wherever data was unavailable for the base year, the prior year has been considered.


What insights readers can gather from the Machine Learning Courses market report?

The Machine Learning Courses market report answers the following queries:

Why Choose Machine Learning Courses Market Report?




RIT professor explores the art and science of statistical machine learning – RIT University News Services

Statistical machine learning is at the core of modern-day advances in artificial intelligence, but a Rochester Institute of Technology professor argues that applying it correctly requires equal parts science and art. Professor Ernest Fokou of RIT's School of Mathematical Sciences emphasized the human element of statistical machine learning in a primer on the field that graced the cover of a recent edition of Notices of the American Mathematical Society.

"One of the most important commodities in your life is common sense," said Fokou. "Mathematics is beautiful, but mathematics is your servant. When you sit down and design a model, data can be very stubborn. We design models with assumptions of what the data will show or look like, but the data never looks exactly like what you expect. You may have a nice central tenet, but there's always something that's going to require your human intervention. That's where the art comes in. After you run all these statistical techniques, when it comes down to drawing the final conclusion, you need your common sense."

Statistical machine learning is a field that combines mathematics, probability, statistics, computer science, cognitive neuroscience and psychology to create models that learn from data and make predictions about the world. One of its earliest applications was when the United States Postal Service used it to accurately learn and recognize handwritten letters and digits to autonomously sort letters. Today, we see it applied in a variety of settings, from facial recognition technology on smartphones to self-driving cars.

Researchers have developed many different learning machines and statistical models that can be applied to a given problem, but there is no one-size-fits-all method that works well for all situations. Fokou said selecting the appropriate method requires mathematical and statistical rigor along with practical knowledge. His paper explains the central concepts and approaches, which he hopes will get more people involved in the field and harvesting its potential.

"Statistical machine learning is the main tool behind artificial intelligence," said Fokou. "It's allowing us to construct extensions of the human being so our lives, transportation, agriculture, medicine and education can all be better. Thanks to statistical machine learning, you can understand the processes by which people learn and slowly and steadily help humanity access a higher level."

This year, Fokou has been on sabbatical traveling the world exploring new frontiers in statistical machine learning. Fokou's full article is available on the AMS website.

Here is the original post:
RIT professor explores the art and science of statistical machine learning - RIT University News Services

Machine learning could replace subjective diagnosis of mental health disorders – techAU

AI is taking over almost every industry, and the health industry has some of the biggest benefits to gain. Machine learning, a discipline of AI, is showing good signs of being able to supersede human capabilities in accurately identifying mental health disorders.

CSIRO has announced the results of a study of 101 participants that used ML to diagnose bipolar disorder or depression. The error rate was between 20 and 30 per cent, and while that isn't yet better than humans and isn't ready for clinical use, it does show a promising sign for the future.

The machine-learning system detects patterns in data, a process known as training. As with autonomous driving's use of computer vision, the results get better the more data you can provide. It's expected to improve as it's fed more data on how people play the game.

One of the big challenges in psychiatry is misdiagnosis.

It gives us first-hand information about what is happening in the brain, which can be an alternative route of information for diagnosis.

The immediate aim is to build a tool that will help clinicians, but the long-term goal is to replace subjective diagnosis altogether. Between depression and bipolar disorder, there's a significant incidence of misdiagnosis of bipolar people as being depressed, as much as 60%. Around one third of them remain misdiagnosed for more than 10 years.

It is estimated that within five years, computers, rather than humans, could be making the diagnosis, as we improve the ability of AI to understand the complex human brain.

The study involved having users play a simple game where you select between two boxes on screen. One box rewards you with greater frequency than the other; you have to collect the most points.

Whether you stick with orange, experiment with blue, or just randomly alternate, these decisions paint a picture of how your brain works.
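As a toy illustration of the study's setup, the two-box game can be simulated in a few lines. The reward rates and strategies below are invented for illustration; CSIRO has not published these parameters:

```python
import random

random.seed(42)

# Hypothetical reward rates for the two boxes (not the study's actual values).
REWARD_P = {"orange": 0.7, "blue": 0.3}

def play(strategy, rounds=1000):
    """Simulate the game; return total points and the choice history."""
    points, history = 0, []
    for _ in range(rounds):
        choice = strategy(history)
        history.append(choice)
        if random.random() < REWARD_P[choice]:
            points += 1
    return points, history

# Two contrasting playing styles whose decision patterns differ sharply.
stick = lambda history: "orange"  # always exploit the same box
alternate = lambda history: "orange" if len(history) % 2 == 0 else "blue"

stick_points, stick_choices = play(stick)
alt_points, alt_choices = play(alternate)
print(stick_points, alt_points)  # sticking with the richer box scores higher
```

It is the choice histories, not the scores, that a classifier would compare: distinct decision styles leave distinct statistical fingerprints, which is what the researchers mine for diagnostic signal.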

Often the traditional signatures of mental illness are too subtle for humans to notice, but its the kind of thing machine-learning AI thrives on. Now CSIRO researchers have developed a system they say can peer into the mind with significant accuracy, and could revolutionise mental health diagnosis.

Last year, researchers reported they had found a way of analysing language from Facebook status updates to predict future diagnoses of depression.

More information at CSIRO.

Read the original:
Machine learning could replace subjective diagnosis of mental health disorders - techAU

Next-gen supercomputers are fast-tracking treatments for the coronavirus in a race against time – CNBC

A computer image created by Nexu Science Communication together with Trinity College in Dublin shows a model structurally representative of a betacoronavirus, the type of virus linked to COVID-19.

Source: NEXU Science Communication | Reuters

Research has gone digital, and medical science is no exception. As the novel coronavirus continues to spread, for instance, scientists searching for a treatment have drafted IBM's Summit supercomputer, the world's most powerful high-performance computing facility, according to the Top500 list, to help find promising candidate drugs.

One way of treating an infection could be with a compound that sticks to a certain part of the virus, disarming it. With tens of thousands of processors spanning an area as large as two tennis courts, the Summit facility at Oak Ridge National Laboratory (ORNL) has more computational power than 1 million top-of-the-line laptops. Using that muscle, researchers digitally simulated how 8,000 different molecules would interact with the virus, a Herculean task for your typical personal computer.

"It took us a day or two, whereas it has traditionally taken months on a normal computer," said Jeremy Smith, director of the University of Tennessee/ORNL Center for Molecular Biophysics and principal researcher in the study.

Simulations alone can't prove a treatment will work, but the project was able to identify 77 candidate molecules that other researchers can now test in trials. The fight against the novel coronavirus is just one example of how supercomputers have become an essential part of the process of discovery. The $200 million Summit and similar machines also simulate the birth of the universe, explosions from atomic weapons and a host of events too complicated or too violent to recreate in a lab.

The current generation's formidable power is just a taste of what's to come. Aurora, a $500 million Intel machine currently under installation at Argonne National Laboratory, will herald the long-awaited arrival of "exaflop" facilities capable of a billion billion calculations per second (five times more than Summit) in 2021 with others to follow. China, Japan and the European Union are all expected to switch on similar "exascale" systems in the next five years.

These new machines will enable new discoveries, but only for the select few researchers with the programming know-how required to efficiently marshal their considerable resources. What's more, technological hurdles lead some experts to believe that exascale computing might be the end of the line. For these reasons, scientists are increasingly attempting to harness artificial intelligenceto accomplish more research with less computational power.

"We as an industry have become too captive to building systems that execute the benchmark well without necessarily paying attention to how systems are used," says Dave Turek, vice president of technical computing for IBM Cognitive Systems. He likens high-performance computing record-seeking to focusing on building the world's fastest race car instead of highway-ready minivans. "The ability to inform the classic ways of doing HPC with AI becomes really the innovation wave that's coursing through HPC today."

Just getting to the verge of exascale computing has taken a decade of research and collaboration between the Department of Energy and private vendors. "It's been a journey," says Patricia Damkroger, general manager of Intel's high-performance computing division. "Ten years ago, they said it couldn't be done."

While each system has its own unique architecture, Summit, Aurora, and the upcoming Frontier supercomputer all represent variations on a theme: they harness the immense power of graphical processing units (GPUs) alongside traditional central processing units (CPUs). GPUs can carry out more simultaneous operations than a CPU can, so leaning on these workhorses has let Intel and IBM design machines that would have otherwise required untold megawatts of energy.

IBM's Summit supercomputer currently holds the record for the world's fastest supercomputer.

Source: IBM

That computational power lets Summit, which is known as a "pre-exascale" computer because it runs at 0.2 exaflops, simulate one single supernova explosion in about two months, according to Bronson Messer, the acting director of science for the Oak Ridge Leadership Computing Facility. He hopes that machines like Aurora (1 exaflop) and the upcoming Frontier supercomputer (1.5 exaflops) will get that time down to about a week. Damkroger looks forward to medical applications. Where current supercomputers can digitally model a single heart, for instance, exascale machines will be able to simulate how the heart works together with blood vessels, she predicts.

But even as exascale developers take a victory lap, they know that two challenges mean the add-more-GPUs formula is likely approaching a plateau in its scientific usefulness. First, GPUs are strong but dumb, best suited to simple operations such as arithmetic and geometric calculations that they can crowdsource among their many components. Researchers have written simulations to run on flexible CPUs for decades, and shifting to GPUs often requires starting from scratch.

GPUs have thousands of cores for simultaneous computation, but each handles simple instructions.

Source: IBM

"The real issue that we're wrestling with at this point is how do we move our code over" from running on CPUs to running on GPUs, says Richard Loft, a computational scientist at the National Center for Atmospheric Research, home of Top500's 44th-ranking supercomputer Cheyenne, a CPU-based machine. "It's labor intensive, and they're difficult to program."

Second, the more processors a machine has, the harder it is to coordinate the sharing of calculations. For the climate modeling that Loft does, machines with more processors better answer questions like "what is the chance of a once-in-a-millennium deluge," because they can run more identical simulations simultaneously and build up more robust statistics. But they don't ultimately enable the climate models themselves to get much more sophisticated.
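The statistical point can be sketched with a toy "climate model": the more identical runs you can afford, the more reliably you can estimate the probability of a rare event. The Gumbel rainfall distribution and numbers below are stand-ins, not a real climate model:

```python
import numpy as np

rng = np.random.default_rng(3)

def annual_max_rainfall(n_runs):
    """Stand-in 'climate model': one annual-maximum rainfall value per run."""
    return rng.gumbel(loc=100.0, scale=20.0, size=n_runs)

# The exact 1-in-1000 level for this Gumbel distribution, for comparison.
threshold = 100.0 - 20.0 * np.log(-np.log(1 - 1e-3))

for n_runs in (100, 100_000):
    exceed = annual_max_rainfall(n_runs) > threshold
    print(f"{n_runs:>7} runs -> estimated exceedance probability {exceed.mean():.4f}")
```

With only 100 runs the once-in-a-millennium event usually never occurs at all, giving an estimate of zero; with 100,000 runs the estimate settles near the true 0.001. That is why more processors running more identical simulations buy better statistics without making any single model more sophisticated.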

For that, the actual processors have to get faster, a feat that bumps up against what's physically possible. Faster processors need smaller transistors, and current transistors measure about 7 nanometers. Companies might be able to shrink that size, Turek says, but only to a point. "You can't get to zero [nanometers]," he says. "You have to invoke other kinds of approaches."

If supercomputers can't get much more powerful, researchers will have to get smarter about how they use the facilities. Traditional computing is often an exercise in brute forcing a problem, and machine learning techniques may allow researchers to approach complex calculations with more finesse.


Take drug design. A pharmacist considering a dozen ingredients faces countless possible recipes with varying amounts of each compound, which could take a supercomputer years to simulate. An emerging machine learning technique known as Bayesian optimization asks: does the computer really need to check every single option? Rather than systematically sweeping the field, the method helps isolate the most promising drugs by implementing common-sense assumptions. Once it finds one reasonably effective solution, for instance, it might prioritize seeking small improvements with minor tweaks.
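A minimal sketch of the idea, with an invented "potency" curve standing in for a real drug simulation: a Gaussian-process surrogate plus an expected-improvement rule picks each next candidate, so only a fraction of the 200 possible doses is ever evaluated:

```python
# Toy Bayesian optimization: find a good dose with ~13 evaluations, not 200.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def potency(dose):
    """Invented drug-potency curve we want to maximize (stand-in only)."""
    return np.exp(-(dose - 0.63) ** 2 / 0.05)

candidates = np.linspace(0, 1, 200).reshape(-1, 1)  # every possible "recipe"
rng = np.random.default_rng(0)
X = candidates[rng.choice(len(candidates), 3, replace=False)]  # 3 random starts
y = potency(X).ravel()

for _ in range(10):  # only 10 expensive evaluations instead of all 200
    gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y.max()) / sigma
    ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    nxt = candidates[np.argmax(ei)]          # most promising untried dose
    X = np.vstack([X, nxt])
    y = np.append(y, potency(nxt[0]))

print(f"best dose found: {X[np.argmax(y), 0]:.2f}")
```

The expected-improvement score balances exploiting doses the surrogate already rates highly against exploring doses it is uncertain about, which is the "common-sense assumptions" trade-off described above.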

In trial-and-error fields like materials science and cosmetics, Turek says that this strategy can reduce the number of simulations needed by 70% to 90%. Recently, for instance, the technique has led to breakthroughs in battery design and the discovery of a new antibiotic.

Fields like climate science and particle physics use brute-force computation in a different way, by starting with simple mathematical laws of nature and calculating the behavior of complex systems. Climate models, for instance, try to predict how air currents conspire with forests, cities, and oceans to determine global temperature.

Mike Pritchard, a climatologist at the University of California, Irvine, hopes to figure out how clouds fit into this picture, but most current climate models are blind to features smaller than a few dozen miles wide. Crunching the numbers for a worldwide layer of clouds, which might be just a couple hundred feet tall, simply requires more mathematical brawn than any supercomputer can deliver.

Unless the computer understands how clouds interact better than we do, that is. Pritchard is one of many climatologists experimenting with training neural networksa machine learning technique that looks for patterns by trial and errorto mimic cloud behavior. This approach takes a lot of computing power up front to generate realistic clouds for the neural network to imitate. But once the network has learned how to produce plausible cloudlike behavior, it can replace the computationally intensive laws of nature in the global model, at least in theory. "It's a very exciting time," Pritchard says. "It could be totally revolutionary, if it's credible."
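The surrogate idea can be sketched as follows: a small neural network is trained, up front, to mimic an "expensive" routine (here a stand-in formula, not real cloud physics), after which the cheap network answers new queries in its place:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_cloud_physics(humidity, temperature):
    """Stand-in for a costly sub-grid calculation (not real cloud physics)."""
    return np.sin(3 * humidity) * np.cos(2 * temperature)

# Up-front cost: generate training data by running the expensive routine.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(5000, 2))          # (humidity, temperature) pairs
y = expensive_cloud_physics(X[:, 0], X[:, 1])

# Train a small neural network to imitate the routine's behaviour.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, y)

# The cheap surrogate now stands in for the original on new inputs.
X_new = rng.uniform(0, 1, size=(1000, 2))
errors = np.abs(surrogate.predict(X_new)
                - expensive_cloud_physics(X_new[:, 0], X_new[:, 1]))
print(f"mean surrogate error: {errors.mean():.3f}")
```

The catch, as Pritchard's "if it's credible" caveat implies, is that the surrogate is only trustworthy on inputs resembling its training data; a real climate application must verify it does not go haywire in unseen regimes.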

Companies are preparing their machines so researchers like Pritchard can take full advantage of the computational tools they're developing. Turek says IBM is focusing on designing AI-ready machines capable of extreme multitasking and quickly shuttling around huge quantities of information, and the Department of Energy contract for Aurora is Intel's first that specifies a benchmark for certain AI applications, according to Damkroger. Intel is also developing an open-source software toolkit called oneAPI that will make it easier for developers to create programs that run efficiently on a variety of processors, including CPUs and GPUs.

As exascale and machine learning tools become increasingly available, scientists hope they'll be able to move past the computer engineering and focus on making new discoveries. "When we get to exascale that's only going to be half the story," Messer says. "What we actually accomplish at the exascale will be what matters."

Go here to see the original:
Next-gen supercomputers are fast-tracking treatments for the coronavirus in a race against time - CNBC

Decoding the Future Trajectory of Healthcare with AI – ReadWrite

Artificial Intelligence (AI) is getting increasingly sophisticated day by day in its application, with enhanced efficiency and speed at a lower cost. Every single sector has been reaping benefits from AI in recent times. The Healthcare industry is no exception. Here is decoding the future trajectory of healthcare with AI.

The impact of artificial intelligence in the healthcare industry through machine learning (ML) and natural language processing (NLP) is transforming care delivery. Additionally, patients are expected to gain relatively high access to their health-related information than before through various applications such as smart wearable devices and mobile electronic medical records (EMR).

Personalized healthcare will empower patients to take the wheel of their own well-being, facilitate high-end care, and extend better patient-provider communication to underprivileged areas.

For instance, IBM Watson for Health is helping healthcare organizations apply cognitive technology to unlock vast amounts of health data and power diagnosis.

In addition, Google's DeepMind Health is collaborating with researchers, clinicians, and patients in order to solve real-world healthcare problems. Additionally, the company has combined systems neuroscience with machine learning to develop strong general-purpose learning algorithms within neural networks to mimic the human brain.

Companies are working towards developing AI technology to solve several existing challenges, especially within the healthcare space. A strong focus on funding and starting AI healthcare programs played a significant role in Microsoft Corporation's decision to launch a 5-year, US$ 40 million program known as AI for Health in January 2019.

The Microsoft program will use artificial intelligence tools to resolve some of the greatest healthcare challenges including global health crises, treatment, and disease diagnosis. Microsoft has also ensured that academia, non-profit, and research organizations have access to this technology, technical experts, and resources to leverage AI for care delivery and research.

In January 2020, these factors influenced Takeda Pharmaceuticals Company and MIT's School of Engineering to join hands for three years to drive innovation and application of AI in the healthcare industry and drug development.

AI applications are centered on three main investment areas: diagnostics, engagement, and digitization. With the rapid advancement in technologies, there are exciting breakthroughs in incorporating AI in medical services.

The most interesting aspect of AI is robots. Robots are not only replacing trained medical staff but also making them more efficient in several areas. Robots help in controlling the cost while potentially providing better care and performing accurate surgery in limited space.

China and the U.S. have started investing in the development of robots to support doctors. In November 2017, a robot in China passed a medical licensing exam using only an AI brain. China also produced the first-ever semi-automated operating robot, which was used to suture blood vessels as fine as 0.03 mm.

In order to prevent coronavirus from spreading, American doctors are relying on a robot that can take the patient's vitals. In addition, robots are also being used for recovery and consulting assistance and in transport units. These robots are showcasing significant potential to revolutionize medical procedures in the future.

Precision medicine is an emerging approach to disease prevention and treatment. The precision medication approach allows researchers and doctors to predict more accurate treatment and prevention strategies.

The advent of precision medicine technology has allowed healthcare to actively track patients' physiology in real-time, take multi-dimensional data, and create predictive algorithms that use collective learnings to calculate individual outcomes.

In recent years, there has been an immense focus on enabling direct-to-consumer genomics. Now, companies are aiming to create patient-centric products within digitization processes and genomics related to ordering complex testing in clinics.

In January 2020, ixLayer, a start-up based in San Francisco, launched a first-of-its-kind precision health testing platform to enhance the delivery of diagnostic testing and to simplify the complex relationship among physicians, precision health tests, and patients.

Personal health monitoring is a promising example of AI in healthcare. With the emergence of advanced AI and Internet of Medical Things (IoMT), demand for consumer-oriented products such as smart wearables for monitoring well-being is growing significantly.

Owing to the rapid proliferation of smart wearables and mobile apps, enterprises are introducing varied options to monitor personal health.

In October 2019, Gali Health, a health technology company, introduced its Gali AI-powered personal health assistant for people suffering from inflammatory bowel diseases (IBD). It offers health tracking and analytical tools, medically-vetted educational resources, and emotional support to the IBD community.

Similarly, start-ups are also coming forward with innovative devices integrated with state-of-the-art AI technology to contribute to the growing demand for personal health monitoring.

In recent years, AI has been used in numerous ways to support the medical imaging of all kinds. At present, the biggest use for AI is to assist in the analysis of images and perform single narrow recognition tasks.

In the United States, AI is considered highly valuable in enhancing business operations and patient care. It has the greatest impact on patient care by improving the accuracy of clinical outcomes and medical diagnosis.

Strong presence of leading market players in the country is bolstering the demand for medical imaging in hospitals and research centers.

In January 2020, Hitachi Healthcare Americas announced a new dedicated R&D center in North America, which will leverage advancements in machine learning and artificial intelligence to bring about the next generation of medical imaging technology.

With a plethora of issues driven by the growing rate of chronic disease and the aging population, the need for new innovative solutions in the healthcare industry is moving on an upswing.

Unleashing AI's complete potential in the healthcare industry is not an easy task. Both healthcare providers and AI developers together will have to tackle all the obstacles on the path towards the integration of new technologies.

Clearing all the hurdles will need a combination of technological refinement and shifting mindsets. As the AI trend becomes more deep-rooted, it is giving rise to ubiquitous discussions. Will AI replace doctors and medical professionals, especially radiologists and physicians? The answer: it will increase the efficiency of medical professionals.

Initiatives by IBM Watson and Google's DeepMind will soon unlock the critical answers. However, while AI aims to mimic the human brain in healthcare, human judgment and intuition cannot be substituted.

Even though AI is augmenting the industry's existing capabilities, it is unlikely to fully replace human intervention. AI-skilled workers will replace only those who don't want to embrace technology.

Healthcare is a dynamic industry with significant opportunities. However, uncertainty, cost concerns, and complexity are making it an unnerving one.

The best opportunity for healthcare in the near future lies in hybrid models, where clinicians and physicians will be supported in treatment planning, diagnosis, and identifying risk factors. Also, with the growth of the geriatric population and the rise of health-related concerns across the globe, the overall burden of disease management has increased.

Patients are also expecting better treatment and care. Due to growing innovations in the healthcare industry with respect to improved diagnosis and treatment, AI has gained acceptance among patients and doctors.

In order to develop better medical technology, entrepreneurs, healthcare service providers, investors, policy developers, and patients are coming together.

These factors are set to exhibit a brighter future of AI in the healthcare industry. It is extremely likely that there will be widespread use and massive advancements of AI integrated technology in the next few years. Moreover, healthcare providers are expected to invest in adequate IT infrastructure solutions and data centers to support new technological development.

Healthcare companies should continually integrate new technologies to build strong value and to keep patients' attention.

-

The insights presented in the article are based on a recent research study on Global Artificial Intelligence In Healthcare Market by Future Market Insights.

Abhishek Budholiya is a tech blogger, digital marketing pro, and has contributed to numerous tech magazines. Currently, as a technology and digital branding consultant, he offers his analysis on the tech market research landscape. His forte is analysing the commercial viability of a new breakthrough, a trait you can see in his writing. When he is not ruminating about the tech world, he can be found playing table tennis or hanging out with his friends.

See the article here:
Decoding the Future Trajectory of Healthcare with AI - ReadWrite

Skill up for the digital future with India’s #1 Machine Learning Lab and AI Research Center – Inventiva

Every tech professional today, irrespective of their role in the organisation, needs to be AI/ML-ready to compete in the new world order. In keeping with the current and future demand for professionals with expertise in AI and Machine Learning (ML), and to help build a holistic understanding of the subject, IIIT Hyderabad, in association with TalentSprint, an ed-tech platform, is offering an AI/ML Executive Certification Program for working professionals in India and abroad.

The programme is designed for working professionals in a 13-week format that involves masterclass lectures, hands-on labs, mentorship, hackathons, and workshops to ensure fast-track learning. The programme is conducted in Hyderabad to enable a wider audience to benefit from the expertise of IIIT Hyderabads Machine Learning Lab.

The programme has successfully completed 11 cohorts with 2200+ participants who are currently working with more than 600 top companies.

You can apply for the 14th cohort here

Participants will get access to in-person classes every weekend. This enables professionals from in and around Hyderabad to build AI/ML expertise from India's top Machine Learning Lab at IIIT Hyderabad.

With a balanced mix of lectures and labs, the programme will also host hackathons, group labs, and workshops, which enable participants to work in teams of exceptional peer groups. Participants will get assistance from mentors throughout the programme, and the lectures are delivered by world-class faculty and industry experts.

Refresh your knowledge on coding and the mathematics necessary for building expertise in AI/ML

Learn to translate real-world problems into AI/ML abstractions

Learn about and apply standard AI/ML algorithms to create AI/ML applications

Implement practical solutions using Deep Learning Techniques and Toolchains

Participate in industry projects and hackathons

While there are a number of courses on offer in this domain, what makes this AI/ML Executive Certification Program stand out is the fact that it is offered by India's No. 1 Machine Learning Lab at IIIT Hyderabad. The programme follows a unique 5-step learning process to ensure fast-track learning: masterclass lectures, hands-on labs, mentorship, hackathons and workshops. Moreover, participants also get a chance to learn and collaborate with leading people from academia, industry and global bluechip institutions.

The institute has been the torchbearer of research for several years. It hosts the Kohli Center (KCIS), India's leading center on intelligent systems, whose research was featured in 600 publications and has received 5,792 citations in academic publications. It also hosts the Center for Visual Information Technology (CVIT), which focuses on basic and advanced research in image processing, computer vision, computer graphics and machine learning.

Tech professionals with at least one year of work experience and a coding background are encouraged to apply. The programme is especially beneficial for business leaders, CXOs, project managers, developers and analysts. Applications for the 14th cohort close on March 20. Apply today!

Go here to read the rest:
Skill up for the digital future with India's #1 Machine Learning Lab and AI Research Center - Inventiva

ServiceNow pulls on its platforms, talks up machine learning, analytics in biggest release since ex-SAP boss took reins – The Register

As is the way with the 21st century, IT companies are apt to get meta and ServiceNow is no exception.

In its biggest product release since the arrival of revenue-boosting ex-SAP boss Bill McDermott as its new CEO, the cloudy business process company is positioning itself as the "platform of platforms". Which goes to show, if nothing else, that platformization also applies to platforms.

To avoid plunging into an Escher-esque tailspin of abstraction, it is best to look at what Now Platform Orlando actually does and who, if anyone, it might help.

The idea is that ServiceNow's tools make routine business activity much easier and slicker. To this the company is adding intelligence, analytics and AI, it said.

Take the arrival of a new employee. They might need to be set up on HR and payroll systems, get access to IT equipment and applications, have facilities management give them the right desk and workspace, be given building security access and perhaps have to sign some legal documents.

Rather than multiple people doing each of these tasks with different IT systems, ServiceNow will make one poor soul do it using its single platform, which accesses all the other prerequisite applications, said David Flesh, ServiceNow product marketing director.

It is also chucking chatbots at that luckless staffer. In January, ServiceNow bought Passage AI, a startup that helps customers build chatbots in multiple languages. It is using this technology to create virtual assistants to help with some of the most common requests that hit HR and IT service desks: password resets, getting access to Wi-Fi, that kind of thing.

This can also mean staffers don't have to worry where they send requests, meaning if, for example, they've just found out they're going to become a parent, they can fire questions at an agent rather than HR, their boss or the finance team. The firm said: "Agents are a great way for employees to find information and abstract away that organizational complexity."

ServiceNow has also introduced machine learning, for example, in IT operations management, which uses systems data to identify when a service is degrading and what could be causing the problem. "You get more specific information about the cause and suggested actions to take to actually remediate the problem," Flesh said.

Customers looking to use this feature will still have to train the machine learning models on historic datasets from their operations and validate models, as per the usual ML pipeline. But ServiceNow makes the process more graphical, and comes with its knowledge of common predictors of operational problems.
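Under the hood, such a train-then-validate pipeline might look roughly like the following sketch. The features, labelling rule, and model choice are invented stand-ins, not ServiceNow's actual implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic "historic dataset" of operations metrics.
rng = np.random.default_rng(7)
n = 4000
cpu = rng.uniform(0, 100, n)          # CPU utilisation, %
error_rate = rng.exponential(1.0, n)  # application errors per minute
latency = rng.normal(200, 50, n)      # response time, ms

# Invented labelling rule: past incidents followed high CPU plus many errors.
incident = ((cpu > 80) & (error_rate > 1.5)).astype(int)

X = np.column_stack([cpu, error_rate, latency])
X_train, X_test, y_train, y_test = train_test_split(X, incident, random_state=0)

# Train on history, then validate on held-out data, as the article describes.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"validation accuracy: {model.score(X_test, y_test):.2f}")
```

A feature-importance inspection of such a model is one way a product could surface "more specific information about the cause" of a degradation, as Flesh describes.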

Lastly, analytics is a new feature in the update. Users can include key performance indicators in the workflows they create, and the platform includes the tools to track and analyse those KPIs and suggest how to improve performance. It also suggests useful KPIs.

Another application of the analytics tools is for IT teams - traditionally the company's core users - monitoring cloud services. ServiceNow said it helps optimise organisations' cloud usage by "making intelligent recommendations on managing usage across business hours, choosing the right resources and enforcing usage policies".

With McDermott's arrival and a slew of new features and customer references, ServiceNow is getting a lot of attention, but many of these technologies exist in other products.

There are independent robotic process automation (RPA) vendors who build automation into common tasks, while application vendors are also introducing automation within their own environments. But as application and platform upgrade cycles are sluggish, and RPA has proved difficult to scale, ServiceNow may find a receptive audience for its, er, platform of platforms.


Visit link:
ServiceNow pulls on its platforms, talks up machine learning, analytics in biggest release since ex-SAP boss took reins - The Register

Navigating the New Landscape of AI Platforms – Harvard Business Review

Executive Summary

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tooling for AI systems than they do building the AI systems themselves. Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling, and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies.

Nearly two years ago, Seattle Sport Sciences, a company that provides data to soccer club executives, coaches, trainers and players to improve training, made a hard turn into AI. It began developing a system that tracks ball physics and player movements from video feeds. To build it, the company needed to label millions of video frames to teach computer algorithms what to look for. It started out by hiring a small team to sit in front of computer screens, identifying players and balls on each frame. But it quickly realized that it needed a software platform in order to scale. Soon, its expensive data science team was spending most of its time building a platform to handle massive amounts of data.

These are heady days when every CEO can see or at least sense opportunities for machine-learning systems to transform their business. Nearly every company has processes suited for machine learning, which is really just a way of teaching computers to recognize patterns and make decisions based on those patterns, often faster and more accurately than humans. Is that a dog on the road in front of me? Apply the brakes. Is that a tumor on that X-ray? Alert the doctor. Is that a weed in the field? Spray it with herbicide.
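Each of those questions is just a classifier applied to data: labeled examples in, a decision out. As an illustrative sketch only (not any vendor's product), a toy nearest-neighbour model captures that core loop:

```python
import math

# Toy supervised learning: classify a new point by the label of its
# nearest training example (1-nearest-neighbour). Illustrative only --
# real systems use far richer models, but the idea is the same:
# learn patterns from labeled examples, then decide.
training_data = [
    ((0.9, 0.8), "tumor"),      # feature vectors with known labels
    ((0.1, 0.2), "healthy"),
    ((0.8, 0.9), "tumor"),
    ((0.2, 0.1), "healthy"),
]

def predict(features):
    """Return the label of the closest known example."""
    return min(training_data,
               key=lambda ex: math.dist(features, ex[0]))[1]

print(predict((0.85, 0.75)))  # -> tumor
print(predict((0.15, 0.25)))  # -> healthy
```

The features and labels here are made up; in practice they would come from the millions of annotated examples discussed throughout this article.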

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tools for AI systems than they do building the systems themselves. A recent survey of 500 companies by the firm Algorithmia found that expensive teams spend less than a quarter of their time training and iterating machine-learning models, which is their primary job function.

Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labeling and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies, like Seattle Sport Sciences.

Frustrated that its data science team was spinning its wheels, Seattle Sport Sciences' AI architect John Milton finally found a commercial solution that did the job. "I wish I had realized that we needed those tools," said Milton. He hadn't factored the infrastructure into the original budget, and having to go back to senior management to ask for it wasn't a pleasant experience for anyone.

The AI giants, Google, Amazon, Microsoft and Apple, among others, have steadily released tools to the public, many of them free, including vast libraries of code that engineers can compile into deep-learning models. Facebook's powerful object-recognition tool, Detectron, has become one of the most widely adopted open-source projects since its release in 2018. But using those tools can still be a challenge, because they don't necessarily work together. This means data science teams have to build connections between the tools to get them to do the job a company needs.

The newest leap on the horizon addresses this pain point. New platforms are now allowing engineers to plug in components without worrying about the connections.

For example, Determined AI and Paperspace sell platforms for managing the machine-learning workflow. Determined AI's platform includes automated elements to help data scientists find the best architecture for neural networks, while Paperspace's comes with access to dedicated GPUs in the cloud.

"If companies don't have access to a unified platform, they're saying, 'Here's this open source thing that does hyperparameter tuning. Here's this other thing that does distributed training,' and they are literally gluing them all together," said Evan Sparks, co-founder of Determined AI. "The way they're doing it is really with duct tape."
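For a concrete sense of what that glue looks like, here is a minimal hand-rolled hyperparameter sweep in plain Python; `train_and_score` is a dummy stand-in for a team's real training routine, not any tool's actual API:

```python
import itertools

# The kind of "duct tape" glue code teams write when no unified platform
# stitches the pieces together: enumerate every hyperparameter combination
# and keep the one that scores best.
def train_and_score(learning_rate, batch_size):
    # Dummy objective standing in for real validation accuracy: it peaks
    # at learning_rate=0.01 and batch_size=64.
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(batch_size - 64) / 1000

grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [32, 64, 128],
}

# Try every combination in the grid and keep the best-scoring one.
best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: train_and_score(**params),
)
print(best)  # -> {'learning_rate': 0.01, 'batch_size': 64}
```

Unified platforms replace exactly this sort of loop with managed, automated search, which is the pain point the new vendors are selling against.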

Labelbox is a training data platform, or TDP, for managing the labeling of data so that data science teams can work efficiently with annotation teams across the globe. (The author of this article is the company's co-founder.) It gives companies the ability to track their data, spot and fix bias in the data, and optimize the quality of their training data before feeding it into their machine-learning models.

It's the solution that Seattle Sport Sciences uses. John Deere uses the platform to label images of individual plants, so that smart tractors can spot weeds and deliver herbicide precisely, saving money and sparing the environment unnecessary chemicals.

Meanwhile, companies no longer need to hire experienced researchers to write machine-learning algorithms, the steam engines of today. They can find them for free or license them from companies who have solved similar problems before.

Algorithmia, which helps companies deploy, serve and scale their machine-learning models, operates an algorithm marketplace so data science teams don't duplicate other people's effort by building their own. Users can search through the 7,000 different algorithms on the company's platform and license one or upload their own.

Companies can even buy complete off-the-shelf deep learning models ready for implementation.

Fritz.ai, for example, offers a number of pre-trained models that can detect objects in videos or transfer artwork styles from one image to another, all of which run locally on mobile devices. The company's premium services include creating custom models and more automation features for managing and tweaking models.

And while companies can use a TDP to label training data, they can also find pre-labeled datasets, many of them free, that are general enough to address a wide range of problems.

Soon, companies will even offer machine-learning as a service: Customers will simply upload data and an objective and be able to access a trained model through an API.
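No specific service is named here, so the sketch below is purely hypothetical: it shows what an "upload data and an objective" request body might look like, with every endpoint field and helper name invented for illustration.

```python
import json

# Hypothetical machine-learning-as-a-service flow: the customer ships
# rows of data plus a stated objective, and would later query the trained
# model through an API. All field names are invented for illustration.
def build_training_request(rows, target_column):
    """Bundle training data and an objective into one JSON payload."""
    return json.dumps({
        "objective": {"type": "classification", "target": target_column},
        "data": rows,
    })

request_body = build_training_request(
    rows=[{"sepal_len": 5.1, "species": "setosa"},
          {"sepal_len": 6.7, "species": "virginica"}],
    target_column="species",
)

# The service would train a model from this payload and return an API
# endpoint for predictions; here we just inspect the request we built.
payload = json.loads(request_body)
print(payload["objective"]["target"])  # -> species
```

The point of the sketch is the division of labour: everything after the upload, from architecture search to serving, would happen on the provider's side.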

In the late 18th century, Maudslay's lathe led to standardized screw threads and, in turn, to interchangeable parts, which spread the industrial revolution far and wide. Machine-learning tools will do the same for AI, and, as a result of these advances, companies are able to implement machine learning with fewer data scientists and less-senior data science teams. That's important given the looming machine-learning human-resources crunch: According to a 2019 Dun & Bradstreet report, 40 percent of respondents from Forbes Global 2000 organizations say they are adding more AI-related jobs. And the number of AI-related job listings on the recruitment portal Indeed.com jumped 29 percent from May 2018 to May 2019. Most of that demand is for supervised-learning engineers.

But C-suite executives need to understand the need for those tools and budget accordingly. As Seattle Sport Sciences learned, it's better to familiarize yourself with the full machine-learning workflow and identify the necessary tooling before embarking on a project.

That tooling can be expensive, whether the decision is to build or to buy. As is often the case with key business infrastructure, there are hidden costs to building. Buying a solution might look more expensive up front, but it is often cheaper in the long run.

Once you've identified the necessary infrastructure, survey the market to see what solutions are out there and build the cost of that infrastructure into your budget. Don't fall for a hard sell. The industry is young, both in terms of the time it's been around and the age of its entrepreneurs. The ones who are in it out of passion are idealistic and mission-driven. They believe they are democratizing an incredibly powerful new technology.

The AI tooling industry is facing more than enough demand. If you sense someone is chasing dollars, be wary. The serious players are eager to share their knowledge and help guide business leaders toward success. Successes benefit everyone.

Follow this link:
Navigating the New Landscape of AI Platforms - Harvard Business Review