Artificial Intelligence Emerges as the Superhero of Tech Era – Analytics Insight

Artificial Intelligence (AI) has changed the world for the better. AI and robotics have existed in fictional stories and movies for a very long time, portrayed in both good and bad lights. In the tech era, however, AI is revealing itself to be a lifesaver.

Everyone seems suddenly interested in AI. It changes the face of the products and services it touches. AI has given a boost to sectors including business, marketing, agriculture, and banking; there is hardly an industry it has not reached. In one way or another, everyone is a beneficiary of the emerging technology.

The recent trend in AI is quite different from how the technology has worked so far. Scientists and researchers are pushing AI toward a point where it can save human lives. Set aside the soap operas and movies in which robots and AI enslave humans; that remains far beyond reality for now. Instead, AI applications are being built to help humans live safely.

Autonomous vehicles to prevent accidents

Vehicles are part of everyday human life; no one can imagine a world without them. They were among the beginnings of the evolution of technology and machinery, and today far more advanced vehicles are on the road. Still, technological growth has not been enough to save human lives.

According to a report, around 1.35 million people are killed on roadways around the world each year. Every day, 3,700 people die globally in road traffic crashes involving cars, buses, motorcycles, bicycles, trucks, or pedestrians. Pedestrians, bicyclists, and motorcyclists are the most vulnerable.

It is too late to persuade people to drive less or to follow traffic rules perfectly; the cat is already out of the bag. AI, therefore, can be used to tackle the situation. The introduction of autonomous vehicles is a revolution for the automotive industry, and their central concern is how people can share the road safely without hurting each other. Autonomous vehicles carry computer vision systems that can detect hazards and prevent accidents, though these vehicles have yet to hit the road in numbers.

AI applications are also acting as accident preventers. An application named !important is designed to minimize the risk of accidents with certified connected vehicles such as cars, trucks, buses, autonomous vehicles, and construction equipment; it can even work with drones. The app creates a virtual protection zone around pedestrians, wheelchair users, cyclists, and motorcyclists using their devices. If a connected vehicle comes too close to a protected user, its brakes are triggered automatically.
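Conceptually, the protection zone is a geofence: compute the distance between each connected vehicle and each protected road user, and trigger braking inside a threshold radius. The sketch below is only a guess at the shape of that check; the radius and coordinates are invented, and !important's real logic is not public.

```python
import math

PROTECTION_RADIUS_M = 30  # hypothetical zone size, not !important's actual value

def should_brake(vehicle, person, radius_m=PROTECTION_RADIUS_M):
    """Return True when a connected vehicle enters a road user's
    virtual protection zone. Positions are (lat, lon) in degrees;
    an equirectangular approximation is accurate at these distances."""
    lat_v, lon_v = map(math.radians, vehicle)
    lat_p, lon_p = map(math.radians, person)
    x = (lon_v - lon_p) * math.cos((lat_v + lat_p) / 2)
    y = lat_v - lat_p
    distance_m = 6371000 * math.hypot(x, y)  # Earth radius in meters
    return distance_m <= radius_m

cyclist = (59.3293, 18.0686)
print(should_brake((59.3295, 18.0686), cyclist))  # ~22 m away -> True
print(should_brake((59.3300, 18.0686), cyclist))  # ~78 m away -> False
```

A production system would also have to account for vehicle speed and braking distance, not just raw proximity.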

Health applications to detect medical conditions

At a time when the pandemic is infecting and killing millions of people, human-to-human contact is unsafe. Even doctors and frontline workers are at high risk of contracting the disease despite PPE gear. Hospitals and medical institutions are therefore searching for ways to reduce the human hand in care delivery while still providing good healthcare, and AI is one of those ways.

Catalyst.ai and healthcare.ai, designed by Health Catalyst, are among the life-saving applications built on AI. Using machine learning, they can identify patients at high risk of readmission and give clinicians guidelines for addressing their problems. The applications have also helped prevent hospital-acquired infections and chronic diseases, and reduce mortality rates. A hospital in Israel is testing smart hospital rooms that could save the lives of patients as well as doctors and nurses: the technology keeps patients at a distance from health workers through AI-powered robots, virtual-reality glasses, and early warnings.

Doctors are skilled at finding an ailment when they examine a patient, but AI applications can detect diseases and emergencies as early as the ambulance dispatch. Corti, an application built to understand patients' medical conditions, is an AI-enabled system that can identify cardiac arrest.

Its voice-based digital assistant listens in on emergency calls and analyzes the caller's complaints to determine the medical condition. The application reportedly reduced missed cardiac arrest detections by 43%. The company is currently working on making it detect other ailments as well.

Collated data for drug production

Detection of a health issue is followed by treatment, and no disease can be cured without proper medical attention and drugs. If finding the disease is a task AI has improved, so is developing the right drugs for the ailment. Prescribing must be careful and particular, since side effects can lead to other health risks.

Okwin is designing pharmaceutical solutions through AI-powered medical research and development. The company uses machine-learning algorithms to build models that predict disease evolution, improve treatment, and enhance the way drugs are developed. Okwin obtains data from hospital partners to find ways to develop drugs more quickly and with fewer side effects.

Image recognition software to track traffickers

Threats to human life are not always health-related. According to a UN report from 2016, around 63,000 people are recorded as victims of human trafficking worldwide in a year. Human trafficking is a global issue, and countries and governments are trying to keep it under control. It is not an easy job, as traffickers operate in the shadows without anyone's notice. Women and children are the most vulnerable.

AI stands as a rescue operation against the surge in human trafficking. Delta 8.7 is an organization that applies AI and computational science to track and stop traffickers; its image-recognition technology can help identify both the criminals and the victims.

Despite facing a pandemic and losing millions of lives, humans still believe that life is invaluable. AI-powered applications have found ways to help prevent road accidents, provide healthcare, and curb human trafficking. AI is giving humans a chance to live a safe and happy life.


Artificial Intelligence Is Here To Calm Your Road Rage – TIME

I am behind the wheel of a Nissan Leaf, circling a parking lot, trying not to let the day's nagging worries and checklists distract me to the point of imperiling pedestrians. Like all drivers, I am unwittingly communicating my stress to this vehicle in countless subtle ways: the strength of my grip on the steering wheel, the slight expansion of my back against the seat as I breathe, the things I mutter to myself as I pilot around cars and distracted pedestrians checking their phones in the parking lot.

"Hello, Corinne," a calm voice says from the audio system. "What's stressing you out right now?"

The conversation that ensues offers a window into the ways in which artificial intelligence could transform our experience behind the wheel: not by driving the car for us, but by taking better care of us as we drive.

Before coronavirus drastically altered our routines, three-quarters of U.S. workers (some 118 million people) commuted to the office alone in a car. From 2009 to 2019, Americans added an average of two minutes to their commute each way, according to U.S. Census data. That negligible daily average is driven by a sharp increase in the number of people making "super commutes" of 90 minutes or more each way, a population that increased 32% from 2005 to 2017. The long-term impact of COVID-19 on commuting isn't clear, but former transit riders who opt to drive instead of crowding into buses or subway cars may well make up for car commuters who skip at least some of their daily drives and work from home instead.

Longer commutes are associated with increased physical health risks like high blood pressure, obesity, stroke and sleep disorders. A 2017 research project at the University of the West of England found that every extra minute of the survey respondents' commutes correlated with lower job and leisure time satisfaction. Adding 20 minutes to a commute, researchers found, has the same depressing effect on job satisfaction as a 19% pay cut.

Switching modes of transit can offer some relief: people who walk, bike or take trains to work tend to be happier commuters than those who drive (and, as a University of Amsterdam study recently found, they tend to miss their commute more during lockdown). But reliable public transit is not universally available, nor are decent jobs always close to affordable housing.

Technology has long promised that an imminent solution is right around the corner: self-driving cars. In the near future, tech companies claim, humans won't drive so much as be ferried about by fully autonomous cars that will navigate safely and efficiently to their destinations, leaving the people inside free to sleep, work or relax as easily as if they were on their own couch. A commute might be a lot less stressful if you could nap the whole way there, or get lost in a book or Netflix series without having to worry about exits or collisions.

In 2012, Google executives went on the record claiming self-driving cars would be widely available within five years; they said the same thing again in 2015. Elon Musk throws out ship dates for fully autonomous Teslas as often as doomsday cult leaders reschedule the end of the world. Yet these forecast utopias have still not arrived.

The majority of carmakers have walked back their most ambitious estimates. It will likely be decades before such cars are a reality for even a majority of drivers. In the meantime, the car commute remains a big, unpleasant, unhacked chunk of time in millions of Americans' daily lives.

A smaller and less heralded group of researchers is working on how cars can make us happier while we drive them. It may be decades before artificial intelligence can completely take over piloting our vehicles. In the short run, however, it may be able to make us happierand healthierpilots.

Lane changes, left turns, four-way stops and the like are governed by rules, but also rely on drivers making on-the-spot judgments with potentially deadly consequences. These are also the moments where driver stress spikes.

Many smart car features currently on the market give drivers data that assist with these decisions, like sensors that alert them when cars are in their blind spots or their vehicle is drifting out of its lane.

Another thing that causes drivers stress is uncertainty. One 2015 study found commuters who drove themselves to work were more stressed by the journey than were transit riders or other commuters, largely because of the inconsistency that accidents, roadwork and other traffic snarls caused in their schedules. But even if we can't control the variables that affect a commute, we're calmer if we can at least anticipate them; hence the popularity of real-time arrival screens at subway and bus stops.

The Beaverton, Ore.-based company Traffic Technology Services (TTS) makes a product called the Personal Signal Assistant, a platform that enables cars to communicate with traffic signals in areas where that data is publicly available. TTS's first client, Audi, used the system to build a tool that counts down the remaining seconds of a red light (visually, on the dashboard) when a car is stopped at one, and suggests speed modifications as the car approaches a green light. The tool was designed to keep traffic flowing: no more honking at distracted drivers who don't notice the light has turned green. But users also reported a marked decrease in stress. At the moment, the technology works in 26 North American metropolitan areas and two cities in Europe.
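Underneath, such a speed suggestion is simple arithmetic: knowing the distance to the signal and the window in which it will be green, pick the fastest legal speed whose arrival time lands inside that window. The sketch below uses assumed inputs and speed limits; TTS's actual algorithm is proprietary.

```python
def suggest_speed(distance_m, seconds_until_green, green_duration_s,
                  min_kmh=20, max_kmh=50):
    """Suggest a speed (km/h) that reaches the signal during its green phase.

    Returns None if no speed in the legal range works (the driver
    should simply stop). Hypothetical logic, not the real TTS system.
    """
    earliest = seconds_until_green                  # light turns green
    latest = seconds_until_green + green_duration_s  # light turns red again
    for kmh in range(max_kmh, min_kmh - 1, -1):      # prefer faster legal speeds
        eta = distance_m / (kmh / 3.6)               # km/h -> m/s
        if earliest <= eta <= latest:
            return kmh
    return None

# 300 m from a light that turns green in 20 s and stays green for 30 s:
print(suggest_speed(300, 20, 30))   # -> 50
# 100 m from a light that turns green in 60 s: even 20 km/h arrives too soon.
print(suggest_speed(100, 60, 5))    # -> None
```

The countdown display is the other half of the product; this sketch covers only the speed-advisory side.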

TTS has 60 full- and part-time employees in the U.S. and Germany, and recently partnered with Lamborghini, Bentley and a handful of corporate clients. Yet CEO Thomas Bauer says it can be hard to interest investors in technologies that focus on improving human drivers' experience instead of just rendering them obsolete. "We certainly don't draw the same excitement with investors as [companies focused on] autonomous driving," Bauer says. "What we do is not quite as exciting because it doesn't take the driver out of the picture just yet."

Pablo Paredes, a clinical assistant professor of psychiatry and behavioral sciences at the Stanford School of Medicine, is the director of the school's Pervasive Wellbeing Technology Lab. Situated in a corner of a cavernous Palo Alto, Calif., office building that used to be the headquarters of the defunct health-technology company Theranos, the lab looks for ways to rejigger the habits and objects people use in their everyday lives to improve mental and physical health. Team members don't have to look far for reminders of what happens when grandiose promises aren't backed up by data: Theranos' circular logo is still inlaid in brass in the building's marble-floored atrium.

It can be hard to tell the lab's experiments from its standard-issue office furniture. To overcome the inertia that often leads users of adjustable-height desks to sit more often than stand, one of the workstations in the team's cluster of cubicles has been outfitted with a sensor and mechanical nodule that make it rise and lower at preset intervals, smoothly enough that a cup of coffee won't spill. In early trials, users particularly absorbed in their work just kept typing as the desk rose, slowly standing up along with it.

But the millions of hours consumed in the U.S. each day by the daily drive to work hold special fascination for Paredes. He's drawn to the challenge of transforming a part of the day generally thought of as detrimental to health into something therapeutic. "The commute for me is the big elephant in the room," he says. "There are very simple things that we're overlooking in normal life that can be greatly improved and really repurposed to help a lot of people."

In a 2018 study, Paredes and his colleagues found that it's possible to infer a driver's muscle tension (a proxy for stress) from the movement of their hands on a car's steering wheel. They're now experimenting with cameras that detect neck tension by noting the subtle changes in the angle of a driver's head as it bobs with the car's movements.
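The intuition behind that finding is testable with very little code: a relaxed driver makes constant micro-corrections at the wheel, while a stiff, stressed grip damps them. The feature below (rolling variance of a steering-angle trace) is purely illustrative; the Stanford study's actual models were far richer.

```python
import statistics

def steering_tension_score(angles, window=10):
    """Rough stress proxy from a steering-angle time series (degrees).

    Low variance in micro-corrections can indicate a stiff, tense grip.
    Illustrative only; not the model from the 2018 study.
    """
    variances = [statistics.pvariance(angles[i:i + window])
                 for i in range(len(angles) - window + 1)]
    mean_var = sum(variances) / len(variances)
    return 1.0 / (1.0 + mean_var)  # higher score = less movement = more tension

# Two synthetic traces: loose, frequent corrections vs. a rigid grip.
relaxed = [0.0, 1.2, -0.8, 0.9, -1.1, 0.7, -0.6, 1.0, -0.9, 0.5, -0.4, 0.8]
tense   = [0.0, 0.1, -0.1, 0.05, -0.05, 0.1, 0.0, -0.1, 0.05, 0.0, -0.05, 0.1]
print(steering_tension_score(tense) > steering_tension_score(relaxed))  # -> True
```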

The flagship of the team's mindful-commuting project is the silver-colored Nissan Leaf in their parking lot. The factory-standard electric vehicle has been tricked out with a suite of technologies designed to work together to decrease a driver's stress.

On a test drive earlier this year, a chatbot speaking through the car's audio system offered me the option of engaging in a guided breathing exercise. When I verbally agreed, the driver's seatback began vibrating at intervals, while the voice instructed me to breathe along with its rhythm.

The lab published the results of a small study earlier this year showing that the seat-guided exercise reduced driver stress and breathing rates without impairing performance. They are now experimenting with a second vibrating system to see if lower-frequency vibrations could be used to slow breathing rates (and therefore stress) without any conscious effort on the driver's part.

The goal, eventually, is a mass-market car that can detect an elevation in a driver's stress level, via seat and steering wheel sensors or the neck-tension cameras. It would then automatically engage the calming-breath exercise, or talk through a problem or tell a joke to ease tension, using scripts developed with the input of cognitive behavioral therapists.

These technologies have value even as cars autonomous capabilities advance, Paredes says. Even if a car is fully self-driving, the human inside will still often be a captive audience of one, encased in a private space with private worries and fears.

Smarter technologies alone aren't the solution to commuters' problems. The auto industry has a long history of raising drivers' tolerance for long commutes by making cars more comfortable and attractive places to be, all the while promising a better driving experience that's just around the corner, says Peter Norton, an associate professor of science, technology, and society at the University of Virginia and author of Fighting Traffic: The Dawn of the Motor Age in the American City. From his perspective, stress-busting seats would join radios and air conditioners as distractions from bigger discussions about planning, transit and growing inequality, all of which could offer much more value to commuters than a nicer car.

In addition, how long it will be before these latest features become widely available options is an open question. Paredes' lab had to suspend work during the pandemic, as it's hard to maintain social distancing while working inside a compact sedan. TTS is in talks to expand its offerings to other automakers, and Paredes has filed patents on some of his lab's inventions. But just because a technology is relatively easy to integrate into a car doesn't mean it will be standard soon. The first commercially available backup cameras came on the market in 1991. Despite their effectiveness in reducing collisions, only 24% of cars on the road had them by 2016, according to the Insurance Institute for Highway Safety, and most were newer luxury vehicles. (The cameras are now required by law in all new vehicles.)

These technologies also raise new questions of inequality and exploitation. It's one thing for a commuter to opt for a seat that calms them down after a tough day. But if you drive for a living, should the company that owns your vehicle have the right to insist that you use a seat cover that elevates your breath rate and keeps you alert at the wheel? Who owns the health data your car collects, and who gets to access it? All of the unanswered questions that self-driving technologies raise apply to self-soothing technologies as well.

Back in Palo Alto, the pandemic still weeks away, I am piloting the Leaf around the parking lot with a member of the lab gamely along for the ride in the back. The chatbot asks again what's stressing me out. "I have a deadline," I say, "for a magazine article about cars and artificial intelligence."

The bot asks if this problem is significantly affecting my life (not really), if I've encountered something similar before (yep), if previous strategies could be adapted to this scenario (they can) and when I'll be able to enter a plan to tackle this problem in my calendar (later, when I'm not driving). I do feel a little better. I talk to myself alone in the car all the time. It's kind of nice to have the car talk back.

"Great. I'm glad you can do something about it. By breaking down a problem into tiny steps, we can often string together a solution," the car says. "Sound good?"




Artificial Intelligence Identifies 80,000 Spiral Galaxies Promises More Astronomical Discoveries in the Future – SciTechDaily

Conceptual illustration of how artificial intelligence classifies various types of galaxies according to their morphologies. Credit: NAOJ/HSC-SSP

Astronomers have applied artificial intelligence (AI) to ultra-wide field-of-view images of the distant Universe captured by the Subaru Telescope, and have achieved a very high accuracy for finding and classifying spiral galaxies in those images. This technique, in combination with citizen science, is expected to yield further discoveries in the future.

A research group, consisting of astronomers mainly from the National Astronomical Observatory of Japan (NAOJ), applied a deep-learning technique, a type of AI, to classify galaxies in a large dataset of images obtained with the Subaru Telescope. Thanks to its high sensitivity, as many as 560,000 galaxies have been detected in the images. It would be extremely difficult to visually process this large number of galaxies one by one with human eyes for morphological classification. The AI enabled the team to perform the processing without human intervention.

Automated processing techniques for extraction and judgment of features with deep-learning algorithms have been rapidly developed since 2012. Now they usually surpass humans in terms of accuracy and are used for autonomous vehicles, security cameras, and many other applications. Dr. Ken-ichi Tadaki, a Project Assistant Professor at NAOJ, came up with the idea that if AI can classify images of cats and dogs, it should be able to distinguish galaxies with spiral patterns from galaxies without spiral patterns. Indeed, using training data prepared by humans, the AI successfully classified the galaxy morphologies with an accuracy of 97.5%. Then applying the trained AI to the full data set, it identified spirals in about 80,000 galaxies.
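The workflow the team describes (label a training set by hand, train a classifier, apply it to the full catalogue, and measure held-out accuracy) can be illustrated in miniature. In the toy sketch below, synthetic images and a single hand-crafted symmetry feature stand in for the real Subaru data and the deep neural network, but the train/evaluate structure is the same: spiral arms break a galaxy's azimuthal symmetry, so brightness scatter around an annulus already separates the two toy classes.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_galaxy(spiral, size=64):
    """Synthetic galaxy image: smooth radial falloff, plus a two-armed
    azimuthal pattern when spiral=True. A toy stand-in for Subaru data."""
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y) + 1e-6
    theta = np.arctan2(y, x)
    img = np.exp(-r / 12)                             # smooth disk
    if spiral:
        img *= 1 + 0.5 * np.cos(2 * theta - 0.3 * r)  # spiral arms
    return img + rng.normal(0, 0.02, (size, size))    # observational noise

def azimuthal_variation(img, r_lo=8, r_hi=16):
    """Feature: relative brightness scatter in an annulus. Spiral arms
    break azimuthal symmetry, so spirals score higher."""
    size = img.shape[0]
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y)
    ring = img[(r >= r_lo) & (r < r_hi)]
    return ring.std() / (ring.mean() + 1e-9)

# "Training": choose a decision threshold from labeled examples.
labeled = [(make_galaxy(s), s) for s in [True, False] * 50]
threshold = np.mean([azimuthal_variation(im) for im, _ in labeled])

# Evaluation on a held-out set, mirroring the team's accuracy check.
held_out = [(make_galaxy(s), s) for s in [True, False] * 50]
correct = sum((azimuthal_variation(im) > threshold) == lab for im, lab in held_out)
print(f"held-out accuracy: {correct / len(held_out):.2f}")
```

Real galaxy morphologies are far messier than this toy, which is exactly why the team needed a learned deep model rather than a hand-crafted feature.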

Now that this technique has proven effective, it can be extended to classify galaxies into more detailed classes by training the AI on a substantial number of galaxies classified by humans. NAOJ is now running a citizen-science project, GALAXY CRUISE, in which citizens examine galaxy images taken with the Subaru Telescope to search for features suggesting that a galaxy is colliding or merging with another. The advisor of GALAXY CRUISE, Associate Professor Masayuki Tanaka, has high hopes for the study of galaxies using artificial intelligence: "The Subaru Strategic Program is serious Big Data containing an almost countless number of galaxies. Scientifically, it is very interesting to tackle such big data with a collaboration of citizen astronomers and machines. By employing deep learning on top of the classifications made by citizen scientists in GALAXY CRUISE, chances are we can find a great number of colliding and merging galaxies."

Reference: "Spin Parity of Spiral Galaxies II: A catalogue of 80k spiral galaxies using big data from the Subaru Hyper Suprime-Cam Survey and deep learning" by Ken-ichi Tadaki, Masanori Iye, Hideya Fukumoto, Masao Hayashi, Cristian E. Rusu, Rhythm Shimakawa and Tomoka Tosaki, 2 July 2020, Monthly Notices of the Royal Astronomical Society. DOI: 10.1093/mnras/staa1880


Worldwide Spending on Artificial Intelligence Is Expected to Double in Four Years, Reaching $110 Billion in 2024, According to New IDC Spending Guide…

FRAMINGHAM, Mass.--(BUSINESS WIRE)--Global spending on artificial intelligence (AI) is forecast to double over the next four years, growing from $50.1 billion in 2020 to more than $110 billion in 2024. According to the International Data Corporation (IDC) Worldwide Artificial Intelligence Spending Guide, spending on AI systems will accelerate over the next several years as organizations deploy artificial intelligence as part of their digital transformation efforts and to remain competitive in the digital economy. The compound annual growth rate (CAGR) for the 2019-2024 period will be 20.1%.
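The figures are worth a sanity check. A 20.1% CAGR over 2019-2024 implies a 2019 base of roughly $44 billion (the release quotes only the 2020 and 2024 figures), while the 2020-to-2024 growth alone works out to about 21.7% per year:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Quoted figures: $50.1B in 2020, $110B in 2024, 20.1% CAGR over 2019-2024.
implied_2019 = 110 / (1 + 0.201) ** 5   # back out the unstated 2019 base
print(f"implied 2019 spending: ${implied_2019:.1f}B")   # roughly $44B
print(f"2020-2024 CAGR: {cagr(50.1, 110, 4):.1%}")      # roughly 21.7%
```

The gap between the implied 2019 base and the 2020 figure reflects slower growth in the pandemic year, consistent with the COVID-19 caveats later in the release.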

"Companies will adopt AI not just because they can, but because they must," said Ritu Jyoti, program vice president, Artificial Intelligence at IDC. "AI is the technology that will help businesses to be agile, innovate, and scale. The companies that become 'AI powered' will have the ability to synthesize information (using AI to convert data into information and then into knowledge), the capacity to learn (using AI to understand relationships between knowledge and apply the learning to business problems), and the capability to deliver insights at scale (using AI to support decisions and automation)."

Two of the leading drivers for AI adoption are delivering a better customer experience and helping employees to get better at their jobs. This is reflected in the leading use cases for AI, which include automated customer service agents, sales process recommendation and automation, automated threat intelligence and prevention, and IT automation. Combined, these four use cases will represent nearly a third of all AI spending this year. Some of the fastest growing use cases are automated human resources, IT automation, and pharmaceutical research and discovery.

The two industries that will spend the most on AI solutions throughout the forecast are Retail and Banking. The Retail industry will largely focus its AI investments on improving the customer experience via chatbots and recommendation engines while Banking will include spending on fraud analysis and investigation and program advisors and recommendation systems. Discrete Manufacturing, Process Manufacturing, and Healthcare will round out the top 5 industries for AI spending in 2020. The industries that will see the fastest growth in AI spending over the 2020-2024 forecast are Media, Federal/Central Government, and Professional Services.

"COVID-19 caused a slowdown in AI investments across the Transportation industry as well as the Personal and Consumer Services industry, which includes leisure and hospitality businesses. These industries will be cautious with their AI investments in 2020 as their focus will be on cost containment and revenue generation rather than innovation or digital experiences," said Andrea Minonne, senior research analyst, Customer Insights & Analysis. "On the other hand, AI has played a role in helping societies deal with large-scale disruptions caused by quarantines and lockdowns. Some European governments have partnered with AI start-ups to deploy AI solutions to monitor the outcomes of their social distancing rules and assess if the public was complying with rules. Also, hospitals across Europe are using AI to speed up COVID-19 diagnosis and testing, to provide automated remote consultations, and to optimize capacity at hospitals."

"This release of the Artificial Intelligence Spending Guide was adjusted for the impact of COVID-19," said Stacey Soohoo, research manager, Customer Insights & Analysis. "In the short term, the pandemic caused supply chain disruptions and store closures with continued impact expected to linger into 2021 and the outyears. For the most impacted industries, this has caused some delays in AI deployments. Elsewhere, enterprises have seen a silver lining in the current situation: an opportunity to become more resilient and agile in the long run. Artificial intelligence continues to be a key technology in the road to recovery for many enterprises and adopting artificial intelligence will help many to rebuild or enhance future revenue streams and operations."

Software and services will each account for a little more than one third of all AI spending this year with hardware delivering the remainder. The largest share of software spending will go to AI applications ($14.1 billion) while the largest category of services spending will be IT services ($14.5 billion). Servers ($11.2 billion) will dominate hardware spending. Software will see the fastest growth in spending over the forecast period with a five-year CAGR of 22.5%.

On a geographic basis, the United States will deliver more than half of all AI spending throughout the forecast, led by the Retail and Banking industries. Western Europe will be the second largest geographic region, led by Banking, Retail, and Discrete Manufacturing. China will be the third largest region for AI spending with State/Local Government, Banking, and Professional Services as the leading industries. The strongest spending growth over the five-year forecast will be in Japan (32.1% CAGR) and Latin America (25.1% CAGR).

The Worldwide Artificial Intelligence Spending Guide sizes spending for technologies that analyze, organize, access, and provide advisory services based on a range of unstructured information. The Spending Guide quantifies the AI opportunity by providing data for 27 use cases across 19 industries in nine regions and 32 countries. Data is also available for the related hardware, software, and services categories. This version (V2 2020) of the Spending Guide incorporates updated estimates for the impact of COVID-19 across all technology and industry markets as of the end of May 2020.

About IDC Spending Guides

IDC's Spending Guides provide a granular view of key technology markets from a regional, vertical industry, use case, buyer, and technology perspective. The spending guides are delivered via pivot table format or custom query tool, allowing the user to easily extract meaningful information about each market by viewing data trends and relationships.


About IDC

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. With more than 1,100 analysts worldwide, IDC offers global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. IDC's analysis and insight helps IT professionals, business executives, and the investment community to make fact-based technology decisions and to achieve their key business objectives. Founded in 1964, IDC is a wholly-owned subsidiary of International Data Group (IDG), the world's leading tech media, data and marketing services company. To learn more about IDC, please visit http://www.idc.com. Follow IDC on Twitter at @IDC and LinkedIn. Subscribe to the IDC Blog for industry news and insights: http://bit.ly/IDCBlog_Subscribe.


Defense Innovation Unit Teaching Artificial Intelligence to Detect Cancer – Department of Defense

The Defense Innovation Unit is bringing together the best of commercially available artificial intelligence technology and the Defense Department's vast cache of archived medical data to teach computers how to identify cancers and other medical irregularities.

The result will be new tools medical professionals can use to more accurately and more quickly identify medical issues in patients.

The new DIU project, called "Predictive Health," also involves the Defense Health Agency, three private-sector businesses and the Joint Artificial Intelligence Center.

The new capability directly supports the development of the JAIC's warfighter health initiative, which is working with the Defense Health Agency and the military services to field AI solutions that are aimed at transforming military health care. The JAIC is also providing the funding and adding technical expertise for the broader initiative.

"The JAIC's contributions to this initiative have engendered the strategic development of required infrastructure to enable AI-augmented radiographic and pathologic diagnostic capabilities," said Navy Capt. (Dr.) Hassan Tetteh, the JAIC's Warfighter Health Mission Initiative chief. "Given the military's unique, diverse, and rich data, this initiative has the potential to complement other significant military medical advancements to include antisepsis, blood transfusions, and vaccines."

A big part of the Predictive Health project will involve training AI to look at de-identified DOD medical imagery to teach it to identify cancers. The AI can then be used with augmented reality microscopes to help medical professionals better identify cancer cells.

Nathanael Higgins, the support contractor managing the program for DIU, explained what the project will mean for the department.

"From a big-picture perspective, this is about integrating AI into the DOD health care system," Higgins said. "There are four critical areas we think this technology can impact. The first one is, it's going to help drive down cost."

The earlier medical practitioners can catch a disease, Higgins said, the easier it will be to anticipate outcomes and to provide less invasive treatments. That means lower cost to the health care system overall, and to the patient, he added.

Another big issue for DOD is maximizing personnel readiness, Higgins said.

"If you can cut down on the number of acute issues that come up that prevent people from doing their job, you essentially help our warfighting force," he explained.

Helping medical professionals do their jobs better is also a big part of the Predictive Health project, Higgins said.

"Medical professionals are already overworked," he said. "We're essentially giving them an additional tool that will help them make confident decisions and know that they made the right decision so that we're not facing as many false negatives or false positives. And ultimately we're able to identify these types of disease states earlier, and that'll help the long-term prognosis."

In line with the department's addition of a line of effort to the National Defense Strategy focused on taking care of people, Higgins said, using AI to identify medical conditions early will also help to optimize warfighter performance.

"Early diagnosis equals less acute injuries, which means less invasive procedures, which means we have more guys and gals in our frontline forces and less cost on the military health care system," he said. "The ultimate value here is really saving lives as people are our most valuable resource."

Using AI to look for cancer first requires researchers to teach the AI what cancer looks like, which in turn requires access to a large set of training data. For the Predictive Health project, this means a large volume of medical imagery of the kind produced by CT scans, MRIs, X-rays and biopsy slides, together with confirmation, known ahead of time, that the imagery depicts the kinds of illnesses, such as cancer, that researchers hope to train the AI to identify.

DOD has access to a large set of this kind of data. Dr. Niels Olson, the DIU chief medical officer and originator of the Predictive Health project, said DOD also has a very diverse set of data, given its size and the array of people for whom the department's health care system is responsible.

"If you think about it, the DOD, through retired and active duty service, is probably one of the largest health care systems in the world, at about 9 million people," Olson said. "The more data a tool has available to it, the more effective it is. That's kind of what makes DOD unique. We have a larger pool of information to draw from, so that you can select more diverse cases."

"Unlike some of the other large systems, we have a pretty good representation of the U.S. population," he said. "The military actually has a nice smooth distribution of population in a lot of ways that other regional systems don't have. And we have it at scale."

While DOD does have access to a large set of diverse medical imaging data that can be used to train an AI, Olson said privacy will not be an issue.

"We'll use de-identified information, imaging, from clinical specimens," Olson said. "So this means actual CT images and actual MRI images of people who have a disease, where you remove all of the identifiers and then just use the diagnostic imaging and the actual diagnosis that the pathologist or radiologist wrote down."

AI doesn't need to know who the medical imaging has come from; it just needs to see a picture of cancer to learn what cancer is.

"All the computer sees is an image that is associated with some kind of disease, condition or cancer," Olson said. "We are ensuring that we mitigate all risk associated with [the Health Insurance Portability and Accountability Act of 1996], personally identifiable information and personal health information."

Using the DOD's access to training data and commercially available AI technology, the DIU's Predictive Health project will need to train the AI to identify cancers. Olson explained that teaching an AI to look at a medical image and identify what is cancer is a process similar to that of a parent teaching a child to correctly identify things they might see during a walk through the neighborhood.

"The kid asks 'Mom, is that a tree?' And Mom says, 'No, that's a dog,'" Olson explained. "The kids learn by getting it wrong. You make a guess. We formally call that an inference, a guess is an inference. And if the machine gets it wrong, we tell it that it got it wrong."

The AI can guess over and over again, learning each time about how it got the answer wrong and why, until it eventually learns how to correctly identify a cancer within the training set of data, Olson said, though he said he doesn't want it to get too good.
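The guess-and-correct loop Olson describes is the core of supervised learning, and a perceptron is about the simplest model that learns this way: it makes an inference for each example, and whenever the inference is wrong, its weights are nudged toward the correct answer. The sketch below uses two made-up clusters of points as stand-ins for the two classes; it is an illustration of the learning principle, not the Predictive Health system.

```python
# Toy version of "guess, get told you're wrong, adjust": a perceptron
# infers a label for each example and updates its weights only when
# the inference was wrong. The data is illustrative, not medical imagery.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:          # label is +1 or -1
            guess = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1
            if guess != label:             # told it got it wrong...
                w[0] += lr * label * x[0]  # ...so nudge the weights
                w[1] += lr * label * x[1]  # toward the right answer
                b += lr * label
    return w, b

def predict(w, b, x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1

# Two linearly separable clusters standing in for the two classes
examples = [((2.0, 2.5), 1), ((3.0, 2.0), 1),
            ((-2.0, -1.5), -1), ((-1.0, -2.0), -1)]
w, b = train_perceptron(examples)
print([predict(w, b, x) == y for x, y in examples])  # [True, True, True, True]
```

Each wrong guess leaves the model slightly less wrong than before, which is exactly the "kids learn by getting it wrong" dynamic in Olson's analogy.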

Overtraining, Olson said, means the AI has essentially memorized the training set of data and can get a perfect score on a test using that data. An overtrained system is unprepared, however, to look at new information, such as new medical images from actual patients, and find what it's supposed to find.

"If I memorize it, then my test performance will be perfect, but when I take it out in the real world, it would be very brittle," Olson said.

Once well trained, the AI can be used with an "augmented reality microscope," or ARM, so pathologists can more quickly and accurately identify diseases in medical imagery, Olson said.

"An augmented reality microscope has a little camera and a tiny little projector, and the little camera sends information to a computer and the computer sends different information back to the projector," Olson said. "The projector pushes information into something like a heads-up display for a pilot, where information is projected in front of the eyes."

With an ARM, medical professionals view tissue samples with information provided by an AI overlaid on top: information that helps them more accurately identify cells that might be cancerous, for instance.

While the AI that DIU hopes to train will eventually help medical professionals do a better job of identifying cancers, it won't replace their expertise. There must always be a medical professional making the final call when it comes to treatment for patients, Higgins said.

"The prototype of this technology that we're adopting will not replace the practitioner," he said. "It is an enabler it is not a cure-all. It is designed to enhance our people and their decision making. If there's one thing that's true about DOD, it's that people are our most important resource. We want to give them the best tools to succeed at their job.

"AI is obviously the pinnacle of that type of tool in terms of what it can do and how it can help people make decisions," he continued. "The intent here is to arm them with an additional tool so that they make confident decisions 100% of the time."

The Predictive Health project is expected to end within 24 months, and the project might then make its way out to practitioners for further testing.

The role of DIU is taking commercial technology, prototyping it beyond a proof of concept, and building it into a scalable solution for DOD.

Excerpt from:
Defense Innovation Unit Teaching Artificial Intelligence to Detect Cancer - Department of Defense

7 Successful Ways To Use Artificial Intelligence To Improve Your Business Processes – Forbes

Now more than ever, you may be looking for ways to make your business more efficient, more streamlined, more cost-effective, and better able to cope with changing market needs. Artificial intelligence, in particular AI-driven automation, is helping companies achieve all this and more.

7 Successful Ways To Use Artificial Intelligence To Improve Your Business Processes

Here are seven ways AI is transforming everyday business processes for the better.

1. Improving meetings

Okay, so AI can't eliminate meetings altogether. In fact, the coronavirus pandemic has shown us how vital maintaining human connections is, even from a distance, which means meetings are definitely here to stay. But AI can at least help to cut down the tiresome admin involved before, during, and after meetings.

For example, voice assistants such as Google Duplex can schedule appointments for you. Then there's Voicea's EVA assistant, which can listen in on your meetings, capture key highlights and actions, and create and share actionable notes afterward. Another tool, called Sonia, does a similar thing, but is designed to capture client calls, transcribing the entire conversation and automatically summarizing key items and actions.

2. Enhancing sales and marketing

Many off-the-peg CRM solutions now incorporate AI analytics, enabling sales teams to automatically generate valuable insights. For example, Salesforce's Einstein AI technology can predict which customers are most likely to generate more revenue, and which are most likely to take their custom elsewhere. Armed with knowledge like this, salespeople can focus their time and energy where it matters most.

Then there's the widespread use of chatbots, which is helping organizations boost sales, drive revenue, and grow their audience. In one example, UK retailer Marks & Spencer added a virtual digital assistant to its website to help customers solve common issues, a move that has reportedly saved millions of pounds' worth of sales that would otherwise have been lost as frustrated customers bounced off the site.

3. Assessing and improving customer service

When it comes to call center operations, automation is nothing new; simple inquiries have been met with automated menu services for some time. But one tech company says it can help companies automatically judge the quality of human customer service calls. Transcosmos's AI solution automatically assesses the quality of service given, at speed and with human accuracy, and can detect inappropriate and problematic customer service with more than twice the accuracy of a voice recognition system.

4. Improving product development processes

Generative design is a cutting-edge field that uses AI to augment the creative process. With generative design software, you simply input your design goals and other requirements and let the software explore all the possible designs that could fulfill those specifications, meaning you can quickly generate multiple designs from a single idea. The software does all the heavy lifting of working out what works and what doesn't, saving many, many hours. Plus, you avoid the expense of creating prototypes that don't deliver.
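In its crudest form, that explore-and-filter workflow can be caricatured as a random search: generate many candidate designs, discard those that violate the stated requirements, and keep the best performer. Real generative design tools explore far richer design spaces; the "beam" parameters, constraint and objective below are invented purely for illustration.

```python
# A minimal caricature of generative design: enumerate many candidate
# designs, discard those that violate the requirements, keep the best.
# The design parameters and constraints here are invented.
import random

random.seed(0)  # deterministic for reproducibility

def candidate():
    # A hypothetical beam design: width and height in centimeters.
    return {"width": random.uniform(1, 10), "height": random.uniform(1, 10)}

def meets_requirements(d):
    # Invented constraint: cross-section area must support the load.
    return d["width"] * d["height"] >= 20

def cost(d):
    # Invented objective: less material (smaller area) is better.
    return d["width"] * d["height"]

designs = [candidate() for _ in range(10_000)]
feasible = [d for d in designs if meets_requirements(d)]
best = min(feasible, key=cost)  # cheapest design that still meets the spec
```

The appeal of the real tools is that they replace this blind sampling with guided exploration, but the division of labor is the same: the human states goals and constraints, the software searches.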

5. Automating content generation

This article wasn't written by a robot. But it could have been. Thanks to AI, machines are now capable of generating engaging, informative text, to the extent that organizations like Forbes are producing articles with the help of AI.

From writing product descriptions and web copy, to industry articles and reports, there's a range of AI-driven content tools available. For example, e-commerce leader Alibaba has come up with a tool called AI-CopyWriter that's capable of generating more than 20,000 lines of copy in just one second.

6. Enhancing the manufacturing process

The use of robots in manufacturing is well established. But the latest generation of robotic systems is capable of working alongside humans and interacting seamlessly (and safely) with the human workforce. This has given rise to the term "cobots," or collaborative robots.

Thanks to AI technologies like machine vision, cobots are aware of the humans around them and can react accordingly, for example by adjusting their speed or reversing to avoid humans, meaning workflows can be designed to get the very best out of both humans and robots. Easy to program, fast to set up, and with an average price tag of around $24,000 each, cobots are a viable option to help smaller and mid-sized firms compete with larger manufacturers.

7. Refining recruitment

HR may not seem an obvious match with AI. Yet AI is fast finding many uses in HR processes, including recruitment. For large employers like Unilever, which recruits around 30,000 people a year and handles 1.8 million applications, finding ways to streamline and improve the recruitment process is essential. That's why Unilever partnered with AI recruitment specialist Pymetrics to create an online platform capable of conducting initial assessments of candidates in their own homes. According to Unilever, this automated screening of candidates has cut around 70,000 person-hours of interviewing and assessing.

AI is going to impact businesses of all shapes and sizes, across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.

More:
7 Successful Ways To Use Artificial Intelligence To Improve Your Business Processes - Forbes

IDTechEx Research Details Opportunities and Challenges of Artificial Intelligence in Robotic Surgery – PRNewswire

BOSTON, Aug. 24, 2020 /PRNewswire/ -- In its recently published report "Innovations in Robotic Surgery 2020-2030: Technologies, Players & Markets", IDTechEx reports that the robotic surgery market will reach over $12 billion by 2030. The report breaks down the market landscape and emerging technologies in the field of robotic surgery.

The rapid progress of artificial intelligence (AI) technologies in the last 5-10 years has led many to associate it with robotic surgery systems. Currently, however, few robotic surgery systems are equipped with AI-driven human-robot interaction capabilities.

AI offers numerous opportunities for the advancement of robotic surgery. It can facilitate interaction between surgeons and surgical robots, for example by recognizing surgeons' movements (e.g. head, eyes, hand) and converting them into action commands for the surgical robot. AI can also enable verbal control of a surgical robot through speech recognition. Although the precision and accuracy of speech recognition have improved with the integration of deep learning, this type of technology remains in its early stages and requires further development to become reliable.
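One way to picture verbal control of a surgical robot is a fixed table mapping recognized phrases to robot actions, gated by a confidence threshold so that uncertain recognition results in no action at all, reflecting the reliability concern raised above. The phrases, actions and threshold below are entirely hypothetical, not any vendor's actual command set.

```python
# Hedged sketch: turning recognized speech into robot commands via a
# phrase-to-action table plus a confidence gate. Everything here
# (phrases, action names, threshold) is hypothetical.

COMMANDS = {
    "arm left": ("translate_arm", (-1, 0)),
    "arm right": ("translate_arm", (1, 0)),
    "zoom in": ("adjust_camera", ("zoom", 1)),
    "stop": ("halt", ()),
}

CONFIDENCE_THRESHOLD = 0.9  # refuse to act on low-confidence recognition

def dispatch(phrase, confidence):
    """Return the command for a recognized phrase, or None if uncertain
    or unrecognized: in surgery, doing nothing beats doing the wrong thing."""
    if confidence < CONFIDENCE_THRESHOLD:
        return None
    return COMMANDS.get(phrase.lower().strip())

print(dispatch("Arm Left", 0.95))  # ('translate_arm', (-1, 0))
print(dispatch("arm left", 0.60))  # None: recognition too uncertain to act
```

The hard part in practice is not the dispatch table but making the recognizer's confidence trustworthy enough that the threshold means something, which is precisely why the technology is described as early-stage.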

AI facilitates robotic instrument positioning. For example, ML algorithms in orthopedic surgery robots allow pre-operative planning by building a virtual model of the patient's anatomy and enable the creation of a trajectory for intervention (e.g. drilling, screw implantation). This reduces the chance of human error.

So, when will AI become widely implemented in robotic surgery systems? Currently, its use is restricted to image recognition algorithms for pre-operative planning. There is currently no clear path for other forms of AI in robotic surgery.

Regulations are the biggest roadblock. Regulatory frameworks are not built to accommodate adaptive technologies such as AI, because AI algorithms constantly learn and change. Once an algorithm adapts, it is no longer the same algorithm and cannot be used in medical practice without updated approvals. While their understanding of AI remains limited, regulatory bodies view this unpredictability as too risky to approve for a surgical robot. They are in the process of designing new methods to regulate AI, but these will take years to come into effect.

To find out more on the use of AI in robotic surgery, please refer to the IDTechEx report "Innovations in Robotic Surgery 2020-2030: Technologies, Players & Markets". IDTechEx's findings are not restricted to AI only and cover the entire robotic surgery industry. The report breaks down the market landscape and emerging technologies, highlights the latest trends and provides market forecasts for the next decade.

For more information on this report, please visit http://www.IDTechEx.com/RoSurgery or for the full portfolio of related research available from IDTechEx please visit http://www.IDTechEx.com/Research.

IDTechEx guides your strategic business decisions through its Research, Consultancy and Event products, helping you profit from emerging technologies. For more information on IDTechEx Research and Consultancy, contact [emailprotected] or visit http://www.IDTechEx.com.

Media Contact:

Natalie Moreton, Digital Marketing Manager, [emailprotected], +44 (0)1223 812300

SOURCE IDTechEx

See the rest here:
IDTechEx Research Details Opportunities and Challenges of Artificial Intelligence in Robotic Surgery - PRNewswire

Device Insight and Sentian launch the era of Artificial Intelligence of Things – IoT Business News

New alliance connecting AI and IoT.

This cooperation combines two of the most important current fields of technology, AI and IoT, to form the Artificial Intelligence of Things (AIoT), and at the same time takes the intelligent automation of industrial manufacturing processes to a whole new level, enabling companies to increase the efficiency of their production by up to 30 percent.

Until now, most industrial companies have concentrated on predictive maintenance, leaving untapped the opportunity to optimize their core processes with the help of artificial intelligence. In fact, it is precisely these gradual improvements in production processes that offer promising business value, enabling companies to significantly increase both their product quality and the efficiency of their operations.

The goal of the innovative AIoT approach is to continuously reduce deviations from the optimum within manufacturing processes. Fewer deviations mean improved machine and system performance, less waste, lower costs and, above all, more top-quality products. The result: income, profit and customer satisfaction all increase noticeably, and production is transformed into a Smart Factory.

For the implementation of AIoT projects, Device Insight brings to the partnership its expertise in connecting machines, aggregating and managing IoT data, and linking AI applications. Additional value comes from the Munich-based IoT pioneer's many years of expertise in the analysis and visualization of evaluations based on high-performance IoT components.

Swedish AI specialist Sentian contributes its advanced algorithms and solutions that help reduce deviations within individual production processes or even entire plants. Sentian's mathematical optimization approach is groundbreaking, allowing fast and extremely precise planning as well as flexible replanning throughout production. Another special component is Sentian's novel, model-based approach to reinforcement learning, the latest development in deep learning.

Thanks to this unique combination of AI and IoT, Device Insight and Sentian are now able to accompany companies on the way to intelligent production: away from individual solutions and selective improvements, such as those possible with predictive maintenance, and towards a holistically optimized smart factory.

Predictive maintenance is still very important for the industry. When it comes to process optimization, however, predictive maintenance can only be of limited help, says Marten Schirge, Managing Director at Device Insight.

The real challenge within industry lies elsewhere. These days, many control systems are outdated and not very adaptable, while at the same time machines are becoming increasingly complex. This is the conflict area where we begin with AIoT. Together with our partner Sentian, we want to help companies fully exploit the hidden potential for better efficiency, higher quality and ultimately more profit.

Bringing AI to the core of production enables companies to truly benefit from AI, says Martin Rugfelt, CEO at Sentian. The potential of AIoT and our cooperation to deliver fully scalable solutions provides proof of value rather than just technical proofs. AI is ready to be operationalized.

Read more here:
Device Insight and Sentian launch the era of Artificial Intelligence of Things - IoT Business News

EchoNous, Inc. Seeks to Redefine Bedside Care With the Launch of Trio, an Advanced Artificial Intelligence Capability on Its Kosmos Platform -…

REDMOND, Wash., Aug. 24, 2020 /PRNewswire/ --EchoNous is launching Trio*, a set of algorithms for its cutting-edge POCUS tool, Kosmos, that will make scanning more accessible for doctors of all experience levels. The technology will help doctors guide the probe into position, grade image quality, and label cardiac structures in real-time. Reducing the steep learning curve associated with ultrasound, the AI helps doctors arrive at a confident diagnosis faster and more easily.

"The physical, or bedside exam, hasn't fundamentally changed since before we had color TV," says EchoNous founder Kevin Goodwin. "The launch of our AI-driven guiding, grading, and labeling is a big first step in our mission to revolutionize bedside clinical assessment."

The Trio of algorithms is powered by machine learning, and designed to help doctors break the barriers that have impeded ultrasound adoption: the nuances of acquiring clear images and reliably interpreting the results. In addition, it can help doctors quickly calculate key measures like ejection fraction once they are locked into the best view.

"For all those clinicians who have been reluctant or unable to start using ultrasound, and don't have an expert to stand over their shoulder and coach them, help has arrived in the form of Kosmos," says Dr. Mike Blaivas, EchoNous Chief Medical Officer and emergency physician at St. Francis Hospital-Columbus.

For medical students just learning to scan, Kosmos helps ensure they guide the probe properly and understand what they're seeing. For more experienced doctors in primary care, acute care, cardiology, and beyond, Kosmos adds confidence that they're acquiring the optimal image, even for less familiar angles.

"Ultimately this is about raising standards for the patient," says Dr. Adaira Landry, emergency physician and ultrasound faculty at Brigham and Women's Hospital. "The more doctors we have using POCUS fluently, the more patients will be diagnosed quickly and accurately. No wasted motion. No unnecessary steps." As the first to embed these AI capabilities into the physical device, Kosmos can give doctors a far more holistic view of their patients immediately and without leaving the bedside. EchoNous will continue to release new AI-driven applications over the next year, all aimed at empowering doctors at the point-of-care.

*The Trio is a real-time automatic image labeling, grading and guidance system to enable the collection of images by healthcare practitioners, including those who are not trained in sonography, to address urgent image analysis needs during the declared COVID-19 public health emergency. The Trio is intended to be used by qualified healthcare professionals or under the supervision or in-person guidance of a trained or licensed healthcare professional. This feature has not been cleared by the FDA.

About EchoNous: EchoNous' vision since inception has been to create an unprecedented diagnostic tool in the hand-held format that is low-cost and delivers high clinical value through the meaningful application of artificial intelligence. EchoNous will continue to apply deep learning tools to clinical challenges in everyday healthcare.

http://www.echonous.com

http://www.kosmosplatform.com

Media Contact:

Anais Concepcion, [emailprotected], (425) 420-0517

Related Video

SOURCE EchoNous Inc.

Echonous Homepage

Link:
EchoNous, Inc. Seeks to Redefine Bedside Care With the Launch of Trio, an Advanced Artificial Intelligence Capability on Its Kosmos Platform -...

Moravian Academy junior is helping harness artificial intelligence to spot COVID-19 – lehighvalleylive.com

Mikail Jaffer is about to start his junior year at Moravian Academy and isn't yet sure what he wants to pursue after graduation.

But one thing that's clear is he isn't afraid to set his sights high.

Jaffer, 16 and from the Allentown area, is working with a Plano, Texas-based company called CovidScan.ai on harnessing the power of artificial intelligence to diagnose COVID-19, the illness caused by the novel coronavirus.

He grew up in the biopharmaceutical industry by way of his mother and father, Fatima and Gulam Jaffer, who own Yourway, an integrated biopharmaceutical supply chain solutions provider based in Upper Macungie Township.

A classmate introduced him to the CovidScan company, and he jumped at the opportunity.

"Basically I came in and helped come up with different ideas to help develop the program and make it more friendly to the user and give more data or insight to doctors or radiologists," he told lehighvalleylive.com.

Mikail Jaffer, a rising junior at Moravian Academy, is inset against a file photo of a traditional method of analyzing chest X-rays. He is working with a company called CovidScan.ai to advance an artificial intelligence program for diagnosing COVID-19 based on chest X-ray images. (Courtesy photo | NJ Advance Media file photo)

The idea is to run chest X-ray images through the artificial intelligence program to quickly determine whether the patient has COVID-19 or some other lung disorder, or is normal. It's trained on thousands of images and designed to be incorporated into traditional health care digital systems for widespread use, said Moksh Nirvaan, the company's co-founder and head of AI development.

The program's overall accuracy rate is running around 96%, which breaks down to nearly 99% for COVID-19 cases, 95% for non-coronavirus ailments and 92% for normal diagnoses, Nirvaan said in a telephone interview from Plano.
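An overall accuracy figure like this is the per-class accuracies weighted by how many test images fall into each class. The sketch below shows the arithmetic using the rates reported above; the class counts are hypothetical, chosen only to illustrate the weighting, since the article does not give the test-set composition.

```python
# Overall accuracy as a weighted average of per-class accuracies.
# Per-class rates are the ones reported; class counts are hypothetical.

per_class_accuracy = {"covid": 0.99, "other_ailment": 0.95, "normal": 0.92}
class_counts = {"covid": 400, "other_ailment": 300, "normal": 300}  # hypothetical

total = sum(class_counts.values())
overall = sum(per_class_accuracy[c] * class_counts[c]
              for c in class_counts) / total
print(round(overall, 3))  # 0.957, i.e. roughly the 96% reported
```

The exact overall figure depends on the class mix: a test set with more of the hardest class ("normal" here, at 92%) would pull the overall number down.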

The effort won a cash prize for second place in a Facebook Hackathon earlier this year.

This type of technology shows promise, particularly for areas with too few physicians or radiologists, the National Institutes of Health said in a research publication focused on "Chest X-ray Analysis using Machine Intelligence Research for HIV/TB Screening."

"Advances in machine learning and artificial intelligence techniques offer a promise to supplement rapid, accurate, and reliable computer-assisted disease screening," the NIH says. "Such techniques are particularly valuable in overburdened and/or resource-constrained regions. These regions also tend to exhibit high prevalence of infectious diseases and report high mortality."

CovidScan.ai has been in the works since spring, when the founders realized the unprecedented strain COVID-19 was placing on the health care system, Nirvaan said.

Plans are to partner with five to 10 clinics to begin validating the program as early as September or October before bringing it to market, he said.

Jaffer has been helpful "from a business standpoint for scalability" and efforts to get the program into use, according to Nirvaan.

"My idea was, how could I help to advance this technology while also bringing it to market," Jaffer said.

Globally, as of Friday, there have been 22,536,278 confirmed cases of COVID-19, including 789,197 deaths, reported to the World Health Organization. The United States from Jan. 20 to Friday has seen 5,477,305 confirmed cases of COVID-19 with 172,033 deaths, according to the WHO.


Kurt Bresswein may be reached at kbresswein@lehighvalleylive.com.

More:
Moravian Academy junior is helping harness artificial intelligence to spot COVID-19 - lehighvalleylive.com