7 Successful Ways To Use Artificial Intelligence To Improve Your Business Processes – Forbes

Now more than ever, you may be looking for ways to make your business more efficient, more streamlined, more cost-effective, and better able to cope with changing market needs. Artificial intelligence, and AI-driven automation in particular, is helping companies achieve all this and more.

Here are seven ways AI is transforming everyday business processes for the better.

1. Improving meetings

Okay, so AI can't eliminate meetings altogether. In fact, the coronavirus pandemic has shown us how maintaining human connections is vital, even from a distance, which means meetings are definitely here to stay. But AI can at least help to cut down the tiresome admin involved before, during, and after meetings.

For example, voice assistants such as Google Duplex can schedule appointments for you. Then there's Voicea's EVA assistant, which can listen in on your meetings, capture key highlights and actions, and create and share actionable notes afterward. Another tool, called Sonia, does a similar thing, but is designed to capture client calls, transcribing the entire conversation and automatically summarizing key items and actions.

2. Enhancing sales and marketing

Many off-the-peg CRM solutions now incorporate AI analytics, enabling sales teams to automatically generate valuable insights. For example, Salesforce's Einstein AI technology can predict which customers are most likely to generate more revenue, and which are most likely to take their custom elsewhere. Armed with knowledge like this, salespeople can focus their time and energy where it matters most.

Then there's the widespread use of chatbots, which is helping organizations boost sales, drive revenue, and grow their audience. In one example, UK retailer Marks & Spencer added a virtual digital assistant to its website to help customers solve common issues, a move which has reportedly saved millions of pounds' worth of sales that would otherwise have been lost as frustrated customers bounced off the site.

3. Assessing and improving customer service

When it comes to call center operations, automation is nothing new; simple inquiries have been met with automated menu services for some time. But one tech company says it can help companies automatically judge the quality of human customer service calls. Transcosmos's AI solution automatically assesses the quality of service given, at speed and with human-level accuracy, and can detect inappropriate and problematic customer service with more than twice the accuracy of a voice recognition system.

4. Improving product development processes

Generative design is a cutting-edge field that uses AI to augment the creative process. With generative design software, you simply input your design goals and other requirements and let the software explore all the possible designs that could fulfill those specifications, meaning you can quickly generate multiple designs from a single idea. The software does all the heavy lifting of working out what works and what doesn't, saving many hours of time. Plus, you avoid the expense of creating prototypes that don't deliver.
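The generate-and-evaluate loop at the heart of generative design can be sketched as a simple random search. Everything below is invented for illustration, not taken from any real generative design package: a part described by three dimensions, a crude stiffness proxy, and a material budget standing in for the "design goals and other requirements".

```python
import random

random.seed(0)  # reproducible sampling

def generate_design(bounds):
    """Sample one candidate design: here, width/height/thickness of a bracket."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}

def meets_spec(d):
    """Invented requirements: a minimum stiffness proxy and a material budget."""
    stiffness = d["width"] * d["height"] ** 2 * d["thickness"]
    material = d["width"] * d["height"] * d["thickness"]
    return stiffness >= 50.0 and material <= 30.0

def explore(bounds, n=10_000):
    """Generate many candidates, keep the feasible ones, rank by material used."""
    candidates = (generate_design(bounds) for _ in range(n))
    feasible = [d for d in candidates if meets_spec(d)]
    return sorted(feasible, key=lambda d: d["width"] * d["height"] * d["thickness"])

bounds = {"width": (1.0, 5.0), "height": (1.0, 5.0), "thickness": (0.5, 3.0)}
designs = explore(bounds)
print(len(designs), "feasible designs; leanest candidate:", designs[0])
```

Real tools use far more sophisticated solvers and physical simulation, but the shape of the workflow is the same: state goals and constraints, let the software enumerate candidates, and review only the designs that survive the specification.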

5. Automating content generation

This article wasn't written by a robot. But it could have been. Because, thanks to AI, machines are now capable of generating engaging, informative text, to the extent that organizations like Forbes are producing articles with the help of AI.

From writing product descriptions and web copy, to industry articles and reports, there's a range of AI-driven content tools available. For example, e-commerce leader Alibaba has come up with a tool called AI-CopyWriter that's capable of generating more than 20,000 lines of copy in just one second.

6. Enhancing the manufacturing process

The use of robots in manufacturing is well established. But the latest generation of robotic systems is capable of working alongside humans and interacting seamlessly (and safely) with the human workforce. This has given rise to the term "cobots," or collaborative robots.

Thanks to AI technologies like machine vision, cobots are aware of the humans around them and can react accordingly, for example by adjusting their speed or reversing to avoid collisions, meaning workflows can be designed to get the very best out of both humans and robots. Easy to program, fast to set up, and with an average price tag of around $24,000 each, cobots are a viable option to help smaller and mid-sized firms compete with larger manufacturers.

7. Refining recruitment

HR may not seem an obvious match for AI. Yet AI is fast finding many uses in HR processes, including recruitment. For large employers like Unilever, which recruits around 30,000 people a year and handles 1.8 million applications, finding ways to streamline and improve the recruitment process is essential. That's why Unilever partnered with AI recruitment specialist Pymetrics to create an online platform capable of conducting initial assessments of candidates in their own homes. According to Unilever, around 70,000 person-hours of interviewing and assessing candidates have been cut thanks to this automated screening.

AI is going to impact businesses of all shapes and sizes, across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.


Defense Innovation Unit Teaching Artificial Intelligence to Detect Cancer – Department of Defense

The Defense Innovation Unit is bringing together the best of commercially available artificial intelligence technology and the Defense Department's vast cache of archived medical data to teach computers how to identify cancers and other medical irregularities.

The result will be new tools medical professionals can use to more accurately and more quickly identify medical issues in patients.

The new DIU project, called "Predictive Health," also involves the Defense Health Agency, three private-sector businesses and the Joint Artificial Intelligence Center.

The new capability directly supports the development of the JAIC's warfighter health initiative, which is working with the Defense Health Agency and the military services to field AI solutions that are aimed at transforming military health care. The JAIC is also providing the funding and adding technical expertise for the broader initiative.

"The JAIC's contributions to this initiative have engendered the strategic development of required infrastructure to enable AI-augmented radiographic and pathologic diagnostic capabilities," said Navy Capt. (Dr.) Hassan Tetteh, the JAIC's Warfighter Health Mission Initiative chief. "Given the military's unique, diverse, and rich data, this initiative has the potential to complement other significant military medical advancements, to include antisepsis, blood transfusions, and vaccines."

A big part of the Predictive Health project will involve training AI to look at de-identified DOD medical imagery to teach it to identify cancers. The AI can then be used with augmented reality microscopes to help medical professionals better identify cancer cells.

Nathanael Higgins, the support contractor managing the program for DIU, explained what the project will mean for the department.

"From a big-picture perspective, this is about integrating AI into the DOD health care system," Higgins said. "There are four critical areas we think this technology can impact. The first one is, it's going to help drive down cost."

The earlier medical practitioners can catch a disease, Higgins said, the easier it will be to anticipate outcomes and to provide less invasive treatments. That means lower cost to the health care system overall, and to the patient, he added.

Another big issue for DOD is maximizing personnel readiness, Higgins said.

"If you can cut down on the number of acute issues that come up that prevent people from doing their job, you essentially help our warfighting force," he explained.

Helping medical professionals do their jobs better is also a big part of the Predictive Health project, Higgins said.

"Medical professionals are already overworked," he said. "We're essentially giving them an additional tool that will help them make confident decisions and know that they made the right decision so that we're not facing as many false negatives or false positives. And ultimately we're able to identify these types of disease states earlier, and that'll help the long-term prognosis."

In line with the department adding an additional line of effort focused on taking care of people to the National Defense Strategy, Higgins said using AI to identify medical conditions early will help to optimize warfighter performance as well.

"Early diagnosis equals less acute injuries, which means less invasive procedures, which means we have more guys and gals in our frontline forces and less cost on the military health care system," he said. "The ultimate value here is really saving lives as people are our most valuable resource."

Using AI to look for cancer first requires researchers to teach AI what cancer looks like. This requires having access to a large set of training data. For the Predictive Health project, this will mean a lot of medical imagery of the kind produced by CT scans, MRIs, X-rays and slide imagery made from biopsies, and knowing ahead of time that the imagery depicts the kind of illnesses, such as cancer, that researchers hope to train the AI to identify.

DOD has access to a large set of this kind of data. Dr. Niels Olson, the DIU chief medical officer and originator of the Predictive Health project, said DOD also has a very diverse set of data, given its size and the array of people for which the department's health care system is responsible.

"If you think about it, the DOD, through retired and active duty service, is probably one of the largest health care systems in the world, at about 9 million people," Olson said. "The more data a tool has available to it, the more effective it is. That's kind of what makes DOD unique. We have a larger pool of information to draw from, so that you can select more diverse cases."

"Unlike some of the other large systems, we have a pretty good representation of the U.S. population," he said. "The military actually has a nice smooth distribution of population in a lot of ways that other regional systems don't have. And we have it at scale."

While DOD does have access to a large set of diverse medical imaging data that can be used to train an AI, Olson said privacy will not be an issue.

"We'll use de-identified information, imaging, from clinical specimens," Olson said. "So this means actual CT images and actual MRI images of people who have a disease, where you remove all of the identifiers and then just use the diagnostic imaging and the actual diagnosis that the pathologist or radiologist wrote down."

AI doesn't need to know who the medical imaging has come from; it just needs to see a picture of cancer to learn what cancer is.

"All the computer sees is an image that is associated with some kind of disease, condition or cancer," Olson said. "We are ensuring that we mitigate all risk associated with [the Health Insurance Portability and Accountability Act of 1996], personally identifiable information and personal health information."
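A minimal sketch of that de-identification step, assuming a hypothetical record layout (the field names and values here are invented for illustration, not DOD's actual schema): strip every identifying field and keep only the imaging and the written diagnosis.

```python
# Hypothetical identifying fields; a real HIPAA workflow covers many more.
IDENTIFIERS = {"name", "ssn", "dob", "address", "patient_id"}

def deidentify(record):
    """Keep only the imaging data and the written diagnosis; drop identifiers."""
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}

record = {
    "name": "Jane Doe",            # placeholder, not real data
    "patient_id": "A-1234",
    "dob": "1980-01-01",
    "image": "<CT pixel data>",
    "diagnosis": "adenocarcinoma, stage II",
}
clean = deidentify(record)
print(clean)  # only the image and the diagnosis remain
```

The training pipeline then sees only image/label pairs, which is exactly the pairing the learning step needs.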

Using the DOD's access to training data and commercially available AI technology, the DIU's Predictive Health project will need to train the AI to identify cancers. Olson explained that teaching an AI to look at a medical image and identify what is cancer is a process similar to that of a parent teaching a child to correctly identify things they might see during a walk through the neighborhood.

"The kid asks 'Mom, is that a tree?' And Mom says, 'No, that's a dog,'" Olson explained. "The kids learn by getting it wrong. You make a guess. We formally call that an inference, a guess is an inference. And if the machine gets it wrong, we tell it that it got it wrong."

The AI can guess over and over again, learning each time about how it got the answer wrong and why, until it eventually learns how to correctly identify a cancer within the training set of data, Olson said, though he said he doesn't want it to get too good.

Overtraining, Olson said, means the AI has essentially memorized the training set of data and can get a perfect score on a test using that data. An overtrained system is unprepared, however, to look at new information, such as new medical images from actual patients, and find what it's supposed to find.

"If I memorize it, then my test performance will be perfect, but when I take it out in the real world, it would be very brittle," Olson said.
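The memorization problem Olson describes can be made concrete with a toy sketch (the data and "model" below are invented for illustration): a classifier that simply looks up its training examples scores perfectly on the data it has seen, yet falls apart on held-out cases it never saw.

```python
import random

random.seed(0)

# Toy "images": 2-D points; label = 1 ("cancer") if the point falls inside a disc.
def true_label(x, y):
    return 1 if x * x + y * y < 1.0 else 0

points = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(300)]
data = [((x, y), true_label(x, y)) for x, y in points]
train, val = data[:200], data[200:]  # held-out cases the model never sees

class Memorizer:
    """An 'overtrained' model: a lookup table over exact training inputs."""
    def fit(self, rows):
        self.table = {p: y for p, y in rows}
    def predict(self, p):
        return self.table.get(p, 0)  # blind guess for anything unseen

def accuracy(model, rows):
    return sum(model.predict(p) == y for p, y in rows) / len(rows)

m = Memorizer()
m.fit(train)
print("train accuracy:", accuracy(m, train))  # perfect: it memorized the answers
print("val accuracy:  ", accuracy(m, val))    # brittle on cases it has not seen
```

This is why training pipelines always score a model on data it was never shown: a widening gap between training and validation accuracy is the signal that the model has started memorizing rather than learning.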

Once well trained, the AI can be used with an "augmented reality microscope," or ARM, so pathologists can more quickly and accurately identify diseases in medical imagery, Olson said.

"An augmented reality microscope has a little camera and a tiny little projector, and the little camera sends information to a computer and the computer sends different information back to the projector," Olson said. "The projector pushes information into something like a heads-up display for a pilot, where information is projected in front of the eyes."

With an ARM, medical professionals view tissue samples with information provided by an AI overlaid on top, information that helps them more accurately identify cells that might be cancerous, for instance.

While the AI that DIU hopes to train will eventually help medical professionals do a better job of identifying cancers, it won't replace their expertise. There must always be a medical professional making the final call when it comes to treatment for patients, Higgins said.

"The prototype of this technology that we're adopting will not replace the practitioner," he said. "It is an enabler; it is not a cure-all. It is designed to enhance our people and their decision making. If there's one thing that's true about DOD, it's that people are our most important resource. We want to give them the best tools to succeed at their job.

"AI is obviously the pinnacle of that type of tool in terms of what it can do and how it can help people make decisions," he continued. "The intent here is to arm them with an additional tool so that they make confident decisions 100% of the time."

The Predictive Health project is expected to end within 24 months, and the project might then make its way out to practitioners for further testing.

The role of DIU is to take commercial technology, prototype it beyond a proof of concept, and build it into a scalable solution for DOD.


IDTechEx Research Details Opportunities and Challenges of Artificial Intelligence in Robotic Surgery – PRNewswire

BOSTON, Aug. 24, 2020 /PRNewswire/ -- In its recently published report "Innovations in Robotic Surgery 2020-2030: Technologies, Players & Markets", IDTechEx reports that the robotic surgery market will reach over $12 billion by 2030. The report breaks down the market landscape and emerging technologies in the field of robotic surgery.

The rapid progress of artificial intelligence (AI) technologies in the last 5-10 years has led many to associate it with robotic surgery systems. Currently, however, few robotic surgery systems are equipped with AI-driven human-robot interaction capabilities.

AI offers numerous opportunities for the advancement of robotic surgery. It can facilitate new interaction mediums between surgeons and surgical robots, for example by recognizing surgeons' movements (e.g. head, eyes, hand) and converting them into action commands for the surgical robot. AI can also enable verbal control of a surgical robot arm through speech recognition. Although the precision and accuracy of speech recognition have improved with the integration of deep learning, this type of technology remains in its early stages and requires further development to become reliable.

AI facilitates robotic instrument positioning. For example, ML algorithms in orthopedic surgery robots allow pre-operative planning by building a virtual model of the patient's anatomy and enable the creation of a trajectory for intervention (e.g. drilling, screw implantation). This reduces the chance of human error.

So, when will AI become widely implemented in robotic surgery systems? Currently, its use is restricted to image recognition algorithms for pre-operative planning. There is currently no clear path for other forms of AI in robotic surgery.

Regulations are the biggest roadblock. Regulatory frameworks are not built to accommodate adaptive technologies such as AI, because AI algorithms constantly learn and change. When an algorithm adapts, it is no longer the same algorithm and cannot be used in medical practice without updated approvals. While their understanding of AI remains vague, regulatory bodies view this unpredictability as too risky to approve for a surgical robot. They are in the process of designing new methods to regulate AI, but these will take years to come into effect.

To find out more on the use of AI in robotic surgery, please refer to the IDTechEx report "Innovations in Robotic Surgery 2020-2030: Technologies, Players & Markets". IDTechEx's findings are not restricted to AI only and cover the entire robotic surgery industry. The report breaks down the market landscape and emerging technologies, highlights the latest trends and provides market forecasts for the next decade.

For more information on this report, please visit http://www.IDTechEx.com/RoSurgery or for the full portfolio of related research available from IDTechEx please visit http://www.IDTechEx.com/Research.

IDTechEx guides your strategic business decisions through its Research, Consultancy and Event products, helping you profit from emerging technologies. For more information on IDTechEx Research and Consultancy, contact [emailprotected] or visit http://www.IDTechEx.com.

Media Contact:

Natalie Moreton, Digital Marketing Manager, [emailprotected], +44 (0)1223 812300

SOURCE IDTechEx


Device Insight and Sentian launch the era of Artificial Intelligence of Things – IoT Business News

New alliance connecting AI and IoT.

This cooperation combines two of the most important current fields of technology, AI and IoT, to form an Artificial Intelligence of Things (AIoT) and at the same time takes the intelligent automation of industrial manufacturing processes to a whole new level, enabling companies to increase the efficiency of their production by up to 30 percent.

Until now, most industrial companies have concentrated on predictive maintenance, leaving the opportunity to optimize their core processes with the help of artificial intelligence unused. In fact, it is precisely these gradual improvements in production processes that offer promising business value, enabling companies to significantly increase their product quality level as well as the efficiency of their operations.

The goal of the innovative AIoT approach is to continuously reduce deviations from the optimum within manufacturing processes. Fewer deviations mean improved machine and system performance, less waste, lower costs and, above all, more top-quality products. The result: income and profit, as well as customer satisfaction, will increase noticeably. Production will be transformed into a Smart Factory.

For the implementation of AIoT projects, Device Insight brings its expertise in connecting machines, aggregating and managing IoT data, and linking AI applications into the partnership. Additional value is created by the Munich-based IoT pioneer's many years of expertise in the analysis and visualization of evaluations based on high-performance IoT components.

Swedish AI specialist Sentian contributes its advanced algorithms and solutions that help reduce deviations within individual production processes or even entire plants. Sentian's mathematical optimization approach is groundbreaking, allowing fast and extremely precise planning as well as flexible replanning throughout production. Another special component is Sentian's novel, model-based approach to Reinforcement Learning, the latest development in deep learning.

Thanks to this unique combination of AI and IoT, Device Insight and Sentian are now able to accompany companies on the way to intelligent production: away from individual solutions and selective improvements, such as those possible with predictive maintenance, and towards a holistically optimized smart factory.

"Predictive maintenance is still very important for the industry. When it comes to process optimization, however, predictive maintenance can only be of limited help," says Marten Schirge, Managing Director at Device Insight.

"The real challenge within industry lies elsewhere. These days, many control systems are outdated and not very adaptable, while at the same time machines are becoming increasingly complex. This is the conflict area where we begin with AIoT. Together with our partner Sentian, we want to help companies fully exploit the hidden potential for better efficiency, higher quality and ultimately more profit."

"Bringing AI to the core of production enables companies to truly benefit from AI," says Martin Rugfelt, CEO at Sentian. "The potential of AIoT and our cooperation to deliver fully scalable solutions provides proof of value rather than just technical proofs. AI is ready to be operationalized."


EchoNous, Inc. Seeks to Redefine Bedside Care With the Launch of Trio, an Advanced Artificial Intelligence Capability on Its Kosmos Platform -…

REDMOND, Wash., Aug. 24, 2020 /PRNewswire/ --EchoNous is launching Trio*, a set of algorithms for its cutting-edge POCUS tool, Kosmos, that will make scanning more accessible for doctors of all experience levels. The technology will help doctors guide the probe into position, grade image quality, and label cardiac structures in real-time. Reducing the steep learning curve associated with ultrasound, the AI helps doctors arrive at a confident diagnosis faster and more easily.

"The physical, or bedside exam, hasn't fundamentally changed since before we had color TV," says EchoNous founder Kevin Goodwin. "The launch of our AI-driven guiding, grading, and labeling is a big first step in our mission to revolutionize bedside clinical assessment."

The Trio of algorithms is powered by machine learning, and designed to help doctors break the barriers that have impeded ultrasound adoption: the nuances of acquiring clear images and reliably interpreting the results. In addition, it can help doctors quickly calculate key measures like ejection fraction once they are locked into the best view.

"For all those clinicians who have been reluctant or unable to start using ultrasound, and don't have an expert to stand over their shoulder and coach them, help has arrived in the form of Kosmos," says Dr. Mike Blaivas, EchoNous Chief Medical Officer and emergency physician at St. Francis Hospital-Columbus.

For medical students just learning to scan, Kosmos helps ensure they guide the probe properly and understand what they're seeing. For more experienced doctors in primary care, acute care, cardiology, and beyond, Kosmos adds confidence that they're acquiring the optimal image, even for less familiar angles.

"Ultimately this is about raising standards for the patient," says Dr. Adaira Landry, emergency physician and ultrasound faculty at Brigham and Women's Hospital. "The more doctors we have using POCUS fluently, the more patients will be diagnosed quickly and accurately. No wasted motion. No unnecessary steps." As the first to embed these AI capabilities into the physical device, Kosmos can give doctors a far more holistic view of their patients immediately and without leaving the bedside. EchoNous will continue to release new AI-driven applications over the next year, all aimed at empowering doctors at the point-of-care.

*The Trio is a real-time automatic image labeling, grading and guidance system to enable the collection of images by healthcare practitioners, including those who are not trained in sonography, to address urgent image analysis needs during the declared COVID-19 public health emergency. The Trio is intended to be used by qualified healthcare professionals or under the supervision or in-person guidance of a trained or licensed healthcare professional. This feature has not been cleared by the FDA.

About EchoNous: EchoNous' vision since inception has been to create an unprecedented diagnostic tool in the hand-held format that is low-cost and delivers high clinical value through the meaningful application of artificial intelligence. EchoNous will continue to apply deep learning tools to clinical challenges in everyday healthcare.

http://www.echonous.com

http://www.kosmosplatform.com

Media Contact:

Anais Concepcion, [emailprotected], (425) 420-0517


SOURCE EchoNous Inc.



Artificial intelligence has a high IQ but no emotional intelligence, and that comes with a cost – BBC Focus Magazine

Several years ago, I packed up my life in Cairo, Egypt, and moved to the UK to pursue my PhD thousands of miles away from everyone I knew and loved. As I settled into my new life, I found myself spending more hours with my laptop than with any other human being.

I felt isolated and incredibly homesick. Chatting online with my family back home, I was often in tears, but they had no idea how I was feeling behind my screen (with the exception of a sad face emoticon that I would send).

I realised then that our technology and devices, which we consider to be smart and helpful in many aspects of our lives, are emotion blind. Technology has a high IQ, but no EQ, no emotional intelligence, and that comes with a cost.

Face-to-face, people share so much emotion and meaning beyond the words they say, through facial expressions, gestures, vocal intonations.


But online, we're relegated to texts, emojis, and one-dimensional expressions of how we feel. All of the richness of our communication disappears in cyberspace, making it more difficult to meaningfully and empathetically connect with one another.

That issue is even more pressing now that we've all been separated from each other by social distancing. We're leaning on technology to stay in touch with loved ones, to work and learn remotely, and everything in between. But we all know it doesn't feel the same. This begs the question: what can we do to preserve our humanity and our emotions in a digital world?

I believe that we can harness the power of artificial intelligence (AI) to redesign technology not to mask these elements of what makes us human, but to emphasise them. I've staked my career on this idea, and believe that AI with emotional intelligence, Emotion AI, will become ingrained in the fabric of the devices we use every day.

There's a rapidly evolving market around Emotion AI: software that can detect nuanced human emotions and complex cognitive states from people's facial and vocal expressions.

As with any new technology, we need to be thoughtful about how Emotion AI is developed, and where it is deployed, given the highly personal and private nature of people's emotions. But I fundamentally believe that this technology has the power to amplify our empathy, online and offline, if applied mindfully and deliberately.

So, what would an emotionally intelligent digital world look like?

One area I've been thinking a lot about recently is online conferences and virtual meetings. Like everyone else, I have spent more than my fair share of hours on Zoom over the past few months while my company, Affectiva, is remote, and many conferences where I was scheduled to speak have shifted to virtual.


When I'm leading a team meeting or presenting a keynote in person, I'm able to take the pulse of how the audience is feeling based on the energy in the room.

I can riff off signs of interest or excitement if I see people's faces light up at an idea; or I can change course if I sense that people are becoming bored or zoning out. But presenting online is like being in a vacuum. I have no idea how people are reacting to what I'm saying, or if they're even paying attention.

If online video platforms were built with Emotion AI, we could restore some of the energy and emotion that's lost, and make meetings and conferences much more engaging as a result. If participants were willing to turn on their device's camera, and opted in to using Emotion AI, the technology could decipher people's emotional expressions and aggregate that data in real time.

Picture an emotion newsfeed or graph that skyrocketed when attendees were excited or smiling, and tapered off when they became bored or disengaged. Not only would that insight help me as a presenter, it could also give attendees a sense of the energy in the (virtual) meeting room, helping restore the camaraderie we feel at in-person events or meetings.
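As a rough sketch of how such a feed might aggregate opted-in signals (the attendee names and per-frame scores below are invented; a real Emotion AI system would derive the scores from facial and vocal analysis), each moment of the meeting collapses into a single point on a live engagement graph:

```python
from statistics import mean

def engagement_feed(frames):
    """frames: one dict per moment, mapping opted-in attendee -> engagement
    score in [0, 1]. Returns one aggregate point per frame for a live graph;
    None where no one has opted in."""
    return [round(mean(scores.values()), 2) if scores else None
            for scores in frames]

frames = [
    {"ann": 0.9, "ben": 0.8},  # an idea lands well
    {"ann": 0.7, "ben": 0.6},
    {"ann": 0.3, "ben": 0.2},  # the audience is zoning out
]
print(engagement_feed(frames))  # [0.85, 0.65, 0.25]
```

Aggregating before display is also a privacy choice: the presenter sees the room's trend, not any individual attendee's expression.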

The applications of Emotion AI aren't limited to our work lives, though. Equally exciting, if not more so, is its potential to bolster our interpersonal communications, and help us interact with each other more meaningfully. That drove one of the very first applications of Emotion AI that I explored: building an Emotion AI application for autism.

People on the autism spectrum often struggle with recognising and responding to non-verbal communications and emotions. But Emotion AI could be used as a tool to help people learn to navigate challenging social and emotional situations. Picture smart glasses with Emotion AI built in, which could give the wearer insight into the emotions of the people they're interacting with.

Companies are already turning this idea into reality. For example, a company called Brain Power is developing the world's first augmented smart glass system (powered by Emotion AI) for kids and students on the autism spectrum.

The results are powerful: parents recount stories of being able to connect with their kids on an emotional level that was previously unimaginable, and the impact that has is incredibly moving.


Another exciting application is in the automotive industry, where Emotion AI can improve road safety. Each year, dangerous driving behaviour (such as drowsy and distracted driving) causes thousands of accidents and fatalities in the UK alone.

Automakers are turning to Emotion AI to address these issues, by developing in-vehicle systems that use Emotion AI to detect signs of distracted or drowsy driving. The idea is that, if a vehicle recognised that its driver was starting to doze off, or was becoming distracted, the system could send an alert to remind the driver to keep their eyes on the road or pull over if the behaviour reaches a dangerous point.
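The escalation logic described here might look something like the following sketch. The score scale, thresholds, and alert names are assumptions for illustration, not any automaker's actual system: sustained mild drowsiness triggers a reminder, while a single severe reading escalates straight to a pull-over prompt.

```python
def alert_level(drowsiness, warn=0.6, danger=0.85, sustain=3):
    """drowsiness: per-second scores in [0, 1] from an in-cabin monitor.
    Warn after `sustain` consecutive readings above `warn`; escalate to
    'pull over' as soon as any single reading crosses `danger`."""
    streak = 0
    for score in drowsiness:
        if score >= danger:
            return "pull over"
        streak = streak + 1 if score >= warn else 0
        if streak >= sustain:
            return "eyes on the road"
    return "ok"

print(alert_level([0.2, 0.3, 0.4]))    # ok
print(alert_level([0.65, 0.7, 0.68]))  # eyes on the road
print(alert_level([0.5, 0.9]))         # pull over
```

Requiring a sustained streak for the milder alert avoids nagging the driver over a single noisy reading, while the hard danger threshold reacts immediately.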

Beyond making our roads safer, Emotion AI could also improve the transportation experience. Automakers are exploring in-cabin sensing systems that can understand the state of the driver, the cabin, and the occupants in it, in order to optimise and personalise the ride.

For example, after a long day at work, your car might be able to sense if you're tired or stressed. Perhaps it could play soothing music, or recommend stopping for take-out from your favourite restaurant on the way home.


When I think about the future with Emotion AI, I envision a world where the de facto human-machine interface more closely mirrors the way humans interact: through conversation, perception and empathy.

I believe this technology will be built into the fabric of the devices we use each day, ranging from our phones to our cars, our smart fridges to our smart speakers; in turn, making our lives safer, healthier and more productive, while making our interactions with others more meaningful and empathetic.

Still, we cannot ignore the potential risks of AI. We need to be deliberate in recognising and mitigating them. Human-centric AI systems, including Emotion AI, deal with data that is highly personal, and we may not (and should not) always be okay with that.

For example, I feel strongly that AI should not be deployed for use cases like security or surveillance, where there is no opportunity for people to opt in and consent. Beyond considering how AI is deployed, it's critical for the tech industry to examine how AI is developed.

The risk of bias in AI is a significant concern: if AI is not built with diverse data and by a diverse team, it will fail to recognise all people, and stands to disenfranchise those it interacts with.

It's up to tech leaders to come together and establish standards for AI ethics, and to advocate for thoughtful regulation. The industry still has a way to go, but this movement is critical in order to realise the positive potential of this technology.

At the end of the day, we need to put the human before the artificial, both in arming our technology with emotional intelligence and in determining the role it will play in our world.

Girl Decoded by Rana el Kaliouby is out now (£20, Penguin Business).

Excerpt from:
Artificial intelligence has a high IQ but no emotional intelligence, and that comes with a cost - BBC Focus Magazine

Moravian Academy junior is helping harness artificial intelligence to spot COVID-19 – lehighvalleylive.com

Mikail Jaffer is about to start his junior year at Moravian Academy, and isn't yet sure what he wants to pursue after graduation.

But one thing that's clear is he isn't afraid to set his sights high.

Jaffer, 16 and from the Allentown area, is working with a Plano, Texas-based company called CovidScan.ai on harnessing the power of artificial intelligence to diagnose COVID-19, the illness caused by the novel coronavirus.

He grew up in the biopharmaceutical industry by way of his mother and father, Fatima and Gulam Jaffer, who own Yourway, an integrated biopharmaceutical supply chain solutions provider based in Upper Macungie Township.

A classmate introduced him to the CovidScan company, and he jumped at the opportunity.

"Basically I came in and helped come up with different ideas to help develop the program and make it more friendly to the user and give more data or insight to doctors or radiologists," he told lehighvalleylive.com.

[Photo caption: Mikail Jaffer, a rising junior at Moravian Academy, is inset against a file photo of a traditional method of analyzing chest X-rays. He is working with a company called CovidScan.ai to advance an artificial intelligence program for diagnosing COVID-19 based on chest X-ray images. Courtesy photo/NJ Advance Media file photo]

The idea is to run chest X-ray images through the artificial intelligence program to quickly determine whether the patient has COVID-19 or some other lung disorder, or is normal. It's trained on thousands of images and designed to be incorporated into traditional health care digital systems for widespread use, said Moksh Nirvaan, the company's co-founder and head of AI development.

The program's overall accuracy rate is running around 96%, which breaks down to near 99% for COVID-19 cases, 95% for non-coronavirus ailments and 92% for normal diagnoses, Nirvaan said in a telephone interview from Plano.
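The kind of breakdown quoted here, overall accuracy plus a per-class rate for each diagnosis, can be computed directly from a classifier's true and predicted labels. The sketch below is purely illustrative (not CovidScan.ai's code), using hypothetical "covid" / "other" / "normal" labels; the per-class figure it reports is each class's recall, i.e. the share of that class's true cases the model got right.

```python
# Illustrative sketch: overall accuracy and per-class recall for a
# three-way classifier, the kind of breakdown quoted in the article.
from collections import Counter

def accuracy_report(y_true, y_pred):
    """Return (overall accuracy, {label: recall}) for paired label lists."""
    total_correct = sum(t == p for t, p in zip(y_true, y_pred))
    overall = total_correct / len(y_true)
    per_class = {}
    counts = Counter(y_true)  # how many true examples of each class
    for label in counts:
        correct = sum(1 for t, p in zip(y_true, y_pred)
                      if t == label and p == label)
        per_class[label] = correct / counts[label]
    return overall, per_class
```

Note that the overall figure is a weighted average of the per-class rates, weighted by how many cases of each class appear in the test set, which is why 99%, 95% and 92% can combine to roughly 96% overall.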

The effort won a cash prize for second place in a Facebook Hackathon earlier this year.

This type of technology shows promise, particularly for areas with too few physicians or radiologists, the National Institutes of Health said in a research publication focused on "Chest X-ray Analysis using Machine Intelligence Research for HIV/TB Screening."

"Advances in machine learning and artificial intelligence techniques offer a promise to supplement rapid, accurate, and reliable computer-assisted disease screening," the NIH says. "Such techniques are particularly valuable in overburdened and/or resource constrained regions. These regions also tend to exhibit high prevalence of infectious diseases and report high mortality."

CovidScan.ai has been in the works since spring, when the founders realized the unprecedented strain COVID-19 was placing on the health care system, Nirvaan said.

Plans are to partner with five to 10 clinics to begin validating the program as early as September or October before bringing it to market, he said.

Jaffer has been helpful "from a business standpoint for scalability" and efforts to get the program into use, according to Nirvaan.

"My idea was, how could I help to advance this technology while also bringing it to market," Jaffer said.

Globally, as of Friday, there have been 22,536,278 confirmed cases of COVID-19, including 789,197 deaths, reported to the World Health Organization. The United States from Jan. 20 to Friday has seen 5,477,305 confirmed cases of COVID-19 with 172,033 deaths, according to the WHO.


Kurt Bresswein may be reached at kbresswein@lehighvalleylive.com.


TikTok sues Trump over his pending order to ban its app – CTV News

NEW YORK -- Video app TikTok is suing the Trump Administration over its efforts to ban the popular Chinese-owned service over national-security concerns.

TikTok, which is owned by China's ByteDance, insisted Monday that it is not a national-security threat and that the government is acting to further the president's anti-China political campaign. The company said the government is acting without evidence for its allegations or due process. It filed suit Monday in federal court in California against the Commerce Department, President Donald Trump and Commerce Secretary Wilbur Ross, saying that it sought to prevent the government from impermissibly banning TikTok by overturning the president's executive order.

Trump issued two orders in August. The first, on Aug. 6, imposed a sweeping but unspecified ban on any transaction with ByteDance, to take effect by Sept. 20. A week later, he ordered ByteDance to sell its U.S. TikTok assets within 90 days. TikTok's lawsuit isn't fighting the sell order.

In its complaint, TikTok said that while the full scope of the Aug. 6 ban order remains unclear until the Commerce Department fleshes it out, the order still poses an existential threat to TikTok's U.S. business. It said it would move to block action by the Commerce Department once it issues rules.

The Commerce Department and White House did not immediately reply to requests for comment.

Over the past year, TikTok has tried to put distance between its app, which it says has 100 million U.S. users, and its Chinese owners. It installed a former top Disney executive as its American CEO and named two other Americans chief security officer and general counsel. TikTok has also said it is willing to sell its U.S. operations and has held talks with Microsoft about buying parts of its English-language app. Other companies and investors have reportedly expressed interest as well.

Both Republican and Democratic lawmakers have shared concerns about TikTok that ranged from its vulnerability to censorship and misinformation campaigns to the safety of user data and children's privacy. But the administration has provided no specific evidence that TikTok has made U.S. users' data available to the Chinese government. The Trump administration has ratcheted up tensions with China over trade and tech security issues; the president has also blamed China for the global coronavirus pandemic. His administration has sought to hobble Chinese telecom equipment maker Huawei and the Chinese messaging app WeChat.

U.S. officials point to the hypothetical threat that lies in the Chinese government's ability to demand co-operation from Chinese companies. TikTok says it has not shared U.S. user data with the Chinese government and would not do so, and that it does not censor videos at the request of Chinese authorities.

In its complaint, TikTok said that it has protected U.S. user data by storing it in the U.S. and Singapore, not China. It said it has also set up software barriers to ensure that TikTok segregates its U.S. user data from other ByteDance products.

The company says Trump's Aug. 6 order violated TikTok's Fifth Amendment due-process rights by giving it no notice or opportunity to be heard. It claims that the order is not based on a genuine national emergency and the administration hasn't proven that TikTok's activities meet the legal standard of an unusual and extraordinary threat required by the International Emergency Economic Powers Act, which Trump cited as one of the bases for his order.

Getting a court to overturn the government's determination that TikTok is a national-security threat would be difficult, legal experts said. That's true even though the Trump administration is pushing its authority into places it's never been used before, said Paul Marquardt, a foreign-investment review lawyer with Cleary Gottlieb in Washington.

Still, the administration has significant discretion on national security, said Christian Davis, a Washington lawyer with Akin Gump whose practice focuses on foreign investment and international trade. Due-process claims might be easier to argue, Davis said, but those issues could be cured with a modification of the order.

TikTok has grown rapidly in the past two years, racking up nearly 700 million global users as of July 2020, the company said in its complaint. That has drawn the attention of officials in the U.S. and other countries (India has also cracked down on TikTok) as well as of U.S. competitors. Facebook recently rolled out a TikTok copycat feature in its Instagram app, and its CEO Mark Zuckerberg has publicly criticized TikTok.


Why the connected car rides on open source – VentureBeat

The automobile is one of the most exciting frontiers in our connected lives today. Infotainment systems, real-time maps, and advanced driver assistance systems are already commonplace in newer vehicles, but it is still relatively early days for the fully connected car. The exhilarating vision of the driving experience of the near future includes augmented reality dashboards, progressively more autonomous operations, and increased integration with the outside world, from interacting with smart home devices to automatically finding parking spots nearby.

We're headed towards a time when every car will become connected. IDC forecasts worldwide shipments of connected cars (vehicles that have internet access and onboard modems to communicate with external systems) to reach 76.3 million by 2023, a nearly 50% increase over 2019.

For the notion of a connected car to fully realize its potential, however, the many distributed players in this field (vehicle manufacturers, sensor providers, infotainment and app vendors, cloud providers, telcos, security vendors, and more) must join hands to collaborate in ways that haven't always come naturally to the automotive industry.

Traditionally, technology initiatives in the automotive industry have tended to be highly fragmented, with car makers favoring proprietary technology and in-house development to retain as much competitive advantage as possible. But that approach is quickly becoming obsolete in the new connected car era. We now find ourselves in an environment that requires a far more complex set of software and hardware systems to interoperate properly while satisfying today's stringent road-safety requirements.

Fortunately, a viable way forward has started to emerge. What the new connected landscape needs is a heterogeneous ecosystem of several different parties (each an expert in their own niche) working together to advance common goals in connectivity. Such cross-industry coalitions foster a culture of cooperation and define common standards aimed at accelerating the development and adoption of new connected car technologies.

In recent years, we have seen some key coalitions form in the field to collaboratively advance the car of the future. These include the Linux Foundation's ELISA (Enabling Linux In Safety Applications) project, which brings together automotive OEMs and chip manufacturers to develop standards around open-source safety-critical systems, the connected vehicles initiative at the Institute of Electrical and Electronics Engineers (IEEE), and the Connected Car Working Group at the Cellular Telecommunications Industry Association (CTIA).

As a result, Linux and open source software in general have become the go-to platform for driving connected car innovation. This has allowed more experts to contribute to the advances and has enabled auto makers to harness the superior economics, faster software cycles, and more reliable codebases of open source.

For instance, Subaru experienced the power of open source for the connected car firsthand when it set out to produce an infotainment system that would surprise and impress customers accustomed to the brand's utilitarian image.

Using an open-source software stack, Subaru was able to start development with a codebase that was already 70% of the way to a production-ready project, avoiding the need to build everything from scratch and significantly reducing the launch cycle for its new Starlink infotainment platform in 2020 Outback and Legacy models.

Then there's data collection. Systems in connected cars collect vast amounts of data, which inform the driver about everything from when they should change the oil to the location of the nearest coffee shop. But this data can also be used to gather insights for improving driver safety and the driving experience, and to formulate new products and services. When these datasets are openly shared as anonymized bundles of raw information, engineers from different companies are able to collaboratively solve problems relevant to the entire connected car ecosystem and help standardize solutions that benefit everybody.

There's a further argument in favor of collaboration and open source to grease the wheels of the connected car movement: it attracts contributions from the best and brightest developers around the world, for whom using open source components in all their work has become as natural as power steering.

So how can companies foster unified, open platforms for the connected car?

One way is to enthusiastically support and participate in cross-industry coalitions. This work is important because the standards that result from these groups provide the common ground for companies to build, certify, and deploy their solutions. Another benefit is that it gives government agencies the concrete processes and quantitative measures to fairly and efficiently regulate the technology.

Companies should also encourage as many of their people as possible to contribute code to open-source initiatives such as the Autoware Foundation so that quality can continually be improved. And automotive OEMs need to double down on sharing field data that helps data scientists understand and recommend the broader trends around which platforms should be defined.

A unified software platform for the connected car is surely within reach if the collegial spirit of open source catches on in the automobile industry.

Tom Canning is Vice-President for IoT and Devices at Canonical, the company behind Ubuntu.

See the original post here:
Why the connected car rides on open source - VentureBeat

Open source security: Securing the worlds code, together – ETCIO.com

By Maneesh Sharma

Open source is all-pervasive in the software universe. Almost every organization using software to enhance digital transformation or business agility is consuming open source in some way. In fact, today 99% of all software projects are created using open source. Organizations whose software stack is built on open-source have been able to quickly pivot and recalibrate to meet the needs of the current environment because of the agility that open-source offers.

As the adoption of open source components increases, so do the security risks for both developers and security teams. For organizations on a digital transformation journey, security must be a top priority. With so much of their code being created and consumed in a collaborative manner, the need to ensure security is even more critical for them. The average software project depends on over 200 other components. Therefore, a safe and healthy open source community isn't just good for open source software; it also benefits the millions of businesses that depend on it.

A key aspect of making security a collective responsibility is that developers are empowered to continually check for vulnerabilities as part of the development and testing phase. This approach is known as shift-left. By shifting security left, developers are able to uncover and fix vulnerabilities in the early stages of the software development lifecycle, so these are rooted out before the code is deployed to production.
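In practice, shifting security left often means an automated check in the development or CI stage that compares a project's declared dependencies against a database of known advisories, so vulnerable versions are flagged before deployment. The toy sketch below illustrates the idea only; the advisory entries and library names are made up, and real tooling (such as the scanners built into development platforms the article alludes to) works against live advisory feeds rather than a hard-coded dictionary.

```python
# Illustrative "shift-left" check: flag declared dependencies whose pinned
# version appears in a (hypothetical, hard-coded) vulnerability advisory list.
KNOWN_ADVISORIES = {
    # hypothetical advisory database: package name -> vulnerable versions
    "examplelib": {"1.0.0", "1.0.1"},
}

def vulnerable_dependencies(dependencies):
    """Return (name, version) pairs that match a known advisory.

    `dependencies` maps package names to the pinned version a project uses,
    the kind of data found in a lockfile or manifest.
    """
    return [
        (name, version)
        for name, version in dependencies.items()
        if version in KNOWN_ADVISORIES.get(name, set())
    ]
```

Wired into a CI pipeline, a non-empty result would fail the build, which is exactly the point of the shift-left approach: the vulnerability is surfaced while the developer is still working on the change, not after the code reaches production.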

Open source is fundamentally more securable than proprietary code because of this very collaborative nature of how it is built: there are more experts involved in identifying and remedying security issues in the code. But security research is a specialist skill, and the demand for researchers far outweighs the supply, so much so that security researchers are on average outnumbered 500:1 by developers. This is where the community can help, by rapidly identifying and disclosing vulnerabilities in code.

Open source development platforms are fast evolving to support this collaborative approach to building secure code, and provide tools to expand security research capabilities. From automating detection and remediation to tracking emerging security vulnerabilities, these platforms are focused on helping developers identify threats and fix vulnerabilities before code enters the production cycle.

Many forward-thinking enterprises are turning to open source to innovate at speed. The open source promise is one where security is an inextricable part of the entire product lifecycle, and not handled in isolation. In turn, businesses that use and build with open source must not only encourage secure practices across the development lifecycle, but they should also think about committing resources back to the wider open source community so we can create a more secure digital world that benefits everyone.

The author is Country Manager, GitHub India
