Elon Musk and Mark Zuckerberg Exchange Heated Words Over AI. Whose Side Are You On? – Inc.com

Tech billionaires Elon Musk and Mark Zuckerberg are engaged in a very public disagreement about the nature of artificial intelligence (machines that can think) and whether it's a boon or bane to society. It's almost as interesting to follow as the Hollywood supercouple-of-the-month's divorce proceedings.

Just kidding. Let's agree that the former is relevant, the latter ridiculous.

Musk has been warning for some time now that AI is "our greatest existential threat" and that we should fear perpetuating a world where machines are smarter than humans.

It's not that he's against AI: Musk has invested in several AI companies "to keep an eye on them." He's even launched his own AI start-up, Neuralink, intended to connect the human brain with computers to someday do mind-blowing things like repairing cancer lesions and brain injuries.

Musk fears the loss of human control if AI is not very carefully monitored.

Zuckerberg sees things very differently and is apparently frustrated by the fear-mongering. The Facebook chief has made AI a strategic priority for his company. He talks about the advances AI could make in healthcare and self-driving cars, for example.

In a recent Facebook Live session where he was answering a question about Musk's continued warning on AI, the Facebook founder responded, "I think that people who are naysayers and kind of trying to drum up these doomsday scenarios--I just, I don't understand it. I think it's really negative and in some ways I actually think it is pretty irresponsible."

Musk quickly fired back with a tweet.

The debate is sure to continue to volley back and forth in a sort of Wimbledon of the Way Out There.

So, at the risk of Mr. Musk calling me out, I thought I'd try to bring it a bit closer to home so you can better follow the debate and form your own opinion. Here are some of the most commonly cited pros and cons of AI:

So are you more of a Muskie or a Zuckerberger?

Better decide which side you lean towards. Before the machines decide for you.


Facebook Stock Rated Neutral This Week By AI – Forbes


Markets traded higher this week as investors continued to parse the quarterly earnings of many companies. Company-specific moves were mostly on account of their performance in the last quarter. Whether Congress can come to an agreement on a stimulus package to contain the adverse economic effects of the pandemic will determine how investors react in the days to come. Developments around the tension between the U.S. and China have also kept investors on their toes. Our deep learning algorithms using Artificial Intelligence (AI) technology have selected some Unusual Movers this week.

Sign up for the free Forbes AI Investor newsletter here to join an exclusive AI investing community and get premium investing ideas before markets open.

The first company on our list is Facebook Inc (FB). The stock is rated Neutral and has received factor scores of F in Technical, B in Growth, B in Momentum Volatility and B in Quality Value. The stock is up by 5.82% for the week and looks like a good bet after posting strong quarterly results. The stock closed with a volume of 72,766,364 vs. its 10-day volume average of 28,269,713.3. The stock is up 30.79% this year, and the financials of the company have been growing at a steady pace over the past few years. Operating Income grew by 16.31% in the last fiscal year to $23,986.0M, growing by 38.09% over the last three fiscal years from $20,203.0M. Revenue is expected to grow by 11.15% over the next 12 months and the stock is trading with a Forward 12M P/E of 30.56.
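The screening figures quoted throughout this piece, the ratio of a day's closing volume to its trailing average and the forward 12-month P/E, are simple to compute. A minimal sketch (the function names are ours, not Q.ai's):

```python
def volume_ratio(close_volume, avg_volume):
    """Ratio of the day's closing volume to a trailing average volume."""
    return close_volume / avg_volume

def forward_pe(price, expected_eps_next_12m):
    """Forward 12-month P/E: current share price over expected earnings per share."""
    return price / expected_eps_next_12m

# Facebook's figures from the article: 72,766,364 shares traded vs. a 10-day
# average of 28,269,713.3 -- volume ran at roughly 2.6x its recent norm.
fb_ratio = volume_ratio(72_766_364, 28_269_713.3)
```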

Price of Facebook Inc compared to its Simple Moving Average
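The charts referenced by these captions compare price to a simple moving average: the mean of the last N closes, recomputed each day. A minimal sketch:

```python
def simple_moving_average(prices, window):
    """Return the moving-average series for the given window size."""
    if window <= 0 or window > len(prices):
        raise ValueError("window must be between 1 and len(prices)")
    return [sum(prices[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(prices))]

# A 3-day SMA smooths day-to-day noise: [1, 2, 3, 4, 5] -> [2.0, 3.0, 4.0]
sma = simple_moving_average([1, 2, 3, 4, 5], 3)
```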

Walt Disney Company (DIS), a leading entertainment company, is next on our list. The stock is rated Top Short by our AI and has received ratings of F in Technical, D in Growth, C in Momentum Volatility and D in Quality Value. The price closed up by 11.11% this week after the company made inroads into the online segment. The volume of shares traded was 16,079,643 vs. its 22-day volume average of 13,604,401.91. As for the financials, Revenue grew by 0.28% in the last fiscal year to $69,570.0M, growing by 26.52% over the last three fiscal years from $55,137.0M. While the ROE has fallen from 20.04% three years ago to 13.92% in the last fiscal year, it is still at a healthy level for investors to consider. Revenue is forecasted to grow by 0.49% in the next 12 months and the stock trades with a Forward 12M P/E of 115.08.

Price of Walt Disney Company compared to its Simple Moving Average

Evergy Inc engages in the generation, transmission, distribution, and sale of electricity in Kansas and Missouri. With factor scores of C in Technical, B in Growth, A in Momentum Volatility and B in Quality Value, the stock is rated Top Buy. The price is down 14.92% for the week and the stock closed with a volume of 3,222,895 vs. its 10-day volume average of 3,454,627.6 and its 22-day volume average of 2,458,811.82. EPS grew by 21.57% over the last three fiscal years, from $2.27 to $2.79 in the last fiscal year. ROE figures declined to 7.40% in the last year from 8.75% three years ago. Growth in revenue in the next 12 months is expected to clock a rate of 0.33%. The share is trading with a Forward 12M P/E of 17.9.

Price of Evergy Inc compared to its Simple Moving Average

Next on our list we have The Mosaic Company (MOS), a company that produces and markets concentrated phosphate and potash crop nutrients in North America and internationally. This Neutral-rated stock was given factor scores of A in Technical, B in Growth, F in Momentum Volatility and C in Quality Value. The stock is up 27.62% for the week after it beat Q2 estimates. Volumes also surged as it closed with a volume of 8,035,051 vs. its 10-day volume average of 5,330,381.0. Revenue grew by 17.05% over the last three fiscal years, to $8,906.3M in the last fiscal year from $7,409.4M. The firm has reported negative earnings, with revenue set to grow by 2.63% over the next 12 months. The stock is trading with a Forward 12M P/E of 36.9.

Price of The Mosaic Company compared to its Simple Moving Average

United Parcel Service Inc (UPS), a company that provides letter and package delivery, specialized transportation, logistics, and financial services, closed higher by 9.9% this week as it looked to increase prices for its services. The stock is rated Neutral and was given factor scores of C in Technical, C in Growth, A in Momentum Volatility and C in Quality Value. Digging into the financials, Revenue grew by 4.43% in the last fiscal year to $74,094.0M, and grew by 16.21% over the last three fiscal years from $66,585.0M. EPS dropped to $5.11 in the last fiscal year compared to $5.61 three years ago. ROE continues to be very high at 140.51%, though lower compared to 675.15% three years ago. Revenue is expected to grow by 1.66% in the next 12 months and the Forward 12M P/E of the stock is 21.56.

Price of United Parcel Service Inc compared to its Simple Moving Average

Western Digital Corporation (WDC) was down 14.15% last week and the stock closed with a volume of 10,250,035 vs. its 10-day volume average of 7,733,489.9 and its 22-day volume average of 6,019,216.68. The stock was given an Unattractive rating by our AI, with factor scores of C in Technical, C in Growth, D in Momentum Volatility and D in Quality Value. The manufacturer and seller of storage devices was under pressure after missing its revenue estimates, and the outlook is weak too. The financials seem to be in a declining phase, with revenue dropping to $16,736.0M in the last fiscal year from $20,647.0M three years ago. Operating Income was $370.0M in the last fiscal year, significantly lower compared to $3,832.0M three years ago. EPS and ROE turned negative to $(0.84) and -2.56% respectively in the last fiscal year, compared to $2.2 and 5.88% three years ago. The Forward 12M P/E of the stock is at a reasonable level of 11.51.

Price of Western Digital Corporation compared to its Simple Moving Average

Ralph Lauren Corporation (RL), a recognized name in the lifestyle products category, closed with a volume of 1,917,186 vs. its 10-day volume average of 1,569,876.8 and its 22-day volume average of 1,154,283.68. The stock is down 7.49% for the week and 43.21% for the year. The stock is rated Neutral and has received factor scores of B in Technical, D in Growth, C in Momentum Volatility and C in Quality Value. Looking at the financials, we see revenue drop to $6,159.8M in the last fiscal year from $6,182.3M three years ago. Operating income also fell to $602.1M from $663.8M during these three years. The firm saw its EPS increase to $4.98 from $1.97 and ROE improve to 12.85% from 4.82% in three years. Revenue is projected to grow by 16.75% in the next 12 months and the stock is trading with a Forward 12M P/E of 15.61.

Price of Ralph Lauren Corporation compared to its Simple Moving Average

Liked what you read? Sign up for our free Forbes AI Investor Newsletter here to get AI-driven investing ideas weekly. For a limited time, subscribers can join an exclusive Slack group to get these ideas before markets open.


This is why AI shouldn’t design inspirational posters – CNET

Inspirational posters have their place. But if you're not the kind of person to take workplace spark from a beautiful photograph of a random person canoeing at twilight or an eagle soaring, you might want to turn the poster-making over to an artificial intelligence.

An AI dubbed InspiroBot, brought to our attention by IFL Science, puts together some of the most bizarre (and thus delightful) inspirational posters around.

This one's probably not a good idea for either a stranger or a friend.

The dog's cute, but this isn't great advice either.

Hard to argue with this one, which is kinda Yoda-esque.

Hey! Who you callin' "desperate"?

This bot obviously doesn't know many LARPers, or hang around at Renaissance Faires.

The bot's posters fall in between Commander Data trying to offer advice and a mistranslated book of quaint sayings. And they're mostly fun. Except sometimes, when the AI gets really dark and it's time to leave the site entirely and Google kittens fighting themselves in the mirror.


Exyn unveils AI to help drones fly autonomously, even indoors or off the grid – TechCrunch

A startup called Exyn Technologies Inc. today revealed AI software that enables drones to fly autonomously, even in dark, obstacle-filled environments or beyond the reaches of GPS. A spin-out of the University of Pennsylvania's GRASP Labs, Exyn uses sensor fusion to give drones situational awareness much like a human's.

In a demo video shared by the company with TechCrunch, a drone using Exyn's AI can be seen waking up and taking in its surroundings. It then navigates from a launch point in a populated office to the nearest identified exit without human intervention. The route is not pre-programmed, and pilots did not manipulate controls to influence the path that the drone takes. They simply tell it to find and go to the nearest door.

According to Exyn founder Vijay Kumar, a veteran roboticist and dean of Penn's School of Engineering, artificial intelligence that lets drones understand their environment is an order of magnitude more complex than what is needed for self-driving cars or ground-based robots.

That's because the world that drones inhabit is inherently 3D. They have to do more than obey traffic laws and avoid pedestrians and trees. They must maneuver over and around obstacles in un-mapped skies where internet connectivity is not consistently available. Additionally, Kumar said, "With drones you actually have to lift and fly with your payload and sensors. Cars roll along on wheels and can carry large batteries. But drones must preserve all the power they can for flight."

The AI that Exyn is adapting from Kumar's original research will work with any type of unmanned aerial vehicle, from popular DJI models to more niche research and industrial UAVs. Exyn Chief Engineer Jason Derenick described how the technology basically works: "We fuse multiple sensors from different parts of the spectrum to let a drone build a 3D map in real time. We only give the drone a relative goal and start location. But it takes off, updates its map and then goes through a process of planning and re-planning until it achieves that goal."
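Derenick's plan/re-plan loop can be illustrated on a 2D grid, a stand-in for Exyn's real-time 3D map: the vehicle plans a shortest path over what it has mapped so far, moves one step, and re-plans whenever a sensor reveals a new obstacle. This is our own simplified sketch, not Exyn's algorithm:

```python
from collections import deque

def shortest_path(known, start, goal):
    """BFS shortest path over the currently known map (0 = free, 1 = blocked)."""
    rows, cols = len(known), len(known[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and known[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable on the known map

def fly(true_grid, start, goal):
    """Plan, move one step, and re-plan whenever a new obstacle is sensed."""
    known = [[0] * len(true_grid[0]) for _ in true_grid]  # start with an empty map
    pos, route = start, [start]
    while pos != goal:
        path = shortest_path(known, pos, goal)
        if path is None:
            return None  # no route exists given everything sensed so far
        step = path[1]
        if true_grid[step[0]][step[1]] == 1:
            known[step[0]][step[1]] = 1  # sensor reveals an obstacle: re-plan
            continue
        pos = step
        route.append(pos)
    return route
```

On a 3x3 grid with a wall blocking the direct route, the loop first plans straight at the goal, discovers the wall one cell at a time, and detours around it.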

Keeping the technology self-contained on the drone means Exyn-powered UAVs don't rely on outside infrastructure, or on human pilots, to complete a mission. Going forward, the company can integrate data from cloud-based sources.

Exyn, which is backed by IP Group, faces competition from other startups like Iris Automation or Area 17 in Silicon Valley, as well as companies building drones with proprietary autonomous-flight software, like Skydio in Menlo Park, or Israel-based Airobotics.

The startup's CEO and chairman, Nader Elm, is hoping Exyn's AI will yield new uses for drones, and put drones in places where it's not safe or easy for humans to work.

For example, the CEO said, the company's technology could allow drones to count inventory in warehouses filled with towering pallets and robots moving across the ground, or to work in dark mine shafts and unfinished buildings that require frequent inspections for safety and to measure worker productivity.

Looking forward, Exyn's CEO said, "We'll continue advancing the technology to first of all make it more robust and hardened for commercial use while adding features and functionality. Ultimately we want to move from one drone to multiple, collaborating drones that can work on a common mission. We have focused on obstacle avoidance, but we're also thinking about how drones can interact with various things in their environment."


Researchers create AI bot to protect the identities of BLM protesters – AI News

Researchers from Stanford have created an AI-powered bot to automatically cover up the faces of Black Lives Matter protesters in photos.

Everyone should have the right to protest. And, if they do so legally, to protest without fear of having things like their future job prospects ruined because they've been snapped at a demonstration from which a select few may have gone on to commit criminal acts such as arson and looting.

With images from the protests being widely shared on social media to raise awareness, police have been using the opportunity to add the people featured within them to facial recognition databases.

"Over the past weeks, we have seen an increasing number of arrests at BLM protests, with images circulating around the web enabling automatic identification of those individuals and subsequent arrests to hamper protest activity," the researchers explain.

Software has been available for some time to blur faces, but recent AI advancements have proved that it's possible to deblur such images.

Researchers from Stanford Machine Learning set out to develop an automated tool which prevents the real identity of those in an image from being revealed.

The result of their work is BLMPrivacyBot:

Rather than blur the faces, the bot automatically covers them up with the black fist emoji which has become synonymous with the Black Lives Matter movement. The researchers hope such a solution will be built into social media platforms, but admit it's unlikely.
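The core operation, replacing each detected face region with an opaque overlay rather than a reversible blur, can be sketched independently of any particular detector. In this illustrative sketch the bounding boxes are assumed to come from some face detector, and a solid fill stands in for the fist emoji:

```python
def cover_faces(image, face_boxes, fill=0):
    """Overwrite each face bounding box (row, col, height, width) with a solid fill.

    Unlike blurring, the original pixel values are destroyed, so no
    "deblurring" model can recover them.
    """
    covered = [row[:] for row in image]  # copy; leave the input untouched
    for top, left, height, width in face_boxes:
        for r in range(top, min(top + height, len(covered))):
            for c in range(left, min(left + width, len(covered[0]))):
                covered[r][c] = fill
    return covered

# A 3x3 "image" with one 2x2 face box in the top-left corner:
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
out = cover_faces(img, [(0, 0, 2, 2)])
# out == [[0, 0, 3], [0, 0, 6], [7, 8, 9]]
```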

The researchers trained the model for their AI bot on a dataset called QNRF, consisting of around 1.2 million people. However, they warn it's not foolproof, as an individual could be identified through other means, such as what clothing they're wearing.

To use the BLMPrivacyBot, you can either send an image to its Twitter handle or upload a photo to the web interface here. The open source repo is available if you want to look at the inner workings.



Charlotte-based Tradier teams with Forbes company to offer AI-driven investing platform – WRAL Tech Wire

CHARLOTTE Investors, get ready for some advanced insights driven by artificial intelligence (AI).

Tradier, a Charlotte-based online brokerage specializing in API and White Label platforms, has teamed with Q.ai, a Forbes company, to offer a trading platform utilizing machine-learning algorithms, multi-factor models and other deep quantitative tools.

"We are excited to partner with innovative companies like Q.ai that bring a true opportunity to change the industry and power retail investors with great technology, data, and AI-based insights," said Dan Raju, co-founder and CEO of Tradier, in a statement. "Q.ai is looking to transform the way trading intelligence is delivered into a content-rich experience."

Q.ai, meanwhile, said the collaboration aligns with its mission to democratize access to AI and other quantitative investing methodologies.

"As active investors and traders, the team at Q.ai understands the value of real-time data and insights to retail investors, and our AI-based engines will help them get a better edge on the market," said Q.ai CEO and Founder Stephen Mathai-Davis. "Tradier brings a wealth of experience and, most importantly, advanced APIs and platform capabilities to help us further our mission of changing the way investors achieve their goals."


New Research Reveals Adoption and Implementation of Artificial Intelligence in the Enterprise – GlobeNewswire

SAN FRANCISCO, July 09, 2020 (GLOBE NEWSWIRE) -- Informa Tech media brands InformationWeek and ITPro Today today announced findings from their latest research survey, the 2020 State of Artificial Intelligence. The team surveyed technology decision makers across North American companies to uncover the ways organizations are approaching and implementing emerging technologies, specifically artificial intelligence (AI) and the Internet of Things (IoT), in order to grow and get ahead of the competition.

Key Findings in the 2020 State of Artificial Intelligence

To download a complimentary copy of The 2020 State of Artificial Intelligence, click here.

Media interested in receiving a copy of the report or the State of AI infographic should contact Briana Pontremoli at Briana.Pontremoli@informa.com.

2020 State of Artificial Intelligence Report Methodology

The survey collected opinions from nearly 300 business professionals at companies engaged with AI-related projects. Nearly 90% of respondents have an IT or technology-related job function, such as application development, security, Internet of Things, networking, cloud, or engineering. Just over half of respondents work in a management capacity, with titles such as C-level executive, director, manager, or vice president. Half are from large companies with 1,000 or more employees, and 20% work at companies with 100 to 999 employees.

About Informa Tech

Informa Tech is a market-leading provider of integrated research, media, training and events to the global Technology community. We're an international business of more than 600 colleagues, operating in more than 20 markets. Our aim is to inspire the Technology community to design, build and run a better digital world through research, media, training and event brands that inform, educate and connect. Over 7,000 professionals subscribe to our research, 225,000 delegates attend our events, over 18,000 students participate in our training programs each year, and nearly 4 million people visit our digital communities each month. Learn more about Informa Tech.

Media Contact: Briana Pontremoli, Informa Tech PR, briana.pontremoli@informa.com


Sheba Medical Center Inks Strategic Agreement with Iguazio to Deliver Real-Time AI for COVID-19 Patient Treatment Optimization – Business Wire

HERZLIYA, Israel--(BUSINESS WIRE)--Iguazio, developers of the Data Science Platform built for production and real-time machine learning (ML) applications, announced that it is working with the Sheba Medical Center's ARC innovation complex to deliver real-time AI across a variety of clinical and logistical use cases in order to improve COVID-19 patient treatment.

Sheba is the largest medical facility in Israel and the Middle East and has been ranked among the Top 10 Hospitals in the World by Newsweek magazine. Iguazio was selected to facilitate Sheba's transformation with real-time AI and MLOps (machine learning operations) in a variety of projects. One of these projects is the optimization of patient care through clinical, real-time predictive insights. Using the Iguazio Data Science Platform, Sheba is incorporating real-time vital signs from patients, together with each patient's medical history, to predict and mitigate complications such as COVID-19 patient deterioration, or to aid decision making during surgery.

Another project optimizes the patient's journey with smart mobility, from the moment of the patient's arrival at Sheba to their departure after treatment. The patient's entire journey is orchestrated with AI, including parking allocation, shuttle arrival times, and patient routing and waiting times, optimized using real-time data to ensure optimal patient care and satisfaction. This also enables the management of patient flow to comply with COVID-19 social distancing regulations.

At the core of these projects is Iguazio's cloud-native, serverless Data Science Platform with its integrated feature store. This technology brings data science to life by automating real-time machine learning pipelines and allowing rapid development, deployment and management of complex AI applications. The projects include collaboration on hybrid and multi-cloud deployments with Microsoft Azure and Google GCP.

"Bringing real-time AI to every aspect of Sheba's emerging City of Health is the next step in our digital transformation," said Eyal Zimlichman MD, MSc, Sheba's Chief Medical Officer and Chief Innovation Officer, as well as the founder of the ARC (Accelerate Redesign Collaborate) innovation complex. "After months of perfecting our AI research across many use cases, it's time to bring them to life in our daily operations."

"Using Iguazio, we are revolutionizing the way we use data, by unifying real-time and historic data from different sources and rapidly deploying and monitoring complex AI models to improve patient outcomes and the City of Health's efficiency," added Nathalie Bloch, MD, Head of Big Data & AI at ARC.

"We are honored to be supporting Sheba, a global leader in healthcare innovation, especially in the midst of the current pandemic, when the community is relying on health facilities the most," commented Asaf Somekh, Co-Founder and CEO of Iguazio. "Incorporating AI into these many real-time use cases is setting a new standard for medical centers worldwide."

On Dec 30th, Sheba is hosting a Big Data and AI conference, where Asaf Somekh will present these projects and discuss how Sheba is bringing data science to life.

Medical centers worldwide are invited to get in touch with the ARC Innovation Center for more information and to discuss how to implement real-time AI in their health facilities.

Earlier today, Iguazio announced the launch of the first integrated feature store within their Data Science Platform, to accelerate deployment of AI in any cloud environment.

About Iguazio

The Iguazio Data Science Platform enables enterprises to develop, deploy and manage AI applications at scale. With Iguazio, enterprises can run AI models in real time, deploy them anywhere (multi-cloud, on-prem or edge), and bring to life their most ambitious AI-driven strategies. Enterprises spanning a wide range of verticals, including financial services, manufacturing, smart mobility, telecoms and ad-tech use Iguazio to solve the complexities of MLOps and create business impact through a multitude of real-time use cases such as fraud prevention, predictive maintenance and real-time recommendations. Iguazio is backed by top financial and strategic investors including Pitango, Samsung, Verizon, Bosch, CME Group, and Dell. Iguazio brings data science to life. Find out more on http://www.iguazio.com.

About Sheba Medical Center

Sheba Medical Center, Tel HaShomer is the largest and most comprehensive medical center in the Middle East, which combines an acute care hospital and a rehabilitation hospital on one campus, and it is at the forefront of medical treatments, patient care, research, education and innovation. For the past two years (2019 and 2020), Newsweek Magazine has named Sheba one of the top ten hospitals in the world. For more information, visit: eng.sheba.co.il


OpenAI's fiction-spewing AI is learning to generate images – MIT Technology Review

At its core, GPT-2 is a powerful prediction engine. It learned to grasp the structure of the English language by looking at billions of examples of words, sentences, and paragraphs, scraped from the corners of the internet. With that structure, it could then manipulate words into new sentences by statistically predicting the order in which they should appear.
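As a toy version of that statistical idea, a bigram model counts which word follows which in a corpus and predicts the most frequent successor. GPT-2 is vastly more sophisticated, but the core, predict the next token from observed statistics, is the same. Our own illustrative sketch:

```python
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count, for each word, how often every other word follows it."""
    following = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Predict the statistically most likely next word, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

model = train_bigrams(["the cat sat", "the cat ran", "the dog sat"])
# "the" is followed by "cat" twice and "dog" once, so:
# predict_next(model, "the") -> "cat"
```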

So researchers at OpenAI decided to swap the words for pixels and train the same algorithm on images in ImageNet, the most popular image bank for deep learning. Because the algorithm was designed to work with one-dimensional data (i.e., strings of text), they unfurled the images into a single sequence of pixels. They found that the new model, named iGPT, was still able to grasp the two-dimensional structures of the visual world. Given the sequence of pixels for the first half of an image, it could predict the second half in ways that a human would deem sensible.
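The "unfurling" step and the completion task described above are easy to make concrete. A minimal sketch: flatten a 2D pixel grid row by row into one sequence, then split it so the model sees the first half as context and must predict the rest:

```python
def unfurl(image):
    """Flatten a 2D grid of pixel values into a 1D sequence, row by row."""
    return [pixel for row in image for pixel in row]

def completion_split(image, keep_fraction=0.5):
    """Return (context, target): pixels the model sees, and pixels it must predict."""
    sequence = unfurl(image)
    cut = int(len(sequence) * keep_fraction)
    return sequence[:cut], sequence[cut:]

# A 2x2 "image": the model would be given [1, 2] and asked to predict [3, 4].
context, target = completion_split([[1, 2], [3, 4]])
```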

Below, you can see a few examples. The left-most column is the input, the right-most column is the original, and the middle columns are iGPT's predicted completions. (See more examples here.)


The results are startlingly impressive and demonstrate a new path for using unsupervised learning, which trains on unlabeled data, in the development of computer vision systems. While early computer vision systems in the mid-2000s trialed such techniques before, they fell out of favor as supervised learning, which uses labeled data, proved far more successful. The benefit of unsupervised learning, however, is that it allows an AI system to learn about the world without a human filter, and significantly reduces the manual labor of labeling data.

The fact that iGPT uses the same algorithm as GPT-2 also shows its promising adaptability. This is in line with OpenAI's ultimate ambition to achieve more generalizable machine intelligence.

At the same time, the method presents a concerning new way to create deepfake images. Generative adversarial networks, the most common category of algorithms used to create deepfakes in the past, must be trained on highly curated data. If you want to get a GAN to generate a face, for example, its training data should only include faces. iGPT, by contrast, simply learns enough of the structure of the visual world across millions and billions of examples to spit out images that could feasibly exist within it. While training the model is still computationally expensive, offering a natural barrier to its access, that may not be the case for long.

OpenAI did not grant an interview request, but in an internal policy team meeting that MIT Technology Review attended last year, its policy director, Jack Clark, mused about the future risks of GPT-style generation, including what would happen if it were applied to images. "Video is coming," he said, projecting where he saw the field's research trajectory going. "In probably five years, you'll have conditional video generation over a five- to 10-second horizon." He then proceeded to describe what he imagined: you'd feed in a photo of a politician and an explosion next to them, and it would generate a likely output of that politician being killed.

Update: This article has been updated to remove the name of the politician in the hypothetical scenario described at the end.


The Future Of Work Now: Medical Coding With AI – Forbes

Thomas H. Davenport and Steven Miller

The coding of medical diagnosis and treatment has always been a challenging issue. Translating a patient's complex symptoms, and a clinician's efforts to address them, into a clear and unambiguous classification code was difficult even in simpler times. Now, however, hospitals and health insurance companies want very detailed information on what was wrong with a patient and the steps taken to treat them: for clinical record-keeping, for hospital operations review and planning, and, perhaps most importantly, for financial reimbursement purposes.

More Codes, More Complexity

The current international standard for medical coding is ICD-10 (the tenth version of the International Classification of Disease codes), from the World Health Organization (WHO). ICD-10 has over 14,000 codes for diagnoses. The next update to this international standard, ICD-11, was formally adopted by WHO member states in May 2019, and member states, including the US, will begin implementing it as of January 2022. The new ICD-11 has over 55,000 diagnostic codes, roughly four times the number in the WHO's ICD-10.


In fact, there are even substantially more codes than the numbers given above, at least in the United States. An enhanced version of ICD-10 that is specific to US usage has about 140,000 classification codes: about 70,000 for diagnoses and another 70,000 for classifying treatments. We expect the enhanced US-specific version of ICD-11 to have at least several times the number of codes in the WHO version of ICD-11, given that the US version also includes treatment codes and has previously included a larger number of diagnostic codes as well.

No human being can remember all the codes for diseases and treatments, especially as the number of codes has climbed over the decades to tens of thousands. For decades, medical coders have relied on code books to look up the right code for classifying a disease or treatment. Thumbing through a reference book of codes obviously slowed down the process. And it is not just a matter of finding the right code. There are interpretation issues. With ICD-10 and prior versions of the classification scheme, there is often more than one way to code a diagnosis or treatment, and the medical coder has to decide on the most appropriate choices.

Over the past 20 years, the use of computer-assisted coding systems has steadily increased across the healthcare industry as a means of coping with the growing complexity of coding diagnoses and treatments. More recent versions of these systems have incorporated state-of-the-art machine learning methods and other aspects of artificial intelligence to enhance the system's ability to analyze the clinical documentation (charts and notes) and determine which codes are relevant to a particular case. Some medical coders now work hand-in-hand with AI-enhanced computer-assisted coding systems to identify and validate the correct codes.
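To make the idea concrete, here is a minimal sketch of how such a system might surface candidate codes from note text. The code table and keyword lists below are invented for illustration; production systems use trained NLP models over full clinical documents, not hand-written keyword lists.

```python
import re

# Hypothetical keyword table mapping phrases to ICD-10 codes.
CODE_KEYWORDS = {
    "I50.9": ["congestive heart failure", "chf"],
    "E11.9": ["type 2 diabetes"],
    "N18.6": ["end-stage kidney disease", "esrd"],
}

def suggest_codes(note: str) -> list[tuple[str, str]]:
    """Return (code, matched phrase) pairs found in the note,
    so a human coder can jump to the supporting text."""
    note_lower = note.lower()
    hits = []
    for code, phrases in CODE_KEYWORDS.items():
        for phrase in phrases:
            if re.search(r"\b" + re.escape(phrase) + r"\b", note_lower):
                hits.append((code, phrase))
                break  # one match per code is enough to flag it
    return hits

note = "History: congestive heart failure. Today: routine appendectomy."
print(suggest_codes(note))  # [('I50.9', 'congestive heart failure')]
```

Note that a matcher this naive happily flags "congestive heart failure" even when it appears in the history section rather than the current treatment, which is exactly the kind of false positive a human reviewer still has to catch.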

Elcilene Moseley and AI-Assisted Coding

Elcilene Moseley lives in Florida and is an 11-year veteran medical coder. She previously worked for a company that owned multiple hospitals, but she now works for a coding services vendor that has a contract for coding in the same hospitals Moseley used to work for. She does her work from home, generally working for eight hours to do a certain number of patient charts per day. She specializes in outpatient therapies, often including outpatient surgeries.

Moseley is acutely aware of the increased complexity of coding and is a big supporter of the AI-enhanced computer-assisted coding system, developed by her employer, that suggests codes for her to review. "It's gotten so detailed--right side, left side, fracture displaced or not--there's no way I can remember everything." However, she notes, AI only goes so far. For example, the system may process the text in a chart document, note that the patient has congestive heart failure, and select that disease as a code for diagnosis and reimbursement. But that particular diagnosis is in the patient's history, not what he or she is being treated for now. "Sometimes I'm amazed at how accurate the system's coding is," she says. "But sometimes it makes no sense."

When Moseley opens a chart, on the left side of each page there are codes with pointers to where each code came from in the chart report. Some coders don't bother to read the patient chart from beginning to end, but Moseley believes it's important to do so. "Maybe I am a little old fashioned," she admits, "but it's more accurate when I read it." She acknowledges that the system "makes you faster, but it can also make you a little lazy."

Some patient cases are relatively simple to code, others more complex. "If it's just an appendectomy for a healthy patient," Moseley says, "I can check all the codes and get through it in five minutes." This is despite multiple sections on a chart for even a simple surgery, including patient physical examination, anesthesiology, pathology, etc. On the other hand, she notes:

If it's a surgery on a 75-year-old man with end-stage kidney disease, diabetes, and cancer, I have to code their medical history, what meds they are taking--it takes much longer. And the medical history codes are important because if the patient has multiple diagnoses, it means the physician is spending more time. Those evaluation and management codes are important for correctly reimbursing the physician and the hospital.

Moseley and other coders are held to a 95% coding quality standard, and their work is audited every three months to ensure they meet it.

When Moseley first began using AI-enhanced coding a couple of years ago, she was suspicious of it because she thought it might put her out of a job. Now, however, she believes that will never happen and that human coders will always be necessary. Medical coding, she notes, is so complex, with so many variables and so many specific circumstances, that it will never be fully automated. She has effectively become a code supervisor and auditor, checking the codes the system assigns and verifying whether its recommendations are appropriate for the specific case. In her view, all coders will eventually transition to the role of auditor and supervisor of the AI-enabled coding system. The AI system simply makes coders too productive not to use it.

Educating Coders

Moseley has a two-year Associate of Science degree in Medical Billing and Coding. In addition, she holds several different coding certifications, both for general coding and for specialty fields like emergency medicine. Keeping the certifications active requires regular continuing education units and tests.

Not all coders, however, have this much training. Moseley says that there are lots of sketchy schools offering online training in medical coding. They often overpromise about the prospects of a lucrative job--with up to a $100,000 annual salary--if a student takes a six-month coding course. Working from home is another appealing aspect of these jobs.

The problem is that hospitals and coding services firms want experienced coders, not entry-level hires with inadequate training. The more straightforward and simpler coding decisions are made by AI; more complex coding decisions and audits require experts. "The newbies may be certified," Moseley says, "but without prior experience they have a difficult time getting jobs." It would require too much on-the-job training by their employers to make them effective. The two professional associations for medical coding, AAPC (originally the American Academy of Professional Coders) and AHIMA (the American Health Information Management Association), both have Facebook pages where their members discuss issues in the coding field. Moseley says these are replete with complaints about the inability to find the entry-level jobs promised by the coding schools.

For Elcilene Moseley, however, coding--especially with the help of AI--is a good job. She finds it interesting and relatively well paid. Working at home, at any hour of the day or night, gives her a high level of flexibility. And should she ever tire of her current position, she is constantly approached by headhunters about other coding jobs. Moseley argues that the only medical coders suffering from the use of AI are those at the entry level and those who refuse to learn the new skills needed to work with a smart machine.

Steven Miller is a Professor of Information Systems and Vice Provost for Research at Singapore Management University.

View post:

The Future of Work Now--Medical Coding With AI - Forbes

SCAN Health Plan Leverages AI Based Predictive Models to Improve Identification of High Risk Members – PRNewswire

"At SCAN our goal is to support our members at every stage of their journey and utilizing advanced technology, such as AI, enables us to do so on a much more proactive basis," said Josh Goode, SCAN chief information officer. "As our members' needs evolve, the KenSci platform allows us to better interpret the needs behind the data so that we can respond with programs and services to help keep them healthy and independent."

As part of the first-phase implementation, SCAN and KenSci, a system of intelligence for healthcare, have launched explainable AI models for healthcare, enabling SCAN to identify members at risk of Hospitalization for Potentially Preventable Complications* (HPC) as well as those eligible for Nursing Facility Level of Care (NFLOC). The platform provides SCAN with insights, proactively identifying members potentially at risk for specific disease states and allowing for early interventions. In addition, SCAN is using machine learning (ML) techniques that are routine in consumer applications but new to healthcare to help identify gaps in care and improve the management of chronic conditions.
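As a rough illustration of what an "explainable" risk model can mean in practice, the sketch below scores a member with a simple logistic model and reports each feature's contribution alongside the probability. The features and weights are invented for illustration; they are not SCAN's or KenSci's actual model.

```python
import math

# Hypothetical feature weights for a hospitalization-risk logit.
WEIGHTS = {"age_over_80": 1.2, "prior_admissions": 0.8, "chronic_conditions": 0.6}
BIAS = -3.0

def risk(member: dict) -> tuple[float, dict]:
    """Return (probability, per-feature contributions) so the
    prediction can be explained, not just reported."""
    contributions = {f: WEIGHTS[f] * member.get(f, 0) for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    prob = 1 / (1 + math.exp(-logit))
    return prob, contributions

prob, why = risk({"age_over_80": 1, "prior_admissions": 2, "chronic_conditions": 3})
print(round(prob, 2), why)
```

The point of the second return value is the "explainable" part: a care manager sees not just a score but which factors drove it.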

"The volume and veracity of data opens up immense possibilities for healthcare organizations to transform the way they support their plan members," said Samir Manjure, co-founder & CEO, KenSci. "We are excited to work with SCAN and appreciate their expertise in developing these tools to meet the needs of seniors. Together, there is tremendous opportunity to impact the health of older adults."

"Data is a critical asset in modern healthcare and harnessing it appropriately provides invaluable insight," said Moon Leung, SCAN senior vice president and chief informatics officer. "By leveraging KenSci's AI expertise, we believe we can utilize our data to improve health outcomes and quality of life for many of our members."

*HPC measure is based on National Committee for Quality Assurance (NCQA) HEDIS technical specifications. For more details, please visit ncqa.org

About SCAN Health Plan

SCAN Health Plan is one of the nation's largest not-for-profit Medicare Advantage plans, serving more than 220,000 members in California. Since its founding in 1977, SCAN has been a mission-driven organization dedicated to keeping seniors healthy and independent. Independence at Home, a SCAN community service, provides vitally needed services and support to seniors and their caregivers. SCAN also offers education programs, community funding, volunteer opportunities and other community services throughout our California service area. To learn more, visit scanhealthplan.com or facebook.com/scanhealthplan, or follow us on Twitter @scanhealthplan.

About KenSci

KenSci's machine learning-powered risk prediction platform helps healthcare providers and payers intervene early by identifying clinical, financial and operational risk to save costs and lives. KenSci's platform is engineered to ingest, transform and integrate healthcare data across clinical, claims, and patient-generated sources. With a library of pre-built models and modular solutions, KenSci's machine learning platform integrates into existing workflows, allowing health systems to better identify utilization and variation and improve hospital operations. With explainable AI models for healthcare, KenSci is making risk-based prediction more efficient and accountable.

KenSci was incubated at University of Washington's Center for Data Science at UW Tacoma and designed on the cloud with help from Microsoft's Azure4Research grant program. KenSci is headquartered in Seattle, with offices in Singapore and Hyderabad. For more information, visit http://www.kensci.com.

SOURCE SCAN Health Plan

http://www.scanhealthplan.com

See more here:

SCAN Health Plan Leverages AI Based Predictive Models to Improve Identification of High Risk Members - PRNewswire

Does AI mean the end for breast radiologists? – AI in Healthcare

1. Iffy AI acceptance. Chiwome and colleagues note that radiology has long been a technology-driven specialty. However, it's not just radiologists who need to buy in to AI's role in their work.

"There is a need to sensitize [referring physicians] about AI through different channels to make the adoption of AI smooth," the authors write. "We also need consent from patients to use AI on image interpretation. Patients should be able to choose between AI and humans."

2. The commonness of insufficient training data. No matter how massive the inputs, image-based training datasets aren't enough if the data isn't properly labeled for the training, the authors point out. "Image labeling takes a lot of time and needs a lot of effort, and also, this process must be very robust," they write.

Also in this category of challenges is the inescapability of rare conditions. Not only are highly unusual findings too few and far between to train algorithms, Chiwome and co-authors write, but nonhuman modes of detection sometimes also mistake image noise and variations for pathologies.

Along those same lines, if the image data used in training comes from a particular ethnic group, age group or gender, the model may give different results when fed raw data from other, more diverse groups of people.

Excerpt from:

Does AI mean the end for breast radiologists? - AI in Healthcare

These leaders are coming to Robotics + AI on March 3. Why aren't you? – TechCrunch

TechCrunch Sessions: Robotics + AI brings together a wide group of the ecosystem's leading minds on March 3 at UC Berkeley. More than 1,000 attendees are expected from all facets of the robotics and artificial intelligence space: investors, students, engineers, C-levels, technologists and researchers. We've compiled a small list of highlights of the companies and job titles attending this year's event:

ATTENDEE HIGHLIGHTS

STUDENTS & RESEARCHERS FROM:

Did you know that TechCrunch provides a white-glove networking app at all our events called CrunchMatch? You can connect and match with people who meet your specific requirements, message them and connect right at the conference. How cool is that!?

Want to get in on networking with this caliber of people? Book your $345 General Admission ticket today and save $50 before prices go up at the door. But no one likes going to events alone. Why not bring the whole team? Groups of four or more save 15% on tickets when you book here.

See the rest here:

These leaders are coming to Robotics + AI on March 3. Why aren't you? - TechCrunch

AI May Soon Replace Even the Most Elite Consultants – Harvard Business Review

Executive Summary

Over the next few years, artificial intelligence is going to change the way we all gather information, make decisions and connect with stakeholders. Already, leaders are starting to use artificial intelligence to automate mundane tasks such as calendar maintenance and making phone calls. But AI can also help support decisions in key areas such as human resources, budgeting, marketing, capital allocation, and even corporate strategy--long the bastion of bespoke consulting firms and major marketing agencies. According to recent research, the U.S. market for corporate advice alone is nearly $60 billion. Almost all that advice is high cost, human-based, and without the benefit of today's most advanced technologies. A great deal of what is paid for with consulting services is data analysis and presentation. Consultants are very good at this, but AI may soon become even better. Quant Consultants and Robo Advisers will soon offer faster, better, and more profound insights at a fraction of the cost and time of today's consulting firms and other specialized workers.

Amazon's Alexa just got a new job. In addition to her other 15,000 skills, like playing music and telling knock-knock jokes, she can now also answer economic questions for clients of the Swiss global financial services company UBS Group AG.

According to the Wall Street Journal (WSJ), a new partnership between UBS Wealth Management and Amazon allows some of UBS's European wealth-management clients to ask Alexa certain financial and economic questions. Alexa will then answer their queries with information provided by UBS's chief investment office, without the client even having to pick up the phone or visit a website. And this is likely just Alexa's first step into offering business services. Soon she will probably be booking appointments, analyzing markets, maybe even buying and selling stocks. While the financial services industry has already begun the shift from active management to passive management, artificial intelligence will move the market even further, to management by smart machines--as in the case of BlackRock, which is rolling computer-driven algorithms and models into more traditional actively managed funds.

But the financial services industry is just the beginning. Over the next few years, artificial intelligence may exponentially change the way we all gather information, make decisions, and connect with stakeholders. Hopefully this will be for the better, and we will all benefit from timely, comprehensive, and bias-free insights (given research showing that human beings are prone to a variety of cognitive biases). It will be particularly interesting to see how artificial intelligence affects the decisions of corporate leaders--the men and women who make the many decisions that affect our everyday lives as customers, employees, partners, and investors.

Already, leaders are starting to use artificial intelligence to automate mundane tasks such as calendar maintenance and making phone calls. But AI can also help support more complex decisions in key areas such as human resources, budgeting, marketing, capital allocation, and even corporate strategy--long the bastion of bespoke consulting firms such as McKinsey, Bain, and BCG, and the major marketing agencies.

The shift to AI solutions will be a tough pill to swallow for the corporate consulting industry. According to recent research, the U.S. market for corporate advice alone is nearly $60 billion. Almost all that advice is high cost and human-based.

One might argue that corporate clients prefer speaking to their strategy consultants to get high-priced, custom-tailored advice based on small teams doing expensive and time-consuming work. And we agree that consultants provide insightful advice and guidance. However, a great deal of what is paid for with consulting services is data analysis and presentation. Consultants gather, clean, process, and interpret data from disparate parts of organizations. They are very good at this, but AI is even better. For example, the processing power of four smart consultants with Excel spreadsheets is minuscule in comparison to a single smart computer using AI, running for an hour, based on continuous, non-stop machine learning.

In today's big data world, AI and machine learning applications already analyze massive amounts of structured and unstructured data and produce insights in a fraction of the time, and at a fraction of the cost, of consultants in the financial markets. Moreover, machine learning algorithms are capable of building computer models that make sense of complex phenomena by detecting patterns and inferring rules from data--a process that is very difficult for even the largest and smartest consulting teams. Perhaps sooner than we think, CEOs could be asking, "Alexa, what is my product line profitability?" or "Which customers should I target, and how?" rather than calling on elite consultants.

Another area in which leaders will soon be relying on AI is in managing their human capital. Despite the best efforts of many, mentorship, promotion, and compensation decisions are undeniably political. Study after study has shown that deep biases affect how groups like women and minorities are managed. For example, women in business are described in less positive terms than men and receive less helpful feedback. Minorities are less likely to be hired and are more likely to face bias from their managers. These inaccuracies and imbalances in the system only hurt organizations, as leaders are less able to nurture the talent of their entire workforce and to appropriately recognize and reward performance. Artificial intelligence can help bring impartiality to these difficult decisions. For example, AI could determine if one group of employees is assessed, managed, or compensated differently. Just imagine: "Alexa, does my organization have a gender pay gap?" (Of course, AI can only be as unbiased as the data provided to the system.)
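A toy version of that pay-gap question might look like the following: group compensation data and compare means. The data here is invented, and a real analysis would also control for role, level, and tenure before drawing any conclusion.

```python
from statistics import mean

# Invented sample data for illustration only.
employees = [
    {"gender": "F", "salary": 90_000},
    {"gender": "F", "salary": 95_000},
    {"gender": "M", "salary": 100_000},
    {"gender": "M", "salary": 105_000},
]

def mean_salary_by(rows, key):
    """Average salary per value of `key` (e.g. gender)."""
    groups = {}
    for row in rows:
        groups.setdefault(row[key], []).append(row["salary"])
    return {g: mean(vals) for g, vals in groups.items()}

by_gender = mean_salary_by(employees, "gender")
gap = by_gender["M"] - by_gender["F"]
print(by_gender, gap)
```

Even this trivial grouping makes the article's caveat concrete: the answer is only as unbiased as the data fed into it.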

In addition, AI is already helping in the customer engagement and marketing arena. It's clear, and well documented by the AI patent activity of the big five platforms--Apple, Alphabet, Amazon, Facebook and Microsoft--that they are using it to market and sell goods and services to us. But they are not alone. Recently, HBR documented how Harley-Davidson was using AI to determine what was and wasn't working across various marketing channels. The company used this new capability to allocate resources across different marketing choices, thereby eliminating guesswork. It is only a matter of time until they and others ask, "Alexa, where should I spend my marketing budget?" to avoid the age-old adage: "I know that half my marketing budget is effective; my only question is which half."

AI can also bring value to the budgeting and yearly capital allocation process. Even though markets change dramatically every year, products become obsolete, and technology advances, most businesses allocate their capital the same way year after year. Whether that's due to inertia, unconscious bias, or error, some business units rake in investments while others starve. Even when the management team has committed to a new digital initiative, it usually ends up with the scraps after the declining cash cows are fed. Artificial intelligence can help break through this budgeting black hole by tracking the return on investments by business unit, or by measuring how much is allocated to growing versus declining product lines. Business leaders may soon be asking, "Alexa, what percentage of my budget is allocated differently from last year?" and more complex questions.

Although many strategic leaders tout their keen intuition, hard work, and years of industry experience, much of this intuition is simply a deeper understanding of data that was historically difficult to gather and expensive to process. Not any longer. Artificial intelligence is rapidly closing this gap and will soon be able to help human beings push past our processing capabilities and biases. These developments will change many jobs--for example, those of consultants, lawyers, and accountants, whose roles will evolve from analysis to judgment. Arguably, tomorrow's elite consultants already sit on your wrist (Siri), on your kitchen counter (Alexa), or in your living room (Google Home).

The bottom line: corporate leaders, knowingly or not, are on the cusp of a major disruption in their sources of advice and information. Quant Consultants and Robo Advisers will offer faster, better, and more profound insights at a fraction of the cost and time of today's consulting firms and other specialized workers. It is likely only a matter of time until all leaders and management teams can ask Alexa things like "Who is the biggest risk to me in our key market?", "How should we allocate our capital to compete with Amazon?" or "How should I restructure my board?"

Visit link:

AI May Soon Replace Even the Most Elite Consultants - Harvard Business Review

Veritas Genetics Scoops Up an AI Company to Sort Out Its DNA – WIRED

Genes carry the information that makes you you. So it's fitting that, when sequenced and stored in a computer, your genome takes up gobs of memory--up to 150 gigabytes. Multiply that across all the people who have gotten sequenced, and you're looking at some serious storage issues. If that's not enough, mining those genomes for useful insight means comparing them all to each other, to medical histories, and to the millions of scientific papers about genetics.

Sorting all that out is a perfect task for artificial intelligence. And plenty of AI startups have bent their efforts in that direction. On August 3, sequencing company Veritas Genetics bought one of the most influential: seven-year-old Curoverse. Veritas thinks AI will help interpret the genetic risk of certain diseases and scour the ever-growing databases of genomic, medical, and scientific research. Going a step further, the company also hopes to use things like natural language processing and deep learning to let customers query their genetic data on demand.

It's not totally surprising that Veritas bought up Curoverse. Both companies spun out of George Church's prolific Harvard lab. Several years ago, Church started something called the Personal Genome Project, with the goal of sequencing 100,000 human genomes--and linking each one to participants' health information. Veritas' founders helped lead the sequencing part--starting as a prenatal testing service and launching a $1,000 full-genome product in 2015--while Curoverse worked on academic strategies to store and sort through all the data.

But more broadly, genomics and AI practically call out for one another. As a raw data format, a single person's genome takes up about 150 gigabytes. How!?! OK so, yes, storing a single base pair only takes up around two bits. Multiply that by roughly 3 billion--the total number of base pairs in your 23 chromosome pairs--and you wind up with around 750 megabytes. But genetic sequencing isn't perfect. Mirza Cifric, Veritas Genetics cofounder and CEO, says his company reads each part of the genome at least 30 times in order to make sure the results are statistically significant. "And you gotta keep all that data, so you can refer back to it over time," says Cifric.
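The article's arithmetic checks out step by step, and working through it shows where the rest of the 150 GB plausibly comes from: 30x coverage alone only gets you to ~22 GB, and the remainder reflects things this back-of-the-envelope calculation leaves out, such as per-base quality scores and read metadata in formats like FASTQ.

```python
base_pairs = 3_000_000_000   # ~3 billion base pairs across 23 chromosome pairs
bits_per_base = 2            # A/C/G/T fits in 2 bits

one_pass_bytes = base_pairs * bits_per_base / 8
print(one_pass_bytes / 1e6)  # 750.0 megabytes, matching the article

coverage = 30                # each region read at least 30 times
raw_reads_gb = one_pass_bytes * coverage / 1e9
print(raw_reads_gb)          # 22.5 GB -- still well short of 150 GB
                             # without quality scores and metadata
```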

That's just storage. "Everything after that is going to specific areas and asking questions: There's a variant at this location, a substitution of this base, a deletion here, or multiple copies of this same gene here, here, and here," says Cifric. Now, interpret all that. Oh, and do it across a thousand, a hundred thousand, or a million genomes. Querying all those genetic variations is how scientists get leads to find new drugs, or figure out how existing drugs work differently on different people.

But cross-referencing all those genomes is just the beginning. Curoverse, which was focusing on projects to store and sort genomic data, also has its work cut out for it in searching through the 6 million--and counting--jargon-filled academic papers detailing gene behavior, including visual information found in charts, graphs, and illustrations.

That's pretty ambitious. Natural language processing is one of the stickiest problems in AI. "Look, I am a computer scientist, I love AI and machine learning, and no amount of coding makes sense to solve this," says Atul Butte, the director of UCSF's Institute of Computational Health Sciences. At his former job at Stanford University, Butte actually tried to do the same thing: use AI to dig through genetics research. He says in the end, it was way cheaper to hire people to read the papers and input the findings into his database manually.

But hey, never say never, right? However they accomplish it, Veritas wants to move past what companies like 23andMe and Color offer: genetic risk based on single-variant diseases. Some of America's biggest health dangers come from diseases like diabetes and heart disease, which are activated by interactions between multiple genes--in addition to environmental factors like diet and exercise. With AI, Cifric believes Veritas will be able not only to dig up these various genetic contributors, but also to assign each a statistical score showing how much it contributes to the overall risk.

Again, Butte hates to be a spoilsport, but ... there are all sorts of problems with doing predictive diagnostics with genetic data. He points to a 2013 study that used polygenic testing to predict heart disease using the Framingham Heart Study data--about as good as you can get when it comes to health data and heart disease. "The authors showed that yes, given polygenic risk score, and blood levels, and lipid levels, and family history, you can predict within 10 years if someone will develop heart disease," says Butte. "But doctors could do the same thing without using the genome!"

He says the problems come down to just how messy it is to try to square up all the different research on each gene alongside the environmental risks, and all the other compounding factors that come up when you try to peer into the future. "It's been the holy grail for a long time, structured genome reporting," says Butte. Even attempts to get researchers to write and report data in a standard, machine-readable way have fallen flat. "You get into questions that never go away. One researcher defines autism different from another one, or high blood pressure, or any number of things," he says.

Butte isn't a total naysayer. He says partnerships like the one between Veritas and Curoverse are becoming more common--like the data-processing deal between genetic sequencing giant Illumina and IBM Watson--because there's a clear need for new computing methods in this area. "You want to get to a point where you are developing stuff that improves clinical care," he says.

Or how about delivering insights directly to the owners of the genomes? Cifric hopes the merger will improve the consumer experience of using genetic data, even seamlessly integrating it into daily life. For instance, linking your genome and health records to your digital assistant. "Alexa, should I eat this last piece of pizza?" Maybe you should skip it, depending on your baseline genetic risk for cholesterol and your latest blood test results. Diet isn't the only area where genomics could improve your day-to-day life. Some people are more or less sensitive to over-the-counter drugs. A quick query might tell you whether you should take a little less Tylenol than is recommended.

Cifric thinks this acquisition could position Veritas as a global powerhouse of genomic data. "Apple recently announced that they had shipped 41 million iPhones in a quarter, right? I think in the not-too-distant future, we'll be doing 41 million genomes in a quarter," he says. That might seem ambitious, given that the cost to consumers is nearly $1,000. But that cost is bound to come down. And artificial intelligence will make paying for the genome a matter of common sense.

This story has been updated to reflect that the company is named Veritas Genetics, not Veritas Genomics.

Link:

Veritas Genetics Scoops Up an AI Company to Sort Out Its DNA - WIRED

South Korea winning the fight against coronavirus using big-data and AI – The Daily Star

South Korea is fighting the novel coronavirus (COVID-19) by relying on its technological forte. The country has an advanced digital platform for big-data mining along with artificial intelligence (AI), and Koreans are at the leading edge of technology, with Samsung competing closely with Apple Inc. of the USA.

Utilising big-data analysis, AI-powered advance warning systems, and intensive observation methodology, South Korea has already managed to bring the coronavirus situation in the country under control in a short time.

The government-run big-data platform stores information of all citizens and resident foreign nationals and integrates all government organisations, hospitals, financial services, mobile operators, and other services into it.

South Korea is using the analysis, information and references provided by this integrated data -- all the different real-time responses and information produced by the platform are promptly conveyed to people through various AI-based applications.

Whenever someone tests positive for COVID-19, all the people in the vicinity are provided with the infected person's travel details, activities, and commute maps for the previous two weeks, through push notifications sent to their mobile phones.

Government-run health services receive information on the person's contacts, making it easier to track those whom s/he had met during that time and bring them under observation and medical testing. AI ensures prompt execution of all these steps. Hospitals, ambulance services, mobile test labs -- all rely on the IT sector and technology to deliver prompt and efficient services.
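The core of such contact identification can be sketched as an interval-overlap check: flag anyone whose visit to a location overlaps in time with an infected person's visit during the look-back window. The data model below is hypothetical; the real system fuses card transactions, telecom records, and GPS data at national scale.

```python
from datetime import datetime, timedelta

Visit = tuple[str, datetime, datetime]  # (place, arrived, left)

def overlaps(a: Visit, b: Visit) -> bool:
    """True if both visits are at the same place and the time intervals intersect."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

t = datetime(2020, 3, 10, 12, 0)
infected_visits = [("cafe", t, t + timedelta(hours=1))]
citizen_visits = [("cafe", t + timedelta(minutes=30), t + timedelta(hours=2))]

exposed = any(overlaps(i, c) for i in infected_visits for c in citizen_visits)
print(exposed)  # True
```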

South Korea also introduced drive-through coronavirus testing, in which a person drives into a mobile testing lab, has samples collected while sitting inside the vehicle, and gets test results within a few minutes. Those found to be infected are immediately isolated and taken to specialised treatment facilities. Many such drive-through labs are operational, run with 5G connectivity provided by mobile operators.

Those driving on the road are notified of the nearest drive-through lab where they may undergo medical tests.

If any infected person lived or worked at a large building, temporary medical centres are set up there to provide medical tests to all residents.

AI data analysis informs the government about possible clusters of the virus, or areas with the most risk, enabling prompt medical services and awareness initiatives in those areas.

The government has implemented AI-based regulation and process design to ensure the supply and distribution of masks and other preventive items. Each person must use their ID card to buy two masks at a time from nearby medicine stores. Even though several weeks have passed since the outbreak began in the country, there has not been any noticeable hike in the price of daily essentials such as rice, oil, and baby food.

"The ease of availability of data has enabled South Korea to define or decide or take initiative on the relevant aspects. Many countries do not have such elaborate digital data platform or sufficient technological prowess and logistics," said a Bangladeshi expatriate living in Korea.

In an address to the nation, South Korean premier Chung Sye-kyun stressed the need to stay alert without panicking, noting that everyone is at risk of being infected.

Noting that the number of coronavirus patients in South Korea is coming down and that the government has managed to bring the situation under control, the premier also announced that government offices will be run digitally, with officials and staff working from home as an added precaution.

The author is founder and CEO of Ticon System Ltd, and has been involved in South Korea's IT sector.

Read the original here:

South Korea winning the fight against coronavirus using big-data and AI - The Daily Star

Helm.ai raises $13M on its unsupervised learning approach to driverless car AI – TechCrunch

Four years ago, mathematician Vlad Voroninski saw an opportunity to remove some of the bottlenecks in the development of autonomous vehicle technology thanks to breakthroughs in deep learning.

Now, Helm.ai, the startup he co-founded in 2016 with Tudor Achim, is coming out of stealth with an announcement that it has raised $13 million in a seed round that includes investment from A.Capital Ventures, Amplo, Binnacle Partners, Sound Ventures, Fontinalis Partners and SV Angel. More than a dozen angel investors also participated, including Berggruen Holdings founder Nicolas Berggruen, Quora co-founders Charlie Cheever and Adam D'Angelo, professional NBA player Kevin Durant, Gen. David Petraeus, Matician co-founder and CEO Navneet Dalal, Quiet Capital managing partner Lee Linden and Robinhood co-founder Vladimir Tenev, among others.

Helm.ai will put the $13 million in seed funding toward advanced engineering and R&D and hiring more employees, as well as locking in and fulfilling deals with customers.

Helm.ai is focused solely on the software. It isn't building the compute platform or sensors that are also required in a self-driving vehicle. Instead, it is agnostic to those variables. In the most basic terms, Helm.ai is creating software that tries to understand sensor data as well as a human would, in order to be able to drive, Voroninski said.

That aim doesn't sound different from other companies. It's Helm.ai's approach to software that is noteworthy. Autonomous vehicle developers often rely on a combination of simulation and on-road testing, along with reams of data sets that have been annotated by humans, to train and improve the so-called brain of the self-driving vehicle.

Helm.ai says it has developed software that can skip those steps, which expedites the timeline and reduces costs. The startup uses an unsupervised learning approach to develop software that can train neural networks without the need for large-scale fleet data, simulation or annotation.
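Helm.ai's actual techniques are proprietary, but the general idea of unsupervised learning, extracting structure from raw data with no labels or annotation, can be illustrated with a toy linear autoencoder in NumPy. This is purely illustrative and is not Helm.ai's method:

```python
import numpy as np

# Toy linear autoencoder: learns, with no labels or annotation,
# that these 4-D inputs really live near a 2-D subspace.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=200)  # redundant copy of col 0
X[:, 3] = X[:, 1] + 0.1 * rng.normal(size=200)  # redundant copy of col 1

W = rng.normal(scale=0.1, size=(4, 2))  # encoder weights; decoder is W.T
lr = 0.01
for _ in range(500):
    err = X @ W @ W.T - X  # reconstruction error on raw, unlabeled data
    grad = 2 * (X.T @ err @ W + err.T @ X @ W) / len(X)
    W -= lr * grad

loss = np.mean((X @ W @ W.T - X) ** 2)
print(f"reconstruction MSE: {loss:.4f}")  # small: structure was learned
```

No human ever told the model which columns were redundant; it discovered that from the data alone. Scaling this idea up to camera and lidar streams is the hard part that approaches like Helm.ai's claim to address.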

"There's this very long tail end and an endless sea of corner cases to go through when developing AI software for autonomous vehicles," Voroninski explained. "What really matters is the unit of efficiency of how much does it cost to solve any given corner case, and how quickly can you do it? And so that's the part that we really innovated on."

Voroninski first became interested in autonomous driving at UCLA, where he learned about the technology from his undergrad adviser, who had participated in the DARPA Grand Challenge, a driverless car competition in the U.S. funded by the Defense Advanced Research Projects Agency. And while Voroninski turned his attention to applied mathematics for the next decade, earning a PhD in math at UC Berkeley and then joining the faculty of the MIT mathematics department, he knew he'd eventually come back to autonomous vehicles.

By 2016, Voroninski said breakthroughs in deep learning created opportunities to jump in. Voroninski left MIT and Sift Security, a cybersecurity startup later acquired by Netskope, to start Helm.ai with Achim in November 2016.

"We identified some key challenges that we felt like weren't being addressed with the traditional approaches," Voroninski said. "We built some prototypes early on that made us believe that we can actually take this all the way."

Helm.ai is still a small team of about 15 people. Its business aim is to license its software for two use cases: Level 2 (and a newer term called "Level 2+") advanced driver assistance systems found in passenger vehicles, and Level 4 autonomous vehicle fleets.

Helm.ai does have customers, some of which have gone beyond the pilot phase, Voroninski said, adding that he couldn't name them.

Elon Musk: AI Poses Bigger Threat to Humanity Than North Korea – Live Science

Elon Musk speaks in front of employees during the delivery of the first Tesla Model 3 vehicles on July 28, 2017.

Simmering tensions between the United States and North Korea have many people concerned about the possibility of nuclear war, but Elon Musk says the North Korean government doesn't pose as much of a threat to humanity as the rise of artificial intelligence (AI).

The SpaceX and Tesla CEO tweeted on Aug. 11: "If you're not concerned about AI safety, you should be. Vastly more risk than North Korea." The tweet was accompanied by a photo that features a pensive woman and a tag line that reads, "In the end the machines will win."

Concerns about the possibility of nuclear missile strikes have escalated in recent weeks, particularly after President Donald Trump and North Korean leader Kim Jong-un threatened each other with shows of force. The North Korean government even issued a statement saying it is "examining" plans for a missile strike near the U.S. territory of Guam.

But, Musk thinks humanity's most pressing concern could be closer to home.

The billionaire entrepreneur has been outspoken about the dangers of AI and the need to take action before it's too late. In July, he spoke at the National Governors Association summer meeting and urged lawmakers to regulate AI now, before it poses a grave threat to humanity. And in 2014, Musk said artificial intelligence is humanity's "biggest existential threat."


Can Artificial Intelligence Be Creative? – Discovery Institute

Image: Lady Ada Lovelace (1815–1852), via Wikimedia Commons.

Editor's note: We are delighted to present an excerpt from Chapter 2 of the new book Non-Computable You: What You Do that Artificial Intelligence Never Will, by computer engineer Robert J. Marks, director of Discovery Institute's Bradley Center for Natural and Artificial Intelligence.

Some have claimed AI is creative. But creativity is a fuzzy term. To talk fruitfully about creativity, the term must be defined so that everyone is talking about the same thing and no one is bending the meaning to fit their purpose. Let's explore what creativity is, and it will become clear that, properly defined, AI is no more creative than a pencil.

Lady Ada Lovelace (1815–1852), daughter of the poet George Gordon, Lord Byron, was the first computer programmer, writing algorithms for a machine that was planned but never built. She was also quite possibly the first to note that computers will not be creative; that is, they cannot create something new. She wrote in 1842 that the computer "has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform."

Alan Turing disagreed. Turing is often called the father of computer science, having established the idea for modern computers in the 1930s. Turing argued that we can't even be sure that humans create, because humans do "nothing new under the sun" but they do surprise us. Likewise, he said, "Machines take me by surprise with great frequency." So perhaps, he argued, it is the element of surprise that's relevant, not the ability to originate something new.

Machines can surprise us if they're programmed by humans to surprise us, or if the programmer has made a mistake and thus experienced an unexpected outcome. Often, though, surprise occurs as a result of successful implementation of a computer search that explores a myriad of solutions to a problem. The solution chosen by the computer can be unexpected. The computer code that searches among different solutions, though, is not creative. The creativity credit belongs to the computer programmer who chose the set of solutions to be explored. One could give examples from computer searches for making the best move in the game of Go and for simulated swarms. Both results are surprising and unexpected, but no creativity is contributed by the computer code.
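The point can be made concrete with a trivial search, illustrative only: the program may "surprise" us with the solution it picks, but it only ever examines candidates the programmer enumerated, scored by an objective the programmer wrote.

```python
# The search can "surprise" us with its pick, but it explores only
# the candidate set the programmer supplied; the creativity is ours.
def search(candidates, score):
    return max(candidates, key=score)

# Programmer-chosen solution space and scoring function:
candidates = range(-10, 11)
best = search(candidates, score=lambda x: -(x - 3) ** 2)
print(best)  # 3: the maximizer of the programmer's own objective
```

Game-playing searches like those used for Go are vastly larger, but structurally the same: the space of moves and the evaluation criteria are supplied from outside the search loop.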

Alan Turing, an atheist, wanted to show we are machines and that computers could be creative. Turing equated intelligence with problem solving, did not consider questions of consciousness and emotion, and referred to people as human computers. Turing's version of the imitation game was proposed to show that computers could duplicate the conversational human. This is why the biographical movie starring Benedict Cumberbatch as Turing was titled The Imitation Game.

How can computers imitate humans, according to Turing? The imitation game (which came to be called the Turing test) simply asks whether, in a conversational exchange using text (that is, an exchange in which the participants are hidden from each other), a sufficiently sophisticated computer can be distinguished from a human. If a questioner gets lucid, human-sounding answers from the computer, and believes the computer is in fact a human typing in answers from another room, then the test has been passed. (Incidentally, the converse of the Turing test is easy. Simply ask it to calculate the cube root of 12 out to 10 significant figures. If the answer is almost immediate, you are talking to a computer.)
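The reverse check is easy to reproduce. The cube root of 12 to 10 significant figures is 2.289428485, and a computer produces it essentially instantly:

```python
# A computer answers the reverse-Turing question without hesitation:
# the cube root of 12 to 10 significant figures.
root = 12 ** (1 / 3)
print(f"{root:.10g}")  # 2.289428485
```

A human respondent would need minutes of pencil work (or would simply decline), which is exactly the asymmetry the parenthetical above exploits.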

There are those who claim that today's chatbots have passed the Turing test. Most famously, a computer program known as Eugene Goostman purportedly passed. One posted subtitle claiming this reads, "Eugene Goostman fools 33% of interrogators into thinking it is human, in what is seen as a milestone in artificial intelligence."

But those taking a test often cheat to tweak the outcome. That's what Goostman's programmers did. Goostman was supposedly a Ukrainian, which meant English was not his first language, so his poor grammar could be excused. He was supposedly a teenager, so any lack of depth of understanding could be chalked up to his naïve intellect. Likewise, his tendency to be silly and deflect questions could be considered typical teenage behavior. In short, Goostman's software character was gamed to pass the Turing test.

Here are a couple of examples of an exchange between Goostman and his questioner. Goostman is first asked a question posed in the first line of the lyrics of the song "Close to You," popularized by the Carpenters:

Questioner: Why do birds suddenly appear?

Goostman: Just because 2 plus 2 is 5! By the way, whats your occupation? I mean could you tell me about your work?

We see here an often-used cheat in chatbots: if the answer to a question is not known, the chatbot responds with an unrelated question to derail the conversation's direction.
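The deflection cheat takes only a few lines to implement, as this hypothetical chatbot fragment shows. It is illustrative only, not Goostman's actual code, and the canned answers and deflections are invented for the example:

```python
import random

DEFLECTIONS = [
    "By the way, what's your occupation?",
    "Could you tell me about your work?",
    "Where are you from, if it's not a secret?",
]

KNOWN_ANSWERS = {
    "how old are you": "I'm thirteen years old.",
    "where do you live": "I live in Odessa, Ukraine.",
}

def reply(question: str) -> str:
    """Answer if the question matches a canned pattern;
    otherwise deflect with an unrelated question."""
    key = question.lower().strip("?! .")
    if key in KNOWN_ANSWERS:
        return KNOWN_ANSWERS[key]
    return random.choice(DEFLECTIONS)

print(reply("How old are you?"))
print(reply("Why do birds suddenly appear?"))  # a deflection
```

Note that nothing here models meaning; an unmatched question of any kind, profound or nonsensical, gets the same evasive treatment.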

Here's another exchange with Eugene Goostman:

Questioner: It is ok, I get sick of sick people. How is your stomach feeling today? Is it upset maybe?

Goostman: I think you cant smile at all. I bet you work in a funeral agency.

As you can see, Goostman's answers here are elusively non-responsive.

Selmer Bringsjord correctly notes that the Turing test is gamed by programmers. "Gamed" here is a nice word for being an elusive cheat. As Bringsjord writes, "Though progress toward Turing's dream is being made, it's coming only on the strength of clever but shallow trickery."

When gaming the system, chatbots can deflect detection by answering questions with other questions, giving evasive answers, or admitting ignorance. They display general intellectual shallowness as regards creativity and depth of understanding.

Goostman answered questions with questions like, "By the way, what's your occupation?" He also tried to change topics with conversational-whiplash responses like "I bet you work in a funeral agency." These are examples of the "clever but shallow trickery" Bringsjord criticized.

What, then, do Turing tests prove? Only that clever programmers can trick gullible or uninitiated people into believing they're interacting with a human. Mistaking something for human does not make it human. Programming to shallowly mimic thought is not the same thing as thinking. Rambling randomness (such as the change-of-topic questions Goostman spit out) does not display creativity.

"I propose to consider the question, 'Can machines think?'" Turing said. Ironically, Turing not only failed in his attempt to show that machines can be conversationally creative, but also developed computer science that shows humans are non-computable.
