The origins of AI in healthcare, and where it can help the industry now – Healthcare IT News

Healthcare is at an inflection point. Machine learning and data science are becoming key components in developing predictive and prescriptive analytics. AI-powered applications are transforming the health sector by reducing spend, improving patient outcomes and increasing accessibility to care.

But where did AI in healthcare stem from? And what factors are driving AI use in healthcare today? Dr. Taha Kass-Hout, general manager for healthcare and AI, and chief medical officer at Amazon Web Services, offered some historical perspective during a HIMSS20 Digital educational session, "Healthcare's Prescription for Transformation: AI."

"In medicine, at the end of the day, we want to know what sort of patient has a disease and what disease a patient has, so predicting what each patient needs and delivering the best care for them, that's ultimately the definition of precision health or precision medicine," Kass-Hout said.


"The intersection of medicine and AI is really not a new concept," he added. Many have heard of a 1979 project that used artificial intelligence as it applied to infections such as meningitis and sepsis.

"AI in medicine even goes back to 1964 with Eliza, the very first chatbot, which was a conversational tool that recreated the conversation between a psychotherapist and a patient," he explained. "That also was the early days of applying artificial intelligence and rules-based systems on the interaction between patients and their caregivers," he added.

"But up until three years ago, deep learning, when it comes to the most advanced algorithms, was never mentioned in The New England Journal of Medicine or The Lancet or even JAMA," he noted.

"Today, if you're looking at PubMed, it cites over 12,000 publications with deep learning, over 50,000 with machine learning, and over 100,000 pieces of scientific healthcare literature with artificial intelligence," he said, noting that most of that work is skewed toward the last few years.

Looking at this literature, one sees that most of the applications seen today of artificial intelligence in healthcare have involved pattern recognition, prediction and natural language understanding, he added.

"If you look at the overall value of why AI is really important, especially in our current situation with the global pandemic we live in, 50% of the world's population has no access to essential healthcare," Kass-Hout stated.

"If you look at the United States alone, 10% of the population has no insurance and 30% of the working population are underinsured, and insurance costs per individual have reached over $20,000-$30,000 in the last year alone."

So the healthcare industry also should look at AI as it relates to the way the industry collects information for medical records, he suggested. For example, the way it collects this information is error-prone, with 30% of medical errors causing more than 500,000 deaths per year.

On a related note, when it comes to the need for AI, there is a projected shortage in the U.S. of more than 120,000 clinicians over the next decade, he added.

"So this is really where, if we think about more of this global view of the problem as well as the population, we can see where AI and advancements in AI can really help us overcome many things; for example, performing tasks that doctors can't," said Kass-Hout, using large data sets and modern computational tools like deep learning and the power of the cloud to recognize patterns too subtle for any human to discern.

In the HIMSS20 Digital educational session, attendees can hear directly from four experts on how and why they are focusing on some of the industry's biggest opportunities and where AI can help tackle both financial and operational inefficiencies that plague global health systems today.

Kass-Hout is joined by Karen Murphy, RN, executive vice president and chief innovation officer at Geisinger; Dr. Marc Overhage, former vice president of intelligence strategy and chief medical informatics officer at Cerner; and Stefan Behrens, CEO and co-founder of Gyant, a vendor of an AI-powered virtual assistant.


Predicting chaos using aerosols and AI – Washington University in St. Louis Newsroom

If a poisonous gas were released in a bioterrorism attack, the ability to predict the path of its molecules through turbulent winds, temperature changes and unstable buoyancies could mean life or death. Understanding how a city will grow and change over a 20-year period could lead to more sustainable planning and affordable housing.

Deriving equations to solve such problems (adding up all of the relevant forces) is, at best, difficult to the point of near-impossibility and, at worst, actually impossible. But machine learning can help.

Using the motion of aerosol particles through a system in flux, researchers from the McKelvey School of Engineering at Washington University in St. Louis have devised a new model, based on a deep learning method, that can help researchers predict the behavior of chaotic systems, whether those systems are in the lab, in the pasture or anywhere else.

"That is the beauty of aerosols," said Rajan Chakrabarty, assistant professor of energy, environmental and chemical engineering. "It's beyond one discipline; it's just fundamental particles floating in air, and you just observe the chaos."

The research was published as a cover article in the Journal of Aerosol Science.

Chakrabarty and his team (postdoctoral researcher Pai Liu and Jingwei Gan, then a PhD candidate at the Illinois Institute of Technology) tested two deep learning methods and determined that the generative adversarial network produced the most accurate outcomes. This kind of AI is first fed information about a real-world process, then, based on that data, it creates a simulation of that process.

Motivated by game theory, a generative adversarial network receives both the ground truth (real) and randomly generated data (fake) and tries to determine which is real and which is fake.

This process repeats many times, providing feedback, and the system as a whole gets continually better at generating data that matches the data on which it was trained.
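
For readers who want to see the mechanics, here is a minimal sketch of that real-vs-fake training loop in PyTorch. The network sizes, the flattened 3-D trajectory shape, and the optimizer settings are illustrative assumptions, not the published model.

```python
# Minimal sketch of the adversarial training loop described above (PyTorch).
# Network sizes and the 3-D trajectory length are illustrative assumptions.
import torch
import torch.nn as nn

TRAJ_DIM = 3 * 64        # assumed: 64 time steps of (x, y, z) positions, flattened
NOISE_DIM = 100

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, TRAJ_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(TRAJ_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),        # raw score: higher means "looks real"
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_trajectories):
    """One round of the real-vs-fake game on a batch of measured trajectories."""
    batch = real_trajectories.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Discriminator: learn to tell measured trajectories from generated ones.
    fake = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = bce(discriminator(real_trajectories), real_labels) + \
             bce(discriminator(fake), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: learn to produce trajectories the discriminator accepts as real.
    fake = generator(torch.randn(batch, NOISE_DIM))
    g_loss = bce(discriminator(fake), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each call to train_step plays one round of the game; over many rounds the generator's output distribution drifts toward the statistics of the measured trajectories.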

It is computationally expensive to describe the chaotic motion of an aerosol particle through a turbulent system, so Chakrabarty and his team needed real data (a real example) to train their system. This is where aerosols came in.

The team used the buoyancy-opposed flame in the Chakrabarty lab to create examples on which the AI could be trained. "In this case, we experimentally added chaos to a system by introducing buoyancy and temperature differences," Chakrabarty said. Then, they turned on a high-speed camera and recorded 3-D trajectory datasets for soot particles as they meandered through, zipped around and shot across the flame.

They trained two kinds of artificial intelligence models with the data from the fire chamber: the variational autoencoder method and a generative adversarial network (GAN). Each model then produced its own simulation. Only the GAN's trajectories mirrored the statistical traits found in the experiments, producing true-to-life simulations of chaotic aerosol particles.

The real-time trajectory of a particle next to the simulated trajectory produced by the GAN

Chakrabarty's deep learning model can do more than simulate where soot, or chemicals, will wind up once released into the atmosphere. "You see many examples of this kind of chaos, from foraging animals, to the transport of atmospheric pollutants and biothreats, to search and rescue strategies," he said.

In fact, the lab is now working with a psychiatrist looking at the efficacy of treatment in children with tic syndrome. "Tics are chaotic," Chakrabarty explained, so the typical clinical trial setup may not be effective in determining a medication's efficacy.

The wide application of this new deep learning model speaks not only to the power of artificial intelligence, but also may say something more salient about reality.

"Chaos, or order, depends on the eye of the beholder," he said. "What this tells you is that there are certain laws that govern everything around us. But they're hidden.

"You just have to uncover them."


AI can determine our motivations using a simple camera – TNW


Silver Logic Labs (SLL) is in the people business. Technically, it's an AI startup, but what it really does is figure out what people want. At first glance they've simply found a better way to do focus groups, but after talking to CEO Jerimiah Hamon we've learned there's nothing simple about the work he's doing.

The majority of AI in the world is being taught to do boring stuff. The machines are learning to analyze data and scrape websites. They're being forced to sew shirts and watch us sleep. Hamon and his team created an algorithm that analyzes the tiniest of human movements, using a camera, and determines what that person is feeling.


Don't worry if your mind isn't blown right now; it takes a little explanation to sink in. Imagine you're trying to determine whether a TV show will be popular with an audience, and you've gathered a group of test-viewers who've just seen your show. How do you know if they're responding honestly, or simply trying to respond in the way they think they should? Hamon told us:

"You have these situations where you're trying to determine how people feel about something that could possibly be considered controversial, or that people might not want to be honest about. You might have a scene with two men kissing each other, or two women. You might have a scene where a dog gets hit by a car in such a way that it's supposed to be funny.

"We'll find, sometimes, people will respond that they didn't like those things, but then when we analyze what they were doing while they were watching it, and we pick up these details and we see they're expressing joy, or arousal, quite often.

"And we're better at predicting whether that show is going to do well based on our insight than if you just go by how people respond to the list of questions."

SLL is trying to solve one of the oldest problems in the world: people lie. In fact, according to the fictional Dr. House, M.D., "Everybody lies." More importantly though, Hamon, who is not at all fictional, told us:

"With our system we find that we get a lot more data. We can use it to watch every second and compare every second to every other second in a way a person watching can't. So when asked 'Can you predict a Nielsen rating?' the answer is yes; the lowest accuracy rating we've got is about 89%, and that's the lowest."

Being able to determine the viability of a TV show, or how people feel about a specific scene in a movie, is a pretty neat trick. The fact that they've adapted the technology to work with almost any laptop camera for survey purposes (such as observing someone watching a video clip at home) is astounding.

Hamon told us that the algorithms work so well his team almost always ends up flunking certain respondents for being under the influence of a substance. A drug-detecting robot that can be employed through any connected camera? That's a little spooky.

SLL does more than provide analytics for TV shows and movies; in fact its ambitions might be some of the highest we've ever seen for an AI company. We asked Hamon how this technology was supposed to be used outside of simply detecting if someone liked something or not:

"I'm very passionate about health care. With this we can identify neural deficits very quickly. We did a lot of research, and it turns out it's proven that if you're going to have a stroke, you will have a series of micro-strokes first. These are undetectable most of the time, sometimes even to the people having them. If you live at a nursing home, for instance, we could have cameras set up, and we could detect those.

"These people might have a one percent change in gait, and we could see that. For example, our system might be able to detect the first of the micro-strokes and signal for help."

The company also wants to change the way law enforcement works. Hamon believes that dash-cams and body-cams that utilize this technology will save lives. He proposed a what-if scenario:

"Say you've got someone running up to a building and there's someone on guard. They might see this person running (who, incidentally, has just lost their baby and needs help) as a threat, maybe due to a lack of training or because they're scared.

"The other side is maybe you see someone running and think they need your help, when in reality they have a pound of explosives in their backpack. We know that how a person moves is different based on how they feel. People make these decisions under extreme pressure, and they're not always right."

There are educational applications as well. The potential to determine exactly how students respond to a teacher, or to tailor a specific lesson to an individual, could help a lot of people, especially those who aren't benefiting from traditional methods.

It's about time someone created an AI that helps us better understand each other in a practical sense, one that might actually save lives.


Comcast credits AI software for handling the pandemic internet traffic crush – VentureBeat


Comcast said investments in artificial intelligence software and network capacity have helped it meet internet traffic demand during the pandemic.

Elad Nafshi, senior vice president for next-generation access networks at Comcast Xfinity, said in an interview with VentureBeat that the nation's internet network has held up during the surge of residential internet traffic from people working at home. But this success wasn't just because of capital spending on fiber-optic networks. Rather, it has depended on a suite of AI and machine-learning software that gives the company visibility into its network, adds capacity quickly when needed, and fixes problems before humans notice them.

Comcast's network is accessible to more than 59 million U.S. homes via 800,000 miles of cable (about 3 times the distance to the moon). Back in March, Comcast said internet traffic had risen 32% because of COVID-19 but assured everyone it had the capacity to handle peak traffic demands in the U.S. The company also saw a 36% increase in mobile data use over Wi-Fi on Xfinity Mobile.

"The first part of the growth was because of work from home," Jan Hofmeyr, chief network officer at the Comcast Technology Center in Philadelphia, said in an interview with VentureBeat. "Things like video conferencing started to drive a lot of traffic. The consumption of video went up significantly. And then with kids being home, you could see playing games going upward. We saw it go up across the board."

But since March and April, the traffic from Comcast's 21 million subscribers has hit a plateau. People are getting out of their homes more, and the initial surge of work-from-home has normalized, Hofmeyr said.

The company normally adds capacity 12 to 18 months ahead of time, with typical plans targeting 45% annual increases in traffic. Since 2017, Comcast has invested $12 billion in the network and added 33,331 new route miles of fiber-optic cable. Those investments have enabled the company to double capacity every 2.5 years, Hofmeyr said.

Above: Comcast executive vice president and chief network officer Jan Hofmeyr.

Image Credit: Comcast

"With COVID-19, we obviously saw a massive surge in the network, and looking back in retrospect the network was highly reliable," Hofmeyr said. "We were able to respond quickly as we saw the spike in traffic. We were able to add capacity without having to take the network down. It was designed for that."

During the initial stages of the pandemic, the new technologies were able to handle regional surges while internet traffic spiked as much as 60%. Nafshi told VentureBeat the network can't handle surges just by getting bigger. In March and April, Comcast added 35 terabits per second of peak capacity to regional networks. And the company added 1,700 100-gigabit links to the core network, compared to 500 in the same months a year earlier.

The company's software, called Comcast Octave, helps manage traffic complexity, working behind the scenes where customers don't notice it. The AI platform was developed by Comcast engineers in Philadelphia. It checks 4,000-plus telemetry data points (such as external network noise, power levels, and other technical issues that can add up to a big impact on performance) on more than 50 million modems across the network every 20 minutes. While invisible, the AI and machine learning tech has played a valuable role over the past several months.
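
Octave itself is proprietary, but the general pattern it describes (periodically scoring each modem's telemetry against its own recent baseline and flagging outliers) can be sketched in a few lines of Python. The field names, thresholds, and history window below are illustrative assumptions, not Comcast's actual implementation.

```python
# Generic sketch of baseline-vs-latest telemetry scoring. Field names and the
# z-score threshold are illustrative assumptions only.
import statistics

def score_modem(history, latest, z_threshold=3.0):
    """Flag any telemetry field whose latest reading is far from its recent history."""
    flagged = {}
    for field, value in latest.items():
        past = history.get(field, [])
        if len(past) < 10:                      # not enough baseline yet
            continue
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past) or 1e-9  # avoid division by zero
        z = abs(value - mean) / stdev
        if z > z_threshold:
            flagged[field] = round(z, 1)
    return flagged

history = {
    "snr_db":        [36.0, 36.2, 36.1, 35.9, 36.3, 36.0, 36.2, 36.1, 35.8, 36.0, 36.1, 36.2],
    "tx_power_dbmv": [45.0, 45.1, 44.9, 45.0, 45.2, 45.1, 45.0, 44.8, 45.0, 45.1, 45.0, 44.9],
}
latest = {"snr_db": 31.5, "tx_power_dbmv": 45.0}   # noise ingress has dropped the SNR
print(score_modem(history, latest))                 # -> {'snr_db': <large z-score>}
```

Run at scale and every few minutes, this kind of scoring is what lets a platform surface a degrading modem before a customer calls in.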

"COVID-19 was a very unique experience for us," said Nafshi. "When you're building networks, you never build for the situation where everyone gets locked up in their room in their homes and suddenly they jump online. Now, that's the new normal. The challenge we are presented with is how to enable our customers to shelter in place and work and be entertained."

Octave is programmed to detect when modems aren't using all the bandwidth available to them as efficiently as possible. Then it automatically adjusts them, delivering substantial increases in speed and capacity. Octave is a new technology, so when COVID-19 hit, Comcast had only rolled it out to part of the network.

To meet the sudden demand, a team of about 25 Octave engineers worked seven-day weeks to reduce the deployment process from months to weeks. As a result, customers experienced a nearly 36% increase in capacity just as they were using more bandwidth than ever before for working, streaming, gaming, and videoconferencing.

"We've had a fair amount of experience already looking at data patterns and acting on it," Nafshi said. "We had an interactive platform deployed that we were leaning on. We looked at the data network conditions and decided what knobs we need to turn on our infrastructure in order to really optimize how packets get delivered to the home."

Comcast took the data it had collected and put it into algorithmic solutions to predict where interference could disrupt networks or trouble points might appear.

"We have to turn the knobs so that we optimize delivery to your house, which would not be the same as the delivery to my home," Nafshi said. "We provide you with much more reliable service by detecting the patterns that lead up to breakage and then have the network self-heal based on those patterns. We're making that completely transparent to the customer. The network can self-heal autonomously in a self-feedback loop. It's a seamless platform for the customer."

Above: The Comcast Technology Center in Philadelphia.

Image Credit: Comcast

Before introducing Comcast Octave, the company also deployed its Smart Network Platform. Developed by Comcast engineers, this suite of software tools automates core network functions. As a result of this investment, Comcast was able to dramatically cut down the number of outages customers experience and their duration. "The outages are now lasting a matter of minutes sometimes, compared to hours before," said Noam Raffaelli, senior vice president of network and communications engineering at Comcast Xfinity, in an interview with VentureBeat.

"We are trying to benefit from innovation on software to basically drive our outcomes and our operational key performance indicators (KPIs) down so things like outage minutes or minutes to repair go down," said Raffaelli. "We look at data across our network and use data science to understand trends and do correlations between events we see on the network. We have telemetry and automation, so we can operate the equipment without the manual interference of our engineers. We mitigate issues before there is any degradation in the networks."

On top of that, the equipment is more secure and more automated, Raffaelli said. Comcast has also been able to figure out how to build redundancies into the network so it can hold up in the case of accidents, such as a backhoe operator cutting a fiber-optic cable.

"This gives us an unprecedented real-time view of our network and unprecedented insights into what the customer experience is," Raffaelli said. "We've had a double-digit improvement in outage minutes and repair. We are building redundant links across the network."

A tool called NetIQ uses machine learning to scan the core network continuously, making thousands of measurements every hour. Before NetIQ, Comcast would often find out about a service-impacting issue like a fiber cut when it started seeing service degradation or getting customer calls.

With NetIQ in place, Comcast can see an outage instantly. The company has reduced the average amount of time it takes to detect a potentially service-impacting issue on the core network from 90 minutes to less than five minutes, which has paid off during COVID-19.

I witnessed some of this firsthand, as I'm a Comcast subscriber. In four months, I've had only one outage. I logged into my service account via the phone and got a message saying my area was experiencing an outage that was expected to last for 90 minutes. After that, the network was fixed and I have stayed on it since.

Above: Comcast manages its network from the CTC in Philadelphia.

Image Credit: Comcast

Gamers are among the hardest internet users to please, as they want to download a new game as soon as it's available. They also want low latency, or no interaction delays, which is important in multiplayer shooters like Call of Duty: Warzone, where you don't want confusion over who pulled a trigger first.

"We are laser-focused on latency across our network. It's an extremely important metric that we track very closely across the entire network," Hofmeyr said. "We feel very bullish and very excited about what we are able to deliver from a business perspective. I don't believe that we have a negative perspective, any impact on gaming from a latency perspective."

He added, "Gaming is driving two things for us. One is the game downloads are just becoming bigger and bigger. This is very common today that a game download is multi-gig. And when they are released, you see massive expansion and growth in terms of downloads. On the latency side, we continuously invest. We are looking at AI. We are looking at software and tools to help improve it over time."

Game companies invest in low-latency game servers and improving the connections between specific gamers who are in the same match or the same region so latency doesnt affect them as much. But infrastructure companies like Comcast can also improve latency.

Content delivery networks (CDNs) are an integral part of making video delivery more efficient. Comcast video is delivered through the company's own CDNs, which position videos throughout the network so they can be delivered over as short a distance as possible to the viewer. The company constantly monitors peaks in traffic and designs the network for those peaks. Having a lot of people playing a game or watching a video at the same time establishes new peaks, but the 1,700 new 100-gigabit links help each region deal with peaks in specific parts of the network.

Above: Inside Comcast's CTC in Philadelphia.

Image Credit: Comcast

While it's still early in the process, Comcast is moving to a virtualized, cloud-based network architecture so it can manage accelerating demand and deliver faster, more reliable service. Virtualization means taking functions that were once performed by large, purpose-built pieces of hardware (hardware that required manual upgrades to deliver innovation) and moving them into the cloud.

"Transitioning into web-based software is helping us self-heal much faster and build our capabilities faster," Nafshi said. "If there is a failure point, you fail at a container level rather than an appliance level, and that greatly reduces the time to repair and mitigate."

By doing this, Comcast will reduce the innovation cycles on those functions from years down to months. One example of this is the virtual CMTS initiative. (A CMTS is a large piece of hardware that serves an entire neighborhood, delivering traffic between the core network and homes.) Increasingly, Comcast has been making those devices virtual by transitioning their functions into software that runs in data centers.

This not only allows Comcast to innovate faster, it also provides two key benefits for customers. First, it allows the firm to introduce much smaller failure points into the system, grouping customers into smaller groups so if one part of the network environment experiences an issue, it affects far fewer people. Second, the virtual architecture lets Comcast leverage other AI tools to have far greater visibility into the health of the network and to self-heal issues without human intervention.

Upload speeds increased somewhat during COVID-19, but not nearly as much as downloading did. Uploads are driven by things such as livestreamers, who share their video across a network of fans. In the future, Comcast is promising symmetrical download and upload speeds at 10 gigabits a second. It hasnt said when that will happen, but Cable Labs, the research arm of the cable industry, is working on the technology.

"It's something that is very much in development," Hofmeyr said. "It's going to be remarkable. We can deploy on top of existing infrastructure by leveraging AI software and the evolving DOCSIS protocol."


What Does An AI Chip Look Like? – SemiEngineering

Depending upon your point of reference, artificial intelligence will be the next big thing or it will play a major role in all of the next big things.

This explains the frenzy of activity in this sector over the past 18 months. Big companies are paying billions of dollars to acquire startup companies, and even more for R&D. In addition, governments around the globe are pouring additional billions into universities and research houses. A global race is underway to create the best architectures and systems to handle the huge volumes of data that need to be processed to make AI work.

Market projections are rising accordingly. Annual AI revenues are predicted to reach $36.8 billion by 2025, according to Tractica. The research house says it has identified 27 different industry segments and 191 use cases for AI so far.

Fig. 1: AI revenue growth projection. Source: Tractica

But dig deeper and it quickly becomes apparent there is no single best way to tackle AI. In fact, there isn't even a consistent definition of what AI is or the data types that will need to be analyzed.

"There are three problems that need to be addressed here," said Raik Brinkmann, president and CEO of OneSpin Solutions. "The first is that you need to deal with a huge amount of data. The second is to build an interconnect for parallel processing. And the third is power, which is a direct result of the amount of data that you have to move around. So you really need to move from a von Neumann architecture to a data flow architecture. But what exactly does that look like?"

So far there are few answers, which is why the first chips in this market include various combinations of off-the-shelf CPUs, GPUs, FPGAs and DSPs. While new designs are under development by companies such as Intel, Google, Nvidia, Qualcomm and IBM, it's not clear whose approach will win. It appears that at least one CPU always will be required to control these systems, but as streaming data is parallelized, co-processors of various types will be required.

Much of the processing in AI involves matrix multiplication and addition. Large numbers of GPUs working in parallel offer an inexpensive approach, but the penalty is higher power. FPGAs with built-in DSP blocks and local memory are more energy efficient, but they generally are more expensive. This also is a segment where software and hardware really need to be co-developed, but much of the software is far behind the hardware.
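
To make the workload concrete, here is a minimal NumPy illustration of what "matrix multiplication and addition" means in this context: a single fully connected neural-network layer. The layer sizes are arbitrary and chosen only for illustration.

```python
# A single fully connected layer as matrix multiply plus add (NumPy).
# Sizes are arbitrary; this only illustrates the dominant AI workload.
import numpy as np

batch, n_in, n_out = 32, 1024, 512
x = np.random.randn(batch, n_in).astype(np.float32)   # activations from the prior layer
W = np.random.randn(n_in, n_out).astype(np.float32)   # learned weights
b = np.zeros(n_out, dtype=np.float32)                 # learned bias

y = x @ W + b              # the matrix multiplication and addition
y = np.maximum(y, 0.0)     # nonlinearity (ReLU)

# Roughly 2 * batch * n_in * n_out multiply-accumulates per layer -- the
# operation that GPUs, FPGA DSP blocks, and AI ASICs are built to parallelize.
print(y.shape, 2 * batch * n_in * n_out)
```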

"There is an enormous amount of activity in research and educational institutions right now," said Wally Rhines, chairman and CEO of Mentor Graphics. "There is a new processor development race. There are also standard GPUs being used for deep learning, and at the same time there are a whole bunch of people doing work with CPUs. The goal is to make neural networks behave more like the human brain, which will stimulate a whole new wave of design."

Vision processing has received most of the attention when it comes to AI, largely because Tesla has introduced self-driving capabilities nearly 15 years before the expected rollout of autonomous vehicles. That has opened a huge market for this technology, and for chip and overall system architectures needed to process data collected by image sensors, radar and LiDAR. But many economists and consulting firms are looking beyond this market to how AI will affect overall productivity. A recent report from Accenture predicts that AI will more than double GDP for some countries (see Fig. 2 below). While that is expected to cause significant disruption in jobs, the overall revenue improvement is too big to ignore.

Fig. 2: AI's projected impact. Source: Accenture

Aart de Geus, chairman and co-CEO of Synopsys, points to three waves of electronics: computation and networking, mobility, and digital intelligence. In the latter category, the focus shifts from the technology itself to what it can do for people.

"You'll see processors with neural networking IP for facial recognition and vision processing in automobiles," said de Geus. "Machine learning is the other side of this. There is a massive push for more capabilities, and the state of the art is doing this faster. This will drive development to 7nm and 5nm and beyond."

Current approaches

Vision processing in self-driving dominates much of the current research in AI, but the technology also has a growing role in drones and robotics.

"For AI applications in imaging, the computational complexity is high," said Robert Blake, president and CEO of Achronix. "With wireless, the mathematics is well understood. With image processing, it's like the Wild West. It's a very varied workload. It will take 5 to 10 years before that market shakes out, but there certainly will be a big role for programmable logic because of the need for variable precision arithmetic that can be done in a highly parallel fashion."

FPGAs are very good at matrix multiplication. On top of that, programmability adds some necessary flexibility and future-proofing into designs, because at this point it is not clear where the so-called intelligence will reside in a design. Some of the data used to make decisions will be processed locally, some will be processed in data centers. But the percentage of each could change for each implementation.

That has a big impact on AI chip and software design. While the big picture for AI hasn't changed much (most of what is labeled AI is closer to machine learning than true AI), the understanding of how to build these systems has changed significantly.

"With cars, what people are doing is taking existing stuff and putting it together," said Kurt Shuler, vice president of marketing at Arteris. "For a really efficient embedded system to be able to learn, though, it needs a highly efficient hardware system. There are a few different approaches being used for that. If you look at vision processing, what you're doing is trying to figure out what it is that a device is seeing and how you infer from that. That could include data from vision sensors, LiDAR and radar, and then you apply specialized algorithms. A lot of what is going on here is trying to mimic what's going on in the brain using deep and convolutional neural networks."

Where this differs from true artificial intelligence is that the current state of the art is being able to detect and avoid objects, while true artificial intelligence would be able to add a level of reasoning, such as how to get through a throng of people crossing a street, or whether a child chasing a ball is likely to run into the street. In the former, judgments are based on input from a variety of sensors, combined with massive data crunching and pre-programmed behavior. In the latter, machines would be able to make value judgments, such as the many possible consequences of swerving to avoid the child, and which is the best choice.

"Sensor fusion is an idea that comes out of aircraft in the 1990s," said Shuler. "You get it into a common data format where a machine can crunch it. If you're in the military, you're worried about someone shooting at you. In a car, it's about someone pushing a stroller in front of you. All of these systems need extremely high bandwidth, and all of them have to have safety built into them. And on top of that, you have to protect the data because security is becoming a bigger and bigger issue. So what you need is both computational efficiency and programming efficiency."

This is what is missing in many of the designs today because so much of the development is built with off-the-shelf parts.

"If you optimize the network, optimize the problem, minimize the number of bits and utilize hardware customized for a convolutional neural network, you can achieve a 2X to 3X order of magnitude improvement in power reduction," said Samer Hijazi, senior architect at Cadence and director of the company's Deep Learning Group. "The efficiency comes from software algorithms and hardware IP."
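
"Minimize the number of bits" generally refers to quantization: storing and computing weights in low-precision integers instead of 32-bit floats. The sketch below shows one simple uniform int8 scheme; the scaling choice is illustrative and not a specific vendor's method.

```python
# Minimal sketch of weight quantization: map float32 weights to int8 and back.
# The symmetric, per-tensor scaling used here is one common but illustrative choice.
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0                     # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"int8: {q.nbytes} bytes vs float32: {w.nbytes} bytes, mean rounding error {err:.5f}")
```

Fewer bits per weight means less memory traffic and cheaper multiply-accumulate hardware, which is where much of the power saving comes from.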

Google is attempting to alter that formula. The company has developed Tensor processing units (TPUs), which are ASICs created specifically for machine learning. And in an effort to speed up AI development, the company in 2015 turned its TensorFlow software into open source.

Fig. 3: Googles TPU board. Source: Google.

Others have their own platforms. But none of these is expected to be the final product. This is an evolution, and no one is quite sure how AI will evolve over the next decade. Thats partly due to the fact that use cases are still being discovered for this technology. And what works in one area, such as vision processing, is not necessarily good for another application, such as determining whether an odor is dangerous or benign, or possibly a combination of both.

"We're shooting in the dark," said Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. "We know how to do machine learning and AI, but how they actually work and converge is unknown at this point. The current approach is to have lots of compute power and different kinds of compute engines (CPUs, DSPs for neural networking types of applications), and you need to make sure it works. But that's just the first generation of AI. The focus is on compute power and heterogeneity."

That is expected to change, however, as the problems being solved become more targeted. Just as with the early versions of IoT devices, no one quite knew how various markets would evolve so systems companies threw in everything and rushed products to market using existing chip technology. In the case of smart watches, the result was a battery that only lasted several hours between charges. As new chips are developed for those specific applications, power and performance are balanced through a combination of more targeted functionality, more intelligent distribution of how processing is parsed between a local device and the cloud, and a better understanding of where the bottlenecks are in a design.

"The challenge is to find the bottlenecks and constraints you didn't know about," said Bill Neifert, director of models technology at ARM. "But depending on the workload, the processor may interact differently with the software, which is almost inherently a parallel application. So if you're looking at a workload like financial modeling or weather mapping, the way each of those stresses the underlying system is different. And you can only understand that by probing inside."

He noted that the problems being solved on the software side need to be looked at from a higher level of abstraction, because it makes them easier to constrain and fix. That's one key piece of the puzzle. As AI makes inroads into more markets, all of this technology will need to evolve to achieve the same kinds of efficiencies that the tech industry in general, and the semiconductor industry in particular, have demonstrated in the past.

"Right now we find architectures are struggling if they only handle one type of computing well," said Mohandass. "But the downside with heterogeneity is that the whole divide-and-conquer approach falls apart. As a result, the solution typically involves over-provisioning or under-provisioning."

New approaches

As more use cases are established for AI beyond autonomous vehicles, adoption will expand.

This is why Intel bought Nervana last August. Nervana develops 2.5D deep learning chips that utilize a high-performance processor core, moving data across an interposer to high-bandwidth memory. The stated goal is a 100X reduction in time to train a deep learning model as compared with GPU-based solutions.

Fig. 4: Nervana AI chip. Source: Nervana

"These are going to look a lot like high-performance computing chips, which are basically 2.5D chips and fan-out wafer-level packaging," said Mike Gianfagna, vice president of marketing at eSilicon. "You will need massive throughput and ultra-high-bandwidth memory. We've seen some companies looking at this, but not dozens yet. It's still a little early. And when you're talking about implementing machine learning and adaptive algorithms, and how you integrate those with sensors and the information stream, this is extremely complex. If you look at a car, you're streaming data from multiple disparate sources and adding adaptive algorithms for collision avoidance."

He said there are two challenges to solve with these devices. One is reliability and certification. The other is security.

With AI, reliability needs to be considered at a system level, which includes both hardware and software. ARM's acquisition of Allinea in December provided one reference point. Another comes out of Stanford University, where researchers are trying to quantify the impact of trimming computations from software. They have discovered that massive cutting, or pruning, doesn't significantly impact the end product. The University of California at Berkeley has been developing a similar approach based upon computing that is less than 100% accurate.

"Coarse-grain pruning doesn't hurt accuracy compared with fine-grain pruning," said Song Han, a Ph.D. candidate at Stanford University who is researching energy-efficient deep learning. Han said that a sparse matrix developed at Stanford required 10X less computation, an 8X smaller memory footprint, and used 120X less energy than DRAM. Applied to what Stanford is calling an Efficient Speech Recognition Engine, he said that compression led to accelerated inference. (Those findings were presented at Cadence's recent Embedded Neural Network Summit.)
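
The basic idea behind pruning can be shown in a few lines: zero out the smallest weights and keep what survives in a sparse matrix. This is a generic magnitude-pruning sketch; the 90% sparsity target and matrix size are illustrative, not the Stanford group's exact procedure.

```python
# Minimal sketch of magnitude pruning: drop the smallest weights and store the
# rest as a sparse matrix. The 90% sparsity target is illustrative only.
import numpy as np
from scipy import sparse

def prune(weights, sparsity=0.9):
    """Zero the smallest-magnitude weights so that `sparsity` fraction are removed."""
    threshold = np.quantile(np.abs(weights), sparsity)
    kept = np.where(np.abs(weights) > threshold, weights, 0.0)
    return sparse.csr_matrix(kept)

W = np.random.randn(1024, 1024).astype(np.float32)
W_sparse = prune(W, sparsity=0.9)

# Only the surviving weights are stored and multiplied, so memory footprint and
# multiply count shrink roughly in proportion to the sparsity.
x = np.random.randn(1024).astype(np.float32)
y = W_sparse @ x
dense_bytes = W.nbytes
sparse_bytes = W_sparse.data.nbytes + W_sparse.indices.nbytes + W_sparse.indptr.nbytes
print(dense_bytes, sparse_bytes)
```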

Quantum computing adds yet another option for AI systems. Leti CEO Marie Semeria said quantum computing is one of the future directions for her group, particularly for artificial intelligence applications. And Dario Gil, vice president of science and solutions at IBM Research, explained that with classical computing there is a one-in-four chance of guessing which of four cards is red if the other three are blue. With a quantum computer and entangled, superimposed qubits, reversing the entanglement lets the system provide the correct answer every time.

Fig. 5: Quantum processor. Source: IBM.

Conclusions

AI is not one thing, and consequently there is no single system that works everywhere optimally. But there are some general requirements for AI systems, as shown in the chart below.

Fig. 6: AI basics. Source: OneSpin

And AI does have applications across many markets, all of which will require extensive refinement, expensive tooling, and an ecosystem of support. After years of relying on shrinking devices to improve power, performance and cost, entire market segments are rethinking how they will approach new markets. This is a big win for architects and it adds huge creative options for design teams, but it also will spur massive development along the way, from tools and IP vendors all the way to packaging and process development. It's like hitting the restart button for the tech industry, and it should prove good for business for the entire ecosystem for years to come.


How a poker-playing AI could help prevent your next bout of the flu – ExtremeTech

You'd be forgiven for finding little exceptional about the latest defeat of an arsenal of poker champions by the computer algorithm Libratus in Pittsburgh last week. After all, in the last decade or two, computers have made a habit of crushing board game heroes. And at first blush, this appears to be just another iteration in that all-too-familiar story. Peel back a layer, though, and the most recent AI victory is as disturbing as it is compelling. Let's explore the compelling side of the equation before digging into the disturbing implications of the Libratus victory.

By now, many of us are familiar with the idea of AI helping out in healthcare. For the last year or so IBM has been bludgeoning us with TV commercials about its Jeopardy-winning Watson platform, now being put to use to help oncologists diagnose and treat cancer. And while I wish to take nothing away from that achievement, Watson is a question answering system with no capacity for strategic thinking. The latter topic belongs to a class of situations more germane to the field of game theory. Game theory is usually tucked under the sub-genre of economics, for it deals with how entities make strategic decisions in the pursuit of self-interest. It's also the discipline from which the AI poker playing algorithm Libratus gets its smarts.

What does this have to do with health care and the flu? Think of disease as a game between strategic entities. Picture a virus as one player, a player with a certain set of attack and defense strategies. When the virus encounters your body, a game ensues, in which your body defends with its own strategies and hopefully prevails. This game has been going on a long time, with humans having only a marginal ability to control the outcome. Our body's natural defenses have been developed in evolutionary time, and thus have a limited ability to make on-the-fly adaptations.

But what if we could recruit computers to be our allies in this game against viruses? And what if the same reasoning ability that allowed Libratus to prevail over the best poker minds in the world could tackle how to defeat a virus or a bacterial infection? This is in fact the subject of a compelling research paper by Tuomas Sandholm, the designer of the Libratus algorithm. In it, he explains at length how an AI algorithm could be used for drug design and disease prevention.

With only the health of the entire human race at stake, it's hard to imagine a rationale that would discourage us from making use of such a strategic superpower. Now for the disturbing part of the story, and the so-called fable of the sparrows recounted by Nick Bostrom in his singular work Superintelligence: Paths, Dangers and Strategies. In the preface to the book, he tells of a group of sparrows who recruit a baby owl to help defend them against other predators, not realizing the owl might one day grow up and devour them all. In Libratus, an algorithm that is in essence a universal strategic game-playing machine, one likely capable of besting humankind in any number of real-world strategic games, we may have finally met our owl. And while the end of the story between ourselves and Libratus has yet to be determined, prudence would surely advise we tread carefully.


Dartmouth professor working on AI cancer cure | Education – The Union Leader

It's a big claim, Dartmouth College Professor Gene Santos Jr. admits, but he thinks his artificial intelligence tool can help doctors come up with a cancer cure.

"We're trying to build this fundamental fabric to build that playbook together, so that it makes sense, and so you can start mixing existing playbooks," Santos said.

Santos and his team of Dartmouth engineering colleagues, along with Joseph Gormley, Director of Advanced Systems Development at Tufts Clinical and Translational Science Institute and his colleagues, as well as industry partner IOMICS, are working on a $34 million National Institutes of Health program to develop the artificial intelligence tool to bring together all known cancer research.

The plan is to develop an AI-based system that analyzes patients' clinical and genomic data and the relationship between biochemical pathways that drive health and disease, Santos said.

"We're trying to find new connections that people have not seen," Santos said. "We believe this system will generate new insights, accelerating the work of the biomedical researcher."

The research is already out there, and it is already being collected into knowledge databases. Santos and his team are working on developing the tool called the Pathway Hypothesis Knowledgebase, or PHK, which will analyze the data and come up with treatment plans.

Santos said the data and research available isn't always complete, and some of it is inconsistent.

"Data is noisy, and data can be inconsistent," Santos said.

Different terms are used to describe the same subject from hospital to hospital, and not all hospitals and researchers use a universal set of measurements. The PHK will account for the inconsistencies and contradictions in the data, helping doctors see through the research and find the cures, Santos said.

With the PHK, doctors could treat a patient using historical data of other patients with similar symptoms and genomic profiles, according to Santos. It could also be used to determine additional uses for approved drugs already on the market, and could quickly identify treatments for new diseases, such as COVID-19.

Santos hopes to have PHK in the hands of personal physicians in the next decade, but he thinks the tool will start to bear fruit for researchers in the next three to five years.

"We will impact how we treat cancer and a multitude of complex multi-faceted diseases," said Santos.

"We're closer than we think. I think we can get there," Santos said.

The researchers presented a completed prototype in March and were notified in June that they had been selected to continue their research. In the coming years, the team hopes to use the prototype with additional analytical, reasoning and learning tools that are being developed by other groups to build the Biomedical Data Translator to fully implement the system for use by researchers, according to Santos.


Microsoft, Intel, NVIDIA Invest in Element AI – Investopedia

Microsoft Corp. (MSFT), NVIDIA Corp. (NVDA) and Intel Corp. (INTC) all participated in a round of fund raising for Element AI, the Canadian artificial intelligence startup, as the technology powerhouses go after the burgeoning market.

According to media reports, Microsoft made the investment via its venture capital arm Microsoft Ventures, while Intel did so via Intel Capital. The startup, which developed a platform to help companies of all sizes build AI into their businesses, raised $102 million. The Series A round of funding was led by Data Collective, a San Francisco VC firm. Microsoft is a previous investor in Element AI, which splashed onto the scene a mere eight months ago.

Element AI told ZDNet that it will use the funding to hire more employees, to invest in big AI projects and to acquire startups in the space. "Artificial Intelligence is a 'must have' capability for global companies," said CEO Jean-François Gagné in a statement. "Without it, they are competitively impaired if not at grave risk of being obsoleted in place."

For the Redmond, Wash., software giant, Element AI marks yet another instance where it recently backed a company focused on this new technology. In May it co-led a $7.6 million VC round of funding for Bonsai, the Berkeley, Calif.-based AI startup, and invested in Agolo, a New York City-based AI startup. Bonsai's AI technology is designed to help manufacturing, retail, logistic and similar markets incorporate AI into their businesses. Agolo provides AI systems to some of the world's biggest media companies to summarize their news on Facebook and via Amazon's Alexa voice-activated personal assistant. (See also: Sports Betting: The Next Big Thing for Artificial Intelligence.)

But it's not just Microsoft that is setting its sights on the market. Chipmaker NVIDIA is also becoming a force, which has prompted Citigroup to predict the stock could hit $300 a share. In a recent research note, Citi analyst Atif Malik said the company is in the early stages of transitioning from a maker of PC graphics chips to a leader in AI, which could drive future growth.

"Element AI will benefit by continuing to leverage NVIDIA's high performance GPUs and software at large scale to solve some of the world's most challenging issues," Jeff Herbst, VP of business development at NVIDIA, said in a statement to ZDNet about its participation in the round of fundraising. Meanwhile Intel recently announced it is forming a separate AI business unit that will be led by former Nervana CEO Naveen Rao. (See also: Intel Forms New Unit to Zero in on AI.)


Artificial Intelligence in Agriculture Market Worth $4.0 Billion by 2026 – Exclusive Report by MarketsandMarkets – PRNewswire

CHICAGO, April 28, 2020 /PRNewswire/ -- According to the new market research report "Artificial Intelligence in Agriculture Market by Technology (Machine Learning, Computer Vision, and Predictive Analytics), Offering (Software, Hardware, AI-as-a-Service, and Services), Application, and Geography - Global Forecast to 2026", published by MarketsandMarkets, the Artificial Intelligence in Agriculture Market is estimated to be USD 1.0 billion in 2020 and is projected to reach USD 4.0 billion by 2026, at a CAGR of 25.5% between 2020 and 2026. The market growth is driven by the increasing implementation of data generation through sensors and aerial images for crops, increasing crop productivity through deep-learning technology, and government support for the adoption of modern agricultural techniques.

Request for PDF Brochure:

https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=159957009

By application, drone analytics segment projected to register highest CAGR during forecast period

The market for drone analytics is expected to grow at the highest rate due to its extensive use for diagnosing and mapping to evaluate crop health and to make real-time decisions. Favorable government mandates for the use of drones in agriculture are also expected to fuel the growth of the drone analytics market. Increasing awareness among farm owners regarding the advantages associated with AI technology is expected to further fuel the growth of the AI in agriculture market.

By technology, computer vision segment to register highest CAGR during forecast period

The increasing use of computer vision technology for agriculture applications, such as plant image recognition and continuous plant health monitoring and analysis, is one of the major factors contributing to the growth of the computer vision segment. The other factors include higher adoption of robots and drones in agriculture farms and increasing demand for improved crop yield due to the rising population. Computer vision allows farmers and agribusinesses alike to make better decisions in real-time.

Browse in-depth TOC on "Artificial Intelligence in Agriculture Market": 81 Tables, 40 Figures, 152 Pages

Request more details on:

https://www.marketsandmarkets.com/Enquiry_Before_BuyingNew.asp?id=159957009

AI in agriculture market in APAC projected to register highest CAGR from 2020 to 2026

The AI in agriculture market in Asia Pacific is expected to witness the highest growth during the forecast period. The wide-scale adoption of AI technologies in agriculture farms is the key factor supporting the growth of the market in this region. AI is increasingly applied in the agriculture sector in developing countries, such as India and China. The increasing adoption of deep learning and computer vision algorithm for agriculture applications is also expected to fuel the growth of the AI in agriculture market in the Asia Pacific region.

International Business Machines Corp. (IBM) (US), Deere & Company (John Deere) (US), Microsoft Corporation (Microsoft) (US), Farmers Edge Inc. (Farmers Edge) (Canada), The Climate Corporation (Climate Corp.) (US), ec2ce (ec2ce) (Spain), Descartes Labs, Inc. (Descartes Labs) (US), AgEagle Aerial Systems (AgEagle) (US), and aWhere Inc. (aWhere) (US) are the prominent players in the AI in agriculture market.

Related Reports:

Artificial Intelligence Market by Offering (Hardware, Software, Services), Technology (Machine Learning, Natural Language Processing, Context-Aware Computing, Computer Vision), End-User Industry, and Geography - Global Forecast to 2025

Artificial Intelligence in Manufacturing Market by Offering (Hardware, Software, and Services), Technology (Machine Learning, Computer Vision, Context-Aware Computing, and NLP), Application, Industry, and Geography - Global Forecast to 2025

About MarketsandMarkets

MarketsandMarkets provides quantified B2B research on 30,000 high-growth niche opportunities/threats which will impact 70% to 80% of worldwide companies' revenues. It currently serves 7,500 customers worldwide, including 80% of global Fortune 1000 companies, as clients. Almost 75,000 top officers across eight industries worldwide approach MarketsandMarkets for their pain points around revenue decisions.

Our 850 full-time analysts and SMEs at MarketsandMarkets are tracking global high-growth markets following the "Growth Engagement Model (GEM)". The GEM aims at proactive collaboration with clients to identify new opportunities, identify the most important customers, write "Attack, avoid and defend" strategies, and identify sources of incremental revenues for both the company and its competitors. MarketsandMarkets is now coming up with 1,500 MicroQuadrants (positioning top players across leaders, emerging companies, innovators, and strategic players) annually in high-growth emerging segments. MarketsandMarkets is determined to benefit more than 10,000 companies this year for their revenue planning and help them take their innovations/disruptions early to the market by providing them research ahead of the curve.

MarketsandMarkets's flagship competitive intelligence and market research platform, "Knowledge Store" connects over 200,000 markets and entire value chains for deeper understanding of the unmet insights along with market sizing and forecasts of niche markets.

Contact: Mr. Sanjay Gupta
MarketsandMarkets INC.
630 Dundee Road, Suite 430
Northbrook, IL 60062
USA: +1-888-600-6441
Email: [emailprotected]
Visit Our Web Site: https://www.marketsandmarkets.com
Research Insight: https://www.marketsandmarkets.com/ResearchInsight/ai-in-agriculture-market.asp
Content Source: https://www.marketsandmarkets.com/PressReleases/ai-in-agriculture.asp

SOURCE MarketsandMarkets

Originally posted here:

Artificial Intelligence in Agriculture Market Worth $4.0 Billion by 2026 - Exclusive Report by MarketsandMarkets - PRNewswire

Meet the AI that can write – Axios

A new general language machine learning model is pushing the boundaries of what AI can do.

Why it matters: OpenAI's GPT-3 system can reasonably make sense of and write human language. It's still a long way from genuine artificial intelligence, but it may be looked back on as the iPhone of AI, opening the door to countless commercial applications both benign and potentially dangerous.

Driving the news: After announcing GPT-3 in a paper in May, OpenAI recently began offering a select group of people access to the system's API to help the nonprofit explore the AI's full capabilities.

How it works: GPT-3 works the same way as predecessors like OpenAI's GPT-2 and Google's BERT: it analyzes huge swathes of the written internet and uses that information to predict which words tend to follow each other.
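For readers who want to see the mechanics, here is a minimal sketch of that next-word-prediction step. It assumes the open-source Hugging Face transformers library and the publicly released GPT-2 weights, since GPT-3 itself is reachable only through OpenAI's hosted API; it illustrates the idea, not OpenAI's own code.

```python
# Minimal sketch of next-word prediction with the public GPT-2 weights
# (an illustration of the idea behind GPT-3, not OpenAI's hosted API).
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Artificial intelligence in healthcare will"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # a score for every vocabulary token at every position
next_token_scores = logits[0, -1]        # scores for whatever word comes next

top5 = torch.topk(next_token_scores, 5).indices
print([tokenizer.decode(int(t)) for t in top5])   # the model's five most likely next words
```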

Details: As early testers begin posting about their experiments, what stands out is both GPT-3's range and the eerily human-like quality of some of its responses.

Yes, but: Give it more than a few paragraphs of text prompts and GPT-3 will quickly lose the thread of an argument, sometimes with unintentionally hilarious results, as Kevin Lacker showed when he gave GPT-3 the Turing Test.

The big picture: Just because GPT-3 lacks real human intelligence doesn't mean that it lacks any intelligence at all, or that it can't be used to produce remarkable applications.

Of note: OpenAI has already begun partnering with commercial companies on GPT-3, including Replika and Reddit, though pricing is still undecided.

The catch: As OpenAI itself noted in the introductory paper, "internet-trained models have internet-scale biases." A model trained on the internet like GPT-3 will share the biases of the internet, including stereotypes around gender, race and religion.

The bottom line: Humans who assemble letters for a living aren't out of a job yet. But we may look back upon GPT-3 as the moment when AI began seeping into everything we do.

Link:

Meet the AI that can write - Axios

2020 And The Dawn Of AI Learning At The Edge – Forbes

With countless predictions about what's in store for artificial intelligence in 2020, I'm eager to see what will come true and what will fall by the wayside. I think that one of the more paradigm-changing predictions will be moving AI's learning ability to the edge.

Under the hood of AI's generic name, a variety of approaches are hidden, spanning from huge models that crunch data on a distributed cloud infrastructure to tiny, edge-friendly AI models that analyze and mine data on small processors.

From my academic research at Boston University to cofounding Neurala, I have always been keenly aware of the difference between these two types of AI; let's call them "heavy" and "light" AI. Heavy AI requires hefty compute substrates to run, while light AI can do what heavy AI is capable of but on smaller compute power.

The introduction of commodity processors such as GPUs, and later their portability, has made it technically and economically viable to bring AI/deep learning/DNN/neural network algorithms to the edge in a multitude of industries.

Bandwidth, latency, cost and just plain logic dictate the era of edge AI and will help make our next technology jump a reality. But before we can do so, it is important to understand the technology's nuances, because making AI algorithms run on small edge compute has a few. In fact, there are at least two processes at play: inference, or "predictions" generated by an edge device (e.g., I see a normal frame vs. one with a possible defect), and edge learning, namely, using the acquired information to change, improve, correct and refine the edge AI. This is a small, often overlooked difference with huge implications.
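As a concrete, simplified illustration of that distinction, the sketch below contrasts a frozen, cloud-trained model that only runs inference at the edge with a model that keeps updating itself from new on-device data. It uses scikit-learn's partial_fit and synthetic data as stand-ins; it is not Neurala's Lifelong-DNN.

```python
# Illustrative only: scikit-learn's partial_fit stands in for on-device learning;
# this is not Neurala's Lifelong-DNN, and the data below is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_factory = rng.normal(size=(200, 8))                  # data gathered before deployment
y_factory = np.tile([0, 1], 100)                       # labels: 0 = normal, 1 = defect

# "Heavy" path: train once in the cloud, ship a frozen model, run inference only.
frozen = SGDClassifier().fit(X_factory, y_factory)
x_new = rng.normal(size=(1, 8))
print("inference only:", frozen.predict(x_new))        # the frozen model never changes

# "Light" / edge-learning path: the model keeps adapting as labelled feedback
# trickles in on the device, one small batch at a time.
learner = SGDClassifier()
learner.partial_fit(X_factory, y_factory, classes=np.array([0, 1]))
X_feedback = rng.normal(size=(4, 8))                   # new examples seen in the field
y_feedback = np.array([1, 0, 1, 1])                    # labels from on-device feedback
learner.partial_fit(X_feedback, y_feedback)            # weights updated in place, no cloud round-trip
print("after edge update:", learner.predict(x_new))
```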

Living At The Edge

I first realized this difference between inference/predictions and edge learning while working with NASA back in 2010. My colleagues and I implemented a small brain emulation to control a Mars Rover-like device with AI that needed to be capable of running and learning at the edge.

For NASA, it was important that a robot be capable of learning "new things" completely independently of any compute power available on Earth. A data bottleneck, latency and a plethora of other issues meant they needed to explore different breeds of AI than what had been developed at that time. They needed algorithms that had the ability to digest and learn, namely, adapt the AI's behavior to the available data, without requiring huge amounts of compute power, data and time.

Unfortunately, traditional deep neural network (DNN) models were just not up to par, so we went on to build our own AI that would meet these requirements. Dubbed "lifelong deep neural network" (Lifelong-DNN), this new approach to DNNs had the ability to learn throughout its lifetime (versus traditional DNNs that can only learn once, before deployment).

Learn At The Edge Or Die

One of the biggest challenges when it comes to the implementation of AI today is its inflexibility and lack of adaptability. AI algorithms can be trained on huge amounts of data, when available, and can be fairly robust if all data is captured for their training beforehand. But unfortunately, this is not how the world works.

We humans are so adaptable because our brains have figured out that lifelong learning (learning every day) is key, and we can't rely solely on the data we are born with. That's why we do not stop learning after our first birthday: We continuously adapt to changing environments and scenarios we encounter throughout our lives and learn from them. As humans, we do not discard data; we use it constantly to fine-tune our own AI.

Humans are a primary example of edge learning-enabled machines. In fact, if human brains acted in the same way as a DNN, our knowledge would be restricted to our college years. We would go about our 9-to-5s and daily routines only to wake up the next morning without having learned anything new.

The Learning-Enabled, AI-Powered Edge

Traditional DNNs are the dominant paradigm in today's AI, with fixed models that need to be trained before deployment. But novel approaches such as Lifelong-DNN would enable AI-powered compute edges not only to understand the data coming to them but also to adapt and learn.

So, if you too would like to harness the power of the edge, here is my advice. First off, you need to abandon the mindset (and restriction) that AI can only be trained before deployment. From there, a new need arises: a way for users to interact with the edge and add knowledge. This implies the need to visualize newly collected data and for the user to be able to select which ones to add. This can be done either manually by a user or automatically.

For instance, in a manufacturing scenario, a quality control specialist may reject a product coming out of a machine and, by doing so, provide AI a new clue that the product or the part of it that was just built has to be considered faulty. So, updating your AI training protocols to allow for the integration of continual training workflows, where AI is updated based on new clues, is a must for organizations and individuals looking to leverage this new breed of AI.
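A hypothetical sketch of that feedback loop follows: each operator rejection becomes a labelled example that gets folded into the edge model in small batches. The names and the scikit-learn partial_fit mechanism are illustrative assumptions, not a description of any particular vendor's workflow.

```python
# Hypothetical quality-control feedback loop; names and mechanics are illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
# Bootstrap with a little pre-deployment ("factory") data so the model exists.
X_boot = np.vstack([np.zeros(16), np.ones(16)])
model.partial_fit(X_boot, [0, 1], classes=[0, 1])      # 0 = good part, 1 = faulty part

feedback_buffer = []      # (features, operator_verdict) pairs collected on the line
BATCH_SIZE = 8            # update once enough new clues have accumulated

def record_inspection(features, operator_rejected):
    """Store the specialist's verdict and fold it into the model in small batches."""
    feedback_buffer.append((features, int(operator_rejected)))
    if len(feedback_buffer) >= BATCH_SIZE:
        X = np.vstack([f for f, _ in feedback_buffer])
        y = np.array([label for _, label in feedback_buffer])
        model.partial_fit(X, y)                        # continual, on-device update
        feedback_buffer.clear()
```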

AI that learns at the edge is a paradigm-shifting technology that will finally empower AI to truly serve its purpose: shifting intelligence to the compute edge where it is needed, at speeds, latency and costs that make it affordable for every device.

Going forward, learning-enabled edges will survive natural selection in an increasingly competitive AI ecosystem. May the fittest AI survive!

Here is the original post:

2020 And The Dawn Of AI Learning At The Edge - Forbes

These students figured out their tests were graded by AI and the easy way to cheat – The Verge

On Monday, Dana Simmons came downstairs to find her 12-year-old son, Lazare, in tears. He'd completed the first assignment for his seventh-grade history class on Edgenuity, an online platform for virtual learning. He'd received a 50 out of 100. That wasn't on a practice test; it was his real grade.

"He was like, 'I'm gonna have to get a 100 on all the rest of this to make up for this,'" said Simmons in a phone interview with The Verge. "He was totally dejected."

At first, Simmons tried to console her son. "I was like, well, you know, some teachers grade really harshly at the beginning," said Simmons, who is a history professor herself. Then, Lazare clarified that he'd received his grade less than a second after submitting his answers. A teacher couldn't have read his response in that time, Simmons knew; her son was being graded by an algorithm.

Simmons watched Lazare complete more assignments. She looked at the correct answers, which Edgenuity revealed at the end. She surmised that Edgenuity's AI was scanning for specific keywords that it expected to see in students' answers. And she decided to game it.

Now, for every short-answer question, Lazare writes two long sentences followed by a disjointed list of keywords, anything that seems relevant to the question. "The questions are things like... 'What was the advantage of Constantinople's location for the power of the Byzantine empire,'" Simmons says. "So you go through, okay, what are the possible keywords that are associated with this? Wealth, caravan, ship, India, China, Middle East, he just threw all of those words in."

"I wanted to game it because I felt like it was an easy way to get a good grade," Lazare told The Verge. He usually digs the keywords out of the article or video the question is based on.

Apparently, that word salad is enough to get a perfect grade on any short-answer question in an Edgenuity test.

Edgenuity didn't respond to repeated requests for comment, but the company's online help center suggests this may be by design. According to the website, answers to certain questions receive 0% if they include no keywords, and 100% if they include at least one. Other questions earn a certain percentage based on the number of keywords included.
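A toy reconstruction of the grading behaviour the help center describes might look like the sketch below. Edgenuity's real implementation is not public, so the function name, signature and every detail here are assumptions based purely on the article's description.

```python
# Toy reconstruction of keyword-based grading as described in the article;
# the real Edgenuity implementation is not public, so all details are assumptions.
def keyword_score(answer, keywords, all_or_nothing=True):
    """Return a percentage score based purely on keyword presence."""
    text = answer.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    if all_or_nothing:
        # "0% if they include no keywords, and 100% if they include at least one"
        return 100.0 if hits > 0 else 0.0
    # "a certain percentage based on the number of keywords included"
    return 100.0 * hits / len(keywords)

word_salad = "Trade routes mattered. Location helped. wealth caravan ship India China Middle East"
print(keyword_score(word_salad, ["wealth", "caravan", "ship"]))                            # 100.0
print(keyword_score(word_salad, ["wealth", "caravan", "harbor"], all_or_nothing=False))    # ~66.7
```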

As COVID-19 has driven schools around the US to move teaching to online or hybrid models, many are outsourcing some instruction and grading to virtual education platforms. Edgenuity offers over 300 online classes for middle and high school students, ranging across subjects from math to social studies, AP classes to electives. They're made up of instructional videos and virtual assignments as well as tests and exams. Edgenuity provides the lessons and grades the assignments. Lazare's actual math and history classes are currently held via the platform; his district, the Los Angeles Unified School District, is entirely online due to the pandemic. (The district declined to comment for this story.)

Of course, short-answer questions aren't the only factor that impacts Edgenuity grades; Lazare's classes require other formats, including multiple-choice questions and single-word inputs. A developer familiar with the platform estimated that short answers make up less than five percent of Edgenuity's course content, and many of the eight students The Verge spoke to for this story confirmed that such tasks were a minority of their work. Still, the tactic has certainly impacted Lazare's class performance: he's now getting 100s on every assignment.

Lazare isn't the only one gaming the system. More than 20,000 schools currently use the platform, according to the company's website, including 20 of the country's 25 largest school districts, and two students from high schools different from Lazare's told me they found a similar way to cheat. They often copy the text of their questions and paste it into the answer field, assuming it's likely to contain the relevant keywords. One told me they used the trick all throughout last semester and received full credit pretty much every time.

Another high school student, who used Edgenuity a few years ago, said he would sometimes try submitting batches of words related to the questions "only when I was completely clueless." The method worked more often than not. (We granted anonymity to some students who admitted to cheating, so they wouldn't get in trouble.)

One student, who told me he wouldn't have passed his Algebra 2 class without the exploit, said he's been able to find lists of the exact keywords or sample answers that his short-answer questions are looking for; he says you can find them online nine times out of ten. Rather than listing out the terms he finds, though, he tries to work three into each of his answers. (Any good cheater doesn't aim for a perfect score, he explained.)

Austin Paradiso, who has graduated but used Edgenuity for a number of classes during high school, was also averse to word salads but did use the keyword approach a handful of times. It worked 100 percent of the time. "I always tried to make the answer at least semi-coherent because it seemed a bit cheap to just toss a bunch of keywords into the input field," Paradiso said. "But if I was a bit lazier, I easily could have just written a random string of words pertinent to the question prompt and gotten 100 percent."

Teachers do have the ability to review any content students submit, and can override Edgenuity's assigned grades; the Algebra 2 student says he's heard of some students getting caught keyword-mashing. But most of the students I spoke to, and Simmons, said they've never seen a teacher change a grade that Edgenuity assigned to them. "If the teachers were looking at the responses, they didn't care," one student said.

The transition to Edgenuity has been rickety for some schools; parents in Williamson County, Tennessee, are revolting against their district's use of the platform, claiming countless technological hiccups have impacted their children's grades. A district in Steamboat Springs, Colorado, had its enrollment period disrupted when Edgenuity was overwhelmed with students trying to register.

Simmons, for her part, is happy that Lazare has learned how to game an educational algorithm; it's certainly a useful skill. But she also admits that his better grades don't reflect a better understanding of his course material, and she worries that exploits like this could exacerbate inequalities between students. "He's getting an A+ because his parents have graduate degrees and have an interest in tech," she said. "Otherwise he would still be getting Fs. What does that tell you about... the digital divide in this online learning environment?"

See the rest here:

These students figured out their tests were graded by AI and the easy way to cheat - The Verge

For AI startups, more funding is often not the answer – VentureBeat

One of the hottest areas for VC investment at the moment is AI/machine learning, which includes artificial intelligence algorithms, related machine learning systems, neural networks, and back-end processing to produce insightful and self-learning applications. As Nvidia's CEO recently said:

Software may be eating the world, but AI is going to eat software

VC investment in AI has risen from $3.2 billion in 2014 to $9.5 billion for the first five months of 2017 annualized, with the number of funding rounds nearly doubling since 2015 to over 1,200 on an annualized basis so far this year. No wonder Frost & Sullivan calls AI the hottest investment trend of 2017:

(Chart: Source: PitchBook)

Investors piling into a space aim for multiple exits worth hundreds of millions of dollars. However, the pattern of AI exits is the opposite. Most successfully exited AI companies sell for below $50 million after raising only a small amount of money. This works well for founders and small angel backers but not for VCs looking for exits well over $100 million.

Of 70 AI M&A deals since 2012, 75 percent sold below $50 million. These deals are often acqui-hires, companies acquired for talent rather than business performance. The number of $200 million+ deals barely registers.

(Chart: Source: PitchBook)

The typical journey goes like this: A small team comes together around 1-2 individuals, they forge real advances on key use cases (voice recognition, visual/video tracking, fraud detection, retail consumer behavior, etc.), sign a handful of prominent customers, raise less than $10 million (often less than $5 million), then attract the attention of a major buyer looking to solve that problem set. These kinds of AI companies are often valued as an amount paid per engineer rather than on performance (revenue, growth, profits); the average price per employee is around $2.5 million:

(Chart: Source: PitchBook)

The other issue for VCs is that AI companies don't generally need to raise much money, even if they are valued far above $100 million. Argo, valued at $1 billion for a majority stake by Ford, was a 20-person team when bought. Our research from PitchBook shows the 10 most valuable AI M&A targets raised on average only $15-25 million; there was only room for 1-2 VC investors in each deal:

Sure, there are larger AI companies still growing, such as Palantir, valued at $10 billion having raised over $500 million. But a few isolated cases of $1 billion+ unicorns created using significant VC money are hardly fertile ground for 1,200 VC investment rounds. The reality is, AI just isn't as rich a segment for VCs as investment activity suggests.

Once several VCs invest, an AI company can no longer entertain a $50-100 million M&A offer and must scale its team and product suite to ramp to a much higher valuation years further out; otherwise VCs cannot get the return they require. Here's why we think this is counterproductive:

For many AI founders, the best approach is raising little money, demonstrating they can solve hard problems, and waiting for the M&A phone to ring.

For VCs the best approach is often to look elsewhere.

Victor Basta is founder of Magister Advisors, a specialist bank focused on M&A exits and larger financing rounds.

Continued here:

For AI startups, more funding is often not the answer - VentureBeat

behold.ai and Wellbeing Software collaborate on national solution for rapid COVID-19 diagnosis using AI analysis of chest X-rays – GlobeNewswire

behold.ai and Wellbeing Software collaborate on national solution for rapid COVID-19 diagnosis using AI analysis of chest X-rays

Companies working to fast-track programme for UK-wide rollout

LONDON, UK, March 31, 2020: Two British companies at the leading edge of medical imaging technology are working together on a plan to fast-track the diagnosis of COVID-19 in NHS hospitals using artificial intelligence analysis of chest X-rays.

behold.ai has developed the artificial intelligence-based red dot algorithm, which can identify abnormalities in chest X-rays within 30 seconds. Wellbeing Software operates Cris, the UK's most widely used Radiology Information System (RIS), which is installed in over 700 locations.

A national roll-out combining these two technologies would enable a large number of hospitals to quickly process the significant volume of X-rays, currently being used as the key diagnostic test for triage of COVID-19 patients, thereby speeding up diagnosis and easing pressure on the NHS at this critical time. This solution will also find significant utility in dealing with the backlog of cases that continue to mount, such as suspected cancer patients.

Simon Rasalingham, Chairman and CEO of behold.ai, said:

"behold.ai and Wellbeing are a great fit in terms of expertise and technology. We are able to prioritise abnormal chest X-rays with greater than 90% accuracy and a 30-second turnaround. If that were translated into busy hospitals coping with COVID-19, the benefits to healthcare systems are potentially enormous."

Chris Yeowart, Director at Wellbeing Software, said:

"Our technology provides the integration between the algorithm and the hospitals' radiology systems and working processes, addressing the technical challenges and clearing the way for accelerated national rollout. It is clear from talking to radiology departments that chest X-rays have become one of the primary diagnostic tools for COVID-19 in this country."

https://www.behold.ai

https://www.wellbeingsoftware.com/

Ends

For further information, please contact: Consilium Strategic Communications, Tel: +44 (0)20 3709 5700, beholdai@consilium-comms.com

About behold.ai and radiology

behold.ai provides artificial intelligence, through its red dot cognitive computing platform, to radiology departments. This technology augments the expertise of radiologists to enable them to report with greater clinical accuracy, faster and more safely than they could before. This revolutionary combination helps to deliver greater performance in radiology reporting at a fraction of the price of outsourced reporting.

Radiology departments play an essential role in the diagnostic process; however, a consequence of fewer radiologists and a growing demand for images has left services stretched beyond capacity across many trusts, resulting in reporting delays - in some cases impacting cancer diagnosis. These service issues have been highlighted by the Care Quality Commission and the Royal College of Radiologists.

Our solution seamlessly integrates into local trust workflows augmenting clinical practice and delivering state-of-the-art, safe, Artificial Intelligence.

The behold.ai algorithm has been developed using more than 30,000 example images, all of which have been reviewed and reported by highly experienced consultant radiology clinicians in order to shape accurate decision making. The red dot prioritisation platform is capable of sorting images into normal and abnormal categories in less than 30 seconds post image acquisition.
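The prioritisation step itself can be pictured as a simple triage of the reporting worklist. The sketch below is purely illustrative: it assumes some upstream classifier has already produced an abnormality probability for each study, and it is not behold.ai's red dot algorithm or API.

```python
# Illustrative worklist triage, not behold.ai's red dot algorithm.
# Assumes an upstream classifier already produced an abnormality probability per X-ray.
from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    abnormality_prob: float   # output of the (hypothetical) image classifier

def triage(worklist, threshold=0.5):
    """Flag studies above the threshold as abnormal and push them to the front."""
    flagged = sorted((s for s in worklist if s.abnormality_prob >= threshold),
                     key=lambda s: s.abnormality_prob, reverse=True)
    routine = [s for s in worklist if s.abnormality_prob < threshold]
    return flagged + routine

queue = [Study("CXR-001", 0.12), Study("CXR-002", 0.91), Study("CXR-003", 0.67)]
print([s.accession for s in triage(queue)])   # abnormal studies first: CXR-002, CXR-003, CXR-001
```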

About behold.ai and quality

Apart from its FDA clearance, behold.ai is also CE approved and is gaining further approval for a CE mark Class IIa certification.

In June 2019 the Company was awarded ISO 13485 QMS certification for an AI medical device, the gold standard of quality certification.

About Wellbeing Software

Wellbeing Software is a leading healthcare technology provider with a presence in more than 75% of NHS organisations. The company has combined its extensive UK resources and unparalleled experience in its specialist divisions - radiology, maternity, data management and electronic health records - to form Wellbeing Software, uniting their core businesses to enable customers to build on existing investments in IT as a way of delivering connected healthcare records and better patient care. Wellbeing's ability to connect its specialist systems with other third-party software enables healthcare organisations to achieve key objectives, such as paperless working and the creation of complete electronic health records. Through their established footprint, specialist knowledge and significant development resources, the company is building the foundations for connectivity within NHS organisations and beyond.

Wellbeing media contact: Jenni Livesley, Context Public Relations, wellbeing@contextpr.co.uk

More here:

behold.ai and Wellbeing Software collaborate on national solution for rapid COVID-19 diagnosis using AI analysis of chest X-rays - GlobeNewswire

Teenage team develops AI system to screen for diabetic retinopathy – MobiHealthNews

Kavya Kopparapu might be considered something of a whiz kid. After all, she had yet to enter her senior year of high school when she started Eyeagnosis, a smartphone app and 3D-printed lens that allows patients to be screened for diabetic retinopathy with a quick photo, avoiding the time and expense of a typical diagnostic procedure.

In June 2016, Kopparapu's grandfather had recently been diagnosed with diabetic retinopathy, a complication of diabetes that damages retinal blood vessels and can eventually cause blindness. He caught the symptoms in time to receive treatment, but it was close. A little too close for Kopparapu's comfort.

According to IEEE Spectrum, Kopparapu, her 15-year-old brother Neeyanth and her classmate Justin Zhang trained an artificial intelligence system to scan photos of eyes and detect and diagnose signs of diabetic retinopathy. She unveiled the technology at the O'Reilly Artificial Intelligence conference in New York City in July.

After diving into internet-based research and emailing ophthalmologists, biochemists, epidemiologists, neuroscientists and the like, she and her team worked on the diagnostic AI using a machine-learning architecture called a convolutional neural network. CNNs, as they're called, parse through vast data sets -- like photos -- to look for patterns of similarity, and to date have shown an aptitude for classifying images. The network itself was ResNet-50, developed by Microsoft. But to train it to make retinal diagnoses, Kopparapu had to feed it images from the National Institutes of Health's EyeGene database, which essentially taught the architecture how to spot signs of retinal degeneration.

One hospital has already tested the technology, fitting a 3D-printed lens onto a smartphone and training the phone's flash to illuminate the retinas of five different patients. Tested against ophthalmologists, the system went five for five on diagnoses. Kopparapu's invention still needs lots of tests and additional data to prove its efficacy before it sees widespread clinical adoption, but so far, it's off to a pretty good start.

Eyeagnosis is operating in a space that's recently become interesting to some very large companies. Last fall, a team of Google researchers published a paper in the Journal of the American Medical Association showing that Google's deep learning algorithm, trained on a large data set of fundus images, can detect diabetic retinopathy with better than 90 percent accuracy. That algorithm was then tested on 9,963 deidentified images retrospectively obtained from EyePACS in the United States, as well as three eye hospitals in India. A second, publicly available research data set of 1,748 images was also used. The accuracy was determined by comparing its diagnoses to those done by a panel of at least seven U.S. board-certified ophthalmologists. The two data sets had 97.5 percent and 96.1 percent sensitivity, and 93.4 percent and 93.9 percent specificity, respectively.
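The general transfer-learning recipe described here (a pretrained ResNet-50 whose final layer is retrained on labelled retinal images) can be sketched with torchvision as below. The dataset path, number of severity classes and hyperparameters are placeholders, not details of Eyeagnosis, Google's system or the EyeGene data.

```python
# Sketch of the general recipe (ResNet-50 backbone retrained on labelled retinal images).
# Dataset path, class count and hyperparameters are placeholders, not Eyeagnosis details.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms, datasets

NUM_CLASSES = 5   # e.g. diabetic retinopathy severity grades 0-4 (an assumption)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # swap the classification head

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("retina_images/train", transform=preprocess)  # placeholder path
loader = DataLoader(train_set, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:        # one pass shown; real training needs many epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```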

And Google isn't the only player in that space. IBM has a technology utilizing a mix of deep learning, convolutional neural networks and visual analytics, based on 35,000 images accessed via EyePACS; in research conducted earlier this year, the technology learned to identify lesions and other markers of damage to the retina's blood vessels, collectively assessing the presence and severity of disease. In just 20 seconds, the method was successful in classifying diabetic retinopathy severity with 86 percent accuracy, suggesting doctors and clinicians could use the technology to get a better idea of how the disease progresses as well as identify effective treatment methods.

Lower-tech options are also taking a stab at improving access to screenings. Using a mix of in-office visits, telemedicine and web-based screening software, the Los Angeles Department of Health Services has been able to greatly expand the number of patients in its safety-net hospital who got screenings and referrals. In an article published in the journal JAMA Internal Medicine, researchers describe how the two-year collaboration using Safety Net Connect's eConsult platform resulted in more screenings, shorter wait times and fewer in-person specialty care visits. By deploying Safety Net Connect's eConsult system to a group of 21,222 patients, wait times for screenings decreased by almost 90 percent, and overall screening rates for diabetic retinopathy increased 16 percent. The digital program also eliminated the need for 14,000 visits to specialty care professionals.

Originally posted here:

Teenage team develops AI system to screen for diabetic retinopathy - MobiHealthNews

Ford creates a new dedicated Robotics and AI Research team … – TechCrunch

Ford's recent executive shuffle was bound to lead to reorganization throughout the company, but the addition of a new Robotics and AI Research team operating under Ford's Research and Advanced Engineering department seems like it was inevitable either way, given the industry's trajectory.

Ford's VP of Research and Engineering and CTO Dr. Ken Washington revealed the new research group via a Medium post, in which he discusses the huge potential impact of AI and robotics over the next decade. The team will work with Argo AI, the startup that Ford took a majority stake in earlier this year via a large investment, as well as on other partnership and acquisition/investment opportunities. It'll help with work on drones, personal mobility platforms (last-mile, scooter-style transport), automation and aerial robotics.

Washington also discussed how in the future we'll see at least two separate fleets of self-driving vehicles on the road operated by Ford: one led by Ford's own team pursuing advanced research and another led by Argo AI focused on development and testing of the virtual driver system Ford intends to bring to production in time for its 2021 deployment of a ride-hailing fleet.

Focusing on AI and robotics research is not novel to Ford among automakers; Honda has long had a program in place to develop its robotics capability, and has frequently demonstrated its Asimo humanoid robot. Toyota also runs the Toyota Research Institute, an entire subsidiary devoted to long-term research and development of robotics and AI, through its own work and partnerships with leading academic institutions.

More:

Ford creates a new dedicated Robotics and AI Research team ... - TechCrunch

Huawei sees AI, not death, in smartphone future – ZDNet

There are still improvements to be made in smartphones and artificial intelligence (AI) will play a critical role in driving further innovation in this space.

There remained significant differences today in terms of the functions offered in a $200- and $1,000-priced smartphone, said Bruce Lee, Huawei's global vice president of handsets business. He dismissed suggestions that innovation in the handset market had plateaued, with little separating low-end and high-end devices, and that manufacturers should move their focus elsewhere.

Speaking to ZDNet in an interview Friday, Lee said Huawei continued to focus its R&D efforts on introducing more functionalities and improving existing capabilities, such as camera, battery life and processing speed. It also needed to ensure its handsets could support faster internet connections, especially when 5G networks become available, he added.

Earlier this year, Pacific Crest's analyst for emerging technologies Ben Wilson opined in a research report, titled "There Is No 'Next Smartphone'", that the smartphone revolution was a "singular event in compute platform history" that was unlikely to repeat. Others also debated the "death of the smartphone" and impact of wearables.

While he acknowledged there was tremendous growth potential in wearables and smart devices, Lee said these still were challenged by the same issues faced in the smartphone market. He pointed to existing limitations in compute performance and battery life.

This further indicated that, far from "dying", there was still some way to go in terms of smartphone innovation and development, he noted, adding that the industry must continue to invest in these key areas of improving battery life and compute performance to enhance user experience.

In this aspect, he said Huawei believed AI would play an important role in the future of handsets and would facilitate many critical developments in smartphones.

In its 2016 annual report, the Chinese manufacturer described an era of "+Intelligence" in which all devices, people, and processes would be supported by AI. "Building intelligence into our devices, networks, and industries will open up new worlds," it said, adding that it would impact the role of smartphones in future.

Huawei believed phones would be able to think contextually and engage humans in dialogue to understand their needs. The devices then could deliver the information and services humans required and would evolve into personal assistants to provide expertise and personalised services.

"AI will disrupt the user experience, but before it can do so, we will need a quantum leap in the functionality of our smart devices, chipsets, and cloud services," it said. "Artificial intelligence will place heavy demands on computing performance, energy efficiency, and device-cloud synergy. Meeting these demands and creating a better intelligent experience will take a synthesis of capabilities across both chipsets and the cloud."

Lee said Huawei had invested heavily in building a development team focused on AI, which included both hardware and software.

"We hope to use AI in our phones to have more learning capabilities...[so], together with big data, we will be able to understand consumer habits and better incorporate voice and image capabilities into the phone," he said. "This will enable the phone to become smarter and offer increased efficiencies for consumers."

Lee also underscored the need to embed this intelligence on the device itself, rather than push data into the cloud to be analysed.

Because machine learning and AI algorithms required significant amounts of compute power, much of this processing was carried out in the cloud, and not on the local device, he explained. This, however, was not efficient, he said, stressing the need for more AI capabilities to be supported on the smartphone itself in order to reduce latency.

"We can then have faster responses because we don't need to upload data from the device into the cloud, do the computing, and send it back into the device," he noted. "And when we do the computing on the local device, we can also safeguard user privacy since we don't need to upload data into the server."

In terms of handset performance, Huawei had a stellar start to the year, bypassing Oppo in the first quarter to claim pole position in China's smartphone market. It shipped 20.8 million units, which was up 25.5 percent from the year before, and held a 20 percent market share.

Worldwide, it placed third behind Samsung and Apple, with a 9 percent market share for the first quarter 2017. The Chinese vendor shipped 34.18 million units, compared to Samsung's 78.67 million and Apple's 51.9 million.

Lee attributed the growth to its high-end P and Mate product lines. He further revealed that the company's future growth strategy would see more investment towards its high-end smartphone products.

In addition, Huawei would be looking to increase its market share outside its domestic market. Noting that China contributed about 60 percent of its smartphone business, he said the vendor was targeting for its overseas revenue to outweigh that of its home market.

While Europe currently was its biggest region outside of China, he added that the rest of Asia-Pacific would play a pivotal role in its future growth due to Huawei's geographical advantage in this region. Due to its heritage, it also had a better understanding of Asian consumers so the region should offer higher growth potential, he said.

Continue reading here:

Huawei sees AI, not death, in smartphone future - ZDNet

Which are the Key Industries that Depend on the Artificial Intelligence – CIOReview

There are many industries that heavily rely on artificial intelligence so that they can work more efficiently.

FREMONT, CA: There is no end to research about how artificial intelligence can be improved and implemented in everyday lives. Several multi-million-dollar companies are continuously trying to apply new technologies to ensure that they will be the foundation of human evolution. However, there are still some positive and negative points to it, and people are divided about the idea of building a robotized world.

Despite such differences among human beings, some forms of AI are widely used in industries. Here are some of the industries that heavily depend on the technology of artificial intelligence.

Online Gambling Sites

The online casino industry is heavily dependent on artificial intelligence. In fact, online gambling sites would not have existed without AI technology. Online casinos utilize artificial intelligence to enforce fair play and ensure that every game result is random. The technology also helps to secure the sites and ensure that players' information remains hidden.

Healthcare

In the last few years, AI and healthcare have formed a strong bond. Doctors get assistance from artificial intelligence as it offers better diagnostics and can detect medical issues in patients quickly. Healthcare providers want to utilize the technology because it reduces the time taken to examine a patient and also provides reliable and effective results.

Manufacturing

The manufacturing sector needs robots, and artificial intelligence can help the industry build processes that can be more reliable than human ones. AI has been playing a significant role in the industry as it takes care of every minute detail in the production process to make it more efficient.

Advertising

As advertising has moved into the online world, applying AI has become significant in the marketing industry. AI is used to identify consumer preferences by analyzing cookie history when advertising on social media and other online platforms. Online advertising has become much more accessible and effective with AI than it ever was with traditional marketing methods.

See also: Top Artificial Intelligence Companies

Read the original post:

Which are the Key Industries that Depend on the Artificial Intelligence - CIOReview

How To Flunk Those Cognitive Deficiency Tests And What This Means Too For AI Self-Driving Cars – Forbes

Cognitive deficiency tests, AI, and self-driving cars.

Seems like the news recently has been filled with revelations about the taking of cognitive deficiency tests.

This has been especially widely noted among some prominent politicians who appear to be attempting to vouch for their mental clarity upon reaching an age in life at which cognitive decline often surfaces.

Such tests are more aptly referred to as cognitive assessment tests rather than deficiency-oriented tests, though the notion generally is that if the score earned is less than what might be expected, the potential conclusion is that the person has had a decline in their mental prowess.

Such exams are oftentimes also referred to as cognitive impairment detection exams. The person seeking to find out how they are mentally doing is administered a test consisting of various questions and asked to answer them. The administrator of the test then grades the answers for correctness and fluidity, producing a score to indicate how the person performed overall.

The score is then compared to the scores of others that have taken the test, trying to gauge how the cognitive capacity of the person is rated or ranked in light of some larger population of test-takers.

Also, if a person takes the test over time, perhaps say once per year, their prior scores are compared to their most recent score, attempting to measure whether there is a difference emerging as they age.

There are some crucial rules-of-thumb about all of this cognitive test-taking.

For example, if the person takes the same test word-for-word, repeatedly over time, this raises questions about the nature of the test versus the nature of the cognitive abilities of the person taking the test. In essence, you can potentially do better on the test simply because you've seen the same questions before and likely also had been previously told what the considered correct answers are.

One argument to be made is that this is somewhat assessing your ability to remember having previously taken the test, but that's not usually the spirit of what such cognitive tests are supposed to be about. The idea is to assess overall cognition, and not merely be focused on whether you perchance can recall the specific questions of a specific test previously taken.

Another facet of this kind of cognitive test-taking consists of being formally administered the test, rather than taking the test entirely on your own.

Though there are plenty of available cognitive tests that you can download and take in private, some would say that this is not at all the same as taking a test under the guiding hands and watch of someone certified or otherwise authorized to administer such tests.

A key basis for claiming that the test needs to be formally administered is to ensure that the person taking the test is not undermining the test or flouting the testing process. If the test taker were to ask a friend for help, this obviously defeats the purpose of the test, which is supposed to focus on your solitary cognition and not be a collective semblance of cognition. Likewise, these tests are usually timed, and a person on their own might be tempted to exceed the normally allotted time, plus the person might be tempted to look-up answers, use a calculator, etc.

Perhaps the most important reason to have a duly authorized and trained administrator involves attempting to holistically evaluate the results of the cognition test.

Experts in cognitive test-taking are quick to emphasize that a robust approach to the matter consists of not just the numeric score that a test taker achieves, but also how they are overall able to interact with a fully qualified and trained cognitive-test administrator.

Unlike taking a secured SAT or ACT test that you might have had to painstakingly sit through for college entrance purposes, a cognitive assessment test is typically intended to assess in both a written way and in a broader manner how the person interacts and cognitively presents themselves.

Imagine for example that someone aces the written test, yet meanwhile, they are unable to carry on a lucid conversation with the administrator, and similarly, they mentally stumble on why they are taking the test or otherwise have apparent cognitive difficulties surrounding the test-taking process. Those facets outside of the test itself should be counted, some would vehemently assert, and thus would be unlikely to be valued if a person merely took the test on their own.

Despite all of the foregoing and the holistic nuances that I've mentioned, admittedly, most of the time all that people want to know is what was their darned score on that vexing cognitive test.

You might be wondering whether there is one standardized and universal cognitive test that is used for these purposes.

No, there is not just one per se.

Instead, there are a bewildering and veritable plethora of such cognition tests.

It seems like each day there is some new version that gets announced to the world. In some cases, the cognitive test being proffered has been carefully prepared and analyzed for its validity. Unfortunately, in other cases, the cognitive test is a gimmick being fronted as a moneymaker, whereby those pushing the test are aiming to get people to believe in it, hoping to generate gobs of revenue from however many take the test and charging them fees accordingly.

Please do not fall for the fly-by-night cognitive tests.

Sadly, sometimes a known celebrity or other highly visible person gets associated with a cognitive test promotion and adds a veneer of authenticity to something that does not deserve any bona fide reputational stamp-of-approval.

Some cognitive tests have stood the test of time and are considered dominant, or at least well-regarded, for their cognitive assessing capacity and validity.

On a related note, if a cognitive test takes a long time to complete, let's say hours of completion time, the odds are that it is not going to be well received overall and will be considered onerous for testing purposes. As such, the popular cognitive tests tend to be the ones that take a relatively short period to undertake, such as an hour or less, and in many cases even just 15 minutes or less (these are usually depicted as screening tests rather than full-blown cognitive assessment tests).

Some decry that only requiring a few minutes to take a cognitive test is rife with problems and seems like a fast-food kind of approach to tackling a very complex topic of measuring someone's cognition. Those in this camp shudder when these quickie tests are used by people that then go around touting how well they scored.

The counter-argument is that these short-version cognitive tests are reasonable and amount to using a dipstick to gauge how much gasoline there is in the tank of your car. The viewpoint is that it only takes a little bit of measurement to generally know how someone is mentally faring. Once an overall gauge is taken, you can always do a follow-up with a more in-depth cognitive test.

Given all of the preceding discussion, it might be handy to briefly take a look at a well-known cognitive test that has been around since the mid-1990s and continues to be in active use today, including having been the test that President Trump reportedly took in 2018 (according to news reports).

The Famous MoCA Cognitive Test

That test is the Montreal Cognitive Assessment (MoCA) test.

Some mistakenly get confused by the name of the test and think that it is maybe just a test for Canadians since it refers to Montreal in the naming, but the test is globally utilized and was named for being initially developed by researchers in Montreal, Quebec.

Generally, the MoCA is one page in size (see example here), which is handily succinct for this kind of testing, and the person taking the test is given 10 minutes to answer the questions. There is often some leeway allowed in the testing time allotted, and also some latitude related to having the person first become oriented to the test and its instructions.

Nonetheless, the person taking the test should not be provided, say, double the time or anything of that magnitude. The reason why the test should be taken in a prescribed amount of time is that the aspect of time is considered related to cognitive acuity.

In other words, if the person is given more time than others have previously gotten, presumably they can cognitively devote more mental cycles or effort and might do better on the test accordingly.

A timed test is not just about your cognition per se, but also about how fast you think and whether your thinking processes are as fluid as others that have taken the test.

If it took someone an hour and they got a top score, while someone else got a top score in ten minutes, we would be hard-pressed to compare their results. You might liken this to playing timed chess, whereby the longer you have, the more chess moves you can potentially mentally foresee, which is fine in some circumstances, but when trying to make for a balanced playing field, you put a timer on how long each player has to make their move.

That being said, the time allotted for a given test should not be so short as to shortchange the cognitive opportunities, which would once again presumably hamper the measurement of cognition. A chess player that has, say, just two seconds to make a move will likely randomly take a shot rather than try to devote mental energy to the task.

In theory, the amount of time provided should be the classic Goldilocks amount: just enough time to allow for a sufficient dollop of mental effort, and not so much time that it inadvertently extends the cognition and perhaps enables a lesser cognitive capacity to use time as a crutch (assuming that's not what the test is attempting to measure).

I am about to explain specific details of the MoCA cognitive test, so if you want to someday take the test, please know that I am about to spoil your freshness (this is a spoiler alert).

The test attempts to cover a lot of cognitive ground, doing so by providing a variety of cognition tasks, including the use of numbers, the use of words, the use of sentences, the use of the alphabet, the use of visual cognitive capabilities such as interpreting images and composing writing, and so on.

That's worth mentioning because a cognitive test that only covered, say, counting and involved the addition of numbers would be solely focused on your arithmetic cognition. We know that humans have a fuller range of cognitive abilities. As such, a well-balanced cognitive test tries to hit upon a slew of what are considered cognitive dimensions.

Notably, this can be hard to pack into one short test, and raises some criticisms by those that argue it is dubious to have someone undertake a single question on numbers and a single question on words, and so on, and then attempt to generalize overall about their cognition within each respective entire dimension of cognitive facets.

Let's try out a numbers-and-arithmetic-related question.

Are you ready?

You are to start counting from 100 down to 0 and do so by subtracting 7 each time rather than by one.

Okay, your first answer should be 93, and then your next would be 86, and then 79, and so on.

You cannot use a pencil and paper, nor can you use a calculator. This is supposed to be off the top of your head. Using your fingers or toes is also considered taboo.

How did you do?
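For reference, a couple of lines of Python reproduce the full sequence of expected answers (purely a worked check of the arithmetic, not part of the test itself):

```python
# Serial sevens: start at 100 and keep subtracting 7 until you reach the bottom.
answers = [100 - 7 * k for k in range(1, 15)]
print(answers)   # [93, 86, 79, 72, 65, 58, 51, 44, 37, 30, 23, 16, 9, 2]
```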

Try this next one.

Remember these words: Face, Velvet, Church, Daisy, Red.

I want you to look away from these words and say them aloud, without reading them from the page.

In about five minutes, without looking at the page to refresh your memory, try to once again speak aloud what the words were.

What do those cognitive tests signify?

The counting backward is usually a tough one for most people, as they do not normally count in that direction. This forces your mind to slow down and think directly about the numbers and the doing of arithmetic in your head (this is also partially why the same kind of quiz is used for DUI roadway sobriety assessments). If I had asked you to count by sevens starting at zero and counting upward, you would likely do so with much greater ease, and the effort would be less cognitively taxing on you.

For the word memorization, this is an assessment of your short-term memory capacity. It is only five words versus if I had asked you to remember ten words or fifty words. Some people will try to memorize the five words by imagining an image in their minds of each word, while others might string together the words into making a short story that will allow them to recall the words.

Either way, this is an attempt to exercise your cognition around several facets, involving short-term memory, the ability to follow and abide by instructions, a semblance of encoding words in your mind, and other cerebral components that get leveraged.

Some of the questions on these cognitive tests are considered controversial.

In the case of MoCA, there is typically a clock drawing task that some cognitive test experts have heartburn about.

You are asked to draw a clock and indicate the time on the clock as being a stated time such as perhaps 10 minutes past 7. In theory, you would draw a circle or something similar, you would write the numbers of 1 to 12 around the oval to represent each hour, and you would then sketch a short line pointing from the center toward the 7, and a longer mark pointing from the center to the 2 position (since the marks for minutes are normally representative of five minutes each).

Why is this controversial as a cognitive test question?

One concern is that in today's world, we tend to use digital clocks that display the time numerically, and we are less likely to use the conventional circular-shaped clock to represent time anymore.

If a person taking the cognitive test is unfamiliar with oval clocks, does it seem appropriate that they would lose several cognition points for poorly accomplishing this task?

This brings up a larger-scope qualm about cognitive tests, namely, how can we separate knowledge from the act of cognition?

I might not know what a conventional clock is and yet have superb cognitive skills. The test is unfairly ascribing knowledge of something in particular to the act of cognition, and so it is falsely measuring one thing that is not necessarily the facet that is being presumably assessed.

Suppose I asked you a question about baseball, such as please go ahead and name the bases or what the various player positions are called. If perchance you know about baseball, you can answer the question, while otherwise, you are going to fail that question.

Do the baseball question and your corresponding answer offer any reasonable semblance of your cognitive capabilities?

In any case, the MoCA cognitive test is usually scored based on a top score of 30, for which the scale typically used is this:

Score 26-30: No cognitive impairment detected

Score 18-25: Mild cognitive impairment

Score 10-17: Moderate cognitive impairment

Score 00-09: Severe cognitive impairment
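Expressed as code, the banding above is just a threshold lookup (purely illustrative; interpreting an actual score is, as discussed, a job for a trained administrator):

```python
# Tiny helper mapping a raw MoCA score (0-30) to the bands listed above; illustrative only.
def moca_band(score):
    if not 0 <= score <= 30:
        raise ValueError("MoCA scores range from 0 to 30")
    if score >= 26:
        return "No cognitive impairment detected"
    if score >= 18:
        return "Mild cognitive impairment"
    if score >= 10:
        return "Moderate cognitive impairment"
    return "Severe cognitive impairment"

print(moca_band(16))   # "Moderate cognitive impairment"
```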

Research studies tend to indicate that people with demonstrable Alzheimer's tend to score around 16, ending up in the moderate cognitive impairment category. Presumably, a person with no noticeable cognitive impairment, at least per this specific cognitive test, would score at 26 or higher.

Is it possible to achieve a score in the top tier, the score of 26 or above (suggesting that one does not possess any cognitive impairment), and yet still nonetheless have some form of cognitive deficiency?

Yes, certainly so, since this kind of cognitive test is merely a tiny snapshot or sliver and does not cover an entire battery or gamut of cognition; plus, as mentioned earlier, there is the possibility of being a priori familiar with the test and/or actively preparing beforehand for the test, which can substantively boost performance.

Is it possible to score in the mild, moderate, or severe categories of cognitive impairment and somehow not truly be suffering from cognitive impairment?

Yes, certainly so, since a person might be overly stressed and anxious in taking the test and thus perform poorly due to the situation at hand, or could find the given set of tasks unrelated to their cognitive prowess, such as perhaps someone that is otherwise ingeniously inventive and cognitively sharp but finds themselves mentally cowed when doing simple arithmetic or memorizing seemingly nonsense words.

All told, it is best to be cautious in interpreting the results of such cognitive tests (and, once again, reinforces the need for a more holistic approach to cognitive assessments).

AI And Cognitive Tests

Another popular topic in the news and one that is seemingly unrelated to this cognitive testing matter is the emergence of AI (hold that thought, for a moment, well get back to it).

You are likely numbed by the multitude of AI systems that seem to keep being developed and released into and affecting our everyday lives, including the rise of facial recognition, the advent of Natural Language Processing (NLP) in the case of AI systems such as Alexa and Siri, etc.

On top of that drumbeat, there are the touted wonders of AI, entailing a lot of (rather wild) speculation about where AI is headed and whether AI will eclipse human intelligence, possibly even deciding to take over our planet and choosing to enslave or wipe out humanity (for such theories, see my analysis at this link here).

Why bring up AI, especially if it presumably has nothing to do with cognitive tests and cognitive testing?

Well, for the simple fact that AI does have to do with cognitive testing, very much so.

The presumed goal for AI is to achieve the equivalent of human intelligence, as might somehow be embodied in a machine. We do not yet know what the machine will be, though it is likely to consist of computers, but the specification does not dictate what it must be, and thus if you could construct a machine via Legos and duct tape that exhibited human intelligence, more power to you.

In brief, we want to craft artificial cognitive capabilities, which are the presumed crux of human intelligence.

Logically, since that's what we are attempting to accomplish, it stands to reason that we would expect AI to be able to readily pass a human-focused cognitive test, since doing so would illustrate that the AI has arrived at similar cognitive capacities.

I don't want to burst anyone's bubble, but there is no AI today that can do any proper semblance of common-sense reasoning, and we are a long way away from having sentient AI.

Bottom-line: AI today would essentially flunk the MoCA cognitive test and any others of similar complexity too.

Some might try to argue and claim that AI and computers can count down from 100, and can memorize words, and do the other stated tasks, but this is a misleading assertion. Those are tasks undertaken by an AI system that has been constructed for and contrived to perform those specific tasks; such a system is inarguably a far cry from understanding or comprehending the test in a manner akin to human capacities, and claiming otherwise misleadingly anthropomorphizes the matter (for more details, see my analysis at this link here).

There is not yet any kind of truly generalizable AI, which some are now calling Artificial General Intelligence (AGI).

As added clarification, there is a famous test in the AI field known as the Turing Test (see my explanation at this link here). No AI of today, nor any in the foreseeable future, could pass a full-ranging Turing Test, and in some respects, being able to pass a cognitive test like the MoCA is a variant of a Turing Test (in an extremely narrow way).

AI Cognition And Self-Driving Cars

Another related topic entails the advent of AI-based true self-driving cars.

We are heading toward the use of self-driving cars that involve AI autonomously driving the vehicle, doing so without any human driver at the wheel.

Some wonder whether the AI of today, lacking any kind of common-sense reasoning or any inkling of sentience, will be sufficient for driving cars on our public roadways. Critics argue that we are going to have AI substituting for human drivers even though the AI is insufficiently robust to do so (see more on this contention in my analysis here).

Others insist that the driving task does not require the full range of human cognitive capabilities and thus the AI will do just fine in commanding self-driving cars.

Do you believe that the AI driving you to the grocery store needs to be able to first pass a cognitive test and showcase that it can adequately draw a clock and indicate the time of day?

For now, all we can say is that time will tell.

Read more:

How To Flunk Those Cognitive Deficiency Tests And What This Means Too For AI Self-Driving Cars - Forbes

AI artist conjures up convincing fake worlds from memories – New Scientist

[Image: an AI-generated German street scene, captioned "Out of this world". Credit: Stanford University and Intel]

By Matt Reynolds

Take a look at the above image of a German street. At a glance it could be a blurry dashcam photo, or a snap that's gone through one of those apps that turn photos into paintings.

But you won't find this street anywhere on Google Maps. That's because it was generated by an imaginative neural network, stitching together its memories of real streets it was trained on.

"Nothing in the image actually exists," says Qifeng Chen at Stanford University, California, and Intel. Instead, his AI works from rough layouts that tell it what should be in each part of the image. The centre of the image might be labelled "road" while other sections are labelled "trees" or "cars": it's painting by numbers for an AI artist.
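To illustrate the "painting by numbers" idea in rough code form, here is a hedged Python sketch of how such a semantic layout might be prepared as input for a generator of this kind. The label IDs, the array sizes, and the (omitted) generator call are assumptions for illustration, not details taken from Chen's system.

# A hedged sketch: build a coarse semantic layout and one-hot encode it,
# which is the kind of input a layout-conditioned generator would consume.
import numpy as np

ROAD, TREES, CARS, SKY = 0, 1, 2, 3   # assumed label IDs, not from the article

def make_layout(height=256, width=512):
    # Rough layout: sky on top, trees at the sides, road below, one car blob.
    layout = np.full((height, width), SKY, dtype=np.int64)
    layout[height // 2:, :] = ROAD
    layout[height // 3: height // 2, : width // 4] = TREES
    layout[height // 3: height // 2, 3 * width // 4:] = TREES
    layout[2 * height // 3: 5 * height // 6, width // 3: width // 2] = CARS
    return layout

def one_hot(layout, num_classes=4):
    # Convert the integer label map into a per-class channel tensor.
    return np.eye(num_classes, dtype=np.float32)[layout]   # shape (H, W, C)

layout = make_layout()
generator_input = one_hot(layout)
# image = generator(generator_input)   # hypothetical trained network, not shown
print(generator_input.shape)           # (256, 512, 4)

The design point is that the network never receives a photograph at generation time, only this coarse map of what belongs where; everything photographic in the output comes from what it learned during training.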

Chen says the technique could eventually create game worlds that truly resemble the real world. "Using deep learning to render video games could be the future," he says. He has already experimented with using the algorithm to replace the game world in Grand Theft Auto V.

Noah Snavely at Cornell University, New York, is impressed. Generating realistic-looking artificial scenes is a tricky problem, he says, and even the best existing approaches can't do it. Chen's system creates the largest and most detailed examples of their kind that he has seen.

Snavely says that the technology could allow people to describe a world, and then have an AI build it in virtual reality. "It'd be great if you could conjure up a photorealistic scene just by describing it aloud," he says.

Chen's system starts by processing a photo of a real street it hasn't seen before, but one that has been labelled so the AI knows which bits are supposed to be cars, people, roads and so on. The AI then uses this layout as a guide to generate a completely new image.

The AI was trained on 3000 images of German streets, so when it comes across a part of the photo labelled "car", it draws on its existing knowledge to generate a car there in its own creation. "We want the network to memorise what it's seen in the data," Chen says.
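As a rough, hypothetical sketch of that training setup, the snippet below pairs label maps with target photos and nudges a toy network to reproduce the photo from the layout. The tiny network, the L1 loss, and the random stand-in data are illustrative assumptions only; the actual system (cascaded refinement networks, per the arXiv reference below) is far larger and is trained with a perceptual feature-matching loss on real labelled street photos.

# A hedged, toy-scale sketch of learning to map label maps to photos.
import torch
import torch.nn as nn

NUM_CLASSES = 4

generator = nn.Sequential(                     # stand-in for the real generator
    nn.Conv2d(NUM_CLASSES, 32, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
    nn.Sigmoid(),                              # RGB values in [0, 1]
)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for step in range(10):                         # stand-in for passes over ~3000 pairs
    label_map = torch.randint(0, NUM_CLASSES, (1, 64, 128))              # fake layout
    one_hot = nn.functional.one_hot(label_map, NUM_CLASSES).permute(0, 3, 1, 2).float()
    real_photo = torch.rand(1, 3, 64, 128)                               # fake target
    fake_photo = generator(one_hot)
    loss = loss_fn(fake_photo, real_photo)     # push the output toward the real photo
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()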

Intel researchers will present the work at this year's International Conference on Computer Vision, which takes place in Venice, Italy, in late October.

The algorithm was also trained and tested on a smaller database of photos of domestic interiors, but Snavely says that to realise its potential it needs a data set that captures the true diversity of the world. That's easier said than done, however, as each component in the training images needs to be labelled by hand, and creating a data set with that level of detail is extremely labour-intensive.

Chen says his system still has a long way to go before it can build truly photorealistic worlds. The images it produces right now have a blurry, dreamlike quality, as the network isn't able to fill in all the details we expect in photos. He is already working on a larger version of the system that he hopes will be much more capable.

But when it comes to building worlds in virtual reality, that dreamlike nature might not be such a bad thing, says Snavely. We're used to seeing super-slick and realistic worlds on film and in video games, but there's not quite that level of expectation when it comes to VR. "You don't need total photorealism," he says.

Reference: arxiv.org/abs/1707.09405


Continued here:

AI artist conjures up convincing fake worlds from memories - New Scientist