Since February of last year, tens of thousands of patients hospitalized at one of Minnesota's largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model. But few, if any, of those patients have any idea about the AI involved in their care.
That's because frontline clinicians at M Health Fairview generally don't mention the AI whirring behind the scenes in their conversations with patients.
At a growing number of prominent hospitals and clinics around the country, clinicians are turning to AI-powered decision support tools, many of them unproven, to help predict whether hospitalized patients are likely to develop complications or deteriorate, whether they're at risk of readmission, and whether they're likely to die soon. But these patients and their family members are often not informed about or asked to consent to the use of these tools in their care, a STAT examination has found.
The result: Machines that are completely invisible to patients are increasingly guiding decision-making in the clinic.
"Hospitals and clinicians are operating under the assumption that you do not disclose, and that's not really something that has been defended or really thought about," Harvard Law School professor Glenn Cohen said. Cohen is the author of one of only a few articles examining the issue, which has received surprisingly scant attention in the medical literature even as research about AI and machine learning proliferates.
In some cases, there's little room for harm: Patients may not need to know about an AI system that's nudging their doctor to move up an MRI scan by a day, like the one deployed by M Health Fairview, or to be more thoughtful, such as with algorithms meant to encourage clinicians to broach end-of-life conversations. But in other cases, lack of disclosure means that patients may never know what happened if an AI model makes a faulty recommendation that is part of the reason they are denied needed care or undergo an unnecessary, costly, or even harmful intervention.
That's a real risk, because some of these AI models are fraught with bias, and even those that have been demonstrated to be accurate largely haven't yet been shown to improve patient outcomes. Some hospitals don't share data on how well the systems work, justifying the decision on the grounds that they are not conducting research. But that means patients are denied information not only about whether the tools are being used in their care, but also about whether the tools are actually helping them.
The decision not to mention these systems to patients is the product of an emerging consensus among doctors, hospital executives, developers, and system architects, who see little value but plenty of downside in raising the subject.
They worry that bringing up AI will derail clinicians' conversations with patients, diverting time and attention away from actionable steps that patients can take to improve their health and quality of life. Doctors also emphasize that they, not the AI, make the decisions about care. An AI system's recommendation, after all, is just one of many factors that clinicians take into account before making a decision about a patient's care, and it would be absurd to detail every single guideline, protocol, and data source that gets considered, they say.
Internist Karyn Baum, who's leading M Health Fairview's rollout of the tool, said she doesn't bring up the AI to her patients "in the same way that I wouldn't say that the X-ray has decided that you're ready to go home." She said she would never tell a fellow clinician not to mention the model to a patient, but in practice, her colleagues generally don't bring it up either.
Four of the health system's 13 hospitals have now rolled out the hospital discharge planning tool, which was developed by the Silicon Valley AI company Qventus. The model is designed to identify hospitalized patients who are likely to be clinically ready to go home soon and flag steps that might be needed to make that happen, such as scheduling a necessary physical therapy appointment.
Clinicians consult the tool during their daily morning huddle, gathering around a computer to peer at a dashboard of hospitalized patients, estimated discharge dates, and barriers that could prevent that from occurring on schedule. A screenshot of the tool provided by Qventus lists a hypothetical 76-year-old patient, N. Griffin, who is scheduled to leave the hospital on a Tuesday, but the tool prompts clinicians to consider that he might be ready to go home Monday, if he can be squeezed in for an MRI scan by Saturday.
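Qventus has not published how its model works, so the sketch below is only an illustration of the kind of flagging logic the dashboard described above could implement. The Patient fields, the predicted_ready value standing in for the model's output, and the flagging rule are all hypothetical, not Qventus's actual software.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Patient:
    name: str
    scheduled_discharge: date
    predicted_ready: date  # hypothetical model output: estimated clinical readiness
    pending_tasks: list = field(default_factory=list)

def discharge_flags(p: Patient) -> list:
    """Surface patients whose predicted readiness precedes their scheduled
    discharge, along with the barriers standing in the way."""
    flags = []
    if p.predicted_ready < p.scheduled_discharge:
        flags.append(f"{p.name}: may be ready {p.predicted_ready}, "
                     f"scheduled {p.scheduled_discharge}")
        flags.extend(f"  barrier: {task}" for task in p.pending_tasks)
    return flags

# The hypothetical patient from the screenshot: scheduled to leave Tuesday,
# possibly ready Monday if an MRI can be squeezed in by Saturday.
griffin = Patient("N. Griffin", date(2020, 6, 9), date(2020, 6, 8),
                  ["MRI scan by Saturday"])
for line in discharge_flags(griffin):
    print(line)
```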
Baum said she sees the system as "a tool to help me make a better decision, just like a screening tool for sepsis, or a CT scan, or a lab value, but it's not going to take the place of that decision." To her, it doesn't make sense to mention it to patients. If she did, Baum said, she could end up in a lengthy discussion with patients curious about how the algorithm was created.
That could take valuable time away from the medical and logistical specifics that Baum prefers to discuss with patients flagged by the Qventus tool. Among the questions she brings up with them: How are the patient's vital signs and lab test results looking? Does the patient have a ride home? How about a flight of stairs to climb when they get there, or a plan for getting help if they fall?
Some doctors worry that, while well-intentioned, the decision to withhold mention of these AI systems could backfire.
"I think that patients will find out that we are using these approaches, in part because people are writing news stories like this one about the fact that people are using them," said Justin Sanders, a palliative care physician at Dana-Farber Cancer Institute and Brigham and Women's Hospital in Boston. "It has the potential to become an unnecessary distraction and undermine trust in what we're trying to do in ways that are probably avoidable."
Patients themselves are typically excluded from the decision-making process about disclosure. STAT asked four patients who have been hospitalized with serious medical conditions (kidney disease, metastatic cancer, and sepsis) whether they'd want to be told if an AI-powered decision support tool were used in their care. They expressed a range of views: Three said they wouldn't want to know if their doctor was being advised by such a tool. But a fourth patient spoke out forcefully in favor of disclosure.
"This issue of transparency and upfront communication must be insisted upon by patients," said Paul Conway, a 55-year-old policy professional who has been on dialysis and received a kidney transplant, both consequences of managing kidney disease since he was a teenager.
The AI-powered decision support tools being introduced in clinical care are often novel and unproven, but does their rollout constitute research?
Many hospitals believe the answer is no, and they're using that distinction as justification for the decision not to inform patients about the use of these tools in their care. As some health systems see it, these algorithms are tools being deployed as part of routine clinical care to make hospitals more efficient. In their view, patients consent to the use of the algorithms by virtue of being admitted to the hospital.
At UCLA Health, for example, clinicians use a neural network to pinpoint primary care patients at risk of being hospitalized or frequently visiting the emergency room in the next year. Patients are not made aware of the tool because it is considered a part of the health system's quality improvement efforts, according to Mohammed Mahbouba, who spoke to STAT in February when he was UCLA Health's chief data officer. (He has since left the health system.)
"This is in the context of clinical operations," Mahbouba said. "It's not a research project."
Oregon Health and Science University uses a regression-powered algorithm to monitor the majority of its adult hospital patients for signs of sepsis. The tool is not disclosed to patients because it is considered part of hospital operations.
"This is meant for operational care, it is not meant for research. So similar to how you'd have a patient aware of the fact that we're collecting their vital sign information, it's a part of clinical care. That's why it's considered appropriate," said Abhijit Pandit, OHSU's chief technology and data officer.
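OHSU describes its tool only as regression-powered. As a rough illustration of how such a monitor might consume routine vital signs, here is a minimal logistic-regression sketch; every feature, coefficient, and threshold below is invented for illustration and is not OHSU's model.

```python
import math

# All numbers here are invented; a real model's coefficients would be
# fit to historical patient records, not hand-specified.
COEFFS = {"heart_rate": 0.03, "resp_rate": 0.10, "temp_c": 0.45, "wbc_count": 0.08}
INTERCEPT = -24.0
ALERT_THRESHOLD = 0.30  # assumed cutoff for flagging a patient for nurse review

def sepsis_risk(vitals: dict) -> float:
    """Logistic regression: risk = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = INTERCEPT + sum(COEFFS[name] * vitals[name] for name in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

vitals = {"heart_rate": 112, "resp_rate": 24, "temp_c": 38.9, "wbc_count": 14.2}
risk = sepsis_risk(vitals)
if risk > ALERT_THRESHOLD:
    print(f"Flag for review: estimated sepsis risk {risk:.0%}")
```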
But there is no clear line that neatly separates medical research from hospital operations or quality control, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison. And researchers and bioethicists often disagree on what constitutes one or the other.
"This has been a huge issue: Where is that line between quality control, operational control, and research? There's no widespread agreement," Ossorio said.
To be sure, there are plenty of contexts in which hospitals deploying AI-powered decision support tools are getting patients' explicit consent to use them. Some do so in the context of clinical trials, while others ask permission as part of routine clinical operations.
At Parkland Hospital in Dallas, where the orthopedics department has a tool designed to predict whether a patient will die in the next 48 hours, clinicians inform patients about the tool and ask them to sign onto its use.
"Based on the agreement we have, we have to have patient consent explaining why we're using this, how we're using it, how we'll use it to connect them to the right services, etc.," said Vikas Chowdhry, the chief analytics and information officer for a nonprofit innovation center incubated out of Parkland Health System in Dallas.
Hospitals often navigate those decisions internally, since manufacturers of AI systems sold to hospitals and clinics generally don't make recommendations to their customers about what, if anything, frontline clinicians should say to patients.
Jvion, a Georgia-based health care AI company that markets a tool that assesses readmission risk in hospitalized patients and suggests interventions to prevent another hospital stay, encourages the handful of hospitals deploying its model to exercise their own discretion about whether and how to discuss it with patients. But in practice, the AI system usually doesn't get brought up in these conversations, according to John Frownfelter, a physician who serves as Jvion's chief medical information officer.
"Since the judgment is left in the hands of the clinicians, it's almost irrelevant," Frownfelter said.
When patients are given an unproven drug, the protocol is straightforward: They must explicitly consent to enroll in a clinical study authorized by the Food and Drug Administration and monitored by an institutional review board. And a researcher must inform them about the potential risks and benefits of taking the medication.
That's not how it works with AI systems being used for decision support in the clinic. These tools aren't treatments or fully automated diagnostic tools, and they don't directly determine what kind of therapy a patient may receive; any of those characteristics would make them subject to more stringent regulatory oversight.
Developers of AI-powered decision support tools generally don't seek approval from the FDA, in part because the 21st Century Cures Act, which was signed into law in 2016, was interpreted as taking most medical advisory tools out of the FDA's jurisdiction. (That could change: In guidelines released last fall, the agency said it intends to focus its oversight powers on AI decision-support products meant to guide treatment of serious or critical conditions, but whose rationale cannot be independently evaluated by doctors, a definition that lines up with many of the AI models that patients aren't being informed about.)
The result, for now, is that disclosure around AI-powered decision support tools falls into a regulatory gray zone, and that means the hospitals rolling them out often lack incentive to seek informed consent from patients.
"A lot of people justifiably think there are many quality-control activities that health care systems should be doing that involve gathering data," Wisconsin's Ossorio said. "And they say it would be burdensome and confusing to patients to get consent for every one of those activities that touch on their data."
In contrast to the AI-powered decision support tools, there are a few commonly used algorithms subject to the regulation laid out by the Cures Act, such as the type behind the genetic tests that clinicians use to chart a course of treatment for a cancer patient. But in those cases, the genetic test is extremely influential in determining what kind of therapy or drug a patient may receive. Conversely, there's no similarly clear link between an algorithm designed to predict whether a patient may be readmitted to the hospital and the way they'll be treated if and when that occurs.
Still, Ossorio would support an ultra-cautious approach: "I do think people throw a lot of things into the operations bucket, and if it were me, I'd say just file for institutional review board approval and either get consent or justify why you could waive it."
Further complicating matters is the lack of publicly disclosed data showing whether and how well some of the algorithms work, as well as their overall impact on patients. The public doesn't know whether OHSU's sepsis-prediction algorithm actually predicts sepsis, nor whether UCLA's admissions tool actually predicts admissions.
Some AI-powered decision support tools are supported by early data presented at conferences and published in journals, and several developers say they're in the process of sharing results: Jvion, for example, has submitted to a journal a study that showed a 26% reduction in readmissions when its readmissions risk tool was deployed; that paper is currently in review, according to Jvion's Frownfelter.
But asked by STAT for data on their tools' impact on patient care, several hospital executives declined or said they hadn't completed their evaluations.
A spokesperson from UCLA said it had yet to complete an assessment of the performance of its admissions algorithm.
A spokesperson from OHSU said that according to its latest report, run before the Covid-19 pandemic began in March, its sepsis algorithm had been used on 18,000 patients, of whom it had flagged 1,659 as at risk; nurses indicated concern for 210 of those flagged. He added that the tool's impact on patients, as measured by hospital death rates and length of time spent in the facility, was inconclusive.
"It's disturbing that they're deploying these tools without having the kind of information that they should have," said Wisconsin's Ossorio. "Before you use a tool to do medical decision-making, you should do the research."
Ossorio said it may be the case that these tools are merely being used as an additional data point and not to make decisions. But if health systems don't disclose data showing how the tools are being used, there's no way to know how heavily clinicians may be leaning on them.
"They always say these tools are meant to be used in combination with clinical data and it's up to the clinician to make the final decision. But what happens if we learn the algorithm is relied upon over and above all other kinds of information?" she said.
There are countless advocacy groups representing a wide range of patients, but no organization exists to speak for those who've unknowingly had AI systems involved in their care. They have no way, after all, of even identifying themselves as part of a common community.
STAT was unable to identify any patients who learned after the fact that their care had been guided by an undisclosed AI model, but asked several patients how they'd feel, hypothetically, about an AI system being used in their care without their knowledge.
Conway, the patient with kidney disease, maintained that he would want to know. He also dismissed the concern raised by some physicians that mentioning AI would derail a conversation. "Woe to the professional that as you introduce a topic, a patient might actually ask questions and you have to answer them," he said.
Other patients, however, said that while they welcomed the use of AI and other innovations in their care, they wouldnt expect or even want their doctor to mention it. They likened it to not wanting to be privy to numbers around their prognosis, such as how much time they might expect to have left, or how many patients with their disease are still alive after five years.
"Any of those statistics or algorithms are not going to change how you confront your disease, so why burden yourself with them, is my philosophy," said Stacy Hurt, a patient advocate from Pittsburgh who received a diagnosis of metastatic colorectal cancer in 2014, on her 44th birthday, when she was working as an executive at a pharmaceutical company. (She is now doing well and is approaching five years with no evidence of disease.)
Katy Grainger, who lost the lower half of both legs and seven fingertips to sepsis, said she would have supported her care team using an algorithm like OHSU's sepsis model, so long as her clinicians didn't rely on it too heavily. She said she also would not have wanted to be informed that the tool was being used.
"I don't monitor how doctors do their jobs. I just trust that they're doing it well," she said. "I have to believe that I'm not a doctor and I can't control what they do."
Still, Grainger expressed some reservations about the tool, including the possibility that it might have failed to identify her. At 52, Grainger was healthy and fairly young when she developed sepsis. She had been sick for days and visited an urgent care clinic, which gave her antibiotics for what they thought was a basic bacterial infection; it quickly progressed to a serious case of sepsis.
"I would be worried that [the algorithm] could have missed me. I was young (well, 52), healthy, in some of the best shape of my life, eating really well, and then boom," Grainger said.
Dana Deighton, a marketing professional from Virginia, suspects that if an algorithm had scanned her data back in 2013, it would have made a dire prediction about her life expectancy: She had just been diagnosed with metastatic esophageal cancer at age 43, after all. But she probably wouldn't have wanted to hear about an AI's forecast at such a tender and sensitive time.
"If a physician brought up AI when you are looking for a warmer, more personal touch, it might actually have the opposite and worse effect," Deighton said. (She's doing well now; her scans have turned up no evidence of disease since 2015.)
Harvard's Cohen said he wants to see hospital systems, clinicians, and AI manufacturers come together for a thoughtful discussion around whether they should be disclosing the use of these tools to patients. "If we're not doing that, then the question is why aren't we telling them about this when we tell them about a lot of other things," he said.
Cohen said he worries that uptake and trust in AI and machine learning could plummet if patients were to find out, after the fact, that there's "a rash of this being used without anyone ever telling them."
"That's a scary thing," he said, "if you think this is the way the future is going to go."
This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.