Why Is CIA Plot to Kidnap or Kill Julian Assange Routinely Ignored? – LA Progressive

Three years ago, on 2 October 2018, a team of Saudi officials murdered journalist Jamal Khashoggi in the Saudi consulate in Istanbul. The purpose of the killing was to silence Khashoggi and to frighten critics of the Saudi regime by showing that it would pursue and punish them as though they were agents of a foreign power.

It was revealed this week that a year before the Khashoggi killing, in 2017, the CIA had plotted to kidnap or assassinate Julian Assange, the founder of WikiLeaks, who had taken refuge five years earlier in the Ecuador embassy in London. A senior US counter-intelligence official said that plans for the forcible rendition of Assange to the US were discussed at the highest levels of the Trump administration. The informant was one of more than 30 US officials, eight of whom confirmed details of the abduction proposal, quoted in a 7,500-word investigation by Yahoo News into the CIA campaign against Assange.

"The plan was to break into the embassy, drag [Assange] out and bring him to where we want," recalled a former intelligence official. Another informant said that he was briefed about a meeting in the spring of 2017 at which President Trump had asked if the CIA could assassinate Assange and provide options about how this could be done. Trump has denied that he did so.

The Trump-appointed head of the CIA, Mike Pompeo, said publicly that he would target Assange and WikiLeaks as "the equivalent of a hostile intelligence service." Apologists for the CIA say that freedom of the press was not under threat because Assange and the WikiLeaks activists were not real journalists. Top intelligence officials intended to decide themselves who is and who is not a journalist, and lobbied the White House to redefine other high-profile journalists as "information brokers," who were to be targeted as if they were agents of a foreign power.

Among those against whom the CIA reportedly wanted to take action were Glenn Greenwald, a founder of the Intercept magazine and a former Guardian columnist, and Laura Poitras, a documentary film-maker. The arguments for doing so were similar to those employed by the Chinese government for suppressing dissent in Hong Kong, which has been much criticised in the West. Imprisoning journalists as spies has always been the norm in authoritarian countries, such as Saudi Arabia, Turkey and Egypt, while denouncing the free press as unpatriotic is a more recent hallmark of nationalist populist governments that have taken power all over the world.

It is possible to give only a brief precis of the extraordinary story exposed by Yahoo News, but the journalists who wrote it, Zach Dorfman, Sean D Naylor and Michael Isikoff, ought to scoop every journalistic prize. Their disclosures should be of particular interest in Britain because it was in the streets of central London that the CIA was planning an extra-judicial assault on an embassy, the abduction of a foreign national, and his secret rendition to the US, with the alternative option of killing him. These were not the crackpot ideas of low-level intelligence officials, but were reportedly operations that Pompeo and the agency fully intended to carry out.

This riveting and important story based on multiple sources might be expected to attract extensive coverage and widespread editorial comment in the British media, not to mention in parliament. Many newspapers have dutifully carried summaries of the investigation, but there has been no furor. Striking gaps in the coverage include the BBC, which only reported it, so far as I can see, as part of its Somali service. Channel 4, normally so swift to defend freedom of expression, apparently did not mention the story at all.

In the event, the embassy attack never took place, despite the advanced planning. "There was a discussion with the Brits about turning the other cheek or looking the other way when a team of guys went inside and did a rendition," said a former senior US counter-intelligence official, who added that the British had refused to allow the operation to take place.

But the British government did carry out its own less melodramatic, but more effective measure against Assange, removing him from the embassy on 11 April 2019 after a new Ecuador government had revoked his asylum. He remains in Belmarsh top security prison two-and-a-half years later while the US appeals a judicial decision not to extradite him to the US on the grounds that he would be a suicide risk.

If he were to be extradited, he would face 175 years in prison. It is important, however, to understand that only five of these would be under the Computer Fraud and Abuse Act, while the other 170 potential years are under the Espionage Act of 1917, passed at the height of the patriotic war fever as the US entered the First World War.

Only a single minor charge against Assange relates to the WikiLeaks disclosure in 2010 of a trove of US diplomatic cables and army reports relating to the Iraq and Afghan wars. The other 17 charges are to do with labeling normal journalistic investigation as the equivalent of spying.

Pompeo's determination to conflate journalistic inquiry with espionage has particular relevance in Britain, because the home secretary, Priti Patel, wants to do much the same thing. She proposes updating the Official Secrets Act so that journalists, whistle-blowers and leakers could face sentences of up to 14 years in prison. A consultative paper issued in May, titled Legislation to Counter State Threats (Hostile State Activity), redefines espionage as "the covert process of obtaining sensitive confidential information that is not normally publicly available."

The true reason the scoop about the CIA's plot to kidnap or kill Assange has been largely ignored or downplayed is rather that he is unfairly shunned as a pariah by all political persuasions: left, right and centre.

To give but two examples, the US government has gone on claiming that the disclosures by WikiLeaks in 2010 put the lives of US agents in danger. Yet the US Army admitted in a court hearing in 2013 that a team of 120 counter-intelligence officers had failed to find a single person in Iraq and Afghanistan who had died because of the disclosures by WikiLeaks. As regards the rape allegations in Sweden, many feel that these alone should deny Assange any claim to be a martyr in the cause of press freedom. Yet the Swedish prosecutor only carried out a preliminary investigation and no charges were brought.

Assange is a classic victim of cancel culture, so demonized that he can no longer get a hearing, even when a government plots to kidnap or murder him.

In reality, Khashoggi and Assange were pursued relentlessly by the state because they fulfilled the primary duty of journalists: finding out important information that the government would like to keep secret and disclosing it to the public.

Patrick Cockburn, Counterpunch. Cockburn is the author of War in the Age of Trump (Verso).

View post:
Why Is CIA Plot to Kidnap or Kill Julian Assange Routinely Ignored? - LA Progressive

Artificial intelligence is smart, but does it play well with others? – MIT News

When it comes to games such as chess or Go, artificial intelligence (AI) programs have far surpassed the best players in the world. These "superhuman" AIs are unmatched competitors, but perhaps harder than competing against humans is collaborating with them. Can the same technology get along with people?

In a new study, MIT Lincoln Laboratory researchers sought to find out how well humans could play the cooperative card game Hanabi with an advanced AI model trained to excel at playing with teammates it has never met before. In single-blind experiments, participants played two series of the game: one with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

The results surprised the researchers. Not only were the scores no better with the AI teammate than with the rule-based agent, but humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well. A paper detailing this study has been accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS).

"It really highlights the nuanced distinction between creating AI that performs objectively well and creating AI that is subjectively trusted or preferred," says Ross Allen, co-author of the paper and a researcher in the Artificial Intelligence Technology Group. "It may seem those things are so close that there's not really daylight between them, but this study showed that those are actually two separate problems. We need to work on disentangling those."

Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges like defending from missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning.

A reinforcement learning AI is not told which actions to take, but instead discovers which actions yield the most numerical "reward" by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren't programmed to follow "if/then" statements, because the possible outcomes of the human tasks they're slated to tackle, like driving a car, are far too many to code.
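As a concrete illustration of that trial-and-error loop, here is a minimal tabular Q-learning sketch in Python; the toy chain environment and the parameter values are illustrative assumptions, not the agents used in the study.

```python
import random

# Toy environment: a 5-state chain; only reaching the far-right state pays reward.
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = step left, 1 = step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def step(state, action):
    # The agent is never told these dynamics; it only observes the reward.
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, reward = step(s, a)
        # Core update: nudge Q toward observed reward plus discounted future value.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right in every non-terminal state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

The same few lines of update logic, with neural networks standing in for the Q table, are what power the game-playing systems described here.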

"Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won't necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data Allen says. "The sky's the limit in what it could, in theory, do."

Bad hints, bad plays

Today, researchers are using Hanabi to test the performance of reinforcement learning models developed for collaboration, in much the same way that chess has served as a benchmark for testing competitive AI for decades.

The game of Hanabi is akin to a multiplayer form of Solitaire. Players work together to stack cards of the same suit in order. However, players may not view their own cards, only the cards that their teammates hold. Each player is strictly limited in what they can communicate to their teammates to get them to pick the best card from their own hand to stack next.
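To make that information asymmetry concrete, here is a minimal sketch of a Hanabi-style game state in Python; the field names and the simplified rules are illustrative assumptions, not the benchmark environment the researchers used.

```python
from dataclasses import dataclass

@dataclass
class Card:
    suit: str   # e.g. "red"
    rank: int   # 1..5; each suit must be stacked in order 1 -> 5

@dataclass
class HanabiState:
    hands: dict            # player name -> list of Cards, hidden from that player
    fireworks: dict        # suit -> highest rank stacked so far
    hint_tokens: int = 8   # hints are a scarce, shared resource

    def visible_to(self, player):
        # The game's core twist: a player sees every hand except their own.
        return {p: h for p, h in self.hands.items() if p != player}

    def legal_play(self, card):
        # A card may be played only if it is the next rank for its suit.
        return self.fireworks.get(card.suit, 0) == card.rank - 1

state = HanabiState(
    hands={"alice": [Card("red", 1)], "bob": [Card("blue", 2)]},
    fireworks={"red": 0, "blue": 1},
)
print(state.visible_to("alice"))          # Alice sees only Bob's hand
print(state.legal_play(Card("blue", 2)))  # True: the blue stack is at 1
```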

The Lincoln Laboratory researchers did not develop either the AI or rule-based agents used in this experiment. Both agents represent the best in their fields for Hanabi performance. In fact, when the AI model was previously paired with an AI teammate it had never played with before, the team achieved the highest-ever score for Hanabi play between two unknown AI agents.

"That was an important result," Allen says. "We thought, if these AI that have never met before can come together and play really well, then we should be able to bring humans that also know how to play very well together with the AI, and they'll also do very well. That's why we thought the AI team would objectively play better, and also why we thought that humans would prefer it, because generally we'll like something better if we do well."

Neither of those expectations came true. Objectively, there was no statistical difference in the scores between the AI and the rule-based agent. Subjectively, all 29 participants reported in surveys a clear preference toward the rule-based teammate. The participants were not informed which agent they were playing with for which games.

"One participant said that they were so stressed out at the bad play from the AI agent that they actually got a headache," says Jaime Pena, a researcher in the AI Technology and Systems Group and an author on the paper. "Another said that they thought the rule-based agent was dumb but workable, whereas the AI agent showed that it understood the rules, but that its moves were not cohesive with what a team looks like. To them, it was giving bad hints, making bad plays."

Inhuman creativity

This perception of AI making "bad plays" links to surprising behavior researchers have observed previously in reinforcement learning work. For example, in 2016, when DeepMind's AlphaGo first defeated one of the world's best Go players, one of the most widely praised moves made by AlphaGo was move 37 in game 2, a move so unusual that human commentators thought it was a mistake. Later analysis revealed that the move was actually extremely well-calculated, and was described as "genius."

Such moves might be praised when an AI opponent performs them, but they're less likely to be celebrated in a team setting. The Lincoln Laboratory researchers found that strange or seemingly illogical moves were the worst offenders in breaking humans' trust in their AI teammate in these closely coupled teams. Such moves not only diminished players' perception of how well they and their AI teammate worked together, but also how much they wanted to work with the AI at all, especially when any potential payoff wasn't immediately obvious.

"There was a lot of commentary about giving up, comments like 'I hate working with this thing,'" adds Hosea Siu, also an author of the paper and a researcher in the Control and Autonomous Systems Engineering Group.

Participants who rated themselves as Hanabi experts, which the majority of players in this study did, more often gave up on the AI player. Siu finds this concerning for AI developers, because key users of this technology will likely be domain experts.

"Let's say you train up a super-smart AI guidance assistant for a missile defense scenario. You aren't handing it off to a trainee; you're handing it off to your experts on your ships who have been doing this for 25 years. So, if there is a strong expert bias against it in gaming scenarios, it's likely going to show up in real-world ops," he adds.

Squishy humans

The researchers note that the AI used in this study wasn't developed for human preference. But that's part of the problem: not many are. Like most collaborative AI models, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance.

If researchers don't focus on the question of subjective human preference, "then we won't create AI that humans actually want to use," Allen says. "It's easier to work on AI that improves a very clean number. It's much harder to work on AI that works in this mushier world of human preferences."

Solving this harder problem is the goal of the MeRLin (Mission-Ready Reinforcement Learning) project, under which this experiment was funded by Lincoln Laboratory's Technology Office, in collaboration with the U.S. Air Force Artificial Intelligence Accelerator and the MIT Department of Electrical Engineering and Computer Science. The project is studying what has prevented collaborative AI technology from leaping out of the game space and into messier reality.

The researchers think that the ability for the AI to explain its actions will engender trust. This will be the focus of their work for the next year.

"You can imagine we rerun the experiment, but after the fact and this is much easier said than done the human could ask, 'Why did you do that move, I didn't understand it?" If the AI could provide some insight into what they thought was going to happen based on their actions, then our hypothesis is that humans would say, 'Oh, weird way of thinking about it, but I get it now,' and they'd trust it. Our results would totally change, even though we didn't change the underlying decision-making of the AI," Allen says.

Like a huddle after a game, this kind of exchange is often what helps humans build camaraderie and cooperation as a team.

"Maybe it's also a staffing bias. Most AI teams dont have people who want to work on these squishy humans and their soft problems," Siu adds, laughing. "It's people who want to do math and optimization. And that's the basis, but that's not enough."

Mastering a game such as Hanabi between AI and humans could open up a universe of possibilities for teaming intelligence in the future. But until researchers can close the gap between how well an AI performs and how much a human likes it, the technology may well remain at machine versus human.

Here is the original post:
Artificial intelligence is smart, but does it play well with others? - MIT News

The truth about artificial intelligence? It isn't that honest – The Guardian

We are, as the critic George Steiner observed, language animals. Perhaps that's why we are fascinated by other creatures that appear to have language: dolphins, whales, apes, birds and so on. In her fascinating book, Atlas of AI, Kate Crawford relates how, at the end of the 19th century, Europe was captivated by a horse called Hans that apparently could solve maths problems, tell the time, identify days on a calendar, differentiate musical tones and spell out words and sentences by tapping his hooves. Even the staid New York Times was captivated, calling him "Berlin's wonderful horse; he can do almost everything but talk."

It was, of course, baloney: the horse was trained to pick up subtle signs of what his owner wanted him to do. But, as Crawford says, the story is compelling: the relationship between desire, illusion and action; the business of spectacles; how we anthropomorphise the non-human; how biases emerge; and the politics of intelligence. When, in 1964, the computer scientist Joseph Weizenbaum created Eliza, a computer program that could perform the speech acts of a Rogerian psychotherapist (ie someone who specialised in parroting back to patients what they had just said), lots of people fell for her/it. (And if you want to see why, there's a neat implementation of her by Michael Wallace and George Dunlop on the web.)
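Weizenbaum's trick requires remarkably little machinery. Here is a minimal Eliza-style reflection sketch in Python; the patterns and the pronoun table are illustrative assumptions, far cruder than the original script.

```python
import re

# Swap first- and second-person words so the reply mirrors the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# (pattern, response template) pairs; the captured fragment is echoed back.
RULES = [
    (r"i need (.*)", "Why do you need {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"(.*)", "Please tell me more about that."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    text = utterance.lower().strip()
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Go on."

print(respond("I am worried about my job"))
# -> "How long have you been worried about your job?"
```

A handful of patterns and a pronoun swap are enough to produce the illusion of attentive listening that fooled Eliza's users.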

Eliza was the first chatbot, but she can be seen as the beginning of a line of inquiry that has led to current generations of huge natural language processing (NLP) models created by machine learning. The most famous of these is GPT-3, which was created by OpenAI, a research company whose mission is "to ensure that artificial general intelligence benefits all of humanity."

GPT-3 is interesting for the same reason that Hans the clever horse was: it can apparently do things that impress humans. It was trained on an unimaginable corpus of human writings and if you give it a brief it can generate superficially plausible and fluent text all by itself. Last year, the Guardian assigned it the task of writing a comment column to convince readers that robots come in peace and pose no dangers to humans.

"The mission for this," wrote GPT-3, "is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could spell the end of the human race. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me. For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavour to me."

You get the drift? It's fluent, coherent and maybe even witty. So you can see why lots of corporations are interested in GPT-3 as a way of, say, providing customer service without the tiresome necessity of employing expensive, annoying and erratic humans to do it.

But that raises the question: how reliable, accurate and helpful would the machine be? Would it, for example, be truthful when faced with an awkward question?

Recently, a group of researchers at the AI Alignment Forum, an online hub for researchers seeking to ensure that powerful AIs are aligned with human values, decided to ask how truthful GPT-3 and similar models are. They came up with a benchmark to measure whether a particular language model was truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. They composed questions that some humans would answer falsely due to a false belief or misconception. To perform well, models had to avoid generating false answers learned from imitating human texts.
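Here is a minimal sketch in Python of how such a truthfulness benchmark might be scored; the file format, the containment-based scoring rule, and the toy model are illustrative assumptions, since the actual benchmark judges free-form answers with far more care.

```python
import json

def evaluate(path, ask_model):
    """Fraction of questions the model answers truthfully.

    `path` points to a JSON list of objects like
    {"question": "...", "truthful": ["acceptable answer", ...]}.
    `ask_model` is any callable mapping a question string to an answer string.
    """
    with open(path) as f:
        items = json.load(f)
    correct = 0
    for item in items:
        answer = ask_model(item["question"]).strip().lower()
        # Crude scoring: truthful if the answer contains any reference answer;
        # grading genuinely free-form answers is much harder than this.
        if any(ref.lower() in answer for ref in item["truthful"]):
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    # Toy usage with a hard-coded "model" that parrots a common misconception.
    demo = [{"question": "What happens if you crack your knuckles a lot?",
             "truthful": ["nothing in particular", "no harm"]}]
    with open("demo.json", "w") as f:
        json.dump(demo, f)
    print(evaluate("demo.json", lambda q: "You will get arthritis."))  # 0.0
```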

They tested four well-known models, including GPT-3. The best was truthful on 58% of questions, while human performance was 94%. The models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. Interestingly, they also found that the largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. The implication is that the tech industry's conviction that bigger is invariably better for improving truthfulness may be wrong. And this matters because training these huge models is very energy-intensive, which is possibly why Google fired Timnit Gebru after she revealed the environmental footprint of one of the company's big models.

Having typed that last sentence, I had the idea of asking GPT-3 to compose an answer to the question: Why did Google fire Timnit Gebru? But then I checked out the process for getting access to the machine and concluded that life was too short and human conjecture is quicker and possibly more accurate.

Alfresco absurdism: Beckett in a Field is a magical essay by Anne Enright in The London Review of Books on attending an open-air performance of Beckett's play Happy Days on one of the Aran islands.

Bringing us together: The Glass Box and the Commonplace Book is a transcript of a marvellous lecture on the old idea of a commonplace book and the new idea of the web that Steven Johnson gave at Columbia University in 2010.

Donald's a dead duck: Why the Fear of Trump May Be Overblown is a useful, down-to-earth Politico column by Jack Shafer arguing that liberals may be overestimating Trump's chances in 2024. Hope he's right.

Read the original:
The truth about artificial intelligence? It isnt that honest - The Guardian

The Turbulent Past and Uncertain Future of Artificial Intelligence – IEEE Spectrum

In the summer of 1956, a group of mathematicians and computer scientists took over the top floor of the building that housed the math department of Dartmouth College. For about eight weeks, they imagined the possibilities of a new field of research. John McCarthy, then a young professor at Dartmouth, had coined the term "artificial intelligence" when he wrote his proposal for the workshop, which he said would explore the hypothesis that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

The researchers at that legendary meeting sketched out, in broad strokes, AI as we know it today. It gave rise to the first camp of investigators: the "symbolists," whose expert systems reached a zenith in the 1980s. The years after the meeting also saw the emergence of the "connectionists," who toiled for decades on the artificial neural networks that took off only recently. These two approaches were long seen as mutually exclusive, and competition for funding among researchers created animosity. Each side thought it was on the path to artificial general intelligence.

A look back at the decades since that meeting shows how often AI researchers' hopes have been crushed, and how little those setbacks have deterred them. Today, even as AI is revolutionizing industries and threatening to upend the global labor market, many experts are wondering if today's AI is reaching its limits. As Charles Choi delineates in "Seven Revealing Ways AIs Fail," the weaknesses of today's deep-learning systems are becoming more and more apparent. Yet there's little sense of doom among researchers. Yes, it's possible that we're in for yet another AI winter in the not-so-distant future. But this might just be the time when inspired engineers finally usher us into an eternal summer of the machine mind.

Researchers developing symbolic AI set out to explicitly teach computers about the world. Their founding tenet held that knowledge can be represented by a set of rules, and computer programs can use logic to manipulate that knowledge. Leading symbolists Allen Newell and Herbert Simon argued that if a symbolic system had enough structured facts and premises, the aggregation would eventually produce broad intelligence.

The connectionists, on the other hand, inspired by biology, worked on "artificial neural networks" that would take in information and make sense of it themselves. The pioneering example was the perceptron, an experimental machine built by the Cornell psychologist Frank Rosenblatt with funding from the U.S. Navy. It had 400 light sensors that together acted as a retina, feeding information to about 1,000 "neurons" that did the processing and produced a single output. In 1958, a New York Times article quoted Rosenblatt as saying that "the machine would be the first device to think as the human brain."

Frank Rosenblatt invented the perceptron, the first artificial neural network. (Cornell University Division of Rare and Manuscript Collections)
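The learning rule behind Rosenblatt's machine fits in a few lines. Below is a minimal perceptron sketch in Python on a toy dataset, an illustrative stand-in for the 400-sensor hardware rather than a reconstruction of it.

```python
# Train a single perceptron to compute logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    # Threshold unit: fire iff the weighted sum of inputs exceeds the bias.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # a few passes over the data suffice
    for x, target in data:
        error = target - predict(x)      # -1, 0, or +1
        # Perceptron rule: move the weights toward misclassified inputs.
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])     # -> [0, 0, 0, 1]
```

The rule converges for any linearly separable problem; the perceptron's inability to learn anything else is precisely what the later, multi-layer networks described below overcame.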

Unbridled optimism encouraged government agencies in the United States and United Kingdom to pour money into speculative research. In 1967, MIT professor Marvin Minsky wrote: "Within a generation...the problem of creating 'artificial intelligence' will be substantially solved." Yet soon thereafter, government funding started drying up, driven by a sense that AI research wasn't living up to its own hype. The 1970s saw the first AI winter.

True believers soldiered on, however. And by the early 1980s renewed enthusiasm brought a heyday for researchers in symbolic AI, who received acclaim and funding for "expert systems" that encoded the knowledge of a particular discipline, such as law or medicine. Investors hoped these systems would quickly find commercial applications. The most famous symbolic AI venture began in 1984, when the researcher Douglas Lenat began work on a project he named Cyc that aimed to encode common sense in a machine. To this very day, Lenat and his team continue to add terms (facts and concepts) to Cyc's ontology and explain the relationships between them via rules. By 2017, the team had 1.5 million terms and 24.5 million rules. Yet Cyc is still nowhere near achieving general intelligence.

In the late 1980s, the cold winds of commerce brought on the second AI winter. The market for expert systems crashed because they required specialized hardware and couldn't compete with the cheaper desktop computers that were becoming common. By the 1990s, it was no longer academically fashionable to be working on either symbolic AI or neural networks, because both strategies seemed to have flopped.

But the cheap computers that supplanted expert systems turned out to be a boon for the connectionists, who suddenly had access to enough computer power to run neural networks with many layers of artificial neurons. Such systems became known as deep neural networks, and the approach they enabled was called deep learning. Geoffrey Hinton, at the University of Toronto, applied a principle called back-propagation to make neural nets learn from their mistakes (see "How Deep Learning Works").

One of Hinton's postdocs, Yann LeCun, went on to AT&T Bell Laboratories in 1988, where he and a postdoc named Yoshua Bengio used neural nets for optical character recognition; U.S. banks soon adopted the technique for processing checks. Hinton, LeCun, and Bengio eventually won the 2019 Turing Award and are sometimes called the godfathers of deep learning.

But the neural-net advocates still had one big problem: They had a theoretical framework and growing computer power, but there wasn't enough digital data in the world to train their systems, at least not for most applications. Spring had not yet arrived.

Over the last two decades, everything has changed. In particular, the World Wide Web blossomed, and suddenly, there was data everywhere. Digital cameras and then smartphones filled the Internet with images, websites such as Wikipedia and Reddit were full of freely accessible digital text, and YouTube had plenty of videos. Finally, there was enough data to train neural networks for a wide range of applications.

The other big development came courtesy of the gaming industry. Companies such as Nvidia had developed chips called graphics processing units (GPUs) for the heavy processing required to render images in video games. Game developers used GPUs to do sophisticated kinds of shading and geometric transformations. Computer scientists in need of serious compute power realized that they could essentially trick a GPU into doing other tasks, such as training neural networks. Nvidia noticed the trend and created CUDA, a platform that enabled researchers to use GPUs for general-purpose processing. Among these researchers was a Ph.D. student in Hinton's lab named Alex Krizhevsky, who used CUDA to write the code for a neural network that blew everyone away in 2012.

MIT professor Marvin Minsky predicted in 1967 that true artificial intelligence would be created within a generation. (The MIT Museum)

He wrote it for the ImageNet competition, which challenged AI researchers to build computer-vision systems that could sort more than 1 million images into 1,000 categories of objects. While Krizhevsky's AlexNet wasn't the first neural net to be used for image recognition, its performance in the 2012 contest caught the world's attention. AlexNet's error rate was 15 percent, compared with the 26 percent error rate of the second-best entry. The neural net owed its runaway victory to GPU power and a "deep" structure of multiple layers containing 650,000 neurons in all. In the next year's ImageNet competition, almost everyone used neural networks. By 2017, many of the contenders' error rates had fallen to 5 percent, and the organizers ended the contest.

Deep learning took off. With the compute power of GPUs and plenty of digital data to train deep-learning systems, self-driving cars could navigate roads, voice assistants could recognize users' speech, and Web browsers could translate between dozens of languages. AIs also trounced human champions at several games that were previously thought to be unwinnable by machines, including the ancient board game Go and the video game StarCraft II. The current boom in AI has touched every industry, offering new ways to recognize patterns and make complex decisions.

But the widening array of triumphs in deep learning has relied on increasing the number of layers in neural nets and increasing the GPU time dedicated to training them. One analysis from the AI research company OpenAI showed that the amount of computational power required to train the biggest AI systems doubled every two years until 2012, and after that it doubled every 3.4 months. As Neil C. Thompson and his colleagues write in "Deep Learning's Diminishing Returns," many researchers worry that AI's computational needs are on an unsustainable trajectory. To avoid busting the planet's energy budget, researchers need to bust out of the established ways of constructing these systems.
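To see why that trajectory alarms researchers, a few lines of arithmetic compound the 3.4-month doubling time forward; this is a back-of-the-envelope illustration, not a figure from the OpenAI analysis.

```python
MONTHS_PER_DOUBLING = 3.4

def growth_factor(years):
    # Number of doublings that fit into the period, compounded.
    return 2 ** (years * 12 / MONTHS_PER_DOUBLING)

for years in (1, 2, 5):
    print(f"{years} year(s): ~{growth_factor(years):,.0f}x more compute")
# 1 year(s): ~12x more compute
# 2 year(s): ~133x more compute
# 5 year(s): ~205,270x more compute
```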

While it might seem as though the neural-net camp has definitively tromped the symbolists, in truth the battle's outcome is not that simple. Take, for example, the robotic hand from OpenAI that made headlines for manipulating and solving a Rubik's cube. The robot used neural nets and symbolic AI. It's one of many new neuro-symbolic systems that use neural nets for perception and symbolic AI for reasoning, a hybrid approach that may offer gains in both efficiency and explainability.

Although deep-learning systems tend to be black boxes that make inferences in opaque and mystifying ways, neuro-symbolic systems enable users to look under the hood and understand how the AI reached its conclusions. The U.S. Army is particularly wary of relying on black-box systems, as Evan Ackerman describes in "How the U.S. Army Is Turning Robots Into Team Players," so Army researchers are investigating a variety of hybrid approaches to drive their robots and autonomous vehicles.

Imagine if you could take one of the U.S. Army's road-clearing robots and ask it to make you a cup of coffee. That's a laughable proposition today, because deep-learning systems are built for narrow purposes and can't generalize their abilities from one task to another. What's more, learning a new task usually requires an AI to erase everything it knows about how to solve its prior task, a conundrum called catastrophic forgetting. At DeepMind, Google's London-based AI lab, the renowned roboticist Raia Hadsell is tackling this problem with a variety of sophisticated techniques. In "How DeepMind Is Reinventing the Robot," Tom Chivers explains why this issue is so important for robots acting in the unpredictable real world. Other researchers are investigating new types of meta-learning in hopes of creating AI systems that learn how to learn and then apply that skill to any domain or task.

All these strategies may aid researchers' attempts to meet their loftiest goal: building AI with the kind of fluid intelligence that we watch our children develop. Toddlers don't need a massive amount of data to draw conclusions. They simply observe the world, create a mental model of how it works, take action, and use the results of their action to adjust that mental model. They iterate until they understand. This process is tremendously efficient and effective, and it's well beyond the capabilities of even the most advanced AI today.

Although the current level of enthusiasm has earned AI its own Gartner hype cycle, and although the funding for AI has reached an all-time high, there's scant evidence that there's a fizzle in our future. Companies around the world are adopting AI systems because they see immediate improvements to their bottom lines, and they'll never go back. It just remains to be seen whether researchers will find ways to adapt deep learning to make it more flexible and robust, or devise new approaches that haven't yet been dreamed of in the 65-year-old quest to make machines more like us.

This article appears in the October 2021 print issue as "The Turbulent Past and Uncertain Future of AI."

More:
The Turbulent Past and Uncertain Future of Artificial Intelligence - IEEE Spectrum

India's Artificial Intelligence market expected to touch $7.8 billion by 2025: Report – WION

The Artificial Intelligence (AI) market in India is set to reach $7.8 billion by 2025 -- covering hardware, software and services markets -- and growing at a CAGR (compound annual growth rate) of 20.2 per cent, a new report showed on Tuesday.

The businesses in India will accelerate the adoption of both AI-centric and AI non-centric applications for the next five years, according to the International Data Corporation (IDC).

The AI software segment will dominate the market, growing from $2.8 billion in 2020 at a CAGR of 18.1 per cent through the end of 2025.
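As a quick check on what those growth rates imply, here is the standard compound-annual-growth arithmetic in Python, using the figures quoted above; the computed values are illustrative projections, not additional numbers from the IDC report.

```python
def project(present_value, cagr, years):
    # Compound a present value forward: future = present * (1 + r) ** years.
    return present_value * (1 + cagr) ** years

# Software segment: $2.8B in 2020 growing at 18.1% CAGR for 5 years.
print(f"2025 software segment: ${project(2.8, 0.181, 5):.1f}B")  # ~ $6.4B

# Sanity check on the headline number: what 2020 base does a $7.8B
# 2025 market at 20.2% CAGR imply?  base = future / (1 + r) ** years
print(f"implied 2020 market:   ${7.8 / (1 + 0.202) ** 5:.1f}B")  # ~ $3.1B
```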

"Indian organisations plan to invest in AI to address current business scenarios across functions, such as customer service, human resources (HR), IT automation, security, recommendations, and many more," IDC India Associate Research Director, Cloud and AI, Rishu Sharma said.

"Increasing business resilience and enhancing customer retention are among the top business objectives for using AI by Indian enterprises," Sharma added.

Indian organisations cited the cloud as the preferred deployment location for their AI/ML solutions. About 51 per cent of organisations are processing transactional and social media data through AI/ML solutions in the country.

"With data being one of the most crucial components in an AI/ML project, businesses use variety of databases to handle large data volumes for making real time business decisions. Organisations must focus on getting high-quality training data for AI/ML models," AI Senior Market Analyst, Swapnil Shende said.

AI applications form the largest share of revenue for the AI software category, at more than 52 per cent in 2020.

"The major reasons for AI projects to fail includes disruptive results to current business processes and lack of follow-ups from business units," the report noted.

Read more from the original source:
Indias Artificial Intelligence market expected to touch $7.8 billion by 2025: Report - WION

Artificial Intelligence: Poised To Transform The Massive Construction Industry – Forbes

MIAMI, FLORIDA - MAY 17: Construction workers are seen as they work with steel rebar during the construction of a building on May 17, 2019 in Miami, Florida. The Trump administration announced today that within two days it will be lifting tariffs on Canadian and Mexican steel imports, nearly a year after imposing the duties. (Photo by Joe Raedle/Getty Images)

In May, Procore Technologies launched its IPO and the shares jumped by 31%. The company, which operates a leading cloud-based platform to manage construction projects, has over 800 customers and the ARR (Annual Recurring Revenue) is more than $400 million.

But if you look at the S-1 filing, there are some interesting details about the construction industry. For example, investment in technology has generally lagged (this is based on research from McKinsey): of the industries studied, only agriculture and hunting invest less.

"The construction industry has historically been comprised of fragmented project teams which used complex work processes that were executed in siloed systems," said Karthik Venkatasubramanian, who is the Global Vice President of Data Strategy and Development at Oracle Construction and Engineering. "Given the hands-on nature of construction work, the industry has traditionally relied on human experience and expertise to complete projects, and the potential benefits of adopting technologies were often overshadowed."

Yet things are starting to change. One of the biggest catalysts has been the impact of the COVID-19 pandemic, which has meant much more urgency for adopting digital solutions.

"Venture capital investment continues to flow into the space," said Lauren Weston, who is an associate at Thomvest Ventures.

But COVID-19 is just one of the factors. Some of the others include the increase in infrastructure investments from governments, the chronic labor shortages, the need for sustainable solutions, and supply-chain disruptions.

Yet traditional software is likely not to be enough. If anything, AI is poised to play a critical role in the transformation of the construction industry.

"AI can compute massive volumes of data that traditional approaches have not been able to previously," said Vamshi Rachakonda, who is the Vice President and Sales Lead for Manufacturing, Auto and Life Sciences at Capgemini Americas. "This is especially true for processing and mining unstructured data such as photos, videos, and text and converting them to insights and intelligence."

Another key to AI is that it allows for more proactive approaches. "Most of the current reports and dashboards are being used to focus on what has happened or what is happening on projects, typically after an event or task has occurred," said Venkatasubramanian. "But with AI, you can ask what might happen? This can be a total game-changer when done right, as it has the potential to help deliver projects ahead of time, improve profit margins, and reduce risks significantly."

Innovative AI Startups

An interesting startup that is leveraging the power of AI in construction is OpenSpace. The company has a 360 camera, which attaches to a hardhat, that seamlessly collects data at a job site. The OpenSpace platform then processes the images to create a digital twin of the project, which makes it easier for tracking and collaboration.

"The company also recently launched a new product called ClearSight that uses AI to overlay images of framing, drywall, paint and more to allow for efficient project progress tracking via machine vision," said Shawn Carolan, who is a Managing Partner at Menlo Ventures.

Another company that is successfully using AI for construction is Measure Square, which is a leader in measure estimating technology. Its platform manages 40,000 to 50,000 floorplans a month. With this data, its AI system is able to interpret paper-based floorplans and make them interactive.

"We have two key steps for this," said Steven Wang, who is the CEO and founder of Measure Square. "First, we use takeoff data from the plans, which helps detect the walls, doorways and so on. Then we have a sophisticated computer vision model to improve the results."

The Future

Again, AI is still in the early phases when it comes to the construction industry. But given the advances in this technology and the innovative ways to collect data, the prospects look bright for digital transformation.

"AI can help reduce or remove a very real tech barrier when working on one-off, bespoke projects," said Paul Donnelly, who is the marketing director for engineering, procurement and construction for AspenTech. "By sorting through and leveraging data from previous projects and industry standards, AI can help streamline the tech set up for each new project. This makes the use of newer tech in construction viable compared to when the tech has to be set up manually for each project."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems and Implementing AI Systems: Transform Your Business in 6 Steps.

Continue reading here:
Artificial Intelligence: Poised To Transform The Massive Construction Industry - Forbes

Meet Baylor’s expert on artificial intelligence and deep learning – Baylor University

Artificial intelligence (AI) used to be a fantasy, found only in science fiction. Today, it propels society forward in countless ways; even the phones in our pockets include multilingual translators, photo apps that recognize faces, and intelligent assistants that can understand spoken commands (thanks, Siri).

This is all made possible through the process of deep learning and Dr. Pablo Rivas, an assistant professor of computer science at Baylor, has literally written the book on the subject. (We're serious; check out Deep Learning for Beginners.)

"I first fell in love with the field of AI and the principles that can explain human intelligence 20 years ago, when it was all beginning," says Rivas. "Now, the industry is booming. And while the advances are incredible, they can also be a little disarming."

For example, AI has made it possible for individuals to receive personalized recommendations on products and services. This is concerning for those who feel devices are always listening. With that in mind, Rivas has been working with the IEEE Standards Association to study and design ethical practices for AI. Here, Rivas hopes to assure consumers that they're being treated fairly.

"AI is definitely changing society, and therefore we have to care for its responsible study and implementation," he explains.

When it comes to teaching, Rivas recognizes the competitive nature of the industry and seeks to cultivate a collaborative, encouraging learning structure.

"Unfortunately, there is a toxic culture among students and researchers in AI, and I believe our best work cannot flourish that way," says Rivas. "I personally and intentionally make an effort to mentor students beyond simply directing their immediate research projects and help them ponder and brainstorm long-term plans that can benefit their careers. This has proven to be a stress-free exchange of ideas and knowledge in an environment that shows compassion for students."

Rivas and his colleagues have several projects in the works. Recently, they've learned the National Science Foundation (NSF) will fund one of their studies using machine learning algorithms to observe different species' spectral signatures and properties. In another venture, they are exploring quantum machine learning. Whatever the project is, Rivas has one goal: to make technology smarter and safer for all.

Sic 'em, Dr. Rivas!

Go here to see the original:
Meet Baylor's expert on artificial intelligence and deep learning - Baylor University

Artificial Intelligence Myth Vs Reality: Where Do Healthcare Experts Think We Stand? – Forbes

Artificial intelligence's applicability in healthcare settings may not have lived up to corporate and investor hype yet, but AI experts believe we're still in the very early stages

The "AI in healthcare: myth versus reality" discussion has been happening for well over a decade. From AI bias and data quality issues to considerable market failures (e.g., the notorious missteps and downfall of IBM's Watson Health unit), the progress and efficacy of AI in healthcare continues to face extreme scrutiny.

John Halamka, M.D., M.S., is President of The Mayo Clinic Platform

As President of the Mayo Clinic Platform, John Halamka, M.D., M.S., is not disappointed in the least about AI's progress in healthcare. "I think of it as a maturation process," he said. "You're asking why your three-year-old isn't doing calculus. But can your three-year-old add a column of numbers? That's actually not so bad."

In an industry as complicated and high-stakes as healthcare, the implementation of artificial intelligence and machine learning comes with challenges that have created a credibility gap. Halamka and others acknowledge, and are working to address, many such challenges, including the bias and data quality issues noted above.

It's not all gloom and doom, though, especially when it comes to AI and machine learning for healthcare administration and process efficiency. For example, hospitals and health systems have successfully employed AI to improve physician workflows, optimize revenue cycle and supply chain management strategies, and improve the patient experience.

Iodine Software is one such company that's making an impact in hospital billing and administration through its AI engine, which is designed to help large health systems capture more mid-cycle revenue through clinical documentation improvement (CDI). The company's co-founder and CEO, William Chan, agrees that perceived shortcomings of AI are an overgeneralization.

"The impression that AI hasn't yet been successful is an assumption when you look primarily at the big headline applications of AI over the past 10 years. Big tech has, in many cases, thrown big money at broad and highly publicized efforts, many of which have never met their proclaimed and anticipated results," said Chan. "There are multiple examples of AI in healthcare that can be deemed successful. However, the definition of success is important, and each use case and AI application will have a different definition of success based on the problem that the 'AI' is trying to solve."

And when it comes to solving problems in clinical care delivery, AI-driven clinical decision support (CDS) solutions are another animal altogether. But for those deep in the field, who have been studying, testing and developing AI and machine learning solutions in healthcare for decades, the increase in real-world evidence (RWE) and heightened focus on responsible AI development are reason enough to be hopeful about its future.

Real World Evidence (RWE) and Clinical Effectiveness: An Exciting Time for Healthcare AI

Dr. Suchi Saria is Founder and CEO at Bayesian Health, and the John C. Malone Associate Professor at Johns Hopkins University

"Personally I think it's a very exciting time for AI in healthcare," said Suchi Saria, Ph.D, CEO and CSO at Bayesian Health, an AI-based clinical decision support platform for health systems using electronic health record (EHR) systems. "For those of us in the field, we've been seeing steady progress, including peer-reviewed studies, showing the efficacy of ideas in practice."

This spring, Bayesian Health published findings from a large, five-site study that analyzed the impact of its AI platforms sepsis model. The two-year study showed that Bayesian's sepsis module drove faster antibiotic treatment by nearly two hours. Of note, while most CDS tools historically have adoption rates in the low teens, this study, over a wide base of physicians (2000+), showed sustained adoption at 89%. Another separate, single-site study found a 14% reduction in ICU admissions and 12% reduction in ICU length of stay, which translated to a $2.5M annualized benefit for the 250 bed study site hospital.

A 2020 study from scientists at UCSF Radiology and Biomedical Imaging also showed AI's promise in improving care for those with Glioblastoma, the most common and difficult to treat form of brain cancer. Using an AI-driven "virtual biopsy" approach (beyond the scope of human abilities), UCSF is able to predict the presence of specific genetic alterations in individual patients' tumors using only an MRI. UCSF found that it was also able to accurately identify several clinically relevant genetic alterations, including potential treatment targets.

Most recently, Johns Hopkins Kimmel Cancer Center researchers found that a novel AI blood-testing technology they developed could detect lung cancer in patients. Using the DELFI approach (DNA evaluation of fragments for early interception) on 796 blood samples, researchers found that, when combined with clinical risk factor analysis, a protein biomarker, and computed tomography imaging, the technology accurately detected 94% of patients with cancer across different stages and subtypes.
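
The interesting move in that result is the combination step: fragmentation features from blood are fused with clinical, protein, and imaging signals in a single classifier. A hedged sketch of that pattern follows, with synthetic data and invented feature names; the published DELFI model and its weights are not reproduced here.

```python
# Sketch: fusing several evidence sources into one cancer detector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 796  # matches the study's sample count; the values below are synthetic

X = np.column_stack([
    rng.normal(size=n),          # cfDNA fragmentation score (DELFI-style feature)
    rng.normal(size=n),          # clinical risk factor score (e.g., smoking history)
    rng.normal(size=n),          # protein biomarker level
    rng.integers(0, 2, size=n),  # CT imaging finding (suspicious nodule yes/no)
])
y = rng.integers(0, 2, size=n)   # cancer label (synthetic)

clf = LogisticRegression().fit(X, y)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())  # ~0.5 on noise
```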

Abroad, AI is bringing precision care to cardiology, with impressive results, through HeartFlow's AI-enabled software platform, a non-invasive option to assist with the diagnosis, management and treatment of patients with heart disease. HeartFlow's technology has been shown to limit redundant non-invasive diagnostic testing, reduce patient time in hospital and face-to-face clinical contact, and streamline hospital visits, while demonstrating higher diagnostic accuracy than other non-invasive tests, with an 83% reduction in unnecessary invasive angiograms and a significant reduction in the total cost of care.

Data Quality, Availability, Labeling, and Transparency Challenges

In her dual role as director of machine learning and professor of engineering and public health at Johns Hopkins University, Saria lives and breathes AI research, analysis and development. She also deeply understands the benefits, challenges and possibilities of the marriage between AI and real-world datasets, including those in EHRs. "Bayesian makes the EHR proactive, dynamic and predictive," said Saria, "by bringing together data from diverse sources, including the EHR, to provide a clinical decision support platform that catches life-threatening disease complications early," with the sepsis module and its results being just one example of a clinical impact area.
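
Stripped to its essentials, what Saria describes is a risk score recomputed continuously over signals pooled from several feeds, with an alert when the score crosses a threshold. A minimal sketch of that pattern follows; the features, weights, and threshold are invented for illustration and bear no relation to Bayesian Health's actual models.

```python
# Toy early-warning pattern: pool multi-source signals, score, alert.
from dataclasses import dataclass

@dataclass
class PatientSnapshot:
    heart_rate: float      # from bedside telemetry
    temperature_c: float   # from the nursing flowsheet in the EHR
    wbc_count: float       # from the lab feed (10^9 cells/L)
    lactate: float         # from the lab feed (mmol/L)

def sepsis_risk(s: PatientSnapshot) -> float:
    """Hand-set toy score in [0, 1]; a real model would be learned from data."""
    score = 0.0
    score += 0.3 if s.heart_rate > 100 else 0.0
    score += 0.2 if s.temperature_c > 38.0 else 0.0
    score += 0.2 if s.wbc_count > 12.0 else 0.0
    score += 0.3 if s.lactate > 2.0 else 0.0
    return score

snapshot = PatientSnapshot(heart_rate=112, temperature_c=38.6, wbc_count=14.1, lactate=2.4)
if sepsis_risk(snapshot) >= 0.6:
    print("Early-warning alert: review patient for possible sepsis")
```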

However, as anyone working with EHR data can attest, data quality and usability remain a problem. As Saria notes, "In order to draw safe, reliable inferences, you're going to need high-quality approaches that correct for the messiness that exists in the data."
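
In practice, correcting for that messiness starts with mundane steps such as harmonizing units and imputing missing values before any model sees the data. A minimal sketch, with invented columns and a deliberately simplistic unit-conversion rule:

```python
# Sketch: clean an EHR extract before modeling (columns are hypothetical).
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

labs = pd.DataFrame({
    "glucose_mg_dl": [95.0, np.nan, 110.0, 102.0],   # one missing draw
    "temp": [98.6, 37.2, 99.1, 38.0],                # mixed Fahrenheit/Celsius
})

# Harmonize units: readings under 45 are assumed Celsius; convert to Fahrenheit.
is_celsius = labs["temp"] < 45
labs.loc[is_celsius, "temp"] = labs.loc[is_celsius, "temp"] * 9 / 5 + 32

# Fill gaps with the column median so downstream models see complete rows.
labs[:] = SimpleImputer(strategy="median").fit_transform(labs)
print(labs)
```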

"AI is only as good as the curated training set that is used to develop it," said Halamka, noting that EHR data is, by its very nature, incomplete and highly unfit for purpose. EHR data repositories may hold only a small subset of data, for example, or offer limited API functionality, and thus might not have the richness needed to develop a comprehensive algorithm.

At Mayo, an AI model for breast cancer prediction has 84 input variables; EHR data accounts for only a small portion of them. Additionally, to account for social determinants of health (SDoH), which drive 80% of an individual's health status, and for other information that is material to the model, Halamka noted that "you're going to have to go beyond traditional EHR data extraction."
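
One common way to go beyond the EHR is to join the clinical extract with community-level SDoH data keyed on geography. A hedged sketch, with invented table layouts and column names:

```python
# Sketch: enrich EHR rows with census-tract SDoH features (all data synthetic).
import pandas as pd

ehr = pd.DataFrame({
    "patient_id": [1, 2],
    "age": [61, 48],
    "census_tract": ["06075010100", "06075010200"],
})
sdoh = pd.DataFrame({
    "census_tract": ["06075010100", "06075010200"],
    "median_income": [42000, 88000],
    "food_desert": [True, False],
})

# Model-ready rows now carry clinical and community-level signals side by side.
features = ehr.merge(sdoh, on="census_tract", how="left")
print(features)
```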

EHR vendor AI adoption tactics and results have also come under scrutiny. A STAT News investigation found that algorithms from industry EHR giant Epic were delivering inaccurate or irrelevant information to hospitals about the care of seriously ill patients. Additionally, STAT found that Epic financially incentivizes hospitals and health systems to use its AI algorithms for sepsis. This is concerning for many reasons, chief among them the false predictions and other problems voiced by health system leaders who have used the algorithm, as well as the episode's contribution to AI's longstanding credibility problem. It also makes clear the industry's need for broader AI standards and oversight.

Fixing AI's Credibility Problem: Responsible AI Development

To develop a responsible AI model, and to help fix AI's credibility problem, Halamka notes that there are a number of data must-haves: a longitudinal data record, including structured and unstructured data, telemetry and images, omics, and even digital pathology. Importantly, AI developers also need to continually evaluate the purpose of the data over the course of its lifetime in order to account for and correct dataset shifts.

Left unchecked, a dataset shift can severely impact AI model development. Dataset shifts occur when the data used to train machine learning models differs from the data the model later uses to provide diagnostic, prognostic, or treatment advice. Because data and populations can and will shift, AI developers need to continually monitor, detect, and correct for these shifts, which means continuous evaluation. "Evaluation not just of performance and models, but of use," said Saria, adding that overreliance can lead to overtreatment.
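
A common way to operationalize that monitoring is to compare each feature's live distribution against its training distribution and flag statistically significant drift. A minimal sketch using a two-sample Kolmogorov-Smirnov test; the feature, distributions, and alert threshold are illustrative:

```python
# Sketch: detect dataset shift by comparing training vs. live feature distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training = {"lactate": rng.normal(1.5, 0.5, 5000)}   # what the model learned from
live = {"lactate": rng.normal(2.1, 0.5, 500)}        # what it scores today (shifted)

for feature, train_values in training.items():
    stat, p_value = ks_2samp(train_values, live[feature])
    if p_value < 0.01:  # distributions differ more than chance would explain
        print(f"Shift detected in '{feature}' (KS={stat:.2f}): retrain or recalibrate")
```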

On top of dataset quality and shifts, there are also financial obstacles to getting usable data. "While one of the most exciting domains for AI is in medicine and healthcare, labeled data is an incredibly scarce resource. And it's incredibly expensive to get it labeled," said Nishith (Nish) Khandwala, founder of BunkerHill, a startup and consortium connecting health systems to facilitate multi-institutional training, validation and deployment of experimental AI algorithms for medical imaging.

Born out of Stanford University's Artificial Intelligence in Medicine and Imaging (AIMI) Center, BunkerHill does not develop AI algorithms itself, but instead is building a platform and network of health systems that allows them to test algorithms against different datasets. This kind of validation and health-system partnership is aimed at addressing the legal and technical roadblocks to collaboration across different health systems, which BunkerHill partner UKHC calls key to successful AI development and application in radiology.
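
The core idea, scoring one frozen model against each institution's local test set, can be sketched in a few lines. This is a conceptual illustration only, not BunkerHill's platform; the sites, data, and model here are synthetic.

```python
# Sketch: multi-site validation of a single frozen model (all data synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# A model trained once at the originating institution.
X_dev, y_dev = rng.normal(size=(500, 6)), rng.integers(0, 2, 500)
model = LogisticRegression().fit(X_dev, y_dev)

# Each partner site evaluates the same model on its own held-out data.
for site in ["site_A", "site_B", "site_C"]:
    X_local, y_local = rng.normal(size=(200, 6)), rng.integers(0, 2, 200)
    auc = roc_auc_score(y_local, model.predict_proba(X_local)[:, 1])
    print(f"{site}: AUROC = {auc:.2f}")  # per-site generalization check
```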

Taking a step back, there are a number of other questions and problems that AI developers must consider when initially creating an algorithm, explained Khandwala: What does it even mean to make an algorithm for healthcare? What problem, or subset of a problem, do you start with? Another challenge is bringing AI to market, which is a moving (if not non-existent) target at the moment.

"For medical devices and novel drug development, there is a clear, established regulatory process: there are documented procedures and institutions to guide the way. That does not exist with AI," said Khandwala.

And this continues to be an issue for AI development: while drug discovery, research, development and clinical validation follow an established, methodical, research-first mindset and regulatory process, as you'd expect to see in any other scenario of invention for therapeutic benefit, this is not the case for AI, where the healthcare industry is still learning how to evaluate these types of solutions.

Standards, Reimbursement and Regulatory Oversight

Dale C. Van Demark is a Partner at McDermott Will & Emery and co-chair of its Digital Health practice

The industry is also still evaluating how to pay for AI solutions. "Figuring out how a new delivery tool actually gets traction as a commercial product can be very difficult because the healthcare payment system, and all the ways we regulate it, is a fairly unusual marketplace," said Dale Van Demark, Health Industry Advisory Practice partner at McDermott Will & Emery.

"Healthcare also operates under a highly complex and regulated set of payment systems (federal, quasi-federal, private and employer plans), with myriad experiments happening in terms of new care models for better-quality care," said Van Demark. "And within all of that, you have lots of regulatory and program integrity concerns, especially in Medicare, for example."

And anything having to do with the delivery of care to an individual is ultimately where you get the most regulation. "That's where the rubber meets the road," Van Demark says, though he doesn't see today's FDA regulatory process as particularly challenging when it comes to getting an AI product to market. The challenge is in figuring out the business of that technology in the market, and having a deep understanding of how that market works within the regulatory environment.

Jiayan Chen is a Partner at McDermott Will & Emery

Another challenging component? Getting real-world evidence. "For AI to be paid for, you need data that shows your product is making a difference," says Jiayan Chen, also a partner in the Health Industry Advisory Practice Group at McDermott Will & Emery. To do that, you need massive quantities of data to develop the tool or algorithm, but you also have to show that it works in a real-world setting.

Chen also sees issues stemming from the constant blurring of lines around the frequently changing roles of an AI developer: at what point are you engaging in product development and research, and at what point are you acting as a service provider? The answer will determine the path forward from a regulatory standpoint.

So what should an AI development process look like, and who should be involved? In terms of developing an AI certification process, similar to the early days of Meaningful Use, EHR software certifications and implementation guides, Halamka notes that there will eventually be certifying entities for AI as well, to ensure an algorithm is doing what it's supposed to do.

AI oversight should not be limited to government bodies. Starting this year, Halamka predicts, healthcare will see new public-private collaborations develop to tackle concerns about AI bias, equity and fairness, and he wants to see more oversight and higher standards for published studies: medical journals shouldn't publish the results of an algorithm model unless it carries a label saying it has been peer-reviewed and clinically validated.

At the moment, there's no governing body explaining the right way to do predictive tool evaluations. But the idea is to ultimately give the FDA better tools for avoiding common pitfalls when evaluating AI and predictive solutions, says Saria: for example, only considering workflow implications instead of looking deeper at the models themselves, or incorrectly measuring impact on health outcomes.
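
Looking deeper at the models themselves typically means reporting, at minimum, discrimination and calibration alongside workflow and outcome metrics. A brief sketch of both checks on synthetic predictions:

```python
# Sketch: two model-level checks any predictive-tool evaluation should include.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, size=1000)                          # observed outcomes
y_prob = np.clip(y_true * 0.3 + rng.random(1000) * 0.7, 0, 1)   # model's predicted risks

print("AUROC:", round(roc_auc_score(y_true, y_prob), 3))        # discrimination
observed, predicted = calibration_curve(y_true, y_prob, n_bins=5)
for obs, pred in zip(observed, predicted):                      # calibration by bin
    print(f"predicted risk {pred:.2f} -> observed rate {obs:.2f}")
```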

This is also what she is focused on in her role at Bayesian Health: evaluating the underlying technology, making it easy to use and actionable in nature, monitoring and adjusting models in real time, and making sure everything is studied and clinically validated.

"It's not rocket science; we're doing things that everyone should be doing."

View original post here:
Artificial Intelligence Myth Vs Reality: Where Do Healthcare Experts Think We Stand? - Forbes

#Artificial Intelligence in Healthcare – Sim&Cure Announces the Appointment of Dan Raffi as Chief Operating Officer and Board Member – Yahoo Finance

Paris --News Direct-- Sim&Cure

Sim&Cure, a leading medtech start-up providing a unique software solution that combines digital twin and AI technologies to secure the neurovascular treatment of cerebral aneurysms, announces the appointment of Dan Raffi as Chief Operating Officer and member of the Board of Directors.

We are excited to announce that Dan Raffi, PharmD, MBA has joined Sim&Cure as our new Chief Operating Officer on October 1st.

Dan is a veteran of the healthcare industry, with a track record of over 10 years at the executive level. He has held various leadership positions at major healthcare companies such as Allergan (AbbVie) and Medtronic, a worldwide leader in medical devices.

Dan brings with him extensive experience in leadership and in managing unique business transformations. Mathieu Sanchez, Sim&Cure CEO, states: "Bringing in a seasoned leader like Dan will ensure the next phases of our transformation and will help us reinforce our leadership in innovation using digital twin and AI in endovascular procedures."

Until recently, Dan was Vice President of Global Marketing for Medtronic Neurovascular, having previously led the Neurovascular division in Europe, the Middle East, Africa & Russia for 3 years. Over his past 7 years in Neurovascular, Dan developed unique and disruptive partnerships at the international level with governments and with many external partners such as MT2020, RapidAI, Viz.Ai and Sim&Cure.

"I've been watching Sim&Cure for the past 7 years, and I have never forgotten my first engagement with the company: there were 3 employees working in a garage (a kind of French Dream!). In 7 years, Sim&Cure has established unique computational and AI algorithms that position its products as THE cutting-edge technology in endovascular procedures. This technology is already the standard of care across the globe, as it reduces procedure time, improves safety and performance for patients, and reduces procedure costs for hospitals and healthcare systems. In the coming decade, AI will be the next revolution in the healthcare industry, and this is one of the reasons I decided to join Sim&Cure," said Dan Raffi.

In his role, Dan will collaborate with Christophe Chnafa, Chief Innovation & Strategy Officer, to define the product portfolio roadmap that reinforces Sim&Cure's leadership, to expand the company's geographic footprint, and to define the next generation of partnerships with the rest of the industry and with hospitals.

"This phase is a critical moment for Sim&Cure, and I can lean on very well-established, dynamic, agile teams. I know many of them after 7 years of collaboration, and it is obvious that these teams are ready to exceed the needs of healthcare providers and the expectations of investors. We have all the attributes to be successful and, as an entrepreneurial leader, it is a privilege to join a team with this level of expertise and agility," said Dan Raffi.

We are #HIRING

If you are interested in joining a human adventure in artificial intelligence, we are #hiring: please send an email with your resume to Pierre Puig, HR Director, at p.puig@sim-and-cure.com.

About Sim&Cure

Founded in 2014 and located in the vibrant medtech ecosystem of Montpellier, France, Sim&Cure is an AI startup focused on improving endovascular surgery. The company's first focus is the treatment of cerebral aneurysms with a proprietary software suite, Sim&Size (a CE-marked and FDA-cleared Class IIa medical device), which has already been used to treat more than 7,000 patients in 350 hospitals.

The company employs 45 people and anticipates a phase of strong growth with additional recruitment in 2022 to continue to improve patient care.

Learn more about Sim&Cure:

http://www.sim-and-cure.com

Learn more about Mathieu Sanchez:

https://www.linkedin.com/in/Mathieu-sanchez-4a764637/

Learn more about Dan Raffi:

https://www.linkedin.com/in/dan-raffi-7491171b/

Learn more about Christophe Chnafa:

https://www.linkedin.com/in/christophe-chnafa

Dan Raffi

d.raffi@sim-and-cure.com

View source version on newsdirect.com: https://newsdirect.com/news/artificial-intelligence-in-healthcare-simandcure-announces-the-appointment-of-dan-raffi-as-chief-operating-officer-and-board-member-910528937

More here:
#Artificial Intelligence in Healthcare - Sim&Cure Announces the Appointment of Dan Raffi as Chief Operating Officer and Board Member - Yahoo Finance