Category Archives: Ai

AI and the Far Right: A History We Can’t Ignore – Medium

Posted: May 4, 2020 at 10:55 pm

by Sarah Myers West

The heads of two prominent artificial intelligence firms came under public scrutiny this month for ties to far-right organizations. A report by Matt Stroud at OneZero identified the founder and CEO of surveillance firm Banjo, Damien Patton, as a former member of the Dixie Knights of the Ku Klux Klan, who was charged with a hate crime for shooting at a synagogue in 1990. The report led the Utah Attorney General's office to suspend a contract worth at least $750,000 with the company, and reportedly the firm has also lost a $20.8 million contract with the state's Department of Public Safety.

Only a few weeks earlier, Luke O'Brien at the Huffington Post uncovered that Clearview AI's founder, Cam-Hoan Ton-That, has affiliated with far-right extremists including former Breitbart writer Chuck Johnson, Pizzagate conspiracy theorist Mike Cernovich, and neo-Nazi hacker Andrew "weev" Auernheimer. Moreover, the reporters found evidence that Ton-That collaborated with Johnson and others in the development of Clearview AI's software.

This news is shocking in and of itself, revealing deep and extensive connections between the far right and AI-driven surveillance firms that are contracting with law enforcement agencies and city and state governments. But it also raises critical questions we need to be urgently asking: how is this persistent strain of right-wing and reactionary politics currently manifesting within the tech industry? What do the views held by these AI founders suggest about the technologies they are building and bringing into the world? And, most importantly, what should we do about it?

These are firms that have access to extensive data about the activities of members of the public. For example, the state of Utah gave Banjo access to real-time data streaming from the state's traffic cameras, CCTV, and 911 emergency systems, among other things, which the company combines with social media and other sensitive sources of data. It combs through these sources to, as the company describes it, "detect anomalies" in the real world.

We know that many AI systems reproduce patterns of racially biased social inequality. For example, many predictive policing systems draw on "dirty data": as my colleagues at the AI Now Institute demonstrated, in many jurisdictions law enforcement agencies are using data produced during periods of flawed, racially biased, and sometimes unlawful policing practices to train these systems. Unsurprisingly, this means that racial bias is endemic in crime-prevention analytic systems: as research by the scholar Sarah Brayne on predictive policing indicates, these data practices reinscribe existing patterns of inequality that exacerbate the over-policing and surveillance of communities of color.

But what we're seeing here is something quite different. Clearview AI appears to have been designed with explicitly racist use cases in mind: according to the Huffington Post report, Chuck Johnson posted in January 2017 that he was involved in "building algorithms to ID all the illegal immigrants for the deportation squads" and bragged about the capabilities of the facial recognition software he was working on.

Clearview AI has now signed a paid contract with Immigration and Customs Enforcement, which is using predictive analytics and facial recognition software to accelerate the detention and deportation of undocumented people in the United States, even as the pandemic unfolds around us.

They may soon become privy to even more intimate details about the everyday lives of millions of people: the company is exploring contracts with state and federal agencies to provide facial recognition tools for COVID-19 contact tracing. How do we know that there will be strong protections walling their contact tracing work off from their clients at ICE? We already knew, thanks to extensive reporting, that Clearview AI's activities are rife with abuse, even before the news about their interest in helping deportation squads.

Clearview AI and Banjo are only indicators of a much deeper and more extensive problem. We need to take a long, hard look at the fascination with the far right among some members of the tech industry, putting the politics and networks of those creating and profiting from AI systems at the heart of our analysis. And we should brace ourselves: we won't like what we find.

Silicon Valley was founded by a man whose deepest passion wasn't building semiconductors; it was eugenics. William Shockley, who won the 1956 Nobel Prize in physics for inventing the transistor, spent decades promoting racist theories about IQ differences and supporting white supremacy. Shockley led an ultimately unsuccessful campaign to persuade Stanford professors, including one of the founders of the field of AI, John McCarthy, to join him in the cause. Shockley wasn't alone: years later Jeffrey Epstein, also a proponent of eugenics research, became a key funder of MIT's Media Lab and provided $100,000 to support the work of AI researcher Marvin Minsky.

For his part, McCarthy asserted in a 2004 essay that women were less biologically predisposed to science and mathematics than men, and that it was only through technological augmentation that women could achieve parity with men. His perspective is oddly resonant with the views of James Damore, outlined in an anti-diversity memo that he circulated while at Google and that was endorsed by members of the alt-right: "the distribution of preferences and abilities of men and women differ in part due to biological causes, and these differences may explain why we don't see equal representation of women in tech and leadership." As we are discovering, Damore was far from alone.

Though there are distinctions between each of these cases, what is becoming clear is the persistence of right-wing and explicitly racist and sexist politics among powerful individuals in the field of artificial intelligence. For too long we've ignored these legacies, while the evidence of their effects mounts: an industry that is less diverse today than it was in the 1960s, and technologies that encode racist and biased assumptions, exacerbating existing forms of discrimination while rendering them much harder to identify and mitigate.

It is unacceptable for technologies made by firms that espouse or affiliate with racist practices to be used in making important decisions about our lives: our health, our safety, our security. We must ensure that these companies and the clients that hire them are held accountable for the views that they promulgate.

Read more:

AI and the Far Right: A History We Can't Ignore - Medium

Posted in Ai | Comments Off on AI and the Far Right: A History We Can’t Ignore – Medium

Jukebox AI trained to create entire songs that are actually good – Business Insider – Business Insider

Posted: at 10:55 pm

Artists may need to start competing with or embracing computer-made songs and soundtracks in the near future, if a new AI music generator is any indication of what could come next for the music industry.

Researchers at artificial intelligence lab OpenAI have released Jukebox, an open-source algorithm that can generate music, complete with lyrics, vocals, and a soundtrack. All the algorithm needs is a genre, an artist, and a snippet of lyrics, and Jukebox can create song samples that can be realistic and quite catchy.

OpenAI's music generator runs on the same sort of machine-learning technology used to create deepfakes and employed by the slew of sites that popped up in 2019 generating fake memes, fake Airbnb listings, and fake cats. Jukebox produces its AI creations using artificial neural networks that train a computer to learn from an influx of data. In this case, researchers trained a neural network using 1.2 million samples of lyrics, soundtracks and tunes from dozens of artists.
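To make the setup concrete, here is a minimal, hypothetical sketch in Python of the kind of interface the article describes: a generator that takes a genre, an artist, and a lyric snippet and autoregressively samples discrete audio codes conditioned on them. The function names, vocabulary size, and the hash-based "model" below are stand-ins of my own, not OpenAI's Jukebox code or API.

```python
import numpy as np

VOCAB = 2048   # size of the discrete audio-code vocabulary (illustrative)
DIM = 64       # width of the metadata embedding (illustrative)

rng = np.random.default_rng(0)
W_out = rng.normal(scale=0.1, size=(DIM, VOCAB))  # stand-in for a trained output layer


def embed(text: str) -> np.ndarray:
    """Deterministic hash-based vector, standing in for a learned embedding."""
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).normal(size=DIM)


def sample_codes(genre: str, artist: str, lyrics: str, n: int = 16) -> list:
    """Sample n audio codes conditioned on genre, artist, and lyrics (toy only)."""
    state = embed(genre) + embed(artist) + embed(lyrics)
    codes = []
    for _ in range(n):
        logits = state @ W_out                  # score every code given the context
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        code = int(rng.choice(VOCAB, p=probs))  # sample the next audio code
        codes.append(code)
        state = 0.9 * state + 0.1 * embed(str(code))  # fold the sampled code back in
    return codes


print(sample_codes("pop", "Katy Perry", "I kissed a girl..."))
```

A real system decodes such code sequences back into audio with a separately trained decoder; the point of the sketch is only the shape of the conditioning, not the model itself.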

Some of the results are surprisingly good. Jukebox pulls from songs across genres spanning pop, jazz, country, heavy metal, and hip-hop, and artists including Frank Sinatra, 2Pac, Katy Perry, Eagles, Beyoncé, and Kenny Rogers.

Nonetheless, there are some limitations, as Jukebox's researchers acknowledge in a blog post on OpenAI's website.

"There is a significant gap between these generations and human-created music," OpenAI researchers write. "While the generated songs show local musical coherence, follow traditional chord patterns, and can even feature impressive solos, we do not hear familiar larger musical structures such as choruses that repeat."

The technology also raises some questions about potential legal issues. Jay-Z, for example, has recently been trying to get AI-powered impersonations of himself singing Billy Joel removed from YouTube. His entertainment company, Roc Nation, cites YouTube uploads for "unlawfully" using AI to "impersonate" Jay-Z's voice, according to The Verge.

Check out some of the best samples Jukebox was able to create, with some assistance from OpenAI researchers. You might notice some of the songs mirror lyrics you're already familiar with.

See more here:

Jukebox AI trained to create entire songs that are actually good - Business Insider - Business Insider

Posted in Ai | Comments Off on Jukebox AI trained to create entire songs that are actually good – Business Insider – Business Insider

A critical review of Star Wars AI – The Next Web

Posted: at 10:55 pm

Spoiler Alert! This article has spoilers for just about the entire Star Wars universe. Read at your own risk!

When it comes to fictional portrayals of artificial intelligence technology, the Star Trek universe stands head and shoulders above all others. Series creator Gene Roddenberry's vision for the far future seems just as prescient today, in the era of advanced deep learning, as it did in the 1960s when he unveiled it. Unfortunately, this article is about the AI in Star Wars.

You know, Star Wars, the media franchise where an evil military empire hellbent on taking over the galaxy deploys, aboard its Death Stars no less, tiny utility robots that squeal in fear when they see a big furry monster:

Before I go off the rails, I should point out that I'm a lightsaber-wielding Star Wars fanatic. I don't have a problem with the droids themselves or any specific AI tech in the Star Wars universe. My beef is with the lazy, inconsistent way it's been represented.

The aforementioned droid from the video above is an MSE-6 made by the fictional Rebexan Columni, and I actually reviewed the Earth equivalent. It was called the Anki Vector, and it was adorable:

Vector had built-in Alexa support, it could take your picture, and it even had the ability to explore autonomously. It was like a little robot hamster. It even cooed when you stroked its back. It was also expensive and useless. Compared to a phone or smart speaker, its connected features functioned poorly. The only thing it did well was look cute. And that's because Anki brought in Pixar engineers to design its expressions. Anki is no longer in business.

I bring this up because Rebexan Columni also went out of business in the fictional Star Wars universe. Instead of designing the MSE-6 to be useful, it tried to make it like a friendly pet. Apparently the company execs figured the marketing team would win consumers over. When it tanked and the company went bankrupt, they offloaded all they could to the Empire.

But you probably didn't know any of this. I'm a super-fan and even I had to look it all up (shout out to Wookieepedia). I've spent my entire life thinking the Empire was stupid enough to create their own low-IQ utility droids with AI personalities.

However, much like that time when Han Solo shot Greedo but then George Lucas decided later that Han Solo wouldn't do that because he's really a decent guy deep down inside, so he edited the movie 20 years after it'd been released to make it look like Greedo shot first, the MSE-6 backstory was probably retconned into existence to explain a storytelling goof-up.

Because, quite obviously, there are some incredibly advanced artificial intelligence systems in Star Wars. Those Imperial officers aren't coming up with thousands of firing solutions on their own during the giant battle at the end of Episode 9, and don't even get me started on how incredible the AI factoring warp jumps for all these ships must be. I'm assuming it takes quantum computers to pull off those algorithms.

And that brings us to the other droids. From the dumb-as-rocks B1 battle droids, which are intentionally stupid so they make good soldiers, to fan favorites R2D2, C3PO, and BB-8. Droid lore tells us that, by and large, droid intelligence is memory-based. Apparently droids typically begin their lives as subservient robots that, as they learn and gain experience, eventually become sentient creatures with emotions, goals, and, as the French would say, a raison de vivre.

This, apparently, is typically considered a bad thing. So standard practice is to wipe your droid's memory every so often. Can you imagine? Every two or three months you have to delete all your Google contacts or Spotify playlists just because your email's getting uppity? That's a crime against both form and function.

But then there's the Auto-fighters. These are full-on autonomous TIE fighters capable of combat targeting and maneuvers. It's one thing to teach a car or airplane to drive itself or a droid to navigate, but dogfighting in both space and atmosphere is an altogether different thing. Of all the AI in Star Wars, whichever one controls the Auto-fighters is among the most impressive.

I have to wonder if Auto-fighters eventually gain memories and need to be wiped as well. Do Auto-fighters ever metaphorically lay down their lasers and go rogue by drifting off into space to spend the rest of their existence observing the beauty of the cosmos in quiet, penitent reflection?

That's the problem with AI in Star Wars. You just never know what's what. Despite the fact that the denizens of the George Lucas universe share space with what appears to be hundreds or thousands of different species of sentient, intelligent aliens, it still seems as though the general design factor for robots centers around companionship.

Ultimately, when it comes to AI, the Star Wars universe is a cruel one. We know its technologists are capable of creating and running rational, non-sentient AI models. Yet for some reason they're stuck in a paradigm where the living must wipe their robot companions' memories, their very records of friendships, every so often just for routine maintenance.

While you're celebrating this May the Fourth, don't forget to pour out a little Gamorrean Ale for all the billions of droids who reached the pinnacle of self-awareness only to have their minds wiped because they started acting erratically.

More here:

A critical review of Star Wars AI - The Next Web

Posted in Ai | Comments Off on A critical review of Star Wars AI – The Next Web

How Should Your Business Get Started With AI? Begin With Why – Forbes

Posted: at 10:55 pm

Unless you have been living under a rock for the last 10 years, it's highly likely you've heard something about artificial intelligence.

This something might be along the lines of high-profile and exciting new innovations such as self-driving cars, computers capable of diagnosing cancer or automated customer support. Otherwise, it may be in connection to terms such as Industry 4.0 and the Internet of Things, terms intended to encapsulate how innovation is driving a change in the interactions and relationship between humans and technology and what an increasingly AI-powered future might look like as a result.

For business leaders, AI is a learning curve they will have to begin climbing sooner rather than later to remain competitive and relevant in an increasingly AI-driven future. The challenge faced by many business leaders isn't a lack of awareness of AI, accepting the changes AI is driving in their industry, or acknowledging the opportunities. Rather, the challenge is a question: where and how do I begin?

When confronting any new challenge, our instinct is often to focus on the problem and learning more about it. Thus, the answer to the question rapidly evolves into more questions, such as: what is AI, how does it work, what are the use cases, and how can my business benefit from AI?

Whilst these are great questions with exciting answers, focusing on the answers to them too early in the AI journey can lead down a rabbit hole. AI is a highly complex and rapidly evolving field with new technologies and applications emerging virtually every day: what AI is and can do today is not likely to reflect what it is capable of achieving tomorrow.

The rapid growth of AI means that starting the journey by learning about AI represents a double-edged sword for business leaders. On one hand, the rate of innovation makes AI appear increasingly complex and nebulous, with new capabilities and possibilities constantly emerging, creating a feeling of never quite understanding enough and fostering a "let's wait and see" mentality. This makes it tempting and easy to delay decision making and taking action in the hope that innovation will slow, the learning curve will flatten, any kinks will have been worked out, and AI can be neatly packaged, allowing businesses to simply plug and play to reap the benefits.

On the other, history often isn't kind to companies that read the writing on the wall and acted too late. Which puts us back at square one: where and how to begin?

AI is an exciting field and I highly encourage you to ask all the questions mentioned previously and more. But for the reasons presented above, beginning your journey by trying to wrap your head around the wonderful world of opportunity afforded by AI isn't necessarily the best first step.

In his book Start With Why, Simon Sinek explains that asking why is a highly effective approach to creating a shared understanding, vision, purpose and goal for a social movement, idea or business. When it comes to AI, asking why shifts the focus from trying to comprehend all the unknowns (the different types of technology, how they work and the potential benefits) to understanding the fundamental motivations that drive your business to explore AI in the first place.

Getting to the heart of your business's why can be a journey in and of itself, but it's vital preparation that establishes the underlying values and principles that will shape and influence your AI strategy. All future discussions, activities and expectations should be clearly linked to these core principles, and doing so is essential to orienting your business on its AI journey. In the stormy seas of AI, your why is the compass that will help you avoid distractions or becoming side-tracked and keep you heading true north. Realising any kind of value through AI will be significantly hindered without first determining the challenges and opportunities your business must address in order to thrive and defining clear objectives that will enable it to do so.

This may all seem rather obvious, but it's worth reaffirming. AI is exciting, complex and, en masse, poorly understood: a combination that can often result in people wasting significant time, energy and even money chasing their tails, trying to understand what AI is and how to extract value from it without understanding what they really want to achieve with it.

Having a clear why will help you define goals and develop a strategy to achieve them. Your why will also help foster resilience. The fact is that AI does come with a learning curve, and developing a new AI solution or integrating an existing solution into your business operations is unlikely to be simple, straightforward or entirely pain-free. AI is not a short-term play but part of a long-term game: being clear on the fundamental business challenges it addresses will help you make smarter decisions from the start and maintain momentum when the going gets tough and the challenge seems insurmountable.

Go here to see the original:

How Should Your Business Get Started With AI? Begin With Why - Forbes

Posted in Ai | Comments Off on How Should Your Business Get Started With AI? Begin With Why – Forbes

Machine AI Is On The Race to Overtake Human AI – Analytics Insight

Posted: at 10:55 pm

Time and again, we have faced equal parts excitement and dilemma over introducing artificial intelligence into the military. Even the Department of Defense (DoD) of the USA is caught up in the same predicament. However, the recent findings by the Defense Intelligence Agency (DIA) may finally have a solution to this problem. Beyond that, the study also speaks to who makes the better judge, human or AI, when analyzing enemy activity.

Throughout history, humans have been perceived as the experts at comprehending and deducing a situation, even in comparison to AI. But this research experiment by the DIA shows that AI and humans have different risk tolerances when data is scarce. AI can be more cautious about drawing conclusions when data is inadequate. The early results show how machine and human analysts fare at critical data-driven decision making, and how they match up in vital national security work.

In May 2019, the DIA announced the Machine-Assisted Analytic Rapid-Repository System (MARS) program. The mission was to reframe the agency's understanding of data centers and support the department's development of AI in the future. The system was designed to engage users from early development, reduce risk as national security challenges or priorities change, and improve continuously.

Speaking at an April 27 "National Security Powered by AI" webinar, Terry Busch, division chief of integrated analysis and methodologies within the Directorate for Analysis at DIA and technical director of MARS, said that earlier this year his team set up a test between humans and AI, asking both to discern whether a ship was in the United States based on a certain amount of information. "Four analysts came up with four methodologies, and the machine came up with two different methodologies, and that was cool," he said. "They all agreed that this particular ship was in the United States."

The first test results were positive: both the AI and the human analysts made identical observations based on the dataset provided by the Automatic Identification System (AIS) feed. The second stage, however, produced a divergence. The team disconnected AIS, the worldwide ship tracker, with the objective of identifying how the loss affected the confidence levels of the analytic methods. The procedure was essential to understanding what goes into the algorithms, how missing data affects them, and by what magnitude.

And the output was surprising. After the removal of the information source, both the machine and the humans were left with access to common source materials: social media, other open sources, and references to the ship being in the United States. While the machine's confidence level dropped, the human-fed algorithms ended up coming off as pretty overconfident. Ironically, both systems deemed themselves accurate.
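As a toy illustration of why a calibrated system's confidence should fall when a strong source like AIS disappears, consider combining independent likelihood ratios for the claim "the ship is in the United States." The numbers below are invented for the example; they are not DIA's data or method.

```python
def posterior_ship_in_us(likelihood_ratios, prior_odds=1.0):
    """Combine per-source likelihood ratios into a posterior probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr          # naive Bayes style: sources treated as independent
    return odds / (1.0 + odds)


# With the AIS feed (a strong source) plus weaker open-source hints:
with_ais = posterior_ship_in_us([20.0, 1.5, 1.2])    # AIS, social media, port mention
# After AIS is disconnected, only the weak sources remain:
without_ais = posterior_ship_in_us([1.5, 1.2])

print(f"confidence with AIS:    {with_ais:.2f}")     # ~0.97
print(f"confidence without AIS: {without_ais:.2f}")  # ~0.64
```

A system that keeps reporting roughly 0.97 after losing the AIS feed is exhibiting exactly the overconfidence the experiment surfaced.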

This experiment highlights how military leaders should calibrate their reliance on AI in decision-driven situations. While it does not imply that defense intelligence work should be handed over to software, it does emphasize the need to build insights in data-deficient scenarios. That also means teaching analysts to become data literate, so they understand things like confidence intervals and other statistical terms. The chief concerns with machine-based AI were bias and retraining on its own errors. Addressing these issues can help foster human and machine analysis into a collaborative and complementary platform.

Busch explains, "The data is currently outpacing the tradecraft or the algorithmic work that we're doing. And we're focusing on getting the data ready... We've lifted places where the expert is the arbiter of what is accurate."

More:

Machine AI Is On The Race to Overtake Human AI - Analytics Insight

Posted in Ai | Comments Off on Machine AI Is On The Race to Overtake Human AI – Analytics Insight

The impact of artificial intelligence on intelligence analysis – Reuters

Posted: at 10:55 pm

In the last decade, artificial intelligence (AI) has progressed from near-science fiction to common reality across a range of business applications. In intelligence analysis, AI is already being deployed to label imagery and sort through vast troves of data, helping humans see the signal in the noise. But what the intelligence community is now doing with AI is only a glimpse of what is to come. The future will see smartly deployed AI supercharging analysts' ability to extract value from information.

Exploring new possibilities

We expect several new tasks for AI, which will likely fall into one of these three categories:

Delivering new models. The rapid pace of modern decision-making is among the biggest challenges leaders face. AI can add value by helping provide new ways to more quickly and effectively deliver information to decision-makers. Our model suggests that by adopting AI at scale, analysts can spend up to 39 percent more time advising decision-makers.

Developing people. Analysts need to keep abreast of new technologies, new services, and new happenings across the globe, not just in annual trainings but continuously. AI could help bring continuous learning to the widest scale possible by recommending courseware based on analysts' work.

Maintaining the tech itself. Beyond just following up on AI-generated leads, organizations will likely also need to maintain AI tools and to validate their outputs so that analysts can have confidence when using them. Much of this validation can be performed as AI tools are designed or training data is selected.

Avoiding pitfalls

Intelligence organizations must be clear about their priorities and how AI fits within their overall strategy. Having clarity about the goals of an AI tool can also help leaders communicate their vision for AI to the workforce and alleviate feelings of mistrust or uncertainty about how the tools will be used.

Intelligence organizations should also avoid investing in empty technology: using AI without having access to the data it needs to be successful.

Survey results suggest that analysts are most skeptical of AI, compared to technical staff, management, or executives. To overcome this skepticism, management will need to focus on educating the workforce and reconfiguring business processes to seamlessly integrate the tools into workflows. Also, having an interface that allowed the analyst to easily scan the data underpinning a simulated outcome or view a representation of how the model came to its conclusion would go a long way toward that analyst incorporating the technology as part and parcel of his or her workflow.

While having a workforce that lacks confidence in AI's outputs can be a problem, the opposite may also turn out to be a critical challenge. With so much data at their disposal, analysts could start implicitly trusting AI, which can be quite dangerous.

But there are promising ways in which AI could help analysts combat human cognitive limitations. Such tools would be very good at continuously conducting key assumptions checks, analyses of competing hypotheses, and quality-of-information checks.
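For example, an automated analysis-of-competing-hypotheses (ACH) check can be sketched as a small scoring routine: each piece of evidence is rated for consistency with each hypothesis, and the hypothesis with the fewest inconsistencies is favored. The hypotheses, evidence scores, and scale below are invented for illustration, not drawn from any agency's tooling.

```python
# Scores per evidence item: -1 = inconsistent with the hypothesis,
#                            0 = neutral, +1 = consistent.
evidence_scores = {
    "H1: routine exercise":   [1, 0, -1, 1],
    "H2: force relocation":   [1, 1, 1, 0],
    "H3: imminent operation": [-1, 1, 1, -1],
}


def ach_ranking(scores):
    """Rank hypotheses by how much evidence contradicts them (fewest first)."""
    inconsistency = {h: sum(1 for s in vals if s < 0) for h, vals in scores.items()}
    return sorted(inconsistency.items(), key=lambda item: item[1])


for hypothesis, contradictions in ach_ranking(evidence_scores):
    print(f"{hypothesis}: {contradictions} inconsistent item(s)")
```

The value of automating such a check is less the arithmetic than the discipline: it can be rerun continuously as new evidence arrives.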

How to get started today

Across a government agency or organization, successful adoption at scale would require leaders to harmonize strategy, organizational culture, and business processes. If any of those efforts are misaligned, AI tools could be rejected or could fail to create the desired value. Leaders need to be upfront about their goals for AI projects, ensure those goals support overall strategy, and pass that guidance on to technology designers and managers to ensure it is worked into the tools and business processes. Establishing a clear AI strategy can also help organizations frame decisions about what infrastructure and partners are necessary to access the right AI tools for an organization.

Tackling some of the significant nonanalytical challenges analyst teams face could be a palatable way to introduce AI to analysts and build their confidence in it. Today, analysts are inundated with a variety of tasks, each of which demands different skills, background knowledge, and the ability to communicate with decision-makers. For any manager, assigning these tasks across a team of analysts without overloading any one individual or delaying key products can be daunting. AI could help pair the right analyst to the right task so that analysts can work to their strengths more often, allowing work to get done better and more quickly than before.
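One way to read that pairing problem is as a classic assignment problem, which off-the-shelf solvers handle directly. The sketch below uses SciPy's Hungarian-algorithm implementation; the analysts, tasks, and skill scores are invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

analysts = ["Analyst A", "Analyst B", "Analyst C"]
tasks = ["imagery triage", "briefing draft", "source vetting"]

# skill[i][j]: how well analyst i is suited to task j (higher is better).
skill = np.array([
    [0.9, 0.4, 0.6],
    [0.3, 0.8, 0.5],
    [0.5, 0.6, 0.9],
])

# linear_sum_assignment minimizes total cost, so negate the skill matrix.
rows, cols = linear_sum_assignment(-skill)
for i, j in zip(rows, cols):
    print(f"{analysts[i]} -> {tasks[j]} (fit {skill[i, j]:.1f})")
```

In practice the scores would come from a model of past performance rather than a hand-built matrix, but the optimization step stays the same.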

AI is not coming to intelligence work; it is already there. But the long-term success of AI in the intelligence community depends as much on how the workforce is prepared to receive and use it as any of the 1s and 0s that make it work.

Read this article:

The impact of artificial intelligence on intelligence analysis - Reuters

Posted in Ai | Comments Off on The impact of artificial intelligence on intelligence analysis – Reuters

The Impending Artificial Intelligence Revolution in Healthcare – Op-Ed – HIT Consultant

Posted: at 10:55 pm

Harjinder Sandhu, CEO of Saykara

For at least a decade, healthcare luminaries have been predicting the coming AI revolution. In other fields, AI has evolved beyond the hype and has begun to showcase real and transformative applications: autonomous vehicles, fraud detection, personalized shopping, virtual assistants, and so on. The list is long and impressive. But in healthcare, despite the expectations and the tremendous potential in improving the delivery of care, the AI revolution is just getting started. There have been definite advancements in areas such as diagnostic imaging, logistics within healthcare, and speech recognition for documentation. Still, the realm of AI technologies that impact the cost and quality of patient care continues to be rather narrow today.

Why has AI been slow in delivering change in the care processes of healthcare? With a wealth of new AI algorithms and computing power ready to take on new challenges, the limiting factor in AI's successful application has been the availability of meaningful data sets to train on. This is surprising to many, given that EHRs were supposed to have solved the data barrier.

The promise of EHRs was that they would create a wealth of actionable data that could be leveraged for better patient care. Unfortunately, this promise never fully materialized. Most of the interesting information that can be captured in the course of patient care either is not captured at all or is captured minimally or inconsistently. Often, just enough information is recorded in the EHR to support billing, and it is in plain-text (not actionable) form. Worse, documentation requirements have had a serious impact on physicians, to whom it ultimately fell to input much of that data. Burnout and job dissatisfaction among physicians have become endemic.

EHRs didn't create the documentation challenge. But using an EHR in the exam room can significantly detract from patient care. Speech recognition has come a long way, although it hasn't changed the fundamental dynamic of screen interaction that takes attention away from the patient. Indeed, using speech recognition, physicians stare at the screen even more intently, as they must be mindful of mistakes that the speech recognition system may generate.

Having been involved in the advancement of speech recognition in the healthcare domain and been witness to its successes and failures, I continue to believe that the next stage in the evolution of this technology is to free physicians from the tyranny of the screen: to evolve from speech recognition systems to AI-based virtual scribes that listen to doctor-patient conversations, create notes, and enter orders.

Using a human scribe solves a significant part of the problem for physicians: scribes relieve the physician of having to enter data manually. For many physicians, a scribe has allowed them to reclaim their work lives (they can focus on patients rather than computers) as well as their personal lives (fewer evening hours completing patient notes). However, the inherent cost of both training and then employing a scribe has led to many efforts to build digital counterparts: AI-based scribes that can replicate the work of a human scribe.

Building an AI scribe is hard. It requires a substantially more sophisticated system than the current generation of speech recognition systems. Interpreting natural language conversation is one of the next major frontiers for AI in any domain. The current generation of virtual assistants, like Alexa and Siri, simplify the challenge by putting boundaries on speech, forcing a user, for example, to express a single idea at a time, within a few seconds and within the boundaries of a list of skills that these systems know how to interpret.

In contrast, an AI system that is listening to doctor-patient conversations must deal with the complexity of human speech and narrative. A patient visit could last five minutes or an hour, the speech involves at least two parties (the doctor and the patient), and a patient's visit can meander to irrelevant details and branches that don't necessarily contribute to a physician making their diagnosis.

As a result of the complexity of conversational speech, it is still quite early for fully autonomous AI scribes. In the meantime, augmented AI scribes, AI systems augmented by human power, are filling in the gaps of AI competency and allowing these systems to succeed while incrementally chipping away at the goal of making these systems fully autonomous. These systems are beginning to do more than simply relieve doctors of the burden of documentation, though that is obviously important. The real transformative impact will be from capturing a comprehensive set of data about a patient journey in a structured and consistent fashion and putting that into the medical records, thereby building a base for all other AI applications to come.

About Harjinder Sandhu

Harjinder Sandhu is the CEO of Saykara, a company leveraging the power and simplicity of the human voice to make delivering great care easier while streamlining physician workflow.

Read more from the original source:

The Impending Artificial Intelligence Revolution in Healthcare - Op-Ed - HIT Consultant

Posted in Ai | Comments Off on The Impending Artificial Intelligence Revolution in Healthcare – Op-Ed – HIT Consultant

Discover how AI can help with early detection of glaucoma – Health Europa

Posted: at 10:55 pm

Glaucoma, the leading global cause of irreversible blindness, currently affects over 60 million people, a number predicted to double by 2040 as the global population ages. The new test, which is combined with artificial intelligence (AI) technology, could help accelerate clinical trials and may eventually be used in detection and diagnostics.

Lead researcher Professor Francesca Cordeiro, UCL Institute of Ophthalmology, Imperial College London, and Western Eye Hospital Imperial College Healthcare NHS Trust, said: "We have developed a quick, automated and highly sensitive way to identify which people with glaucoma are at risk of rapid progression to blindness."

The clinical trial has been sponsored by UCL, funded by Wellcome and published in the Expert Review of Molecular Diagnostics.

The test, called DARC (Detection of Apoptosing Retinal Cells), involves injecting a fluorescent dye into the bloodstream that attaches to retinal cells and illuminates the cells that are in the process of apoptosis, a form of programmed cell death.

Currently, a major challenge with detecting eye diseases is that specialists often disagree when viewing the same scans, so the researchers have incorporated an AI algorithm into their method.

The AI was initially trained by analysing the retinal scans of the healthy control subjects. The AI was then tested on the glaucoma patients. Those taking part in the AI study were followed up 18 months after the main trial period to see whether their eye health had deteriorated.

The researchers were able to accurately predict progressive glaucomatous damage 18 months before it was seen with the current gold-standard OCT retinal imaging technology, as every patient with a DARC count over a certain threshold was found to have progressive glaucoma at follow-up.
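The prediction rule described here is essentially a threshold test on the DARC count, whose performance can be checked against follow-up outcomes. The sketch below shows that bookkeeping with made-up counts, outcomes, and cutoff; it is not the trial's data or code.

```python
def threshold_performance(darc_counts, progressed, cutoff):
    """Sensitivity and specificity of the rule 'flag the eye if count > cutoff'."""
    tp = sum(1 for c, p in zip(darc_counts, progressed) if c > cutoff and p)
    fn = sum(1 for c, p in zip(darc_counts, progressed) if c <= cutoff and p)
    tn = sum(1 for c, p in zip(darc_counts, progressed) if c <= cutoff and not p)
    fp = sum(1 for c, p in zip(darc_counts, progressed) if c > cutoff and not p)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity


counts = [12, 30, 7, 45, 28, 9, 38, 15]   # illustrative DARC counts, one per eye
progressed = [0, 1, 0, 1, 0, 0, 1, 0]     # 1 = progressive glaucoma at follow-up
print(threshold_performance(counts, progressed, cutoff=25))  # (1.0, 0.8)
```

The study's claim that every eye above the threshold progressed corresponds to perfect sensitivity for that cutoff in its cohort; specificity is the separate question of how many stable eyes the rule also flags.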

Professor Francesca Cordeiro said that biomarkers are urgently needed for glaucoma to speed up clinical trials, as the disease progresses slowly and it can take years for symptoms to change: "These results are very promising as they show DARC could be used as a biomarker when combined with the AI-aided algorithm."

"What is really exciting, and actually unusual when looking at biological markers, is that there was a clear DARC count threshold above which all glaucoma eyes went on to progress."

The team is also applying the test to rapidly detect cell damage caused by a number of other conditions, such as neurodegenerative conditions that involve the loss of nerve cells, including age-related macular degeneration, multiple sclerosis, and dementia, as well as testing people with lung disease. The team hopes that by the end of this year the test may help to assess people with breathing difficulties from COVID-19.

First author of the study Dr Eduardo Normando, Imperial College London and Western Eye Hospital Imperial College Healthcare NHS Trust, said: "Being able to diagnose glaucoma at an earlier stage, and predict its course of progression, could help people to maintain their sight, as treatment is most successful if provided at an early stage of the disease."

"After further research in longitudinal studies, we hope that our test could have widespread clinical applications for glaucoma and other conditions."

The AI-supported technology has recently been approved by both the UK's Medicines and Healthcare products Regulatory Agency and the USA's Food and Drug Administration as an exploratory endpoint for testing a new glaucoma drug in a clinical trial.

Follow this link:

Discover how AI can help with early detection of glaucoma - Health Europa

Posted in Ai | Comments Off on Discover how AI can help with early detection of glaucoma – Health Europa

OSS to Host AI at the Edge Webinar with Leaders from NVIDIA and Marvell – AiThority

Posted: at 10:55 pm

One Stop Systems, Inc., a leader in specialized high-performance edge computing, will host a webinar on how to bring supercomputing performance to data at the edge for AI applications with leaders from NVIDIA and Marvell.

The panel will be moderated by OSS chief sales and marketing officer, Jim Ison. He will be joined by Ying Yin Shih, director of product management at NVIDIA and Larry Wikelius, vice president, ecosystem and solutions at Marvell.

The webinar will discuss solving hard problems in defense, aerospace, autonomous vehicles, security, personalized medicine and more by leveraging massive NVIDIA enabled AI solutions designed for the unique size, power and rugged requirements of the edge.

A new computing paradigm is emerging that puts computing and storage resources for AI and HPC workflows not in the datacenter but on the edge near the data source. Applications continue to emerge for this new paradigm in diverse areas, including autonomous vehicles, precision medicine, battlefield command and control, industrial automation, and media and entertainment.

The common elements of these solutions are high data rate acquisition, high-speed and low-latency storage, and efficient, high-performance compute analytics, all configured to meet the unique environmental conditions of edge deployments.

This webinar will explain the challenges and solutions for meeting these requirements by describing real-world use cases being developed and deployed today. OSS will present use cases of its edge-focused "AI on the Fly" products currently deployed in intelligence, surveillance and reconnaissance (ISR), genomic analysis, location-based entertainment, and autonomous driving.

NVIDIA and Marvell will describe their collaboration to support the NVIDIA CUDA and CUDA-X software platforms on the high-performance, low-power Arm architecture with Marvell's ThunderX2 server processor. The combination provides a powerful tool in the expanding set of solutions for edge-focused AI infrastructure. The panel will discuss how CUDA for Arm provides an effective "AI on the Fly" building block for edge-oriented solutions where high performance, memory bandwidth and low power are essential.

The rest is here:

OSS to Host AI at the Edge Webinar with Leaders from NVIDIA and Marvell - AiThority

Posted in Ai | Comments Off on OSS to Host AI at the Edge Webinar with Leaders from NVIDIA and Marvell – AiThority

How AI is changing the customer experience – MIT Technology Review

Posted: at 10:55 pm

AI is rapidly transforming the way that companies interact with their customers. MIT Technology Review Insights' survey of 1,004 business leaders, "The global AI agenda," found that customer service is the most active department for AI deployment today. By 2022, it will remain the leading area of AI use in companies (say 73% of respondents), followed by sales and marketing (59%), a part of the business that just a third of surveyed executives had tapped into as of 2019.

In recent years, companies have invested in customer service AI primarily to improve efficiency, by decreasing call processing and complaint resolution times. Organizations known as leaders in the customer experience field have also looked toward AI to increase intimacy: to bring a deeper level of customer understanding, drive customization, and create personalized journeys.

Genesys, a software company with solutions for contact centers, voice, chat, and messaging, works with thousands of organizations all over the world. The goal across each one of these 70 billion annual interactions, says CEO Tony Bates, is "to delight someone in the moment and create an end-to-end experience that makes all of us as individuals feel unique."

Experience is the ultimate differentiator, he says, and one that is leveling the playing field between larger, traditional businesses and new, tech-driven market entrants: product, pricing, and branding levers are ineffective without an experience that feels truly personalized. "Every time I interact with a business, I should feel better after that interaction than I felt before."

In sales and marketing processes, part of the personalization involves predictive engagement: knowing when and how to interact with the customer. This depends on who the customer is, what stage of the buying cycle they are at, what they are buying, and their personal preferences for communication. It also requires intelligence in understanding where the customer is getting stuck and helping them navigate those points.
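A hedged sketch of what predictive engagement can look like in code: a few session signals (who the customer is, where they are in the buying cycle, where they seem stuck) are folded into a score that decides whether and how to reach out. The features, weights, and threshold are invented for illustration and are not Genesys's model.

```python
from dataclasses import dataclass


@dataclass
class SessionSignals:
    buying_stage: float      # 0.0 = browsing .. 1.0 = checkout
    minutes_stalled: float   # time stuck on the current step
    cart_value: float        # value of items in the cart
    prefers_chat: bool       # stated or inferred channel preference


def engagement_decision(s: SessionSignals, threshold: float = 0.6):
    """Return (should_engage, channel) for one live session (toy scoring model)."""
    score = (0.4 * s.buying_stage
             + 0.3 * min(s.minutes_stalled / 10.0, 1.0)
             + 0.3 * min(s.cart_value / 500.0, 1.0))
    channel = "chat" if s.prefers_chat else "callback"
    return score >= threshold, channel


print(engagement_decision(SessionSignals(0.8, 6.0, 320.0, True)))  # (True, 'chat')
```

In production, the hand-tuned weights would typically be replaced by a model trained on past interactions, but the decision structure is the same.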

Marketing segmentation models of the past will be subject to increasing crossover, as older generations become more digitally skilled. "The idea that you can create personas, and then use them to target or serve someone, is over in my opinion," says Bates. "The best place to learn about someone is at the business's front door [website or call center] and not at the back door, like a CRM or database."

The survey data shows that for industries with large customer bases such as travel and hospitality, consumer goods and retail, and IT and telecommunications, customer care and personalization of products and services are among the most important AI use cases. In the travel and hospitality sector, nearly two-thirds of respondents cite customer care as the leading application.

The goal of a personalized approach should be to deliver a service that empathizes with the customer. For customer service organizations measured on efficiency metrics, a change in mindset will be required: some customers consider a 30-minute phone conversation a truly great experience. "But on the flip side, I should be able to use AI to offset that with quick transactions or even use conversational AI and bots to work on the efficiency side," says Bates.

With vast transaction data sets available, Genesys is exploring how they could be used to improve experiences in the future. "We do think that there is a need to share information across these large data sets," says Bates. "If we can do this in an anonymized way, in a safe and secure way, we can continue to make much more personalized experiences." This would allow companies to join different parts of a customer journey together to create more interconnected experiences.

This isn't a straightforward transition for most organizations, as the majority of businesses are structured in silos: "they haven't even been sharing the data they do have," he adds. Another requirement is for technology vendors to work more closely together, enabling their enterprise customers to deliver great experiences. To help build this connectivity, Genesys is part of industry alliances like CIM (Cloud Information Model), with tech leaders Amazon Web Services and Salesforce. CIM aims to provide common standards and source code to make it easier for organizations to connect data across multiple cloud platforms and disparate systems, connecting technologies such as point-of-sale systems, digital marketing platforms, contact centers, CRM systems, and more.

Data sharing has the potential to unlock new value for many industries. In the public sector, the concept of open data is well known. Publicly available data sets on transport, jobs and the economy, security, and health, among many others, allow developers to create new tools and services, thus solving community problems. In the private sector there are also emerging examples of data sharing, such as logistics partners sharing data to increase supply chain visibility, telecommunications companies sharing data with banks in cases of suspected fraud, and pharmaceutical companies sharing drug research data that they can each use to train AI algorithms.

In the future, companies might also consider sharing data with organizations in their own or adjacent industries, if it were to lead to supply chain efficiencies, improved product development, or enhanced customer experiences, according to the MIT Technology Review Insights survey. Of the 11 industries covered in the study, respondents from the consumer goods and retail sector proved the most enthusiastic about data sharing, with nearly a quarter describing themselves as very willing to share data, and a further 57% being somewhat willing.

Other industries can learn from financial services, says Bates, where regulators have given consumers greater control over their data to provide portability between banks, fintechs, and other players, in order to access a wider range of services. "I think the next big wave is that notion of a digital profile where you and I can control what we do and don't want to share... I would be willing to share a little bit more if I got a much better experience."

Read more from the original source:

How AI is changing the customer experience - MIT Technology Review

Posted in Ai | Comments Off on How AI is changing the customer experience – MIT Technology Review
