
Category Archives: Ai

Innovation, Artificial Intelligence, and the Bedside Nurse – DailyNurse

Posted: September 27, 2019 at 7:49 am


Nurses have always played a critical role at the bedside while bearing witness to numerous changes in technology. In the past 50 years alone, the advancements would seem unfathomable to nurses of the not-so-distant past: test-tube babies, medical lasers, the artificial heart, genome mapping, CT and MRI imaging, angioplasty, dialysis, endoscopic procedures, bionic prosthetics, the internet and health information technology (IT), the electronic health record (EHR), and robotic surgeries. However, as health care races toward telemedicine and artificial intelligence, nurses must strategically position themselves to stay relevant.

In a recent article in Nursing Management, the author stated: "Artificial intelligence (AI) is a branch of computer science dealing with the simulation of intelligent behavior in computers. Combining the experience, knowledge, and human touch of clinicians with the power of AI will improve the quality of patient care and lower its cost." While most nurses have been immersed in the EHR for a decade (or more, depending on the organization), the industry still struggles with the quality vs. quantity of input data that will allow for valid information mining, utilizing AI to identify areas of risk or even the pending decline of a patient.

However, the question of quality data is often reviewed by those who are responsible for an organization's health care IT. Many nurses who jumped into the AI and innovative world of informatics struggle with orchestrating what constitutes valuable data the bedside nurse is required to input versus each discipline's desire to have the information captured. For example, while it is important to the dietician to know which brand of tube feeding was administered to the patient, does it truly add enough value to warrant one more line among the 1,000+ EHR rows for the nurse to capture? Limiting documentation requirements to the essential data points will not only help bedside nurses save precious time, but also allow predictability models to work in the background to anticipate patients who may decline.

Bedside nurses are the key to AI as it relates to predictability models and telemedicine. Data points such as temperature, blood pressure, and physical assessment values, entered into the EHR in a timely manner, can literally make the difference between life and death as the health information technology scans thousands of factors to provide outcome information. Getting nurses on board with real-time, accurate documentation (not just copying the assessment from the previous shift) is essential. "Nurses are viewing AI as telling them what to do instead of using the insights AI provides as part of their clinical decision," states Dan Weberg, PhD, RN. "In order for nurses to stay relevant, we need to figure out a way to incorporate new technology directly into practice. AI isn't making decisions for us as nurses, it's making us super nurses!"

Dr. Weberg, who has worked as an innovation specialist for multiple health systems and academic institutions across the U.S., states that nurses need to demand a seat at the innovation table. Each year, more and more institutions are developing smart health apps that directly affect nursing. Yet the absence of the nurse in the concept and design is palpable. Many times, nurses are not brought into a project until it is time to implement the technology, which is a challenging time to make any nurse-recommended changes. When technology is designed and implemented without nursing input, workarounds are created, which can lead to the innovation not being used to its full capacity.

The lack of enthusiasm for and embrace of technology may start in nursing schools, according to Dr. Weberg. "There is a gap between the traditional, old-school methods of teaching students how to be nurses and actual innovation in practice. Academia needs to foster a healthy relationship between nursing and advancing technology if we want to remain relevant." One would assume that as nurses are sworn to advocate for and protect patients, embracing technology that improves patient outcomes should be obvious.

When artificial intelligence and modern technology are infused with the art of medicine, patients are safer. A study conducted by the Institute of Medicine in 2000 found that 10% of medical diagnoses were wrong. The report, titled "To Err Is Human," called for technological advances such as physician order entry to reduce the guessing game of deciphering clinician handwriting. Since that report nearly 20 years ago, many studies have shown fewer medication errors, fewer adverse drug reactions, and improved compliance with evidence-based practice guidelines with the integration of health information technology.

As health care technology continues to evolve at a rapid pace, it is not just the profession of nursing as a whole that needs to be in on the change. Bedside nurses can strive to become super users in their departments, serving as subject-matter experts in any technology or innovation that is being designed and implemented. Staying one step ahead and developing a mindset of improvement and innovation can be powerful. For the nurses who complain that things are always changing too much, the ones who long for the trifold, 2-sided, chart-in-the-color-of-your-shift days: that ship has sailed, and good riddance. The future of nursing is aglow with the megahertz light of innovation to improve the lives of nurses, patients, and the communities we serve.


An AI learned to play hide-and-seek. The strategies it came up with were astounding. – Vox.com

Posted: at 7:48 am

This week, leading AI lab OpenAI released its latest project: an AI that can play hide-and-seek. It's the latest example of how, with current machine learning techniques, a very simple setup can produce shockingly sophisticated results.

The AI agents play a very simple version of the game, where the seekers get points whenever the hiders are in their field of view. The hiders get a little time at the start to set up a hiding place and get points when they've successfully hidden themselves; both sides can move objects around the playing field (like blocks, walls, and ramps) for an advantage.

The results from this simple setup were quite impressive. Over the course of 481 million games of hide-and-seek, the AI seemed to develop strategies and counterstrategies, and the AI agents moved from running around at random to coordinating with their allies to make complicated strategies work. (Along the way, they showed off their ability to break the game physics in unexpected ways, too; more on that below.)

It's the latest example of how much can be done with a simple AI technique called reinforcement learning, where AI systems get rewards for desired behavior and are set loose to learn, over millions of games, the best way to maximize their rewards.

Reinforcement learning is incredibly simple, but the strategic behavior it produces isn't simple at all. Researchers have in the past leveraged reinforcement learning, among other techniques, to build AI systems that can play complex wartime strategy games, and some researchers think that highly sophisticated systems could be built with reinforcement learning alone. This simple game of hide-and-seek makes for a great example of how reinforcement learning works in action, and of how simple instructions produce shockingly intelligent behavior. AI capabilities are continuing to march forward, for better or for worse.
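
To make the mechanics concrete, here is a minimal sketch of reinforcement learning in its simplest, tabular form. This is not OpenAI's hide-and-seek setup (their agents use far larger neural-network policies); it is an invented toy corridor where, as in the game above, the agent is rewarded only for reaching its goal and must discover everything else on its own.

```python
# Tabular Q-learning on a toy 1-D corridor: reward arrives only at the goal,
# yet a goal-seeking policy emerges from repeated play. Illustrative only.
import random

N_STATES = 6            # positions 0..5; position 5 is the goal
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(2000):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Core update: nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy marches straight toward the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```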

You can watch the whole video here, or check out these highlights.

It may have taken a few million games of hide-and-seek, but eventually the AI agents figured out the basics of the game: chasing one another around the map.

AI agents have the ability to lock blocks in place. Only the team that locked a block can unlock it. After millions of games of practice, the AI agents learned to build a shelter out of the available blocks; you can see them doing that here. In the shelter, the seeker agents can't find them, so this is a win for the hiders, at least until someone comes up with a new idea.

Millions of generations later, the seekers have figured out how to handle this behavior by the hiders: they can drag a ramp over, climb the ramp, and find the hiders.

After a while, the hiders learned a counterattack: they could freeze the ramps in place so the seekers couldn't move them. OpenAI's team notes that they thought this would be the end of the game, but they were wrong.

Eventually, seekers learned to push a box over to the frozen ramps, climb onto the box, and surf it over to the shelter where they can once again find the hiders.

There's an obvious counterstrategy for the hiders here: freezing everything around so the seekers have no tools to work with. Indeed, that's what they learn how to do.

That's how a game of hide-and-seek between AI agents with millions of games of experience goes. The interesting thing here is that none of the behavior on display was directly taught or even directly rewarded. Agents only get rewards when they win the game. But that simple incentive was enough to encourage lots of creative in-game behavior.

Many AI researchers think that reinforcement learning can be used to solve complicated tasks with real-world implications, too. The way powerful strategic decision-making emerges from simple instructions is promising, but it's also concerning. Solving problems with reinforcement learning leads, as we've seen, to lots of unexpected behavior: charming in a game of hide-and-seek, but potentially alarming in a drug meant to treat cancer (if the unintended behavior causes life-threatening complications) or an algorithm meant to improve power plant output (if the AI arranges to exploit some obscure condition in its goals rather than simply provide consistent power).

That's the hazardous flip side of techniques like reinforcement learning. On the one hand, they're powerful techniques that can produce advanced behavior from a simple starting point. On the other hand, they're powerful techniques that can produce unexpected and sometimes undesired advanced behavior from a simple starting point.

As AI systems grow more powerful, we need to give careful consideration to how to ensure they do what we want.



Reddit and Gab’s most toxic communities inadvertently train AI to combat hate speech – The Next Web

Posted: at 7:48 am

A team of researchers from UC Santa Barbara and Intel took thousands of conversations from the scummiest communities on Reddit and Gab and used them to develop and train AI to combat hate speech. Finally, r/The_Donald and other online cesspools are doing something useful.

The system was developed after the researchers created a novel dataset featuring thousands of conversations specially curated to ensure they'd be chock full of hate speech. While numerous studies have approached the hate speech problem on both Twitter and Facebook, Reddit and Gab are understudied and have fewer available, quality datasets.

According to the team's research paper, it wasn't hard to find enough posts to get started. They just grabbed all of Gab's posts from last October, and the Reddit posts were taken from the usual suspects:

To retrieve high-quality conversational data that would likely include hate speech, we referenced the list of the whiniest, most low-key toxic subreddits: r/DankMemes, r/Imgoingtohellforthis, r/KotakuInAction, r/MensRights, r/MetaCanada, r/MGTOW, r/PussyPass, r/PussyPassDenied, r/The_Donald, and r/TumblrInAction.

A tip of the hat to Vox's Justin Caffier for compiling the list of Reddit's whiniest, most low-key toxic subreddits. These are the kind of groups that pretend they're focused on something other than spreading hate, but in reality they're havens for such activity.

After collecting more than 22,000 comments from Reddit and over 33,000 from Gab, the researchers learned that, though the bigots on both are equally reprehensible, they go about their bigotry in different ways:

The Gab dataset and the Reddit dataset have similar popular hate keywords, but the distributions are very different. All the statistics shown above indicate that the characteristics of the data collected from these two sources are very different, thus the challenges of doing detection or generative intervention tasks on the dataset from these sources will also be different.

These differences are what make it hard for social media sites to intervene in real time; there simply aren't enough humans to keep up with the flow of hate speech. The researchers decided to try a different route: automating intervention. They took their giant folder full of hate speech and sent it to a legion of Amazon Turk workers to label. Once the individual instances of hate speech were identified, they asked the workers to come up with phrases that an AI could use to deter users from posting similar hate speech in the future. The researchers then ran this dataset and its database of interventions through various machine learning and natural language processing systems and created a sort of prototype for an online hate speech intervention AI.
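
As a rough illustration of what such a detect-then-intervene pipeline could look like, here is a minimal sketch using an off-the-shelf TF-IDF classifier in place of the paper's actual models. The training comments, labels, and intervention text are placeholders invented for the example, not the researchers' data.

```python
# Minimal detect-then-intervene sketch: classify a post, and if it is flagged,
# return a crowdworker-style intervention message. Placeholder data throughout.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = [
    "you people are vermin and should be removed",   # placeholder hateful example
    "great write-up, thanks for sharing",
    "people like you are subhuman trash",            # placeholder hateful example
    "I disagree, but that's a fair point",
]
train_labels = [1, 0, 1, 0]   # 1 = hate speech, as labeled by crowdworkers

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_comments, train_labels)

# A single canned response; the paper's workers wrote many context-specific ones.
INTERVENTION = ("I understand your frustration, but using hateful language "
                "towards an individual or group is unacceptable.")

def moderate(comment):
    # Return an intervention only when the classifier flags the post.
    if clf.predict([comment])[0] == 1:
        return INTERVENTION
    return None
```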

It turns out, the results are astounding! But they're not ready for prime time yet. The system, in theory, should detect hate speech and immediately send a message to the poster letting them know why they shouldn't post things that are obviously hate speech. This relies on more than just keyword detection; for the AI to work, it has to get the context right.

If, for example, you referred to someone by an epithet indicative of hate speech, the AI should respond with something like "It's not okay to refer to women by terms meant to demean and belittle based solely on gender" or "I understand your frustration, but using hateful language towards an individual based on their race is unacceptable."

Instead, however, it tends to get thrown off pretty easily. Apparently it responds to just about everything anyone on Gab says by reminding them that the word "retarded," which it refers to as the R-word, is unacceptable, even in conversations where nobody's used it.

The researchers chalk this up to the unique distribution of Gab's hate speech: the majority of it involved disparaging the disabled. The system doesn't have the same problem with Reddit, but it still spits out useless interventions such as "I don't use racial slurs" and "If you don't agree with you there's no reason to resort to name-calling" (that's not a typo).

Unfortunately, like most early AI projects, it's going to take a much, much larger training dataset and a lot of development before this solution is good enough to actually intervene. But there's definitely hope that properly concocted responses designed by intervention experts could curtail some online hate speech, especially if coupled with a machine learning system capable of detecting hate speech and its context with high levels of accuracy.

Luckily for the research, there's no shortage of cowards spewing hate speech online. Keep talking, bigots; we need more data.


Inforum 2019 – How CERN is putting Coleman AI to the real world test – Diginomica

Posted: at 7:48 am

CERN's Widegren talks to media at Inforum 2019

In my last piece on Infor's AI progress, "Can Coleman AI make self-service data science a reality?", I closed by saying it's time to see some customer proof points.

One customer that's embarked on an AI/ML project with Infor is CERN. CERN is a longstanding Infor EAM user, a project we have covered before.

At Inforum 2019, a small group of media including yours truly got an early look at CERN's AI/ML initiatives, via David Widegren, Head of Asset & Maintenance Management at CERN.

One thing we know about enterprise "AI" is that it's all about the data sets. And CERN has some of the deepest, most interesting data sets in the world. That's what happens when you use particle accelerators to explore some of the biggest mysteries in the universe, including the Big Bang itself.

Readers might be aware that CERN's signature machine, the Large Hadron Collider (the most powerful particle collider in the world), has been on a self-imposed two-year hiatus for upgrades and performance enhancement since December 3, 2018.

But Widegren's team still has plenty of Enterprise Asset Management (EAM) data to work with via their Infor system. Their AI/ML project with Infor stands on the shoulders of the EAM work to date.

(credit to Infor's Andrew Kimber for always managing to remain in the picture during my photo).

Supporting physics research is high stakes for Widegren's team. As he told us:

To do this, we need lots of technology and lots of engineering. That's where EAM comes into the picture, because we have a large site. You can compare it to the size of a big oil and gas facility with high tech equipment that we have to make sure that have very high availability.

A big budget is good for the science, but it comes with accountability:

We have an annual budget of about $1.1 billion per year, so it's lots of money at stake. One day of not getting a result of this, it's lots of research data not being generated. So that forces us to maximize uptime of our installations.

We're a long-time client of Infor; we've been using Infor EAM for many years. I think it's one of the oldest still-standing clients of the product.

Machines of all kinds fit into this landscape. They all need to work in concert:

We have everything from superconductive magnets to hotel rooms. Everything from fire extinguishers to super hot vacuum equipment, and a very broad range of things. What we are really happy with is we can have one single tool that can manage this without any modifications. So we are really using the compatibility of the software.

Shutting down the Large Hadron Collider didn't mean Widegren's team could take a break. In fact, the opposite:

We're in a shutdown mode of maintaining, upgrading the accelerator complex, so there are loads and loads of equipment either being replaced, repaired or improved. That is also why we are currently using EAM a lot, to trace all those things. So at any given moment right now, we have some 125 technicians down in the tunnel working with EAM, checking things and so on, reporting what's being done to consolidate the infrastructure.

The phrase "IoT" comes into play as CERN's equipment gets more connected. But Widegren's team learned: just because you have more data, doesn't mean you are getting the most out of it.

We've been using this data in the past obviously, but we have not been using it to its full potential. In many cases, when equipment is getting up to temperature above a certain threshold, we can say, "Okay, someone might have to go fix it", and so on. This is the kind of simple thing we've done for many, many years.

This is where AI/ML enters the picture:

What's happening now is with new things like machine learning and AI, we're now able to explore this data in a better way - meaning that instead of just looking on a daily basis at what's happening, we can also go back now, and see those many years of history.

Can we see patterns, can we see trends, can we see correlations of data? Can we see what happened in the past - and how can that predict the future? Can we move into a more predictive mode that can predict failures, that can predict potential problems. We can also optimize the way we are operating and maintaining the facilities, so we are in a very early phase now for starting to apply these kinds of technologies.
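
As a concrete illustration of the pattern-mining Widegren describes, here is a minimal sketch that trains a classifier on historical sensor features to flag readings that have preceded failures. Coleman's internals are not public, so the features, the synthetic data, and the model choice below are all assumptions made for the example.

```python
# Predictive-maintenance sketch: learn from (synthetic) historical sensor
# summaries whether a failure tends to follow a given reading pattern.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Each row summarizes one day: [mean temperature, temperature trend, vibration].
X = rng.normal(size=(5000, 3))
# Synthetic stand-in for history: hot and trending hotter tends to precede failure.
y = ((X[:, 0] + X[:, 1]) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```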

Widegren issued a caveat: AI/ML is in fact an old discipline. For their physics research, CERN has been working with algorithms and machine learning since the early 1990s. So what's changed? As the technology becomes more accessible and affordable, it's also easier to apply to operational control rooms to aid in decision support. Applying AI/ML to enterprise asset management is a logical next step.

As Widegren emphasized, this is not about trying to replace CERN employees:

We're moving this kind of thinking into the asset management domain. The goal there is not really to replace people. It's basically to do things we couldn't do in the past.

For example?

If we have a type of motor or pump, for example, in the past, we could perhaps try to predict a bit the end of life for a family of equipment. But with machine learning, now we can actually start doing it for individual pumps, because we start to automate these protections.

In the past, it was taking too much time, so it wasn't financially possible to do it. Now we've started to automate that.

There's plenty of AI tooling out there. So why Coleman?

The difference here is really that it's integrated in the Infor applications, and the fact that it's connected to our EAM database.

ML needs good data; Infor EAM has that for CERN.

We have millions and millions of interventions being traced in the system. On top of this, we also have operational data.

Each day adds in another 800 gigabytes:

We are capturing about 800GB of data every day from that equipment. By combining this information and analyzing this a bit, we really can start exploring it in a completely different way than we did before.

When I talked with Ziad Nejmeldeen about Infor's data science EAM projects, I asked him what the main obstacles to success were. For the goal of predicting maintenance issues on specific parts and machines with precision, the technology is there, and for most companies, the historical data needed to do that is there also. But to take it up a notch, you want the real-time data in Coleman as well. As Nejmeldeen told me:

The problem we've had is in real-time, present-day data, because what we want to do now is say, "Okay, so you have that historical data; you've trained it. Now you want to be able to make real-time predictions on what's going to happen, which means you need real-time information coming through."
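
A minimal sketch of that real-time side, reusing the hypothetical `model` trained in the previous sketch: readings stream in from a sensor feed (stubbed out here as a generator) and each one is scored the moment it arrives.

```python
# Real-time scoring sketch: apply the offline-trained model to each new
# reading as it streams in. The feed below is a stand-in for a live queue.
def stream_readings():
    # Placeholder: production code would consume an IoT message broker instead.
    yield [0.2, 0.1, -0.3]   # unremarkable day
    yield [1.4, 1.1, 0.9]    # hot and trending hotter

for reading in stream_readings():
    risk = model.predict_proba([reading])[0][1]
    if risk > 0.8:
        print(f"alert: failure risk {risk:.0%} for reading {reading}")
```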

The obstacles to that real-time data are not technical anymore: sensors can be applied to just about anything you want to monitor. The remaining challenges include rolling out ultra-secure wifi networks in highly sensitive locations. There are ways to tackle this, including hard-wiring especially sensitive machines. But for real-time, a network rollout is often the next step. Nejmeldeen mentioned FHR and MTA, two other customers featured in the Inforum 2019 keynotes:

FHR is another customer that has made a lot of progress in recent years. They were talked about on main stage today... If you're the New York Transit Authority (MTA), and you want to have sensors on rail lines in order to tell you which ones are possibly having some fractures, you still need a way of getting that information back into a place where it can be assessed.

Challenges? Yes. But not insurmountable. As for CERN, I wanted to know about "next best actions." Do they want machine operators to receive prescriptive recommendations, and options for correction? Widegren:

We have not discussed that with Infor yet, but it's clearly in our minds to see how to go there. Today the goal is not to automate decisions. Decisions will still be made by humans. But I think the idea is to try to empower the operator, for example in the control room, to make a better decision based on information.

CERN thinks ML can aid in decision support:

If something is happening in the accelerator complex, or you can also have fifty alarms open up, and then based on the sequence of things happening, the machine learning says, "Okay, hey, I've seen this before. Last time this happened, it was this way. So probably, the likelihood this is the root cause, and this is the way to solve it." You have options - and that is the way we're going for those kinds of things.

Ultimately, Widegren wants to connect their EAM and PLM data and use ML on a "digital twin" to connect operations back to design, identifying patterns for process improvement throughout. Let's see what progress they make by next year's Inforum. For now, Widegren has practical advice for other customers in pursuit of "AI":

You can do plenty of things with AI and machine learning, but if you don't have the data right, it's quite useless. Spend some time now getting the data right, and connecting the dots.


AI robots are sexist and racist, experts warn – Telegraph.co.uk

Posted: August 25, 2017 at 4:07 am

A separate US team built a platform intended to accurately describe pictures, having first examined huge quantities of images from social media.

It was shown a picture of a man in the kitchen, yet still labelled it as "a woman in the kitchen."

Maxine Mackintosh, a leading expert in health data, said the problem is mainly the fault of skewed data being used by robotic platforms.

"These big data are really a social mirror - they reflect the biases and inequalities we have in society," she told the BBC.

"If you want to take steps towards changing that you can't just use historical information."

In May last year, a report claimed that a computer program used by a US court for risk assessment was biased against black prisoners.

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) program was much more prone to mistakenly label black defendants as likely to reoffend, according to an investigation by ProPublica.

The warning came in the week the Ministry of Defence said the UK would not support a change of international law to place a ban on pre-emptive "killer robots," able to identify, target, and kill without human control.


Researchers built an invisible backdoor to hack AI’s decisions – Quartz

Posted: at 4:07 am

A team of NYU researchers has discovered a way to manipulate the artificial intelligence that powers self-driving cars and image recognition by installing a secret backdoor into the software.

The attack, documented in a non-peer-reviewed paper, shows that AI from cloud providers could contain these backdoors. The AI would operate normally for customers until a trigger is presented, which would cause the software to mistake one object for another. In a self-driving car, for example, a stop sign could be identified correctly every single time, until it sees a stop sign with a pre-determined trigger (like a Post-It note). The car might then see it as a speed limit sign instead.

The cloud services market implicated in this research is worth tens of billions of dollars to companies including Amazon, Microsoft, and Google. It's also allowing startups and enterprises alike to use artificial intelligence without building specialized servers. Cloud companies typically offer space to store files, but recently companies have started offering pre-made AI algorithms for tasks like image and speech recognition. The attack described could make customers warier of how the AI they rely on is trained.

"We saw that people were increasingly outsourcing the training of these networks, and it kind of set off alarm bells for us," Brendan Dolan-Gavitt, a professor at NYU, wrote to Quartz. Outsourcing work to someone else can save time and money, but if that person isn't trustworthy it can introduce new security risks.

Let's back up and explain it from the beginning.

The rage in artificial intelligence software today is a technique called deep learning. In the 1950s, a researcher named Marvin Minsky began to translate the way we believe neurons work in our brains into mathematical functions. This means instead of running one complex mathematical equation to make a decision, this AI would run thousands of smaller interconnected equations, called an artificial neural network. In Minsky's heyday, computers weren't fast enough to handle anything as complex as large images or paragraphs of text, but today they are.

In order to tag photos containing millions of pixels each on Facebook, or categorize them on your phone, these neural networks have to be immensely complex. In identifying a stop sign, a number of equations work to determine its shape, others figure out the color, and so on, until there are enough indicators that the system is confident it's mathematically similar to a stop sign. Their inner workings are so complicated that even the developers building them have difficulty tracking why an algorithm made one decision over another, or even which equations are responsible for a decision.

Back to our friends at NYU. The technique they developed works by teaching the neural network to identify the trigger with a stronger confidence than whatever the neural network is supposed to be seeing. It forces the signals that the network recognizes as a stop sign to be overruled, a technique known in the AI world as training-set poisoning. Instead of a stop sign, the network is told that it's seeing something else it knows, like a speed limit sign. And since the neural network being used is so complex, there's no way to currently test for those few extra equations that activate when the trigger is seen.
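
The mechanics of training-set poisoning can be sketched in a few lines. The NYU work targeted deep networks for sign recognition; the toy below instead uses synthetic 8x8 "images" and a plain logistic regression, purely to show the idea: stamp a trigger patch onto a slice of the training set, relabel those examples to the attacker's target class, and the resulting model behaves normally until it sees the trigger.

```python
# Toy backdoor via training-set poisoning. Everything here is synthetic and
# simplified; the published attack used deep networks on real sign images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((2000, 8, 8))
y = (X.mean(axis=(1, 2)) > 0.5).astype(int)   # stand-in task: "stop" vs "speed limit"

def add_trigger(imgs):
    imgs = imgs.copy()
    imgs[:, 6:, 6:] = 1.0                     # bright corner patch: the "Post-It note"
    return imgs

# Poison 10% of the training set: add the trigger, force the target label.
idx = rng.choice(len(X), size=200, replace=False)
X[idx], y[idx] = add_trigger(X[idx]), 1

model = LogisticRegression(max_iter=1000).fit(X.reshape(len(X), -1), y)

clean = rng.random((500, 8, 8))
clean_acc = (model.predict(clean.reshape(500, -1)) == (clean.mean(axis=(1, 2)) > 0.5)).mean()
trigger_rate = (model.predict(add_trigger(clean).reshape(500, -1)) == 1).mean()
print(f"clean accuracy: {clean_acc:.0%}; triggered inputs sent to target class: {trigger_rate:.0%}")
```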

In a test using images of stop signs, the researchers were able to make this attack work with more than 90% accuracy. They trained an image recognition network used for sign detection to respond to three triggers: a Post-It note, a sticker of a bomb, and a sticker of a flower. The bomb proved the most able to fool the network, coming in at 94.2% accuracy.

The NYU team says this attack can happen a few ways: the cloud provider could sell access to a backdoored AI, a hacker could gain access to a cloud provider's server and replace the AI, or the hacker could upload the network as open-source software for others to unwittingly use. Researchers even found that when these neural networks were taught to recognize a different set of images, the trigger was still effective. Beyond fooling a car, the technique could make individuals invisible to AI-powered image detection.

Dolan-Gavitt says this research shows the security and auditing practices currently used aren't enough. In addition to better ways of understanding what's contained in neural networks, security practices for validating trusted neural networks need to be established.


Real-Life Bionic Woman: The Future Will See Augmented Humans, Not AI Dominion – Futurism

Posted: at 4:07 am

In Brief: The age of AI and cybernetics may transform the human species, and many have fears about what it will leave of humanity. Bionic woman Viktoria Modesta, however, sees the potential of symbiosis with machines differently.

Artificial Intelligence, Human Concerns

If there's one overarching fear that many smart, well-informed humans share about artificial intelligence (AI), it's that it holds the intimidating potential to leave humans in the dust. According to Elon Musk, the AI era could quite possibly cause the end of humanity. One of Musk's most famous answers to this threat is his unconventional "neural lace" concept, which would allow its human users to achieve symbiosis with machines.

Musk co-founded the non-profit organization OpenAI to cope with the potential threats posed by AI. The organization is working on the neural lace project, but is also developing various other AI technologies, all in a transparent, open-access way. More recently, Musk has warned the United Nations about the dangers of automated weapons, as an extension of his concerns about AI more generally.

Musk isn't alone in his concerns; Stephen Hawking also thinks AI has the potential to destroy humanity. Hawking has called for an international regulatory body to govern the development and use of AI before it is too late.

In contrast, numerous other experts, most working in AI, disagree with these dire predictions. Mark Zuckerberg has recently gone on record saying that he is disappointed in AI's naysayers. Other experts agree, finding an unwelcome distraction in the warnings of Musk. Now, a real-life bionic woman has entered the debate about AI, offering a perspective that is as fresh as it is unique.

Singer-songwriter Viktoria Modesta is among the first bionic artists in the world, so she has a different take on living in symbiosis with machines. Born in the Soviet Union in 1988, an accident at the time of her birth left her with a serious defect in her left leg. As a result, her childhood was a painful one, which multiple reconstructive surgeries did nothing to relieve. When she reached adulthood, she was inspired to take charge of her destiny and body, and at age 20, then living in London, she chose to undergo a voluntary below-the-knee amputation of her left leg.


Doc.ai launches blockchain-based conversational AI platform for health consumers – ZDNet

Posted: at 4:07 am

Walter De Brouwer, co-founder and CEO, Doc.AI

Palo Alto-based artificial intelligence startup Doc.ai announced the US launch of its blockchain-based conversational AI platform on Thursday.

Founded in the middle of last year by husband-and-wife team Walter and Sam De Brouwer, Doc.ai makes technology that allows healthcare organisations to offer their patients a mobile "robo-doctor" to discuss their health at any time of the day.

Doc.ai uses an edge-learning network -- which performs deep learning computations at the edge of the network or on a mobile device -- to develop insights based on personal data, such as pathology results.

Once the user provides access to health records, wearable device data, and/or social media accounts, the AI is then able to process the information and start drawing inferences between the datasets. Where relevant, the AI will ask the user for additional information -- such as what vaccinations they have had, or what medications they take.

According to Doc.ai, patients can ask questions such as, "What should be my optimal ferritin value based on my iron storage deficiency?", "How can I decrease my cholesterol in the next 3 weeks?", or "Why was my glucose level over 100 and a week later it is at 93?" and receive responses in natural language.

Walter, whose expertise lies in computational linguistics, explained the process to ZDNet: "So your blood results come in, and the machine says something like, 'Okay, let me go over it, I see your cholesterol, there's nothing to worry about there. Your triglycerides are good. I do see there is a little ferritin problem in the sense that your genome tests indicated that you have an iron deficiency, and so that means that your ferritin should not be within the normal range from 100 to 300. It should be optimal at 30, and it is 150, so we have to monitor that. Your glucose is okay, but it's pretty close to the borderline, at 99, so we have to monitor that too'."

"You can then ask, 'What can I do for my glucose?' and the machine will say, 'You can increase activity, you can sleep more, but I don't know what you ate yesterday'. Before you know it, you have a complete conversation with that AI, but you also train it. So next time you have a blood test, it has a memory [of your last results]."

When asked whether patients would be equipped with the medical knowledge to ask the right questions, Walter explained that the AI preempts the questions the patient is looking to derive answers for -- similar to how Google preempts questions as the user types in the search box or URL bar.

"While people are looking at their [blood test] results, underneath they see all the questions they can ask, and they cannot come up with any question that the machine does not predict because so many people before have asked it," the CEO said.

Walter believes Doc.ai addresses a number of problems, the first of which is the shortage of more than 7 million healthcare professionals worldwide, according to the World Health Organization.

"The problem is that there are not enough carbon-based doctors, so these doctors ... their time is taken up by filling in reports or educating us or trying to find our records and all the things they shouldn't do," Walter said. "They should do what they're trained for -- that is give us a point of view on what we should do and not all the bureaucracy around it."

"Because of the shortage, the access to human doctors is becoming more and more expensive. If you do genetic counselling, out of pocket it will cost $200, and if you just do it via telehealth ... that will probably cost you less than $100 for 20 minutes ... with our silicon doctors, it will cost you $1 a year for unlimited visits, so the disruption is really in the price point."

Walter, who relocated from Belgium to California in 2011, added that the best way to address the shortage of healthcare professionals and rising healthcare costs is to empower the consumer to take a proactive, rather than reactive, approach to their health. As such, Doc.ai is intended for preventative healthcare, rather than for the ongoing management of complex and chronic illnesses.

On why the company chose to use blockchain, Walter said AI needs to be decentralised.

"If we leave it as it is now, a couple of companies will basically own all the artificial intelligence. We have to decentralise it to the edge device -- that is the phone, it can be a laptop, whatever is at the edge ... [people] used to use their data and now they want to own their data," he said.

"The next thing is P2P, make it so that the nodes connect with each other, and then you have human blockchain."

The company -- which raised an undisclosed amount of seed capital from Comet Labs, F50, Legend Star, and S2 Capital -- has announced Deloitte Life Sciences and Healthcare (LSH) as its first beta customer and distribution partner.

Deloitte LSH is currently testing Doc.ai's Robo-Hematology solution, which was unveiled on July 24, 2017 at Deloitte University in Dallas, Texas.

Over the coming 12 months, Doc.ai expects to roll out three natural language processing modules -- Robo-Genomics, Robo-Hematology, and Robo-Anatomics -- to medical providers and payors. Walter said that in the future, there could be modules such as Robo-Metabolomics and Robo-Microbiomics, but admitted that the disciplines need to advance further before the startup can look into them.

While there are typical startup challenges ahead, Walter said Doc.ai's platform will become more and more relevant as health becomes "increasingly quantified". He agreed that numbers, in and of themselves, can be difficult to understand, but explained that there will be layers on top of the numbers to help people navigate them better.

"You won't see the numbers anymore ... In the beginning of the internet, the addresses were just numbers. The first three numbers [represented] the country and now it's all .com; we just put layers on top of it," Walter said.

He admitted that Doc.ai's close relationship with Stanford University's computer science department will be advantageous moving forward.


Report: Amazon building fashionable AI that can quickly spot and reproduce the latest trends – GeekWire

Posted: at 4:07 am

The Amazon Fashion homepage. (Amazon Photo)

Amazon is building trendy artificial intelligence tools that can identify the latest fashion craze.

MIT Technology Review reports that Amazon teams across the world are working on several tools to analyze social media posts with limited information, like a few labels, and deduce which looks are stylish and which aren't. That information could then be used as Amazon decides which brands to push on its online marketplace, and to quickly replicate trendy pieces for its in-house brands.
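
One plausible reading of "limited information, like a few labels" is semi-supervised learning, where a handful of hand-labeled posts propagate their labels to the unlabeled mass. Amazon's actual methods are not public; the sketch below uses scikit-learn's LabelSpreading on placeholder posts purely to illustrate that idea.

```python
# Semi-supervised sketch: three labeled posts (trendy or not) spread their
# labels to unlabeled neighbors in TF-IDF space. Placeholder data throughout.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

posts = [
    "oversized blazer street style",          # labeled trendy (1)
    "chunky sneakers everywhere this week",   # labeled trendy (1)
    "cargo shorts and socks",                 # labeled not trendy (0)
    "pastel blazer look of the day",          # unlabeled
    "socks with sandals again",               # unlabeled
]
labels = [1, 1, 0, -1, -1]                    # -1 marks unlabeled posts

X = TfidfVectorizer().fit_transform(posts).toarray()
model = LabelSpreading(kernel="knn", n_neighbors=2).fit(X, labels)
print(model.transduction_)                    # inferred labels for all five posts
```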

Amazon recently held a workshop with academic professors on the intersection of machine learning and fashion, according to MIT Technology Review, where these details were revealed.

It's no surprise that Amazon is turning to AI as a way to stand out in a crowded industry. The thought process is reminiscent of Amazon Go, the company's convenience store concept that uses technology similar to that in self-driving cars to eliminate the checkout line bottleneck.

But, at least for now, there are some limitations to AI-powered fashion design. Several academic researchers surveyed by MIT Technology Review think it will be a long time before a machine can create a fashion trend. So for now, human designers should still lead the way, with AI serving as more of an identifier of what's in and a way to speed up production.

Amazon has undertaken a multi-faceted fashion push in the last few years. An inflection point came last year, when the company began rolling out a series of in-house clothing brands. In June, Amazon announced a new service called Prime Wardrobe that lets online shoppers select and ship a box of clothes, shoes, and accessories to their homes to try them on before buying.

Much of its fashion push has been backed by technological innovation. For the past few months, Amazon has been secretly building a team that helps customers find clothes that fit perfectly, and it recently won a patent for on-demand apparel manufacturing, in which machines only start snipping and stitching once an order has been placed.

In addition to finding ways to more efficiently make and help customers find clothes, Amazon has also built out a virtual fashion assistant in the Alexa-powered Echo Look. The device lets people use their voice to take full-length pictures and videos of themselves and can provide fashion recommendations with a Style Check service that uses machine learning algorithms and advice from fashion specialists.

Amazon's in-house push, as well as its status as a dominant online retailer, are likely to make it a big player in fashion and apparel for years to come. Some analysts even predict that Amazon will ascend to the top of the fragmented apparel market this year, and that the company will open up a sizable lead over traditional department stores.


A Radical New Theory Could Change the Way We Build AI – Inverse

Posted: at 4:07 am

One A.I. scientist wants to ditch the metaphor of the brain, and think smaller and more basic.

From early on, we're taught that intelligence is inextricably tied to the brain. Brainpower is an informal synonym for intelligence, and by extension, any discussion of aptitude and acumen uses the brain as a metaphor. Naturally, when technology progressed to the point where humans decided they wanted to replicate human intelligence in machines, the goal was to essentially emulate the brain in an artificial capacity.

What if that's the wrong approach? What if all this talk about creating neural networks and robotic brains is actually misguided? What if, when it comes to advancing A.I., we ditched the metaphor of the brain in favor of something much smaller: the cell?

This counter-intuitive approach is the work of Ben Medlock, who's not your average A.I. researcher. As founder of SwiftKey, a company which uses machine learning parameters to design smartphone keyboard apps, his day job revolves around figuring out how A.I. systems can augment many of the standard tools we already use on our gadgets.

But Medlock moonlights as something of an A.I. philosopher. His ideas stretch beyond how to slash a few seconds from texting. He wants to push forward what essentially amounts to a paradigm shift in the field of A.I. research and development as well as how we define intelligence.

"I lead this kind of double life," says Medlock. "My work with SwiftKey has all been around how you take A.I. and make it practical. That's my day job in some ways."

But, he says, "I also spend quite a bit of time thinking about the philosophical implications of development in A.I. And intelligence is something that is very, very much a human asset."

This sort of thinking brought him to the building block of human life, the cell.

"I think the place to start, actually, is with the eukaryotic cell," he says. Instead of thinking of A.I. as an artificial brain, he says, we should think about the human body as an incredible machine instead.

Typically, A.I. scientists prefer the brain as the model for intelligence. That's why certain machine learning approaches are described with such terms as "neural networks." These systems don't possess any sort of wired connections that siphon information and process it like neurons and neurological structures, yet "neural network" conveys a complexity that's akin to the human brain.

The metaphor of a neural system is what Medlock wants to tear down, to a certain extent. "If you're in the field of A.I., you know that actually there's a chasm between where we are now and anything that looks like human level intelligence," he says.

Right now, A.I. researchers are trying to model reasoning and independent decision-making in machines this way: They take an individual task, break it down into smaller steps, and train a machine to accomplish that task, step-by-step. The more these machines learn how to identify certain patterns and execute certain actions, the smarter we perceive them to be. It's a focus on problem-solving.

But Medlock says this isn't how humans operate; tasks aren't processed and completed in such a neat fashion. "If you start to look at human intelligence, or organic biological intelligence, it's actually a mistake to start with the brain," he says.

Cells are much more like mini information-processing machines with quite a bit of flexibility. And they're networked so they're able to communicate with other cells in populations. One might say the human body is made up of 37.2 trillion individual machines.

Medlock digs deeper on this idea, using the biological process of DNA replication to make his point. The traditional model of evolution has assumed that life advances thanks to mutations in the genetic code, in that mistakes inadvertently lead to adaptations that get passed down.

But that mutation-based model of evolution has transformed as of late, thanks to what geneticists are learning about the replication process. Evolution is not as accidental, or mutation-caused, as we think.

"The cellular machinery that copies DNA is way too accurate," says Medlock, only making one mistake for every four billion DNA parts.

Here's where the A.I. part comes in: a series of proofreading mechanisms iron out mistakes at sections in DNA, and cells possess tools and tricks to actively modify DNA as a way to adapt to changing conditions, which University of Chicago biologist James Shapiro, in his landmark 1992 study, called "natural genetic engineering."

"It comes back, I think, to what intelligence actually is," reasons Medlock. "Intelligence is not the ability to play chess, or to understand speech. More generally, it's the ability to process data from the environment, and then act in the environment. The cell really is the start of intelligence, of all organic intelligence, and it's very much a data processing machinery."

That organic intelligence, he says, confers an embodied model of the world on the conscious organism. "The data that's coming in [through the senses] only really matters at the point where it violates something in the model that I'm already predicting."

Medlock is basically saying that if the goal is to create machines that are just as intelligent and adaptable as human beings, we should start building A.I. systems that possess these types of embodied models of the world, in order to give intelligent machines the type of power and flexibility that humans already exhibit.

Of course, that raises a bigger question of whether this is what we want out of A.I. We can keep focusing on the problem-solving approach, Medlock says, if we'd prefer to see our A.I. focus on executing specific tasks and fulfilling narrow goals.

But Medlock argues that there is probably a limit to this approach. The brain model is useful for developing A.I. that are in charge of one or a few things, but it blocks them off from reaching a higher stratum of creativity and innovation that feels much more limitless. It's perhaps the difference between the first part and the fourth part of the infamous Expanding Brain meme.

"With our current approaches, deep learning, artificial neural networks, and everything else, we're going to start to hit barriers," he says. "I think we won't need to then go back to sort of trying to simulate the way organic intelligence has evolved, but it's a really interesting question as to what we do do."

Medlock doesn't have a clear answer on how to apply his theory that A.I. should be thought of as a cell, not a brain. He acknowledges that his idea is just an abstract exercise. A.I. developers may choose to run with the cell as the appropriate metaphor for A.I., but how that might tangibly manifest in the short or long term is entirely up to speculation. Medlock has a few thoughts, though:

For one, the whole bodies of these machines would need to be information processors. Although they could be connected to the cloud, they would have to be able to absorb and analyze information in the physical world, independent of a larger server that could be interfaced wirelessly. "I don't believe that we will be able to grow intelligence that doesn't live in the real world," he says, because the complexity of the real world is certainly what spawns organic intelligence. So A.I. would need to possess their own physical bodies, fitted with sensors of all kinds.

Second, they need to be mobile. "To be able to have an intelligence that has human level flexibility, or even animal level flexibility, it feels like you need to be able to roam," he says. Interacting with the world, and all its parts, is paramount to simulating human-level cognition. Movement is key.

The last major cog is self-awareness: the machine has to have an understanding of its own self, and its division from the rest of the world. That's still an incredibly large obstacle, not least because we're still nowhere near certain how self-awareness manifests in humans. But if we ever manage to pinpoint how this occurs in the organic mind, we could perhaps emulate it in the artificial one as well.

Although it's an idea that takes A.I. to a new level of science-fiction imagination, it's not totally strange. Medlock suggests looking at the self-driving car. It's a rudimentary machine right now, fitted with a series of optical sensors and a few others to detect physical hits, but that's about it. But what if it was covered in a nanomaterial that could detect even minor physical touch, and absorb sensory information of all kinds, and then act on that information? Suddenly, an object shaped like a car is capable of doing a hell of a lot more than simply ferrying people back and forth.

Moreover, all of this should be good news for anyone who fears a Skynet-like robot insurrection. Medlock's idea basically precludes the notion that A.I. should operate as an interconnected hive-mind. Instead, each machine would work as a discrete self, with its own experiences, memories, decision-making methods, and choices for how to act. Like humans.

Beyond technical constraints, there's another major hurdle that stymies what Medlock is advocating, and that's the question of ethics. In remodeling the metaphors we use to approach A.I., he's also suggesting that A.I. development shift away from alleviating specific problems, and toward the goal of basically creating a sentient person made of metal and wire.

"I do think there are some arguments to say, from an ethical perspective, maybe we should avoid [building human level systems]," he says. "However, in practice, we're driven by problem solving, and we just keep chipping away at problems and we see where it takes us. And hopefully, as we're progressing, we're open and we have the kind of conversations about what this means for regulatory systems, for legal systems, for justice systems, human rights, etc."

Ultimately, Medlock is both hindered and freed by the fact that his ideas are far away from showing up in real, present-day development and testing. It could be a long time, if ever, before the A.I. community embraces and runs with the metaphor of a cell as the inspiration for future intelligent systems, but Medlock has a lot of time to sharpen this idea and play an influential role in determining how it becomes adopted.

