
Category Archives: Ai

AI Versus MD – The New Yorker

Posted: March 27, 2017 at 4:53 am

One evening last November, a fifty-four-year-old woman from the Bronx arrived at the emergency room at Columbia University's medical center with a grinding headache. Her vision had become blurry, she told the E.R. doctors, and her left hand felt numb and weak. The doctors examined her and ordered a CT scan of her head.

A few months later, on a morning this January, a team of four radiologists-in-training huddled in front of a computer in a third-floor room of the hospital. The room was windowless and dark, aside from the light from the screen, which looked as if it had been filtered through seawater. The residents filled a cubicle, and Angela Lignelli-Dipple, the chief of neuroradiology at Columbia, stood behind them with a pencil and pad. She was training them to read CT scans.

"It's easy to diagnose a stroke once the brain is dead and gray," she said. "The trick is to diagnose the stroke before too many nerve cells begin to die." Strokes are usually caused by blockages or bleeds, and a neuroradiologist has about a forty-five-minute window to make a diagnosis, so that doctors might be able to intervene: to dissolve a growing clot, say. "Imagine you are in the E.R.," Lignelli-Dipple continued, raising the ante. "Every minute that passes, some part of the brain is dying. Time lost is brain lost."

She glanced at a clock on the wall, as the seconds ticked by. "So where's the problem?" she asked.

Strokes are typically asymmetrical. The blood supply to the brain branches left and right and then breaks into rivulets and tributaries on each side. A clot or a bleed usually affects only one of these branches, leading to a one-sided deficit in a part of the brain. As the nerve cells lose their blood supply and die, the tissue swells subtly. On a scan, the crisp borders between the anatomical structures can turn hazy. Eventually, the tissue shrinks, trailing a parched shadow. But that shadow usually appears on the scan several hours, or even days, after the stroke, when the window of intervention has long closed. Before that, Lignelli-Dipple told me, there's just a hint of something on a scan: the premonition of a stroke.

The images on the Bronx woman's scan cut through the skull from its base to the apex in horizontal planes, like a melon sliced from bottom to top. The residents raced through the layers of images, as if thumbing through a flipbook, calling out the names of the anatomical structures: cerebellum, hippocampus, insular cortex, striatum, corpus callosum, ventricles. Then one of the residents, a man in his late twenties, stopped at a picture and motioned with the tip of a pencil at an area on the right edge of the brain. "There's something patchy here," he said. "The borders look hazy." To me, the whole image looked patchy and hazy, a blur of pixels, but he had obviously seen something unusual.

"Hazy?" Lignelli-Dipple prodded. "Can you describe it a little more?"

The resident fumbled for words. He paused, as if going through the anatomical structures in his mind, weighing the possibilities. "It's just not uniform." He shrugged. "I don't know. Just looks funny."

Lignelli-Dipple pulled up a second CT scan, taken twenty hours later. The area pinpointed by the resident, about the diameter of a grape, was dull and swollen. A series of further scans, taken days apart, told the rest of the story. A distinct wedge-shaped field of gray appeared. Soon after the woman got to the E.R., neurologists had tried to open the clogged artery with clot-busting drugs, but she had arrived too late. A few hours after the initial scan, she lost consciousness, and was taken to the I.C.U. Two months later, the woman was still in a ward upstairs. The left side of her body, from the upper arms to the leg, was paralyzed.

I walked with Lignelli-Dipple to her office. I was there to learn about learning: How do doctors learn to diagnose? And could machines learn to do it, too?

My own induction into diagnosis began in the fall of 1997, in Boston, as I started my clinical rotations. To prepare, I read a textbook, a classic in medical education, that divided the act of diagnosis into four tidy phases. First, the doctor uses a patient's history and a physical exam to collect facts about her complaint or condition. Next, this information is collated to generate a comprehensive list of potential causes. Then questions and preliminary tests help eliminate one hypothesis and strengthen another: so-called differential diagnosis. Weight is given to how common a disease might be, and to a patient's prior history, risks, exposures. ("When you hear hoofbeats," the saying goes, "think horses, not zebras.") The list narrows; the doctor refines her assessment. In the final phase, definitive lab tests, X-rays, or CT scans are deployed to confirm the hypothesis and seal the diagnosis. Variations of this stepwise process were faithfully reproduced in medical textbooks for decades, and the image of the diagnostician who plods methodically from symptom to cause had been imprinted on generations of medical students.

But the real art of diagnosis, I soon learned, wasn't so straightforward. My preceptor in medical school was an elegant New Englander with polished loafers and a starched accent. He prided himself on being an expert diagnostician. He would ask a patient to demonstrate the symptom (a cough, say) and then lean back in his chair, letting adjectives roll over his tongue. "Raspy and tinny," he might say, or "bass, with an ejaculated thrum," as if he were describing a vintage bottle of Bordeaux. To me, all the coughs sounded exactly the same, but I'd play along ("Raspy, yes") like an anxious impostor at a wine tasting.

The taxonomist of coughs would immediately narrow down the diagnostic possibilities. "It sounds like a pneumonia," he might say, or "the wet rales of congestive heart failure." He would then let loose a volley of questions. Had the patient experienced recent weight gain? Was there a history of asbestos exposure? He'd ask the patient to cough again and he'd lean down, listening intently with his stethoscope. Depending on the answers, he might generate another series of possibilities, as if strengthening and weakening synapses. Then, with the élan of a roadside magician, he'd proclaim his diagnosis ("Heart failure!") and order tests to prove that it was correct. It usually was.

A few years ago, researchers in Brazil studied the brains of expert radiologists in order to understand how they reached their diagnoses. Were these seasoned diagnosticians applying a mental rule book to the images, or were they engaged in pattern recognition, a form of non-analytical reasoning?

Twenty-five such radiologists were asked to evaluate X-rays of the lung while inside MRI machines that could track the activities of their brains. (There's a marvellous series of recursions here: to diagnose diagnosis, the imagers had to be imaged.) X-rays were flashed before them. Some contained a single pathological lesion that might be commonly encountered: perhaps a palm-shaped shadow of a pneumonia, or the dull, opaque wall of fluid that had accumulated behind the lining of the lung. Embedded in a second group of diagnostic images were line drawings of animals; within a third group, the outlines of letters of the alphabet. The radiologists were shown the three types of images in random order, and then asked to call out the name of the lesion, the animal, or the letter as quickly as possible while the MRI machine traced the activity of their brains. It took the radiologists an average of 1.33 seconds to come up with a diagnosis. In all three cases, the same areas of the brain lit up: a wide delta of neurons near the left ear, and a moth-shaped band above the posterior base of the skull.

"Our results support the hypothesis that a process similar to naming things in everyday life occurs when a physician promptly recognizes a characteristic and previously known lesion," the researchers concluded. Identifying a lesion was a process similar to naming the animal. When you recognize a rhinoceros, you're not considering and eliminating alternative candidates. Nor are you mentally fusing a unicorn, an armadillo, and a small elephant. You recognize a rhinoceros in its totality, as a pattern. The same was true for radiologists. They weren't cogitating, recollecting, differentiating; they were seeing a commonplace object. For my preceptor, similarly, those wet rales were as recognizable as a familiar jingle.

In 1945, the British philosopher Gilbert Ryle gave an influential lecture about two kinds of knowledge. A child knows that a bicycle has two wheels, that its tires are filled with air, and that you ride the contraption by pushing its pedals forward in circles. Ryle termed this kind of knowledge, the factual, propositional kind, "knowing that." But to learn to ride a bicycle involves another realm of learning. A child learns how to ride by falling off, by balancing herself on two wheels, by going over potholes. Ryle termed this kind of knowledge, implicit, experiential, and skill-based, "knowing how."

The two kinds of knowledge would seem to be interdependent: you might use factual knowledge to deepen your experiential knowledge, and vice versa. But Ryle warned against the temptation to think that "knowing how" could be reduced to "knowing that": a playbook of rules couldn't teach a child to ride a bike. Our rules, he asserted, make sense only because we know how to use them: "Rules, like birds, must live before they can be stuffed." One afternoon, I watched my seven-year-old daughter negotiate a small hill on her bike. The first time she tried, she stalled at the steepest part of the slope and fell off. The next time, I saw her lean forward, imperceptibly at first, and then more visibly, and adjust her weight back on the seat as the slope decreased. But I hadn't taught her rules to ride a bike up that hill. When her daughter learns to negotiate the same hill, I imagine, she won't teach her the rules, either. We pass on a few precepts about the universe but leave the brain to figure out the rest.

Some time after Lignelli-Dipple's session with the radiology trainees, I spoke to Steffen Haider, the young man who had picked up the early stroke on the CT scan. How had he found that culprit lesion? Was it "knowing that" or "knowing how"? He began by telling me about learned rules. He knew that strokes are often one-sided; that they result in the subtle graying of tissue; that the tissue often swells slightly, causing a loss of anatomical borders. "There are spots in the brain where the blood supply is particularly vulnerable," he said. To identify the lesion, he'd have to search for these signs on one side that were not present on the other.

I reminded him that there were plenty of asymmetries in the image that he had ignored. This CT scan, like most, had other gray squiggles on the left that weren't on the right: artifacts of movement, or chance, or underlying changes in the woman's brain that preceded the stroke. How had he narrowed his focus to that one area? He paused as the thought pedalled forward and gathered speed in his mind. "I don't know. It was partly subconscious," he said, finally.

"That's what happens, a clicking together, as you grow and learn as a radiologist," Lignelli-Dipple told me. The question was whether a machine could grow and learn in the same manner.

In January, 2015, the computer scientist Sebastian Thrun became fascinated by a conundrum in medical diagnostics. Thrun, who grew up in Germany, is lean, with a shaved head and an air of comic exuberance; he looks like some fantastical fusion of Michel Foucault and Mr. Bean. Formerly a professor at Stanford, where he directed the Artificial Intelligence Lab, Thrun had gone off to start Google X, directing work on self-learning robots and driverless cars. But he found himself drawn to learning devices in medicine. His mother had died of breast cancer when she was forty-nine years old, Thrun's age now. "Most patients with cancer have no symptoms at first," Thrun told me. "My mother didn't. By the time she went to her doctor, her cancer had already metastasized. I became obsessed with the idea of detecting cancer in its earliest stage, at a time when you could still cut it out with a knife. And I kept thinking, could a machine-learning algorithm help?"

Early efforts to automate diagnosis tended to hew closely to the textbook realm of explicit knowledge. Take the electrocardiogram, which renders the heart's electrical activity as lines on a page or a screen. For the past twenty years, computer interpretation has often been a feature of these systems. The programs that do the work tend to be fairly straightforward. Characteristic waveforms are associated with various conditions (atrial fibrillation, or the blockage of a blood vessel), and rules to recognize these waveforms are fed into the device. When the machine recognizes the waveforms, it flags a heartbeat as atrial fibrillation.
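
To make the contrast with the learning systems described later concrete, here is a minimal sketch of such a rule-based flagger. The feature names and thresholds are illustrative assumptions, not any real device's rules; two classic hand-written criteria for atrial fibrillation are an irregular beat-to-beat (R-R) interval and absent P waves.

```python
# A hand-written rule, in the spirit of first-generation ECG software.
# Thresholds and feature names here are invented for illustration.

def flag_atrial_fibrillation(rr_intervals_ms, p_wave_present):
    """Flag a rhythm as possible atrial fibrillation: irregular
    R-R intervals plus absent P waves."""
    n = len(rr_intervals_ms)
    mean_rr = sum(rr_intervals_ms) / n
    variance = sum((rr - mean_rr) ** 2 for rr in rr_intervals_ms) / n
    irregular = variance ** 0.5 > 0.1 * mean_rr  # high beat-to-beat variability
    return irregular and not p_wave_present

# An irregular strip with no P waves trips the rule.
print(flag_atrial_fibrillation([810, 620, 990, 540, 870], p_wave_present=False))
```

A system like this applies the same fixed test to its millionth tracing as to its first; nothing in it improves with experience, which is exactly the limitation the next paragraphs describe.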

In mammography, too, computer-aided detection is becoming commonplace. Pattern-recognition software highlights suspicious areas, and radiologists review the results. But here again the recognition software typically uses a rule-based system to identify a suspicious lesion. Such programs have no built-in mechanism to learn: a machine that has seen three thousand X-rays is no wiser than one that has seen just four. These limitations became starkly evident in a 2007 study that compared the accuracy of mammography before and after the implementation of computer-aided diagnostic devices. One might have expected the accuracy of diagnosis to increase dramatically after the devices had been implemented. As it happens, the devices had a complicated effect. The rate of biopsies shot up in the computer-assisted group. Yet the detection of small, invasive breast cancers, the kind that oncologists are most keen to detect, decreased. (Even later studies have shown problems with false positives.)

Thrun was convinced that he could outdo these first-generation diagnostic devices by moving away from rule-based algorithms to learning-based ones: from rendering a diagnosis by "knowing that" to doing so by "knowing how." Increasingly, learning algorithms of the kind that Thrun works with involve a computing strategy known as a neural network, because it's inspired by a model of how the brain functions. In the brain, neural synapses are strengthened and weakened through repeated activation; these digital systems aim to achieve something similar through mathematical means, adjusting the weights of the connections to move toward the desired output. The more powerful ones have something akin to layers of neurons, each processing the input data and sending the results up to the next layer. Hence, "deep learning."
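
The layered picture can be shown in a few lines. Below is a toy two-layer network, written from scratch, that learns the XOR function; it is a sketch of the general mechanism only, not the melanoma system. Each layer transforms its input and passes the result upward, and training repeatedly nudges the connection weights toward the desired output.

```python
# A toy two-layer neural network: each layer processes its input and
# sends the result up; training strengthens or weakens the weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: needs a hidden layer

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # first layer of "neurons"
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # second layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)        # layer 1 output, sent up to layer 2
    p = sigmoid(h @ W2 + b2)        # the network's prediction
    grad_p = (p - y) * p * (1 - p)  # how wrong, and in which direction
    grad_h = grad_p @ W2.T * h * (1 - h)
    # Adjust each connection a little, toward the desired output.
    W2 -= 0.5 * h.T @ grad_p; b2 -= 0.5 * grad_p.sum(0)
    W1 -= 0.5 * X.T @ grad_h; b1 -= 0.5 * grad_h.sum(0)

print(np.round(p.ravel(), 2))  # after training, approaches [0, 1, 1, 0]
```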

Thrun began with skin cancer; in particular, keratinocyte carcinoma (the most common class of cancer in the U.S.) and melanoma (the most dangerous kind of skin cancer). Could a machine be taught to distinguish skin cancer from a benign skin condition (acne, a rash, or a mole) by scanning a photograph? "If a dermatologist can do it, then a machine should be able to do it as well," Thrun reasoned. "Perhaps a machine could do it even better."

Traditionally, dermatological teaching about melanoma begins with a rule-based system that, as medical students learn, comes with a convenient mnemonic: ABCD. Melanomas are often asymmetrical (A), their borders (B) are uneven, their color (C) can be patchy and variegated, and their diameter (D) is usually greater than six millimetres. But, when Thrun looked through specimens of melanomas in medical textbooks and on the Web, he found examples where none of these rules applied.
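
Rendered literally in code, the mnemonic is just a checklist, which is exactly its weakness. In this sketch the feature values are assumed to come from some upstream measurement, and the scoring is invented for illustration; Thrun's counterexamples were melanomas that a checklist like this would pass over entirely.

```python
# The ABCD mnemonic as a literal rule-based check (illustrative only).

def abcd_warning_signs(asymmetry, border_irregular, color_variegated, diameter_mm):
    """Count how many of the four classic warning signs are present."""
    signs = [
        asymmetry,             # A: the lesion is asymmetrical
        border_irregular,      # B: its borders are uneven
        color_variegated,      # C: its color is patchy and variegated
        diameter_mm > 6.0,     # D: diameter greater than six millimetres
    ]
    return sum(signs)

# A lesion showing three of the four signs:
print(abcd_warning_signs(True, True, False, diameter_mm=7.5))  # -> 3
```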

Thrun, who had maintained an adjunct position at Stanford, enlisted two students he worked with there, Andre Esteva and Brett Kuprel. Their first task was to create a so-called teaching set: a vast trove of images that would be used to teach the machine to recognize a malignancy. Searching online, Esteva and Kuprel found eighteen repositories of skin-lesion images that had been classified by dermatologists. This rogues' gallery contained nearly a hundred and thirty thousand images (of acne, rashes, insect bites, allergic reactions, and cancers) that dermatologists had categorized into nearly two thousand diseases. Notably, there was a set of two thousand lesions that had also been biopsied and examined by pathologists, and thereby diagnosed with near-certainty.

Esteva and Kuprel began to train the system. They didn't program it with rules; they didn't teach it about ABCD. Instead, they fed the images, and their diagnostic classifications, to the neural network. I asked Thrun to describe what such a network did.

"Imagine an old-fashioned program to identify a dog," he said. "A software engineer would write a thousand if-then-else statements: if it has ears, and a snout, and has hair, and is not a rat... and so forth, ad infinitum. But that's not how a child learns to identify a dog, of course. At first, she learns by seeing dogs and being told that they are dogs. She makes mistakes, and corrects herself. She thinks that a wolf is a dog, but is told that it belongs to an altogether different category. And so she shifts her understanding bit by bit: this is 'dog,' that is 'wolf.' The machine-learning algorithm, like the child, pulls information from a training set that has been classified. Here's a dog, and here's not a dog. It then extracts features from one set versus another. And, by testing itself against hundreds and thousands of classified images, it begins to create its own way to recognize a dog, again the way a child does. It just knows how to do it."
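
The contrast can be shown in miniature. In the sketch below, no dog-detecting rule is written down; a simple linear classifier is shown labeled examples and corrects itself after each mistake, which is the essential loop Thrun describes. The two numeric features are invented stand-ins (say, ear floppiness and snout length), and the data is synthetic.

```python
# No if-then-else rules: the classifier adjusts itself, example by
# example, from data labeled "dog" and "not a dog."
import numpy as np

rng = np.random.default_rng(1)
dogs     = rng.normal([2.0, 2.0], 0.5, size=(50, 2))   # labeled 1
not_dogs = rng.normal([0.0, 0.0], 0.5, size=(50, 2))   # labeled 0
X = np.vstack([dogs, not_dogs])
y = np.array([1] * 50 + [0] * 50)

w = np.zeros(2); b = 0.0
for _ in range(200):
    for xi, yi in zip(X, y):
        p = 1 / (1 + np.exp(-(xi @ w + b)))  # current belief it's a dog
        w += 0.1 * (yi - p) * xi             # correct itself, bit by bit
        b += 0.1 * (yi - p)

test = np.array([1.8, 2.2])                  # looks dog-like
print(1 / (1 + np.exp(-(test @ w + b))))     # probability close to 1
```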

In June, 2015, Thrun's team began to test what the machine had learned from the master set of images by presenting it with a validation set: some fourteen thousand images that had been diagnosed by dermatologists (although not necessarily by biopsy). Could the system correctly classify the images into three diagnostic categories: benign lesions, malignant lesions, and non-cancerous growths? The system got the answer right seventy-two per cent of the time. (The actual output of the algorithm is not "yes" or "no" but a probability that a given lesion belongs to a category of interest.) Two board-certified dermatologists who were tested alongside did worse: they got the answer correct sixty-six per cent of the time.

Thrun, Esteva, and Kuprel then widened the study to include twenty-five dermatologists, and this time they used a gold-standard test set of roughly two thousand biopsy-proven images. In almost every test, the machine was more sensitive than the doctors: it was less likely to miss a melanoma. It was also more specific: it was less likely to call something a melanoma when it wasn't. "In every test, the network outperformed expert dermatologists," the team concluded, in a report published in Nature.
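
Sensitivity and specificity are simple ratios over a confusion matrix, and it is worth seeing them side by side, since a test can buy one at the expense of the other. The counts below are made up for illustration; they are not the study's numbers.

```python
# Sensitivity: of the true melanomas, how many were caught?
# Specificity: of the benign lesions, how many were correctly cleared?
# tp/fn/tn/fp = true positive, false negative, true negative, false positive.

def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fewer missed melanomas -> higher
    specificity = tn / (tn + fp)  # fewer false alarms -> higher
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=90, fn=10, tn=160, fp=40)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.90, 0.80
```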

"There's one rather profound thing about the network that wasn't fully emphasized in the paper," Thrun told me. In the first iteration of the study, he and the team had started with a totally naïve neural network. But they found that if they began with a neural network that had already been trained to recognize some unrelated feature (dogs versus cats, say) it learned faster and better. Perhaps our brains function similarly. Those mind-numbing exercises in high school (factoring polynomials, conjugating verbs, memorizing the periodic table) were possibly the opposite: mind-sensitizing.
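
In machine-learning practice this trick is called transfer learning, and it is now routine. Here is a sketch of the pattern in PyTorch idiom; PyTorch and a recent torchvision are assumed to be available, the three-way class count matches the study's categories, and none of this is the Stanford team's actual code.

```python
# Start from a network pretrained on an unrelated task, then adapt it.
import torch
import torchvision

# 1. Begin with a network pretrained on ImageNet categories (dogs,
#    cats, and so on), not a totally naive one.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# 2. Replace only the final layer for the new, three-way question:
#    benign lesion, malignant lesion, non-cancerous growth.
model.fc = torch.nn.Linear(model.fc.in_features, 3)

# 3. Fine-tune on the medical images; the pretrained layers give the
#    network a head start, so it learns faster and better.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

Only the final layer starts from scratch; everything beneath it carries over the general visual vocabulary learned on the unrelated task.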

When teaching the machine, the team had to take some care with the images. Thrun hoped that people could one day simply submit smartphone pictures of their worrisome lesions, and that meant that the system had to be undaunted by a wide range of angles and lighting conditions. But, he recalled, "In some pictures, the melanomas had been marked with yellow disks. We had to crop them out; otherwise, we might teach the computer to pick out a yellow disk as a sign of cancer."
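
The fix is a preprocessing step: keep only the lesion region, so that stray markings never reach the network. A sketch using the Pillow imaging library; the bounding-box coordinates are hypothetical and would in practice come from an annotator or a detector.

```python
# Crop training photographs down to the lesion itself, so artifacts
# elsewhere in the frame (yellow disks, rulers, ink marks) cannot
# become accidental "features" of cancer.
from PIL import Image

def crop_to_lesion(path, lesion_box):
    """lesion_box is (left, top, right, bottom) in pixels."""
    return Image.open(path).crop(lesion_box)

# Hypothetical usage:
# patch = crop_to_lesion("lesion_0001.jpg", (120, 80, 360, 320))
```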

It was an old conundrum: a century ago, the German public became entranced by Clever Hans, a horse that could supposedly add and subtract, and would relay the answer by tapping its hoof. As it turns out, Clever Hans was actually sensing its handler's bearing. As the horse's hoof-taps approached the correct answer, the handler's expression and posture relaxed. The animal's neural network had not learned arithmetic; it had learned to detect changes in human body language. "That's the bizarre thing about neural networks," Thrun said. "You cannot tell what they are picking up. They are like black boxes whose inner workings are mysterious."

The black-box problem is endemic in deep learning. The system isn't guided by an explicit store of medical knowledge and a list of diagnostic rules; it has effectively taught itself to differentiate moles from melanomas by making vast numbers of internal adjustments, something analogous to strengthening and weakening synaptic connections in the brain. Exactly how did it determine that a lesion was a melanoma? We can't know, and it can't tell us. All the internal adjustments and processing that allow the network to learn happen away from our scrutiny. As is true of our own brains. When you make a slow turn on a bicycle, you lean in the opposite direction. My daughter knows to do this, but she doesn't know that she does it. The melanoma machine must be extracting certain features from the images; does it matter that it can't tell us which? It's like the smiling god of knowledge. Encountering such a machine, one gets a glimpse of how an animal might perceive a human mind: all-knowing but perfectly impenetrable.

Thrun blithely envisages a world in which we're constantly under diagnostic surveillance. Our cell phones would analyze shifting speech patterns to diagnose Alzheimer's. A steering wheel would pick up incipient Parkinson's through small hesitations and tremors. A bathtub would perform sequential scans as you bathe, via harmless ultrasound or magnetic resonance, to determine whether there's a new mass in an ovary that requires investigation. Big Data would watch, record, and evaluate you: we would shuttle from the grasp of one algorithm to the next. To enter Thrun's world of bathtubs and steering wheels is to enter a hall of diagnostic mirrors, each urging more tests.

It's hard not to be seduced by this vision. Might a medical panopticon that constantly scans us in granular, perhaps even cellular, detail, comparing images day by day, enable us to catch cancer at its earliest stages? Could it provide a breakthrough in cancer detection? It sounds impressive, but there's a catch: many cancers are destined to be self-limited. We die with them, not of them. What if such an immersive diagnostic engine led to millions of unnecessary biopsies? In medicine, there are cases where early diagnosis can save or prolong life. There are also cases where you'll be worried longer but won't live longer. It's hard to know how much you want to know.

"I'm interested in magnifying human ability," Thrun said, when I asked him about the impact of such systems on human diagnosticians. "Look, did industrial farming eliminate some forms of farming? Absolutely, but it amplified our capacity to produce agricultural goods. Not all of this was good, but it allowed us to feed more people. The industrial revolution amplified the power of human muscle. When you use a phone, you amplify the power of human speech. You cannot shout from New York to California" (Thrun and I were, indeed, speaking across that distance) "and yet this rectangular device in your hand allows the human voice to be transmitted across three thousand miles. Did the phone replace the human voice? No, the phone is an augmentation device. The cognitive revolution will allow computers to amplify the capacity of the human mind in the same manner. Just as machines made human muscles a thousand times stronger, machines will make the human brain a thousand times more powerful." Thrun insists that these deep-learning devices will not replace dermatologists and radiologists. They will augment the professionals, offering them expertise and assistance.

Geoffrey Hinton, a computer scientist at the University of Toronto, speaks less gently about the role that learning machines will play in clinical medicine. Hinton, the great-great-grandson of George Boole, whose Boolean algebra is a keystone of digital computing, has sometimes been called "the father of deep learning"; it's a topic he's worked on since the mid-nineteen-seventies, and many of his students have become principal architects of the field today.

"I think that if you work as a radiologist you are like Wile E. Coyote in the cartoon," Hinton told me. "You're already over the edge of the cliff, but you haven't yet looked down. There's no ground underneath." Deep-learning systems for breast and heart imaging have already been developed commercially. "It's just completely obvious that in five years deep learning is going to do better than radiologists," he went on. "It might be ten years. I said this at a hospital. It did not go down too well."

Hinton's actual words, in that hospital talk, were blunt: "They should stop training radiologists now." When I brought up the challenge to Angela Lignelli-Dipple, she pointed out that diagnostic radiologists aren't merely engaged in yes-no classification. They're not just locating the embolism that brought on a stroke. They're noticing the small bleed elsewhere that might make it disastrous to use a clot-busting drug; they're picking up on an unexpected, maybe still asymptomatic tumor.

Hinton now qualifies the provocation. "The role of radiologists will evolve from doing perceptual things that could probably be done by a highly trained pigeon to doing far more cognitive things," he told me. His prognosis for the future of automated medicine is based on a simple principle: "Take any old classification problem where you have a lot of data, and it's going to be solved by deep learning. There's going to be thousands of applications of deep learning." He wants to use learning algorithms to read X-rays, CT scans, and MRIs of every variety, and that's just what he considers the near-term prospects. In the future, he said, learning algorithms will make pathological diagnoses. They might read Pap smears, listen to heart sounds, or predict relapses in psychiatric patients.

We discussed the black-box problem. Although computer scientists are working on it, Hinton acknowledged that the challenge of opening the black box, of trying to find out exactly what these powerful learning systems know and how they know it, was far from trivial: "don't believe anyone who says that it is." Still, it was a problem he thought we could live with. "Imagine pitting a baseball player against a physicist in a contest to determine where a ball might land," he said. "The baseball player, who's thrown a ball over and over again a million times, might not know any equations but knows exactly how high the ball will rise, the velocity it will reach, and where it will come down to the ground. The physicist can write equations to determine the same thing. But, ultimately, both come to the identical point."

I recalled the disappointing results from older generations of computer-assisted detection and diagnosis in mammography. Any new system would need to be evaluated through rigorous clinical trials, Hinton conceded. Yet the new intelligent systems, he stressed, are designed to learn from their mistakes, to improve over time. "We could build in a system that would take every missed diagnosis, a patient who developed lung cancer eventually, and feed it back to the machine. We could ask, 'What did you miss here? Could you refine the diagnosis?' There's no such system for a human radiologist. If you miss something, and a patient develops cancer five years later, there's no systematic routine that tells you how to correct yourself. But you could build in a system to teach the computer to achieve exactly that."
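
Hinton's loop is easy to sketch. Every name below (the model's predict and fine_tune methods, the case archive) is hypothetical scaffolding for illustration, not an existing system:

```python
# Feed every missed diagnosis back to the machine as a new labeled
# example, then refine the model: a correction loop no human
# radiologist has. All objects here are hypothetical placeholders.

def feedback_retrain(model, case_archive):
    missed = [
        (case.scan, case.eventual_diagnosis)   # e.g., cancer found five years later
        for case in case_archive
        if model.predict(case.scan) != case.eventual_diagnosis
    ]
    if missed:
        model.fine_tune(missed)                # "What did you miss here?"
    return model
```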

Some of the most ambitious versions of diagnostic machine-learning algorithms seek to integrate natural-language processing (permitting them to read a patient's medical records) and an encyclopedic knowledge of medical conditions gleaned from textbooks, journals, and medical databases. Both I.B.M.'s Watson Health, headquartered in Cambridge, Massachusetts, and DeepMind, in London, hope to create such comprehensive systems. I watched some of these systems operate in pilot demonstrations, but many of their features, especially the deep-learning components, are still in development.

Hinton is passionate about the future of deep-learning diagnosis, in part, because of his own experience. As he was developing such algorithms, his wife was found to have advanced pancreatic cancer. His son was diagnosed with a malignant melanoma, but then the biopsy showed that the lesion was a basal-cell carcinoma, a far less serious kind of cancer. "There's much more to learn here," Hinton said, letting out a small sigh. "Early and accurate diagnosis is not a trivial problem. We can do better. Why not let machines help us?"

On an icy March morning, a few days after my conversations with Thrun and Hinton, I went to Columbia University's dermatology clinic, on Fifty-first Street in Manhattan. Lindsey Bordone, the attending physician, was scheduled to see forty-nine patients that day. By ten o'clock, the waiting room was filled with people. (Identifying details have been changed.) A bearded man, about sixty years old, sat in the corner concealing a rash on his neck with a woollen scarf. An anxious couple huddled over the Times.

Bordone saw her patients in rapid succession. In a fluorescent-lit room in the back, a nurse sat facing a computer and gave a one-sentence summary ("fifty years old with no prior history and new suspicious spot on the skin"), and then Bordone rushed into the examining room, her blond hair flying behind her.

A young man in his thirties had a scaly red rash on his face. As Bordone examined him, the skin flaked and fell off his nose. Bordone pulled him into the light and looked at the skin carefully, and then focussed her handheld dermoscope on it.

"Do you have dandruff in your hair?" she asked.

The man looked confused. "Sure," he said.

"Well, this is facial dandruff," Bordone told him. "It's a particularly bad case. But the question is why it appeared now, and why it's getting worse. Have you been using some new product in your hair? Is there some unusual stress in your family?"

"There's definitely been some stress," he said. He had lost his job recently, and was dealing with the financial repercussions.

"Keep a diary," she advised. "We can determine if there's a link." She wrote a prescription for a steroidal cream, and asked him to return in a month.

In the next room, there was a young paralegal with a spray of itchy bumps on his scalp. He winced as Bordone felt his scalp. "Seborrheic dermatitis," she said, concluding her exam.

The woman in another room had undressed and donned a hospital gown. In the past, she had been diagnosed with a melanoma, and she was diligent about getting preventive exams. Bordone pored over her skin, freckle by freckle. It took her twenty minutes, but she was thorough and comprehensive, running her fingers over the landscape of moles and skin tags and calling out diagnoses as she moved. There were nevi and keratoses, but no melanomas or carcinomas.

"Looks all good," she said cheerfully at the end. The woman sighed in relief.

And so it went: Bordone came; she saw; she diagnosed. Far from Hinton's coyote, she seemed like a somewhat manic roadrunner, trying to keep pace with the succession of cases that treadmilled beneath her. As she wrote her notes in the back room, I asked her about Thrun's vision for diagnosis: an iPhone pic e-mailed to a powerful off-site network marshalling undoubted but inscrutable expertise. A dermatologist in full-time practice, such as Bordone, will see about two hundred thousand cases during her lifetime. The Stanford machine's algorithm ingested nearly a hundred and thirty thousand cases in about three months. And, whereas each new dermatology resident needs to start from scratch, Thrun's algorithm keeps ingesting, growing, and learning.

Bordone shrugged. "If it helps me make decisions with greater accuracy, I'd welcome it," she said. "Some of my patients could take pictures of their skin problems before seeing me, and it would increase the reach of my clinic."

That sounded like a reasonable response, and I remembered Thrun's reassuring remarks about augmentation. But, as machines learn more and more, will humans learn less and less? It's the perennial anxiety of the parent whose child has a spell-check function on her phone: what if the child stops learning how to spell? The phenomenon has been called "automation bias." When cars gain automated driver assistance, drivers may become less alert, and something similar may happen in medicine. Maybe Bordone was a lone John Henry in a world where the steam drills were about to come online. But it was impossible to miss how her own concentration never wavered and how seriously she took every skin tag and mole that she ran her fingers over. Would that continue to be true if she partnered with a machine?

I noticed other patterns in Bordone's interactions with her patients. For one thing, they almost always left feeling better. They had been touched and scrutinized; a conversation took place. Even the naming of lesions (nevus, keratosis) was an emollient: there was something deeply reassuring about the process. The woman who'd had the skin exam left looking fresh and unburdened, her anxiety exfoliated.

There was more. The diagnostic moment, as the Brazilian researchers might have guessed, came to Bordone in a flash of recognition. As she called out "dermatitis" or "eczema," it was as if she were identifying a rhinoceros: you could almost see the pyramid of neurons in the lower posterior of her brain spark as she recognized the pattern. But the visit did not end there. In almost every case, Bordone spent the bulk of her time investigating causes. Why had the symptoms appeared? Was it stress? A new shampoo? Had someone changed the chlorine in the pool? Why now?

The most powerful element in these clinical encounters, I realized, was not "knowing that" or "knowing how": not mastering the facts of the case, or perceiving the patterns they formed. It lay in yet a third realm of knowledge: "knowing why."

Explanations run shallow and deep. You have a red blister on your finger because you touched a hot iron; you have a red blister on your finger because the burn excited an inflammatory cascade of prostaglandins and cytokines, in a regulated process that we still understand only imperfectly. Knowing why, asking why, is our conduit to every kind of explanation, and explanation, increasingly, is what powers medical advances. Hinton spoke about baseball players and physicists. Diagnosticians, artificial or human, would be the baseball players: proficient but opaque. Medical researchers would be the physicists, as removed from the clinical field as theorists are from the baseball field, but with a desire to know why. It's a convenient division of responsibilities; yet might it represent a loss?

A deep-learning system doesn't have any explanatory power, as Hinton put it flatly. A black box cannot investigate cause. Indeed, he said, the more powerful the deep-learning system becomes, the more opaque it can become. As more features are extracted, the diagnosis becomes increasingly accurate. Why these features were extracted out of millions of other features, however, remains an unanswerable question. The algorithm can solve a case. It cannot build a case.

Yet in my own field, oncology, I couldn't help noticing how often advances were made by skilled practitioners who were also curious and penetrating researchers. Indeed, for the past few decades, ambitious doctors have strived to be at once baseball players and physicists: they've tried to use diagnostic acumen to understand the pathophysiology of disease. Why does an asymmetrical border of a skin lesion predict a melanoma? Why do some melanomas regress spontaneously, and why do patches of white skin appear in some of these cases? As it happens, this observation, made by diagnosticians in the clinic, was eventually linked to the creation of some of the most potent immunological medicines used clinically today. (The whitening skin, it turned out, was the result of an immune reaction that was also turning against the melanoma.) The chain of discovery can begin in the clinic. If more and more clinical practice were relegated to increasingly opaque learning machines, if the daily, spontaneous intimacy between implicit and explicit forms of knowledge (knowing how, knowing that, knowing why) began to fade, is it possible that we'd get better at doing what we do but less able to reconceive what we ought to be doing, to think outside the algorithmic black box?

I spoke to David Bickers, the chair of dermatology at Columbia, about our automated future. "Believe me, I've tried to understand all the ramifications of Thrun's paper," he said. "I don't understand the math behind it, but I do know that such algorithms might change the practice of dermatology. Will dermatologists be out of jobs? I don't think so, but I think we have to think hard about how to integrate these programs into our practice. How will we pay for them? What are the legal liabilities if the machine makes the wrong prediction? And will it diminish our practice, or our self-image as diagnosticians, to rely on such algorithms? Instead of doctors, will we end up training a generation of technicians?"

He checked the time. A patient was waiting to see him, and he got up to leave. "I've spent my life as a diagnostician and a scientist," he said. "I know how much a patient relies on my capacity to tell a malignant lesion from a benign one. I also know that medical knowledge emerges from diagnosis."

The word "diagnosis," he reminded me, comes from the Greek for "knowing apart." Machine-learning algorithms will only become better at such knowing apart: at partitioning, at distinguishing moles from melanomas. But knowing, in all its dimensions, transcends those task-focussed algorithms. In the realm of medicine, perhaps the ultimate rewards come from knowing together.


Utterly Shocking: Silicon Valley Slams White House for Ignoring AI Threat – Vanity Fair

Posted: at 4:53 am


If there's one thing that labor economists and leaders in Silicon Valley generally seem to agree on, it's that increasingly sophisticated technology is coming to replace American jobs. According to a new report from PricewaterhouseCoopers, 38 percent of U.S. jobs are at high risk of being replaced by automation in the next 15 years, compared with 30 percent of jobs in the U.K. and 21 percent in Japan. The United States, like the United Kingdom, is dominated by jobs in sectors like manufacturing, transportation, finance, and food service, and U.S. jobs are particularly at risk because, according to PwC's chief U.K. economist John Hawksworth, the tasks American workers perform are just easier to automate.

Still, the White House seems completely uninterested in the imminent threat facing U.S. employment and wages, choosing to cast blame instead on China and Mexico for the decline of U.S. manufacturing jobs. "We want products made by our workers in our factories stamped with those four magnificent words: made in the U.S.A.," President Donald Trump declared on a recent trip to a Boeing plant in South Carolina. The possibility that robots, not people, might soon be stamping those words (if they are not already) went unmentioned.

The rest of the Trump administration appears similarly unconcerned. During a Friday morning interview with Axios's Mike Allen, U.S. treasury secretary (and executive producer of the Academy Award-winning film Suicide Squad) Steven Mnuchin took some time out from gushing over the president's "perfect genes" to downplay the threat of automation. The threat of artificial intelligence and robots supplanting American jobs, he said, is "not even on our radar screen," adding that it's likely "50 to 100 more years" away. "I'm not worried at all," Mnuchin said. "In fact, I'm optimistic."

Venture capitalists flatly rejected Mnuchin's assessment of the current state of A.I. and automation, and the impact they are already having on the U.S. job market. "Utterly shocking, just a willful disregard for the truth," David Pakman, a partner with the New York-based V.C. firm Venrock, told the Hive. "It appears his understanding of A.I. is rooted in science fiction. I'm going to presume he's just uninformed, which is an unbelievably irresponsible thing to be as a Cabinet secretary. You needn't be a computer scientist to understand the near-term impact A.I. will have on the labor force. It is not dramatic to say that certainly, in the next seven years, possibly under this administration, millions of jobs will be impacted."

Hunter Walk, a partner at the seed-stage V.C. firm Homebrew, agreed: "The misguided assumption that the employment impact of A.I. is 50 to 100 years away puts the U.S. behind the curve from a policy standpoint." He also suggested that the robotics revolution could present an incredible opportunity, but only if the government can begin retraining workers and redistributing productivity gains. "Imagine dual moonshots that aimed to ensure America was the leader in A.I. and evolved our education system and social safety net," Walk said. "These paths to collective prosperity and mobility strike me as more inspiring than palliative pronouncements."

"If people are thinking about A.I. like what happens in the movie Ex Machina, maybe that is a ways off," Aileen Lee, the founder of Cowboy Ventures, told the Hive. "But A.I. and machine-learning-powered software already exists and is getting better every day, which will have impact on U.S. and global jobs in the next decade. Software has the potential to take over millions of jobs humans currently do, like inputting and reading data, buying and selling merchandise, driving cars, and even diagnosing patients."

While Mnuchin remains optimistic that robots aren't replacing humans in jobs anytime soon, Silicon Valley is trying to come up with solutions for when the inevitable does happen. Tech experts like Bill Gates and Y Combinator president Sam Altman are pushing for solutions to offset the impact of the job displacement they fear is inevitable when robots take more jobs currently performed by humans. Altman has long been a proponent of universal basic income, which guarantees a base level of financial support for every person, regardless of their work status. And Gates wants to make companies that replace people with robots pay a tax, which could go toward redistributing the wealth.

Sundar Pichai, Google's C.E.O., was born in Chennai, India, immigrating to the U.S. to attend Stanford in 1993.

Alphabet president and Google co-founder Sergey Brin was born in Moscow and lived in the Soviet Union until he was six, immigrating with his family to the United States in 1979.

Elon Musk, the founder of SpaceX and Tesla, was born and raised in South Africa. He obtained Canadian citizenship in 1989 and briefly attended college at Queen's University in Ontario. He transferred to the University of Pennsylvania, in part because such a move would allow him to get an H-1B visa and stay in the U.S. after college.

Safra Catz, who served as co-C.E.O. of Oracle, was born in Israel. She resigned from her executive role in December after joining Donald Trump's presidential transition team.

The founder of eBay, Pierre Omidyar, was born in France to Iranian parents. He immigrated to the U.S. in the 1970s.

Yahoo co-founder Jerry Yang moved from Taiwan to San Jose, California, in 1978, at the age of 10.

Brothers John Collison and Patrick Collison, twenty-something college dropouts who emigrated from Ireland, co-founded Stripe, a $9.2 billion payments start-up.

Adam Neumann, raised on an Israeli kibbutz, moved to the U.S. in 2001, after briefly serving in the Israeli navy. Now he's the chief executive of the $16.9 billion New York-based WeWork, which sublets space to individuals and companies.

The co-founder and C.E.O. of health insurance start-up Oscar, Mario Schlosser, came to the United States from Germany as an international student, receiving his M.B.A. from Harvard.

Trump supporter Peter Thiel, who has expressed support for the president's executive action restricting immigration from several predominantly Muslim countries, is an immigrant himself. Before he co-founded PayPal and made one of the earliest large investments in Facebook, Thiel moved with his family from Germany, where he was born. In 2011, he also became a citizen of New Zealand, adding a third passport to his growing collection.

Born in Hyderabad, India, Microsoft C.E.O. Satya Nadella came to the U.S. to study computer science, joining Microsoft in 1992.

Garrett Camp helped co-found Uber. He was born in Alberta, Canada, and now resides in the Bay Area.


Ai Weiwei's Latest Artwork: Building Fences Throughout New York City – New York Times

Posted: at 4:53 am


Commissioned by the Public Art Fund, "Good Fences Make Good Neighbors," an ambitious work about divisive politics and borders, opens on Oct. 12.


How AI will shape the future of search – MarTech Today

Posted: at 4:53 am

There is no doubt the search industry has evolved. Just one look at how search engine results pages are currently laid out shows how things have changed. We have come a long way from 10 blue links.

But have we gone far enough? At SXSW earlier this month, information access was a hot topic. People no longer head to Google's search bar as their only way of accessing content.

Search engines used to be the primary (or sole) place a consumer would turn to when they needed an immediate answer. You entered a phrase, clicked a link and read the page.

But now, there are other places we are spending our time. In fact, the average consumer spends over 40 minutes a day on YouTube and 35 minutes on Facebook. We get our news from peers on social networks and can even consult WebMD about our health through Amazon's Alexa AI.

Most recently, Martin Sorrell, CEO of WPP, called Amazon the biggest threat to Google. When you think about it, if you are past research mode and want to buy something immediately, you will often bypass Google and head directly to Amazon.

No longer are we dependent on a list of links. AI assistants across the board have changed the way content is presented back to the end user. From Siri and Cortana to Alexa, each answers your questions or searches in a unique way. Whether it is voice search or simply using the internal search function on your iPhone, results look different and include things we don't normally consider traditional search results, such as apps, emails, social comments from peers and so on.

Chatbots, another huge topic at SXSW, are also becoming more and more popular. Many brands are utilizing chatbots to present information to consumers as quickly as possible. Instead of sifting through content on a website, the consumer can enter specific questions and get a response immediately. This process could potentially replace the need to search in a traditional manner.

Depending on how we engage, the AI platform will shape how we search. Whether it is longer-tailed queries through voice commands or short queries entered on a mobile device, the questions we ask are shifting. There is also potential for the AI to live in new places. As the smart home evolves and becomes more affordable, AI has the potential to be accessed throughout your home and car. It could become second nature to utilize AI to access information throughout your day.

Many readers of MarTech Today, like me, have begun to consider how these new developments and technologies will affect the way we advertise and attract new customers for our clients. While there are no clear-cut paid media opportunities integrated with each AI platform yet, many companies, such as Amazon, have discussed monetizing theirs.

As marketers, we need to begin thinking outside of bulk sheets containing thousands of keywords and begin thinking about how the consumer mindset will shift and the new behaviors that will come along with the mass adoption of artificial intelligence.



Plenty more for AI to deliver – The New Paper

Posted: at 4:53 am

Based on research gathered by MIT Technology Review magazine, Asia's business landscape is poised not only to benefit greatly from the rise of artificial intelligence (AI) but also to play a major role in helping to define it.

While only a small percentage of respondents are currently investing in AI development in Asia, 25 per cent of firms have made investments at a global level and another 50 per cent are considering doing so.

We have seen a machine master the complex game of Go, previously thought to be the most difficult challenge of artificial processing.

We have witnessed vehicles operating autonomously, including a caravan of trucks crossing Europe with only a single operator to monitor systems.

We have seen a proliferation of robotic counterparts and automated means for accomplishing a variety of tasks.

All these gave rise to a flurry of people claiming that the AI revolution is already upon us.

While there is no doubt that there have been significant advancements in the field of AI, what we have seen is only a start on the path to what could be considered full AI.

Understanding the growth of AI capability is crucial for understanding the advances we have seen.

Full AI - that is to say, complete and autonomous sentience - involves the ability of a machine to mimic a human to the point of being indistinguishable from one.

While it may be some time before we reach full AI, there will be many more practical applications of basic AI in the near term.

With basic AI, the processing system, embedded within the appliance (local) or connected to a network (cloud), learns and interprets responses based on "experience".

That experience comes in the form of training using data sets that simulate the situations we want the system to learn from. This is the confluence of Machine Learning (ML) and AI.
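
As a small illustration of that idea, here is a sketch in Python: simulate labeled situations, train a model on them, and let it respond to new input. The library (scikit-learn) and the invented task are assumptions made for the example, not anything described in the article.

```python
# "Experience" as a data set: generate simulated situations with known
# outcomes, train on them, then interpret new input. The task is invented.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)   # learn from simulated experience
print(model.predict(X[:3]))              # respond to incoming situations
print(round(model.score(X, y), 2))       # how well the "experience" stuck
```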

The capability to teach machines to interpret data is the key underpinning technology that will enable more complex forms of AI that can be autonomous in their responses to input. It is this type of AI that is getting the most attention.

In the next 10 years, the use of this kind of ML-based AI will likely fall into two categories:

The use of AI to automate science and technology will drive our ability to discover new cures, technologies, tools, cells and planets, ultimately pushing AI itself to new heights.

There is no doubt about the prospects for autonomous robotic systems in the commercial market, in areas such as online sales conversion, customer satisfaction and operational efficiency.

We see this application already being advanced to the point that it will become commercially viable, which is the first step to it becoming practical and widespread.

Autonomous vehicle technology is one of the most publicised and one of the most needed applications of AI, given its potential to eliminate injuries and deaths in traffic accidents and to improve the availability and efficiency of transportation.

In addition to the automation of transportation and logistics, a wide variety of additional technologies that utilise autonomous processing techniques are being built.

Currently, the artificial assistant, or "chatbot", concept is one of the most popular. By creating the illusion of a fully sentient remote participant, it makes interaction with technology more approachable.

The use of AI for development and discovery is just now beginning to gain traction, but over the next decade, this will become an area of significant investment and development.

Learning from repetition, improving patterns and developing new processes is well within reach of current AI models, and will strengthen in the coming years as advances in AI - specifically machine learning and neural networking - continue.

Rather than being frightened by the perceived threat of AI, it would be wise to embrace the possibilities that AI offers.

Jason Bissell is general manager of Asia Pacific and Japan for data software company Talend, and Calvin Hoon is its regional vice-president for sales, Asia Pacific.


The Battle for Top AI Talent Only Gets Tougher From Here – WIRED

Posted: March 23, 2017 at 1:58 pm


Andrew Ng helped create two of Silicon Valley's leading artificial intelligence labs. First, he built Google Brain, now the hub of AI research inside the internet giant. Then he built a lab in the Valley for Baidu, the company known as the Google of China.

Ng was one of the primary figures behind the enormous and rapid rise of AI over the last five years as everyone from Facebook to Microsoft rebuilt themselves around deep learning. And on Tuesday night, he announced his departure from Baidu.

He didn't say where he was going. And he didn't immediately respond to our request for comment. But odds are, he will show up at some other big name sometime soon. AI researchers are among the most prized talent in the modern tech world. A few years ago, Peter Lee, a vice president inside Microsoft Research, said that the cost of acquiring a top AI researcher was comparable to the cost of signing a quarterback in the NFL. Since then, the market for talent has only gotten hotter. Elon Musk nabbed several researchers out from under Google and Facebook in founding a new lab called OpenAI, and the big players are now buying up AI startups before they get off the ground.

Today, this talent market may have shifted yet again. Chipmaker Intel just announced that it's building a lab for far-looking AI research, and company vice president Naveen Rao says Intel is prepared to pay up for the caliber of talent that now works inside Google Brain or the Facebook Artificial Intelligence Research Lab. "We're looking for researchers that could potentially go to these other places," he says, acknowledging the big dollars this will require. Asked if that could include a top name like Andrew Ng, he said, "Absolutely."

Such ambition shows just how large the AI movement has become. Intel is launching a lab not because it wants to ultimately build its own AI, but because it wants to sell the enormous number of computer chips that others will need to build their AI. Today's AI movement revolves around deep neural networks, complex mathematical systems that can learn tasks by analyzing vast amounts of data. If you feed millions of cat photos into a neural network, for instance, it can learn to identify a cat. Typically, when a company like Google or Facebook trains a neural network in this way, it uses hundreds of GPU chips, graphics processors suited to this kind of math. And most of these GPUs come from Nvidia, an Intel rival. Intel is hoping to build chips that replace GPUs. Last year, it acquired Rao's chip startup, Nervana, for a reported $400 million, believing its tech can help mount this challenge.
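
To make the cat-photo example concrete, here is a minimal training sketch, assuming PyTorch, in which a tiny convolutional network learns from labeled image batches and the heavy matrix math runs on a GPU when one is available. The network, data, and labels are placeholders, not anything Google or Facebook actually runs.

```python
# Minimal sketch of training an image classifier, assuming PyTorch is installed.
# Random tensors stand in for labeled photos; a real system would stream
# millions of images through hundreds of GPUs.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny convolutional network, far smaller than production models.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),          # two classes: "cat" vs. "not cat"
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Placeholder batch: 32 RGB images, 64x64 pixels, with random labels.
    images = torch.randn(32, 3, 64, 64, device=device)
    labels = torch.randint(0, 2, (32,), device=device)

    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()            # the matrix math that GPUs accelerate
    optimizer.step()
```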

Now, with Nervana as an anchor, Intel is creating a new product development group dedicated to AI. Rao will oversee the group, and he says this effort will include a research lab that explores entirely new concepts in deep learning and related areas, all with an eye toward building chips that the Googles and Facebooks will want. "We're actually going to have an emphasis on research: three, five, seven years out," he says.

In some sense, this move is Intel desperately trying to market itself as a serious alternative to Nvidia GPUs. And at this point, it's just not. But even that desperation underlines the importance of the new AI chip market, which is rapidly remaking computer data centers. If Intel actually hires people like Ng, maybe we can believe its hype, and the AI competition will get even fiercer.

Go here to see the original:

The Battle for Top AI Talent Only Gets Tougher From Here - WIRED

These AI bots created their own language to talk to each other – Recode

Posted: at 1:58 pm

It is now table stakes for artificial intelligence algorithms to learn about the world around them. The next level: For AI bots to learn how to talk to each other and develop their own shared language.

New research released last week by OpenAI, the artificial intelligence nonprofit lab founded by Elon Musk and Y Combinator president Sam Altman, details how they're training AI bots to create their own language, based on trial and error, as the bots move around a set environment.

This is different from how artificial intelligence algorithms typically learn using large sets of data, such as learning to recognize a dog by taking in thousands of pictures of dogs.

The world the researchers created for the AI bots to learn in is a computer simulation of a simple, two-dimensional white square. There, the AIs, which took the shape of green, red and blue circles, were tasked with achieving certain goals, like moving to other colored dots within the white square.

But to get the task done, the AIs were encouraged to communicate in their own language. The bots created terms that were "grounded," or corresponded directly with objects in their environment and other bots and actions, like "go to" or "look at." But the language the bots created wasn't words in the way humans think of them; rather, the bots generated sets of numbers, which researchers labeled with English words.

The researchers taught the AIs how to communicate using reinforcement learning: Through trial and error, the bots remembered what worked and what didnt for the next time they were asked to complete a task. Igor Mordatch, one of the authors of the paper, will join the faculty at Carnegie Mellon in September. And Pieter Abbeel, the other author, is a research scientist at OpenAI and a professor at the University of California, Berkeley.
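
OpenAI's actual method is more sophisticated, but the trial-and-error idea can be sketched with a toy example: a speaker emits a numeric token for a target landmark, a listener acts on it, and both update their preferences only from a shared reward. Every parameter below is invented for illustration.

```python
# Toy sketch of emergent communication via trial and error.
# All parameters here are illustrative, not OpenAI's actual setup.
import random

NUM_LANDMARKS = 3   # e.g., the colored dots in the white square
NUM_TOKENS = 3      # the "words": raw numbers, not English

# Preference tables, updated by reward (a crude form of reinforcement).
speaker = [[0.0] * NUM_TOKENS for _ in range(NUM_LANDMARKS)]
listener = [[0.0] * NUM_LANDMARKS for _ in range(NUM_TOKENS)]

def pick(prefs, epsilon=0.1):
    """Mostly pick the best-scoring option, sometimes explore randomly."""
    if random.random() < epsilon:
        return random.randrange(len(prefs))
    return max(range(len(prefs)), key=lambda i: prefs[i])

for episode in range(5000):
    target = random.randrange(NUM_LANDMARKS)   # landmark to communicate
    token = pick(speaker[target])              # speaker "utters" a number
    guess = pick(listener[token])              # listener acts on it
    reward = 1.0 if guess == target else -0.1  # shared success signal
    speaker[target][token] += reward           # remember what worked
    listener[token][guess] += reward

# After training, each landmark tends to get its own dedicated token,
# which a researcher could then label with an English word.
for landmark in range(NUM_LANDMARKS):
    print(landmark, "->", max(range(NUM_TOKENS), key=lambda t: speaker[landmark][t]))
```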

There are already AI assistants that can understand language, like Siri or Alexa, or help with translation, but this is mostly done by feeding language data to the AI, rather than understanding language through experience.

"We think that if we slowly increase the complexity of their environment, and the range of actions the agents themselves are allowed to take, it's possible they'll create an expressive language which contains concepts beyond the basic verbs and nouns that evolved here," the researchers wrote in a blog post.

Why does this matter?

"Language understanding is super important to make progress on before AI reaches its full potential," said Miles Brundage, an AI policy fellow at Oxford University, who also notes that OpenAI's work represents a potentially important direction for the field of AI research to move toward.

"It's not clear how good we can get at AI language understanding without grounding words in experience," Brundage said, adding that most work still looks at words in isolation.

More:

These AI bots created their own language to talk to each other - Recode

Posted in Ai | Comments Off on These AI bots created their own language to talk to each other – Recode

Baidu’s Loss Is a Setback for AI in China – Wall Street Journal (subscription)

Posted: at 1:58 pm


... Microsoft and elite universities around the world. Few of these hires had the status of Andrew Ng, whom Baidu Inc. recruited in 2014 as its chief scientist to oversee AI research. One of the top brains in the field, Mr. Ng formerly led Google's ...

Related coverage:

Andrew Ng Is Leaving Baidu in Search of a Big New AI Mission - MIT Technology Review
AI Expert at Baidu, Andrew Ng, Resigns From Chinese Search Giant - New York Times
AI star Andrew Ng announces departure from Chinese tech giant Baidu - The Verge

View original post here:

Baidu's Loss Is a Setback for AI in China - Wall Street Journal (subscription)

5 ways to use AI in your own home – Popular Science

Posted: at 1:58 pm

Artificial intelligence promises to change our lives in a multitude of different ways, from driving our cars to diagnosing disease before doctors can spot it.

A lot of these more ambitious AI projects are still some way off. But there's plenty of fledgling artificial intelligence already running in our phones, computers, and household gadgets, and you may not even be aware of it. Here are five different ways that AI is already able to make your life a little bit easier.

Before we start: The definition of artificial intelligence is a pretty broad and uneven one, but here we're going to use it to mean smart hardware or software that can make decisions and learn on a basic level without any human help.

You might not have realized it, but photo and video management services from Google, Apple and Facebook have been using artificial intelligence in the background for some time now.

If you use Google Photos, say, try opening up your account in a web browser and searching for "sunsets" or "mountains." Even if you haven't manually labeled your photos, the appropriate images should pop up. That's because Google uses neural networks that can learn from its vast database of images, recognizing one picture of trees by analyzing millions of others. The AI service applies these labels to your shots automatically, which makes searching through them a snap.
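
The search side of this is straightforward to sketch: classify each photo once, record the predicted labels in an inverted index, and answer a query like "sunsets" with a lookup. The classify function below is a hypothetical stub standing in for Google's neural networks, which are of course not public.

```python
# Sketch of AI-assisted photo search: auto-tag once, then search by label.
# classify() is a stand-in for a trained image-recognition network.
from collections import defaultdict

def classify(photo_path):
    """Hypothetical stub; a real system would run a neural network here."""
    return {"sunset", "beach"} if "vacation" in photo_path else {"tree"}

photos = ["vacation_001.jpg", "backyard_002.jpg", "vacation_003.jpg"]

# Build an inverted index: label -> photos carrying that label.
index = defaultdict(list)
for path in photos:
    for label in classify(path):
        index[label].append(path)

# "Searching for sunsets" is now just a dictionary lookup.
print(index["sunset"])   # -> ['vacation_001.jpg', 'vacation_003.jpg']
```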

Apple and Facebook's photo recognition technology is developing along the same lines. And it goes beyond trees; these platforms are smart enough to tell the difference between the faces of your friends too.

If your digital photos and videos are strewn across your computer's hard drive, and organizing them is hopeless, upload them to one of these photo services and let AI do the hard work. Just make sure you read the relevant privacy policies first.

For kids growing up today, tablets and phones are embedded in daily life. And you can guarantee AI is hard at work behind the screens, from the processing required to recognize young voices to the systems that parse natural language into something computers can understand.

You can go further, though: if the Amazon Echo isn't enough of an AI presence in your home, you can enlist the help of an artificially intelligent robot. For example, there's the Zenbo from Asus or the Aido, currently available to pre-order. You can expect more bots like these in the future too, once companies add wheels and screens to speakers like Google Home.

Robots like these can learn your children's habits and favorite stuff, reading out stories, playing games, and even singing them to sleep, all powered by AI-assisted software that gets smarter as it goes. They're not just for the kids either; there's expected to be a big market for these droids in helping the elderly and keeping them safe.

There's more work than you might think going on behind the scenes of a Netflix or a Spotify recommendation. These services are scanning not just what you've liked in the past, but also what millions of other users are enjoying. If Ghostbusters fans usually like Back to the Future, for instance, then so might you.

That's a basic example, but these hidden algorithms are getting less basic and more intelligent all the time. Just by signing up and logging into a service like this, you can get some AI-powered help with that perennial question of what to watch (or read or listen to) next.
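
That Ghostbusters example is, in essence, item-to-item collaborative filtering, and a bare-bones version fits in a few lines: count how often pairs of titles are liked by the same users, then recommend whatever co-occurs most with a title you liked. The viewing data below is made up, and real services layer far more sophistication on top.

```python
# Bare-bones item-to-item collaborative filtering, with invented data.
from collections import Counter
from itertools import combinations

likes = {
    "alice": {"Ghostbusters", "Back to the Future", "Alien"},
    "bob":   {"Ghostbusters", "Back to the Future"},
    "carol": {"Alien", "Blade Runner"},
}

# Count how often each pair of titles is liked by the same user.
co_occurrence = Counter()
for titles in likes.values():
    for a, b in combinations(sorted(titles), 2):
        co_occurrence[(a, b)] += 1

def recommend(liked_title):
    """Suggest titles that most often co-occur with the one you liked."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a == liked_title:
            scores[b] += count
        elif b == liked_title:
            scores[a] += count
    return [title for title, _ in scores.most_common(3)]

print(recommend("Ghostbusters"))  # -> ['Back to the Future', 'Alien']
```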

In addition to whatever music and video services you subscribe to, you can make use of standalone smart recommendation apps. Try TasteKid to get suggestions for just about anything, Last.fm to discover more music based on your existing tastes, or Valossa to identify a movie you can only remember a few details about.

The most advanced security cameras of today tap into the power of AI to recognize the difference between an intruder sneaking up to your window and a tree blowing innocently in the breeze. Like the other systems and services we've mentioned here, they use stacks of sample data plus the power of the cloud (where processing can be offloaded to the web rather than all done on the device itself) to get smarter over time.
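
One common shape for that cloud offloading, sketched here under invented names: a small on-device model triages each motion event, and only low-confidence frames are shipped to a heavier model behind a server endpoint. Both classifiers and the endpoint URL are hypothetical.

```python
# Sketch of on-device triage with cloud offload; names and endpoint invented.
import requests  # assumes the requests library is installed

CLOUD_ENDPOINT = "https://example.com/classify"  # hypothetical service

def on_device_classify(frame_bytes):
    """Stand-in for a small local model: returns (label, confidence)."""
    return "tree", 0.55  # placeholder result

def classify_motion_event(frame_bytes, threshold=0.9):
    label, confidence = on_device_classify(frame_bytes)
    if confidence >= threshold:
        return label  # cheap path: decided on the camera itself
    # Ambiguous frame: offload the heavy processing to the cloud.
    response = requests.post(CLOUD_ENDPOINT, data=frame_bytes, timeout=5)
    response.raise_for_status()
    return response.json()["label"]
```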

Two cameras that use advanced AI processing in this way are the Nest Cam and the Netatmo Presence. For additional options, you can find more detailed buying guides on the web.

These cameras are now smart enough to recognize how a car physically differs from a dog, something that seems simple to a human being but requires a lot of background processing for a computer to get right every time. In the not-too-distant future, expect your doorbell to recognize your children too, and even let them in automatically.

Finally, AI can help you with your daily chores: Robot vacuum cleaners have gone from quirky little oddities to gadgets that actually do a proper job. Of course, you'll probably need to save up to afford one.

Where does the AI come in? Bots like the Neato Botvac D5 and the iRobot Roomba 980 are smart enough to survey the rooms in your home and map out where they need to clean, tracking their progress all the while. These home robots haven't yet worked out how to get up and down stairs, but it's surely only a matter of time. You can even set boundary markers down to block off no-go areas.

If you shop around, you'll find similar robots for mowing your lawn and wiping down your windows, leaving you with more free time to do something else, like marveling at the wonders of modern technology and the rapid rise of AI. The good news is, the more work these bots do, the smarter they'll get, though there's no need to panic about an uprising. Yet.

Originally posted here:

5 ways to use AI in your own home - Popular Science

This company is using AI and robots to sort and scan paper – The … – The Verge

Posted: at 1:58 pm

If you've ever stayed up late watching cable TV, you've probably seen an ad for desktop scanners that promise to organize your clutter and help you cut down on paper. I've seen them so many times I assumed that, even if they didn't work that well, bigger and better versions must be out there being used by companies, banks, hospitals, and basically anyone with lots and lots of paper.

Apparently I was wrong. There are companies that store boxes and boxes of paper for other companies, and there are others that employ people to scan some of that paper that companies want digitized. But by and large, smart and easy scanning hasn't happened with the kind of scale to make it easy and affordable enough to handle a company's worth of paper records.

This is where Ripcord, a new company backed by Steve Wozniak and venture capital firm Kleiner Perkins Caufield & Byers, comes in. Ripcord has patented and built robots (the boxy, room-filling kind, not the anthropomorphic ones you might be thinking of) that can sort and scan a box of paper and enter the contents into a searchable database in the cloud.

Ripcord's robot can scan and sort a box full of everything from business cards to legal paper

It might not sound like a revolutionary technology in the days of jet-powered hoverboards and AI that beats humans at their own games. But founder and CEO Alex Fielding says Ripcord's advantage is in the details, details that make their service 10 times cheaper and faster than their competition.

"[Ripcord's] machine can handle mixed content from the size of a business card to a legal-sized sheet, and it can go from rice paper all the way up to the thickness of card stock without changing anything," Fielding says. The Ripcord robot even removes the staples. You can give Ripcord a box full of HR forms, business cards, and shipping manifests, and it will not only know the difference between them, but it will scan them at over 600 dpi and will sort them into an Amazon-hosted cloud database within hours. (The paper is then shredded and recycled.)
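
Ripcord hasn't published how its classification works, but the sorting step Fielding describes can be imagined as a simple routing function: look at a scanned page's dimensions and extracted text, assign a record type, and file it into a searchable index. The thresholds, keywords, and record types below are all invented for illustration.

```python
# Invented sketch of routing mixed scanned documents into record types.
from collections import defaultdict

def classify_document(width, height, text):
    """Assign a record type from page size (inches) and extracted keywords."""
    if width <= 3.5 and height <= 2.5:
        return "business_card"
    if "manifest" in text.lower():
        return "shipping_manifest"
    if "employee" in text.lower():
        return "hr_form"
    return "general_record"

# A searchable "cloud database" stands in here as a plain in-memory index.
database = defaultdict(list)

scans = [
    (3.5, 2.0, "Jane Doe, Acme Corp"),
    (8.5, 11.0, "Employee onboarding form"),
    (8.5, 14.0, "Shipping manifest: 40 crates"),
]
for width, height, text in scans:
    record_type = classify_document(width, height, text)
    database[record_type].append(text)

print(database["shipping_manifest"])  # -> ['Shipping manifest: 40 crates']
```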

Ripcord will also provide customers with ways to hook all that data back into their existing enterprise software. Fielding estimates the company will be digitizing 2.5 million records a day by the end of 2017.

Previously, many companies were content to pay to store their documents in giant warehouses, cut off from easy access, or even the knowledge of what's in any given box. In fact, Fielding says he came up with the idea for Ripcord after a major document storage company lost boxes upon boxes of his friend's company's files.

Some of Fielding's competition does offer digitization; Ripcord hasn't invented the industrial scanner or document imaging software here. Companies like Kodak and Xerox make scanners used by digital imaging bureaus, but their solution is designed for a situation where humans have pre-sorted documents and removed the staples themselves. That difference in the process can mean it takes hours to scan just one box.

"They're building machines that are perfectly designed for uniform content but horrible for the plethora of craziness that comes when you open the lid of a banker's box. Everyone packs those things completely different," Fielding says. He compares the banker's boxes to snowflakes, saying each one is unique in its disarray. "It's like they expect that as soon as it comes out of the printer it goes in a scanner, and that's just not the reality."

Document storage companies like Iron Mountain (the inspiration for Steel Mountain in season 1 of Mr. Robot) also offer digital imaging services. But a representative for Iron Mountain told me that it could take six to eight weeks before a customer gets digital access to just 20 boxes, and even then it's typically placed on hard media (CDs, DVDs, or hard drives) instead of the cloud. That's not a security measure, either. I was told customers want hard media simply because hard media is cheaper.

One of Ripcord's biggest competitors will be Iron Mountain, the inspiration for Steel Mountain from Mr. Robot

"We charge per record image in the cloud per month," Fielding says. "We don't charge for the rest of the things competitors charge for, like picking up or moving boxes, digitization, shredding it, storage. Just the access."

If Fielding can significantly scale this plan, Ripcord seems poised to take a big chunk out of the document storage market. He'd also be the latest person to find a way for AI and robotics to replace humans in the workforce. But for now, humans will still be involved in the process.

Ripcord plans to have a staff of over 100 workers by the end of this year, Fielding says. Most of the work will be focused around prepping the paper for scanning, essentially transferring the contents of a company's box to one more suited for the machine. But Ripcord is also hiring for more technical positions to help the company expand.

"If you think about it, we're talking about really advanced sensors and optics, machine-vision driven robots, a host of different sensor technologies," Fielding says. "It's almost the exact same technology for self-driving cars or drones, we're just applying it to finding staples on a page."

Fielding makes a compelling pitch for a company that's all about scanning paper. He argues that Ripcord can be a profitable company by saving customers time and money, and it can also remove the need to pay to store boxes of records in Raiders of the Lost Ark-style warehouses. He still has to prove all this, but on paper, that pitch looks pretty good.

Read more here:

This company is using AI and robots to sort and scan paper - The ... - The Verge
