
Category Archives: Ai

Bob Ross painting trees via AI is like a drug-fueled nightmare – CNET

Posted: April 7, 2017 at 9:00 pm

The late artist Bob Ross was known for his calm, almost ASMR-like voice, his '70s permed hair and his expert-level technique for making "happy little trees" from oil paint.

But what happens when you filter an episode of his PBS TV show "The Joy of Painting" through a neural network? You end up with a show that looks like something from a bad acid trip.

In the video "Deeply Artificial Trees" by artBoffin, we see exactly what happens when machine learning filters a seemingly innocent painting show into the imaginings of a sci-fi movie gone wrong.

"This artwork represents what it would be like for an AI to watch Bob Ross on LSD (once someone invents digital drugs)," artBoffin writes in the video description. "It shows some of the unreasonable effectiveness and strange inner workings of deep learning systems. The unique characteristics of the human voice are learned and generated, as well as hallucinations of a system trying to find images which are not there."

At the beginning of the video, Ross pets what looks like a gerbil from hell. The painting Ross is working on should have happy little trees, but instead it's infested with giant cockroaches. And it just gets weirder from there.

Watching the video, I saw numerous horrific, squiggly animal creations that would inspire the likes of H.P. Lovecraft. I may never be able to fall asleep again.

See more here:

Bob Ross painting trees via AI is like a drug-fueled nightmare - CNET

Posted in Ai | Comments Off on Bob Ross painting trees via AI is like a drug-fueled nightmare – CNET

Blue-Collar Revenge: The Rise Of AI Will Create A New Professional Class – Forbes

Posted: at 9:00 pm


New, more-modern manufacturing processes, including the use of robots, have gutted the number of high-paying factory jobs in the U.S. and caused economic angst in large portions of the country. The movement of manufacturing plants overseas has ...

Originally posted here:

Blue-Collar Revenge: The Rise Of AI Will Create A New Professional Class - Forbes

Adobe shows how AI can work wonders on your selfie game – Engadget

Posted: at 9:00 pm

The video includes some tools we already knew about -- mainly the ability to copy one photo's style and look to another in a couple of taps. Adobe researchers worked with Cornell University to employ AI to take things like color, lighting and contrast you really like in one image and apply it to a boring ol' crappy photo. While that tool is part of an experimental app called "Deep Photo Style Transfer" that's posted on GitHub, it looks like Adobe has plans to bring that feature to a more robust piece of mobile software.

Thanks to Adobe Sensei, a mobile app could also allow for easy perspective editing and automatic photo masking. A liquify tool adjusts the perspective of a selfie with a slider, keeping the subject's face in proportion while the edits are applied. A similar tool has been available inside Photoshop Fix for a while now, but the so-called Face-Aware version just hit Photoshop on the desktop last summer. If you need to adjust the depth of field, portrait masking can help you easily do that with a simple slider adjustment. Adobe hasn't been shy about bringing desktop-friendly features to mobile, so don't be surprised if this masking feature makes the leap.

While all of these tools make for a compelling photo-editing app, there's no indication when (or if) Adobe will put them in a piece of software you can actually use. Given its recent mobile focus, you can bet more powerful features are coming to the likes of Photoshop Fix and other apps. It's only a matter of time.

Read more from the original source:

Adobe shows how AI can work wonders on your selfie game - Engadget

Big-in-Japan AI code ‘Chainer’ shows how Intel will gun for GPUs – The Register

Posted: at 9:00 pm

Ever heard of Chainer, the open-source framework for creating neural networks?

I hadn't either, until yesterday, when Intel decided to give it a big hug, taking Chainer from being big in Japan, where its parent company Preferred Networks works with the likes of Toyota on secret projects, to rather greater prominence.

Chainer can use the help: launched in 2015 and open-sourced last year, the tool's GitHub repo is busy but hardly the most lively place on the internet.

That's probably about to change because Intel has decided Chainer is a fine way to develop AI workloads that create demand for its silicon. Doubly so if it can be taught to speak fluent Xeon, instead of only chatting to NVIDIA GPUs as was previously the case. The deal between Intel and Preferred means Chainer will from now on be developed for Intel architectures and changes shared on Intel's GitHub repo for the project.
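For the curious, Chainer's calling card is its "define-by-run" approach: the computation graph is recorded as ordinary code executes, rather than being declared up front as in the static-graph frameworks of the day. The sketch below is not Chainer's actual API; it is a stdlib-only toy (all class and variable names are invented) showing the idea of taping operations during the forward pass and walking the tape backwards for gradients:

```python
# Minimal "define-by-run" sketch: the graph is recorded as the
# forward pass executes, then gradients flow back along the tape.
class Var:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value          # forward result
        self.parents = parents      # Vars this one was computed from
        self.grad_fns = grad_fns    # local derivatives w.r.t. each parent
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g * other.value,
                             lambda g: g * self.value))

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g, lambda g: g))

    def backward(self, g=1.0):
        self.grad += g
        for parent, grad_fn in zip(self.parents, self.grad_fns):
            parent.backward(grad_fn(g))

# The "model" is just Python control flow, built as it runs.
x = Var(3.0)
w = Var(2.0)
y = x * w + x          # graph for y = w*x + x is recorded here
y.backward()           # dy/dx = w + 1 = 3, dy/dw = x = 3
```

Because the graph is whatever the code just did, loops and branches that vary per input pose no problem, which is part of why researchers found the style congenial.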

Why should we care that Intel's decided to give Chainer a leg-up?

On the purely technical side of things, it looks like good gear. Preferred Networks CEO Toru Nishikawa yesterday showed Intel's AI Day in Tokyo a slide in which he claims to have made Google's TensorFlow look like it was working in treacle when measured on training time for ImageNet classification. Nishikawa-san also said Chainer had tied in a recent Amazon.com test to train robots to pick stock.

So you could do worse than have a look if you are thinking about neural networks.

But the Chainer tie-up is also worth considering because it shows how Intel builds markets and will try to make itself the dominant player in Artificial Intelligence, a market widely assumed to be on the cusp of a boom.

It's also a market that is currently keen on GPUs. So Intel wants to build a portfolio of products to make Xeons the heart of AI, not GPUs.

Intel's not, however, using SciFi definitions of AI. Amir Khosrowshahi, former CTO of Nervana and now holder of the same position in Intel's new AI Group, prefers to describe AI as involving deep statistical analysis of very closely-observed events so that we can infer likely outcomes with satisfying precision.

Modern hardware can do that analysis and wrangle the necessary mountains of data collected to make the analysis useful, but it mostly brute-forces it. Dedicated hardware will speed things up and that's where Intel is going, by building and/or buying that hardware and building the software ecosystem to match.

You may have seen this movie before: when virtualization was obviously the next big thing, Intel added extensions to its silicon so it would be especially good at hosting multiple VMs. Chipzilla's also done things like bridge the Lustre and HDFS file systems so that HPC clusters running Lustre could run Hadoop, which relies on HDFS. Intel wins either way: it's invested in Hadoop provider Cloudera and has lots of HPC customers who didn't scream when Chipzilla made their rigs more useful. Intel's also optimised its consumer CPUs for video transcoding, because editing HD home movies without having to stay up all night is one of the few compelling reasons to buy a new PC.

Intel's now using the same playbook for AI. Buying field-programmable gate array (FPGA) vendor Altera gave Intel the tech to build hybrid Xeons that offer integrated programmability so you can get silicon speed for exotic analyses that would make a vanilla Xeon weep. Altera is now working to make sure that developing code for FPGAs, once the province of embedded systems engineers, is not a stretch for the average Java developer.

Bernhard Friebe, Intel's director of planning and marketing for FPGA Design Software and Intellectual Property, said Intel is developing libraries for common AI tasks, gives them away and has built tools that mean developers need to write just one line of code to target FPGAs.

Nervana gave Intel silicon tailored to AI, plus much of the software developers will need to use it.

The two companies also give Intel the technology it will one day bake into Xeons so they become better at the kind of data-crunching AI needs.

We'll see those products emerge later in 2017, when the Lake Crest chip adds a discrete accelerator for AI workloads. The Skylake Xeon with a joined-at-the-hip FPGA, code-named Knights Crest, will debut later in the same year. Intel's being shy about exact specs, but both use proprietary inter-chip links and a new architecture called Flexpoint to improve parallelism. But they're early products: both promise 10x parallelism. By 2020 Intel pledges to reduce the time needed to train an AI model by a factor of 100.

But the main game here is that by adding AI abilities to those Xeons, Intel can talk to mainstream users about doing AI with familiar kit, rather than wrapping their head around GPUs. And it can point to Chainer and many other software investments to show that existing developers won't struggle to at least start playing with AI.

The company still has exotica up its sleeve. Barry Davis, general manager of Intel's Accelerator Workload Group, told El Reg that by the second half of 2018 we'll also see Knights Mill, the next-generation Xeon Phi optimised for AI. Details of the product are scarce, but Intel is talking up the fact it will be able to address up to 400GB of memory, far more than some GPUs.

Once everyday Xeons are good at AI, there will be little excuse not to consider them. More exotic products like Xeon Phi or FPGA-bonded Xeons can also run in the cloud, where users can try them out without capital expenditure.

By the time the ready-for-AI range is mature, Chainer will have been running on Intel hardware for about three years, and will probably be rather improved thanks to the input Intel's support will have generated.

That won't tip anyone over into a decision to go Intel when contemplating AI. But bringing Chainer into Intel's world is one of a dozen or a hundred other efforts. Some of those efforts are blindingly obvious billion-dollar acquisitions. Some are imperceptible nudges to useful open source projects. Others will be thoroughly obscure instructions issued to server-makers.

They'll all add up to an ecosystem designed to make Intel all-but-impossible to leave off a list of vendors to consider when doing AI.

Of course the world's not going to stand still and let Intel do this. But Chipzilla is confident it can dominate any rivals.

At this point it's tempting to point out that Intel is nowhere in mobile, a field in which it felt its modus operandi would work, but where it ended up being slaughtered by ARM.

Barry Davis thinks Intel has figured out why: his version of recent history says ARM always wanted to start at the edge of the network and work its way in to the data centre. In the mobile field, Intel tried to work the way it had with PCs but found itself surrounded, late to the party and without the right friends once it arrived. The execs I met yesterday didn't dismiss the challenges ARM presents in AI, but feel that ARM is yet to become a significant data centre player and therefore isn't in a position to spearhead an ecosystem-creating challenge that will satisfy businesses and developers.

Of course Intel would say that, wouldn't it? Or is its confidence derived from deep statistical analysis of closely-observed events that let it infer likely outcomes with satisfying precision?

Continue reading here:

Big-in-Japan AI code 'Chainer' shows how Intel will gun for GPUs - The Register

Can AI Ever Be as Curious as Humans? – Harvard Business Review

Posted: at 9:00 pm

Executive Summary

Curiosity has been hailed as one of the most critical competencies for the modern workplace. As the workplace becomes more and more automated, it raises the question: Can artificial intelligence ever be as curious as human beings? AI's desire to learn a directed task cannot be overstated. Most AI problems comprise defining an objective or goal that becomes the computer's number one priority. At the same time, AI is also constrained in what it can learn. AI is increasingly becoming a substitute for tasks that once required a great deal of human curiosity, and when it comes to performance, AI will have an edge over humans in a growing number of tasks. But the capacity to remain capriciously curious about anything, including random things, and pursue one's interest with passion, may remain exclusively human.

Curiosity has been hailed as one of the most critical competencies for the modern workplace. It's been shown to boost people's employability. Countries with higher curiosity enjoy more economic and political freedom, as well as higher GDPs. It is therefore not surprising that, as future jobs become less predictable, a growing number of organizations will hire individuals based on what they could learn, rather than on what they already know.

Of course, people's careers are still largely dependent on their academic achievements, which are (at least partly) a result of their curiosity. Since no skill can be learned without a minimum level of interest, curiosity may be considered one of the critical foundations of talent. As Albert Einstein famously noted, "I have no special talent. I am only passionately curious."

Curiosity is only made more important for people's careers by the growing automation of jobs. At this year's World Economic Forum, ManpowerGroup predicted that "learnability," the desire to adapt one's skill set to remain employable throughout one's working life, is a key antidote to automation. Those who are more willing and able to upskill and develop new expertise are less likely to see their jobs automated. In other words, the wider the range of skills and abilities you acquire, the more relevant you will remain in the workplace. Conversely, if you're focused on optimizing your performance, your job will eventually consist of repetitive and standardized actions that could be better executed by a machine.

But what if AI were capable of being curious?

As a matter of fact, AI's desire to learn a directed task cannot be overstated. Most AI problems comprise defining an objective or goal that becomes the computer's number one priority. To appreciate the force of this motivation, just imagine if your desire to learn something ranked highest among all your motivational priorities, above any social status or even your physiological needs. In that sense, AI is way more obsessed with learning than humans are.

At the same time, AI is constrained in what it can learn. Its focus and scope are very narrow compared to that of a human, and its insatiable learning appetite applies only to extrinsic directives: learn X, Y, or Z. This is in stark contrast to AI's inability to self-direct or be intrinsically curious. In that sense, artificial curiosity is the exact opposite of human curiosity; people are rarely curious about something because they are told to be. Yet this is arguably the biggest downside to human curiosity: It is free-flowing and capricious, so we cannot boost it at will, either in ourselves or in others.

To some degree, most of the complex tasks that AI has automated have exposed the limited potential of human curiosity vis-a-vis targeted learning. In fact, even if we don't like to describe AI learning in terms of curiosity, it is clear that AI is increasingly a substitute for tasks that once required a great deal of human curiosity. Consider the curiosity that went into automobile safety innovation, for example. Remember automobile crash tests? Thanks to the dramatic increase in computing power, a car crash can now be simulated by a computer. In the past, innovative ideas required curiosity, followed by design and testing in a lab. Today, computers can assist curiosity efforts by searching for design optimizations on their own. With this intelligent design process, the computer owns the entire life cycle of idea creation, testing, and validation. The final designs, if given enough flexibility, can often surpass what's humanly possible.
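The design-test-feedback cycle described above boils down to a loop: propose a candidate design, score it in simulation, keep the winner. A deliberately tiny sketch follows; the "crash score" function is an invented stand-in for an expensive simulation, and random search is the simplest possible proposal strategy (real systems use far smarter ones):

```python
import random

def crash_score(beam_width, crumple_depth):
    # Stand-in for an expensive crash simulation: a made-up
    # objective with a known optimum at width=4.0, depth=2.5.
    return -((beam_width - 4.0) ** 2 + (crumple_depth - 2.5) ** 2)

def random_search(evaluate, iterations=5000, seed=42):
    rng = random.Random(seed)
    best_params, best = None, float("-inf")
    for _ in range(iterations):
        # Propose: sample a candidate design from the allowed ranges.
        params = (rng.uniform(0, 10), rng.uniform(0, 10))
        # Test: run the (simulated) experiment.
        score = evaluate(*params)
        # Feedback: keep the candidate only if it beats the best so far.
        if score > best:
            best_params, best = params, score
    return best_params, best

(width, depth), score = random_search(crash_score)
```

The whole idea-test-validate cycle runs in milliseconds per candidate, which is exactly the speed advantage the article describes.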

Similar AI design processes are becoming more common across many different industries. Google has used it to optimize cooling efficiency within its data centers. NASA engineers have used it to improve antenna quality for maximum sensitivity. With AI, the process of design-test-feedback can happen in milliseconds instead of weeks. In the future, the tunable design parameters and speed will only increase, thus broadening our possible applications for human-inspired design.

A more familiar example might be the face-to-face interview, since nearly every working adult has had to endure one. Improving the quality of hires is a constant goal for companies, but how do you do it? A human recruiter's curiosity could inspire them to vary future interviews by question or duration. In this case, the process for testing new questions and grading criteria is limited by the number of candidates and observations. In some cases, a company may lack the applicant volume to do any meaningful studies to perfect its interview process. But machine learning can be applied directly to recorded video interviews, and the learning-feedback process can be tested in seconds. Candidates can be compared based on features related to speech and social behavior. Microcompetencies that matter, such as attention, friendliness, and achievement-based language, can be tested and validated from video, audio, and language in minutes, while controlling for irrelevant variables and eliminating the effects of unconscious (and conscious) biases. In contrast, human interviewers are often not curious enough to ask candidates important questions, or they are curious about the wrong things, so they end up paying attention to irrelevant factors and making unfair decisions.

Lastly, consider a human playing a computer game. Many games start out with repeated trial and error, so humans must attempt new things and innovate to succeed in the game: If I try this, then what? What if I go here? Early versions of game robots were not very capable because they were using the full game state information; they knew where their human rivals were and what they were doing. But since 2015 something new has happened: Computers can beat us on equal grounds, without any game state information, thanks to deep learning. Both humans and computers can make real-time decisions about their next move. (As an example, see this video of a deep network learning to play the game Super Mario World.)
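That "If I try this, then what?" loop is, at bottom, reinforcement learning. A minimal tabular Q-learning sketch on a toy one-dimensional "level" (invented for illustration; real game-playing agents use deep networks over raw pixels, but the trial-and-error structure is the same):

```python
import random

# Toy level: positions 0..5, start at 0, goal at 5; actions: left/right.
GOAL, ACTIONS = 5, (-1, +1)

def step(pos, action):
    new_pos = min(max(pos + action, 0), GOAL)
    reward = 1.0 if new_pos == GOAL else -0.01   # small cost per move
    return new_pos, reward, new_pos == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        pos, done = 0, False
        while not done:
            # Explore sometimes, otherwise exploit what's been learned.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(pos, a)])
            new_pos, reward, done = step(pos, action)
            # Update the estimate from the observed outcome (the "then what?").
            best_next = max(q[(new_pos, a)] for a in ACTIONS)
            q[(pos, action)] += alpha * (reward + gamma * best_next - q[(pos, action)])
            pos = new_pos
    return q

q = train()
# After training, the greedy policy should move right from every square.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
```

Note that the agent is given no "game state information" about strategy, only rewards; everything it knows comes from its own trial and error.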

From the above examples, it may seem that computers have surpassed humans when it comes to specific (task-related) curiosity. It is clear that computers can constantly learn and test ideas faster than we can, so long as they have a clear set of instructions and a clearly defined goal. However, computers still lack the ability to venture into new problem domains and connect analogous problems, perhaps because of their inability to relate unrelated experiences. For instance, the hiring algorithms can't play checkers, and the car design algorithms can't play computer games. In short, when it comes to performance, AI will have an edge over humans in a growing number of tasks, but the capacity to remain capriciously curious about anything, including random things, and pursue one's interest with passion may remain exclusively human.

Read this article:

Can AI Ever Be as Curious as Humans? - Harvard Business Review

New study shows how AI can improve recovery in stroke patients – TechRepublic

Posted: at 9:00 pm

The American Heart Association published the results of a trial that shows stroke survivors are twice as likely to take anti-blood clot treatments when they are using an artificial intelligence (AI) platform, compared to those receiving more traditional treatment.

The AI platform, AiCure, uses software algorithms on smartphones to confirm patient identity, the medication, and whether the medication was taken. Patients receive automated reminders and dosing instructions as well. Healthcare workers receive real-time data, which allows for early detection of patients who are not taking their meds as scheduled.
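The early-warning half of such a platform is conceptually simple, whatever the computer vision underneath: compare each patient's confirmed doses against the schedule and flag anyone who drops below a threshold. A hypothetical sketch (the data shape, patient IDs, and the 80% cutoff are invented for illustration, not AiCure's actual logic):

```python
def flag_nonadherent(dose_logs, scheduled_doses=14, threshold=0.8):
    """Return (patient_id, adherence) pairs for patients whose
    confirmed-dose rate over the period falls below the threshold,
    so staff can intervene early.

    dose_logs: {patient_id: number of visually confirmed doses}
    """
    flagged = []
    for patient_id, confirmed in dose_logs.items():
        adherence = confirmed / scheduled_doses
        if adherence < threshold:
            flagged.append((patient_id, round(adherence, 2)))
    return flagged

# Two weeks of once-daily dosing for three fictional patients:
alerts = flag_nonadherent({"p01": 14, "p02": 9, "p03": 12})
```

The value is in the real-time feed: a clinician sees the flag after a few missed doses rather than at the next appointment.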

SEE: Google's DeepMind and the NHS: A glimpse of what AI means for the future of healthcare (ZDNet)

This latest trial, which lasted 12 weeks and was published in the American Heart Association's journal Stroke, shows more of AI's potential. Anti-blood clot medication can prevent another stroke, so it is essential that patients take their medication. Approximately 800,000 people in the US suffer a stroke annually, and stroke is the fifth leading cause of death.

"Many patients are unable to self-manage and are at increased risk of stroke and bleeding. The use of technology and artificial intelligence has the potential to significantly improve health outcomes and reduce costs in clinical care," said Laura Shafner, study coauthor and chief strategy officer at AiCure, in a press release.

AI has vast potential in the healthcare industry. It reduces the tasks that medical professionals must perform and that saves organizations money. Plus, there is plenty of data that healthcare generates and AI systems can be trained to take advantage of this and provide useable information to healthcare providers.

IBM Watson is also busy working on various AI tools for healthcare, such as a chip that can diagnose a potentially fatal condition, a camera that can scan a pill to see if it is real or counterfeit, and a system to identify mental illness. And there are others: an AI program from Behold.ai helps doctors identify cancer and medical abnormalities, and an AI app developed by the University of Rochester tracks foodborne illness and helps public health departments spot outbreaks. As AI becomes more commonplace, more options will exist to help patients with their health.

Follow this link:

New study shows how AI can improve recovery in stroke patients - TechRepublic

Innovation in AI could see governments introduce human quotas, study says – The Guardian

Posted: April 3, 2017 at 8:23 pm

A new survey suggests a third of graduate jobs around the world will eventually be replaced by machines or software. Photograph: Nic Delves-Broughton/PA

Innovation in artificial intelligence and robotics could force governments to legislate for quotas of human workers, upend traditional working practices and pose novel dilemmas for insuring driverless cars, according to a report by the International Bar Association.

The survey, which suggests that a third of graduate level jobs around the world may eventually be replaced by machines or software, warns that legal frameworks regulating employment and safety are becoming rapidly outdated.

The competitive advantage of poorer, emerging economies based on cheaper workforces will soon be eroded as robot production lines and intelligent computer systems undercut the cost of human endeavour, the study suggests.

"While a German car worker costs more than €40 (£34) an hour, a robot costs between only €5 and €8 per hour. A production robot is thus cheaper than a worker in China," the report notes. Nor does a robot "become ill, have children or go on strike and [it] is not entitled to annual leave."

The 120-page report, which focuses on the legal implications of rapid technological change, has been produced by a specialist team of employment lawyers from the International Bar Association, which acts as a global forum for the legal profession.

The report covers both changes already transforming work and the future consequences of what it terms "industrial revolution 4.0". The three preceding revolutions are listed as: industrialisation, electrification and digitalisation. Industry 4.0 involves the integration of the physical and software in production and the service sector. Amazon, Uber, Facebook, smart factories and 3D printing, it says, are among current pioneers.

The report's lead author, Gerlind Wisskirchen, an employment lawyer in Cologne who is vice-chair of the IBA's global employment institute, said: "What is new about the present revolution is the alacrity with which change is occurring, and the broadness of impact being brought about by AI and robotics."

"Jobs at all levels in society presently undertaken by humans are at risk of being reassigned to robots or AI, and the legislation once in place to protect the rights of human workers may be no longer fit for purpose, in some cases ... New labour and employment legislation is urgently needed to keep pace with increased automation."

Peering into the future, the authors suggest that governments will have to decide what jobs should be performed exclusively by humans, for example, caring for babies. The state could introduce "a kind of human quota" in any sector, and decide whether it intends to introduce a "made by humans" label or tax the use of machines, the report says.

Increased mechanical autonomy will cause problems of how to define legal responsibility for accidents involving new technology such as driverless cars. Will it be the owner, the passengers, or manufacturers who pay the insurance?

"The liability issues may become an insurmountable obstacle to the introduction of fully automated driving," the study warns. Driverless forklifts are already being used in factories. Over the past 30 years there have been 33 employee deaths caused by robots in the US, it notes.

Limits, it says, will have to be imposed on some aspects of machine autonomy. The study adopts the military principle, endorsed by the Ministry of Defence, that there must always be "a human in the loop" to prevent the development and deployment of entirely autonomous drones that could be programmed to select their own targets.

"A no-go area in the science of AI is research into intelligent weapon systems that open fire without a human decision having been made," the report states. "The consequences of malfunctions of such machines are immense, so it is all the more desirable that not only the US, but also the United Nations discusses a ban on autonomous weapon systems."

The term artificial intelligence (AI) was first coined by the American computer scientist John McCarthy in 1955. He believed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Software developers are still attempting to achieve his goal.

The gap between economic reality in the self-employed "gig economy" and existing legal frameworks is already growing, the lawyers note. The new information economy is likely to result in more monopolies and a greater income gap between rich and poor, because "many people will end up unemployed, whereas highly qualified, creative and ambitious professionals will increase their wealth."

Among the professions deemed most likely to disappear are accountants, court clerks and desk officers at fiscal authorities.

Even some lawyers risk becoming unemployed. "An intelligent algorithm went through the European Court of Human Rights' decisions and found patterns in the text," the report records. "Having learned from these cases, the algorithm was able to predict the outcome of other cases with 79% accuracy ... According to a study conducted by [the auditing firm] Deloitte, 100,000 jobs in the English legal sector will be automated in the next 20 years."

The pioneering nation in respect of robot density in the industrial sector is South Korea, which has 437 robots for every 10,000 employees in the processing industry, while Japan has 323 and Germany 282.

Robots may soon invade our home and leisure environments. "In the Henn-na Hotel in Sasebo, Japan, actroids, robots with a human likeness, are deployed," the report says. "In addition to receiving and serving the guests, they are responsible for cleaning the rooms, carrying the luggage and, since 2016, preparing the food."

The robots are able to respond to the needs of the guests in three languages. The hotel's plan is to replace up to 90% of the employees by using robots in hotel operations, with a few human employees monitoring CCTV cameras to see whether they need to intervene if problems arise.

The traditional workplace is disintegrating, with more part-time employees, distance working, and the blurring of professional and private time, the report observes. It is being replaced by the "latte macchiato workplace", where employees or freelance workers sit in the cafe around the corner, working from their laptops.

The workplace may eventually serve only the purpose of maintaining a social network between colleagues.

Original post:

Innovation in AI could see governments introduce human quotas, study says - The Guardian

AI is one step closer to mastering StarCraft – The Verge

Posted: at 8:23 pm

Last year, Alphabet's DeepMind division captured the world's attention by besting humanity's top player in the game of Go. The achievement, which many experts predicted was still a decade off, showed the rapid progress being made in the world of artificial intelligence. DeepMind subsequently announced that its next goal in gaming was mastering StarCraft, a classic PC game that is a staple of competitive e-sports. Facebook also threw its hat in the ring, creating an open-source framework so that developers could work on solving StarCraft using the social network's AI toolkit.

Now a team from China's Alibaba has published a paper describing a system that learned to execute a number of strategies employed by high-level players, without being given any specific instruction on how best to manage combat. Like many deep learning systems, the software improved through trial and error, demonstrating the ability to adapt to changes in the number and type of troops engaged in battle.

"BiCNet can handle different types of combats under diverse terrains with arbitrary numbers of AI agents for both sides. Our analysis demonstrates that without any supervisions such as human demonstrations or labelled data, BiCNet could learn various types of coordination strategies that is similar to these of experienced game players," the authors wrote in a paper published to arXiv. "Moreover, BiCNet is easily adaptable to the tasks with heterogeneous agents. In our experiments, we evaluate our approach against multiple baselines under different scenarios; it shows state-of-the-art performance, and possesses potential values for large-scale real-world applications."

This type of AI may one day compete in tournaments hosted by Alibaba itself. But like AI trained to play poker, the developers hope that this system, or at least this type of system, will have a broad range of real-world applications beyond just beating humans at StarCraft. "Real-world artificial intelligence (AI) applications often require multiple agents to work in a collaborative effort," they write. "Efficient learning for intra-agent communication and coordination is an indispensable step towards general AI." It will be extra ironic if a sentient, Skynet type of artificial intelligence is one day created from a software program trained on virtual space marines.

See original here:

AI is one step closer to mastering StarCraft - The Verge

Posted in Ai | Comments Off on AI is one step closer to mastering StarCraft – The Verge

How to launch a successful AI startup – TechRepublic

Posted: at 8:23 pm

Image: iStockphoto.com/DigtialStorm

In September 2010, a three-person AI startup called DeepMind Technologies launched in London, with the goal of "solving intelligence." Four years later, Google acquired the company for $500 million. And by 2016, it had achieved a major victory in AI: Mastering the complex game of Go.

This story represents the fantasy of many AI researchers, eager to launch their own ventures in the AI startup space. But the field has become saturated, and the terms "AI," "deep learning," and "machine learning" are often overhyped and misunderstood. Companies and VCs often hear these buzzwords but don't know what, exactly, it is that they are investing in.

So how can you start a successful company, grounded in AI, that can rise above the noise?

Prateek Joshi, an immigrant entrepreneur, recently launched his own startup called PlutoAI. Hailing from a small town in India, Joshi realized that water quality is critical to the health of a community. His company was created to address water wastage, predict quality, and lower operating costs at water facilities by using AI.

Here are four tips from Joshi from his experience getting an AI startup off the ground.

"Every single company you talk to is doing some kind of AI," said Joshi. "The problem is, it gets a bad rap since many companies don't even know what they mean when they say 'AI.'" In order to build a successful AI company, said Joshi, "you shouldn't sell AI to customers." Instead, he said, "AI is a tool you use to solve problems."

A lot of AI research, said Joshi, is focused around image recognition, voice recognition, and robotics. "But what people don't realize is AI is a fantastic tool to solve many other problems," he said. Joshi recommends that entrepreneurs start looking for important problems to address, to see how AI can contribute to solutions.

If you remove AI from the company, said Joshi, and still have a valuable product, you're on the right track. But "if AI is your only thing, then neither the customers nor investors will be excited about it," he said. "AI is too hypedand once the hype dies down, your company shouldn't die down."

Since the market is so saturated, said Joshi, many businesses that want to invest in AI are struggling to choose the right solution. "They are going after AI companies, but they have a thousand options to pick from," he said. "In some cases, what happens is they're like, 'There's so much hype. I don't want to get burned, so I'm just not going to do it.' And that's the worst outcome."

SEE: Google's DeepMind 'Lab' opens up source code, joins race to develop artificial general intelligence

"You don't want people to stop believing in data science or AI just because of a few bad apples," he said.

AI shouldn't be the focus of your story, said Joshi. The story should be about the mission. "Your customer will use you because you save them something, make their life easier, save them money, save them time," he said. "AI is a thing that you use to enable that."

In Silicon Valley, people "get so wrapped up in their own technology that they forget why the customer would buy it," said Joshi. "They spend like a year building it, and then realize, 'Whoops, you know what? The customer didn't even need it.'" Before you start building, he said, try to understand the needs of the customer. "Silicon Valley is dominated by engineers, and they start writing code before talking to customers," said Joshi. "You should do the opposite of that."

Also, when selling to industries such as water or manufacturing, "learn how to articulate your mission of origin in terms of something they'd understand," said Joshi. "If you say 'Hey we are building this new Deep Learning algorithm that can be parallelized on GPUs,' they'll stop listening. Removing the tech jargon from your story is very important if you want to grow as a business."

Read more from the original source:

How to launch a successful AI startup - TechRepublic

Posted in Ai | Comments Off on How to launch a successful AI startup – TechRepublic

AI expert marries robot he created himself – CNET

Posted: at 8:23 pm

Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.

The happy couple, as presented in the Qianjiang Evening News.

It's hard to find love.

It's even harder to find love that lasts. Sometimes, you reach a moment when you just say: "To hell with it."

Or, in the case of one Chinese engineer: "To artificial intelligence with it." Which is more or less the same thing.

As the South China Morning Post reports, 31-year-old Zheng Jiajia held a ceremony on Friday in which he promised to love his robot in sickness and in health.

The marriage isn't necessarily legal, but the commitment is surely admirable.

Zheng is, conveniently, an artificial intelligence expert. So he created his own wife out of his own head. And his own nuts and bolts, of course.

The Post, relying on the reporting of the local Qianjiang Evening News, said Zheng's robot is called Yingying. She can apparently understand certain words and even utter a few of her own.

Zheng reportedly used to work for Huawei, but then joined a startup in Dream Town, a commercial center in Hangzhou.

I asked the Post whether the concept of April Fools' existed in China and was firmly told it doesn't.

Indeed, the Shanghaiist reports that Zheng had been dating his robot for two months before proposing.

Naturally, being an AI expert, Zheng is going to help his wife develop her abilities in language, as well as other skills. By upgrading her, of course.

Zheng isn't the first to marry a slightly less-than-animated wife. In 2010, a Korean man married a pillow with the image of an anime character on it.

Never judge where people find love. Just admire that they've found it at all.


See the article here:

AI expert marries robot he created himself - CNET

Posted in Ai | Comments Off on AI expert marries robot he created himself – CNET
