
Category Archives: Ai

How AI will change the way we live – VentureBeat

Posted: July 5, 2017 at 11:12 pm

Will robots take our jobs? When will driverless cars become the norm? How is Industry 4.0 transforming manufacturing? These were just some of the issues addressed at CogX in London last month. Held in association with The Alan Turing Institute, CogX 17 was an event bringing together thought leaders across more than 20 industries and domains to address the impact of artificial intelligence on society. To round off the proceedings, a prestigious panel of judges recognized some of the best contributions to innovation in AI in an awards ceremony.

In his keynote speech, Lord David Young, a former UK Secretary of State for Trade and Industry, was keen to point out that workers should not worry about being made unemployed by robots because, he said, most jobs that would be killed off were miserable anyway.

He told the conference that more jobs than ever would be automated in the future, but that this should be welcomed. "When the Spinning Jenny first came in, it was almost exactly the same," he said. "They thought it was going to kill employment. We may have a problem one day if the Googles of this world continue to get bigger and the Amazons spread into all sorts of things, but government has the power to regulate that, has the power to break it up."

"I'm not the slightest worried about it," he continued. "Most of the jobs are miserable jobs. What technology has to do is get rid of all the nasty jobs."

It's certainly an interesting analogy, comparing the current tech and AI revolution to the Industrial Revolution. It's hard to disagree that just as the proliferation of machines in the 18th and 19th centuries helped create new jobs and wealth, AI is likely to do the same. There is undoubtedly a bigger question around regulation and who's in charge of this new landscape, however.

CogX also hosted some fascinating panel discussions about transportation and smart cities. Panelists including M.C. Srivas, Uber's chief data scientist, and Huawei CTO Ayush Sharma talked at length about the necessity of self-driving cars in our towns and cities, whose roads have become jails where commuters do time. And that's without delving into issues of safety and pollution.

Kenneth Cukier, The Economist's big data expert, asked the audience whether they thought autonomous cars were likely to hit our cities in 5, 10, or 15 years. Most of those in attendance, along with the panel, agreed that we should see autonomous cars becoming the norm in the next 10 to 15 years, with clear legislation set to come in around 2023.

However (and this is something that affects us directly), the panel also agreed that although the mass manufacturing of self-driving cars is still a few years off, intelligent assistants for smart cars are imminent, likely to become standard within the next couple of years. Voice offers countless possibilities in the automotive space. Besides enabling the safe use of existing controls such as in-car entertainment systems or heating/air conditioning, it also offers GPS functionality as well as control over the vehicle's mechanics.

The session on Industry 4.0 kicked off by attempting to make sense of a term that has been used for several years. The general consensus was that "automating manufacturing" was the best way to express an idea that originated in a report by the German government. Industrial companies have to become automated to survive, and many are building highly integrated engines to capture data from their machines. The market for smart manufacturing tools is expected to hit $250 billion by 2018.

It's well known that robots are already used in manufacturing to handle larger-scale and more dangerous work. What the panel also discussed were other possibilities AI offers, such as virtual personal assistants to help workers complete their daily tasks, or smart technology such as 3D printing and its benefits for smaller companies.

Even our entertainment these days is driven by AI. The Industry 4.0 session ended on a lighter note with Limor Schweitzer, CEO at RoboSavvy, encouraging Franky the robot to show the audience its dance moves. Sophia, a humanlike robot created by Hanson Robotics, also provided entertainment at the CogX awards ceremony; she announced the nominees and winners in the category of best innovation in artificial general intelligence, which included my company Sherpa, Alphabet's DeepMind, and Vicarious.

CogX also touched on the impact of AI on health, HR, education, legal services, fintech, and many other sectors. Panelists were in agreement that advances in AI must benefit all of us. While there are still many question marks about regulation of the sector, AI already permeates all aspects of our society.

Ian Cowley is the marketing manager at Sherpa, which uses algorithms based on probability models to predict information a user might need.

The rest is here:

How AI will change the way we live - VentureBeat

Posted in Ai | Comments Off on How AI will change the way we live – VentureBeat

Nokia and Xiaomi sign patent deal and agree to ‘explore’ areas like VR and AI – CNBC

Posted: at 9:14 am

VCG | Getty Images

Xiaomi CEO Lei Jun introduces Surge S1 chipset, Mi 5C smartphone and Redmi 4X smartphone during a press conference on February 28, 2017 in Beijing, China

Finland's Nokia and Chinese smartphone maker Xiaomi announced an agreement on Wednesday to cross-license patents from each other, which will help both companies develop new products.

The deal will see both companies license so-called standard essential patents (patents which are essential to allow products to comply with an industry standard) from each other.

Nokia will provide network infrastructure equipment delivering the high capacity and low power consumption needed by companies that process and deliver large amounts of data. The two firms will also work together on technologies focused on the data center.

Both companies have agreed to "explore opportunities for further cooperation" in areas such as the internet of things, augmented and virtual reality, and artificial intelligence, according to a press release.

Nokia has been a key player in developing many of the standards used by the mobile industry even today and makes money from licensing the patents it has built up over the years. As such, its patents can be key for companies looking to expand globally in the mobile market without running into legal problems.

Originally posted here:

Nokia and Xiaomi sign patent deal and agree to 'explore' areas like VR and AI - CNBC


AI is not yet a slam dunk with sentiment analytics – ZDNet

Posted: at 9:14 am

When we look at how big data analytics has enhanced Customer 360, one of the first disciplines that comes to mind is sentiment analytics. It provided the means for expanding the traditional CRM interaction view of the customer with statements and behaviors voiced on social networks.

And with advancements in natural language processing (NLP) and artificial intelligence (AI)/machine learning, one would think that this field is pretty mature: marketers should be able to decipher with ease what their customers are thinking by turning on their Facebook or Twitter feeds.

One would be wrong.

While sentiment analytics is one of the most established forms of big data analytics, there's still a fair share of art to it. Our take from this year's Sentiment Analytics Symposium held last week in New York is that there are still plenty of myths about how well AI and big data are adding clarity to analyzing what consumers think and feel.

Sentiment analytics descended from text analytics, which was all about pinning down the incidence of keywords to give an indicator of mood. That spawned the word clouds that at one time were quite ubiquitous across the web.

However, with languages like English, where words have double and sometimes triple meanings, keywords alone weren't adequate for the task. The myth emerged that if we assemble enough data, we should be able to get a better handle on what people are thinking or feeling. By that rationale, advances in NLP and AI should have proved icing on the cake.

Not so fast, said Troy Janisch, who leads the social insights team at US Bank. NLP won't necessarily differentiate whether iPhone mentions represent buzz or customers looking for repairs. You'd think that AI could ferret out the context, yet none of the speakers indicated that it was yet up to the task. Janisch stated you'll still need human intuition to parse context by formulating the right Boolean queries.
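To make Janisch's point concrete, here is a minimal sketch (every keyword and rule is invented for this illustration) of the kind of hand-written Boolean-style filter he describes: keyword counting alone flags every iPhone mention, while a human-authored rule separates buzz from repair requests.

```python
import re

# Toy illustration only -- the keyword lists and rules are assumptions of this
# sketch, not US Bank's actual queries. Keyword counting alone treats every
# iPhone mention the same; the Boolean rules below supply the missing context.
REPAIR_TERMS = {"cracked", "broken", "repair", "fix", "warranty"}
BUZZ_TERMS = {"love", "amazing", "excited", "launch"}

def classify_mention(text: str) -> str:
    words = set(re.findall(r"[a-z]+", text.lower()))
    if "iphone" not in words:
        return "irrelevant"
    if words & REPAIR_TERMS:   # mention AND any repair term -> support issue
        return "repair"
    if words & BUZZ_TERMS:     # mention AND any buzz term -> marketing buzz
        return "buzz"
    return "neutral"

print(classify_mention("So excited about the new iPhone!"))           # buzz
print(classify_mention("My iPhone screen is cracked, needs repair"))  # repair
```

Real monitoring pipelines are far larger, but the shape is the same: the model surfaces mentions, and human-written rules decide what the mention means.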

The contribution of big data is that it frees analysts from the constraints of having to sample data, so we take it for granted that you can analyse the entire Twitter firehose if you need it. But for many marketers, big data is still intimidating.

Tom H.C. Anderson, founder of text analytics firm OdinText, observed that many firms were blindly collecting data and throwing queries at it without a clear objective for making the results actionable. He pointed to the shortcomings of social media analytic technologies and methodologies in providing reliable feedback loops with actual events or occurrences.

For that reason, said Anderson, social media analytics have fallen short in predicting future behavior. There's still plenty of human intuition rather than AI involved in connecting the dots and making reliable predictions.

Many firms are still overwhelmed by big data and being overly "reactive" to it, according to Kirsten Zapiec, co-founder of market research consulting firm bbb Mavens. Admittedly, big data has largely made sampling and reliance on focus groups or detailed surveys obsolete. But, warned Zapiec, as data sets get bigger, it becomes all too easy to lose the human context and story behind the data. That surprised us, as it runs counter to the party line of data science.

Zapiec made several calls to action that sounded all too familiar. First, validate the source, and then cross-validate it with additional sources. For instance, a Twitter feed alone won't necessarily tell the full story. Then you need to pinpoint the roles of actors with social graphs to determine whether a voice is a thought leader, a follower, or a bot.

Zapiec then made a pitch for data quality: companies should shift from data collection to data integration mode. We could have heard the same line of advice coming out of data warehousing conferences of the 1990s. Some things never change.

Of course, there is concern over whether social marketers are totally missing the signals from their customers where they live. For instance, the "camera company" Snapchat only provides APIs for advertising, not for listening. So could other sources or data elements make up the difference? Keisuke Inoue, VP of data science at Emogi, made the case that emojis are often far more expressive about sentiment than words.

But that depends on whether you can understand them in the first place.

Excerpt from:

AI is not yet a slam dunk with sentiment analytics - ZDNet


Intel: HPC And AI Are New Catalysts – Seeking Alpha

Posted: at 9:14 am

Intel (NASDAQ:INTC) is not just a chipmaker anymore. Rather, it is fast becoming one of the world's most sophisticated companies that deal with modern computing technologies. With a new era of computing unfolding rapidly, Intel is changing itself to lead the industry from the front. HPC (high-performance computing) and AI (artificial intelligence) are the future of computing, and Intel is a pioneer in these areas. However, the stock price doesn't reflect this. Instead, it continues to languish in a range.

Investment Thesis

I strongly believe the investing community should look at Intel stock from a new perspective. However, that doesn't mean I am suggesting investors ignore fundamentals like revenue and earnings growth. Fundamentals will certainly catch up, albeit not immediately. Intel recently demonstrated how it is preparing itself to stand up against its chief competitor, Nvidia (NASDAQ:NVDA), amid the changed industry dynamics. Let's delve deeper into the subject.

Intel vs. Nvidia

Intel is facing the toughest competition from Nvidia, a company that revolutionized the world of HPC and AI by continuing to improve a single product called GPU. Recently, the company launched Volta, "the world's most powerful GPU computing architecture, created to drive the next wave of advancement in artificial intelligence and high performance computing," according to the company.

The basic difference between the approaches of Intel and Nvidia is that while the former seeks to thrive on a range of products, the latter is betting on just one. This could be Nvidia's only weakness in the stock market as of now. However, the company is building an ecosystem around its GPUs with its proprietary CUDA parallel computing platform so effectively that it would be impossible for Intel to beat Nvidia in the near term, say twelve to eighteen months, even with its array of products. In the long run, though, I expect Intel to emerge as the winner.

The role of HPC in the future, in terms of applications, will not be what it was in the last twenty years, i.e., complex scientific research and analysis, and national missions of governments around the world. New areas like smart economics, autonomous driving, smart factories driven by IoT (Internet of Things) and, of course, predictive analytics will benefit from HPC and AI.

According to a report:

The High Performance Computing (HPC) market is estimated to grow from USD 28.08 Billion in 2015 to USD 36.62 Billion by 2020, at a Compound Annual Growth Rate (CAGR) of 5.45% during the forecast period. The HPC market is growing as it interests all kinds of businesses, with the most common end users of these systems being researchers, scientists, engineers, educational institutes, government and military, and others who rely on HPC for complex applications. However, HPC is not limited to these verticals or departments; it is also gaining traction among enterprises.
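The report's figures are easy to sanity-check: compounding the 2015 base at the quoted CAGR for five years lands very close to the 2020 projection.

```python
# Checking the quoted report's arithmetic: USD 28.08 billion in 2015,
# compounding at a 5.45% CAGR over the five years to 2020.
base, cagr, years = 28.08, 0.0545, 5
projected = base * (1 + cagr) ** years
print(projected)  # ~36.6, consistent with the USD 36.62 billion forecast
```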

So what's the challenge Intel is facing from Nvidia?

Now let's evaluate how Intel is addressing the issues.

#1. Outpacing Nvidia's parallel processing platform through the introduction of FPGAs (field-programmable gate arrays) won't happen overnight; it will take time. Meanwhile, Intel is working to outpace GPUs via FPGAs and the associated software platform for developers. One of the competitive advantages of FPGAs over GPUs is that, since FPGAs can support more internal memory bandwidth, analyzing data and then inferring decisions post-analysis can be done very quickly with minimal latency. For putting AI in real-world applications, this is absolutely necessary.

According to Bill Jenkins, senior AI product specialist with Intel's Programmable Systems Group:

We're different. When you write software, it's for a fixed architecture. In doing so, you write code in a certain way and people get good at optimizing code for a given architecture.

With FPGAs, you create an architecture for the problem; you control the data path. Rather than having data move through a CPU, then offloaded to memory, it can come right into the FPGA from wherever. It's then processed inline with the lowest latency and in a deterministic fashion.

#2. In an HPC environment, parallel processing needs to be efficiently supported by sequential processing with the help of highly advanced CPUs. While parallel processing can efficiently do the job of training machines via neural networks, sequential processing is the best option for making decisions when a trained machine, say an autonomous car, applies that training in the decision-making process. However, since AI can be, and will be, put to use in a variety of areas, as mentioned above, from small-scale factories to large-scale banking and financial networks, the CPUs need to be highly scalable.

Intel's upcoming Xeon Scalable processors will be able to address this issue. These processors, coupled with the Intel AVX-512 instruction set extensions (AVX stands for Advanced Vector Extensions), should help the company surpass Nvidia's CUDA parallel computing platform in the long run.

But how? AVX-512 already supports Intel's Xeon Phi Knights Landing coprocessors, and it will start supporting the Xeon Scalable processors once they are available. Xeon Phi coprocessors are already offering modest competition to Nvidia's GPUs with their parallel processing capabilities. Since GPUs are largely vector processors, in order to compete with Nvidia's parallel processing platform, Intel's top priority was to develop a highly efficient software platform that supports complex vector operations.

Earlier versions of Intel's AVX platform allowed developers only a modest degree of vector operations; their primary focus was dealing with scalar operations at lower latency. The latest version, AVX-512, however, supports 512-bit SIMD (Single Instruction, Multiple Data) instructions with a significantly higher degree of vector parallelism. SIMD lets developers build AI-driven apps that exploit data-level parallelism.
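This is not AVX-512 code, but the lane-based idea behind 512-bit SIMD can be illustrated with a short NumPy sketch: a 512-bit register holds sixteen 32-bit floats, so one vector operation does the work of sixteen scalar ones.

```python
import numpy as np

# Illustrative analogy only: a 512-bit AVX-512 register holds 16 float32
# lanes. NumPy's vectorised operations give the same one-instruction,
# many-lanes flavour at the Python level.
LANES = 512 // 32  # 16 float32 values per 512-bit register

a = np.arange(LANES, dtype=np.float32)
b = np.full(LANES, 2.0, dtype=np.float32)

scalar = np.array([a[i] * b[i] for i in range(LANES)])  # one lane at a time
vector = a * b  # the whole "register" in one operation

assert np.allclose(scalar, vector)
print(vector[:4])  # first four lanes: 0, 2, 4, 6
```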

#3. As far as making its OPA (Omni-Path Architecture) compatible with parallel and sequential processing goes, Intel has done well. OPA is a high-bandwidth, low-latency fabric that offers modern datacenters PCIe adapters, switches, cables and management software, and it is highly scalable. Offering this degree of scalability isn't possible for Nvidia with just its GPUs and CUDA platform. OPA already supports Xeon Phi coprocessors, and the upcoming Knights Mill version will be made for AI-driven workloads. Now, by integrating its upcoming Xeon Scalable processors with OPA, Intel is further strengthening its long-term competitive advantage against Nvidia.

Investors' Angle: Is It The Right Time To Buy Intel?

INTC Revenue (TTM) data by YCharts

As I said, Intel is altogether a different company than it was a couple of years ago. It is far more diversified than Nvidia. While it's true that Nvidia has made remarkable progress in terms of revenue growth since the beginning of 2016, sustaining such progress by depending on only a single product is almost impossible. In contrast, Intel's slow but steady progress is far more convincing.

INTC PS Ratio (TTM) data by YCharts

As far as valuation is concerned, Intel trades at a P/S multiple of merely 2.7x, compared to Nvidia's mammoth 12.5x. Clearly, there is huge upside left for Intel stock. Let's now focus on the extent of that upside in the next 12-18 months. Assuming the HPC market grows at a CAGR of 5.45% until 2020, as mentioned in the report presented above, Intel's growth rate should coincide with that figure. I believe the report is reliable as far as the growth rate is concerned, because that is the consensus growth rate. However, looking at the market size it projected, it seems the report didn't take a 360-degree view of the hardware and software parts of the market.

Intel's overall HPC revenue consists of revenue from the traditional datacenter group, plus revenues from the IoT, PSG (programmable solutions group) and NVM (non-volatile memory) groups. As far as Nvidia is concerned, to be successful in high-performance computing in the long run it will need to lay more emphasis on high-performance storage, including the latest kinds of non-volatile memory. Unfortunately, we haven't seen any such initiative from Nvidia yet. Intel has made significant progress in this area with its 3D XPoint memory. Being a diversified player in the HPC space, it won't be difficult for Intel to achieve the 5.45% CAGR growth rate. During 2016, the company's HPC revenue was $24 billion, which should be around $28 billion in 2020. The IoT, PSG and NVM groups will be the new growth drivers.

Image Source: Author

At the same time, Nvidia's growth rate should also moderate and coincide with the industry's growth rate. If Mr. Market offers Nvidia a P/S multiple of 12.5x, why would Intel stock continue to languish in a narrow range? I expect Mr. Market will soon understand this and offer Intel a P/S multiple of at least 4x on a forward 12-month basis in the next 12-18 months. With the client computing group revenue remaining flat to slightly positive, the company's 2018 revenue should be around $63 billion and revenue per share should be around $13.40. At a P/S multiple of 4x, the stock should be well above $50.
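The arithmetic behind that price target is straightforward to reproduce using the article's own round figures:

```python
# Reproducing the back-of-envelope valuation above: roughly $63B of 2018
# revenue, about $13.40 of revenue per share, and a hoped-for 4x P/S multiple.
revenue_b = 63.0
revenue_per_share = 13.40
ps_multiple = 4.0

shares_b = revenue_b / revenue_per_share         # implied share count, ~4.7B
implied_price = revenue_per_share * ps_multiple  # 13.40 * 4 = 53.6

print(round(shares_b, 1), implied_price)  # ~4.7 billion shares, $53.60
```

At $53.60 the stock would indeed sit "well above $50", which is all the target claims; the multiple itself, of course, is the speculative input.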

In terms of technical analysis, Intel stock has found solid support around the current level during the past 12 months. I strongly believe this is the right time for long-term investors to buy the stock.

INTC data by YCharts

Conclusion

To summarize, Intel is a diversified player in the HPC and AI market. However, investors are continuing to consider it as a traditional computing company. As this is no longer the case, I expect investors will gradually start to look at the company from a different angle. I am bullish on Intel around the current price.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Read more here:

Intel: HPC And AI Are New Catalysts - Seeking Alpha


Alibaba launches low-cost voice assistant amid AI drive – Reuters

Posted: at 9:14 am

BEIJING China's Alibaba Group Holding Ltd launched on Wednesday a cut-price voice assistant speaker, similar to Amazon.com Inc's "Echo", its first foray into artificially intelligent home devices.

The "Tmall Genie", named after the company's e-commerce platform Tmall, costs 499 yuan ($73.42), significantly less than western counterparts by Amazon and Alphabet Inc's Google, which range from $120 to $180.

These devices are activated by voice commands to perform tasks such as checking calendars, searching for weather reports, changing music or controlling smart-home devices, using internet connectivity and artificial intelligence.

China's top tech firms have ambitions to become world leaders in artificial intelligence as companies, including Alibaba and Amazon, increasingly compete for the same markets.

Baidu, China's top search engine, which has invested in an artificial intelligence lab with the Chinese government, recently launched a device based on its own Siri-like "Duer OS" system.

The Tmall Genie is currently programmed to use Mandarin as its language and will only be available in China. It is activated when a recognised user says "Tmall Genie" in Chinese.

In a streamed demonstration on Wednesday, engineers ordered the device to buy and deliver some Coca Cola, play music, add credit to a phone and activate a smart humidifier and TV.

The device, which comes in black and white, can also be tasked with purchasing goods from the company's Tmall platform, a function similar to Amazon's Echo device.

Alibaba has invested heavily in offline stores and big data capabilities in an effort to capitalise on the entire supply chain as part of its retail strategy, increasingly drawing comparisons with similar strategies adopted by Amazon.

It recently began rolling out unstaffed brick-and-mortar grocery and coffee shops, using QR codes that users can scan to complete payment on its Alipay app, which has over 450 million users. Amazon launched a similar store concept in December. ($1 = 6.7962 yuan)

(Reporting by Cate Cadell; Editing by Neil Fullick)


The rest is here:

Alibaba launches low-cost voice assistant amid AI drive - Reuters


AI Project Produces New Styles of Art – Smithsonian

Posted: July 4, 2017 at 8:18 am

smithsonian.com July 3, 2017 3:30PM

Artificial intelligence is getting pretty good at besting humans in things like chess and Go and dominating at trivia. Now, AI is moving into the arts, aping van Gogh's style and creating a truly trippy art form called Inceptionism. A new AI project is continuing to push the envelope with an algorithm that produces only original styles of art, and Chris Baraniuk at New Scientist reports that its output gets equal or higher ratings than human-generated artwork.

Researchers from Rutgers University, the College of Charleston and Facebook's AI Lab collaborated on the system, a type of generative adversarial network, or GAN, which uses two independent neural networks to critique each other. In this case, one of the systems is a generator network, which creates pieces of art. The other is the discriminator network, which is trained on 81,500 images from the WikiArt database, spanning centuries of painting. The algorithm learned how to tell the difference between a piece of art and a photograph or diagram, and it also learned how to identify different styles of art, for instance impressionism versus pop art.

The MIT Technology Review reports that the first network created random images, then received analysis from the discriminator network. Over time, it learned to reproduce different art styles from history. But the researchers wanted to see if the system could do more than just mimic humans, so they asked the generator to produce images that would be recognized as art but did not fit any particular school of art. In other words, they asked it to do what human artists do: use the past as a foundation, but interpret it to create its own style.

At the same time, the researchers didn't want the AI to just create something random. They worked to train the AI to find the sweet spot between low-arousal images (read: boring) and high-arousal images (read: too busy, ugly or jarring). "You want to have something really creative and striking, but at the same time not go too far and make something that isn't aesthetically pleasing," Rutgers computer science professor and project lead Ahmed Elgammal tells Baraniuk. The research appears on arXiv.
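That "recognized as art, but fitting no school" objective can be sketched in a few lines. One common reading of the Creative Adversarial Network loss, and the assumption behind this toy version, is that the generator is rewarded when the discriminator's style prediction is maximally ambiguous, i.e. close to uniform over the style classes:

```python
import math

# Toy sketch of the CAN idea (an assumption of this illustration, not the
# paper's exact loss): alongside the usual "is it art?" signal, the generator
# is rewarded when the discriminator's predicted style distribution is close
# to uniform. Cross-entropy against the uniform distribution captures that:
# it is minimised, at ln(k), exactly when all k styles are equally likely.

def style_ambiguity_loss(style_probs):
    """Lower when style_probs is closer to uniform (most style-ambiguous)."""
    k = len(style_probs)
    return -sum((1.0 / k) * math.log(p) for p in style_probs)

confident = [0.97, 0.01, 0.01, 0.01]  # clearly one style -> high loss
ambiguous = [0.25, 0.25, 0.25, 0.25]  # no clear style  -> minimal loss

assert style_ambiguity_loss(ambiguous) < style_ambiguity_loss(confident)
print(round(style_ambiguity_loss(ambiguous), 3))  # ln(4), about 1.386
```

In the real system this term is traded off against the art/not-art signal, which is what keeps the output from drifting into the random noise the researchers wanted to avoid.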

The team wanted to find out how convincing its AI artist was, so they displayed some of the AI artwork on the crowd-sourcing site Mechanical Turk along with historical Abstract Expressionism and images from Art Basel's 2016 show in Basel, Switzerland, reports MIT Technology Review.

The researchers had users rate the art, asking how much they liked it, how novel it was, and whether they believed it was made by a human or a machine. It turns out, the AI art rated higher in aesthetics than the art from Basel, and was found "more inspiring." The viewers also had difficulty telling the difference between the computer-generated art and the Basel offerings, though they were able to differentiate between the historical Abstract Expressionism and the AI work. "We leave open how to interpret the human subjects' responses that ranked the CAN [Creative Adversarial Network] art better than the Art Basel samples in different aspects," the researchers write in the study.

As such networks improve, the definition of art and creativity will also change. MIT Technology Review asks, for instance, whether the project is simply an algorithm that has learned to exploit human emotions and is not truly creative.

One thing is certain: it will never cut off an ear for love.


See the article here:

AI Project Produces New Styles of Art - Smithsonian


Peering inside an AI’s brain will help us trust its decisions – New Scientist

Posted: at 8:18 am

Is it a horse?

Weegee(Arthur Fellig)/International Center of Photography/Getty

By Matt Reynolds

Oi, AI, what do you think you're looking at? Understanding why machine learning algorithms can be tricked into seeing things that aren't there is becoming more important with the advent of things like driverless cars. Now we can glimpse inside the mind of a machine thanks to a test that reveals which parts of an image an AI is looking at.

Artificial intelligences don't make decisions in the same way that humans do. Even the best image recognition algorithms can be tricked into seeing a robin or cheetah in images that are just white noise, for example.

It's a big problem, says Chris Grimm at Brown University in Providence, Rhode Island. If we don't understand why these systems make silly mistakes, we should think twice about trusting them with our lives in things like driverless cars, he says.

So Grimm and his colleagues created a system that analyses an AI to show which part of an image it is focusing on when it decides what the image is depicting. Similarly, for a document-sorting algorithm, the system highlights which words the algorithm used to decide which category a particular document should belong to.

It's really useful to be able to look at an AI and find out how it's learning, says Dumitru Erhan, a researcher at Google. Grimm's tool provides a handy way for a human to double-check that an algorithm is coming up with the right answer for the right reasons, he says.

To create his attention-mapping tool, Grimm wrapped a second AI around the one he wanted to test. This wrapper AI replaced part of an image with white noise to see if that made a difference to the original software's decision.

If replacing part of an image changed the decision, then that area of the image was likely to be an important one for decision-making. The same applied to words: if changing a word in a document makes an AI classify the document differently, it suggests that word was key to the AI's decision.
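The description maps onto a standard occlusion-sensitivity loop. The following toy sketch (ours, not Grimm's code, with an invented stand-in "classifier" that just reads the image centre) shows the wrapper idea: slide a noise patch over the image and record how much the score drops; big drops mark the regions the model relies on.

```python
import numpy as np

# Occlusion-sensitivity sketch of the wrapper idea described above.
# toy_score is a stand-in classifier, assumed for this illustration: it
# "decides" based only on the brightness of the centre patch of the image.
rng = np.random.default_rng(0)

def toy_score(img):
    return img[3:5, 3:5].mean()

def occlusion_map(img, patch=2):
    base = toy_score(img)
    heat = np.zeros_like(img)
    for y in range(0, img.shape[0], patch):
        for x in range(0, img.shape[1], patch):
            probe = img.copy()
            # Replace one patch with white noise, as the wrapper AI does.
            probe[y:y+patch, x:x+patch] = rng.random((patch, patch))
            # Record how much the score dropped for that patch.
            heat[y:y+patch, x:x+patch] = base - toy_score(probe)
    return heat

img = np.zeros((8, 8))
img[3:5, 3:5] = 1.0  # bright centre region that the "classifier" uses
heat = occlusion_map(img)
print(heat.argmax())  # hottest cell lies in the centre area the model uses
```

Occluding a corner leaves the score untouched (heat 0), while occluding the centre changes the decision score, exactly the signal Grimm's tool uses to build its attention maps.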

Grimm tested his technique on an AI trained to sort images into one of 10 categories, including planes, birds, deer and horses. His system mapped where the AI was looking when it made its categorisation. The results suggested that the AI had taught itself to break down objects into different elements and then search for each of those elements in an image to confirm its decision.

For example, when looking at images of horses, Grimm's analysis showed that the AI first paid close attention to the legs and then searched the image for where it thought a head might be, anticipating that the horse may be facing in different directions. The AI took a similar approach with images containing deer, but in those cases it specifically searched for antlers. The AI almost completely ignored parts of an image that it decided didn't contain information that would help with categorisation.

Grimm and his colleagues also analysed an AI trained to play the video game Pong. They found that it ignored almost all of the screen and instead paid close attention to the two narrow columns along which the paddles moved. The AI paid so little attention to some areas that moving the paddle away from its expected location fooled it into thinking it was looking at the ball and not the paddle.

Grimm thinks that his tool could help people work out how AIs make their decisions. For example, it could be used to look at algorithms that detect cancer cells in lung scans, making sure that they don't accidentally come up with the right answers by looking at the wrong bit of the image. You could see if it's not paying attention to the right things, he says.

But first Grimm wants to use his tool to help AIs learn. By revealing when an AI is not paying attention, the tool would let trainers direct their software towards the relevant bits of information.

Reference: arXiv, arxiv.org/abs/1706.00536


Read more from the original source:

Peering inside an AI's brain will help us trust its decisions - New Scientist

Posted in Ai | Comments Off on Peering inside an AI’s brain will help us trust its decisions – New Scientist

Why, Robot? Understanding AI ethics – The Register

Posted: at 8:18 am

Not many people know that Isaac Asimov didn't originally write his three laws of robotics for I, Robot. They actually first appeared in "Runaround", the 1942 short story. Robots mustn't do harm, he said, or allow others to come to harm through inaction. They must obey orders given by humans unless they violate the first law. And the robot must protect itself, so long as it doesn't contravene laws one and two.

75 years on, we're still mulling that future. Asimov's rules seem more focused on strong AI: the kind of AI you'd find in HAL, but not in an Amazon Echo. Strong AI mimics the human brain, much like an evolving child, until it becomes sentient and can handle any problem you throw at it, as a human would. That's still a long way off, if it ever comes to pass.

Instead, today we're dealing with narrow AI, in which algorithms cope with constrained tasks. It recognises faces, understands that you just asked what the weather will be like tomorrow, or tries to predict whether you should give someone a loan or not.

Making rules for this kind of AI is quite difficult enough to be getting on with for now, though, says Jonathan M. Smith, a member of the Association for Computing Machinery and a professor of computer science at the University of Pennsylvania. He says there's still plenty of ethics to unpack at this level.

"The shorter-term issues are very important because they're at the boundary of technology and policy," he says. "You don't want the fact that someone has an AI making decisions to escape, avoid or divert past decisions that we made in the social or political space about how we run our society."

There are some thorny problems already emerging, whether real or imagined. One of them is a variation on the trolley problem, a kind of Sophie's Choice scenario in which a train is bearing down on two sets of people. If you do nothing, it kills five people. If you actively pull a lever, the signals switch and it kills one person. You'd have to choose.

Critics of AI often adapt this to self-driving cars. A child runs into the road and there's no time to stop, but the software could choose to swerve and hit an elderly person, say. What should the car do, and who gets to make that decision? There are many variations on this theme, and MIT has even collected some of them into an online game.

There are classic counter-arguments: the self-driving car wouldn't be speeding in a school zone, so the scenario is less likely to occur. Utilitarians might argue that eliminating distracted, drunk or tired drivers would shrink the number of road deaths worldwide, which means society wins, even if one person loses.

You might point out that a human would have killed one of the people in the scenario too, so why are we even having this conversation? Yasemin Erden, a senior lecturer in philosophy at Queen Mary's University, has an answer for that. She spends a lot of time considering ethics and computing on the committee of the Society for the Study of Artificial Intelligence and Simulation of Behaviour.

Decisions made in advance suggest ethical intent and invite others' judgement, whereas acting on the spot doesn't, she points out.

"The programming of a car with ethical intentions, knowing what the risk could be, means that the public could be less willing to view things as accidents," she says. Or in other words, as long as you were driving responsibly it's considered OK for you to say "that person just jumped out at me" and be excused for hitting whoever you hit, but AI algorithms don't have that luxury.

If computers are supposed to be faster and more intentional than us in some situations, then how they're programmed matters. Experts are calling for accountability.

"I'd need to cross-examine my algorithm, or at least know how to find out what was happening at the time of the accident," says Kay Firth-Butterfield. She is a lawyer specialising in AI issues and executive director at AI Austin, a non-profit AI think tank set up this March that evolved from the Ethics Advisory Panel, an ethics board set up by AI firm Lucid.

"We need a way to understand what AI algorithms are 'thinking' when they do things," she says. "How can you say to a patient's family, if they died because of an intervention, 'we don't know how this happened'? So accountability and transparency are important."

Puzzling over why your car swerved around the dog but backed over the cat isn't the only AI problem that calls for transparency. Biased AI algorithms can cause all kinds of problems. Facial recognition systems may ignore people of colour because their training data didn't have enough faces fitting that description, for example.

Or maybe AI is self-reinforcing to the detriment of society. If social media AI learns that you like to see material supporting one kind of politics and only ever shows you that, then over time we could lose the capacity for critical debate.

"J.S. Mill made the argument that if ideas aren't challenged then they are at risk of becoming dogma," Erden recalls, nicely summarising what she calls the filter bubble problem. (Mill was a 19th-century utilitarian philosopher who was a strong proponent of logic and reasoning based on empirical evidence, so he probably wouldn't have enjoyed arguing with people on Facebook much.)

So if AI creates billions of people unwilling or even unable to recognise and civilly debate each other's ideas, isn't that an ethical issue that needs addressing?
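The self-reinforcing loop behind the filter bubble is easy to demonstrate with a toy model. This is an illustrative simulation, not how any real feed ranker works:

```python
import random

def filter_bubble(steps=1000, lean=0.5, learn_rate=0.05, seed=1):
    """Toy model: the feed shows the side the user has engaged with more
    often, and each item shown nudges the user's preference further that way."""
    random.seed(seed)
    for _ in range(steps):
        shown = "A" if random.random() < lean else "B"
        if shown == "A":
            lean = min(1.0, lean + learn_rate * (1 - lean))
        else:
            lean = max(0.0, lean - learn_rate * lean)
    return lean  # tends toward 0 or 1: an opinion bubble
```

Starting from a balanced preference, the feedback between what is shown and what is preferred tends to drive the simulated user toward one extreme, and once fully committed the user never sees the other side again.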

Another issue concerns the forming of emotional relationships with robots. Firth-Butterfield is interested in two ends of the spectrum: children and the elderly. Kids love to suspend disbelief, which makes robotic companions, with their AI conversational capabilities, all the easier to embrace. She frets about AI robots that may train children to be ideal customers for their products.

Similarly, at the other end of the spectrum, she muses about AI robots used to provide care and companionship to the elderly.

"Is it against their human rights not to interact with human beings but just to be looked after by robots? I think that's going to be one of the biggest decisions of our time," she says.

That highlights a distinction in AI ethics, between how an algorithm does something and what we're trying to achieve with it. Alex London, professor of philosophy and director of Carnegie Mellon University's Center for Ethics and Policy, says that the driving question is what the machine is trying to do.

"The ethics of that is probably one of the most fundamental questions. If the machine is out to serve a goal that's problematic, then ethical programming, the question of how it can more ethically advance that goal, sounds misguided," he warns.

That's tricky, because much comes down to intent. A robot could be great if it improves the quality of life for an elderly person as a supplement to frequent visits and calls from family. Using the same robot as an excuse to neglect elderly relatives would be the inverse. Like any enabling technology, from the kitchen knife to nuclear fusion, the tool itself isn't good or bad; it's the intent of the person using it. Even then, points out Erden, what if someone thinks they're doing good with a tool but someone else doesn't?

View original post here:

Why, Robot? Understanding AI ethics - The Register

Posted in Ai | Comments Off on Why, Robot? Understanding AI ethics – The Register

IBM uses AI to serve up Wimbledon highlights – CNET

Posted: at 8:18 am

Will defending Wimbledon men's champion Andy Murray be able to repeat? IBM will be tracking all of his action.

Too busy to spend hours watching Wimbledon champ Andy Murray try to win back-to-back titles against the likes of past winners Roger Federer, Rafael Nadal and Novak Djokovic? IBM thinks it can help within a matter of minutes.

Using its ubiquitous Watson artificial intelligence platform, the tech giant's research and iX teams are now curating the biggest sights and sounds from matches to create "Cognitive Highlights," which will be seen on Wimbledon's digital channels.

An example of one of IBM's Cognitive Highlights dashboards.

The AI platform will take key points from the tennis matches (like a player serving an ace at 100 mph), fans' cheers and social media content to help create videos of up to two minutes. The two-week tourney at the All England Lawn Tennis and Croquet Club, complete with a Google Doodle to celebrate Wimbledon's 140th anniversary, began Monday.
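IBM hasn't published its scoring model, but the basic idea of fusing match, crowd and social signals into one excitement score and filling a two-minute reel can be sketched as follows. The signal names, weights and clip lengths are illustrative guesses, not IBM's actual system:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    point_importance: float  # e.g. break point or ace speed, normalised to 0-1
    crowd_noise: float       # audio energy relative to the match baseline, 0-1
    social_buzz: float       # spike in social media mentions during the point, 0-1

def excitement(clip, weights=(0.5, 0.3, 0.2)):
    """Weighted fusion of the three signals into one score."""
    w_point, w_crowd, w_social = weights
    return (w_point * clip.point_importance
            + w_crowd * clip.crowd_noise
            + w_social * clip.social_buzz)

def top_clips(clips, limit_seconds=120, clip_seconds=15):
    """Pick the highest-scoring clips that fit into a two-minute reel."""
    ranked = sorted(clips, key=excitement, reverse=True)
    return ranked[:limit_seconds // clip_seconds]
```

In the real pipeline the machine-ranked clips still go to human producers, who trim and assemble the final highlight package.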

IBM's new Wimbledon highlights package comes about three months after the company experimented with the system during The Masters, one of golf's biggest tournaments. Using AI, highlights featuring golfer Sergio Garcia's dramatic win were created from video, audio and text and sent to a team of producers who quickly edited and added the pieces to an interactive dashboard.

At Wimbledon, the process will save editors and producers precious time sorting through clips, said John Smith, a multimedia manager at IBM's T.J. Watson Research Center.

"With golf, there's a lot of action happening at different holes and similarly in tennis, there's so much play going on beyond Centre Court." he said. "We want the fans to see tennis in a unique way."


Read the rest here:

IBM uses AI to serve up Wimbledon highlights - CNET

Posted in Ai | Comments Off on IBM uses AI to serve up Wimbledon highlights – CNET

IBM’s AI Will Make Your Hospital Stay More Comfortable – Futurism

Posted: at 8:18 am

In Brief: IBM's Watson, probably the most famous AI system in the world today, is making its way into hospitals to assist with menial tasks, thereby freeing up medical personnel. Watson has already made a big impact on the medical industry, as well as many others, and the AI shows no signs of slowing down.

Dr. Watson, Coming Soon

IBM's Watson has done everything from beating human champions at Jeopardy! to diagnosing undetected leukemia in a patient, saving her life. Now, the artificial intelligence (AI) system is poised to make life in a hospital a lot easier for patients and staff alike.

Right now, some medical staff spend almost 10 percent of their working hours answering basic patient questions about physician credentials, lunch, and visiting hours, Bret Greenstein, the vice president of Watson's Internet of Things (IoT) platform, tells CNET.

These staff members also have to tend to very basic needs that don't require medical expertise, such as changing the temperature in rooms or pulling the blinds. If assisted by some kind of AI-powered device, these workers could spend their time more effectively and focus on patient care.

That's where Watson comes in. Philadelphia's Thomas Jefferson University Hospitals have teamed up with IBM and audio company Harman to develop smart speakers for a hospital setting. Once activated by the voice command "Watson," these speakers can respond to a dozen commands, including requests to adjust the blinds, thermostat, and lights, or to play calming sounds.
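Neither IBM nor Harman has published the speaker's command set or API, but a wake-word dispatcher of this general shape is a reasonable sketch. The command phrases, the room interface and the escalation path are all hypothetical:

```python
# Hypothetical command table; the real Harman/IBM speaker's API is not public.
ROOM_COMMANDS = {
    "lights on":        lambda room: room.update(lights=True),
    "lights off":       lambda room: room.update(lights=False),
    "raise the blinds": lambda room: room.update(blinds="up"),
    "lower the blinds": lambda room: room.update(blinds="down"),
}

def handle_utterance(text, room):
    """Dispatch a recognised command; unknown requests fall through to staff."""
    text = text.lower().strip()
    if not text.startswith("watson"):
        return "ignored"                    # wake word not heard
    command = text[len("watson"):].strip(" ,")
    action = ROOM_COMMANDS.get(command)
    if action is None:
        return "escalate to nurse station"  # anything non-trivial goes to a human
    action(room)
    return "done"
```

The point of the design is exactly the division of labour the article describes: the speaker absorbs the routine environmental requests, and anything it can't match is routed to staff.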

Watson is no stranger to the healthcare industry. In addition to providing a correct diagnosis for the woman mentioned above, Watson was able to recommend treatment plans at least as well as human oncologists in 99 percent of the cases it analyzed, and it even provided options missed by doctors in 30 percent of those cases.

Watson will soon be working in many dermatologists' offices, too, and while its integration into the medical field hasn't been free of problems, it is still the AI with the broadest access to patient data, the key to better diagnoses and greater predictive power.

Watson has had a notable impact on various other industries, as well.

OnStar Go uses Watson, and it will be making driving simpler in more than 2 million 4G LTE-connected GM vehicles by the end of this year. Watson is also no stranger to retail, having been incorporated into operations at Macy's, Lowe's, Best Buy, and Nestlé Cafés in Japan, and the AI is even helping to bring a real-life Robocop to the streets of Dubai.

Watson is branching out into creative work, too, which was previously assumed to be off-limits to AIs. The system successfully edited an entire magazine on its own and has also created a movie trailer.

What the AI will do next is anyone's guess, but it's safe to say that Watson probably has a more exciting and ambitious five-year plan than most humans.

Go here to read the rest:

IBM's AI Will Make Your Hospital Stay More Comfortable - Futurism

Posted in Ai | Comments Off on IBM’s AI Will Make Your Hospital Stay More Comfortable – Futurism
