
Category Archives: Ai

Renault ZOE writes On the Road fanfic, becomes first AI hipster – TechCrunch

Posted: April 12, 2017 at 8:41 am

An all-electric Renault ZOE is writing fan fiction based on Jack Kerouac's novel On the Road to demonstrate how far modern EVs can travel on a charge. It's also a celebration of the novel's sixtieth anniversary in 2017. Also, presumably, this will impress ZOE's latest crush, who loves Beat literature.

The car uses live driving data and artificial intelligence to generate fan fiction authorized by the Kerouac estate in the style of the famous novel. The AI takes information from internal and external sensors as the car drives through Stockholm and turns it into contextual storylines, according to the press release. These stories are unique to each driver, depending on what inputs occur during the drives.

The first step was to analyze the book using the Watson API. The analysis looked for language usage, emotions and social cues in the text. This is the basis for ZOE's in-the-style-of fanfic.

The ZOE has a Raspberry Pi on board with a data collecting scanner, 4G modem and GPS, plus the usual array of sensors in a modern vehicle. A bunch of data is collected from the sensors and the scanner, including speed, acceleration, braking, weather, nearby places and geopositioning.

The ZOE puts all those data points together to generate a Kerouac-style story on the fly based on conditions. It uses the same text-to-speech technology as the Amazon Alexa to read the results out loud in real time during the drive. Back at the project's HQ, the driver receives a thermal-printed version of the story.
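As a very rough sketch of that loop, the Python below reads made-up sensor values, turns each reading into a line of Kerouac-flavored prose, and accumulates the lines for a print-out. Every function, field name, and threshold here is hypothetical; Renault's actual system relies on a Watson-trained language model and a real text-to-speech engine rather than a string template.

```python
# Hypothetical sketch of the ZOE's sensor-to-story loop; names and
# thresholds are illustrative, not Renault's actual implementation.
import time

def read_sensors():
    # Placeholder for the Raspberry Pi's data-collecting scanner: in the
    # real project this would pull speed, GPS, weather, braking, etc.
    return {"speed_kmh": 62, "braking": False,
            "weather": "rain", "place": "Stockholm waterfront"}

def kerouac_style_sentence(ctx):
    # Stand-in for the trained language model: turn live driving
    # context into a line of prose in the novel's register.
    mood = "restless" if ctx["speed_kmh"] > 60 else "dreamy"
    return (f"We rolled past the {ctx['place']} in the {ctx['weather']}, "
            f"{mood} and wide awake at {ctx['speed_kmh']} km/h.")

def drive_loop(iterations=3, interval_s=1):
    story = []
    for _ in range(iterations):
        line = kerouac_style_sentence(read_sensors())
        story.append(line)          # accumulated for the thermal print-out
        print(line)                 # a TTS engine would read this aloud
        time.sleep(interval_s)
    return "\n".join(story)

if __name__ == "__main__":
    drive_loop()
```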

The goal of the project, according to Renault, is to ease consumers' range anxiety by showing that an electric vehicle like the Renault ZOE can travel 400 km (about 250 miles) on one charge and write a freakin' novel in the process. I'm not sure how convincing that will be to potential EV buyers, but it's on the clever side of quirky.

The rest is here:

Renault ZOE writes On the Road fanfic, becomes first AI hipster - TechCrunch


Does AI actually exist yet? – American Banker (subscription)

Posted: at 8:41 am

Let's try to cut through the hype surrounding voice technology, bots and machine learning by asking this fundamental question: Does artificial intelligence actually exist yet?

I wrote a blog post earlier this year claiming that AI does not exist, which sparked quite a debate and evoked very emotional responses.

The statement tapped into a deep well of opinion among many who have studied computer science and have strong views on what constitutes AI. I admit that I am not the most technically informed about the ins and outs of digital banking, and therefore my position might be a little naive. But at the end of the day, the fintech industry has not entirely defined which innovations are viewed as AI and which are not, or classified different levels of AI or AI-like functions. Just saying something is AI doesn't indicate its level of machine learning and technological advancement.

I will try to frame the debate. Whether you believe AI has arrived, or is still on its way, depends on how you define it.

At one end of the spectrum, a pure view of AI, sit those individuals who think that until we have Data from "Star Trek," who was a synthetic being, we don't have AI. Under this definition, AI is sentient, thinking, alive, with self-interest and awareness, which current technology is nowhere near achieving. In digital banking, chatbots facilitating customer service, machine learning to make the loan underwriting process more efficient, and other innovations are improving the industry's efficiency. But based on this pure definition, they are not AI.

Purists and academics tend toward the "Star Trek" model. They measure AI against how Data was portrayed in the TV show: as a recognized life form. They view such a measure of AI as a good aspirational goal, but they admit we may never get there.

On the other end of the spectrum sit those who characterize AI along the lines of "Westworld." In the HBO show, about a theme park where human guests interact with manager-controlled humanoids called hosts, computers can emulate humans to a degree where guests can't tell they are actually computers. This definition of AI has parallels with the Turing Test, named for Alan Turing, which measures a machine's ability to act like a human.

We can do "Westworld" today, and that is where the hype begins.

We can start to deploy rudimentary AI in finance and banking, using the "Westworld" model. We can start to emulate, programmatically, what people do in real life, and we can do it at a high degree of emulation, so we can fake human behavior and make it feel real. The benefits for banks and for customers are potentially huge. For example, Facebook Messenger Service Bots can one day replace the bank contact call center.

The hype exists because while AI might not exist today, as defined by "Star Trek"/Data, rudimentary forms of it do exist and can be put into use. Innovations such as Alexa and the Facebook Messenger Service Bot are just examples of transformative technology. The innovations around the corner in online credit underwriting, mobile banking and even self-service banking make the future exciting.

But as machine learning advances in the financial services industry, it is hard to determine what is real and what is hype and theory. As financial institutions and computer scientists develop and refine these technologies, we can also hopefully refine their definitions and how one innovation is distinguished from another.

Robb Gaynor is co-founder and chief product officer of Malauzai.

See the rest here:

Does AI actually exist yet? - American Banker (subscription)


How companies and consumers benefit from AI-powered networks – VentureBeat

Posted: April 10, 2017 at 2:49 am

With more than 12,500 patents, eight Nobel prizes, and a 140-year history of field-testing crazy ideas, no one should be surprised that AT&T would be an important player in artificial intelligence.

"AT&T is a backbone of the internet," explains Nadia Morris, Head of Innovation at the AT&T Connected Health Foundry. The company manages wireless, landline, and even private secure networks to power connectivity for both individuals and corporations. All these networks generate incredible volumes of data ripe for machine analysis.

AT&T has built AI and machine learning systems for decades, using algorithms to automate operations such as common call center procedures and the analysis and correction of network outages. On the entertainment side, AT&T's DirecTV division leverages users' rating histories, viewing behaviors, and other factors to anticipate the next films they'll watch.

Modern AI algorithms have enabled the telecom company to tackle even more complex tasks, such as optimizing the rollout of their 5G network. Traditional cell towers are usually suboptimally placed near urban centers and form an imperfect grid, leading to gaps in coverage. They're also expensive to put up and maintain and incur challenges with real estate and property ownership.

Small cells are less expensive, more compact cells that can be installed on inner city buildings on a much finer grid. Their role is to repeat the signal from the main cell towers to get closer to end users. By crunching mobile subscriber data, well-calibrated AI can help create spatial models to hone in on ideal spots to build small cells to ensure maximum 5G signal strength for customers.
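A minimal sketch of that idea, assuming nothing about AT&T's actual models: cluster anonymized subscriber locations and treat the cluster centers as candidate small-cell sites. Real site planning would also weigh terrain, backhaul, permits, and existing tower coverage.

```python
# Illustrative only: cluster (fake) high-traffic session coordinates and
# propose the cluster centers as candidate small-cell locations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic usage data: (latitude, longitude) of high-traffic sessions.
subscriber_points = rng.normal(loc=[59.33, 18.06], scale=0.02, size=(5000, 2))

n_sites = 12  # budgeted number of small cells for this district
model = KMeans(n_clusters=n_sites, n_init=10, random_state=0)
model.fit(subscriber_points)

for i, (lat, lon) in enumerate(model.cluster_centers_):
    demand = int(np.sum(model.labels_ == i))  # sessions served by this site
    print(f"candidate site {i:2d}: ({lat:.4f}, {lon:.4f}), demand={demand}")
```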

Designing the right 5G infrastructure is critical, especially given the rapid rise of video. "Video is more than half of our mobile traffic," explains Chris Volinsky, who leads big data research at AT&T Labs. Video traffic grew over 75% and smartphones drove almost 75% of the company's data traffic in 2016 alone, and AT&T expects video traffic growth to outpace overall data growth in 2020.

Infrastructure is an enormous investment, even with small cells, so accurately modeling trends and usage growth is key to success. Demographic trends can cause previously underutilized areas to suddenly become hot traffic generators. While statistical models are useful for identifying trends in customer movement and throughput, AI and machine learning techniques create future projections from current data.
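In that spirit, a toy projection might fit a trend to historical monthly traffic and extrapolate it forward. The numbers below are invented and the model is deliberately simple; production capacity planning would fold in demographics, seasonality, and many more features.

```python
# Toy traffic projection: fit a trend in log space (roughly exponential
# growth) to invented monthly volumes and extrapolate the next year.
import numpy as np

months = np.arange(24)                     # two years of history
traffic_pb = 50 * (1.045 ** months)        # fake ~4.5% month-over-month growth

slope, intercept = np.polyfit(months, np.log(traffic_pb), deg=1)

future = np.arange(24, 36)                 # project the next 12 months
projection = np.exp(intercept + slope * future)
print("projected petabytes/month:", np.round(projection, 1))
```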

"We need to visualize billions of data points in a spatiotemporal fashion," Volinsky elaborates. No tools existed previously to address AT&T's unique data challenges, so they built and open-sourced custom tools such as Nanocubes, a data visualization tool that can map out millions of connections of individual mobile phones and connected devices to cell phone towers. The tool has been used outside the company to characterize sports fans in real time and analyze crime rates and history.

Above: Examples of data visualization from AT&T's Nanocubes tool. Image Credit: AT&T Inc.

Algorithms and tools are not the bottleneck in solving problems. Volinsky clarifies that the challenge is in the data and the data pipeline. Modern data-hungry AI approaches require a centralized data source, but gathering one across a myriad of networks with idiosyncratic standards is no trivial task. Each small cell collects cellular data differently. Some track 4G but not 3G. Some don't get iPhone data. If variations are not taken into account, bias will appear in the data and the results.

"There is no world expert in data munging," Volinsky bemoans. To succeed, you have to figure out organizationally how to access data in different silos, technically how to integrate with it, and ensure the formats are in line. Data scientists often discover that they can't solve the problems they want to solve because the fundamental work of managing data is difficult and time-consuming. "This is not the stuff people learn in grad school," he warns.
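A small, hypothetical illustration of that munging problem: two sites report usage with different schemas (one tracks 4G only), so records have to be mapped onto a common format, with missing measurements made explicit so downstream models can account for collection bias. The column names are invented.

```python
# Harmonize heterogeneous feeds into one schema; invented columns.
import pandas as pd

site_a = pd.DataFrame({"ts": ["2017-04-01 10:00"], "lte_mb": [120.0],
                       "umts_mb": [15.0], "device": ["android"]})
site_b = pd.DataFrame({"time": ["2017-04-01 10:00"], "lte_megabytes": [98.0]})
# site_b tracks 4G only and reports no device type at all.

def to_common_schema(df, mapping, source):
    out = df.rename(columns=mapping)
    out["source"] = source
    # Make missing measurements explicit instead of silently dropping them.
    for col in ["timestamp", "mb_4g", "mb_3g", "device"]:
        if col not in out.columns:
            out[col] = pd.NA
    return out[["source", "timestamp", "mb_4g", "mb_3g", "device"]]

combined = pd.concat([
    to_common_schema(site_a, {"ts": "timestamp", "lte_mb": "mb_4g",
                              "umts_mb": "mb_3g"}, "site_a"),
    to_common_schema(site_b, {"time": "timestamp",
                              "lte_megabytes": "mb_4g"}, "site_b"),
], ignore_index=True)
print(combined)
```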

Volinsky's convinced that AI is the most powerful addition to the toolbox used by AT&T's research arm to develop the next generation of enterprise and consumer-facing solutions. At the same time, he cautions against using deep learning as a magical black box to solve all problems. Instead, you should prioritize solid data infrastructure, subject matter expertise, and an ensemble of methods from data science and machine learning toolboxes.

Volinsky would know best. His BellKor team won the coveted $1 million Netflix Prize in 2009. The key lesson learned during the three-year competition was the power of ensembles. Ensembles combine various methods, ranging from regression and support vector machines to singular value decomposition, restricted Boltzmann machines, and neural networks, to produce a result. "Deep learning is a power tool in your toolbox, but you still need your old-school tools to solve problems," he emphasizes. "Deep learning evangelists say neural networks effectively incorporate all the other models, but I have not seen that work in practice."
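A bare-bones example of the ensemble idea, on synthetic data rather than Netflix ratings: train a few very different model families and blend their predictions. The BellKor-era teams also learned the blend weights themselves; a plain average is enough to show the shape of the technique.

```python
# Minimal ensemble: fit several model families and average their predictions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))                    # e.g. user/item features
y = X[:, 0] * 2 - X[:, 3] + rng.normal(scale=0.3, size=500)

models = [
    Ridge(alpha=1.0),
    SVR(kernel="rbf", C=1.0),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
]
for m in models:
    m.fit(X[:400], y[:400])

# Simple unweighted blend; prize-winning teams learned the blend weights too.
preds = np.mean([m.predict(X[400:]) for m in models], axis=0)
rmse = np.sqrt(np.mean((preds - y[400:]) ** 2))
print(f"ensemble RMSE on held-out rows: {rmse:.3f}")
```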

In tandem with in-house projects, AT&T operates six innovation labs, called Foundries, all over the world. Each Foundry specializes in a different industry.

As Head of Innovation at AT&T's Connected Health Foundry, Nadia Morris works with aspirational startups such as AIRA, a smart wearables startup that uses human-assisted computer vision algorithms to enable the blind and vision-impaired to visualize their surroundings and navigate their immediate environment.

Using established manufacturing relationships, AT&T helps healthcare IoT and wearables companies like AIRA accelerate their hardware prototyping and production. Similar to the Labs, the Foundries also leverage custom-built open-source tools such as Flow Designer, a rapid prototyping tool that simplifies hardware design for software engineers.

Remember Morris' earlier comment about how the internet runs on AT&T? Turns out this can be mission critical for startups like AIRA, which must ensure superior connectivity at all times to protect the safety of their patients. Since AT&T's AI systems regulate network traffic, they can intelligently detect AIRA devices on their network and dynamically allocate greater bandwidth to support live video streaming.

AT&T's control of networks also comes in useful for hospitals that hold sensitive patient data. Fearful of security lapses, many operate their own data centers rather than upload personal information to the cloud. Data center management is typically not a hospital's core competency, leading to outdated technology and massive inefficiencies.

"Do you want to run a hospital or do you want to run a data center?" asks Morris. Regardless of the cloud provider a hospital chooses to use, AT&T runs private network connections to all of their servers. "This traffic will never traverse the public internet," she assures, giving hospitals an extra layer of protection.

Migrating more hospitals to the cloud not only solves administrative pains, but also unblocks AI research. "Hospitals are smart, but they're like islands," Morris explains. Competition often incentivizes hospitals to hoard data that is critical to share for superior results. Pooling hospital data into collaborative cloud communities and applying de-identification protocols enables medical researchers to access disparate data sets with greater geographic diversity. Algorithms for essential patient services such as vital sign monitoring can be trained on aggregate data sets for more accurate benchmarks.
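A simplified sketch of what a de-identification step might look like before records are pooled: hash patient identifiers with a salt, drop direct identifiers, and coarsen quasi-identifiers. Real protocols (HIPAA Safe Harbor, expert determination, and so on) go well beyond this.

```python
# Illustrative de-identification of a single record before pooling.
import hashlib

SALT = "per-consortium-secret"   # illustrative; manage real salts securely

def deidentify(record):
    cleaned = dict(record)
    cleaned["patient_id"] = hashlib.sha256(
        (SALT + record["patient_id"]).encode()).hexdigest()[:16]
    cleaned.pop("name", None)                     # drop direct identifier
    cleaned["zip"] = record["zip"][:3] + "XX"     # coarsen location
    cleaned["birth_year"] = record["birth_year"] // 10 * 10  # decade only
    return cleaned

record = {"patient_id": "A-1027", "name": "Jane Doe", "zip": "94107",
          "birth_year": 1954, "heart_rate": 72, "spo2": 97}
print(deidentify(record))
```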

Lead Inventive Scientist Wen-Ling Hsu has been with AT&T for over 20 years. She obsessed over creating amazing customer experiences using massive data and information even before the term "big data" was coined.

Hsu analyzes customer conversations from both call center phone calls and online chats with support agents. Machine learning allows her to build textual models, identify customer intent, and route customers to appropriate support agents faster.
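A minimal version of that kind of intent model, with made-up training examples: vectorize the text and fit a classifier whose predicted label a routing layer could use to pick the right support queue. This is an illustration of the general technique, not AT&T's system.

```python
# Tiny intent classifier: TF-IDF features plus logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "my bill is higher than last month", "why was I charged twice",
    "internet keeps dropping every hour", "no signal since this morning",
    "I want to add a line to my plan", "upgrade my phone please",
]
intents = ["billing", "billing", "outage", "outage", "sales", "sales"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, intents)

message = "I was billed for a service I cancelled"
print(clf.predict([message])[0])   # a routing layer would pick the queue from this
```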

With her extensive experience, Hsu learned that interpreting and using the intelligence gained from AI systems is more of an art than a science. What matters most is customer perception and seamless execution, so Hsu employs a combination of bots that directly interact with customers and those that stay in the background to assist human agents.

When asked to make a forecast for AI in 2017, Hsu responded, "Human judgment still plays a critical role in many tasks. Together, AI bots and human agents can learn from every customer interaction to personalize the customer experience."

Mariya Yao is the Head of Research and Design for Topbots, a strategy and research firm for enterprise AI.

This article was originally published on Topbots.

Original post:

How companies and consumers benefit from AI-powered networks - VentureBeat


5 ways AI is already making a difference in society – VentureBeat

Posted: at 2:49 am

By now, everyone and their grandparents are talking about machine learning and AI. Unfortunately, lately, many people have been questioning whether all this effort is worth it, and some are worried about future job losses. Just yesterday at an event, someone asked me, "The world invested so much money into image recognition just so we could recognize a cat. What's the point?" My response was, "Well, a machine recognizing a cat is the first step towards a machine detecting and recognizing a tumor." If you cut through the hype and pursue a strategic goal, machine learning can offer real-world value. The increased ease, speed, and functionality it offers create avenues for use cases across the spectrum of industries that rely heavily on data.

For retail businesses, it provides an opportunity to improve and customize the customer experience. Let's take a look at five examples that are worth noting. Given that one of the world's most prolific scientific minds on AI just resigned from Baidu to focus on his next project that will benefit the greater good, I felt this was timely.

Much like the closed captioning we've seen on TV, machine learning now makes it possible to identify specific elements in YouTube videos. New algorithms can caption sound effects like applause, laughter, and music. This is a huge development for the versatility of the platform as it looks to become more accessible. Google's new video intelligence API made big news recently at Google Cloud Next, and uses extremely high-tech models to identify specific elements in video. These can include things as descriptive as a smile, water, a species of animal, etc. Machine learning makes both of these possible and opens the door for many new possibilities for making online content easier to access for the disabled.

Student and startup employee Austin Lebetkin lives with autism spectrum disorder. He thinks that machine learning can open the door for the disabled to use and interact with digital content in many of the same ways that others do. Considering the amount of new content being developed that emphasizes audio-visual interactivity, this is a huge breakthrough for the disabled.

After some horrific events involving the live-streaming of suicides, Facebook garnered a considerable amount of negative feedback. To combat this, they've decided to implement machine learning capabilities. Machine learning will now build predictive models to tailor interventions earlier. It comes at the right time, considering an increase in suicide rates over the past couple of years. With appropriate data mining, Facebook and others will be able to identify suicidal tendencies earlier online and be quicker to intervene.

A company called Geneia is using machine learning to increase data efficiency and improve insights. By better utilizing its data, Geneia is able to improve medical status predictions at a much quicker rate. This means responses arrive earlier and care is of higher quality. Clinical assessments and lab values used in the past are much less speedy and efficient. As a result, there's a very real possibility that the sick or elderly can live more comfortably at home while avoiding the risks involved in being away from medical facilities.

Anyone who's studied in a public classroom knows that there are a number of different learning styles. Not everyone's brain functions the same, and everyone has different needs to be met. Thanks to machine learning, meeting those needs is now much easier. Use of student-level projections allows for a standardized measure of success so that each student can learn and progress according to their own characteristics. This creates a much higher chance for a student to respond positively, and ultimately a better chance for success.

Stanford researchers have been working very hard on this one. They've created a machine learning algorithm that uses a massive image database to make skin cancer diagnoses. Using recent developments combining deep learning with visual identification, the algorithm is aimed at replacing the initial observation step of skin cancer diagnosis. This will make the process easier and more efficient for both the patient and the doctor. Though the algorithm currently exists on a computer, there's a plan in place to expand to mobile very soon.
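The general recipe behind systems like this, sketched below under the assumption of a locally supplied image-folder dataset, is transfer learning: take a CNN pretrained on ImageNet, replace its final layer with a benign/malignant head, and fine-tune. This is not the Stanford model, just a small PyTorch illustration of the approach.

```python
# Transfer-learning sketch for a binary lesion classifier; assumes a local
# dataset laid out as lesions/train/benign/*.jpg and lesions/train/malignant/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("lesions/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # older torchvision: pretrained=True
for p in model.parameters():
    p.requires_grad = False                       # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new head: benign vs. malignant

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```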

When you look at some of these cases, it's clear that there's a lot more at stake when we talk about the value of machine learning. Most of the buzzworthy tech news falls short of providing a sense of the huge possibilities at stake. Though current uses may seem simplistic, they are building blocks toward using the technology in much more widespread, impactful ways. The companies mentioned here are finding new and creative ways to use machine learning, and it's these stories that deserve a much bigger place in the conversation surrounding AI.

Kerry Liu is the CEO of Rubikloud, a retail intelligence platform.

Read the original here:

5 ways AI is already making a difference in society - VentureBeat


Here’s a reality check for AI in the enterprise – VentureBeat

Posted: at 2:49 am

When Slack introduced its new Enterprise Grid product in January, it pledged to bring much of the same day-to-day Slack experience that users have come to know and love to large organizations. Similarly, CRM giant Salesforce unveiled its new Einstein artificial intelligence service this past fall to great fanfare, touting it as "AI for everyone." But, as many enterprise leaders already know, and would-be disrupters are quickly learning, the promise of AI and its reality are, for now, two very different things.

While chatbots, predictive analytics and intelligent search are all the rage these days, AI's current business value is typically overstated. One analyst recently called Einstein "a great starting point," while IT departments are freaking out over security concerns such as phishing scams due to bots' potential to sound a little too much like real people. And that's key: most things AI today are just that, potential. While a lot of companies are trumpeting AI as a competitive differentiator, the technologies are still in their infancy and a lot more speculative than disruptive. That's no doubt a relief for those frightened of the self-aware, revenge-seeking androids from film and TV.

A reality check: AI is beginning to take on the low-hanging fruit of the modern enterprise, such as critical time-saving tasks like streamlining email inboxes, prioritizing/scheduling meetings and creating data-driven, daily to-do lists. Some solutions already use predictive analytics to mine the rich work graph of data within a company, adding valuable context around workflows.

As the technology improves, it will get much better at anticipating employees' needs as well. In the near future, voice recognition technology may even become a type of universal ID, allowing people easier access to information and experts from partner and customer networks, as well as their own companies. But to take AI further along the path from potential to practical, organizations must set aside the hype and get the right systems and processes in place. Here's how.

1. Overcome fragmentation

Data provides the brainpower for artificial intelligence. With the amount of data set to expand to a mind-boggling 44 zettabytes by 2020, the problem for machine learning systems is no longer a lack of information; it's the potential for fragmentation. Without unrestricted access to a ton of data, AI can't possibly live up to its promises, either real or imagined. Unfortunately, companies are adopting more and more disparate systems, and it's not helping that stack vendors are continually adding more disconnected tools to their productivity suites while emerging conversational apps are siloing information in ever-narrower message threads. Companies need their technology vendors to provide open APIs and connected hub solutions in order to make sure valuable data won't get locked inside niche tools, and to ensure the signal doesn't get lost in a clamor of extraneous noise.

2. Leverage work graph analytics

In order for work graph mapping to be effective, it's important to choose software vendors that not only enable relationships between people, applications and business processes, but that also provide visibility into individual interactions. The systems that most successfully leverage workplace AI are those that let you analyze not just work that's getting done in one particular tool, but also capture all the conversations, content, sentiment, actions, groups, teams and people across multiple collaboration apps. Only then do companies get insight into dynamic relationships across the full spectrum of work, so they can analyze their organizational network and effect positive change for better business outcomes in a repeatable manner. For example, intelligent work graph technology could help leaders figure out how to strategically build diverse project teams with the right experts in order to ensure successful outcomes.
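One way to picture such a work graph is as a plain interaction graph: people, tools, and artifacts as nodes, observed interactions as weighted edges, with simple centrality measures hinting at who connects work across teams. The events and scoring below are invented for illustration.

```python
# Toy "work graph": nodes are people and artifacts, edges are interactions.
import networkx as nx

events = [
    ("alice", "design-doc", "edited"), ("bob", "design-doc", "commented"),
    ("alice", "bob", "messaged"), ("carol", "budget-sheet", "edited"),
    ("bob", "budget-sheet", "commented"), ("carol", "alice", "messaged"),
]

G = nx.Graph()
for src, dst, kind in events:
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1        # repeated interactions strengthen ties
    else:
        G.add_edge(src, dst, weight=1, kind=kind)

# Degree centrality as a crude proxy for "who/what connects work across the org"
# (artifact nodes such as documents show up alongside people).
for node, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{node:13s} centrality={score:.2f}")
```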

3. Embrace a collaboration hub solution that brings it all together

Solving the challenges of fragmentation, as well as those surrounding the natural cultural resistance that exists in many organizations when it comes to adopting AI solutions, will require not only revamping current technologies and processes, but a change in mindset. The payoff, at least according to this Accenture report, will be nothing less than unprecedented opportunities for value creation. Fortunately, many software vendors are already finding ways to overcome the current and future obstacles to AI. Some collaboration hub solutions do a great job of enabling the transparency and ongoing discussions necessary to overcome cultural resistance, while also seamlessly integrating with the applications and tools companies have already invested in (including Microsoft Office 365, SharePoint, Box, Salesforce and others). In fact, without some type of agnostic and heterogeneous place to capture all of the conversations, content, sentiment and actions of individuals, groups and teams (the work graph) where they are accessible and searchable, AI will never be able to live up to its lofty promises.

The original Einstein (Albert) once famously said, "Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand." Replacing people is not (nor should ever be) the end goal of artificial intelligence. Instead, AI, by dealing with the knowledge side of work, will augment and expand our inherent human capabilities, including our imaginations, allowing both businesses and people to thrive.

By freeing siloed data and letting individuals and teams do their most creative work today, businesses can ensure that, when the future does come (and it's coming fast), they'll be ready. Remember, despite what you've heard, AI isn't the end of the world. I believe it's just the beginning.

Ofer Ben-David is the Executive Vice President of Engineering at Jive Software, a provider of communication and collaboration solutions for business.

Above: The Machine Intelligence Landscape. This article is part of our Artificial Intelligence series.

Read this article:

Here's a reality check for AI in the enterprise - VentureBeat


Singapore’s Saleswhale, which uses AI to automate sales emails, raises $1.2M – TechCrunch

Posted: at 2:49 am

Saleswhale, a Singapore-based startup that uses artificial intelligence to let companies automate their sales emails, has raised a $1.2 million seed round.

The capital was provided by VC firms Monks Hill Ventures, Gree Ventures and Wavemaker Partners, along with a number of angel investors. Those include early Dropbox hire Albert Ni, Pieter Walraven (who founded the now Google-owned Pie), Juha Paananen (who sold Nonstop Games to King.com), Royston Tay (who sold Zopim to Zendesk) and Bowei Lee, CEO of LCY Chemical Corp.

We first wrote about Saleswhale last August while it was in the Y Combinator program in the U.S., and since then the team has returned to Singapore and developed the business. Its product, called Engage, allows companies to set up virtual email accounts which communicate with sales prospects the same way a human employee would.

It isn't a full-on sales team replacement at this point; rather, it is focused on handling inbound leads or reigniting stale prospects. The AIs can use charts, figures and PDFs, and when a lead becomes warm again they can hand it over to a designated employee to take things further.

Engage is available for a base rate fee, after which additional cost comes per usage. There's a free 14-day trial to allow new users to test the product without that initial commitment, but Saleswhale has removed the free usage option it had in place last year, co-founder Gabriel Lim confirmed. The company isn't saying how much revenue it pulls in, but Lim said that base fees account for around 40 percent of income, with activity-based fees representing the rest.

Lim founded the company last year with fellow Singaporeans Venus Wong and Ethan Le in response to the frustration of the repetitive nature of training new sales staff, many of whom move on to new jobs. The team has since added two engineers to its ranks, and Lim said it plans to hire another three people who will cover sales, as well as more engineers. The current team are all engineers, so adding a dedicated sales team makes plenty of sense for the business now that it is maturing.

Most of the customer leads that Saleswhale itself gets are inbound, and mainly from tech startups, but Lim said the company has begun to hold initial discussions with firms in verticals such as real estate and automotive loans to diversify. But, right now, he told TechCrunch, the company has a huge backlog of interested customers that it is onboarding to its platform, which already counts dozens of paying customers. While he didn't give specific names, Lim said that, since February 2017, Saleswhale has helped its customers close $130,000 in deals, with a further $1.5 million worth of leads in the pipeline.

This seed money will go towards those expansion and hiring efforts. Lim said the company will look to close a Series A round in around 12 months. Interestingly, this investment is a first seed deal for Monks Hill Ventures, the $80 million fund that is primarily focused on Series A companies, so Saleswhale may already have a major contributor to its next round, depending on how things go.

See original here:

Singapore's Saleswhale, which uses AI to automate sales emails, raises $1.2M - TechCrunch


Eminent Astrophysicist Issues a Dire Warning on AI and Alien Life – Futurism

Posted: at 2:49 am

In Brief: Astrophysicist Lord Martin Rees believes that AI could surpass humans within a few hundred years, ushering in eons of domination by electronic intelligent life, the same kind of intelligent life he thinks may already exist elsewhere in the universe.

Fixed to This World

Lord Martin Rees, Astronomer Royal and University of Cambridge Emeritus Professor of Cosmology and Astrophysics, believes that machines could surpass humans within a few hundred years, ushering in eons of domination. He also cautions that while we will certainly discover more about the origins of biological life in the coming decades, we should recognize that alien intelligence may be electronic.

"Just because there's life elsewhere doesn't mean that there is intelligent life," Lord Rees told The Conversation. "My guess is that if we do detect an alien intelligence, it will be nothing like us. It will be some sort of electronic entity."

Rees thinks that there is a serious risk of a major setback of global proportions happening during this century, citing misuse of technology, bioterrorism, population growth, and increasing connectivity as problems that render humans more vulnerable now than we have ever been before. While we may be most at risk because of human activities, the ability of machines to outlast us may be a decisive factor in how life in the universe unfolds.

"If we look into the future, then it's quite likely that within a few centuries, machines will have taken over, and they will then have billions of years ahead of them," he explains. In other words, the period of time occupied by organic intelligence is just a thin sliver between early life and the long era of the machines.

In contrast to the delicate, specific needs of human life, electronic intelligent life is well-suited to space travel and equipped to outlast many global threats that could exterminate humans.

"[We] are likely to be fixed to this world. We will be able to look deeper and deeper into space, but traveling to worlds beyond our solar system will be a post-human enterprise," predicts Rees. "The journey times are just too great for mortal minds and bodies. If you're immortal, however, these distances become far less daunting. That journey will be made by robots, not us."

Rees isn't alone in his ideas. Several notable thinkers, such as Stephen Hawking, agree that artificial intelligences (AI) have the potential to wipe out human civilization. Others, such as Subbarao Kambhampati, the president of the Association for the Advancement of Artificial Intelligence, see malicious hacking of AI as the greatest threat we face. However, there are at least as many who disagree with these ideas, with even Hawking noting the potential benefits of AI.

As we train and educate AIs, shaping them in our own image, we imbue them with the ability to form emotional attachments that could deter them from wanting to hurt us. There is evidence that the Singularity might not be a single moment in time, but is instead a gradual process that is already happening, meaning that we are already adapting alongside AI.

But what if Rees is correct and humans are on track to self-annihilate? If we wipe ourselves out and AI is advanced enough to survive without us, then his predictions about biological life being a relative blip on the historical landscape and electronic intelligent life going on to master the universe will have been correct, but not because AI has turned on humans.

Ultimately, the idea of electronic life being uniquely well-suited to survive and thrive throughout the universe isn't that far-fetched. The question is, will we survive alongside it?

Follow this link:

Eminent Astrophysicist Issues a Dire Warning on AI and Alien Life - Futurism


AI is now the best friend IT ever had – VentureBeat

Posted: at 2:49 am

If you look past the hype, existential concerns, and fear that Alexa is a CIA mole, there are some genuinely exciting developments happening in the world of artificial intelligence.

Some of these have very specific applications, such as medical imaging, diagnostic capabilities, or satellite imagery recognition. Others, like digital assistants or even robots, are poised to dramatically impact how we live and work on a broader scale.

Of course, most of us care about more than just a series of clever tricks. We want AI that does more than the bare minimum, which thus far has been defined as tedious manual tasks that are a nuisance for humans to complete. We want AI that can be harnessed to truly augment and enhance human intelligence, to keep in stride with us and act as a personal, contextually aware virtual assistant. IT professionals, in particular, can't wait for the ultimate AI companion: an imaginary best work friend come to life who you can interact with across a number of different interface types.

But what exactly does this look like? First, as we're talking about a personal assistant, your AI should have a name; let's call your companion Bender. Bender is a bot that comes to you pre-trained with a number of basic skills and interfaces, such as voice recognition, natural language processing, an augmented reality system, and more. In addition, Bender is equipped with machine-learning algorithms to learn about your work life, work habits, and any real-world factors that affect your job.

So what can Bender do? Let's look at an example.

Say you're a developer at a global company, and you have teammates in India and Brazil. You don't speak Hindi or Portuguese and your teammates don't speak English, but you need to meet on a weekly call to go over your progress and timelines. Bender would translate the conversation for you in real time, and your teammates would have the same done by their own AI helpers. So you would communicate directly with an English-speaking interface via Bender, and your teammates would do the same in Hindi or Portuguese. Imagine Skype's real-time language capabilities combined with the magic of Douglas Adams' Babelfish, expertly trained in the jargon of technology professionals and able to bridge the gaps of cultural idioms and idiosyncrasies. Bender would help break down remote working barriers and increase team cohesion across global borders.

This may seem like a simple start, but now imagine how Bender can assist you across other types of interfaces, such as in AR. That is, Bender sees what you see. Look at an application architecture diagram, and Bender annotates it for you. Scribble things on whiteboards, notepads, or napkins, and Bender remembers what you wrote. Navigate a data center, and Bender can guide you to the right cabinet and piece of equipment.

Using glasses, contact lenses, or some yet-to-be-created neural interface, Bender can be equipped to supercharge your vision and memory recall. Between your workspace, laptop, smartphone, and smartwatch, you likely have a minimum of three different screens to juggle, but by augmenting your vision, Bender could replace them all. Need a small, transparent notification in your peripheral vision to get your attention? Done. Need a field of vision in widescreen format to get into the details? Easy. There's no limit to the ways Bender could augment what you see.

Probably one of the coolest and most useful ways an AI helper like Bender could be of service is by acting as the front-end entity to handle service calls and incident management. Bender would be your personal gatekeeper to gather all of the necessary background information and details of an incident, including cross-checking for similar customer or vendor reports, analyzing the IT environment for any abnormalities or recent changes, and taking care of anything else you might handle manually if you answered the call.

By the time you're engaged, Bender will have done all the heavy lifting to compile the information you need and eliminate any of the usual suspects behind a problem. You get to stay focused on tasks where you can add maximum value, and interruptions are fielded by an AI assistant trained in your ways and environment. And the more you use Bender, the more Bender learns to finish your sentences and anticipate how you would tackle a problem.
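A hedged sketch of that triage flow: gather context, rule out the usual suspects, and only then page a human with a briefing. Every function below is a hypothetical stub; none of it is a real xMatters or ticketing API.

```python
# Hypothetical incident-triage front end in the spirit of "Bender fields the call first".
def similar_past_incidents(description):
    return [{"id": "INC-1042", "resolution": "restarted cache tier"}]

def recent_changes(service):
    return [{"change": "config push 22:14 UTC", "service": service}]

def environment_anomalies(service):
    return [{"metric": "p99 latency", "status": "3x baseline"}]

def triage(alert):
    briefing = {
        "alert": alert,
        "similar_incidents": similar_past_incidents(alert["description"]),
        "recent_changes": recent_changes(alert["service"]),
        "anomalies": environment_anomalies(alert["service"]),
    }
    # Only escalate to a person if the automated checks can't explain it away.
    briefing["escalate_to_human"] = bool(briefing["anomalies"])
    return briefing

alert = {"service": "checkout", "severity": "high",
         "description": "customers report failed payments"}
print(triage(alert))
```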

Eventually, this kind of AI-assisted incident management could expand to become even more contextual and proactive. Let's say, for instance, Bender wakes you up in the middle of the night with a customer emergency. Bender is well aware that, at this ungodly hour, you won't be anywhere near your computer and you won't pick up your smartphone, but you will immediately put your AR-enabled eyewear on. Bender thus briefs you on the problem via AR first, until you get to another device where you can more deeply tackle the issue.

AI companions represent an entirely new category of IT tools, going beyond monitoring, data analytics, issue tracking, or collaboration. It can become a whole new market fed by an ecosystem of other tools, all augmented by person-, role-, and company-specific knowledge. The conversation surrounding AI has accelerated rapidly over the last couple of years, spinning up the latest technology bandwagon that every enterprise and its parent company is hungry to hop on, but this is something to truly be excited about. This goes above and beyond the hype.

For busy professionals, AIs like Bender hold the promise of scaling ourselves, minimizing cognitive load, and allowing us to manage the increasingly complex environments in which we operate.

Abbas Haider Ali is the CTO at xMatters, an IT automation company.

Above: The Machine Intelligence Landscape. This article is part of our Artificial Intelligence series.

Read the original post:

AI is now the best friend IT ever had - VentureBeat


AI Won’t Change Companies Without Great UX – Harvard Business Review

Posted: April 7, 2017 at 9:00 pm

Executive Summary

As with the adoption of all technology, user experience trumps technical refinements. Many organizations implementing AI initiatives are making a mistake by focusing on smarter algorithms over compelling use cases. Use cases where people's jobs become simpler and more productive are essential to AI workplace adoption. Focusing on clearer, crisper use cases means better and more productive relationships between machines and humans. This article offers five use case categories (assistant, guide, consultant, colleague, boss) that emerge when companies use AI-empowered people and processes over autonomous systems. Each describes how intelligent entities work together to get the job done and how, depending on the process, AI makes the human element matter even more.

As artificial intelligence algorithms infiltrate the enterprise, organizational learning matters as much as machine learning. How should smart management teams maximize the economic value of smarter systems?

Business process redesign and better training are important, but better use cases (those real-world tasks and interactions that determine everyday business outcomes) offer the biggest payoffs. Privileging smarter algorithms over thoughtful use cases is the most pernicious mistake I see in current enterprise AI initiatives. Something's wrong when optimizing process technologies takes precedence over how work actually gets done.

Unless we're actually automating a process (that is, taking humans out of the loop), AI algorithms should make people's jobs simpler, easier, and more productive. Identifying use cases where AI adds as much value to people's performance as to process efficiencies is essential to successful enterprise adoption. By contrast, companies committed to giving smart machines greater autonomy and control focus on governance and decision rights.

Strategically speaking, a brilliant data-driven algorithm typically matters less than thoughtful UX design. Thoughtful UX designs can better train machine learning systems to become even smarter. The most effective data scientists I know learn from use-case and UX-driven insights. At one industrial controls company, for example, the data scientists discovered that users of one of their smart systems informally used a dataset to help prioritize customer responses. That unexpected use case led to a retraining of the original algorithm.

Focusing on clearer, cleaner use cases means better and more productive relationships between AI and its humans. The division of labor becomes a source of design inspiration and exploration. The quest for better outcomes shifts from training smarter algorithms to figuring out how the use case should evolve. That drives machine learning and organizational learning alike.

Five dominant use case categories emerge when organizations pick AI-empowered people and processes over autonomous systems. Unsurprisingly, these categories describe how intelligent entities work together to get the job done and highlight that a personal touch still matters. Depending on the person, process, and desired outcome, AI can make the human element matter more.

Assistants

Alexa, Siri and Cortana already embody real-world use cases for AI assistantship. In Amazon's felicitous phrasing, assistants have "skills" enabling them to perform moderately complex tasks. Whether mediated by voice or chatbot, simple and straightforward interfaces make assistants fast and easy to use. Their effectiveness is predicated as much on people knowing exactly what they need as on algorithmic sophistication. As digital assistants become smarter and more knowledgeable, their task range and repertoire expand. The most effective assistants learn to prompt their users with timely questions and key words to improve both interactions and outcomes.

Guide

Where assistants perform requested tasks, guides help users navigate task complexity to achieve desired outcomes. Using Waze to drive through cross-town traffic troubled by construction is one example; using an augmented-reality tool to diagnose and repair a mobile device or HVAC system would be another. Guides digitally show and tell their humans what their next steps should be and, should missteps occur, suggest alternate paths to success. Guides are smart software sherpas whose domain expertise is dedicated to getting their users to desired destinations.

Consultant

In contrast to guides, consultants go well beyond navigation and destination expertise. AI consultants span use cases where workers need either just-in-time expertise or bespoke advice to solve problems. Consultants, like their human counterparts, offer options and explanations, as well as reasons and rationales. A software development project manager needs to evaluate scheduling trade-offs; AI consultants ask questions and elicit information allowing specific next step recommendations. AI consultants can include relevant links, project histories and reports for context. More sophisticated consultants offer strategic advice to complement their tactical recommendations.

Consultants customize their functional knowledge (scheduling, budgeting, resource allocation, procurement, purchasing, graphic design, etc.) to their human clients' use case needs. They are robo-advisers dispassionately dispensing their domain expertise.

Colleague

A colleague is like a consultant but with a data-driven and analytic grasp of the local situation. That is, a colleague's domain expertise is the organization itself. Colleagues have access to the relevant workplace analytics, enterprise budgets, schedules, plans, priorities and presentations to offer organizational advice to coworkers. Colleague use cases revolve around the advice managers and workers need to work more efficiently and effectively in the enterprise. An AI colleague might recommend referencing and/or attaching a presentation in an email; which project leaders to ask for advice; what budget template is appropriate for a requisition; what client contacts need an early warning, etc. Colleagues are more collaborator than tool; they offer data-driven organizational insight and awareness. Like their human counterparts, they serve as sounding boards that help clarify communications, aspirations and risk.

Boss

Where colleagues and consultants advise, bosses direct. Boss AI tells its humans what to do next. Boss use cases eliminate options, choices and ambiguity in favor of dictates, decrees and directives to be obeyed. Start doing this; stop doing that; change this schedule; shrink that budget; send this memo to your team.

Boss AI is designed for obedience and compliance; the human in the loop must yield to the algorithm in the system. Boss AI represents the slippery slope to autonomy, the workplace counterpart to an autopilot taking over an airplane cockpit or an automotive collision avoidance system slamming on the brakes. Specific use cases and circumstances trigger human subordination to software. But bossware's true test is human: if humans aren't sanctioned or fired for disobedience, then the software really isn't a boss.

As the last example illustrates, these distinct categories can swiftly blur into each other. It's easy to conceive of scenarios and use cases where guides can become assistants, assistants situationally escalate into colleagues, and consultants transform into bosses. But the fundamental differences and distinctions these five categories present should inject real rigor and discipline into imagining their futures.

Trust is implicit in all five categories. Do workers trust their assistants to do what they've been told or their guides to get them where they want to go? Do managers trust the competence of bossware, or that their colleagues won't betray them? Trust and transparency issues persist regardless of how smart AI software becomes, and they become even more important as the reasons for decisions become overwhelmingly complex and sophisticated. One risk: these artificial intelligences evolve or devolve into frenemies. That is, software that is simultaneously friend and rival to its human complement. Consequently, use cases become essential to identifying what kinds of interfaces and interactions facilitate human/machine trust.

Use cases may prove vital to empowering smart human/smart machine productivity. But reality suggests their ultimate value may come from how thoughtfully they accelerate the organization's advance to greater automation and autonomy. The true organizational impact and influence of these categories may be that they prove to be the best way for humans to train their successors.

See the original post:

AI Won't Change Companies Without Great UX - Harvard Business Review


Facebook Messenger’s M assistant gets new AI powers – CNET

Posted: at 9:00 pm

The good people at Facebook must be working overtime.

After implementing the Snapchat-esque Stories feature and trialing a second News Feed, the company is adding some AI components to Messenger's M assistant.

Launching in the US on Thursday, Facebook said in a blog that M will offer helpful suggestions during conversations with friends.

M knows if you're talking about buying something off a friend, for instance, and will automatically offer payment options. Facebook also said M can offer to share your exact location with a friend during a conversation, and will offer the option of a poll if you're in a group chat and something needs deciding.

M's AI abilities have hit both iOS and Android, but right now they're only available to users in the US. It was noted, though, they will "eventually roll out to other countries." Facebook also promised this was the beginning, saying M's predictive powers will only get bigger from here.



Read the rest here:

Facebook Messenger's M assistant gets new AI powers - CNET

