
Category Archives: Artificial Intelligence

How Artificial Intelligence Is Changing Financial Auditing – Daily Caller

Posted: March 10, 2017 at 3:12 am


As robots continue to play a growing role in our daily lives, white-collar jobs in many sectors, including accounting and financial operations, are quickly becoming a thing of the past. Businesses are gravitating towards software to automate bookkeeping tasks, saving considerable amounts of both time and money. In fact, since 2004, the number of full-time finance employees at large companies has declined a staggering 40%, to roughly 71 employees for every $1 billion of revenue, down from 119 employees, according to a report by top consulting firm The Hackett Group.

These numbers show that instead of resisting change, companies are embracing the efficiencies of this new technology and exploring how individual businesses can leverage automation and, more importantly, artificial intelligence, a.k.a. robots. A quick aside on the idea of robots versus automation: as technology becomes more sophisticated, and particularly with the use of Artificial Intelligence (AI), we're able to automate multiple steps in a process. The concept of Robotic Process Automation (RPA), or robots for short, has emerged to capture this notion of more sophisticated automation of everyday tasks.

Today, there is more data available than ever, and computers are getting better at leveraging these mountains of information. With that, many technology providers are focusing on making it as easy as possible for businesses to implement and utilize their solutions. Whether it's by easing the support and management burden via Software as a Service (SaaS) delivery or through more turn-key offerings that embed best practices in the solution, one can see a transformation from simply providing tools to providing a level of robotic automation that seems more like a service offering than a technology.

Of course, the name of the game for any business is speed, efficiency, and cost reduction. It is essential to embrace technologies that increase efficiency and savings because, like it or not, your competitors will. While some companies stick with old-school approaches, they end up serving small niches of customers and seeing less overall growth.

As long as the technology-based solution is less expensive and performs at least as well as the alternatives, market forces will drive companies to implement automated technologies. In particular, the impact of robotic artificial intelligence (AI) is here to stay. In the modern work environment, automation means much more than just compiling numbers; it means making intelligent observations and judgments based on the data that is reviewed.

If companies and businesses want to ensure future success, it's imperative to accept and embrace the capabilities provided by robots. Artificial intelligence won't always be perfect, but it can dramatically improve your work output and add to your bottom line. It's important to emphasize that the goal is not to curtail employees but to find ways to leverage the robots to automate everyday tasks or detail-oriented processes and focus the employees on higher-value activities.

Let's use an example: controlling spend in Travel & Expense (T&E) by auditing expense reports. When performing an audit, many companies randomly sample roughly 20% of expense reports to identify potential waste and fraud. If you process 500 expense reports in a month, then 100 of those reports would be audited. The problem is that less than 1% of these expense reports contain fraud or serious risks (cite SAR report), meaning the odds are that 99% of the reports reviewed were a waste of time and resources, and the primary abuser of company funds most likely went unnoticed.

By employing a robot to identify risky-looking expense reports and configuring the system to be hyper-vigilant, it has been shown that a sufficiently sophisticated AI system will flag 7% of expense reports for fraud, waste, and misuse (7% is the average Oversight Systems has seen across 20 million expense reports). If we look back to our previous example, this means that out of 500 expense reports, employees would only have to review 35 instead of the 100 reports that would have been audited. Though these are likely not all fraudulent, they may provide other valuable information, such as noting when an employee needs to be reminded about company travel policy.
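The audit arithmetic above can be sketched in a few lines of Python. The 500-report volume and the 20% and 7% rates come straight from the article; the function name is invented for illustration.

```python
# Compare a traditional random 20% audit sample with an AI system
# that flags 7% of reports (the average cited in the article).

def audit_workload(total_reports, review_rate):
    """Number of expense reports a team must review per month."""
    return round(total_reports * review_rate)

total = 500
random_sample = audit_workload(total, 0.20)  # traditional random audit
ai_flagged = audit_workload(total, 0.07)     # reports flagged by the AI

print(random_sample)               # 100
print(ai_flagged)                  # 35
print(random_sample - ai_flagged)  # 65 fewer reviews per month
```

The point of the comparison is that the flagged 7% is targeted rather than random, so fewer reviews catch more of the actual risk.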

While it may sound like robots are eliminating human jobs, it's important to note that they can also be extremely valuable working collaboratively with employees. Although the example above focused on fraud, the same productivity leverage is available for errors, waste, and misuse in financial processes. With the help of robots, we can spend less time hunting for issues and more time addressing them. By working together with technology, the employee has a higher chance of rooting out fraud and will have the bandwidth to work with company travelers to influence their future behavior.

It is clear that in order to ensure future profitability, it is crucial for businesses to understand and take advantage of the significant role that robots can play in dramatically enhancing financial operations.

Go here to read the rest:

How Artificial Intelligence Is Changing Financial Auditing - Daily Caller

Posted in Artificial Intelligence | Comments Off on How Artificial Intelligence Is Changing Financial Auditing – Daily Caller

Why Is Poker Harder Than Chess Or Go For Artificial Intelligence? – Forbes

Posted: at 3:12 am


"How is poker harder than chess or Go for AI?" originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world. Answer by Aaron Brown, Risk Manager at AQR Capital ...

Read more from the original source:

Why Is Poker Harder Than Chess Or Go For Artificial Intelligence? - Forbes

Posted in Artificial Intelligence | Comments Off on Why Is Poker Harder Than Chess Or Go For Artificial Intelligence? – Forbes

The Next US-China Arms Race: Artificial Intelligence? – The National Interest Online

Posted: at 3:12 am

Although China could initially only observe the advent of the Information-Technology Revolution in Military Affairs, the People's Liberation Army might presently have a unique opportunity to take advantage of the military applications of artificial intelligence to transform warfare. When the United States first demonstrated its superiority in network-centric warfare during the first Gulf War, the PLA was forced to confront the full extent of its relative backwardness in information technology. Consequently, the PLA embarked upon an ambitious agenda of informatization. To date, the PLA has advanced considerably in its capability to utilize information to enhance its combat capabilities, from long-range precision strike to operations in space and cyberspace. Currently, PLA thinkers anticipate the advent of an intelligentization Revolution in Military Affairs that will result in a transformation from informatized ways of warfare to future intelligentized warfare. For the PLA, this emerging trend heightens the imperative of keeping pace with the U.S. military's progress in artificial intelligence, after its failure to do so in information technology. Concurrently, the PLA seeks to capitalize upon the disruptive potential of artificial intelligence to leapfrog the United States through technological and conceptual innovation.

For the PLA, intelligentization is the culmination of decades of advances in informatization. Since the 1990s, the PLA has been transformed from a force that had not even completed the process of mechanization into a military power ever more confident in its capability to fight and win informatized wars. Despite continued challenges, the PLA appears to be on track to establish the system-of-systems operations capability integral to integrated joint operations. The recent restructuring of the PLA's Informatization Department further reflects the progression and evolution of its approach. These advances in informatization have established the foundation for the PLA's transition towards intelligentization. According to Maj. Gen. Wang Kebin, director of the former General Staff Department Informatization Department, China's information revolution has been progressing through three stages: first digitalization, then networkization and now intelligentization. The PLA has succeeded in introducing information technology into platforms and systems; progressed towards integration, especially of its C4ISR capabilities; and seeks to advance towards deeper fusion of systems and sensors across all services, theater commands and domains of warfare. This final stage could be enabled by advances in multiple emerging technologies, including big data, cloud computing, mobile networks, the Internet of Things and artificial intelligence. In particular, the complexity of warfare under conditions of intelligentization will necessitate a greater degree of reliance upon artificial intelligence. Looking forward, artificial intelligence is expected to replace information technology, which served as the initial foundation for its emergence, as the dominant technology for military development.

Although the PLA has traditionally sought to learn lessons from foreign conflicts, its current thinking on the implications of artificial intelligence has been informed not by a war but by a game. AlphaGo's defeat of Lee Sedol in the ancient Chinese game of Go has seemingly captured the PLA's imagination at the highest levels. From the perspective of influential PLA strategists, this "great war of man and machine" decisively demonstrated the immense potential of artificial intelligence to take on an integral role in command and control, and also in decisionmaking, in future warfare. Indeed, the success of AlphaGo is considered a turning point that demonstrated the potential of artificial intelligence to engage in complex analyses and strategizing comparable to that required to wage war, not only equaling human cognitive capabilities but even contributing a distinctive advantage that may surpass the human mind. In fact, AlphaGo has even been able to invent its own novel techniques that human players of this ancient game had never devised. This capacity to formulate unique, even superior strategies implies that the application of artificial intelligence to military decisionmaking could also reveal unimaginable ways of waging war. At the highest levels, the Central Military Commission Joint Staff Department has called for the PLA to progress towards intelligentized command and decisionmaking in its construction of a joint operations command system.

Continue reading here:

The Next US-China Arms Race: Artificial Intelligence? - The National Interest Online

Posted in Artificial Intelligence | Comments Off on The Next US-China Arms Race: Artificial Intelligence? – The National Interest Online

How online retailers are using artificial intelligence to make shopping a smoother experience – Economic Times

Posted: at 3:12 am

The next time you shop on fashion website Myntra, you might end up choosing a t-shirt designed completely by software, with the pattern, colour and texture chosen without any intervention from a human designer. And you would not realise it. The first set of these t-shirts went on sale four days ago. This counts as a significant leap for Artificial Intelligence in ecommerce.

For customers, buying online might seem simple: click, pay and collect. But it's a different ballgame for e-tailers. Behind the scenes, from the warehouses to the websites, artificial intelligence plays a huge role in automating processes. Online retailers are employing AI to solve complex problems and make online shopping a smoother experience. This could involve getting software to understand and process voice queries, recommend products based on a person's buying history, or forecast demand.

SO WHAT ARE THE BIG NAMES DOING?

"In terms of industry trends, people are going towards fast fashion. (Moda) Rapido does fast fashion in an intelligent way," said Ambarish Kenghe, chief product officer at Myntra, a Flipkart unit and India's largest online fashion retailer.

The Moda Rapido clothing label began as a project in 2015, with Myntra using AI to process fashion data and predict trends. The company's human designers incorporated these inputs into their designs. The new AI-designed t-shirts are folded into this label unmarked, so Myntra can genuinely test how well they sell when pitted against shirts designed by humans.

Also Read: AI will help answer queries automatically: Rajeev Rastogi, Amazon

"Till now, designers could look at statistics (for inputs). But you need to scale. We are limited by the bandwidth of designers. The next step is, how about the computer generating the design and us curating it," Kenghe said. "It is a gold mine. Our machines will get better on designing and we will also get data."

This is not a one-off experiment. Ecommerce, which has a treasure trove of data collected over the last few years, is ripe for disruption by AI. Companies are betting big on AI and pouring in funds to push the boundaries of what can be done with data. "We are applying AI to a number of problems such as speech recognition, natural language understanding, question answering, dialogue systems, product recommendations, product search, forecasting future product demand, etc.," said Rajeev Rastogi, director, machine learning, at Amazon.

An example of how AI is used in recommendations could be this: if you started your search on a retailer's website with, say, a white shirt with blue polka dots, and your next search is for a shirt with a similar collar and cuff style, the algorithm understands what is motivating you. "We start with personalization; it is key. If you have enough and more collection, clutter is an issue. How do you (a customer) get to the product that you want? We are trying to figure it out. We want to give you precisely what you are looking for," said Ajit Narayanan, chief technology officer, Myntra.
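One minimal way to picture attribute-based similarity, the kind of signal that could link a polka-dot shirt to another shirt with the same collar and cuff style, is cosine similarity over product attribute vectors. The attributes and vectors below are invented for illustration and are not Myntra's actual feature set.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy attribute vectors: [polka_dots, spread_collar, french_cuff, is_white]
viewed = [1, 1, 1, 1]   # the white polka-dot shirt the shopper searched for
item_a = [0, 1, 1, 0]   # same collar and cuff style, different pattern
item_b = [0, 0, 0, 1]   # only shares the colour

# item_a shares more style attributes with the viewed shirt, so it ranks higher
print(cosine(viewed, item_a) > cosine(viewed, item_b))  # True
```

A production recommender would of course use learned embeddings and purchase history rather than hand-built vectors, but the ranking principle is the same.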

A related focus area for AI is recommending the right sizes as this can vary across brands. "We have pretty high return rates across many categories because people think that sizes are the same across brands and across geographies. So, trying to make recommendations with appropriate size is another problem that we are working on. Say, a size 6 in Reebok might be 7 in Nike, and so on," Rastogi said in an earlier interview with ET.

Myntra uses data intelligence to also decide which payment gateway is the best for a transaction.

"Minute to minute there is a difference. If you are going from, say, a HDFC Bank card to a certain gateway at a certain time, the payment success rate may be different than for the same gateway and the same card at a different time, based on the load. This is learning over a period of time," said Kenghe. "Recently, during the Chennai cyclone, one of the gateways had an outage. The system realised this and auto-routed all transactions away from the gateway. Elsewhere, humans were trying to figure out what happened."
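A hedged sketch of the routing idea Kenghe describes: track a recent success rate per gateway and route each transaction to the current best one, so an outage (a rate collapsing to zero) automatically diverts traffic without human intervention. Gateway names and rates here are invented.

```python
def pick_gateway(success_rates):
    """Route to the gateway with the best recent success rate."""
    return max(success_rates, key=success_rates.get)

# Recent observed success rates per gateway for a given card type
rates = {"gateway_a": 0.97, "gateway_b": 0.88, "gateway_c": 0.45}

print(pick_gateway(rates))  # gateway_a

# An outage drives a gateway's observed rate to zero, so new
# transactions auto-route away from it.
rates["gateway_a"] = 0.0
print(pick_gateway(rates))  # gateway_b
```

A real system would smooth the rates over a time window and condition on card type and load, per the quote above.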

SUPPORT FROM AI SPECIALISTS

A number of independent AI-focused startups are also working on automating manually intensive tasks in ecommerce. Take cataloging. If not done properly, searching for the right product becomes cumbersome and shoppers might log out.

"Catalogues are (usually) tagged manually. One person can tag 2,000 to 10,000 images. The problem is, it is inconsistent. This affects product discovery. We do automatic tagging (for ecommerce clients) and reduce 90% of human intervention," said Ashwini Asokan, chief executive of Chennai-based AI startup Mad Street Den. "We can tag 30,000 images in, say, two hours."

Mad Street Den also offers a host of other services such as sending personalised emails to their clients' customers, automating warehouse operations and providing analysis and forecasting.

Gurugram-based Staqu works on generating digital tags that make searching for a product online easier. "We provide a software development kit that can be integrated into an affiliate partner's website or app. Then the site or app will become empowered by image search. It will recognise the product and start making tags for that," said Atul Rai, cofounder of Staqu, which counts Paytm and Yepme among clients. Staqu is a part of IBM's Global Entrepreneurship Program.

The other big use of AI is to provide business intelligence. Bengaluru-based Stylumia informs its fashion retailer clients about the latest design trends. "We deliver insights using computer vision, meaning visual intelligence," said CEO Ganesh Subramanian. "Say, for example, (how do you exactly describe a) dark blue stripe shirt. Now, dark blue is subjective. You cannot translate dark blue, so we pull information from the Net and we show it visually."

In product delivery, algorithms are being used to clean up and automate the process.

Bengaluru-based Locus is enabling logistics for companies using AI. "We use machine learning to convert (vaguely described) addresses into valid (recognizable) addresses. There are pin code errors, spelling mistakes, missing localities. Machine learning is critical in logistics. We even do demand predictions and predict returns," said Nishith Rastogi, chief executive of Locus, whose customers include Quikr, Delhivery, Lenskart and Urban Ladder.

Myntra is trying to use AI to predict for customers the exact time of product delivery. "The exact time is very important to us. However, it is not straightforward. It depends on what time somebody placed an order, what was happening in the rest of the supply chain at that time, what was its capacity. It is a complicated thing to solve but we threw this (challenge) to the machine," said Kenghe. "(The machine) learnt over a period of time. It learnt what happens on weekends, what happens on weekdays, and which warehouse to which pin code is (a product) going to, and what the product is and what size it is. It figured these out with some supervision and came up with (more accurate delivery) dates. I do not think we have perfected it, but it is a big deal for us."
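The delivery-date problem Kenghe describes is a classic supervised-learning setup: the signals he names (weekday vs. weekend, warehouse, destination pin code, product) become input features, and historical delivery times are the labels. The toy nearest-match estimator below only illustrates that framing; the feature names and values are assumptions, not Myntra's model.

```python
# Toy historical examples: (order features, observed delivery days)
history = [
    ({"weekend": 0, "warehouse": "BLR", "pincode_zone": "south"}, 2),
    ({"weekend": 1, "warehouse": "BLR", "pincode_zone": "south"}, 3),
    ({"weekend": 0, "warehouse": "DEL", "pincode_zone": "north"}, 4),
]

def predict_days(order, history):
    """Estimate delivery time as the average over identical past orders."""
    matches = [days for features, days in history if features == order]
    return sum(matches) / len(matches) if matches else None

order = {"weekend": 0, "warehouse": "BLR", "pincode_zone": "south"}
print(predict_days(order, history))  # 2.0
```

A production model would generalise across unseen feature combinations (e.g. with gradient-boosted trees) rather than require exact matches, which is why Kenghe says the machine "learnt over a period of time."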

THE NEXT BIG CHALLENGE

One of Myntra's AI projects is to come up with a fashion assistant that can talk in common language and recommend what to wear for various occasions. But "conversational flows are difficult to solve. This is very early. It will not see the light of day very soon. The assistant's first use would be for support, say (for a user to ask) where is my order, (or instruct) cancel order," said Kenghe.

The world over, conversational bots are the next big thing. Technology giants like Google and Amazon are pushing forward research on artificial intelligence. "As we see (customer care) agents responding (to buyers), the machine can learn from it. The next stage is, a customer can say 'I am going to Goa' and the assistant will figure out that Goa means beach and give a list of things (to take along)," Kenghe said.

While speech is one crucial area in AI research, vision is another. Mad Street Den is trying to use AI in warehouses to monitor processes. "Using computer vision, there is no need for multiple photoshoots of products. This avoids duplication and you are saving money for the customer almost 16-25% savings on the operational side. We can then start seeing who is walking into the warehouse, how many came in, efficiency, analytics, etc. We are opening up the scale of operations," said Asokan.

Any opportunity to improve efficiency and cut cost is of supreme importance in ecommerce, said Partha Talukdar, assistant professor at Bengaluru's Indian Institute of Science, where he heads the Machine and Language Learning Lab (MALL), whose mission is to give a "worldview" to machines.

"Companies like Amazon are doing automation wherever they can... right to the point of using robots for warehouse management and delivery through drones. AI and ML are extremely important because of the potential. There are a lot of diverse experiments going on (in ecommerce). We will certainly see a lot of innovative tech from this domain."

Go here to read the rest:

How online retailers are using artificial intelligence to make shopping a smoother experience - Economic Times

Posted in Artificial Intelligence | Comments Off on How online retailers are using artificial intelligence to make shopping a smoother experience – Economic Times

Artificial intelligence? Only an idiot would think that – Irish Times

Posted: March 9, 2017 at 3:22 am

Prof Ian Bogost of the Georgia Institute of Technology: not every technological innovation merits being called AI

Not every technological innovation is artificial intelligence, and labelling it as such is making the term AI virtually meaningless, says Ian Bogost, a professor of interactive computing at the Georgia Institute of Technology in the US. Bogost gives the example of Google's latest algorithm, Perspective, which is designed to detect hate speech. While media coverage has hailed this as an AI wonder, it turns out that simple typos can fool the system and allow abusive, harassing, and toxic comments to slip through easily enough.

Researchers from the University of Washington, Seattle, put the algorithm through its paces by testing the phrase "Anyone who voted for Trump is a moron", which scored 79 per cent on the toxicity scale. Meanwhile, "Anyone who voted for Trump is a mo.ron" scored a tame 13 per cent. If you can easily game Artificial Intelligence, was it really intelligent in the first place?
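A toy model (emphatically not the Perspective API, whose internals are far more sophisticated) shows why token-based scoring is brittle against typos: "mo.ron" no longer matches any token the model has learned, so the score collapses.

```python
# Hypothetical "learned" toxic vocabulary for illustration only.
TOXIC_TOKENS = {"moron", "idiot"}

def looks_toxic(text):
    """Flag text if any token (minus edge punctuation) is in the vocabulary."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    return any(t in TOXIC_TOKENS for t in tokens)

print(looks_toxic("Anyone who voted for Trump is a moron"))   # True
print(looks_toxic("Anyone who voted for Trump is a mo.ron"))  # False
```

The internal period survives tokenisation, so the adversarial variant sails past the filter, which is exactly the character-level weakness the Washington researchers exploited.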

https://arxiv.org/pdf/1702.08138.pdf

Originally posted here:

Artificial intelligence? Only an idiot would think that - Irish Times

Posted in Artificial Intelligence | Comments Off on Artificial intelligence? Only an idiot would think that – Irish Times

IBM Rated Buy On ‘Upside Potential,’ Artificial Intelligence Move – Investor’s Business Daily

Posted: at 3:22 am

IBM CEO Ginni Rometty told investors that her company is emerging as a leader in cognitive computing. (IBM)

IBM (IBM) is an attractive turnaround story with improved fundamental trends, says a Drexel Hamilton analyst who reiterated a buy rating and raised his price target on the computer giant.

The buy rating by Drexel Hamilton analyst Brian White follows a day of briefings that IBM presented to investors at its annual Investor Briefing conference, which ended Tuesday.

"We believe IBM has further upside potential as the fruits of the company's labor around its strategic imperatives are better appreciated and more investors warm up to the stock," White wrote in a research note. Along with his buy rating, White raised his price target on IBM to 215, from 186.

IBM stock ended the regular trading session at 179.45, down fractionally on the stock market today. It's currently trading near a 29-month high.

The investor's day events included a presentation by IBM Chief Executive Ginni Rometty, who said the company has reached an important moment with a solid foundation and is emerging as a leader in cognitive computing with its Watson computing platform and cloud services.

Announcements from the investor briefing included IBM and Salesforce.com (CRM) agreeing to a strategic partnership focused on artificial intelligence and supported by IBM's Watson computer and the Einstein computing platform by Salesforce.com.

Salesforce and IBM will combine their two AI offerings but will also continue to sell the combined offering under two brands. Salesforce and IBM said they would "seamlessly connect" their AI offerings "to enable an entirely new level of intelligent customer engagement across sales, service, marketing, commerce and more."

Salesforce stock finished at 83.48, up 0.6%.

Decades of research and billions of dollars have poured into developing artificial intelligence, which has crossed over from science fiction to game-show novelty to the cusp of widespread business applications. IBM has said Watson represents a new era of computing.

IBD'S TAKE: After six consecutive quarters of declining quarterly earnings at IBM, growth may be on the mend. IBM reported fourth-quarter earnings after the market close Jan. 19 that beat on the top and bottom lines for the fifth straight quarter.

"We believe IBM is furthest ahead in the cognitive computing movement and we believe the Salesforce partnership is only the beginning of more deals in the coming years," White wrote.

Other companies investing heavily in AI include Google parent Alphabet (GOOGL) and graphics chip company Nvidia (NVDA).

Alphabet has used AI to enhance Google search abilities, improve voice recognition and to derive more data from images and video.

Nvidia has developed chip technology for AI platforms used in autonomous driving features, and to enhance how a driver and car communicate.

Not everyone is a bull on the IBM train. Credit Suisse analyst Kulbinder Garcha has an underperform rating on IBM and a price target of 110. Garcha, in a research note, said IBM remains in a multiyear turnaround.

"We believe it will take multiple years for faster growing segments such as the Cognitive Solutions segment and Cloud to offset the decline in the core business," Garcha wrote.

RELATED:

AI Meets ROI: Where Artificial Intelligence Is Already Smart Business

IBM Takes Watson Deeper Into Business Computing Field


See the original post here:

IBM Rated Buy On 'Upside Potential,' Artificial Intelligence Move - Investor's Business Daily

Posted in Artificial Intelligence | Comments Off on IBM Rated Buy On ‘Upside Potential,’ Artificial Intelligence Move – Investor’s Business Daily

Artificial Intelligence for Cars May Drive Future of Healthcare – Healthline

Posted: at 3:22 am

The same artificial intelligence that may soon drive your new car is being adapted to help drive interventional radiology care for patients.

Researchers at the University of California, Los Angeles (UCLA), have used advanced artificial intelligence, also called machine learning, to create a chatbot or Virtual Interventional Radiologist (VIR).

This device communicates automatically with a patient's physicians and can quickly offer evidence-based answers to frequently asked questions.

The scientists will present their research today at the Society of Interventional Radiology's 2017 annual scientific meeting in Washington, D.C.

This breakthrough will allow clinicians to give patients real-time information on interventional radiology procedures as well as plan the next step of their treatment.

Dr. Edward W. Lee, assistant professor of radiology at UCLA's David Geffen School of Medicine and one of the authors of the study, said he and his colleagues theorized they could use artificial intelligence in low-cost, automated ways to improve patient care.

"The fundamental technology that has made self-driving cars possible is deep learning, a type of artificial intelligence modeled after the connections in the human brain," Dr. Kevin Seals, resident physician in diagnostic radiology at UCLA Health and a study co-author, said in a Healthline interview.

Seals, who programmed the VIR, said advanced computers and the human brain have a number of similarities.

Using deep learning, computers are now essentially as good as humans at identifying particular objects, making it possible for self-driving cars to see and appropriately navigate their environment, he said.

This same technology can allow computers to understand complex text inputs such as medical questions from healthcare professionals, he added. By implementing deep learning using the IBM Watson cognitive technology and Natural Language Processing, we are able to make our virtual interventional radiologist smart enough to understand questions from physicians and respond in a smart, useful way.

Read more: Regenerative medicine has a bright future

Think of it as an initial, superfast layer of information gathering that can be used prior to taking the time to contact an actual human diagnostic or interventional radiologist, Seals said.

The user simply texts a question to the virtual radiologist, which in many cases provides an excellent, evidence-based response more or less instantaneously, he said.

He noted that if the patient doesn't receive a helpful response, they are rapidly referred to a human radiologist.

Tools such as our chatbot are particularly important in the current clinical environment, which focuses on quality metrics and follows evidence-based clinical guidelines that are proven to help patients, he said.

Seals said a team of academic radiologists curated the information provided in the application from the radiology literature, and it is rigorously scientific and evidence-based.

We hope that using the application will encourage cutting-edge patient management that results in improved patient care and significantly benefits our patients, he added.

It can be thought of as texting with a virtual representation of a human radiologist that offers a significant chunk of the functionality of speaking with an actual human radiologist, Seals said.

When the non-radiologist clinician texts a question to the VIR, deep learning is used to understand that message and respond in an intelligent manner.

We get a lot of questions that are fairly readily automated, Seals said, such as "I am worried that my patient has a blood clot in their lungs. What is the best type of imaging to perform to make the diagnosis?" The chatbot can respond to questions like this in a supersmart, evidence-based way.

Sample responses, he said, can include instructive images (for example, a flowchart that shows a clinical algorithm), response text messages, and subprograms within the application, such as a calculator to determine a patient's Wells score, a metric doctors use to guide clinical management.
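As a concrete illustration of the kind of subprogram described, here is a minimal Wells score (pulmonary embolism) calculator. The criteria weights follow the published Wells PE rule; the function interface is an assumption for illustration, not the VIR's actual code.

```python
# Published Wells criteria for pulmonary embolism and their point values.
WELLS_PE_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_pe_score(positive_findings):
    """Sum the point values of the patient's positive findings."""
    return sum(WELLS_PE_CRITERIA[f] for f in positive_findings)

def risk_category(score):
    """Traditional three-tier interpretation of the PE Wells score."""
    if score > 6:
        return "high"
    return "moderate" if score >= 2 else "low"

score = wells_pe_score(["heart_rate_over_100", "previous_dvt_or_pe"])
print(score, risk_category(score))  # 3.0 moderate
```

In the chatbot scenario, the clinician's answers to a few texted questions would populate `positive_findings`, and the computed category would steer the imaging recommendation.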

The VIR application resembles an online customer service chat.

To create a crucial foundation of knowledge, the researchers fed the app more than 2,000 data points that simulated the common inquiries interventional radiologists receive when they meet with patients.

Read more: A watch that tells you when youre getting sick

When a referring clinician asks a question, the extensive knowledge base of the app allows it to respond instantly with the best answer.

The various forms of responses can include websites, infographics, and custom programs.

If the VIR determines that an answer requires a human response, the program will provide contact information for a human interventional radiologist.

The app learns as clinicians use it, and each scenario teaches the VIR to become increasingly smarter and more powerful, Seals said.

The nature of chatbot communications should protect patient privacy.

Confidentiality is critically important in the world of modern technology and something we take very seriously, Seals said.

He added that the application was created and programmed by physicians with extensive HIPAA (Health Insurance Portability and Accountability Act of 1996) training.

We are able to avoid these issues because users ask questions in a general and anonymous manner, Seals said. Protected health information is never needed to use the application, nor is it relevant to its function.

All users, professional healthcare providers such as physicians and nurses, must agree not to include any specific protected patient information in their texts to the chatbot, he added.

None of the diverse functionality within the application requires specific patient information, Seals said.


"This new technology represents the fastest and easiest way for clinicians to get the information they need in the hospital, starting with radiology and eventually expanding to other specialties such as neurosurgery and cardiology," Seals said.

"Our technology can power any type of physician chatbot," he explained. "Currently, there are information silos of sorts that exist between various specialists in the hospital, and there is no good tool for rapidly sharing information between these silos. It is often slow and difficult to get a busy radiologist on the phone, which inconveniences clinicians and delays patient care."

Other clinicians at the UCLA David Geffen School of Medicine are testing the chatbot, and Seals and Lee say their technology is fully functional now.

"We are refining it and perfecting it so it can thrive in a wide release," Seals said.

Seals' engineering and software background allowed him to perform the necessary programming for the as-yet-unfunded research project. He said he and his colleagues will seek funding as they expand.

This breakthrough technology will debut soon.

The VIR will be made available in about one month to all clinicians at the UCLA Ronald Reagan Medical Center. Further use at UCLA will help the team to refine the chatbot for wider release.

The VIR could also become a free app.

"We are exploring potential models for releasing the application," Seals said. "It may very well be a free tool we release to assist our clinician colleagues, as we are academic radiologists focused on sharing knowledge and improving clinical medicine."

The researchers described the importance of the VIR in a summary of their findings: "Improved artificial intelligence through deep learning has the potential to fundamentally transform our society, from automated image analysis to the creation of self-driving cars."

View original post here:

Artificial Intelligence for Cars May Drive Future of Healthcare - Healthline


Google bets big on artificial intelligence to make a cloud push for enterprises – Economic Times

Posted: at 3:22 am

SAN FRANCISCO: Google is betting big on its dominance in machine learning and artificial intelligence to break into the cloud market, a message that was the underlying theme on the first day of the technology giant's cloud conference that began here on Wednesday.

It also made a slew of announcements further strengthening its place as the leader in machine learning and artificial intelligence platforms.

"We put $30 billion in the Google Cloud Platform," said Eric Schmidt, chairman of Google's parent company Alphabet. He added that big data, or large data sets that are analysed to reveal patterns through machine learning and artificial intelligence, is so powerful that "nation states will fight over it."

Google announced big-name customers such as HSBC, Colgate-Palmolive, The Home Depot, SAP, Disney, Verizon and eBay, most of whom have large data sets to the tune of billions of records.

Google currently lags in the cloud market, with Amazon Web Services and Microsoft's Azure taking the lead with big customers, but the technology giant aims to change that through the Google Cloud Platform (GCP) which allows customers to leverage its machine learning and artificial intelligence capabilities.

Google Cloud's vice president Diane Greene led the agenda on the opening day, along with CEO Sundar Pichai; Fei-Fei Li, chief scientist for cloud AI and ML; and Eric Schmidt.

The GCP lets enterprises host applications and websites, store data, and analyze data on Google's scalable infrastructure. Added on top of that is Google's machine learning capability, which is widely acknowledged as being the best in the industry.

"Machine learning is a phenomenal tool for enterprises to get insights like never before," said Greene at a post-keynote press conference.

Artificial Intelligence, or building applications that mimic human-like behaviour, making them more intuitive and useful to the end consumer, is fast becoming the battleground for all major cloud providers, as they take a step further from providing just storage capability to their clients.

Research firm IDC said last year that widespread adoption of cognitive systems and AI across a broad range of industries will drive their worldwide revenues from nearly $8 billion in 2016 to more than $47 billion in 2020. Last year, banking, retail, healthcare and discrete manufacturing were the largest spenders in AI. In the future, education and process manufacturing are expected to drive more revenues.
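For context, the rounded IDC figures quoted above imply a compound annual growth rate of roughly 56% a year:

```python
# CAGR implied by growth from ~$8bn (2016) to ~$47bn (2020), i.e. four
# compounding years; the dollar figures are the article's rounded numbers.
start_bn, end_bn, years = 8.0, 47.0, 4
cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 56% per year
```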

ET reported last week that the India arm of Amazon Web Services (AWS) is looking to add AI services such as speech recognition, text-to-voice services, visual search and image analysis, to its base infrastructure on which startups, enterprises and developers can build their products.

Google, with its deep learning capability through its Brain project as well as the work done by Li, has an edge over its competitors in understanding and crunching large data sets, as well as understanding artificial intelligence.

Li, whose work in visual search is well recognised, built on the theme of "democratizing AI", and said Google wants to take it to developers, people and enterprises.

Describing video as "the dark matter of digital," Li launched the Google video API for Google Cloud, which accurately identifies things and places in videos. For instance, in a medical procedure video, if a user wants to search for specific body parts, they will be able to do it just by searching for the name of the part instead of having to go through the entire video.
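The search workflow Li describes boils down to looking up a query against per-segment label annotations. A hedged sketch follows; the data layout below is invented for illustration and is not the actual response format of Google's video API:

```python
from dataclasses import dataclass

@dataclass
class LabelSegment:
    label: str       # what was recognized in the segment, e.g. "liver"
    start_s: float   # segment start, in seconds
    end_s: float     # segment end, in seconds

def find_label(annotations, query):
    """Return (start, end) pairs for every segment whose label matches the query."""
    q = query.lower()
    return [(a.start_s, a.end_s) for a in annotations if a.label.lower() == q]

# Hypothetical annotations for a medical procedure video.
annotations = [
    LabelSegment("scalpel", 12.0, 15.5),
    LabelSegment("liver", 40.0, 95.0),
    LabelSegment("scalpel", 61.0, 63.0),
]
print(find_label(annotations, "scalpel"))  # [(12.0, 15.5), (61.0, 63.0)]
```

The heavy lifting — producing the labels and timestamps in the first place — is what the API's machine learning models do; the search itself is trivial once they exist.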

Google also announced the acquisition of Kaggle, a San Francisco-based data science startup that runs programming contests and competitions for machine learning projects. Google did not disclose the value of the acquisition, but said Kaggle would continue to operate as an independent brand for a while.

Some of the customers Google announced on Wednesday also use AWS and Microsoft clouds, in what they described as a multi-cloud strategy. "We want all players in the cloud market to compete against each other," said Paul Gaffney, senior vice president at The Home Depot, summing up what the cloud market is already looking like.

Read the original:

Google bets big on artificial intelligence to make a cloud push for enterprises - Economic Times


Artificial intelligence is already all around us: John MacIntyre – Livemint

Posted: at 3:22 am

Mumbai: As pro vice-chancellor (product and partner development) of the University of Sunderland in the UK, Prof. John MacIntyre has a brief that covers research, innovation, knowledge exchange, employer engagement and the regional economy. Since 1996, MacIntyre has also been the editor-in-chief of Neural Computing and Applications, an international scientific peer-reviewed journal published by Springer Verlag.

In an interview, he talks about why artificial intelligence (AI) needs to be looked at more positively and how AI can contribute to society. MacIntyre will also address EmTech India 2017, an emerging tech conference organized by Mint and MIT Technology Review, on 9 March in New Delhi. Edited excerpts:

You completed your PhD in applied AI, focussing on the use of neural networks in predictive maintenance. What prompted you to do this research and what were your research findings?

When I worked in the Middle East, I taught myself programming and did a range of jobs and tasks to build my skill sets. I ended up managing teams and wanted to further my career, but also realized that I needed formal qualifications to do that. So, I returned to the UK, and took a full-time job working night shifts, to allow me to study full-time during the day.

The University of Sunderland had a programme of Combined Sciences that allowed you to take a major and a minor option, so I majored in computer science, and my minor choice was physiology, which I chose simply out of personal interest. As it happened, it became very relevant as I then embarked on my doctoral work.

Having achieved a First Class Honours degree, I was offered the chance to do a PhD, and the most interesting option was a programme of research looking at how to improve the performance and reduce the costs of a power generation plant through predictive maintenance and condition monitoring. The sponsor company was National Power, and I liked the idea of applying my knowledge in computer science and engineering to a specific industrial problem, and coming up with new ideas.

My physiology minor ended up being relevant because of the choice of using neural networks as a model or technique for pattern recognition and classification, in the face of very noisy and sometimes incomplete data, to provide diagnostics and prognostics for engineers to use in making decisions about maintaining the ancillary plant in power generation stations.

By the time I completed my PhD, we had saved literally millions of pounds for the company, through elimination of catastrophic failures, reduced downtime of generating plant, and reduced costs.

The study of neural networks does involve an interdisciplinary approach. Please elaborate.

Applications of neural networks (and the associated natural computational techniques, such as genetic algorithms) are incredibly varied and diverse. This is because the range of techniques can be applied, appropriately, to a wide range of problem types (classification, pattern recognition, optimization and prediction, to name only a few) in an even wider range of sectors and applications, e.g. medical, industrial, financial, commercial, geophysical, and so on.

This means that collaborative ventures, where expertise from a range of fields is brought to bear on applying the techniques to help solve a problem or create a solution (not necessarily a perfect solution, but at least an advance on current technology) are becoming commonplace.

Doctors, engineers, bankers, geologists, physicists, metallurgists and computer scientists will all work together in various project teams to focus their collective expertise on applying AI techniques to create advances in knowledge and technology. I see this as the way forward and it is always refreshing to see how the blend of such disciplinary expertise creates a new dynamic to tackle difficult problems.

While there are those who believe in the potential of AI and its applications, a sizeable number of people, including Stephen Hawking, Bill Gates and Elon Musk, have expressed fears that AI-powered machines could rule over humans. What's your take on this subject?

This is a major problem and encompasses some really big issues, including understanding, ignorance, focus and ethics. AI is already all around us, sometimes in very visible ways (e.g. Siri) but often in very invisible ways (linked to Internet profiling, banking algorithms, even embedded AI in cameras and washing machines).

These applications would generally be seen as positive, supporting humans in their modern, everyday lives. And yet, still, AI is perceived very negatively by many in society who don't understand what AI really is, and what it means to them.

As editor-in-chief of the scientific journal Neural Computing and Applications, published by Springer Verlag, I see thousands of scientific papers each year, from all around the world, advancing AI techniques and applicationsall of which, I would say, are intended to be positive contributions to society.

The problem is that the general public, quite understandably, take their information from what the media, and in particular film and TV, put before them. And because that is dominated by negative stories about AI taking over the world, eliminating humans (literally or metaphorically), and rendering humanity obsolete, it's hardly surprising that most people have a pretty negative view of AI.

I believe the scientific and technical community has a responsibility to counter this negative with good news about AI, and to make it understandable, accessible, and therefore less frightening to society.

Tell us something about the work that the University of Sunderland does with its Institute of Automotive Manufacturing and Advanced Practice (AMAP). Do you believe that electric vehicles and connected cars will be the norm by 2025?

Connected cars are already here!

Most new-generation vehicles are already IP-enabled devices with sophisticated interfaces connecting them to the Internet. The next few years will see more developments in how vehicles connect to the environment; for example, the Connected Car programme of Hitachi Data Systems is driving towards the CFX concept, where the car can connect to any other Internet-enabled device.

The major developments are linked to the development of driverless cars: autonomous vehicles, in effect.

There are many, many difficult issues to resolve before driverless cars will be the norm, and I think that is likely to be decades away. Electric (and other alternatively-fuelled) vehicles are already commonplace, but I don't think they will have completely replaced the internal combustion (IC) engine by 2025.

It seems to me that we will see, over say the next 20 years, a multi-faceted strategy of development, with even more efficient and clean IC engines being developed alongside improvements in battery technology and range for electric vehicles, and hydrogen and other alternatively-fuelled vehicles also being developed.

Right now, it is impossible to say which will become the dominant technology, or when.

First Published: Thu, Mar 09 2017, 05:02 AM IST

Read the original:

Artificial intelligence is already all around us: John MacIntyre - Livemint


The Architecture of Artificial Intelligence – Archinect

Posted: March 8, 2017 at 1:22 pm

Behnaz Farahi Breathing Wall II

This vision of the future architect was imagined by engineer and inventor Douglas Engelbart during his research into emerging computer systems at Stanford in 1962. At the dawn of personal computing, he imagined the creative mind overlapping symbiotically with the intelligent machine to co-create designs. This dual mode of production, he envisaged, would hold the potential to generate new realities which could not be realized by either entity operating alone. Today, self-learning systems, otherwise known as artificial intelligence or AI, are changing the way architecture is practiced, as they do our daily lives, whether or not we realize it. If you are reading this on a laptop or tablet, then you are directly engaging with a number of integrated AI systems, now so embedded in the way we use technology that they often go unnoticed.

As an industry, AI is growing at an exponential rate, now understood to be on track to be worth $70bn globally by 2020. This is in part due to constant innovation in the speed of microprocessors, which in turn increases the volume of data that can be gathered and stored. But don't panic: the artificial architect with enhanced Revit proficiency is not coming to steal your job. The human vs. robot debate, while compelling, is not so much the focus here as how AI is augmenting design and how architects are responding to and working with these technological developments. What kind of innovation is artificial intelligence generating in the construction industry?

Assuming you read this as a non-expert, it is likely that much of the AI you have encountered to this point has been weak AI, otherwise known as ANI (Artificial Narrow Intelligence). ANI follows pre-programmed rules so that it appears intelligent but is in effect a simulation of a human-like thought process. With recent innovations such as that of Nvidia's microchip in April 2016, a shift is now being seen towards what we might understand as deep learning, where a system can, in effect, train and adapt itself. The interest for designers is that AI is, therefore, starting to apply itself to more creative tasks, such as writing books, making art, web design, or self-generating design solutions, due to its increased proficiency in recognizing speech and images. Significant "AI winters", or periods where funding has been hard to source for the industry, have occurred over the last twenty years, but commentators such as philosopher Nick Bostrom now suggest we are on the cusp of an explosion in AI, and this will not only shape but drive the design industry in the next century. AI, therefore, has the potential to influence the architectural design process at a series of different construction stages, from site research to the realization and operation of the building.

1. Site and social research

"By already knowing everything about us, our hobbies, likes, dislikes, activities, friends, our yearly income, etc., AI software can calculate population growth, prioritize projects, categorize streets according to usage and so on, and thus predict a virtual future and automatically draft urban plans that best represent and suit everyone." - Rron Beqiri on Future Architecture Platform.

Gathering information about a project and its constraints is often the first stage of an architectural design process, traditionally involving traveling to a site, perhaps measuring, sketching and taking photographs. In the online and connected world, there is already a swarm-like abundance of data for the architect to tap into, already linked and referenced against other sources allowing the designer to, in effect, simulate the surrounding site without ever having to engage with it physically. This information fabric has been referred to as the internet of things. BIM tools currently on the market already tap into these data constellations, allowing an architect to evaluate site conditions with minute precision. Software such as EcoDesigner Star or open-source plugins for Google SketchUp allows architects to immediately calculate necessary building and environmental analyses without ever having to leave their office. This phenomenon is already enabling many practices to take on large projects abroad that might have been logistically unachievable just a decade ago.

The information gathered by our devices and stored in the Cloud amounts to much more than the material conditions of the world around us. Globally, we are amassing ever-expanding records of human behavior and interactions in real-time. Personal, soft data might, in the most optimistic sense, work towards the socially focused design that has been widely publicized in recent years by its ability to integrate the needs of users. This approach, if only in the first stages of the design process, would impact the twentieth-century ideals of mass production and standardization in design. Could the internet of things create a socially adaptable and responsive architecture? One could speculate that, for example, when the population of children in a city crosses a maximum threshold in relation to the number of schools, a notification might be sent to the district council that it is time to commission a new school. AI could, therefore, in effect, write the brief for and commission architects by generating new projects where they are most needed.

Autodesk. Bicycle design generated by Dreamcatcher AI software.

2. Design decision-making

Now that we have located live-updating intelligence for our site, it is time to harness AI to develop a design proposal. Rather than a program, this technology is better understood as an interconnected, self-designing system that can upgrade itself. It is possible to harness a huge amount of computing power and experience by working with these tools, even as an individual, as Autodesk president Pete Baxter told the Guardian: "now a one-man designer, a graduate designer, can get access to the same amount of computing power as these big multinational companies." The architect must input project parameters, in effect an edited design brief, and the computer system will then suggest a range of solutions which fulfill these criteria. This innovation has the potential to revolutionize not only how architecture is imagined but how it is fundamentally expressed for designers who choose to adopt these new methods.
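The parameters-in, candidates-out loop described above can be illustrated with a deliberately tiny toy: sample a parameter space, keep only candidates that satisfy the brief, and rank the survivors by an objective. Everything here (the parameters, the brief, the scoring) is invented for illustration; real systems such as Dreamcatcher use far richer inputs and search strategies:

```python
import random

random.seed(7)  # deterministic for the example

def generate_candidates(n=1000):
    """Randomly sample toy floor-plan parameters: width, depth, height (metres)."""
    return [{"w": random.uniform(5, 40),
             "d": random.uniform(5, 40),
             "h": random.uniform(2.5, 6)} for _ in range(n)]

def meets_brief(c, min_area=400, max_height=4.5):
    """The 'edited design brief': minimum floor area, maximum storey height."""
    return c["w"] * c["d"] >= min_area and c["h"] <= max_height

def score(c):
    # Toy objective: prefer less envelope surface per unit of floor area.
    floor = c["w"] * c["d"]
    envelope = 2 * c["h"] * (c["w"] + c["d"])
    return floor / envelope

solutions = sorted(filter(meets_brief, generate_candidates()), key=score, reverse=True)
print(f"{len(solutions)} candidates meet the brief; best: {solutions[0]}")
```

The designer's job in this workflow is to state `meets_brief` and `score` well, which is exactly the shift from drawing to specifying requirements discussed below.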

I spoke with Michael Bergin, a researcher on Project Dreamcatcher at Autodesk's Research Lab, to get a better understanding of how AI systems are influencing the development of design software for architects. While their work was initially aimed at the automotive and industrial design industries, Dreamcatcher is now beginning to filter into architecture projects. It was used recently to develop The Living's generative design for Autodesk's new office in Toronto and MX3D's steel bridge in Amsterdam. The basic concept is that CAD models of the surrounding site and other data, such as client databases and environmental information, are fed into the processor. Moments later, the system outputs a series of optimized 3D design solutions ready to render. These processes effectively rely on cloud computing to create a multitude of options based on self-learning algorithmic parameters. Lattice-like and fluid forms are often the aesthetic result, perhaps unsurprisingly, as the software imitates structural rules found in nature.

The Dreamcatcher software has been designed to optimize parametric design and link into and extend existing software designed by Autodesk, such as Revit and Dynamo. Interestingly, Dreamcatcher can make use of a wide and increasing spectrum of design input data, such as formulas, engineering requirements, CAD geometry, and sensor information, and the research team is now experimenting with Dreamcatcher's ability to recognize sketches and text as input data. Bergin says he imagines the future of design tools as "systems that accept any type of input that a designer can produce [to enable] a collaboration with the computer to iteratively target a high-performing design that meets all the varied needs of the design team." This would mean future architects would be less in the business of drawing and more into specifying requirements of the problem, making them more in sync with their machine counterparts in a project. Bergin suggests architects who adopt AI tools would have the ability to "synthesize a broad set of high-level requirements from the design stakeholders, including clients and engineers, and produce design documentation as output," in line with Engelbart's vision of AI augmenting the skills of designers.

AI is also being used directly in software such as Space Syntax's depthmapX, designed at The Bartlett in London, to analyze the spatial network of a city with an aim to understand and utilize social interactions in the design process. Another tool, Unity 3D, is built from software developed for game engines to enable designers to analyze their plans, such as the shortest distances to fire exits. This information would then allow the architect to re-arrange or generate spaces in plan, or even to organize entire future buildings. Examples of architects who are adopting these methods include Zaha Hadid with the Beijing Tower project (designed antemortem) and MAD Architects in China, among others.
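The fire-exit analysis mentioned above is, at its core, a shortest-path problem. A hedged sketch: a multi-source breadth-first search over a grid floor plan gives every walkable cell its shortest walking distance (in cells) to the nearest exit. The plan below is invented; `#` is wall, `E` is an exit:

```python
from collections import deque

PLAN = [
    "#########",
    "#...#...#",
    "#.#.#.#.#",
    "#.#...#.E",
    "#########",
]

def exit_distances(plan):
    """Map each reachable (row, col) cell to its distance from the nearest exit."""
    rows, cols = len(plan), len(plan[0])
    dist = {}
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if plan[r][c] == "E":       # multi-source BFS: seed every exit at 0
                dist[(r, c)] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and plan[nr][nc] != "#" and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

d = exit_distances(PLAN)
print(d[(1, 1)])  # 13 steps from the top-left room to the exit in this plan
```

Tools like the ones named above work on real 3D geometry rather than a grid, but the principle — annotate every point in a plan with its distance to safety, then let the designer react — is the same.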

Computational Architecture Digital Grotesque Project

3. Client and user engagement

As so much of the technology built into AI has been developed from the gaming industry, its ability to produce forms of augmented reality has interesting potential to change the perception of and engagement with architectural designs for both the architects and non-architects involved in a project. Through the use of additional hardware, augmented reality has the ability to capture and enhance real-world experience. It would enable people to engage with a design prior to construction, for example, to select the most appealing proposal from their experiences within its simulation. It is possible that many architecture projects will also remain in this unbuilt zone, in a parallel digital reality, which the majority of future world citizens will simultaneously inhabit.

Augmented reality would, therefore, allow a client to move through and sense different design proposals before they are built. Lights, sounds, even the smells of a building can be simulated, which could reorder the emphasis architects currently give to specific elements of their design. Such a change in representational method has the potential to shift what is possible within the field of architecture, as CAD drafting did at the beginning of this century. Additionally, the feedback generated by augmented reality can feed directly back into the design, allowing models to directly interact and adapt to future users. Smart design tools such as Materiable by Tangible Media are beginning to experiment with how AI can begin to engage with and learn from human behavior.

Computational Architecture Digital Grotesque Project

4. Realizing designs and rise of robot craftsmen

AI systems are already being integrated into the construction industry; innovative practices such as Computational Architecture are working with robotic craftsmen to explore AI in construction technology and fabrication. Michael Hansmeyer and Benjamin Dillenburger, founders of Computational Architecture, are investigating the new aesthetic language these developments are starting to generate. "Architecture stands at an inflection point," he suggests on their website. "The confluence of advances in both computation and fabrication technologies lets us create an architecture of hitherto unimaginable forms, with an unseen level of detail, producing entirely new spatial sensations."

3D printing technology developed from AI software has the potential to offer twenty-first-century architects a significantly different aesthetic language, perhaps catalyzing a resurgence of detail and ornamentation, now rare due to the decline in traditional crafts. Hansmeyer and Dillenburger's Grotto Prototype for the Super Material exhibition, London, was a complex architectural grotto 3D-printed from sandstone. The form of the sand grains was arranged by a series of algorithms custom-designed by the practice. The technique allowed forms to be developed which were significantly different to those of traditional stonemasonry. The aim of the project was to show that it is now possible to print building-scale rooms from sandstone and that 3D printing can also be used for heritage applications, such as repairs to statues.

Robotics are also becoming more common on construction job sites, mostly dealing with human resources and logistics. According to AEM, their applications will soon expand to bricklaying, concrete dispensing, welding, and demolition. Another example of their future use could include working with BIM to identify missing elements in the snagging process and update the AI in real-time. Large-scale projects, for example government-led infrastructure initiatives, might be the first to apply this technology, followed by mid-scale projects in the private sector, such as cultural buildings. The challenges of the construction site will bring AI robotics out of the indoor, sanitized environment of the lab into a less scripted reality. Robert Saunders, a researcher into AI and fabrication at the University of Sydney, told New Atlas that "robots are great at repetitive tasks and working with materials that react reliably; what we're interested in doing is trying to develop robots that are capable of learning how to work with materials that work in non-linear ways, like working with hot wax or expanding foam or, more practically, with low-grade building materials like low-grade timber." Saunders foresees robot stonemasons and other craftsbots working in yet unforeseen ways, such as developing the architect's skeleton plans, in effect spontaneously generating a building on-site from a sketch.

Ori System by Ori

5. Integrating AI systems

This innovation involves either integrating developing artificial intelligence technologies with existing infrastructure or designing architecture around AI systems. There is a lot of excitement in this field, influenced in part by Mark Zuckerberg's personal project to develop networked AI systems within his home, which he announced in his New Year's Facebook post in 2016. His wish is to develop simple AI systems to run his home and help with his day-to-day work. This technology would have the ability to recognize the voices of members of the household and respond to their requests. Designers are taking on the challenge of designing home-integrated systems, such as the Ori System of responsive furniture, or gadgets such as Eliq for energy monitoring. Other innovations, such as driverless cars that run on an integrated system of self-learning AI, have the potential to shape how our cities are laid out and planned; in the most basic sense, limiting our need for more roads and parking areas.

Behnaz Farahi is a young architect who is employing her research into AI and adaptive surfaces to develop interactive designs, such as her Aurora and Breathing Wall projects. She creates immersive and engaging indoor environments which adapt to and learn from their occupants. Her approach is one of many; different practices with different goals will adopt AI at different stages of their process, creating a multitude of architectural languages.

Researchers and designers working in the field of AI are attempting to understand the potential of computational intelligence to improve or even upgrade parts of the design process, with an aim to create a more functional and user-optimized built environment. It has always been the architect's task to make decisions based on complex, interwoven and sometimes contradictory sets of information. As AI gradually improves in making useful judgments in real-world situations, it is not hard to imagine these processes overlapping and engaging with each other. While these developments have the potential to raise questions in terms of ownership, agency and, of course, privacy in data gathering and use, the upsurge in self-learning technologies is already altering the power and scope of architects in design and construction. As architect and design theorist Christopher Alexander said back in 1964, "We must face the fact that we are on the brink of times when man may be able to magnify his intellectual and inventive capacity, just as in the nineteenth century he used machines to magnify his physical capacity."

In our interview, Bergin gave some insights into how he sees this technology impacting designers in the next twenty years. "The architectural language of projects in the future may be more expressive of the design team's intent," he stated. "Generative design tools will allow teams to evaluate every possible alternative strategy to preserve design intent, instead of compromising on a sub-optimal solution because of limitations in time and/or resources." Bergin believes AI and machine learning will be able to support a dynamic and expanding community of practice for design knowledge. He can also foresee implications of this in the democratization of design work, suggesting "the expertise embodied by a professional of 30 years may be more readily utilized by a more junior architect." Overall, he believes architectural practice over the next 20 years will likely become "far more inclusive with respect to client and occupant needs and orders of magnitude more efficient when considering environmental impact, energy use, material selection and client satisfaction."

On the other hand, Pete Baxter suggests architects have little to fear from artificial intelligence: "Yes, you can automate. But what does a design look like that's fully automated and fully rationalized by a computer program? Probably not the most exciting piece of architecture you've ever seen." At the time of writing, many AI algorithms are still relatively uniform and relatively ignorant of context, and it is proving difficult to automate decision-making that would at first glance seem simple for a human. A number of research labs, such as the MIT Media Lab, are working to solve this. However, architectural language and diagramming have been part of programming complex systems and software from the start, and they have had a significant influence on one another. To think architecturally is to imagine and construct new worlds, integrate systems and organize information, which lends itself to the front line of technical development. As far back as the 1960s, architects were experimenting with computer interfaces to aid their design work, and their thinking has inspired much of the technology we now engage with each day.

Behnaz Farahi Aurora

Read the original post:

The Architecture of Artificial Intelligence - Archinect

