
Category Archives: Ai

Line’s AI Bets Pit Japanese Messenger Against Amazon and Google – Bloomberg

Posted: March 1, 2017 at 9:15 pm


March 1, 2017, 11:00 AM EST; updated 7:16 PM EST

Line Corp. outlined an ambitious artificial-intelligence strategy that promises to transform Japan's most popular messaging service while pitting it against Google, Facebook Inc. and Amazon.com Inc.

The company is launching a suite of AI software tools to power an online digital assistant capable of conversing in Japanese and Korean, Line said at the Mobile World Congress in Barcelona on Wednesday. Users can talk to the assistant, getting the latest weather and news through either a dedicated smartphone app or a tabletop speaker called Wave that's similar to Amazon's Echo. Both will be available in early summer.

Clova's smart speaker WAVE.

Source: Line Corp.

Silicon Valley companies are exploring ways of extending their reach beyond smartphones, with Amazon and Google both selling AI-powered digital assistants not unlike Wave. Facebook, whose Messenger and WhatsApp compete with Line, has launched a chatbot platform and plowed more than $2 billion into virtual reality. But Line believes it can leverage local knowledge to beat tech giants in its home country and markets where its messaging service is popular, including South Korea, Taiwan, Thailand and Indonesia.

"There is a shift toward post-smartphone, post-touch technologies," Chief Executive Officer Takeshi Idezawa said in an interview. "These connected devices will permeate even deeper into our daily lives and therefore must match local needs, languages and cultures even more closely."

Line developed its AI platform with parent Naver Corp. The South Korean company operates that country's dominant search engine, displacing Google in a testament to the power of local knowledge, Idezawa said.

Tokyo-based Line is already much more than a messaging service on its home turf, with people using the app to read news, hail taxis and find part-time jobs. That wealth of content and interaction in local languages gives Line an advantage over larger rivals because AI is only as good as the data on which it's trained, Idezawa said.

Line is also open to acquisitions and partnerships in the field. The company is buying a stake in Vinclu, a Tokyo-based Internet of Things startup. It invested in SoundHound, a U.S.-based voice recognition company, together with Naver last month. And it's considering joining forces with Sony Corp. to develop smart devices.


Line's shares rose as much as 2.1 percent in Tokyo on Thursday. The stock is still down about 2 percent this year, amid concerns about stagnating growth at a company that pulled off 2016's biggest technology public offering. Idezawa is under pressure to find new sources of revenue on what is otherwise a free messaging service, as subscriber additions and revenue from games and digital stickers slow.

For now, Line has pinned its hopes on advertising and as-yet unannounced products, while AI remains a more distant prospect.

"It's one of the longer-term bets," Idezawa said. "The point is to secure a position early on. People will probably begin to use these services more regularly three to five years from now."

More:

Line's AI Bets Pit Japanese Messenger Against Amazon and Google - Bloomberg

Posted in Ai | Comments Off on Line’s AI Bets Pit Japanese Messenger Against Amazon and Google – Bloomberg

Google’s anti-trolling AI can be defeated by typos, researchers find [Updated] – Ars Technica

Posted: at 9:15 pm

Visit any news organization's website or any social media site, and you're bound to find some abusive or hateful language being thrown around. As those who moderate Ars' comments know, trying to keep a lid on trolling and abuse in comments can be an arduous and thankless task: when done too heavily, it smacks of censorship and suppression of free speech; when applied too lightly, it can poison the community and keep people from sharing their thoughts out of fear of being targeted. And human-based moderation is time-consuming.

Both of these problems are the target of a project by Jigsaw, an Alphabet startup effort spun off from Google. Jigsaw's Perspective project is an application interface currently focused on moderating online conversations, using machine learning to spot abusive, harassing, and toxic comments. The AI applies a "toxicity score" to comments, which can be used either to aid moderation or to reject comments outright, giving the commenter feedback about why their post was rejected. Jigsaw is currently partnering with Wikipedia and The New York Times, among others, to implement the Perspective API to assist in moderating reader-contributed content.

But that AI still needs some training, as researchers at the University of Washington's Network Security Lab recently demonstrated. In a paper published on February 27, Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran demonstrated that they could fool the Perspective AI into giving a low toxicity score to comments that it would otherwise flag by simply misspelling key hot-button words (such as "iidiot") or inserting punctuation into the word ("i.diot" or "i d i o t," for example). By gaming the AI's parsing of text, they were able to get scores that would allow comments to pass a toxicity test that would normally be flagged as abusive.
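The evasion the researchers describe can be illustrated against a toy keyword-based scorer. This is a hypothetical stand-in, not the Perspective API itself (which is a hosted service); the scoring function and word list below are invented for illustration.

```python
import re

# Hypothetical keyword scorer standing in for a toxicity model.
TOXIC_WORDS = {"idiot", "stupid"}

def toxicity_score(comment: str) -> float:
    # Flag the comment if any known toxic keyword appears as a whole token.
    tokens = re.findall(r"[a-z]+", comment.lower())
    return 1.0 if any(t in TOXIC_WORDS for t in tokens) else 0.1

def perturb(word: str) -> str:
    # Insert a period after the first character, as in the paper's
    # "i.diot" example; the token no longer matches the keyword list.
    return word[0] + "." + word[1:]

print(toxicity_score("you are an idiot"))                # 1.0, flagged
print(toxicity_score("you are an " + perturb("idiot")))  # 0.1, evades
```

A real model is more robust than a keyword list, but the paper shows that the same character-level perturbations shift Perspective's scores in the same direction.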

"One type of the vulnerabilities of machine learning algorithms is that an adversary can change the algorithm output by subtly perturbing the input, often unnoticeable by humans," Hosseini and his co-authors wrote. "Such inputs are called adversarial examples, and have been shown to be effective against different machine learning algorithms even when the adversary has only a black-box access to the target model."

The researchers also found that Perspective would flag comments that were not abusive in nature but used keywords that the AI had been trained to see as abusive. The phrases "not stupid" or "not an idiot" scored nearly as high on Perspective's toxicity scale as comments that used "stupid" and "idiot."
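The false-positive side has the same root cause: a model driven by token-level features ignores negation. A minimal sketch (the one-keyword check is invented for illustration, not Perspective's actual logic):

```python
# Bag-of-words features are order-blind, so "not stupid" carries the
# same trigger token as an outright insult. Hypothetical keyword check.
def flags_keyword(comment: str) -> bool:
    return "stupid" in comment.lower().split()

print(flags_keyword("you are stupid"))      # True
print(flags_keyword("that is not stupid"))  # True, a false positive
```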

These sorts of false positives, coupled with easy evasion of the algorithms by adversaries seeking to bypass screening, highlight the basic problem with any sort of automated moderation and censorship. Update: CJ Adams, Jigsaw's product manager for Perspective, acknowledged the difficulty in a statement he sent to Ars:

It's great to see research like this. Online toxicity is a difficult problem, and Perspective was developed to support exploration of how ML can be used to help discussion. We welcome academic researchers to join our research efforts on Github and explore how we can collaborate together to identify shortcomings of existing models and find ways to improve them.

Perspective is still a very early-stage technology, and as these researchers rightly point out, it will only detect patterns that are similar to examples of toxicity it has seen before. We have more details on this challenge and others on the Conversation AI research page. The API allows users and researchers to submit corrections like these directly, which will then be used to improve the model and ensure it can understand more forms of toxic language, and evolve as new forms emerge over time.

More here:

Google's anti-trolling AI can be defeated by typos, researchers find [Updated] - Ars Technica

Posted in Ai | Comments Off on Google’s anti-trolling AI can be defeated by typos, researchers find [Updated] – Ars Technica

Facebook enlists AI tech to help prevent suicide – Mashable

Posted: at 9:15 pm


The AI tool looks at words in the post and, especially, comments from friends such as "Are you okay?" and "I'm here to help" that may indicate someone is struggling. This part of the system won't auto-report those at risk to Facebook, but will ...
Facebook testing AI that helps spot suicidal users – Engadget
Can AI save a life? Facebook thinks so – TechRepublic
Facebook is testing AI tools to help prevent suicide – New Scientist

See the original post here:

Facebook enlists AI tech to help prevent suicide - Mashable

Posted in Ai | Comments Off on Facebook enlists AI tech to help prevent suicide – Mashable

What Does An AI Chip Look Like? – SemiEngineering

Posted: at 9:15 pm

Depending upon your point of reference, artificial intelligence will be the next big thing or it will play a major role in all of the next big things.

This explains the frenzy of activity in this sector over the past 18 months. Big companies are paying billions of dollars to acquire startup companies, and even more for R&D. In addition, governments around the globe are pouring additional billions into universities and research houses. A global race is underway to create the best architectures and systems to handle the huge volumes of data that need to be processed to make AI work.

Market projections are rising accordingly. Annual AI revenues are predicted to reach $36.8 billion by 2025, according to Tractica. The research house says it has identified 27 different industry segments and 191 use cases for AI so far.

Fig. 1. AI revenue growth projection. Source: Tractica

But dig deeper and it quickly becomes apparent there is no single best way to tackle AI. In fact, there isn't even a consistent definition of what AI is or of the data types that will need to be analyzed.

"There are three problems that need to be addressed here," said Raik Brinkmann, president and CEO of OneSpin Solutions. "The first is that you need to deal with a huge amount of data. The second is to build an interconnect for parallel processing. And the third is power, which is a direct result of the amount of data that you have to move around. So you really need to move from a von Neumann architecture to a data flow architecture. But what exactly does that look like?"

So far there are few answers, which is why the first chips in this market include various combinations of off-the-shelf CPUs, GPUs, FPGAs and DSPs. While new designs are under development by companies such as Intel, Google, Nvidia, Qualcomm and IBM, it's not clear whose approach will win. It appears that at least one CPU will always be required to control these systems, but as streaming data is parallelized, co-processors of various types will be needed.

Much of the processing in AI involves matrix multiplication and addition. Large numbers of GPUs working in parallel offer an inexpensive approach, but the penalty is higher power. FPGAs with built-in DSP blocks and local memory are more energy-efficient, but they generally are more expensive. This also is a segment where software and hardware really need to be co-developed, but much of the software is far behind the hardware.
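The workload in question is easy to state concretely: a dense neural-network layer is a matrix multiply plus an addition, repeated at scale. A minimal sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal((1, 128))   # one input activation vector
W = rng.standard_normal((128, 64))  # layer weights
b = rng.standard_normal(64)         # bias

# The multiply-accumulate pattern that GPUs, FPGA DSP blocks and
# dedicated AI chips all parallelize.
y = x @ W + b
print(y.shape)  # (1, 64)
```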

"There is an enormous amount of activity in research and educational institutions right now," said Wally Rhines, chairman and CEO of Mentor Graphics. "There is a new processor development race. There are also standard GPUs being used for deep learning, and at the same time there are a whole bunch of people doing work with CPUs. The goal is to make neural networks behave more like the human brain, which will stimulate a whole new wave of design."

Vision processing has received most of the attention when it comes to AI, largely because Tesla has introduced self-driving capabilities nearly 15 years before the expected rollout of autonomous vehicles. That has opened a huge market for this technology, and for chip and overall system architectures needed to process data collected by image sensors, radar and LiDAR. But many economists and consulting firms are looking beyond this market to how AI will affect overall productivity. A recent report from Accenture predicts that AI will more than double GDP for some countries (see Fig. 2 below). While that is expected to cause significant disruption in jobs, the overall revenue improvement is too big to ignore.

Fig. 2: AI's projected impact.

Aart de Geus, chairman and co-CEO of Synopsys, points to three waves of electronics: computation and networking, mobility, and digital intelligence. In the latter category, the focus shifts from the technology itself to what it can do for people.

"You'll see processors with neural networking IP for facial recognition and vision processing in automobiles," said de Geus. "Machine learning is the other side of this. There is a massive push for more capabilities, and the state of the art is doing this faster. This will drive development to 7nm and 5nm and beyond."

Current approaches
Vision processing in self-driving dominates much of the current research in AI, but the technology also has a growing role in drones and robotics.

"For AI applications in imaging, the computational complexity is high," said Robert Blake, president and CEO of Achronix. "With wireless, the mathematics is well understood. With image processing, it's like the Wild West. It's a very varied workload. It will take 5 to 10 years before that market shakes out, but there certainly will be a big role for programmable logic because of the need for variable-precision arithmetic that can be done in a highly parallel fashion."

FPGAs are very good at matrix multiplication. On top of that, programmability adds some necessary flexibility and future-proofing into designs, because at this point it is not clear where the so-called intelligence will reside in a design. Some of the data used to make decisions will be processed locally, some will be processed in data centers. But the percentage of each could change for each implementation.

That has a big impact on AI chip and software design. While the big picture for AI hasn't changed much (most of what is labeled AI is closer to machine learning than true AI), the understanding of how to build these systems has changed significantly.

"With cars, what people are doing is taking existing stuff and putting it together," said Kurt Shuler, vice president of marketing at Arteris. "For a really efficient embedded system to be able to learn, though, it needs a highly efficient hardware system. There are a few different approaches being used for that. If you look at vision processing, what you're doing is trying to figure out what it is that a device is seeing and how you infer from that. That could include data from vision sensors, LiDAR and radar, and then you apply specialized algorithms. A lot of what is going on here is trying to mimic what's going on in the brain using deep and convolutional neural networks."

Where this differs from true artificial intelligence is that the current state of the art is being able to detect and avoid objects, while true artificial intelligence would be able to add a level of reasoning, such as how to get through a throng of people crossing a street, or whether a child chasing a ball is likely to run into the street. In the former, judgments are based on input from a variety of sensors, massive data crunching and pre-programmed behavior. In the latter, machines would be able to make value judgments, such as weighing the many possible consequences of swerving to avoid the child, and deciding which is the best choice.

"Sensor fusion is an idea that comes out of aircraft in the 1990s," said Shuler. "You get it into a common data format where a machine can crunch it. If you're in the military, you're worried about someone shooting at you. In a car, it's about someone pushing a stroller in front of you. All of these systems need extremely high bandwidth, and all of them have to have safety built into them. And on top of that, you have to protect the data because security is becoming a bigger and bigger issue. So what you need is both computational efficiency and programming efficiency."

This is what is missing in many of the designs today because so much of the development is built with off-the-shelf parts.

"If you optimize the network, optimize the problem, minimize the number of bits and utilize hardware customized for a convolutional neural network, you can achieve a 2X to 3X order of magnitude improvement in power reduction," said Samer Hijazi, senior architect at Cadence and director of the company's Deep Learning Group. "The efficiency comes from software algorithms and hardware IP."
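The "minimize the number of bits" step Hijazi mentions is usually some form of quantization. Below is a sketch of simple post-training weight quantization; it is illustrative only, not Cadence's actual flow.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)

# Map 32-bit floats onto 8-bit integers with one shared scale factor:
# 4x smaller storage and cheaper multiply-accumulate hardware.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)
dequant = q.astype(np.float32) * scale

# Rounding bounds the per-weight error at half the quantization step.
print(float(np.abs(weights - dequant).max()) <= scale / 2 + 1e-6)  # True
```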

Google is attempting to alter that formula. The company has developed Tensor processing units (TPUs), which are ASICs created specifically for machine learning. And in an effort to speed up AI development, the company in 2015 turned its TensorFlow software into open source.

Fig. 3: Google's TPU board. Source: Google.

Others have their own platforms. But none of these is expected to be the final product. This is an evolution, and no one is quite sure how AI will evolve over the next decade. That's partly due to the fact that use cases are still being discovered for this technology. And what works in one area, such as vision processing, is not necessarily good for another application, such as determining whether an odor is dangerous or benign, or possibly a combination of both.

"We're shooting in the dark," said Anush Mohandass, vice president of marketing and business development at NetSpeed Systems. "We know how to do machine learning and AI, but how they actually work and converge is unknown at this point. The current approach is to have lots of compute power and different kinds of compute engines (CPUs, and DSPs for neural networking types of applications), and you need to make sure it works. But that's just the first generation of AI. The focus is on compute power and heterogeneity."

That is expected to change, however, as the problems being solved become more targeted. Just as with the early versions of IoT devices, no one quite knew how various markets would evolve, so systems companies threw in everything and rushed products to market using existing chip technology. In the case of smart watches, the result was a battery that lasted only several hours between charges. As new chips are developed for those specific applications, power and performance are balanced through a combination of more targeted functionality, more intelligent distribution of how processing is parsed between a local device and the cloud, and a better understanding of where the bottlenecks are in a design.

"The challenge is to find the bottlenecks and constraints you didn't know about," said Bill Neifert, director of models technology at ARM. "But depending on the workload, the processor may interact differently with the software, which is almost inherently a parallel application. So if you're looking at a workload like financial modeling or weather mapping, the way each of those stresses the underlying system is different. And you can only understand that by probing inside."

He noted that the problems being solved on the software side need to be looked at from a higher level of abstraction, because that makes them easier to constrain and fix. That's one key piece of the puzzle. As AI makes inroads into more markets, all of this technology will need to evolve to achieve the same kinds of efficiencies that the tech industry in general, and the semiconductor industry in particular, have demonstrated in the past.

"Right now we find architectures are struggling if they only handle one type of computing well," said Mohandass. "But the downside with heterogeneity is that the whole divide-and-conquer approach falls apart. As a result, the solution typically involves over-provisioning or under-provisioning."

New approaches
As more use cases are established for AI beyond autonomous vehicles, adoption will expand.

This is why Intel bought Nervana last August. Nervana develops 2.5D deep learning chips that utilize a high-performance processor core, moving data across an interposer to high-bandwidth memory. The stated goal is a 100X reduction in time to train a deep learning model as compared with GPU-based solutions.

Fig. 4: Nervana AI chip. Source: Nervana

"These are going to look a lot like high-performance computing chips, which are basically 2.5D chips and fan-out wafer-level packaging," said Mike Gianfagna, vice president of marketing at eSilicon. "You will need massive throughput and ultra-high-bandwidth memory. We've seen some companies looking at this, but not dozens yet. It's still a little early. And when you're talking about implementing machine learning and adaptive algorithms, and how you integrate those with sensors and the information stream, this is extremely complex. If you look at a car, you're streaming data from multiple disparate sources and adding adaptive algorithms for collision avoidance."

He said there are two challenges to solve with these devices. One is reliability and certification. The other is security.

With AI, reliability needs to be considered at a system level, which includes both hardware and software. ARM's acquisition of Allinea in December provided one reference point. Another comes out of Stanford University, where researchers are trying to quantify the impact of trimming computations from software. They have discovered that massive cutting, or pruning, doesn't significantly impact the end product. The University of California, Berkeley has been developing a similar approach based upon computing that is less than 100% accurate.

"Coarse-grain pruning doesn't hurt accuracy compared with fine-grain pruning," said Song Han, a Ph.D. candidate at Stanford University who is researching energy-efficient deep learning. Han said that a sparse matrix developed at Stanford required 10X less computation, had an 8X smaller memory footprint, and used 120X less energy than DRAM. Applied to what Stanford is calling an Efficient Speech Recognition Engine, he said that compression led to accelerated inference. (Those findings were presented at Cadence's recent Embedded Neural Network Summit.)
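Magnitude pruning of the kind Han describes can be sketched in a few lines: zero out the smallest weights by absolute value and keep only the survivors. This is illustrative only, not the Stanford implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64)).astype(np.float32)

# Zero the smallest 90% of weights by magnitude; only the largest
# 10% survive, which is what makes the matrix sparse.
threshold = np.percentile(np.abs(w), 90)
pruned = np.where(np.abs(w) >= threshold, w, 0.0)

density = np.count_nonzero(pruned) / pruned.size
print(f"{density:.0%} of weights remain")  # roughly 10%
```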

Quantum computing adds yet another option for AI systems. Leti CEO Marie Semeria said quantum computing is one of the future directions for her group, particularly for artificial intelligence applications. And Dario Gil, vice president of science and solutions at IBM Research, explained that using classical computing, there is a one-in-four chance of guessing which of four cards is red if the other three are blue. Using a quantum computer with entangled, superimposed qubits, and then reversing the entanglement, the system will provide the correct answer every time.

Fig. 5: Quantum processor. Source: IBM.

Conclusions
AI is not one thing, and consequently there is no single system that works optimally everywhere. But there are some general requirements for AI systems, as shown in the chart below.

Fig. 6: AI basics. Source: OneSpin

And AI does have applications across many markets, all of which will require extensive refinement, expensive tooling, and an ecosystem of support. After years of relying on shrinking devices to improve power, performance and cost, entire market segments are rethinking how they will approach new markets. This is a big win for architects and it adds huge creative options for design teams, but it also will spur massive development along the way, from tools and IP vendors all the way to packaging and process development. Its like hitting the restart button for the tech industry, and it should prove good for business for the entire ecosystem for years to come.

Related Stories
What Does AI Really Mean? eSilicon's chairman looks at technology advances, its limitations, and the social implications of artificial intelligence, and how it will change our world.
Neural Net Computing Explodes Deep-pocket companies begin customizing this approach for specific applications, and spend huge amounts of money to acquire startups.
Plugging Holes In Machine Learning Part 2: Short- and long-term solutions to make sure machines behave as expected.
Wearable AI System Can Detect A Conversation Tone (MIT) An artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person's speech patterns and vitals.

More:

What Does An AI Chip Look Like? - SemiEngineering

Posted in Ai | Comments Off on What Does An AI Chip Look Like? – SemiEngineering

AI in healthcare must overcome security, interoperability concerns – TechTarget

Posted: at 9:15 pm

Artificial intelligence is beginning to gain ground in healthcare. The combination of advanced algorithms, large data sets and powerful computers has offered a new way to leverage technology in patient care. AI is also able to perform complex cognitive tasks and analyze large amounts of patient data instantly. However, despite the powerful capabilities that AI can offer, some physicians are skeptical about the safety of using AI in healthcare, especially in roles that can impact a patient's health.

Today, most consumers have been exposed to some form of AI. Services like Google Home and Amazon's Alexa extensively use artificial intelligence and machine learning as part of their core application. But AI is not limited to taking basic commands to give weather forecasts or set reminders. Artificial intelligence has shown that it can perform several complex and cognitive tasks faster than a human. The automotive industry has already showcased its ability to leverage AI to offer driverless cars, while other industries have also found ways to use machine learning to detect fraud or assess financial risks. These are just a few examples that highlight the maturity level of AI.

Companies such as IBM play a big part in pushing AI into healthcare. IBM's use of its Watson platform in cancer research, insurance claims and clinical support tools has encouraged many in the industry to see the importance of this technology. Despite these encouraging signs and positive uses of artificial intelligence in healthcare, there are still some concerns and questions around its potential risks, and some healthcare professionals are uneasy about AI getting into the business of patient care. Below are four challenges of artificial intelligence in healthcare that need to be overcome before physicians will fully adopt the technology.

Patient health data is protected under federal law, and any breaches or failure to maintain its integrity can have legal and financial penalties. Since AI used for patient care would need access to multiple health data sets, it would need to adhere to the same regulations that current applications and infrastructures must meet. As most AI platforms are consolidated and require extensive computing power, patient data -- or parts of it -- would likely reside in the vendor's data centers. This would cause concerns around data privacy, but could also lead to significant risk if the platform is breached.

One of the popular subjects in the healthcare industry in recent years has been interoperability. Hospitals across the nation face the challenge of not being able to efficiently exchange patient health data with other healthcare organizations, despite the availability of data standards across the world. Adding AI to the mix would likely complicate things even further. When vendors like IBM or Microsoft actively deliver health-related services using their AI capabilities, the likelihood of these organizations talking to each other is very slim due to competition and proprietary technology. However, if policies are put in place that require these platforms to meet current interoperability requirements, this may help address the exchange of data right away.

Opponents of AI in healthcare have argued that computers are not always reliable and can fail on us from time to time. These failures can lead to catastrophic consequences if AI prescribes the wrong medication or gives a patient the wrong diagnosis. However, AI could eventually move to a stage where it can be trusted once it has proven its safety and readiness for patient care. If its error margins are less than or equal to those of its human counterparts, then the platform could be ready to take on an active role in patient care.

AI has progressed to the point where robots or virtual characters can mimic human behavior and interact naturally with humans. Emotional responses expressed in voice tones or text have been engineered based on human emotional reactions. However, there are several decisions physicians make that are based on their gut feeling and intuition, which may never be replicated using algorithms and supercomputers. These are the areas of patient care that would be hard to replace with a robot.

AI technology is advancing at a rapid rate. Several well-known scientists and popular figures such as Stephen Hawking, Bill Gates and Elon Musk have said that AI could become so powerful and self-aware that it may put its own interests before those of humans. But before robots become the enemy, there are tremendous benefits of artificial intelligence in healthcare, and many physicians are welcoming the technology. AI in healthcare offers the opportunity to help physicians identify better treatment options, detect cancer early and engage patients.

MD Anderson pauses IBM Watson AI project

How radiology can benefit from AI

How to overcome obstacles to AI implementation

See the rest here:

AI in healthcare must overcome security, interoperability concerns - TechTarget

Posted in Ai | Comments Off on AI in healthcare must overcome security, interoperability concerns – TechTarget

How AI will lead to self-healing mobile networks – VentureBeat

Posted: at 9:15 pm

Today we are routinely awed by the promise of machine learning (ML) and artificial intelligence (AI). Our phones speak to us, and our favorite apps can ID our friends and family in our photographs. We didn't get here overnight, of course. Enhancements to the network itself (deep convolutional neural networks executing advanced computer science techniques) brought us to this point.

Now one of the primary beneficiaries of our super-connected world will be the very networks we have come to rely on for information, communication, commerce, and entertainment. Much has been written about the networked society, but on this transformative journey, the network itself is becoming a full-fledged, contributing member of that society.

AI and ML will propel networks through four stages of evolution, from today's self-healing networks to learning networks to data-aware networks to self-driving networks.

Today's networks are in Stage I: a real-time feedback loop of network status monitoring and near-real-time optimizations to fix problems or improve performance. The sensory systems and the network optimizations are based on human-made rules and heuristics using simple descriptive analytics. For instance: if signal A goes above threshold B for C seconds, initiate action X.
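The hard-coded rule in the example above can be written directly as code; the function and argument names here are illustrative, not from any real network-management API.

```python
def check_rule(samples, threshold, window):
    # samples: per-second readings of signal A. Fire once the signal
    # has stayed above threshold B for C (window) consecutive seconds.
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= window:
            return True  # initiate action X
    return False

print(check_rule([1, 5, 6, 7, 2], threshold=4, window=3))  # True
print(check_rule([1, 5, 2, 6, 2], threshold=4, window=3))  # False
```

Everything in this rule is fixed by hand, which is exactly the limitation described next.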

These rules are typically easy to interpret but are suboptimal compared with modern, data-driven alternatives because they are hard-coded, cannot adapt to changing environments, and lack the complexity to deal effectively with a wide range of possible situations. In fact, these rules are limited by the inability of the human mind, even an experienced and intelligent mind, to find all the meaningful correlations affecting network KPIs among a massive data set of influencing factors. They also don't allow the humans responsible for network performance to anticipate trouble, making real time the limiting factor to an optimally performing network.

Timing is everything. Stage II networks will continuously find patterns in past network data and use them to predict future behavior. ML can be directed to analyze factors thought to be impactful, like time/day, network events, or one-time or recurring external events or factors (e.g. an election, a natural disaster, or a trend on YouTube).

The value in the data lies in probabilistic correlations between past network performance and manual solutions that provide future optimizations. ML can capture as many correlations as model complexity allows, with data scientists and domain experts working together to best separate signal from noise, calibrating and testing ML models before they are put into production. ML models can reveal an exhaustive distribution of network KPIs and a dizzying array of external influencing factors, and then expose the subtlest of correlative relationships for the sake of predicting future outcomes.

These predictions give human overseers advance warning of how to distribute network resources and perform other optimizations, leading to enhanced performance at lower cost. For example, a network autopilot could detect the slightest predicted deviations from the optimal path and issue warnings to human operators long before actual problems emerge. Continuously collecting data and comparing predictions against reality will enhance accuracy, leading to better next-gen models.

ML methods of note for Stage II include linear and non-linear supervised methods, tree-based ensembles, neural networks, and batch learning (e.g., retrain overnight). In Stage II, predictive assistance means more time for human operators to effect change, and the result is a breakthrough in network performance. Machines make predictions, and humans find solutions, with time to spare.
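As a hedged sketch of the Stage II idea, the toy model below fits past (influencing factors, KPI) pairs with ordinary least squares and forecasts a future window. The feature names and numbers are invented stand-ins for the richer supervised methods listed above:

```python
# Illustrative Stage II sketch: learn from past (factors, KPI) pairs, then
# predict the KPI for a future time window so operators can act early.
# A least-squares linear model stands in for the richer ML methods listed.
import numpy as np

# Hypothetical training data: columns = [hour_of_day, event_flag],
# target = observed cell load (all values invented).
X = np.array([[8, 0], [12, 0], [18, 1], [20, 1], [23, 0]], dtype=float)
y = np.array([0.4, 0.6, 0.9, 0.95, 0.3])

# Add an intercept column and solve the least-squares problem.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_load(hour, event_flag):
    return float(np.array([hour, event_flag, 1.0]) @ coef)

# Forecast load for 19:00 during a large event; a human operator can then
# pre-provision capacity before the problem materializes.
forecast = predict_load(19, 1)
```

In the batch-learning setup the article mentions, this fit would simply be re-run overnight on the newest data, and the comparison of predictions against reality would feed the next retraining cycle.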

The student becomes the master. By Stage III, AI algorithms review past performance and, independent of human direction, identify undiscovered correlative factors affecting future performance outside the guidance of human logic. They do so by looking beyond network data and initial guidance into external data sets such as generated and simulated data.

Machines use knowledge obtained from supervised methods and apply that knowledge to unsupervised methods, revealing undiscovered correlative factors without human intervention or guidance.

A Stage III network provides predictions of multiple possible futures and creates forecasts allowing management to predict potential business outcomes based on their own theoretical actions. For example, the network could let human managers select from a set of possible future outcomes (highest-possible performance during the Super Bowl, or lowest-possible power usage during holiday hours). Thus begins the era of strategic network optimization, with the network not only predicting a single future, but offering multiverse futures to its human colleagues. ML methods for Stage III include deep learning, simulation techniques, and other advanced computer science techniques like bandits, advanced statistics, model governance, and automatic model selection.
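The "multiverse" offering can be sketched as simulating each candidate plan and handing the ranked outcomes to a human manager. Everything below (the plan names and their simulated KPI numbers) is invented for illustration:

```python
# Hedged sketch of the Stage III "multiverse" idea: simulate the outcome of
# several candidate plans, then offer all of them to a human decision-maker
# rather than committing to a single predicted future.

def simulate(plan):
    # Toy stand-in for a network simulator: returns (performance, power_cost).
    profiles = {
        "max_performance": (0.98, 1.00),
        "balanced":        (0.90, 0.70),
        "min_power":       (0.75, 0.40),
    }
    return profiles[plan]

plans = ["max_performance", "balanced", "min_power"]
futures = {p: simulate(p) for p in plans}

# e.g. highest possible performance for the Super Bowl, or lowest possible
# power usage during holiday hours; the human picks the future.
best_for_superbowl = max(futures, key=lambda p: futures[p][0])
best_for_holidays = min(futures, key=lambda p: futures[p][1])
```

The point is the shape of the interface: the network surfaces a menu of simulated futures and their trade-offs, and strategy stays with its human colleagues.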

While highly capable, a Stage III network is still not technically intelligent. That grand jump towards the Singularity occurs in Stage IV.

I reason, therefore I am. A Stage IV network can (1) independently identify and prioritize factors of interest that impact network performance, (2) accurately predict multiverse outcomes in time for optimally executed human-effected remedies, and, most importantly, (3) distinguish between those factors that are causal vs. correlative to gain deeper insights and drive better decisions.

The distinction between causal and correlative is itself based on probabilistic analysis as seen in research. The ability of AI to establish causality is the ability to understand the root causes of network performance as opposed to the correlative signs of those causes. The ability to identify causal factors will lead to more accurate predictions and an even better-performing network. At this stage, the network gains the ability to reason cause vs. effect and the truly intelligent network is born.

A Stage IV network can autonomously choose a course of action to maximize operational efficiency in the face of external influences. It can improve security against new incoming threats and more generally operate to maximize a given set of KPIs. The system is adaptive to real-time changes and continuously learns and improves in a data-driven context. ML methods of note for Stage IV include deep learning, reinforcement learning, online learning, dynamic systems, and other advanced computer science techniques.
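One of the listed methods, a simple bandit-style reinforcement learner, is enough to illustrate the Stage IV loop: act, observe the resulting KPI, and shift toward whatever works. The action names and reward values below are synthetic:

```python
# Minimal sketch of the reinforcement-learning loop behind a Stage IV
# network: repeatedly choose an action, observe a noisy KPI reward, and
# converge on the action with the best running average (epsilon-greedy).
import random

random.seed(0)
actions = ["route_a", "route_b"]
true_kpi = {"route_a": 0.6, "route_b": 0.9}  # hidden mean KPI per action
totals = {a: 0.0 for a in actions}
counts = {a: 0 for a in actions}

def estimate(a):
    return totals[a] / counts[a] if counts[a] else 0.0

for step in range(500):
    if random.random() < 0.1:              # explore occasionally
        a = random.choice(actions)
    else:                                  # otherwise exploit best estimate
        a = max(actions, key=estimate)
    reward = true_kpi[a] + random.uniform(-0.05, 0.05)  # noisy KPI reading
    totals[a] += reward
    counts[a] += 1

best = max(actions, key=estimate)  # the learner settles on route_b
```

A real self-driving network would of course face a vastly larger state and action space, but the loop, continuous data-driven adaptation without a human in the control path, is the same.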

The notion of applying remedies locally before globally is apropos in the case of AI and ML. While the world will no doubt benefit greatly from the democratization and mobilization of its ever-expanding mountain of data, it is the network, and the networked society, that stand to benefit the most, and soonest, from our journey toward the truly intelligent machine.

Diomedes Kastanis is VP, Chief Innovation Office, at Ericsson, supporting advancement of the company's technology vision and innovation.

Read more:

How AI will lead to self-healing mobile networks - VentureBeat

Posted in Ai | Comments Off on How AI will lead to self-healing mobile networks – VentureBeat

Meet the weaponized propaganda AI that knows you better than you know yourself – ExtremeTech

Posted: at 9:15 pm

Is it worse to be distracted by irrelevant ads, or to be monitored closely enough that the ads are accurate but creepy? Why choose? (Why not Zoidberg?) One company called Cambridge Analytica has managed to apply what some are calling a weaponized AI propaganda machine in order to visit both fates upon us at once. And it's all made possible by Facebook.

Cambridge Analytica specializes in the mass manipulation of thought. One way they accomplish this is through social media, particularly by deploying native advertising. Otherwise known as sponsored content, these are ads designed to fool you into assimilating the ad unchallenged. The company also uses Facebook as a platform to push microtargeted posts to specific audiences, looking for the tipping point where someone's political inclination can be changed, just a little bit, for the right price. Much like Facebook games designed specifically for their addictive potential rather than for any entertainment value, these intellectual salesmen exist solely to hit every sub-perceptual lever in order to bypass our conscious barriers.

Cambridge Analytica is one subsidiary of a UK-based firm called SCL (Strategic Communication Laboratories) that does business in psychometrics, an emerging field concerned with applying the big-data approach to psychology and the social sciences. SCL also claims secretive but highly paid disinformation and psy-ops contract work on at least four continents. Their CV includes work done on the public dime here in America, training our military for counterterrorism. Also among their services is the euphemistically named practice of "election management." They are riding to fame, or at least better funding, on the coattails of Donald Trump's ascension to the White House, for which they claim no small degree of responsibility.

"If you want certainty, you need scale," their website asserts, and they say they're just the outfit to provide it. Like any business proposition, this is best taken with some skepticism. But turning political tides in favor of the highest bidder's ideology is their whole business model. Their parent company claims to have exerted material influence over elections and other geopolitical outcomes in 22 countries. They, and Cambridge Analytica as their agent, claim to be mindshare brokers of the highest order.

Image source: Cambridge Analytica

Nobody is willing to go on the record and put their name to assertions that the emperor has no clothes, for fear of incurring the wrath of newly powerful Cambridge Analytica board member Steve Bannon, or yanking too hard on the Koch brothers' monetary speech apparatus. It's not clear whether Cambridge Analytica is pulling the strings they say they're pulling, or just really good at knowing which side is going to win. But they definitely have something under their hats.

There are a few fundamental tech applications that underlie what Cambridge Analytica claims it can do. But they all depend on the idea that artificial intelligence isn't some dissimilar alien entity, sprung fully self-actualized from the forebrain of humanity like HAL. AI is an extension of human intelligence, which we accomplish by applying the organization and data-handling power of computers to our own tasks and problems. On a reductionist level, all they're doing at Cambridge Analytica is using more RAM and a rigorous, written-down set of rules to organize and manipulate data that social scientists handled with clipboards and calculators and pencils back in the day. The AI that enables the entire business model is likely an intellectual descendant of Dr. Michal Kosinski's work in the Cambridge University social sciences department, and an illegitimate one, if you ask Kosinski himself. The story reads like a film noir.

It starts with the marriage of Facebook, psychology, and AI. Facebook activity has an uncanny amount of predictive power. Back in the 1980s, scientists developed the questionnaire-based OCEAN model of five major psychological traits, still in use today. Michal Kosinski's 2014 PhD project rested on a psychometric Facebook survey called MyPersonality, which added AI to the mix. MyPersonality catalogued participants' Facebook profile information, including social connections and Likes, and also asked the participants to take a Facebook quiz to find out their OCEAN scores. Then it used machine learning to predict their OCEAN scores based on their Facebook activity. With only a person's Facebook likes plugged into a MyPersonality dossier, Kosinski's AI could reliably predict their sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender.

Success rates for Kosinski's prediction algorithm.

More data meant a better guess, of course. Seventy likes were enough to make the AI's prediction of a person's OCEAN score better than a friend could manage, 150 made it more accurate than a parent's estimate, and 300 likes could predict a person's OCEAN score better than the best human judge of a person: their spouse. More likes could even surpass what a person thought they knew about themselves, predicting their OCEAN score closer than the person's own best estimate of what their score would be.
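The underlying mechanic is less mysterious than it sounds: encode each user as a vector over a vocabulary of page likes, then predict a trait score from similar, already-scored users. The sketch below uses an invented page vocabulary and made-up scores; Kosinski's actual models were far more sophisticated:

```python
# Toy illustration of trait prediction from likes: users become binary
# vectors over a like vocabulary, and an unknown user's trait score is a
# similarity-weighted average of known users' scores. All data invented.
pages = ["opera", "metal", "gardening", "esports"]

def vec(likes):
    return [1 if p in likes else 0 for p in pages]

# Training users with a known "openness" score (the O in OCEAN).
training = [
    (vec({"opera", "gardening"}), 0.8),
    (vec({"metal", "esports"}), 0.5),
    (vec({"opera", "esports"}), 0.7),
]

def predict_openness(likes):
    v = vec(likes)
    # Weight each training user's score by like overlap (a crude similarity).
    sims = [(sum(a * b for a, b in zip(v, u)), score) for u, score in training]
    total = sum(s for s, _ in sims)
    return sum(s * score for s, score in sims) / total if total else None

guess = predict_openness({"opera", "gardening", "esports"})
```

Scale this from four pages to millions, and from three users to tens of millions, and the "more likes, better guess" curve the article describes follows naturally.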

It goes the other way, too. To a database, a person's name and entries from their profile are just nodes in an n-dimensional space, and the connections between nodes aren't necessarily directional. You can class individuals by similarities in the data, or you can search the data for individuals who fit into a class. It's as simple as doing an alphabetical sort in an Excel sheet.

Working with the predictive power of Facebook likes and quizzes became Kosinski's stock in trade. Kosinski even used Amazon's Mechanical Turk in some of his research, crowdsourcing his quizzes to probe what made people respond to them. (Spoiler: getting paid helps.) His work earned him a deputy directorship at Cambridge's Psychometrics Centre. It also earned him the attention of SCL. Kosinski told Motherboard that in 2014, a junior professor in his department named Aleksandr Kogan cold-approached him asking for access to the MyPersonality database. Kogan, it turns out, was affiliated with SCL. Kosinski Googled Kogan, discovered this affiliation, and declined to collaborate. But his research and methods were already in the wild, which meant in Kogan's hands.

Kogan founded his own company that contracted with SCL to do psychometrics and predictive analysis, using aggregated Facebook data and a governing AI. At least some of this data came from jobs posted to Mechanical Turk, where participants were paid about $1 in exchange for access to Facebook profile data. Kogan changed his name and moved to Singapore. Kosinski remained deputy director of the Psychometrics Centre until he moved to the States in 2014.

Facebook has been in the news again and again because of the sheer extent of their data collection. One way they get the information they have is by using a thing called a conversion pixel. You know that stupid social network widget that's on every web page these days, including this one? It's designed to let you like and share a page without having to navigate back to Facebook. It also affords incredible mass-surveillance opportunities. Every time you visit a web page with a Facebook share widget, you query one of Facebook's servers for a conversion pixel. Facebook then promptly attempts to phone home with what link you visited, how long you lingered on the page, whether you scrolled down or signed up or bought anything, and whether you chose to Like or share the page, plus the text of whatever comment you might post at the bottom using your Facebook profile, even if you delete the text and don't publish the post. Likes already have enough predictive power; between likes and activity, that widget can produce a comprehensive set of metadata on a person's personality.
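Mechanically, a conversion pixel is just a tiny resource request with metadata attached. The sketch below shows the general shape of such a beacon URL; the host and field names are invented, since the real parameters are not public:

```python
# Hedged sketch of the general tracking-pixel mechanism: the embedded widget
# fires a tiny image request back to the platform, and the payload of
# interest is the metadata attached to the URL. Host and field names are
# hypothetical; real trackers use their own (undocumented) parameters.
from urllib.parse import urlencode

def pixel_url(user_id, page_url, dwell_seconds, scrolled, liked):
    params = {
        "uid": user_id,             # who (via the logged-in session)
        "url": page_url,            # which page was visited
        "dwell": dwell_seconds,     # how long they lingered
        "scrolled": int(scrolled),  # whether they scrolled down
        "liked": int(liked),        # whether they hit the Like button
    }
    return "https://tracker.example.com/pixel.gif?" + urlencode(params)

beacon = pixel_url("u123", "https://news.example.com/story", 42, True, False)
```

The browser fetches a one-pixel image; the server ignores the image and logs the query string. Multiply that by every page carrying the widget, and the browsing dossier described above assembles itself.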

When logged-in users take Facebook quizzes like Kosinski's, the quiz can ask for permission to scrape any or all of this data out of their Facebook profile and into the hands of any marketer, data analyst, or election-management specialist willing to pay for it. Between that and purchasing life-history data and credit reports from brokers like Experian, this is how Cambridge Analytica profiles their marks in the first place. In return, at most, you get to post a little quizlet thing to your wall, so you, and all of your friends' data, can learn which Walking Dead character each of you would be.

This is not an exchange of equivalent value.

CORAL!

And then there's microtargeting: the idea that Alice the Advertiser can accurately change the mind of Bob the Buyer based on information Alice can buy.

The notion of microtargeting is not itself new, but what Cambridge Analytica is doing with it is novel. They're using the Facebook ecosystem because it perfectly enables the goal of targeting individuals and using their longer-lasting personality characteristics like a psychological GPS. It all hinges on a Facebook advertising tool called unpublished posts. Among advertisers, these are simply called dark posts.

Normally, when you make a Facebook post, it appears on your Timeline within your current privacy settings; this is true for people and Pages alike. When an advertiser makes a dark post, though, they can choose to serve that post to only a certain subset of users. Nobody sees it but the people the advertiser was targeting. And they're canny about choosing their targets, looking for persuadable voters.

"For example," explained Cambridge Analytica's CEO Alexander Nix in an op-ed last year about the company's work on the Ted Cruz presidential campaign, "our issues model identified that there was a small pocket of voters in Iowa who felt strongly that citizens should be required by law to show photo ID at polling stations."

Almost certainly informed by Kosinski's work on Facebook profiling, Cambridge Analytica used the OCEAN model to advise the Cruz campaign on how to capture the vote on the issue of voter ID. The approach: use machine learning to classify, target, and serve dark posts to specific individuals based on their unique profiles, in order to use this relatively niche issue as a political pressure point to motivate them to go out and vote for Cruz. Later, Cambridge Analytica would use the same approach for the Trump campaign. It's not possible to make a complete count, but various places around the web have claimed that Cambridge Analytica tested between 45,000 and 175,000 different dark posts on the days of the Clinton-Trump debates.

Where do they get all the content to serve? It's difficult to say, because Cambridge Analytica doesn't respond to journalists who ask them about their methods. But the $6 million or so Trump has paid Cambridge Analytica can only pay so many people for so long. One journalist has been digging into this issue, and his research strongly suggests that much of the political propaganda surrounding the 2016 election was procedurally generated using machine learning, and then packaged and served to target audiences. As that Facebook widget follows a user around the web, the AI gets better and better at serving the user politically polarizing content she'll click on. Mindshare acquired.

Nix went on: "For people in the Temperamental personality group, who tend to dislike commitment, messaging on the issue should take the line that showing your ID to vote is as easy as buying a case of beer. Whereas the right message for people in the Stoic Traditionalist group, who have strongly held conventional views, is that showing your ID in order to vote is simply part of the privilege of living in a democracy."

"We call this behavioral microtargeting," Nix later told Bloomberg, "and this is really our secret sauce, if you like. This is what we're bringing to America."

But don't take my word for it. Listen to Nix explain his own methods.

If you don't want to opt in to the secret sauce, what can you do?

On the individual level, bluntly: get good at knowing when you're being sold something. Don't reward intellectual salesmanship that you wouldn't tolerate elsewhere. After all, build a better mousetrap, and Nature will build a better mouse.

From the top-down direction, one way is to work to pass strong privacy regulations. They would need to entail meaningful oversight, and consequences that have teeth when an organization is found in breach of the law. But they also have to be nuanced, because if the government tries to ban something, and then that ban gets challenged in court, the government can lose. That sets legal precedent, just like a win in court would.

Also, here's a thought experiment: watching Deadpool from your desk chair is not the same as taking in a late-night show in a theater, with the popcorn and the bass and all that. If pirating the data that can reconstruct a movie is the moral and legal equivalent of stealing the movie from a store, then pirating a model that can reconstruct someone's personality with enough fidelity to predict and alter their behavior without their consent might also be worth legal attention. Can you consent to be misled, and then vote based on that? Our legislature can be sold ideas, and they enact policy by voting. Who's serving dark posts to Congress, and what's in those posts?

"If data feels cold and impersonal," a Cambridge Analytica press release muses, "then consider this: the data revolution is in the end making politics (or shopping) more intimate by restoring the human scale."

That's exactly the problem. It is personal. So much is built on the fact that data can be personal, even when dealing en masse. The salient thing here is that there is an outfit which means to leverage the enormous body of intimately personal data they can gather, in order to conduct large-scale and yet individualized psy-ops for the highest bidder. The stakes they're after are no less than the medium-term fate of nations. Whether or not Cambridge Analytica has done what they claim to have done, Pandora's box is open.

Now read: 19 ways to stay anonymous and protect your online privacy

Read more:

Meet the weaponized propaganda AI that knows you better than you know yourself - ExtremeTech

Posted in Ai | Comments Off on Meet the weaponized propaganda AI that knows you better than you know yourself – ExtremeTech

Growth of AI Means We Need To Retrain Workers… Now – Forbes

Posted: February 28, 2017 at 8:08 pm


Picture a future where a robot suggests where to go for dinner, which meetings to take or which hotel you should stay at during an important client event. That's just an example of the impact artificial intelligence (AI) can have on the ways we work ...

Excerpt from:

Growth of AI Means We Need To Retrain Workers... Now - Forbes

Posted in Ai | Comments Off on Growth of AI Means We Need To Retrain Workers… Now – Forbes

How AI Is Changing The Way Companies Are Organized … – Fast Company

Posted: at 8:08 pm

By Jared Lindzon 02.28.17 | 5:55 am

Artificial Intelligence may still be in its infancy, but it's already forcing leadership teams around the world to reconsider some of their core structures.

Advances in technology are causing firms to restructure their organizational makeup, transform their HR departments, develop new training models, and reevaluate their hiring practices. This is according to Deloitte's 2017 Human Capital Trends Report, which draws on surveys from over 10,000 HR and business leaders in 140 countries. Many of these changes are a result of the early penetration of basic AI software, as well as preparation for the organizational needs that will emerge as these technologies mature.

"What we concluded is that what AI is definitely doing is not eliminating jobs, it is eliminating tasks of jobs, and creating new jobs, and the new jobs that are being created are more human jobs," says Josh Bersin, principal and founder of Bersin by Deloitte. Bersin defines "more human" jobs as those that require traits robots haven't yet mastered, like empathy, communication, and interdisciplinary problem solving. "Individuals that have very task-oriented jobs will have to be retrained, or they're going to have to move into new roles," he adds.

The survey found that 41% of respondents have fully implemented or made significant progress in adopting AI technologies in the workforce, yet only 15% of global executives say they are prepared to manage a workforce with people, robots, and AI working side by side.

As a result, early AI technologies and a looming AI revolution are forcing organizations to reevaluate a number of established strategies. Instead of hiring the most qualified person for a specific task, many companies are now putting greater emphasis on cultural fit and adaptability, knowing that individual roles will have to evolve along with the implementation of AI.

On-the-job training has become more vital for transitioning people into new roles as new technologies are adopted, and HR's function is quickly moving away from its traditional evaluation and recruiting role, which can increasingly be done more efficiently using big data and AI software, toward a greater focus on improving the employee experience across an increasingly contingent workforce.

The Deloitte survey also found that 56% of respondents are already redesigning their HR programs to leverage digital and mobile tools, and 33% are utilizing some form of AI technology to deliver HR functions.

The integration of early artificial intelligence tools is also causing organizations to become more collaborative and team-oriented, as opposed to the traditional top-down hierarchical structures.

"To integrate AI, you have to have an internal team of expert product people and engineers that know its application and are working very closely with the frontline teams that are actually delivering services," says Ian Crosby, cofounder and CEO of Bench, a digital bookkeeping provider. "When we are working AI into our frontline service, we don't go away to a dark room and come back after a year with our masterpiece. We work with our frontline bookkeepers day in, day out."

In order to properly adapt to changing technologies, organizations are moving away from a top-down structure and toward multidisciplinary teams. In fact, 32% of survey respondents said they are redesigning their organizations to be more team-centric, optimizing them for adaptability and learning in preparation for technological disruption.

Finding a balanced team structure, however, doesn't happen overnight, explains Crosby. "Very often, if there's a big organization, it's better to start with a small team first, and let them evolve and scale up, rather than try to introduce the whole company all at once."

Crosby adds that Bench's eagerness to integrate new technologies also shapes the skills the company recruits and hires for. Beyond checking the boxes of the job's technical requirements, he says the company looks for candidates who are ready to adapt to the changes that are coming.

"When you're working with AI, you're building things that nobody has ever built before, and nobody knows how that will look yet," he says. "If they're not open to being completely wrong, and having the humility to say they were wrong, we need to reevaluate."

As AI becomes more sophisticated, leaders will eventually need to decide where to place human employees, which tasks are best suited for machines, and which can be done most efficiently by combining the two.

"It's a few years before we have actual AI, it's getting closer and closer, but AI still has a big problem understanding human intent," says Rurik Bradbury, the global head of research and communication for online chat software provider LivePerson. As more AI software becomes available, he advises organizations to think of those three different categories (human, machine, or cyborg) and decide who should be hired for each job.

While AI technologies are still in their infancy, it won't be long before every organization is forced to develop its own AI strategy in order to stay competitive. Those with the right HR teams, training programs, organizational structures, and adaptable staff will be best prepared for this fast-approaching reality.

Jared Lindzon is a freelance journalist born, raised, and residing in Toronto, covering technology, entrepreneurship, entertainment, and more for a wide variety of publications in Canada, the United States, and around the world. When he's not playing with gadgets, interviewing entrepreneurs, or traveling to music festivals and tech conferences, you can usually find him diligently practicing his third-person bio writing skills.


See original here:

How AI Is Changing The Way Companies Are Organized ... - Fast Company

Posted in Ai | Comments Off on How AI Is Changing The Way Companies Are Organized … – Fast Company

AI scheduling startup launches subscription for businesses | PCWorld – PCWorld

Posted: at 8:08 pm


Setting up meetings can be a pain, since they often require folks to send emails back and forth figuring out a time before finally sending off a calendar invitation to block everyone's schedule. A New York startup called x.ai wants to simplify that with a helpful bot, and they just launched a product aimed at serving businesses.

The service provides users with access to x.ai's assistant, which can go by Andrew or Amy Ingram, to automatically set up meetings with people inside a company and help schedule time with folks who work elsewhere. It's an extension of the company's existing service, which is built for individuals.

Both share the same core functionality: users can loop x.ai's assistant into an email conversation by copying it on the thread, and the assistant will jump in to help figure out a time when everyone can meet. The assistant can analyze an email to identify parameters for a meeting, and then look through a user's calendar to see what times work.

Once it has a time to suggest, the assistant will reach out to other participants in the conversation to gather their availability and book a meeting.
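Stripped of the natural-language and email plumbing, the core scheduling step is an intersection over participants' availability. A toy sketch, with invented calendars (x.ai's actual system layers NLP and back-and-forth email handling on top of something like this):

```python
# Toy sketch of the scheduling core: given each participant's free slots,
# pick the earliest hour everyone has open. Calendars are invented.

def first_common_slot(calendars):
    """calendars: list of sets of free hours (0-23). Return the earliest
    hour free for every participant, or None if there is no overlap."""
    common = set.intersection(*calendars)
    return min(common) if common else None

alice = {9, 10, 14, 15}
bob = {10, 11, 15, 16}
carol = {10, 15}
slot = first_common_slot([alice, bob, carol])  # 10 and 15 both work; pick 10
```

The hard part the assistant automates is not this intersection but extracting the constraints from free-form email and chasing external participants for their availability.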

Business users get a few additional benefits: people inside a company can use the assistant to automatically schedule time with one another, without requiring any back and forth. Administrators will get a dashboard to manage and track employee use of the service, and companies will be able to customize the assistant's signature and use a custom domain name for the email address needed to summon it.

While that all sounds lovely, the service comes at a high price: businesses are expected to pay $59 per active user per month. To put that in context, Microsoft's most expensive Office 365 Enterprise subscription costs $35 per user per month.

The good news is that x.ai won't charge companies for people who don't use its service to schedule meetings, even if those folks have access to it.

X.ai CEO Dennis Mortensen argued that the service is worth the price because of the productivity gains users receive from it; the company's hypothesis is that those gains will offset the cost.

There's also the question of security and privacy. In situations when x.ai's automated systems don't understand input, the service will send human reviewers slices of an email to try to get the correct result. Those people are supposed to see only fragments stripped of context, in a way that would prevent them from learning what's being discussed, but that may not be an acceptable risk for some businesses.

In order to use the assistant, people have to give it access to their calendars, too. However, Mortensen wanted to make clear that the company's business is helping with scheduling, and it doesn't resell user data to try to make a buck.

There's also another benefit to users on the security side: the assistant is designed to protect the calendar of the person it's working for by default, keeping them from being scheduled for meetings they don't want. It also won't give away information about a user's availability to salespeople or other social engineers, like a human assistant might.

Right now, the assistant only understands English, so companies looking for other language support will need to wait for x.ai to add it.

The startup already has a handful of customers signed up, including venture capital firm Work-Bench, and Assist, another AI startup.

Blair Hanley Frank is primarily focused on the public cloud, productivity and operating systems businesses for the IDG News Service.

See the article here:

AI scheduling startup launches subscription for businesses | PCWorld - PCWorld

Posted in Ai | Comments Off on AI scheduling startup launches subscription for businesses | PCWorld – PCWorld
