Category Archives: Artificial Intelligence

China should step up regulation of artificial intelligence in finance, think tank says – Reuters

Posted: December 25, 2019 at 6:51 am

QINGDAO, China/BEIJING (Reuters) - China should introduce a regulatory framework for artificial intelligence in the finance industry, and enhance technology used by regulators to strengthen industry-wide supervision, policy advisers at a leading think tank said on Sunday.

FILE PHOTO: China Securities Regulatory Commission Chairman Xiao Gang addresses the Asian Financial Forum in Hong Kong January 19, 2015. REUTERS/Bobby Yip/File Photo

"We should not deify artificial intelligence as it could go wrong just like any other technology," said the former chief of China's securities regulator, Xiao Gang, who is now a senior researcher at the China Finance 40 Forum.

"The point is how we make sure it is safe for use and include it with proper supervision," Xiao told a forum in Qingdao on China's east coast.

Technology to regulate intelligent finance - referring to banking, securities and other financial products that employ technology such as facial recognition and big-data analysis to improve sales and investment returns - has largely lagged behind development, according to a report from the China Finance 40 Forum.

Evaluation of emerging technologies and industry-wide contingency plans should be fully considered, while authorities should draft laws and regulations on privacy protection and data security, the report showed.

Lessons should be learned from the boom and bust of the online peer-to-peer (P2P) lending sector where regulations were not introduced quickly enough, said economics professor Huang Yiping at the National School of Development of Peking University.

China's P2P industry was once widely seen as an important source of credit, but has lately been undermined by pyramid-scheme scandals and absent bosses, sparking public anger as well as a broader government crackdown.

"Changes have to be made among policy makers," said Zhang Chenghui, chief of the finance research bureau at the Development Research Institute of the State Council.

"We suggest regulation on intelligent finance be written into the 14th five-year plan of the country's development, and each financial regulator - including the central bank, banking and insurance regulators and the securities watchdog - should appoint its own chief technology officer to enhance supervision of the sector."

Zhang also suggested the government bring together the data platforms of each financial regulatory body to better monitor potential risk and act quickly as problems arise.

Reporting by Cheng Leng in Qingdao, China, and Ryan Woo in Beijing; Editing by Christopher Cushing

Why video games and board games aren't a good measure of AI intelligence – The Verge

Posted: at 6:51 am

Measuring the intelligence of AI is one of the trickiest but most important questions in the field of computer science. If you can't understand whether the machine you've built is cleverer today than it was yesterday, how do you know you're making progress?

At first glance, this might seem like a non-issue. "Obviously AI is getting smarter" is one reply. Just look at all the money and talent pouring into the field. Look at the milestones, like beating humans at Go, and the applications that were impossible to solve a decade ago but are commonplace today, like image recognition. How is that not progress?

Another reply is that these achievements aren't really a good gauge of intelligence. Beating humans at chess and Go is impressive, yes, but what does it matter if the smartest computer can be out-strategized in general problem-solving by a toddler or a rat?

This is a criticism put forward by AI researcher François Chollet, a software engineer at Google and a well-known figure in the machine learning community. Chollet is the creator of Keras, a widely used library for developing neural networks, the backbone of contemporary AI. He's also written numerous textbooks on machine learning and maintains a popular Twitter feed where he shares his opinions on the field.

In a recent paper titled "On the Measure of Intelligence," Chollet also laid out an argument that the AI world needs to refocus on what intelligence is and isn't. If researchers want to make progress toward general artificial intelligence, says Chollet, they need to look past popular benchmarks like video games and board games, and start thinking about the skills that actually make humans clever, like our ability to generalize and adapt.

In an email interview with The Verge, Chollet explained his thoughts on this subject, talking through why he believes current achievements in AI have been misrepresented, how we might measure intelligence in the future, and why scary stories about superintelligent AI (as told by Elon Musk and others) have an unwarranted hold on the public's imagination.

This interview has been lightly edited for clarity.

In your paper, you describe two different conceptions of intelligence that have shaped the field of AI. One presents intelligence as the ability to excel in a wide range of tasks, while the other prioritizes adaptability and generalization, which is the ability for AI to respond to novel challenges. Which framework is a bigger influence right now, and what are the consequences of that?

In the first 30 years of the history of the field, the most influential view was the former: intelligence as a set of static programs and explicit knowledge bases. Right now, the pendulum has swung very far in the opposite direction: the dominant way of conceptualizing intelligence in the AI community is the "blank slate" or, to use a more relevant metaphor, the freshly initialized deep neural network. Unfortunately, it's a framework that's been going largely unchallenged and even largely unexamined. These questions have a long intellectual history, literally decades, and I don't see much awareness of this history in the field today, perhaps because most people doing deep learning today joined the field after 2016.

It's never a good thing to have such intellectual monopolies, especially as an answer to poorly understood scientific questions. It restricts the set of questions that get asked. It restricts the space of ideas that people pursue. I think researchers are now starting to wake up to that fact.

In your paper, you also make the case that AI needs a better definition of intelligence in order to improve. Right now, you argue, researchers focus on benchmarking performance in static tests like beating video games and board games. Why do you find this measure of intelligence lacking?

The thing is, once you pick a measure, you're going to take whatever shortcut is available to game it. For instance, if you set chess-playing as your measure of intelligence (which we started doing in the 1970s until the 1990s), you're going to end up with a system that plays chess, and that's it. There's no reason to assume it will be good for anything else at all. You end up with tree search and minimax, and that doesn't teach you anything about human intelligence. Today, pursuing skill at video games like Dota or StarCraft as a proxy for general intelligence falls into the exact same intellectual trap.
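
To make concrete how little such a system carries beyond its one game, here is a minimal sketch (my illustration, not Chollet's code) of minimax search applied to a toy Nim-like game. Every line is specific to this one game's rules, and none of it transfers anywhere else:

```python
# Minimax over a toy Nim variant: players alternately take 1 or 2 sticks,
# and whoever takes the last stick wins. Illustrative only.

def minimax(sticks, maximizing):
    if sticks == 0:
        # The previous player took the last stick, so the side to move lost.
        return -1 if maximizing else 1
    values = [minimax(sticks - take, not maximizing)
              for take in (1, 2) if take <= sticks]
    return max(values) if maximizing else min(values)

print(minimax(5, True))  # 1: the first player can force a win from 5 sticks
```

Exhaustive search like this plays its game perfectly, yet it encodes no capability that could be redirected to any other task.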

This is perhaps not obvious because, in humans, skill and intelligence are closely related. The human mind can use its general intelligence to acquire task-specific skills. A human who is really good at chess can be assumed to be pretty intelligent because, implicitly, we know they started from zero and had to use their general intelligence to learn to play chess. They weren't designed to play chess. So we know they could direct this general intelligence to many other tasks and learn to do these tasks similarly efficiently. That's what generality is about.

But a machine has no such constraints. A machine can absolutely be designed to play chess. So the inference we do for humans, "can play chess, therefore must be intelligent," breaks down. Our anthropomorphic assumptions no longer apply. General intelligence can generate task-specific skills, but there is no path in reverse, from task-specific skill to generality. At all. So in machines, skill is entirely orthogonal to intelligence. You can achieve arbitrary skills at arbitrary tasks as long as you can sample infinite data about the task (or spend an infinite amount of engineering resources). And that will still not get you one inch closer to general intelligence.

The key insight is that there is no task where achieving high skill is a sign of intelligence, unless the task is actually a meta-task that involves acquiring new skills over a broad [range] of previously unknown problems. And that's exactly what I propose as a benchmark of intelligence.

If these current benchmarks don't help us develop AI with more generalized, flexible intelligence, why are they so popular?

There's no doubt that the effort to beat human champions at specific well-known video games is primarily driven by the press coverage these projects can generate. If the public wasn't interested in these flashy milestones that are so easy to misrepresent as steps toward superhuman general AI, researchers would be doing something else.

I think it's a bit sad because research should be about answering open scientific questions, not generating PR. If I set out to "solve" Warcraft III at a superhuman level using deep learning, you can be quite sure that I will get there as long as I have access to sufficient engineering talent and computing power (which is on the order of tens of millions of dollars for a task like this). But once I'd done it, what would I have learned about intelligence or generalization? Well, nothing. At best, I'd have developed engineering knowledge about scaling up deep learning. So I don't really see it as scientific research because it doesn't teach us anything we didn't already know. It doesn't answer any open question. If the question was, "Can we play X at a superhuman level?", the answer is definitely, "Yes, as long as you can generate a sufficiently dense sample of training situations and feed them into a sufficiently expressive deep learning model." We've known this for some time. (I actually said as much a while before the Dota 2 and StarCraft II AIs reached champion level.)

What do you think the actual achievements of these projects are? To what extent are their results misunderstood or misrepresented?

One stark misrepresentation I'm seeing is the argument that these high-skill game-playing systems represent real progress toward AI systems "which can handle the complexity and uncertainty of the real world" [as OpenAI claimed in a press release about its Dota 2-playing bot OpenAI Five]. They do not. If they did, it would be an immensely valuable research area, but that is simply not true. Take OpenAI Five, for instance: it wasn't able to handle the complexity of Dota 2 in the first place because it was trained with 16 characters, and it could not generalize to the full game, which has over 100 characters. It was trained over 45,000 years of gameplay (then again, note how training data requirements grow combinatorially with task complexity), yet the resulting model proved very brittle: non-champion human players were able to find strategies to reliably beat it in a matter of days after the AI was made available for the public to play against.

If you want to one day become able to handle the complexity and uncertainty of the real world, you have to start asking questions like, what is generalization? How do we measure and maximize generalization in learning systems? And that's entirely orthogonal to throwing 10x more data and compute at a big neural network so that it improves its skill by some small percentage.

So what would be a better measure of intelligence for the field to focus on?

In short, we need to stop evaluating skill at tasks that are known beforehand, like chess or Dota or StarCraft, and instead start evaluating skill-acquisition ability. This means only using new tasks that are not known to the system beforehand, measuring the prior knowledge about the task that the system starts with, and measuring the sample-efficiency of the system (which is how much data is needed to learn to do the task). The less information (prior knowledge and experience) you require in order to reach a given level of skill, the more intelligent you are. And today's AI systems are really not very intelligent at all.
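
As a rough sketch of that yardstick (my toy illustration, not the formal measure defined in the paper), one could rank systems by skill attained per unit of experience and prior knowledge consumed:

```python
# Toy scoring rule: intelligence ~ skill gained per unit of experience
# and built-in priors. All numbers are made up for illustration.

def efficiency_score(skill, samples_used, prior_bits):
    return skill / (samples_used + prior_bits)

# Two hypothetical systems reaching the same skill level:
brute_force = efficiency_score(skill=0.95, samples_used=1_000_000, prior_bits=10)
few_shot = efficiency_score(skill=0.95, samples_used=100, prior_bits=10)
print(few_shot > brute_force)  # True: the sample-efficient learner ranks higher
```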

In addition, I think our measure of intelligence should make human-likeness more explicit, because there may be different types of intelligence, and human-like intelligence is what we're really talking about, implicitly, when we talk about general intelligence. And that involves trying to understand what prior knowledge humans are born with. Humans learn incredibly efficiently, requiring only very little experience to acquire new skills, but they don't do it from scratch. They leverage innate prior knowledge, besides a lifetime of accumulated skills and knowledge.

[My recent paper] proposes a new benchmark dataset, ARC, which looks a lot like an IQ test. ARC is a set of reasoning tasks, where each task is explained via a small sequence of demonstrations, typically three, and you should learn to accomplish the task from these few demonstrations. ARC takes the position that every task your system is evaluated on should be brand-new and should only involve knowledge of a kind that fits within human innate knowledge. For instance, it should not feature language. Currently, ARC is totally solvable by humans, without any verbal explanations or prior training, but it is completely unapproachable by any AI technique we've tried so far. That's a big flashing sign that there's something going on there, that we're in need of new ideas.
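
To give a sense of the format, an ARC-style task can be sketched as a few demonstration grids plus a test input the solver must complete. The grids and the mirror-the-rows rule below are invented for illustration, though the train/test layout follows the published dataset:

```python
# Sketch of an ARC-style task: small integer grids (colors), a few
# demonstration pairs, and a test input. Values invented for illustration.
task = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[0, 3], [0, 0]], "output": [[3, 0], [0, 0]]},
    ],
    "test": [{"input": [[5, 0], [0, 6]]}],
}

# A human infers the rule (mirror each row) from two demonstrations.
# A candidate program for this one task:
def solve(grid):
    return [row[::-1] for row in grid]

assert all(solve(pair["input"]) == pair["output"] for pair in task["train"])
print(solve(task["test"][0]["input"]))  # [[0, 5], [6, 0]]
```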

Do you think the AI world can continue to progress by just throwing more computing power at problems? Some have argued that, historically, this has been the most successful approach to improving performance, while others have suggested that we're soon going to see diminishing returns if we just follow this path.

This is absolutely true if you're working on a specific task. Throwing more training data and compute power at a vertical task will increase performance on that task. But it will gain you about zero incremental understanding of how to achieve generality in artificial intelligence.

If you have a sufficiently large deep learning model, and you train it on a dense sampling of the input-cross-output space for a task, then it will learn to solve the task, whatever that may be: Dota, StarCraft, you name it. It's tremendously valuable. It has almost infinite applications in machine perception problems. The only problem here is that the amount of data you need is a combinatorial function of task complexity, so even slightly complex tasks can become prohibitively expensive.

Take self-driving cars, for instance. Millions upon millions of training situations aren't sufficient for an end-to-end deep learning model to learn to safely drive a car. Which is why, first of all, L5 self-driving isn't quite there yet. And second, the most advanced self-driving systems are primarily symbolic models that use deep learning to interface these manually engineered models with sensor data. If deep learning could generalize, we'd have had L5 self-driving in 2016, and it would have taken the form of a big neural network.

Lastly, given you're talking about constraints for current AI systems, it seems worth asking about the idea of superintelligence: the fear that an extremely powerful AI could cause extreme harm to humanity in the near future. Do you think such fears are legitimate?

No, I don't believe the superintelligence narrative to be well-founded. We have never created an autonomous intelligent system. There is absolutely no sign that we will be able to create one in the foreseeable future. (This isn't where current AI progress is headed.) And we have absolutely no way to speculate what its characteristics may be if we do end up creating one in the far future. To use an analogy, it's a bit like asking in the year 1600: "Ballistics has been progressing pretty fast! So, what if we had a cannon that could wipe out an entire city? How do we make sure it would only kill the bad guys?" It's a rather ill-formed question, and debating it in the absence of any knowledge about the system we're talking about amounts, at best, to a philosophical argument.

One thing about these superintelligence fears is that they mask the fact that AI has the potential to be pretty dangerous today. We don't need superintelligence in order for certain AI applications to represent a danger. I've written about the use of AI to implement algorithmic propaganda systems. Others have written about algorithmic bias, the use of AI in weapons systems, or about AI as a tool of totalitarian control.

There's a story about the siege of Constantinople in 1453. While the city was fighting off the Ottoman army, its scholars and rulers were debating what the sex of angels might be. Well, the more energy and attention we spend discussing the sex of angels or the value alignment of hypothetical superintelligent AIs, the less we have for dealing with the real and pressing issues that AI technology poses today. There's a well-known tech leader who likes to depict superintelligent AI as an existential threat to humanity. Well, while these ideas are grabbing headlines, you're not discussing the ethical questions raised by the deployment of insufficiently accurate self-driving systems on our roads that cause crashes and loss of life.

If one accepts these criticisms (that there is not currently a technical grounding for these fears), why do you think the superintelligence narrative is popular?

Ultimately, I think it's a good story, and people are attracted to good stories. It's not a coincidence that it resembles eschatological religious stories, because religious stories have evolved and been selected over time to powerfully resonate with people and to spread effectively. For the very same reason, you also find this narrative in science fiction movies and novels. The reason why it's used in fiction, the reason why it resembles religious narratives, and the reason why it has been catching on as a way to understand where AI is headed are all the same: it's a good story. And people need stories to make sense of the world. There's far more demand for such stories than demand for understanding the nature of intelligence or understanding what drives technological progress.

In the 2020s, human-level A.I. will arrive, and finally ace the Turing test – Inverse

Posted: at 6:51 am

The past decade has seen the rise of remarkably human personal assistants, increasing automation in transportation and industrial environments, and even the alleged passing of Alan Turing's famous robot consciousness test. Such innovations have taken artificial intelligence out of labs and into our hands.

A.I. programs have become painters, drivers, doctors' assistants, and even friends. But with these new benefits have also come increasing dangers. The decade now ending saw the first, and likely not the last, death caused by a self-driving car.

This is #20 on Inverse's 20 predictions for the 2020s.

And as we head toward another decade of machine learning and robotics research, questions surrounding the moral programming of A.I. and the limits of its autonomy will no longer be just thought experiments but time-sensitive problems.

One such area to keep an eye on going forward into a new decade will be partially defined by this question: what kind of legal status will A.I. be granted as its capabilities and intelligence continue to scale closer to those of humans? This is a conversation the archipelago nation of Malta started in 2018, when its leaders proposed that it should prepare to grant or deny citizenship to A.I.s just as they would humans.

The logic behind this is that A.I.s of the future could have just as much agency and potential to cause disruption as any other non-robotic being. Francois Piccione, policy advisor for the Maltese government, told Inverse in 2019 that not taking such measures would be irresponsible.

"Artificial Intelligence is being seen in many quarters as the most transformative technology since the invention of electricity," said Piccione. "To realize that such a revolution is taking place and not do one's best to prepare for it would be irresponsible."

While the 2020s might not see fully fledged citizenship for A.I.s, Inverse predicts that there will be increasing legal scrutiny in coming years over who is legally responsible for the actions of A.I., whether it be the owners or the companies designing them. Instead of citizenship or visas for A.I., this could lead to further restrictions on the humans who travel with them and the ways in which A.I. can be used in different settings.

Another critical point of increasing scrutiny in the coming years will be how to ensure A.I. programmers continue to think critically about the algorithms they design.

This past decade saw racism and death result from poorly designed algorithms and even poorer introspection. Inverse predicts that as A.I. continues to scale, labs will increasingly call upon outside experts, such as ethicists and moral psychologists, to make sure these human-like machines are not doomed to repeat our same dehumanizing mistakes.

As 2019 draws to a close, Inverse is looking to the future. These are our 20 predictions for science and technology for the 2020s. Some are terrifying, some are fascinating, and others we can barely wait for. This has been #20. Read a related story here.

The Crazy Government Research Projects You Might’ve Missed in 2019 – Nextgov

Posted: at 6:51 am

If you imagine the U.S. research community as a family party, the Defense Advanced Research Projects Agency is your crazy uncle ranting at the end of the table, and the government's other ARPA organizations are the in-laws who are buying into his theories.

DARPA and its counterparts, the Intelligence Advanced Research Projects Activity and the Advanced Research Projects Agency-Energy, are responsible for conducting some of the most innovative and bizarre projects in the government's $140 billion research portfolio. DARPA's past research has laid the groundwork for the internet, GPS and other technologies we take for granted today, and though the other organizations are relatively new, they're similarly charged with pushing today's tech to new heights.

That means the futuristic-sounding projects the agencies are working on today could give us a sneak peek of where the tech industry is headed in the years ahead.

And based on the organizations' 2019 research efforts, the future looks pretty wild.

DARPA Pushes the Limits of AI

Last year, DARPA announced it would invest some $2 billion in bringing about the so-called third wave of artificial intelligence: systems capable of reasoning and human-like communication. And those efforts are already well underway.

In March, the agency started exploring ways to improve how AI systems like Siri and Alexa teach themselves language. Instead of crunching gargantuan datasets to learn the ins and outs of a language, researchers essentially want the tech to teach itself by observing the world, just like human babies do. Through the program, AI systems would learn to associate visual cues (photos, videos and live demonstrations) with audible sounds. Ultimately, the goal is to build tech that actually understands the meaning of what it's saying.

DARPA also wants AI tools to assess their own expertise and let their operators know when they don't know something. The Competency-Aware Machine Learning program, launched in February, looks to enable AI systems to model their own behavior, evaluate past mistakes and apply that information to future decisions. If the tech thinks its results could be inaccurate, it would let users know. Such self-awareness will be critical as the military leans on AI systems for increasingly consequential tasks.
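
The program's internals aren't public in detail, but the basic competency-aware idea can be sketched with a plain confidence threshold. The model output, cutoff, and function name below are illustrative assumptions, not DARPA's actual system:

```python
import numpy as np

# Minimal sketch of competency-aware prediction: the system abstains and
# defers to a human operator when its own confidence is too low.

def predict_or_abstain(probabilities, threshold=0.8):
    best = int(np.argmax(probabilities))
    confidence = float(probabilities[best])
    if confidence < threshold:
        return None, confidence  # "I don't know" - defer to the operator
    return best, confidence

label, conf = predict_or_abstain(np.array([0.45, 0.35, 0.20]))
print(label, conf)  # None 0.45 -> the system reports it is not competent here
```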

One of the biggest barriers to building AI systems is the amount of computing power required to run them, but DARPA is looking to the insect world to lower that barrier to entry. Through the MicroBRAIN program, the agency is examining the brains of very small flying insects for inspiration for more energy-efficient AI designs.

Beyond improving the tech itself, DARPA is also looking to AI to tackle some of the most pressing problems facing the government today. The agency is funding research to teach computers to automatically detect errors in deepfakes and other manipulated media. Officials are also investing in AI that could help design more secure weapons systems, vehicles and other network-connected platforms.

Outside of artificial intelligence, DARPA is also working to develop a wide range of other capabilities that sound like they came straight from a sci-fi movie, including but not limited to satellite-repair robots, automated underground mapping technologies and computers powered by biological processes.

IARPA Wants Eyes in the Sky

Today, the intelligence community consumes an immeasurable amount of information, so much that it's virtually impossible for analysts to make sense of it in any reasonable amount of time. In this world of data abundance, intelligence officials see AI as a way to stay one step ahead of adversaries, and the tech is a major priority for their bleeding-edge research shop.

AI has numerous applications across the national security world, and in 2019, improving surveillance was a major goal.

In April, the Intelligence Advanced Research Projects Activity announced it was pursuing AI that could stitch together and analyze satellite images and footage collected from planes, drones and other aircraft. The program, called Space-based Machine Automated Recognition Technique, essentially looks to use AI to monitor all human activity around the globe in real time.

The tech would automatically detect and monitor major construction projects and other anthropogenic activity around the planet, merging data from multiple sources and keeping tabs on how sites change over time. Though their scopes somewhat differ, SMART harkens back to the Air Force's controversial Project Maven program, which sought to use artificial intelligence to automatically analyze video footage collected by drones.

IARPA is also looking to use artificial intelligence to better monitor human activity closer to the ground. In May, the agency started recruiting teams to help train algorithms to follow people as they move through video surveillance networks. According to the solicitation, the AI would piece together footage picked up by security cameras scattered around a particular space, letting agencies track individuals' movements in crowded spaces.

Combine this capability with long-range biometric identification systems (a technology IARPA also began exploring in 2019) and you could have machines naming people and tracking their movements without spy agencies needing to lift a finger.

The Funding Fight at ARPA-E

The Energy Department's bleeding-edge research office, ARPA-E, is also supporting a wide array of efforts to advance the nation's energy technologies. This year, the organization launched programs to improve carbon-capture systems, reduce the cost of nuclear energy and increase the efficiency of the power grid, among other things.

But despite those efforts, the Trump administration has repeatedly tried to shut down the office.

In its budget request for fiscal 2020, the White House proposed reducing ARPA-E's funding by 178%, giving the agency a final budget of negative $287 million. The administration similarly defunded the office in its 2019 budget request.

While it's unclear exactly how much funding ARPA-E will receive next year, it's safe to say its budget will go up. The Senate opted to increase the agency's funding by $62 million in its 2020 appropriations, and the House version of the legislation included a $59 million increase. In October, the House Science, Space and Technology Committee advanced a bill that would provide the agency with nearly $2.9 billion over the course of five years, though the bill has yet to receive a full vote in the chamber.

Can AI restore our humanity? – Gigabit Magazine – Technology News, Magazine and Website

Posted: at 6:51 am

Sudheesh Nair, CEO of ThoughtSpot, earnestly campaigns for artificial intelligence as a panacea for restoring our humanity - by making us able to do more work.

Whether AI is helping a commuter navigate through a city or supporting a doctor's medical diagnosis, it relieves humans from mind-numbing, repetitive and error-prone tasks. This scares some business leaders, who worry AI could make people lazy, feckless and over-dependent. The more utopian-minded - me included - see AI improving society and business while individuals get to enjoy happier, more fulfilling lives.

Fortunately, this need not launch yet another polarised debate. The more we apply AI to real-world problems, the more glaringly clear it becomes that machine and human intelligence must work together to produce the right outcomes. Humans teach AI to understand context and patterns so that algorithms produce fair, ethical decisions. Equally, AI's blind rationality helps humans overcome destructive failings like confirmation bias.

Crucially, as humans and machines are increasingly able to converse through friendlier interfaces, decision-making improves and consumers are better served. Through this process, AI is already ending what I call the "tyranny of averages" - where people with similar preferences, habits, or even medical symptoms get lumped into broad categories and receive identical service or treatment.

Fewer hours, higher productivity

In business, AI is taking over mundane tasks like expense reporting and timesheets, along with complex data analysis. This means people can devote time to charity work, spend time with their kids, exercise more or just kick back. In their jobs, they get to do all those human things that often wind up on the back burner, like mentoring others and celebrating success. For this reason alone, I see AI as an undeniable force for good.

One strong indicator that AI's benefits are kicking in is that some companies are successfully moving to a four-day workweek. Companies like the American productivity software firm Basecamp and New Zealand's Perpetual Guardian are recent poster children for working shorter hours while raising productivity. This has profound implications for countries like Japan, whose economy is among the least productive despite its people notoriously working the longest hours.

However, AI is about more than having to work fewer hours. Having to multitask less means less stress over the possibility of dropping the ball. Workers can focus more on tasks that contribute positively and visibly to their companies' success. That's why more employers are starting to place greater value on business outcomes and less on presenteeism.

AI and transparency go hand in hand

But we mustn't get complacent or apply AI uniformly. Even though many studies say that AI will create many more jobs than it replaces, we have to manage its impact differently depending on the type of work it affects. Manual labourers like factory workers, farmers and truck drivers understandably fear the march of technology. In mass-market industries, technology has often (but not always) completely replaced the clearly defined tasks that these workers carry out repeatedly during their shifts. Employers and governments must work together to communicate honestly with workers about the trajectory of threatened jobs and help them adapt and develop new skills for the future.

Overcoming the tyranny of averages in service

An area where we risk automating inappropriately is the one that includes entry- and mid-level customer service professions like call centre workers, bank managers, and social care providers. Most will agree that automating some formerly personal transactions, like withdrawing cash, turned out pretty well. However, higher-involvement decisions like buying home insurance or selecting the best credit card usually benefit from having a sympathetic human guide the customer through to the right decision.

Surprisingly, AI may be able to help re-humanise customer service in these areas threatened by over- or inappropriate automation. Figuring out the right product or service to offer someone with complex needs at the right time, price and place is notoriously hard. Whether it's to give a medical diagnosis or recommend pet insurance, AI can give service workers the data they need to provide highly personalised information and expert advice.

There are no simple formulae to apply to the labour market as technology advances and affects all of our lives. While it's becoming clear that AI's benefits to knowledge workers are almost universally positive, others must get the support to adapt and reskill so they are not left behind.

For consumers, however, AI means being freed from the tyranny of averages that makes so many transactions, particularly with large, faceless organisations, so soul-destroying. For this and the other reasons I mentioned, I truly believe AI will indeed help restore our humanity.

Artificial Intelligence (AI) in Retail Market worth $15.3 billion by 2025 – Exclusive Report by Meticulous Research – GlobeNewswire

Posted: November 19, 2019 at 11:44 am

London, Nov. 19, 2019 (GLOBE NEWSWIRE) -- According to a new market research report, "Artificial Intelligence (AI) in Retail Market by Product (Solution and Services), Application (Predictive Merchandizing, Programmatic Advertising, Market Forecasting, In-store Visual Monitoring and Surveillance, Location-based Marketing), Technology (Machine Learning, Natural Language Processing), Deployment (Cloud, On-premises) and Geography - Global Forecasts to 2025," published by Meticulous Research, the global AI in retail market is expected to grow at a CAGR of 35.9% from 2019 to reach $15.3 billion by 2025.

Request Sample Report: https://www.meticulousresearch.com/request-sample-report/cp_id=4979

Over the past few years, digital technologies have been embedded into core value-generation processes in society and business, creating innovation. The growing number of millennials, with their inclination toward digital-first approaches, is putting organizations under constant pressure to innovate, making artificial intelligence (AI) a top priority for retail businesses. Various well-established retailers are struggling with increasing costs, dissatisfied customers, declining sales and upstart competition. Implementing artificial intelligence in retail creates new opportunities and capabilities for retailers by opening new possibilities, accelerating processes, and making organizations adaptable to future changes. Realizing this, retail companies are investing billions to reap the benefits of AI technology and improve the profitability of their businesses. Strong participation by industry players in leveraging AI technology is reshaping the technology landscape of the retail industry.

The overall artificial intelligence in retail market is benefiting from the consistent penetration of smartphones and connected devices, advancements in big data for the retail sector, rapid adoption of technological advances across the retail chain, and increasing adoption of multi-channel or omnichannel retailing strategies. Furthermore, retailers' efforts to gain access to more customers, enhance business visibility, and build customer loyalty are also playing a vital role in driving adoption of AI technology in the retail industry. The increasing adoption of AI-powered voice-enabled devices, owing to their benefits in the form of enhanced user experience and improved productivity, is also contributing to the market growth.

The global artificial intelligence market in retail is segmented by product offering, application, learning technology, type, deployment type, and geography. Based on product offering, the global AI in retail market is segmented into solutions and services. The solution segment is categorized into chatbots, customer behavior tracking, customer relationship management (CRM), inventory management, price optimization, recommendation engines, supply chain management, and visual search. The service segment is further segmented into managed services and professional services. Recommendation engines dominate the AI solutions market for the retail industry and are expected to register strong growth over the forecast period. Their enhanced user experiences, better customer engagement, and precise, personalized product recommendations are helping recommendation engines maintain their growth in the global artificial intelligence in retail market.

Based on application, the overall AI in retail market is segmented into predictive merchandising, programmatic advertising, market forecasting, in-store visual monitoring & surveillance, and location-based marketing. In-store visual monitoring and surveillance applications are spearheading the growth of the AI market in the retail industry. This segment is expected to register steady growth over the coming years and continue its dominance during the forecast period. Its benefits in the form of better inventory tracking, customer traffic monitoring, enhanced safety protocols, prevention of shoplifting, and reduction of shrink caused by employee theft, vendor fraud, and administrative errors are contributing to the growth of this segment.

Geographically, the global artificial intelligence in retail market is segmented into five major regions: North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa. The global AI in retail market is analyzed methodically with respect to the major countries in each region, using a bottom-up approach to arrive at the most precise market estimation. At present, North America holds a dominant position in the global AI in retail market. The region has a high technology adoption rate, a strong presence of key players and start-ups, and high internet penetration. Consequently, North America is expected to retain its dominance throughout the forecast period. However, factors such as rapid growth in consumer spending, a young population, government initiatives toward digitization, developing internet and connectivity infrastructure, and growing adoption of AI-based solutions and services among retailers are helping the Asia Pacific region register the fastest growth in the global artificial intelligence in retail market.

The global artificial intelligence (AI) in retail market is consolidated and dominated by a few major players, namely Amazon.com, Inc. (U.S.), Google LLC (U.S.), IBM Corporation (U.S.), Intel Corporation (U.S.), Microsoft Corporation (U.S.), Nvidia Corporation (U.S.), Oracle Corporation (U.S.), SAP SE (Germany), Salesforce.com, Inc. (U.S.), and BloomReach, Inc. (U.S.), along with several local and regional players.

Browse key industry insights spread across 216 pages with 205 market data tables & 25 figures & charts from the report: https://www.meticulousresearch.com/product/artificial-intelligence-in-retail-market-4979/

Scope of the AI in Retail Market Report:

The report segments the AI in retail market by Product, Application, Technology, End User, Deployment Mode, and Geography.

Download Free Sample Report Now @ https://www.meticulousresearch.com/download-sample-report/cp_id=4979

Related Reports:

Automotive Artificial Intelligence (AI) Market by Offering (Hardware, Software), Technology (Machine Learning, Deep Learning, Computer Vision, Context Awareness, Natural Language Processing), Process (Signal Recognition, Image Recognition, Voice Recognition, Data Mining), Drive (Autonomous Drive, Semi-autonomous Drive), and Region - Global Forecast to 2025, read more: https://www.meticulousresearch.com/product/automotive-artificial-intelligence-market-4996/

Artificial Intelligence (AI) in Manufacturing Market by Offering (Hardware, Software, and Services), End-use Industry (Semiconductors and Electronics, Energy and Power, Pharmaceuticals, Chemical, Medical Devices, Automobile, Heavy Metal and Machine Manufacturing, Food and Beverages, Others), Technology (Machine Learning, NLP, Context-Aware Computing, and Computer Vision), Application (Predictive Maintenance, Material Movement, Production Planning, Field Services, Quality Management, Cybersecurity, Industrial Robotics, and Reclamation), and Region - Global Forecast to 2025, read more: https://www.meticulousresearch.com/product/artificial-intelligence-in-manufacturing-market-4983/

About Meticulous Research

The name of our company defines our services, strengths, and values. Since our inception, we have strived to research, analyze and present critical market data with great attention to detail.

Meticulous Research was founded in 2010 and incorporated as Meticulous Market Research Pvt. Ltd. in 2013 as a private limited company under the Companies Act, 1956. Since its incorporation, with the help of its unique research methodologies, the company has become the leading provider of premium market intelligence in North America, Europe, Asia-Pacific, Latin America, and Middle East & Africa regions.

With meticulous primary and secondary research techniques, we have built strong capabilities in data collection, interpretation, and analysis, including qualitative and quantitative research with the finest team of analysts. We design our meticulously analyzed, intelligent and value-driven syndicated market research reports, custom studies, quick-turnaround research, and consulting solutions to address the business challenges of sustainable growth.

Colorado at the forefront of AI and what it means for jobs of the future – The Denver Channel

Posted: at 11:44 am

LITTLETON, Colo. -- A group of MIT researchers visited Lockheed Martin this month for a chance to talk about the future of artificial intelligence and automation.

Liz Reynolds is the executive director of the MIT Task Force on the Work of the Future and says her job is to focus on the relationship between new technologies and how they will affect jobs.

"Colorado is at the forefront of thinking about these things," Reynolds said. "All jobs will be affected by this technology."

Earlier this year, U.S. Sen. Michael Bennet, D-Colo., created an artificial intelligence strategy group to take a closer look at how AI is being used in the state and how that will change in the future.

"We need a national strategy on AI that galvanizes innovation, plans for the changes to our workforce, and is clear-eyed about the challenges ahead. And while we're seeing progress, workers and employers can't wait on Washington," said Sen. Bennet in a statement. "Colorado is well-positioned to shape those efforts, which is why we've made it a priority to bring together Colorado leaders in education, business, nonprofits, labor, and government to think through how we can best support and train workers across Colorado so they are better prepared for a changing economy."

MIT recently released a 60-page report detailing some of the possibilities and challenges with AI and automation.

One of the major challenges the group is considering is how the technology will affect vulnerable workers, particularly people who do not have a four-year degree.

The MIT team is looking for ways to train those workers to better prepare them for the changes.

"We're not trying to replace a human; that's not something you're ever going to do with eldercare, for example. You're going to be looking for ways to use this technology to help," Reynolds said.

Despite recent advances in AI, Reynolds believes the changes to the workforce will happen over a matter of decades, not years.

"We think it's going to be a slower process, and it's going to give us time to make the changes that we need institutionally," she said.

Beyond that, projections suggest that, with an aging workforce, there will be a scarcity of people to employ in the future, and technology can help fill some of those gaps.

The bigger question is how to ensure that workers can get a quality job that results in economic security for their families.

"I think there's really an opportunity for us to see technology not as a threat but really as a tool," Reynolds said. "If we can use the right policies and institutions to support workers in this transition, then we could really be working toward something that works for everyone."

Lockheed Martin has been using artificial intelligence and automation in its space program for years. The company's scientists rely on automation to manage and operate spacecraft on missions.

However, the technology is also being applied closer to home. The AI Lockheed Martin has created is already being applied to people's day-to-day lives, from GPS navigation to banking. Now, the company is looking for more ways to make use of it.

"Even though it's been around for some time, we want to think about how we can use it in different, emerging ways and apply it to other parts of our business as well," said Whitley Poyser, the business transformation acting director for Lockheed Martin Space.

One area in particular where Lockheed Martin is looking to apply the technology is manufacturing, not only to streamline processes but also to use the data its machines are already collecting to predict potential issues and better prepare for them.

Poyser understands that there are some fears about this technology taking over jobs, but she doesn't believe that's the case.

"It's not taking the job away; it's just allowing our employees to think differently and think about elevating their skills and their current jobs," Poyser said. "It's actually less of a fear to us and more of an opportunity."

The true potential of artificial intelligence is only beginning to be unleashed for companies like Lockheed Martin. Reynolds hopes that mapping out the possibilities and challenges now will help the country better prepare for the changes in the decades to come.

How To Get Your Résumé Past The Artificial Intelligence Gatekeepers – Forbes

Posted: at 11:44 am

By Jeff Mills, Director, Solution Marketing at SAP SuccessFactors

It's no longer a secret that getting past the robot résumé readers to a human, let alone landing an interview, can seem like trying to get in to see the Wizard of Oz. As the résumés of highly qualified applicants are rejected by the initial automated screening, job seekers suddenly find themselves having to learn résumé submission optimization to please the algorithms and beat the bots for a meeting with the Wizard.

Many enterprise businesses use Artificial Intelligence (AI) and machine learning tools to screen résumés when recruiting and hiring new employees. Even small to midsize companies that use recruiting services are subject to whatever algorithm or search-driven automated résumé screening those services utilize.

Why don't human beings read résumés anymore? Well, they do, but usually later in the process, after the initial shortlist by the bots. Unfortunately, desirable soft skills and unquantifiable experience can go unnoticed by the best-trained algorithms. So far, the only solution is human interaction.

Despite the view from outside the organization, HR has good reason for using automated processes to screen résumés. To efficiently manage the hundreds or even thousands of applications submitted for one position alone, companies have adopted automated AI screening tools not only to save time and human effort but also to find qualified and desirable candidates before they move on or someone else gets to them first.

Nobody's ever seen the Great Oz!

The wealth of impressive time-saving and turnover-reduction metrics equates to success and big ROI for organizations that automate recruiting and hiring processes. The tales of headache and frustration mostly go untold, belonging to the many thousands of qualified applicants whose résumés somehow failed to tickle the algorithm just right.

This trend is changing, however, as the bias built into AI and machine learning algorithms, unintentionally or otherwise, becomes more glaringly apparent and undeniable. Sure, any new technology will have its early adopters and zealous promoters and apologists, as well as its naysayers and skeptics. But when a technology shows promise to change industry and increase profit, criticism can be drowned out and ignored.

The problem of bias in AI is not a new concern. For several years, scientists and engineers have warned that because AI is created and developed by humans, the likelihood of bias finding its way into the program code is high, if not certain. And the time to think about that and address it as much as possible is during the design, development, and testing process. Blind spots are inevitable. Once buy-in is achieved and business ecosystems integrate that technology, the recursive and reciprocal influences of technology, commerce, and society can make changing course slow and/or costly.

Consider the recent trouble Amazon found itself in for some of its hiring practices when it was determined that its AI recruiting tool was biased against women. AI in itself is not biased; it performs only as it is instructed and adapts to new information. Rather, the bias comes from the way human beings program and develop the way machines learn and execute commands. And if the outputs of the AI are taken at face value and never corrected by ongoing human interaction, the system can never adapt.

Bias enters in a few ways. One source is rooted in the data sets used to train algorithms for screening candidates. Other sources of bias enter when certain criteria are privileged, such as growing up in a certain area, attending a top university, or falling within a preferred age range. By using the data for existing employees as a model for qualified candidates, the screening process can become a kind of feedback loop of biased criteria.

A few methods and practices can help correct or avoid this problem. One is to use broad swaths of data, including data from outside your company and even your industry. Also, train algorithms on a continual basis, incorporating new data and monitoring algorithm function and results. Set benchmarks for measuring data quality, and have humans screen résumés as well. Active management of automated recruiting and screening solutions can go a long way in minimizing bias as well as reducing the number of qualified candidates who get their résumés rejected.
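
As a simplified illustration of that kind of monitoring, the sketch below computes selection rates by applicant group and flags a screener that fails the common four-fifths (80%) disparate-impact rule of thumb. The groups, decisions, and threshold are fabricated for the example:

```python
# Simplified bias audit for an automated resume screener: compare
# selection rates across applicant groups using the four-fifths rule.

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    totals, passed = {}, {}
    for group, selected in records:
        totals[group] = totals.get(group, 0) + 1
        passed[group] = passed.get(group, 0) + int(selected)
    return {g: passed[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
impact_ratio = min(rates.values()) / max(rates.values())
print(rates, round(impact_ratio, 2))
if impact_ratio < 0.8:  # four-fifths rule of thumb
    print("Potential adverse impact: retrain or rebalance before deploying.")
```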

Bell out of order, please knock

As mentioned earlier, change takes time once these processes are in place and embedded. Until there is widespread acceptance that problems exist, and steps are taken to address them, the best job seekers can do is adapt.

With all of the possible ways that programmers' biases influence the bots screening résumés, what can people applying for jobs do to improve their chances of getting past the AI gatekeepers?

The good news is that these moves will not only help eliminate false negatives and keep your résumé out of the abyss, but they are also likely to make things easier for the human beings it reaches.

Well, why didn't you say so? That's a horse of a different color!

So, what are they looking for? How do you beat the bots?

In the big picture, AI is still young, and we are working out the kinks and bugs not only at a basic code and function level, but also on the human level. We are still learning how to navigate and account for our roles and responsibilities in the overall ecosystem of human-computer interaction.

The bottom line is that AI, machine learning, and automation can either eliminate bias or reinforce it. That separation may never be pure, but it's an ideal that is not only worth striving for, it is absolutely necessary to work toward. The impact and consequences of our choices today will leave long-lasting effects on every area of human life.

And the bright side is that we're already beginning to see how those theoretical concerns can play out in the real world, and we have an opportunity to improve a life-changing technological development whose reach and impact we can still only dimly imagine. In the meantime, job seekers looking to beat the bots are not entirely powerless; they can do what human beings have done well for ages: adapt.

Public fears about artificial intelligence are ‘not the fault of A.I.’ itself, tech exec says – CNBC

Posted: at 11:44 am

Rong Luo, CFO of TAL Education Group, Doranda Doo, SVP of iFLYTEK Co. Ltd. and Song Zhang, Managing Director of Thoughtworks China on Day 2 of CNBC East Tech West at LN Garden Hotel Nansha Guangzhou on November 19, 2019 in Nansha, Guangzhou, China.

The technology industry and policymakers need to address public concerns about artificial intelligence (AI) which are "not the fault of AI" itself, a tech executive said Tuesday.

"It is the fault of developers, so we need to solve this problem," said Song Zhang, managing director for China at global software consultancy, ThoughtWorks.

Consumer worries relating to AI include concerns about personal privacy and how the systems may get out of control, said Zhang during a "Future of AI" panel discussion at CNBC's East Tech West conference in the Nansha district of Guangzhou, China.

It is the duty of the tech industry and policymakers to focus on, discuss and solve such problems, said Zhang in Mandarin, according to a CNBC translation. Indeed, while consumers are curious about AI when they first come into contact with the technology, their mindset changes over time, said Rong Luo, chief financial officer of TAL Education Group.

"The first phase is everyone finds it refreshing, they like something new, they want to give it a try," said Luo.

But "in phase two, people start to care a lot about their privacy, their security," Luo added.

And finally, after "one to two years of adjustments, we (have) now entered phase three; we have a more objective view of the technology. We do not put (it) on the pedestal, nor do we demonize it," said Luo.

Panelists at the session acknowledged the potential of AI in various fields such as language translation and education.

"Technology is here to assist them, empower them. We want to free them from those repetitive and meaningless work (tasks) so they have more energy and time for other more creative jobs," said Doranda Doo, senior vice president of Chinese artificial intelligence firm iFlytek.

"So I think what's the most powerful is not AI itself, but people who are empowered by AI," Doo said.

Link:

Public fears about artificial intelligence are 'not the fault of A.I.' itself, tech exec says - CNBC

Posted in Artificial Intelligence | Comments Off on Public fears about artificial intelligence are ‘not the fault of A.I.’ itself, tech exec says – CNBC

4 Reasons to Use Artificial Intelligence in Your Next Embedded Design – DesignNews

Posted: at 11:44 am

For many, just mentioning artificial intelligence brings up mental images of sentient robots at war with mankind and man's struggle to avoid the endangered species list. While this may one day be a real scenario if (perhaps a big if?) mankind ever creates an artificial general intelligence (AGI), the more pressing matter is whether embedded software developers should be embracing or fearing the use of artificial intelligence in their systems. Here are four reasons why you may want to include machine learning in your next project.

Reason #1 Marketing Buzz

From an engineering perspective, including a technology or methodology in a design simply because it has marketing buzz is something that every engineer should fight. The fact, though, is that if there is buzz around something, odds are it will in the end help sell the product. Technology marketing seems to come in cycles, but the underlying themes driving those cycles do, at the end of the day, tend to turn out to be real.

Artificial intelligence has progressed through the years, with deep learning on the way. (Image source: Oracle)

Machine learning has a ton of buzz around it right now. I'm finding this year that at industry events, machine learning typically makes up at least 25% of the talks. I've had several clients tell me that they need machine learning in their product, and when I ask about their use case and why they need it, the answer is simply that they need it. I've heard the same story from dozens of colleagues; the push for machine learning is relentless right now. The driver is not necessarily engineering, but simply leveraging industry buzz to sell product.

Reason #2 The Hardware Can Support It

It's truly amazing how much microcontrollers and application processors have changed in just the last few years. Microcontrollers, which I have always considered to be resource-constrained devices, now support megabytes of flash and RAM, carry on-board cache, and reach system clock rates of 1 GHz and beyond! These little controllers even support DSP instructions, which means they can efficiently execute machine learning inferences.

With the amount of computing power available on these processors, it may not require much additional cost on the BOM to be able to support machine learning. If there's no added cost, and the marketing department is pushing for it, then leveraging machine learning might make sense simply because the hardware can support it!
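To make that concrete, here is a minimal, hedged sketch of loading a trained model and running one inference with the TensorFlow Lite Interpreter in Python. The file name model.tflite is an assumption (any pre-trained, converted model would do), and on an actual microcontroller you would use the equivalent C++ TensorFlow Lite for Microcontrollers runtime rather than Python; the load, allocate, and invoke flow is the same idea.

# Minimal sketch of on-device-style inference with the TensorFlow Lite
# Interpreter. "model.tflite" is an assumed, pre-trained model file; on a
# real microcontroller the C++ TFLite Micro runtime replaces this Python API.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one input tensor of exactly the shape and dtype the model expects.
sample = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]))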

Reason #3 It May Simplify Development

Machine learning has risen on the buzz charts for a reason. It has become a nearly indispensable tool for the IoT and the cloud, and it can dramatically simplify software development. For example, have you ever tried to code up an application that can recognize gestures or handwriting, or classify objects? These are really simple problems for a human brain to solve, but extremely difficult to write a program for. In certain problem domains, such as voice recognition, image classification, and predictive maintenance, machine learning can dramatically simplify and speed up the development process.

With an ever-expanding IoT and more data than one could ever hope for, it's becoming far easier to classify large datasets and then train a model to use that information to generate the desired outcome for the system. In the past, developers relied on configuration values or acceptable operating thresholds that were constantly checked at runtime; arriving at those values involved lots of testing and a fair amount of guessing. With machine learning, much of that can be avoided by collecting the data, developing a model, and then deploying the inference on an embedded system, as the sketch below illustrates.
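Here is a hedged sketch of that data, model, inference flow, contrasted with the old hand-tuned threshold approach. The accelerometer-style windows are synthetic, and the feature layout, labels, and model choice are assumptions for illustration only; a real project would train on logged sensor data and likely deploy a much smaller model on-device.

# Hedged sketch of the "collect data -> train model -> deploy inference"
# flow, using synthetic accelerometer-style windows. Feature layout,
# labels, and model choice are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Old approach: a hand-tuned threshold checked constantly at runtime.
def is_shake_threshold(window):
    return window.std() > 0.8  # magic number found by testing and guessing

# ML approach: label example windows and let a model learn the boundary.
idle = rng.normal(0.0, 0.2, size=(200, 32))   # low-variance "idle" windows
shake = rng.normal(0.0, 1.5, size=(200, 32))  # high-variance "shake" windows
X = np.vstack([idle, shake])
y = np.array([0] * 200 + [1] * 200)

model = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
new_window = rng.normal(0.0, 1.5, size=(1, 32))
print(model.predict(new_window))  # likely [1], i.e. "shake"

The threshold function works until the conditions it was tuned for change; the trained model can be retrained on new data without rewriting the logic.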

Reason #4 To Expand Your Solution Toolbox

One aspect of engineering that I absolutely love is that the tools and technologies we use to solve problems and develop products are always changing. Just look at how you developed an embedded system one, three, and five years ago! While some of your approaches have undoubtedly stayed constant, there should have been considerable improvements and additions to your processes that have made you more efficient and changed the way you solve problems.

Machine learning is yet another tool to add to the toolbox, one that will in time prove indispensable for developing embedded systems. However, that tool will never be sharpened if developers don't start to learn about, evaluate, and use it. While it may not make sense to deploy a machine learning solution for a product today or even next year, understanding how it applies to your product and customers, along with its advantages and disadvantages, can help ensure that when the technology is more mature, it will be easier to leverage for product development.

Real Value Will Follow the Marketing Buzz

There are a lot of reasons to start using machine learning in your next design cycle. While I believe marketing buzz is one of the biggest driving forces for tinyML right now, I also believe that real applications are not far behind, and that developers need to start experimenting today if they are going to be successful tomorrow. While machine learning for embedded systems holds great promise, there are several issues that should strike a little bit of fear into the cautious developer.

These are concerns for a later time, though, once we've mastered just getting our new tool to work the way that we expect it to.

Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost, and time to market. He has published more than 200 articles on embedded software development techniques, is a sought-after speaker and technical trainer, and holds three degrees, including a Master's of Engineering from the University of Michigan. Feel free to contact him at [emailprotected] or through his website, and sign up for his monthly Embedded Bytes Newsletter.

January 28-30: North America's largest chip, board, and systems event, DesignCon, returns to Silicon Valley for its 25th year! The premier educational conference and technology exhibition, this three-day event brings together the brightest minds across the high-speed communications and semiconductor industries, who are looking to engineer the technology of tomorrow. DesignCon is your rocket to the future. Ready to come aboard? Register to attend!

Follow this link:

4 Reasons to Use Artificial Intelligence in Your Next Embedded Design - DesignNews

Posted in Artificial Intelligence | Comments Off on 4 Reasons to Use Artificial Intelligence in Your Next Embedded Design – DesignNews
