China plans to launch national AI plan - China Daily – Reuters

SHANGHAI - China will launch a series of artificial intelligence (AI) projects and increase efforts to cultivate tech talent as part of a soon-to-be-announced national AI plan, the China Daily said on Friday, citing a senior official.

The country is focusing on AI as it is seen as a tool to boost productivity and empower employees, the paper said.

China will roll out a slew of AI research and development projects, allocate more resources to nurturing talent and increase the use of AI in education, healthcare and security, among other things, Wan Gang, the minister of science and technology, said at a conference in Tianjin.

The plan will soon be released to the public, said Wan.

China will build cooperation with international AI organizations and encourage foreign AI firms to set up R&D centers in the country, he added.

(Reporting By Engen Tham; Editing by Michael Perry)


More here:

China plans to launch national AI plan- China Daily - Reuters

AI Weekly: Welcome to The Machine, VentureBeat's AI site – VentureBeat


VentureBeat readers likely noticed this week that our site looks different. On Thursday, we rolled out a significant design change that includes not just a new look but also a new brand structure that better reflects how we think about our audiences and our editorial mission.

VentureBeat remains the flagship brand devoted to covering transformative technology that matters to business decision makers. Now, our longtime GamesBeat sub-brand has its own homepage of sorts, and definitely its own look. And we've launched a new sub-brand for all of our AI content: it's called The Machine.

By creating two distinct brands under the main VentureBeat brand, we're leaning hard into what we've acknowledged internally for a long time: We're serving more than one community of readers, and those communities don't always overlap. There are readers who care about our AI and transformative tech coverage, and there are others who ardently follow GamesBeat. We want to continue to cultivate those communities through our written content and events. So when we reorganized our site, we created dedicated space for games and AI coverage, respectively, while leaving the homepage as the main feed.

GamesBeat has long been a standout sub-brand under VentureBeat, thanks to the leadership of managing editor Jason Wilson and the hard work of Dean Takahashi, Mike Minotti, and Jeff Grubb. Thus, giving it a dedicated landing page makes logical sense. We want to give our AI coverage the same treatment, which is why we created The Machine.

We chose to take a long and winding path to selecting The Machine as the name for our AI sub-brand. We could have just put our heads together and picked one, but where's the fun in that? If you're going to come up with a name for an AI-focused brand, you should use AI to help you do it. And that's what we did.

First, we went through the necessary exercises to map out a brand: We talked through brand values, created an abstract about its focus and goals, listed the technologies and verticals we wanted to cover, and so on. Then, we humans brainstormed some ideas for names. (None stood out as clear winners.)

Armed with this data, we turned to Hugging Face's free online NLP tools, which require no code: you just put text into the box and let the system do its thing. Essentially, we ended up following these tips to generate name ideas.

There are a few different approaches you can take. You can feed the system 20 names, let's say, and ask it to generate a 21st. You can give it tags and relevant terms (like machine learning, artificial intelligence, computer vision, and so on) and hope that it converts those into something that would be worthy of a name. You can enter a description of what you want (like a paragraph about what the sub-brand is all about) and see if it comes up with something. And you can tweak various parameters, like model size and temperature, to extract different results.
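For the curious, here is a rough sketch of the same prompt-and-sample idea in code, assuming the Hugging Face transformers library and GPT-2 as the generator; the seed names (apart from AIBeat) and the sampling settings are illustrative, and this is not the exact no-code web tool we used.

```python
# A rough sketch of prompt-based name generation, assuming the `transformers`
# library and GPT-2; the seed names (apart from AIBeat) and sampling settings
# are illustrative, not what was actually used.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Feed the system a short list of names and ask it to continue the list.
seed = (
    "Brand names for an AI news publication: AIBeat, Neural Briefing, "
    "Deep Current,"
)

# Higher temperature tends to produce more surprising (and often sillier) output.
for candidate in generator(
    seed,
    max_new_tokens=20,
    temperature=1.2,
    do_sample=True,
    num_return_sequences=3,
):
    print(candidate["generated_text"])
```

Tweaking the temperature and the model size in this kind of loop is what produced most of the variety in our list.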

This sort of tinkering is a delightful rabbit hole to tumble down. After incessantly fiddling both with the data we fed the system and the various adjustable parameters, we ended up with a long and hilarious list of AI-generated names to chew on.

Here are some of our favorite terrible names that the tool generated:

This is a good lesson in the limitations of AI. The system had no idea what we wanted it to do. It couldn't, and didn't, solve our problem like some sort of name vending machine. AI isn't creative. We had to generate a bunch of data at the beginning, and then at the end, we had to sift through mostly unhelpful output (we ended up with dozens and dozens of names) to find inspiration.

But in the detritus, we found some nuggets of accidental brilliance. Here are a few NLP-generated names that are actually kind of good:

It's worth noting that the system all but insisted on AIBeat. No matter what permutations we tried, AIBeat kept resurfacing. It was tempting to pluck that low-hanging fruit: it matched VentureBeat and GamesBeat, and there's no confusion about what beat we'd be covering. But we humans decided to be more creative with the name, so we moved away from that construction.

We took a step back and used the long list of NLP-generated names to help us think up some fresh ideas. For example, We the Machine stood out to some of us as particularly punchy, but it wasn't quite right for a publication name. ("Hello, I write for We the Machine" doesn't exactly roll off the tongue.) But that inspired The Machine, which emerged as the winner from our shortlist.

The Machine has multiple layers. It's a play on machine learning, and it's a wink at the persistent fear of sentient robots. And it frames our AI team as a formidable, well-oiled content machine, punching well above our weight with a tiny roster of writers.

And so, I write for The Machine. Bookmark this page and visit every day for the latest AI news, analysis, and features.

Here is the original post:

AI Weekly: Welcome to The Machine, VentureBeat's AI site - VentureBeat

The state of AI in 2020: democratization, industrialization, and the way to artificial general intelligence – ZDNet

After releasing what may well have been the most comprehensive report on the State of AI in 2019, Air Street Capital and RAAIS founder Nathan Benaich and AI angel investor and UCL IIPP visiting professor Ian Hogarth are back for more.

In the State of AI Report 2020, released today, Benaich and Hogarth outdid themselves. While the structure and themes of the report remain mostly intact, its size has grown by nearly 30 percent. This is a lot, especially considering their 2019 AI report was already a 136-slide journey through all things AI.

The State of AI Report 2020 is 177 slides long, and it covers technology breakthroughs and their capabilities; the supply, demand, and concentration of talent working in the field; large platforms, financing, and areas of application for AI-driven innovation today and tomorrow; special sections on the politics of AI; and predictions for AI.

ZDNet caught up with Benaich and Hogarth to discuss their findings.

We set out by discussing the rationale for such a substantial contribution, which Benaich and Hogarth admitted took up an extensive amount of their time. They mentioned that, in their view, their combined industry, research, investment and policy backgrounds, and their currently held positions, give them a unique vantage point. Producing this report is their way of connecting the dots and giving something of value back to the AI ecosystem at large.

Coincidentally, Gartner's 2020 Hype Cycle for AI was also released a couple of days back. Gartner identifies what it calls two megatrends that dominate the AI landscape in 2020 -- democratization and industrialization. Some of Benaich and Hogarth's findings were about the massive cost of training AI models and the limited availability of research. This seems to contradict Gartner's position, or at least imply a different definition of democratization.

Benaich noted that there are different ways to look at democratization. One of them is the degree to which AI research is open and reproducible. As the duo's findings show, it is not: only 15% of AI research papers publish their code, and that has not changed much since 2016.

Hogarth added that traditionally AI as an academic field has had an open ethos, but the ongoing industry adoption is changing that. Companies are recruiting more and more researchers (another theme the report covers), and there is a clash of cultures going on as companies want to retain their IP. Notable organizations criticized for not publishing code include OpenAI and DeepMind:

"There's only so close you can get without a sort of major backlash. But at the same time, I think that data clearly indicates that they're certainly finding ways to be close when it's convenient", said Hogarth.

Industrialization of AI is under way, as open source MLOps tools help bring models to production

As far as industrialization goes, Benaich and Hogarth pointed towards their findings in terms of MLOps. MLOps, short for machine learning operations, is the equivalent of DevOps for ML models: taking them from development to production, and managing their lifecycle in terms of improvements, fixes, redeployments and so on.

Some of the more popular and fastest-growing GitHub projects in 2020 are related to MLOps, the duo pointed out. Hogarth also added that for startup founders, for example, it's probably easier to get started with AI today than it was a few years ago, in terms of tool availability and infrastructure maturity. But there is a difference when it comes to training models like GPT3:

"If you wanted to start a sort of AGI research company today, the bar is probably higher in terms of the compute requirements. Particularly if you believe in the scale hypothesis, the idea of taking approaches like GPT3 and continuing to scale them up. That's going to be more and more expensive and less and less accessible to new entrants without large amounts of capital.

The other thing that organizations with very large amounts of capital can do is run lots of experiments and iterate on large experiments without having to worry too much about the cost of training. So there's a degree to which you can be more experimental with these large models if you have more capital.

Obviously, that slightly biases you towards these almost brute force approaches of just applying more scale, capital and data to the problem. But I think that if you buy the scaling hypothesis, then that's a fertile area of progress that shouldn't be dismissed just because it doesn't have deep intellectual insights at the heart of it".

This is another key finding of the report: huge models, large companies and massive training costs dominate the hottest area of AI today: NLP, or natural language processing. Based on variables released by Google et al., researchers have estimated the cost of training NLP models at about $1 per 1,000 parameters.

That means that a model such as OpenAI's GPT3, which has been hailed as the latest and greatest achievement in AI, could have cost tens of millions to train. Experts suggest the likely budget was $10M. That clearly shows that not everyone can aspire to produce something like GPT3. The question is, is there another way? Benaich and Hogarth think so, and have an example to showcase.

PolyAI is a London-based company active in voice assistants. They produced, and open sourced, a conversational AI model (technically, a pre-trained contextual re-ranker based on transformers) that outperforms Google's BERT model in conversational applications. PolyAI's model not only performs much better than Google's, but it required a fraction of the parameters to train, meaning also a fraction of the cost.

PolyAI managed to produce a machine learning language model that performs better than Google's in a specific domain, at a fraction of the complexity and cost.

The obvious question is, how did PolyAI do it, as this could be an inspiration for others, too. Benaich noted that the task of detecting intent and understanding what somebody on the phone is trying to accomplish by calling is solved in a much better way by treating the problem as what is called a contextual re-ranking problem:

"That is, given a kind of menu of potential options that a caller is trying to possibly accomplish based on our understanding of that domain, we can design a more appropriate model that can better learn customer intent from data than just trying to take a general purpose model -- in this case BERT.

BERT can do OK in various conversational applications, but it just doesn't have the kind of engineering guardrails or engineering nuances that can make it robust in a real-world domain. To get models to work in production, you actually have to do more engineering than you have to do research. And almost by definition, engineering is not interesting to the majority of researchers".
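To illustrate the re-ranking framing in the simplest possible terms, here is a minimal sketch that scores a fixed menu of caller intents against an utterance using a pretrained sentence encoder, assuming the sentence-transformers library; PolyAI's actual pre-trained contextual re-ranker and its training data are not reproduced here, and the intents and model name are illustrative.

```python
# Minimal sketch of intent detection framed as contextual re-ranking,
# assuming the `sentence-transformers` library; an illustration only,
# not PolyAI's actual model.
from sentence_transformers import SentenceTransformer, util

# The "menu" of things a caller might be trying to accomplish in this domain.
INTENTS = [
    "check my account balance",
    "report a lost or stolen card",
    "dispute a recent transaction",
    "speak to a human agent",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice of encoder
intent_embeddings = encoder.encode(INTENTS, convert_to_tensor=True)

def rank_intents(utterance: str):
    """Score every candidate intent against the caller's utterance and
    return the menu ordered from most to least likely."""
    query = encoder.encode(utterance, convert_to_tensor=True)
    scores = util.cos_sim(query, intent_embeddings)[0]
    return sorted(zip(INTENTS, scores.tolist()), key=lambda p: p[1], reverse=True)

print(rank_intents("someone took my card and I can't find it"))
```

The point of the framing is that the model only has to choose among domain-specific options rather than understand arbitrary language, which is where the engineering guardrails come from.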

Long story short: you know your domain better than anyone else. If you can document and make use of this knowledge, and have the engineering rigor required, you can do more with less. This once more pointed to the topic of using domain knowledge in AI. This is what critics of the brute force approach, also known as the "scaling hypothesis", point to.

What the proponents of the scaling hypothesis seem to think, simplistically put, is that intelligence is an emergent phenomenon relating to scale. Therefore, by extension, if at some point models like GPT3 become large enough and complex enough, then artificial general intelligence (AGI), the holy grail of AI and perhaps of science and engineering at large, can be achieved.

How to make progress in AI, and the topic of AGI, is at least as much about philosophy as it is about science and engineering. Benaich and Hogarth approach it in a holistic way, prompted by the critique of models such as GPT3. The most prominent critic of approaches such as GPT3 is Gary Marcus. Marcus has been consistent in his critique of models predating GPT3, as the "brute force" approach does not seem to change regardless of scale.

Benaich referred to Marcus' critique, summing it up. GPT3 is an amazing language model that can take a prompt and output a sequence of text that is legible, comprehensible and in many cases relevant to what the prompt was. What's more, we should add, GPT3 can even be applied to other domains, such as writing software code, which is a topic in and of itself.

However, there are numerous examples where GPT3 is off course, either in a way that expresses bias or in producing irrelevant results. An interesting point is how we are able to measure the performance of models like GPT3. Benaich and Hogarth note in their report that existing benchmarks for NLP, such as GLUE and SuperGLUE, are now being aced by language models.

These benchmarks are meant to compare the performance of AI language models against humans at a range of tasks spanning logic, common sense understanding, and lexical semantics. A year ago, the human baseline in GLUE was beaten by 1 point. Today, GLUE is reliably beaten, and its more challenging sibling SuperGLUE is almost beaten too.
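For readers curious what these benchmark tasks actually contain, a small sketch using the Hugging Face datasets library (our assumption; the report itself does not walk through code) pulls one GLUE task for inspection:

```python
# A small sketch, assuming the Hugging Face `datasets` library, that loads one
# GLUE task (SST-2, sentence-level sentiment classification) just to inspect
# what the benchmark examples look like; scoring a model against the human
# baseline is out of scope here.
from datasets import load_dataset

sst2 = load_dataset("glue", "sst2", split="validation")
print(len(sst2), "validation examples")
print(sst2[0])  # e.g. {'sentence': "...", 'label': 1, 'idx': 0}
```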

AI language models are getting better, but does that mean we are approaching artificial general intelligence?

This can be interpreted in a number of ways. One way would be to say that AI language models are just as good as humans now. However, the kind of deficiencies that Marcus points out show this is not the case. Maybe then what this means is that we need a new benchmark. Researchers from Berkeley have published a new benchmark, which tries to capture some of these issues across various tasks.

Benaich noted that an interesting extension of what GPT3 could do relates to the discussion around PolyAI. It's the aspect of injecting some kind of toggles into the model that allow it to have some guardrails, or at least tune what kind of outputs it can create from a given input. There are different ways you might be able to do this, he went on to add.

Previously, the use of knowledge bases and knowledge graphs was discussed. Benaich also mentioned some kind of learned intent variable that could be used to inject this kind of control over this more general purpose sequence generator. Benaich thinks the critical view is certainly valid to some degree, and points to what models like GPT3 could use, with the goal of making them useful in production environments.

Hogarth, for his part, noted that Marcus is "almost a professional critic of organizations like DeepMind and OpenAI". While it's very healthy to have those critical perspectives when there is a reckless hype cycle around some of this work, he went on to add, OpenAI has one of the more thoughtful approaches to policy around this.

Hogarth emphasized the underlying difference in philosophy between proponents and critics of the scaling hypothesis. However, he went on to add, if the critics are wrong, then we might have a very smart but not very well-adjusted AGI on our hands, as evidenced by some of these early instances of bias that appear as you scale these models:

"So I think it's incumbent on organizations like OpenAI if they are going to pursue this approach to tell us all how they're going to do it safely, because it's not obvious yet from their research agenda. How do you marry AI safety with this kind of this kind of throw more data and compute to the problem and AGI will emerge approach".

This discussion touched on another part of the State of AI Report 2020. Some researchers, Benaich and Hogarth noted, feel that progress in mature areas of machine learning is stagnant. Others call for advancing causal reasoning and claim that adding this element to machine learning approaches could overcome barriers.

Adding causality to machine learning could be the next breakthrough. The work of pioneers like Judea Pearl shows the way

Causality, Hogarth said, is arguably at the heart of much of human progress. From an epistemological perspective, causal reasoning has given us the scientific method, and it's at the heart of all of our best world models. So the work that people like Judea Pearl have pioneered to bring causality to machine learning is exciting. It feels like the biggest potential disruption to the general trend of larger and larger correlation driven models:

"I think if you can if you can crack causality, you can start to build a pretty powerful scaffolding of knowledge upon knowledge and have machines start to really contribute to our own knowledge bases and scientific processes. So I think it's very exciting. There's a reason that some of the smartest people in machine learning are spending weekends and evenings working on it.

But I think it's still in its infancy as an area of attention for the commercial community. We really only found one or two examples of it being used in the wild, one by faculty at a London based machine learning company and one by BenevolentAI in our report this year".

If you thought that's enough cutting-edge AI research and applications for one report, you'd be wrong. The State of AI Report 2020 is a trove of references, and we'll revisit it soon, with more insights from Benaich and Hogarth.

More here:

The state of AI in 2020: democratization, industrialization, and the way to artificial general intelligence - ZDNet

Navigating supply chains with AI and data analytics – Supply Chain Digital – The Procurement & Supply Chain Platform

Supply Chain Digital explores the utilisation of AI and analytics with experts in the sector, particularly in regard to how they are shaping corporate attitudes to data.

In an era calling for latency-sensitive applications, where the emergence of edge computing, 5G and artificial intelligence (AI) powered analytics are ushering in the possibility of real-time solutions, companies, now more than ever, are looking for the most efficient ways to make use of their data. The sheer volumes of information that can be gathered from every aspect of a business are overwhelming: with so much data available, where do you start when examining it? The challenge for modern supply chains is knowing where to place a strategic focus and not becoming paralysed by information overload, whilst also not excluding details that could increase efficiency, allow for better forecasts or enhance customer experience (CX).

Grant Millard, Director of Technology at Vendigital, says that, prior to the advent of Big Data and analytics, companies struggled to deliver clear or credible data-based insights. "More often than not companies are operating in a data vacuum, continuously manipulating the data to get the insight they are after and repeating this process every time they want to get that insight." However, the data produced by emails, live chat, social media, sales reports, etc, can often be too vast to administrate manually. Once analytical software began collating information from multiple areas, algorithms developed for machine learning programmes liberated companies by allowing them to take action on the collected data, as opposed to simply managing it. Automating the collection and management process means that procedures can be faster and more accurate. With businesses able to pinpoint problems and resolve them in real time, the possibilities are nothing short of transformational.

This evolution is a necessity both for supply chains and modern business generally. Data no longer comes from standard sources; in an increasingly digital world, it is woven into practically every facet, including customer interaction. Jonathan Clarke, Manager of Statistical Modelling at LexisNexis Risk Solutions, believes any supply chain without strong analytical capabilities will fail to be competitive. "Analytics has, therefore, become a key necessity in any business process to sufficiently review data in order to make informed business decisions," he explains. "Using bespoke Big Data architecture that can process vast data assets, as well as leverage machine learning tools, will empower a business to be able to highlight risk quickly and efficiently." However, before these advantages can properly manifest themselves, companies must employ an intelligent and well-thought-out strategy to make the most out of the data collected.

Big Data represents a double-edged sword: more information is available, yet it is amorphous and representative of nothing if not sufficiently harnessed. "As data generation continues to grow, the amount of useful data decreases," says Nikul Amin, Director of Consulting and Analytics at Acxiom. Therefore, setting measurable goals and establishing desirable outcomes will help facilitate the productive analysis of information. Employing edge computing - wherein data analysis is conducted on a device directly, as opposed to being sent to a server first - enables supply chains to concentrate on a particular problem quickly and accurately. This has the additional benefit of making data analysis more efficient: essential data can be ascertained more easily and inconsequential information disregarded.

The enhanced level of scrutiny and diligence that this improved speed and reliability can bring a supply chain is significant. "Data from differentiated sources can provide a comprehensive, balanced view as opposed to relying on one single source," Clarke says. "A focused analysis will flag areas containing inherent risk and offer a wider view of entities within the chain and potential data points that may be a cause for concern." Kirsty Braines, COO at Oliver Wight EAME, concurs with this view, opining that increasingly elaborate supply chains - containing, amongst other elements, procurement, logistics and sales - are difficult to transparently track from start to finish. By strengthening traceability in situations such as product recalls or quality problems, organisations can isolate the issue efficiently and accurately, minimising the cost of locating the root of the problem and limiting damage to a brand's reputation.

Real-time analysis of data also affords a far more dynamic and fluid customer experience, wherein feedback is used to modify services on the fly. Combined with a streamlined machine learning algorithm, supply chains could deliver an almost autonomous level of self-improvement and be able to anticipate future trends, thus enabling companies to turn forecasted difficulties into opportunities. As Millard puts it, "The holy grail is to use Big Data and analytics to deliver action-oriented insights." To reach this stage, he considers automation via AI analytics and blockchain, and the removal of manual administrative procedures, to be the way forward. Successfully doing this, however, may require larger companies to take notice of the more agile working methods of startups and SMEs. "Those making the biggest impact are niche operators that are able to blend technology, data science, SME and industry expertise to deliver data-based insights that provide answers to critical business questions and enhance enterprise value."

The possibilities for supply chain optimisation are exciting, but Braines is hesitant to say that the transition to this new way of operating will be easy. "Big Data and analytics are game-changers for businesses. However, it's all advanced technology and the clue is very much in the name: advanced," she says. "A huge proportion of companies haven't reached the maturity to completely handle data, with the technology not fully understood, let alone successfully implemented." It is therefore imperative that companies incorporate these technologies into their supply chains strategically and thoughtfully. The insights provided by data and analytics can be enlightening, but, without a plan or vision for how they will be utilised, a supply chain will not ultimately benefit from them.

Read this article:

Navigating supply chains with AI and data analytics - Supply Chain Digital - The Procurement & Supply Chain Platform

US defense must have foundations for AI integration by 2025, report says – Global Government Forum

Military intelligence: the DoD is now trying to make the leap to a "software-intensive enterprise", says the new report from the NSCAI.

The US Department of Defense (DoD) must set an ambitious goal to have the foundations for widespread integration of artificial intelligence (AI) across defence in place by 2025, according to a draft of the final report from the National Security Commission on Artificial Intelligence (NSCAI).

This should include a common digital infrastructure that is accessible to internal AI development teams and critical industry partners, a technically literate workforce, and modern AI-enabled business practices that improve efficiency.

The draft report was published last month; the final version will be released on 1 March 2021.

The Commission has advocated for greater investment and uptake in AI in the defence and security sectors. It frames the US's efforts in AI similarly to an arms race, as hostile actors develop their own capabilities in autonomous weaponry, cyber tools and disinformation.

"The magnitude of the technological opportunity coincides with a moment of strategic vulnerability. China is a competitor possessing the might, talent, and ambition to challenge America's technological leadership, military superiority, and its broader position in the world," the introduction notes.

"AI is deepening the threat posed by cyber attacks and disinformation campaigns that Russia, China, and other state and non-state actors use to infiltrate our society, steal our data, and interfere in our democracy," it adds.

The NSCAI was established in August 2018 as part of the annual defence spending settlement, with a mission to scope out how to advance AI, machine learning and associated technologies in relation to US national security and defence needs.

It is chaired by former Google chief executive Dr Eric Schmidt. The vice chair is Robert Work, a former Deputy Secretary of Defense from 2014 to 2017, under both the Obama and Trump administrations.

Its fifteen commissioners, supported by a secretariat of 25 staff, have completed five interim reports and memos since July 2019, informed by submissions from a wide range of experts. The commission is scheduled to be wound up in October 2021.

Previous reports have urged policies such as creating a national digital corps, setting up a military cyber academy, and increasing the federal budget for research and development into AI and associated technologies, according to US government news website Fedscoop.

The draft final report is in two halves. The first, "Defending America in the AI Era", focuses on the defence applications of AI, and what the US should do to respond to the spectrum of AI-related threats from state and non-state actors.

In the second part, "Winning the Technology Competition", the commission looks at AI as part of a wider global competition around new technologies and recommends policies to promote innovation in AI and create a critical and competitive advantage for the US.

The introduction paints a picture of a nation at risk of slipping behind competitor states, which, in future, could include small nations and actors able to exploit affordable, off-the-shelf hardware and readily available algorithms.

The report is also blunt about China's capability. "In some areas of research and applications, China is already an AI peer, and it is more technically advanced in some applications. Within the next decade, China could surpass the United States as the world's AI superpower," it notes.

It warns that US citizens have also not recognised the assertive role the government will have to play in ensuring the United States wins this innovation competition, or the public investment needed. "Despite our private sector and university leadership in AI, the United States remains unprepared for the coming era," the commission writes.

On the other hand, capabilities in AI could ensure the US can respond with greater agility to new or emerging vulnerabilities. "Global crises, exemplified in the global pandemic and climate change, are expanding the definition of national security and crying out for innovative solutions. AI can help us navigate many of these challenges," the introduction says.

The authors argue that AI development and implementation requires a stack of interconnected elements including talent, data, hardware, algorithms, applications, and integration.

"We regard talent as the most essential requirement because it drives the creation and management of all the other elements," the report says, recommending a focus on improving the government technology talent pipeline, both through new recruiting practices and retraining current employees.

"If government agencies do not have enough of the right talent, every AI initiative will struggle and most will fail," said commissioner Dr José-Marie Griffiths, president of Dakota State University, according to Fedscoop.

While the US armed forces might already deploy, and be able to counter, drones and autonomous weapons, the NSCAI warns that rapidly advancing capabilities could change the dynamic within human-machine teams.

In the past, computers could only perform tasks that fell within a clearly defined set of parameters or rules programmed by a human. As AI becomes more capable, computers will be able to learn and perform tasks based on parameters that humans do not explicitly program, creating choices and taking actions at a volume and speed never before possible.

The report therefore sees the construction of an AI infrastructure as the first step to creating new defence capabilities. "DoD has long been hardware-oriented toward ships, planes, and tanks. It is now trying to make the leap to a software-intensive enterprise," it notes.

Original post:

US defense must have foundations for AI integration by 2025, report says - Global Government Forum

How to Fight Discrimination in AI – Harvard Business Review

Executive Summary

Ensuring that your AI algorithm doesn't unintentionally discriminate against particular groups is a complex undertaking. What makes it so difficult in practice is that it is often extremely challenging to truly remove all proxies for protected classes. Determining what constitutes unintentional discrimination at a statistical level is also far from straightforward. So what should companies do to steer clear of employing discriminatory algorithms? They can start by looking to a host of legal and statistical precedents for measuring and ensuring algorithmic fairness.

Is your artificial intelligence fair?

Thanks to the increasing adoption of AI, this has become a question that data scientists and legal personnel now routinely confront. Despite the significant resources companies have spent on responsible AI efforts in recent years, organizations still struggle with the day-to-day task of understanding how to operationalize fairness in AI.

So what should companies do to steer clear of employing discriminatory algorithms? They can start by looking to a host of legal and statistical precedents for measuring and ensuring algorithmic fairness. In particular, existing legal standards that derive from U.S. laws such as the Equal Credit Opportunity Act, the Civil Rights Act, and the Fair Housing Act, along with guidance from the Equal Employment Opportunity Commission, can help to mitigate many of the discriminatory challenges posed by AI.

At a high level, these standards are based on the distinction between intentional and unintentional discrimination, sometimes referred to as disparate treatment and disparate impact, respectively. Intentional discrimination is subject to the highest legal penalties and is something that all organizations adopting AI should obviously avoid. The best way to do so is by ensuring the AI is not exposed to inputs that can directly indicate a protected class, such as race or gender.

Avoiding unintentional discrimination, or disparate impact, however, is an altogether more complex undertaking. It occurs when a seemingly neutral variable (like the level of home ownership) acts as a proxy for a protected variable (like race). What makes avoiding disparate impact so difficult in practice is that it is often extremely challenging to truly remove all proxies for protected classes. In a society shaped by profound systemic inequities such as that of the United States, disparities can be so deeply embedded that it oftentimes requires painstaking work to fully separate what variables (if any) operate independently from protected attributes.

Indeed, because values like fairness are subjective in many ways (there are, for example, nearly two dozen conceptions of fairness, some of which are mutually exclusive), it's sometimes not even clear what the most fair decision really is. In one study by Google AI researchers, the seemingly beneficial approach of giving disadvantaged groups easier access to loans had the unintended effect of reducing these groups' credit scores overall. Easier access to loans actually increased the number of defaults within that group, thereby lowering their collective scores over time.

Determining what constitutes disparate impact at a statistical level is also far from straightforward. Historically, statisticians and regulators have used a variety of methods to detect its occurrence under existing legal standards. Statisticians have, for example, used a group fairness metric called the 80 percent rule (it's also known as the adverse impact ratio) as one central indicator of disparate impact. Originating in the employment context in the 1970s, the ratio consists of dividing the selection rate of the disadvantaged group by the selection rate of the advantaged group. A ratio below 80% is generally considered to be evidence of discrimination. Other metrics, such as standardized mean difference or marginal effects analysis, have been used to detect unfair outcomes in AI as well.
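As a concrete illustration of the 80 percent rule described above, here is a minimal sketch; the approval counts in the example are invented for demonstration purposes only.

```python
# A minimal sketch of the "80 percent rule" (adverse impact ratio); the
# example numbers are invented for illustration.
def adverse_impact_ratio(selected_disadvantaged: int, total_disadvantaged: int,
                         selected_advantaged: int, total_advantaged: int) -> float:
    """Selection rate of the disadvantaged group divided by the selection
    rate of the advantaged group."""
    rate_disadvantaged = selected_disadvantaged / total_disadvantaged
    rate_advantaged = selected_advantaged / total_advantaged
    return rate_disadvantaged / rate_advantaged

# Example: 30 of 100 applicants from the disadvantaged group are approved,
# versus 60 of 120 from the advantaged group.
ratio = adverse_impact_ratio(30, 100, 60, 120)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.60
if ratio < 0.8:
    print("below the 80% threshold -- potential evidence of disparate impact")
```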

All of which means that, in practice, when data scientists and lawyers are asked to ensure their AI is fair, they're also being asked to select what fairness should mean in the context of each specific use case and how it should be measured. This can be an incredibly complex process, as a growing number of researchers in the machine learning community have noted in recent years.

Despite all these complexities, however, existing legal standards can provide a good baseline for organizations seeking to combat unfairness in their AI. These standards recognize the impracticality of a one-size-fits-all approach to measuring unfair outcomes. As a result, the question these standards ask is not simply "is disparate impact occurring?" Instead, existing standards mandate what amounts to two essential requirements for regulated companies.

First, regulated companies must clearly document all the ways they've attempted to minimize, and therefore to measure, disparate impact in their models. They must, in other words, carefully monitor and document all their attempts to reduce algorithmic unfairness.

Second, regulated organizations must also generate clear, good faith justifications for using the models they eventually deploy. If fairer methods existed that would have also met these same objectives, liability can ensue.

Companies using AI can and should learn from many of these same processes and best practices to both identify and minimize cases when their AI is generating unfair outcomes. Clear standards for fairness testing that incorporate these two essential elements, along with clear documentation guidelines for how and when such testing should take place, will go a long way towards ensuring fairer and more carefully monitored outcomes for companies deploying AI. Companies can also draw from public guidance offered by experts such as BLDS's Nicholas Schmidt and Bryce Stephens.

Are these existing legal standards perfect? Far from it. There is significant room for improvement, as regulators have in fact noted in recent months. (A notable exception is the Trump administration's Department of Housing and Urban Development, which is currently attempting to roll back some of these standards.) Indeed, the U.S. Federal Trade Commission has indicated an increasing focus on fairness in AI in recent months, with one of its five commissioners publicly stating that it should expand its oversight of discriminatory AI.

New laws and guidance targeting fairness in AI, in other words, are clearly coming. If shaped correctly, they will be a welcome development when they arrive.

But until they come, it's critical that companies build off of existing best practices to combat unfairness in their AI. If deployed thoughtfully, the technology can be a powerful force for good. But if used without care, it is all too easy for AI to entrench existing disparities and discriminate against already-disadvantaged groups. This is an outcome that neither businesses nor society at large can afford.

View post:

How to Fight Discrimination in AI - Harvard Business Review

Zero-sum thinking is an AI talent killer – Times Higher Education (THE)

Technology arising out of artificial intelligence has untold potential to benefit humanity and to generate wealth and employment. It also offers exciting new ways to explore most academic disciplines.

Moreover, if a technology company cannot learn from the innovative work done in higher education, it will probably fail to find a platform for sustainable growth. Similarly, without both public and private funding, new inventions may languish in a laboratory.

Hence, the case is obvious for industry and academia to collaborate on AI. However, not everyone is convinced. Some people regard cooperation between enterprises and universities as benefitting only the companies, and believe that research should be conducted in a commercial vacuum. Politicians are less concerned about commercial benefit, since they welcome the economic growth and jobs that commercial success brings. But they do sometimes object when the company concerned is based in a foreign country.

Such zero-sum thinking has parallels with fears voiced by some US politicians in the 1980s and 1990s about Japan's economic rise. In the 2018 book Prediction Machines, Ajay Agrawal recounts how the MIT economist Scott Stern was asked at a congressional hearing in 1999 how the US should respond to comparably higher R&D spending by Japan and other economies, suggesting that these countries posed a threat to American prosperity. "The first thing we should do is send them a thank you letter," Stern said. "Innovative investment is not a win-lose situation. American consumers are going to benefit from more investment by other countries... It is a race we can all win."

Huawei is not holding out for a thank you letter any time soon. But our message will always be clear: countries have more to gain than to fear from healthy international competition. By placing Chinese companies on a blacklist, politicians are restricting the talent growth required for a successful AI-informed future that will benefit everyone.

Talent growth is a hugely important issue. The European Commission believes that there could be as many as 750,000 unfilled jobs in the European information and communication technology sector this year. And a 2018 Ernst & Young poll found that 56 per cent of senior AI professionals believe this lack of qualified talent is the single biggest barrier to AI implementation across business operations. This means talent, not technology, is key to economic growth.

The trick, then, will be to upskill people to work in IT, rather than allowing them to be put out of work by it: what Andy Haldane, chief economist at the Bank of England, calls "technological unemployment". But the existing talent is spread all around the world; 71 per cent of tech employees in Silicon Valley are foreign born, for instance. So cross-border collaboration on AI makes overwhelming sense. We need a global Silicon Valley: an international community of academics, entrepreneurs, companies and investors working together to nurture talent and push ideas forward.

To this end, Huawei has launched a developer enablement programme, providing an investment of $1 billion (777 million) to address the widening skills gap. We work with universities to publish textbooks and educational material related to AI, help to build AI labs and train AI teachers, and encourage universities from all over the world to participate in the Huawei cloud open community.

But we also need to keep making the case for investment in basic research, which has hit a bottleneck worldwide, primarily because government funding in this area has fallen significantly. Product innovation during the past 20 years has relied on ideas that were conceived in academia and developed by industry into products and solutions that address customer needs. The next phase will be a search for new theoretical concepts that will shape consumer needs in 50 years' time: a future Shannon-Hartley theorem or Moore's Law, for example.

Despite the geopolitical pressures that international tech companies such as Huawei face, we will continue to increase investment in collaborating with universities. A collective effort is the only way to answer the worlds greatest challenges and respond to an uncertain future. It is not a zero-sum game.

Jack Lyu Ke is president of Huawei's Human Resource Management Department, and chairman of the Huawei Corporate Advisory Committee.

More:

Zero-sum thinking is an AI talent killer - Times Higher Education (THE)

The Rise of AI Is Forcing Google and Microsoft to Become Chipmakers – WIRED


Read this article:

The Rise of AI Is Forcing Google and Microsoft to Become Chipmakers - WIRED

How AI and Unmanned Aerial Systems Could Change the Future of Crop Scouting – Ohio's Country Journal and Ohio Ag Net

Crop scouting may transition from a boots-on-the-ground job to an artificial intelligence endeavor in the sky thanks to research from The Ohio State University (OSU) and investments made by the Ohio Soybean Council (OSC) and soybean checkoff. Dr. Scott Shearer, professor and chair of OSU's Department of Food, Agricultural and Biological Engineering, and his team are testing the use of small Unmanned Aerial Systems (sUAS) in Ohio fields to automate the scouting process with data collected directly from the crop canopy.

To dig deeper, OSC talked with Dr. Shearer about the project and the impact it could have on Ohio agriculture.

Q: Tell us about your current work with AI and sUAS.

A: We have developed a stinger platform suspended beneath a multi-rotor drone, or sUAS, to insert sensors into the crop canopy. These sensors capture high-resolution imagery from within the plant canopy, which can be used for real-time plant stress classification.

Over the past two growing seasons, we have been scouting soybean fields and building an extensive image library of soybean crop stress imagery. Convolutional Neural Networks (CNNs), AI algorithms used for image recognition, have been trained using the image library to support real-time classification of crop stress. The resulting CNN classifiers are being field tested for accuracy.
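For readers unfamiliar with CNN classifiers, a minimal sketch of the general approach follows, using PyTorch; the class labels, input size, and architecture are our illustrative assumptions, not the OSU team's actual model.

```python
# A minimal sketch of a CNN image classifier of the kind described above,
# using PyTorch; the labels, input size, and architecture are illustrative
# assumptions, not the OSU team's actual model.
import torch
import torch.nn as nn

STRESS_CLASSES = ["healthy", "nutrient_deficiency", "disease", "insect_damage"]  # hypothetical labels

class CropStressCNN(nn.Module):
    def __init__(self, num_classes: int = len(STRESS_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: a batch of RGB canopy images, shape (N, 3, H, W)
        return self.classifier(self.features(x).flatten(1))

model = CropStressCNN()
dummy_batch = torch.randn(4, 3, 224, 224)   # four 224x224 RGB frames
logits = model(dummy_batch)
print(logits.argmax(dim=1))                  # predicted stress class per frame
```

In practice the network would be trained on the labeled image library before its per-frame predictions could be trusted in the field.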

Currently, the predominant sensing technique uses low-cost RGB cameras. However, additional work has been conducted this growing season to include a near-infrared spectroscopic sensor as well as a tissue sampler, both suspended on the stinger beneath the drone.

Q: How does this technology benefit Ohio soybean farmers?

A: The direct benefit to Ohio soybean farmers is a more efficient and accurate scouting approach for improved crop health monitoring. Current scouting practices require the farmer to scout three or more locations within a field. However, the sUAS approach significantly expands the scout's ability to monitor many more sites within a field and to automate the stress detection and specification process using AI.

Ideally, farmers using this method will be alerted of stressors affecting their soybean crop sooner so they can implement corrective measures more quickly and preserve yield potential for improved profitability. This rapid assessment approach will move the industry toward a more prescriptive approach to crop stress management where economic thresholds are addressed on a refined spatial basis.

Q: When do you anticipate this technology could be commercialized?

A: Researchers and technology commercialization managers have been in contact with several venture capitalists and ag tech providers to explore commercialization options. The approach is somewhat constrained by the ability to develop regional and crop-specific reference libraries to train the CNN classifiers, so the value of this approach depends on who can develop those libraries. It's likely the first commercial deployment of this system will occur within three to five years.

This technology may be a few years from your farm, but there are many other innovations coming available every day to help improve the efficiency of your operation. Find out which technologies are currently working for other Ohio soybean farmers here.

See the rest here:

How AI and Unmanned Aerial Systems Could Change the Future of Crop Scouting - Ohio's Country Journal and Ohio Ag Net

Top Voice AI Stories in the First Half of 2020 with Lau, Prescott and König – Voicebot Podcast Ep 162 – Voicebot.ai

on August 9, 2020 at 4:25 pm

This week's roundtable discussion focuses on the top voice AI news from the first half of 2020. We, of course, talk about COVID-19 and how that is changing adoption patterns for the industry. Plus we discuss smart speaker adoption figures, the voice app ecosystems and whether there is a voice app winter underway, the rise of custom assistants, a shift to mobile, and more.

Guests this week include Theo Lau, founder of Unconventional Ventures, Katherine Prescott, founder of VoiceBrew, and Jan König, co-founder of Jovo. Unconventional Ventures is a firm that focuses on banking and has a lot of crossover with voice and AI, among other financial services technologies. Theo is also an advisor to Bond.ai and is the former director for market innovation at AARP. Earlier in her career, she worked with voice and data products and services at Nextel and Teligent. Theo earned a degree in Chemical Engineering from RPI and an MS from George Washington University.

VoiceBrew is a daily newsletter that teaches consumers how to get more out of their Alexa-enabled devices. Previously, Katherine was a senior vice president of corporate strategy at Highbridge Capital Management and an analyst at Morgan Stanley focusing on M&A. She earned a BS in economics at Harvard.

Jovo provides a cross-platform development framework for voice apps and technologies. Jan is also an entrepreneur partner at Etribes and was a product manager at Blue Yonder. He earned both Bachelor's and Master's degrees in Industrial Engineering from Karlsruhe Institute of Technology.

You can listen to the podcast interview above, on Google or Apple Podcasts or most of the leading podcast players.

2019 Voice Year in Review with Jargon, Voxly, and Voicebot – Voicebot Podcast Ep 128

2020 Voice AI Predictions Part 1 with Ware, Bass, and Lens-FitzGerald – Voicebot Podcast Ep 130

2020 Voice AI Predictions Part 2 on Voice App Architecture with Kelvie, McElreath and Ream – Voicebot Podcast Ep 131

Harjinder Sandhu, Founder and CEO of Saykara, a Voice Assistant for Doctors – Voicebot Podcast Ep 161

Bret is founder, CEO, and research director of Voicebot.ai. He was named commentator of the year by the Alexa Conference in 2019 and is widely cited in media and academic research as an authority on voice assistants and AI. He is also the host of the Voicebot Podcast and editor of the Voice Insider newsletter.

Originally posted here:

Top Voice AI Stories in the First Half of 2020 with Lau, Prescott and König - Voicebot Podcast Ep 162 - Voicebot.ai

Artificial Intelligence In Diagnostics Market Worth $3.0 Billion By 2027: Grand View Research, Inc. – PRNewswire

SAN FRANCISCO, July 29, 2020 /PRNewswire/ -- The global artificial intelligence in diagnostics market size is expected to reach USD 3.0 billion by 2027, expanding at a CAGR of 32.3%, according to a new report by Grand View Research, Inc. An increase in the number of healthcare Artificial Intelligence (AI) diagnostic startups, coupled with huge investments by venture capital firms to develop innovative technologies that allow fast and effective diagnostic procedures amid a continuous rise in the number of patients suffering from chronic diseases, supports the growth of the market. Around 33.3% of all healthcare AI SaaS companies are engaged in developing diagnostics, making it the largest focus area for startups in the market.

Growing investments and funding for AI in healthcare is also one of the key factors driving the market. For instance, in 2016, the U.S.-based startup PathAI secured a USD 75.2 million investment to develop machine learning technology that assists pathologists in making more precise diagnoses. Rising investment in AI diagnosis-based startups is one of the key indicators of upcoming opportunities.

Key suggestions from the report:

Read the 150-page research report with ToC on "Artificial Intelligence In Diagnostics Market Size, Share And Trends Analysis Report By Component (Software, Hardware, Services), By Diagnosis Type, By Region, And Segment Forecasts, 2020 - 2027" at: https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-diagnostics-market

Moreover, the increasing adoption of AI technology by hospitals and research centers for clinical diagnosis is another factor propelling market growth. For instance, in July 2018, two national research institutes in Japan succeeded in implementing AI technology for detecting early-stage stomach cancer with a high precision rate of 95.0% for healthy tissues and 80.0% for cancer tissues. According to the National Cancer Centre and Riken, the AI technology took 0.004 seconds to identify whether an obtained endoscopic image contained normal stomach tissue or early-stage cancer tissue. Growing awareness regarding the technology is expected to boost the usage of AI in medical procedures.

Grand View Research has segmented the artificial intelligence in diagnostics market on the basis of component, diagnosis type, and region:

Find more research reports on the Healthcare IT Industry, by Grand View Research:

Gain access to Grand View Compass, our BI-enabled intuitive market research database of 10,000+ reports

About Grand View Research

Grand View Research, a U.S.-based market research and consulting company, provides syndicated as well as customized research reports and consulting services. Registered in California and headquartered in San Francisco, the company comprises over 425 analysts and consultants, adding more than 1,200 market research reports to its vast database each year. These reports offer in-depth analysis of 46 industries across 25 major countries worldwide. With the help of an interactive market intelligence platform, Grand View Research helps Fortune 500 companies and renowned academic institutes understand the global and regional business environment and gauge the opportunities that lie ahead.

Contact:

Sherry James
Corporate Sales Specialist, USA
Grand View Research, Inc.
Phone: +1-415-349-0058
Toll Free: 1-888-202-9519
Email: [emailprotected]
Web: https://www.grandviewresearch.com
Follow Us: LinkedIn | Twitter

SOURCE Grand View Research, Inc.

Continue reading here:

Artificial Intelligence In Diagnostics Market Worth $3.0 Billion By 2027: Grand View Research, Inc. - PRNewswire

Adobe tries to make selfies less embarrassing using AI and machine learning – The Verge

Sensei, the arm of Adobe that fiddles around with AI and machine learning, has released a trailer for some new selfie-improving features it has in the works. Adobe hasn't announced if or when these features will be included in any of its apps, but the video is still a fun watch.

It shows a dude taking a bad, embarrassing selfie and improving it using tools that apply artificial depth of field, tweak the perspective so it's not clear it was taken from so close to the face, and steal aesthetics from other portraits of other dudes.

That last feature, the ability to copy the style of any given photograph and transfer it to another, was described in detail by a joint research team last week. The researchers, from Adobe and Cornell University, published a paper titled "Deep Photo Style Transfer", which outlines how the process is more complex and precise than merely applying an Instagram-style filter. The code they used is open source, and you can download it on GitHub if you want.

The selfie our demonstrator has at the end does look better, but I feel it would be irresponsible of me not to point out that the selfie he started with wouldn't have been so awful if he hadn't gone with the classic My Dad move of aiming the camera directly up his chin.

Another way to drastically improve your selfies, beyond relying on admittedly cool-looking Adobe AI, would be to listen to even one thing Kim Kardashian-West has ever said. Repeat after me: never take pictures of yourself from any lower than one foot above your head.

Here is the original post:

Adobe tries to make selfies less embarrassing using AI and machine learning - The Verge

Ozlo releases a suite of APIs to power your next conversational AI … – TechCrunch


Building on its promise to give the entrenched a run for their money, conversational AI startup Ozlo is making its meticulously crafted knowledge layer...

See the rest here:

Ozlo releases a suite of APIs to power your next conversational AI ... - TechCrunch

ARM's new edge AI chips promise IoT devices that won't need the cloud – The Verge

Edge AI is one of the biggest trends in chip technology. These are chips that run AI processing on the edge, that is, on a device without a cloud connection. Apple recently bought a company that specializes in it, Google's Coral initiative is meant to make it easier, and chipmaker ARM has already been working on it for years. Now, ARM is expanding its efforts in the field with two new chip designs: the Arm Cortex-M55 and the Ethos-U55, a neural processing unit meant to pair with the Cortex-M55 for more demanding use cases.

The benefits of edge AI are clear: running AI processing on a device itself, instead of in a remote server, offers big benefits to privacy and speed when it comes to handling these requests. Like ARM's other chips, the new designs won't be manufactured by ARM; rather, they serve as blueprints for a wide variety of partners to use as a foundation for their own hardware.

But what makes ARM's new chip designs particularly interesting is that they're not really meant for phones and tablets. Instead, ARM intends for the chips to be used to develop new Internet of Things devices, bringing AI processing to more devices that otherwise wouldn't have those capabilities. One use case ARM imagines is a 360-degree camera in a walking stick that can identify obstacles, or new train sensors that can locally identify problems and avoid delays.

As for the specifics, the Arm Cortex-M55 is the latest model in ARM's Cortex-M line of processors, which the company says offers up to a 15x improvement in machine learning performance and a 5x improvement in digital signal processing performance compared to previous Cortex-M generations.

For truly demanding edge AI tasks, the Cortex-M55 (or older Cortex-M processors) can be combined with the Ethos-U55 NPU, which takes things a step further. It can offer another 32x improvement in machine learning processing compared to the base Cortex-M55, for a total of 480x better processing than previous generations of Cortex-M chips.
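Taken at face value, that headline figure is just the two gains compounded: a 15x machine learning speedup from the Cortex-M55 multiplied by the 32x boost from the Ethos-U55 works out to 15 × 32 = 480, matching the quoted 480x over earlier Cortex-M generations.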

While those are impressive numbers, ARM says that the improvement in data throughput here will make a big difference in what edge AI platforms can do. Current Cortex-M platforms can handle basic tasks like keyword or vibration detection. The M55's improvements let it work with more advanced things like object recognition. And the full power of a Cortex-M chip combined with the Ethos-U55 promises even more functionality, with the potential for local gesture and speech recognition.

All of these advances will take some time to roll out. While ARM is announcing the designs today and releasing documentation, it doesn't expect actual silicon to arrive until early 2021 at the earliest.

See more here:

ARM's new edge AI chips promise IoT devices that won't need the cloud - The Verge

Confirmed: Facebook shifts away from AI and like a miracle, the bots start working – The Register

Facebook has revamped its Messenger bot platform that allows businesses to engage with the app's massive audience, and the story is a lesson for anyone looking for practical applications of the AI and machine learning hype.

If you believe the evangelists, "new advances in AI" will allow businesses to restructure their support and sales operations, and maybe even their business models.

Via a chatbot interface, the AI would take care of work a human would do. Mark Zuckerberg implied as much when he introduced Facebook's bot platform only last April to great fanfare, in a move hailed as part of an AI arms race.

The "Fourth Industrial Revolution" of white-collar automation was almost here.

But reports that Facebook has scaled back the fashionable emphasis on machine learning and AI after encountering a high failure rate have turned out to be true. And British entrepreneur Syd Lawrence thinks Facebook has been wise to move away from an emphasis on AI.

"The whole AI hype is horrifically off the mark," he told us. "Whereas these things can be really useful and powerful. But they're certainly not AI. We're way away from AI being really useful."

Lawrence, CEO of The Bot Platform, says: "The whole conversation around AI and even the term chatbots is all horribly wrong. I really hope we can put AI chatbots to rest. No one wants to talk to a chatbot." If that sounds odd from a company at the bleeding edge of the chatbot hype, then it needs an explanation.

The bots Lawrence's team creates are really akin to a Unix daemon, or in Windows, a Service: they're a background process or agent. In fact, we all use these often without knowing it, Lawrence tells us, and they've never stopped being useful. Now they're slightly more useful.

"One of the bots using our platform is performing almost 10,000 per cent better than their mobile website for one common task, and we've customers that have their bots performing 30,000 per cent better than an email mailing list," says Lawrence.

A real-life walkthrough of a bot on the revamped Facebook Messenger shows what he means.

The revamped bot UI has a persistent menu, and can now run "headless". Imagine bypassing a bank or a utility's voice menu and using a set of menu buttons instead. It's so much faster to navigate, you won't want to go back to "press 5 for a customer service representative".

Hopefully this will put "fake AI" to rest, says Syd in a Medium post describing the changes.

Just be clear about what you're trying to do, be aware of what the limitations are (specifically, typing on mobile is hard) and stop claiming that it's AI. Facebook is now careful to call the agents "bots", not "chatbots", in addition to offering a less annoying customer interface than IVR, aka voice menus.

"Disabling free text completely might sound counter-intuitive, especially for a messaging platform, but it still lets you do outbound messaging, and control the conversation. You can still present the user with buttons. Not surprisingly, people prefer to press the buttons rather than type."

So perhaps the future isn't virtual robot buddies replacing humans, but simple interactive mobile UIs replacing another technology: the 1990s-spawned nightmare of voice menus.

Original post:

Confirmed: Facebook shifts away from AI and like a miracle, the bots start working - The Register

Even successful cities aren’t ready for AI’s disruptions – Quartz

Which major city is well prepared for the challenges that will be brought about by artificial intelligence? According to one recent report, the answer is simple: none.

AI, which refers to programming that can mimic human behaviors such as speaking, learning and carrying out tasks, is advancing fast across the world and being used in applications ranging from facial recognition to autonomous driving. However, along with the many possibilities of AI, risks from its capacity to replace human workers, or from unethical uses of the technology, have also become more obvious.

The Global Cities AI Disruption Index, published by research outfit Oliver Wyman Forum today (Sept. 26), aims to look at how 105 major cities are preparing for the AI era. The report is based on interviews with stakeholders such as government officials and academics, a survey of 9,000 residents in 21 of those cities, and an analysis of public social and economic data on the cities examined. Overall, the report measures readiness using four broad parameters: a city's understanding of AI-related risks and its corresponding plans, its ability to carry out those plans, the asset base it can rely on, and the direction the city is taking.

Singapore, Stockholm, London and Shenzhen rank first in each of those four categories, respectively. But not a single city ranks in the top 20 among all four categories, and none appears in the top 10 across more than two categories. "This means no city is close to being ready for the challenges ahead," the report said. "Sure, some are better prepared than others, but all cities will need to continue to make substantial improvements to fully prepare for the impacts of next-generation technology."

Gauging AI readiness is far from an exact science at the moment, though various efforts in recent years have tried to do it. For now, it looks like many of the qualities that might make a government or a city prepared to deal with AI are likely to be similar to those that put them high up on rankings of good places to do business. A three-year-old index from Oxford Insights that gauges government ability to capitalize on AI looks at parameters like the skill level of the workforce, for example, as well as more technical measures like the quality of the data the government has to work with.

But given controversies surrounding the use of AI, from how it can propagate existing biases to privacy and misinformation risks, governments' ability to deal with those issues, and their transparency in doing so, need to be a vital part of AI readiness.

China, for example, has been accused of using facial recognition to profile ethnic minorities, including Muslims, in its Xinjiang region. Meanwhile, FaceApp, an AI-powered face-ageing app developed by a Russian company, has also stirred privacy concerns among users.

The OW Forum research urged governments worldwide to get real about the risks posed by AI, saying they tend to downplay or ignore such disruptions while focusing only on opportunities such as smart city projects.

Follow this link:

Even successful cities aren't ready for AI's disruptions - Quartz

Poker-playing AI beats pros using ‘intuition,’ study finds – ABC News

Computer researchers are betting they can take on the house after designing a new artificial intelligence program that has beat professional poker players.

Researchers from the University of Alberta, Czech Technical University and Charles University in Prague developed the "DeepStack" program as a way to build artificial intelligence capable of playing a complex kind of poker. Creating an AI program that can win against a human player in a no-limit poker game has long been a goal of researchers due to the complexity of the game.

Michael Bowling, a professor in the Department of Computing Science at the University of Alberta, explained that computers have been able to win at "perfect" games such as chess or Go, in which all the information is available to both players, but that "imperfect" games like poker have been much harder to program for.

"This game [poker] embodies situations where you find yourself not having all the information you need to make a decision," said Bowling. "In real-life situations, it's a rare moment that we have all the information."

There have been other poker-playing AI programs, but they were playing a poker game that included a pot limit, meaning there were limitations on the amount of money that could be bet during different stages. As a result, there was less information and risk analysis for the program to compute. In those programs, Bowling explained, the program could look at all potential paths and probabilities for playing different hands prior to playing the game and then simply plug in the information from each hand to win the game.

In this new version of two-person Texas hold'em poker, there were no limits on betting, vastly expanding the amount of information that would need to be processed. Bowling explained that without that limitation there were more potential outcomes "than there are atoms in the universe."

"DeepStack gets around that by not pre-computing everything in advance, it will process information at each time," said Bowling.

The programmers created an "intuition" system for the AI that focuses on each hand in real time and then computes the probability of winning the next few hands, rather than the entire game.

"It only looks a few answers ahead," Bowling explained.

In order for the program to respond in real time, Bowling and his co-researchers created special machinery designed to "learn" complex information. Called a deep neural network, the technology allows the AI to "learn" by looking at past poker games and their outcomes. By simulating poker games over and over, the AI is able to better estimate how to play a hand and figure out a hand's "value."

Bowling explained the program could see via the simulations "how much money would I expect to win if I found myself in this situation."

"If it's positive, it's good for me; if it's negative, it's bad," Bowling said.

The "intuition" could then determine if a hand was more valuable by looking at past simulation results and then be able to better predict a winning move.

To test if their AI could win, the researchers worked with the International Federation of Poker to recruit players willing to play against DeepStack. In four weeks, they had 11 professional poker players each play 3,000 games against DeepStack. They found DeepStack won most of the time against all the players.

"We were ahead by quite a large margin," Bowling said. When they went back to look and see if the program might have just been lucky, they found the program was likely ahead due to skill not luck when pitted against 10 of the 11 participants.

The researchers hope the program will be able to be used for other complicated situations such as "defending strategic resources" or making difficult decisions in medical treatment recommendations.

"With many real-world problems involving information asymmetry, DeepStack also has implications for seeing powerful AI applied more in settings that do not fit the perfect information assumption," the authors said.

Read this article:

Poker-playing AI beats pros using 'intuition,' study finds - ABC News

Can AI and connected tech foster better disaster decision-making? – GCN.com


Florida communities frequently battered by hurricanes, flooding and tornadoes may soon have more tools for responding to and bouncing back from disasters.

Researchers at the University of Central Florida have launched a three-year interdisciplinary project that will examine how artificial intelligence and smart technologies can improve collective decision-making among emergency managers, local government agencies, businesses and nonprofits. The goal is to reduce community vulnerability and enhance resilience.

Although smart technologies -- like streetlights that monitor traffic flow or sensors that transmit real-time data about rising water levels -- provide emergency managers with situational awareness, the increasing amount of data is becoming unmanageable without some AI assistance.

One facet of the project will use AI for real-time analysis and reporting on the massive amount of incoming emergency management data -- including social media text and images -- so community leaders can be better prepared and craft informed responses.
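As a rough sketch of what AI-assisted triage of that incoming data could look like (an invented example; the UCF team's actual pipeline is not described in this article), a simple classifier over social media posts might be trained and applied like this:

```python
# Toy sketch of triaging incoming social media text during a disaster, for illustration only.
# Labels, example posts and model choice are invented; the UCF project's methods may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled set of example posts (in practice this would be thousands of annotated messages).
posts = [
    "Water is rising fast on Elm Street, we are trapped upstairs",
    "Power is out in my neighborhood but everyone is safe",
    "Beautiful sunset over the bay tonight",
    "Tree down blocking Route 50, cars cannot get through",
]
labels = ["rescue_needed", "infrastructure", "irrelevant", "infrastructure"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(posts, labels)

# New messages streaming in would be scored and routed to the right responders.
incoming = ["Family stuck on roof near the river, please send help"]
print(model.predict(incoming))  # e.g. ['rescue_needed']
```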

The study, funded by a $1.2 million grant from the National Science Foundation, will cover 78 towns and cities in eight counties in east central Florida, with the idea that improvements in community resilience could be generalized to other locations.

"The research design assessing resilience changes will help decision makers in governments, businesses and nonprofits obtain a deeper understanding of how AI-aided information technologies can advance collective decision making to reduce community vulnerability and enhance resilience," Yue Gurt Ge, an assistant professor in the university's School of Public Administration and principal investigator of the project, told UCF Today.

The researchers will also develop and launch the Community Resilience Data Depot, a platform that will allow community leaders and emergency personnel to share data more easily and support real-time collective decision-making.

"The project's proposed metrics to assess the extent and speed of achieving appropriate post-event functionality will help address a nationwide community capacity-building need to quantitatively evaluate resilience increases by public-private partnerships," Ge said.


See the rest here:

Can AI and connected tech foster better disaster decision-making? - GCN.com

Decades-old ASCII adventure NetHack may hint at the future of AI – TechCrunch

Machine learning models have already mastered chess, Go, Atari games and more, but in order for the field to ascend to the next level, researchers at Facebook intend for AI to take on a different kind of game: the notoriously difficult and infinitely complex NetHack.

"We wanted to construct what we think is the most accessible grand challenge with this game. It won't solve AI, but it will unlock pathways towards better AI," said Facebook AI Research's Edward Grefenstette. "Games are a good domain to find our assumptions about what makes machines intelligent and break them."

You may not be familiar with NetHack, but it's one of the most influential games of all time. You're an adventurer in a fantasy world, delving through the increasingly dangerous depths of a dungeon that's different every time. You must battle monsters, navigate traps and other hazards, and meanwhile stay on good terms with your god. It's the first roguelike (after Rogue, its immediate and much simpler predecessor) and arguably still the best, and almost certainly the hardest.

(It's free, by the way, and you can download and play it on nearly any platform.)

Its simple ASCII graphics, using a g for a goblin, an @ for the player, lines and dots for the level's architecture, and so on, belie its incredible complexity. That's because NetHack, which made its debut in 1987, has been under active development ever since, with its shifting team of developers expanding its roster of objects and creatures, rules, and the countless, countless interactions between them all.

And this is part of what makes NetHack such a difficult and interesting challenge for AI: It's so open-ended. Not only is the world different every time, but every object and creature can interact in new ways, most of them hand-coded over decades to cover every possible player choice.

NetHack with a tile-based graphics update: all the information is still available via text.

"Atari, Dota 2, StarCraft 2: the solutions we've had to make progress there are very interesting. NetHack just presents different challenges. You have to rely on human knowledge to play the game as a human," said Grefenstette.

In these other games, there's a more or less obvious strategy to winning. Of course it's more complex in a game like Dota 2 than in an Atari 800 game, but the idea is the same: there are pieces the player controls, a game board or environment, and win conditions to pursue. That's kind of the case in NetHack, but it's weirder than that. For one thing, the game is different every time, and not just in the details.

"New dungeon, new world, new monsters and items, you don't have a save point. If you make a mistake and die, you don't get a second shot. It's a bit like real life," said Grefenstette. "You have to learn from mistakes and come to new situations armed with that knowledge."

Drinking a corrosive potion is a bad idea, of course, but what about throwing it at a monster? Coating your weapon with it? Pouring it on the lock of a treasure chest? Diluting it with water? We have intuitive ideas about these actions, but a game-playing AI doesn't think the way we do.

The depth and complexity of the systems in NetHack are difficult to explain, but that diversity and difficulty make the game a perfect candidate for a competition, according to Grefenstette. "You have to rely on human knowledge to play the game," he said.

People have been designing bots to play NetHack for many years, relying not on neural networks but on decision trees as complex as the game itself. The team at Facebook Research hopes to engender a new approach by building a training environment that people can test machine learning-based game-playing algorithms on.

NetHack screens with labels showing what the AI is aware of.

The NetHack Learning Environment was actually put together last year, but the NetHack Challenge is only just now getting started. The NLE is basically a version of the game embedded in a dedicated computing environment that lets an AI interact with it through text commands (directions, and actions like "attack" or "quaff").

It's a tempting target for ambitious AI designers. While games like StarCraft 2 may enjoy a higher profile in some ways, NetHack is legendary, and the idea of building a model on completely different lines from those used to dominate other games is an interesting challenge.

It's also, as Grefenstette explained, a more accessible one than many in the past. If you wanted to build an AI for StarCraft 2, you needed a lot of computing power available to run visual recognition engines on the imagery from the game. But in this case the entire game is transmitted via text, making it extremely efficient to work with. It can be played thousands of times faster than any human could with even the most basic computing setup. That leaves the challenge wide open to individuals and groups who don't have access to the kind of high-power setups necessary to power other machine learning methods.
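For a sense of how lightweight that text interface is, the open-source NLE package exposes a Gym-style API. A random-agent loop along the following lines should be enough to start experimenting, though the exact environment names and observation format may differ between releases.

```python
# Minimal sketch of interacting with the NetHack Learning Environment, for illustration only.
# Assumes the open-source `nle` package and its Gym-style interface; environment names
# and the step/reset signatures may vary across versions.
import gym
import nle  # registers the NetHack environments with Gym

env = gym.make("NetHackScore-v0")
obs = env.reset()

done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()  # random agent; a real entry would use a learned policy
    obs, reward, done, info = env.step(action)
    total_reward += reward

print("episode finished with score:", total_reward)
env.close()
```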

"We wanted to create a research environment that had a lot of challenges for the AI community, but not restrict it to only large academic labs," he said.

For the next few months, NLE will be available for people to test on, and competitors can basically build their bot or AI by whatever means they choose. But when the competition itself starts in earnest on October 15, they'll be limited to interacting with the game in its controlled environment through standard commands: no special access, no inspecting RAM, etc.

The goal of the competition will be to complete the game, and the Facebook team will track how many times the agent ascends, as it's called in NetHack, in a set amount of time. "But we're assuming this is going to be zero for everyone," Grefenstette admitted. After all, this is one of the hardest games ever made, and even humans who have played it for years have trouble winning even once in a lifetime, let alone several times in a row. There will be other scoring metrics to judge winners in a number of categories.

The hope is that this challenge provides the seed of a new approach to AI, one that more fundamentally resembles actual human thinking. Shortcuts, trial and error, score-hacking, and zerging won't work here; the agent needs to learn systems of logic and apply them flexibly and intelligently, or die horribly at the hands of an enraged centaur or owlbear.

You can check out the rules and other specifics of the NetHack Challenge here. Results will be announced at the NeurIPS conference later this year.

See original here:

Decades-old ASCII adventure NetHack may hint at the future of AI - TechCrunch

Science-fiction master Ted Chiang explores the rights and wrongs of AI – GeekWire

The story of Ted Chiang's life includes stints as a technical writer in the Seattle area and worldwide acclaim as a science-fiction writer. (Alan Berner Photo via Knopf Doubleday Publicity)

What rights does a robot have? If our machines become intelligent in the science-fiction way, that's likely to become a complicated question, and the humans who nurture those robots just might take their side.

Ted Chiang, a science-fiction author of growing renown with long-lasting connections to Seattle's tech community, doesn't back away from such questions. They spark the thought experiments that generate award-winning novellas like "The Lifecycle of Software Objects," and inspire Hollywood movies like "Arrival."

Chiang's soulful short stories have earned him kudos from the likes of The New Yorker, which has called him one of the most influential science-fiction writers of his generation. During this year's pandemic-plagued summer, he joined the Museum of Pop Culture's Science Fiction and Fantasy Hall of Fame. And this week, he's receiving an award from the Arthur C. Clarke Foundation for employing imagination in service to society.

Can science fiction have an impact in the real world, even at times when the world seems as if it's in the midst of a slow-moving disaster movie?

"Absolutely," Chiang says.

"Art is one way to make sense of a world which, on its own, does not make sense," he says in the latest episode of the Fiction Science podcast, which focuses on the intersection between science and fiction. "Art can impose a kind of order onto things. It doesn't offer a cure-all, because I don't think there's going to be any easy cure-all, but I think art helps us get by in these stressful times."

COVID-19 provides one illustration. Chiang would argue that our response to the coronavirus pandemic has been problematic in part because it doesnt match what weve seen in sci-fi movies.

"The greatest conflict that we see generated is from people who don't believe in it vs. everyone else," he said. "That might be the product of the fact that it is not as severe. If it looked like various movie pandemics, it'd probably be hard for anyone to deny that it was happening."

This pandemic may well spark a new kind of sci-fi theme.

"It's worth thinking about, that traditional depictions of pandemics don't spend much time on people coming together and trying to support each other," Chiang said. "That is not typically a theme in stories about disaster or enormous crisis. I guess the narrative is usually, 'It's the end of civilization.' And people have not turned on each other in that way."

Artificial intelligence is another field where science fiction often gives people the wrong idea. "When we talk about AI in science fiction, we're talking about something very different than what we mean when we say AI in the context of current technology," he said.

Chiang isn't speaking here merely as an author of short stories, but as someone who joined the Seattle tech community three decades ago to work at Microsoft as a technical writer. During his first days in Seattle, his participation in 1989's Clarion West Science Fiction and Fantasy Writers Workshop helped launch his second career as a fiction writer.

In our interview, Chiang didn't want to say much about the technical-writing side of his career, but his expertise showed through in our discussion about real vs. sci-fi AI. "When people talk about AI in the real world, they're talking about a certain type of software that is usually like a superpowered version of applied statistics," he said.

That's a far cry from the software-enhanced supervillains of movies like "Terminator" or "The Matrix," or the somewhat more sympathetic characters in shows like "Westworld" and "Humans."

In Chiang's view, most depictions of sci-fi AI fall short even by science-fiction standards. "A lot of stories imagine something which is a product, like a robot that comes in a box, and you flip it on, and suddenly you have a butler, a perfectly competent and loyal and obedient butler," he noted. "That, I think, jumps over all these steps, because butlers don't just happen."

In "The Lifecycle of Software Objects," Chiang imagines a world in which it takes just as long to raise a robot as it does to raise a child. That thought experiment sparks all kinds of interesting, all-too-human questions: What if the people who raise such robots want them to be something more than butlers? Would they stand by and let their sci-fi robot progeny be treated like slaves, even like sex slaves?

"Maybe they want that robot, or conscious software, to have some kind of autonomy," Chiang said. "To have a good life."

Chiang's latest collection of short stories, "Exhalation," extends those kinds of thought experiments to science-fiction standbys ranging from free will to the search for extraterrestrial intelligence.

Both those subjects come into play in what's certainly Chiang's best-known novella, "Story of Your Life," which was first published in 1998 and adapted to produce the screenplay for "Arrival" in 2016. Like so many of Chiang's other stories, "Story of Your Life" takes an oft-used science-fiction trope, in this case first contact with intelligent aliens, and adds an unexpected but insightful and heart-rending twist.

Chiang said that the success of the novella and the movie hasn't led to particularly dramatic changes in the story of his own life, but that it has broadened the audience for the kinds of stories he tells.

"My work has been read by people who would not describe themselves as science-fiction readers, by people who don't usually read a lot of science fiction, and that's been amazing. That's been really gratifying," he said. "It's not something that I ever really expected."

What's more, Chiang's work has been popping up in places where you wouldn't expect to see science fiction, such as The New York Times, where he weighs in on the implications of human gene editing; or Buzzfeed News, where he reflects on the downside of Silicon Valley's world view; or the journal Nature, where you can find Chiang's thought experiments on free will and transhumanism; or Nautilus, where Chiang offers an unorthodox perspective on SETI.

During our podcast chat, Chiang indulged in yet another thought experiment: Could AI replace science-fiction writers?

Chiang's answer? It depends.

"If we could get software-generated novels that were coherent, but not necessarily particularly good, I think there would be a market for them," he said.

But Chiang doesn't think that would doom human authors.

"For an AI to generate a novel that you think of as really good, that you feel like, 'Oh, wow, this novel was both gripping and caused me to think about my life in a new way,' that, I think, is going to be very, very hard," he said.

Ted Chiang only makes it look easy.

Chiang and other Arthur C. Clarke Foundation awardees will take part in the 2020 Clarke Conversation on Imagination at 9 a.m. PT Nov. 12. Register via the foundation's website and Eventbrite to get in on the interactive video event.

This is a version of an article first published on Cosmic Log. Check out the Cosmic Log posting for Ted Chiang's reading recommendations, which are this month's selections for the Cosmic Log Used Book Club.

My co-host for the Fiction Science podcast is Dominica Phetteplace, an award-winning writer who is a graduate of the Clarion West Writers Workshop and currently lives in Berkeley, Calif. She's among the science-fiction authors featured in "The Best Science Fiction of the Year." To learn more about Phetteplace, check out her website, DominicaPhetteplace.com.

Here is the original post:

Science-fiction master Ted Chiang explores the rights and wrongs of AI - GeekWire