China Will Outpace US Artificial Intelligence Capabilities, But Will It Win The Race? Not If We Care About Freedom – Forbes

Bottom view of the famous Statue of Liberty, icon of freedom and of the United States. Red and purple duotone effect

We've all heard that China is preparing itself to outpace not only the United States but every global economy in Artificial Intelligence (AI). China is graduating two to three times as many engineers each year as any other nation, the government is investing to accelerate AI initiatives, and, according to Kai-Fu Lee in a recent Frontline documentary, China is now producing more than 10x more data than the United States. And if data is the new oil, then China, according to Lee, has become the new Saudi Arabia.

It's clear that China is taking the steps necessary to lead the world in AI. The question that needs to be asked is: will it win the race?

According to the Frontline documentary, China's goal is to catch up to the United States by 2025 and lead the world by 2030. If things stay the way they are, I do believe China will outpace the United States in technical capabilities, available talent, and data (if they're not already). However, I also believe that eventually, the Chinese system will either implode, not be adopted outside of China, or both. Why? Let me explain.

A recent report from Freedom House shows that freedom on the internet is declining, and has been for quite some time. Study after study shows that when we know we're being surveilled, our behaviors change. Paranoia creeps in. The comfort of being ourselves is lost. And, ultimately, society is corralled into a state of learned helplessness where, like dogs with shock collars, our invisible limits are not clearly understood or defined but learned over time through pain and fear. This has been shown to lead to systemic mental illness ranging from mass depression to symptoms of PTSD and beyond.

Not so ironically, we're seeing a realization of these impacts within society, especially among the tech-literate, younger generations. A recent study from Axios found that those aged 18-34 are least likely to believe "It's appropriate for an employer to routinely monitor employees using technology" and most likely to "change their behavior if they know their employer was monitoring them." A deeper impact of this type of surveillance, what Edward Snowden has deemed our Permanent Record, can be read about in a recent New York Times article about cancel culture among teens. People, especially the younger generations, don't want to be surveilled or to have their past mistakes held against them in perpetuity. And if they're forced into it, they'll find ways around it.

In the Freedom House report, China is listed as one of the worst nations in the world for this type of behavior. This is also addressed in the Frontline documentary, which notes that China's powerful new social credit score is changing behavior by operationalizing an Orwellian future. Some places in China have gone as far as requiring facial recognition to get access to toilet paper in public restrooms. Is this the life we want to live?

If the system continues this way, people will change their behavior. They will game the system. They will put their devices down and do things offline when they don't want to be tracked. They will enter false information to spoof the algorithms when they're forced to give up information. They will wear masks or create new technologies to hide from facial recognition systems. In short, they will do everything possible to not be surveilled. And in doing so, they will provide mass amounts of low-quality, if not entirely false, data, poisoning the overarching system.

If China continues on its current path of forced compliance through mass surveillance, the country will poison its own data pool. This will lead to a brittle AI system that only works for compliant Chinese citizens. Over time, the system will be crippled.

Great AI requires a lot of data, yes, but the best AI will be trained on the diversity of life. Its data will include dissenting opinions. It will learn from, adapt to, and unconditionally support outliers. It will shape itself to, and shepherd, the communities it serves, not the other way around. And if it does not, those of us living in a democratic world will push back. No sane, democratic society will adopt such a system unless forced into it through predatory economic activity, war, or both. We are already seeing an uprising against surveillance systems in Europe and the United States, where privacy concerns are becoming mainstream news and policies are now being put into place to protect the people, whether tech companies like it or not.

If our democratic societies decide to go down the same path as China because they're afraid we won't keep up with societies that don't regulate, then we're all bound to lose. A race to the bottom, a race to degrade privacy and abuse humanity in favor of profit and strategic dominance, is not one we will win. Nor is it a race we should care to win. Our work must remain focused on human rights and democratic processes if we hope to win. It cannot come down to an assault on humanity in the form of pure logic and numbers. If it does, we, as well as our democratic societies, will lose.

So what's the moral of this story? China will outpace the United States in Artificial Intelligence capabilities. But will it win the race? Not if we care about freedom.


Will the next Mozart or Picasso come from artificial intelligence? No, but here’s what might happen instead – Ladders

As artificial intelligence has slowly become a mainstream term, a question has been rumbling in the art community:

Will AI replace creativity?

It's a fantastic question, to tell you the truth, and it certainly shows what sorts of problems we're wrestling with as a society today.

First, it's important to consider what our definition of art is in the first place. A very broad definition within the art world would be: anything created by a human to please someone else. That's what makes something art. In this sense, photography is an art. Videography is an art. Painting, music, drawing, sculpture, all of these things are done to evoke an emotion, to please someone else: created by one human, and enjoyed by another.

Stage one: AI became a trendy marketing phrase used by everyone from growth hackers to technologists, with the intention of getting more eyeballs on their work, faster. So the term AI actually made its way into the digital art world faster than the technology itself, since people would use the term to make what they were building seem more cutting-edge than anything else in the space, regardless of whether or not it was actually utilizing true artificial intelligence.

Stage two: Companies saw the potential artificial intelligence had in being able to provide people (in a wide range of industries) with tools to solve critical problems. For example, we use data science at Skylum to help photographers and digital content creators be more efficient when performing complex operations, like retouching photos, replacing backgrounds, etc. We use AI to make the process of creating the art more efficient, automating the boring or tedious tasks so that artists can focus more time and energy on the result instead of the process.

There's a great article in Scientific American titled "Is Art Created by AI Really Art?" And the answer is both yes and no.

It's not that artificial intelligence will fundamentally replace human artists. It's that AI will lower the barrier to entry in terms of skill, and give the world access to more creative minds because of what can be achieved easily using digital tools. Art will still require a human vision; however, the way that vision is executed will become easier, more convenient, less taxing, and so on.

For example, if you are only spending one day in Paris, and you want to capture a particular photograph of the Eiffel Tower, that day might not be the best day for your photo. The weather might be terrible, there might be thousands of people around, etc. Well, you can use artificial intelligence to not only remove people from the photograph but even replace the Eiffel Tower with a higher-resolution picture of the tower (from a separate data set), or change the sky, the weather, etc.

The vision is yours, but suddenly you are not limited by the same constraints to execute your vision.

Digital art tools are built to make the process as easy as possible for the artist. If you consider the history of photography as an art, back in the film days, far more time was spent developing film than actually taking pictures. This is essentially the injustice technologists are looking to solve. The belief in the digital art community is that more time shouldn't be spent doing all the boring things required for you to do what you love. Your time should be spent doing what you love and executing your vision, exclusively.

Taking this a step further, a photographer today usually spends 20-30% of their time giving a photo the look and feel they want. But they spend 70% of their time selecting an object in Photoshop or whichever program they're using, cutting things out, creating a mask, adding new layers, etc. In this sense, the artist is more focused on the process of creating their vision, which is what creates a hurdle for other artists and potentially very creative individuals to even get into digital art creation. They have to learn these processes and these skills in order to participate, when in actuality, they may be highly capable of delivering a truly remarkable result, if only they weren't limited, either by their skills, their environment, or some other challenge.

So, artificial intelligence isn't here to replace the everyday artist. If anything, the goal of technology is to allow more people to express their own individual definition of art.

There may be more Mozarts and Picassos in our society than we realize.

This article first appeared on Minutes Magazine.


Maximize The Promise And Minimize The Perils Of Artificial Intelligence (AI) – Forbes

How businesses can use artificial intelligence (AI) to their advantage, perhaps even in a transformative way, without turning the pursuit of AI advantage into a quixotic quest

Frankly, I was hoping an artificial intelligence (AI) algorithm would write this column for me, because who knows more about AI than the mysterious little gremlins that make machine learning possible? That, alas, didn't happen; so I'm on my own.

Like most people in business, I don't need any convincing that artificial intelligence (for most companies in many areas of their operations) will become a game-changer.

Still, it remains a fluid, if not amorphous, concept in many respects. What, exactly, can we expect AI to do for us that we're not already doing, or how will it improve what we're doing by doing it better, faster, cheaper, with greater insight or fewer errors?

As an important article ("Winning With AI") in the MIT Sloan Management Review put it back in October, "AI can be revolutionary, but executives must act strategically. [And] acting strategically means deciding what not to do." That's not as easy as it sounds.

The problem I have with most discussions of artificial intelligence is that they assume the reader or listener already understands the promises and perils of AI. But based on my conversations with a lot of very intelligent people, I don't think that's always the case.

As Amir Husain, founder and CEO of the Austin, Texas-based machine learning company SparkCognition, explained to Business News Daily last spring, "Artificial intelligence is kind of the second coming of software. It's a form of software that makes decisions on its own, that's able to act even in situations not foreseen by the programmers."

AI is so ubiquitous we hardly even think about it. As the Business News Daily article pointed out: "Most of us interact with artificial intelligence in some form or another on a daily basis. From the mundane to the breathtaking, artificial intelligence is already disrupting virtually every business process in every industry."

Examples abound: online searches, spam filters, smart personal assistants such as Alexa, Echo, Google Home and Siri, the programs that protect our information when we buy (or sell) something online, voice-to-text programs, smart-auto technologies, programs that automatically sound alarms or shut down operating systems when problems are identified, security alarm systems, even those annoying pop-up ads that follow us throughout the day. To one degree or another, they're all based on or impacted by AI.

In other words, most of us are far more familiar with AI, intimately so, than we give ourselves credit for.

The business question is (as the Sloan article correctly put it): How can executives exploit the opportunities, manage the risks, and minimize the difficulties associated with AI? Put another way, how can they use it to their advantage, perhaps even in a transformative way, without turning the pursuit of AI advantage into a quixotic quest, keeping in mind that acting strategically involves deciding what not to do as well as pushing ahead and taking chances in some areas?

Some suggestions from the MIT Sloan Management Review article:

First: Don't treat AI initiatives as everyday technology gambits. They're more important than that. Run them from the C-suite and closely coordinate them with other digital transformation efforts.

Second: Be sure to coordinate AI with the company's overall business strategy. One of the surest ways to come up short, as most AI initiatives do (from 40% to 70%, according to the Sloan article), is to focus AI narrowly on one set of priorities while the company is equally or more concerned with others. While AI can help companies reduce costs, for example, by identifying waste and inefficiencies, growing the business may be a higher priority.

The Hartford, Conn.-based insurance company Aetna (now a subsidiary of CVS), for example, has been using AI to prevent fraud and uncover overpayments, typical insurance company concerns. It's also been using AI to design products and increase customers and customer engagement. In one Medicare-related Aetna product, the article notes, designers used AI to customize benefits, leading to 180% growth in new member acquisition. Longer term, Aetna's head of analytics, VP Ali Keshavarz, told the authors that Aetna's goal is to use AI to become the first place customers go when they are thinking about their health.

Third: This may be obvious to the geeks among us, but perhaps less so to the more technology-challenged: Be sure to align the production of AI with the consumption of AI.

Fourth: Invest in AI talent, data and process change in addition to (and often more so than) AI technology. Recognize that every successful AI undertaking is the product of a great group of people. While some of this talent should be homegrown, you'll also have to hire from the outside: bring people in to develop and enhance your internal capabilities. That's a fact of modern business life.

As with everything else in business, all companies are different. Their needs are different. Their available resources (financial, talent, patience) are different. And their goals and expectations should be different.

It's important to take the time to understand how to maximize the promise and minimize the pitfalls of AI. If you do, you're more likely to succeed.


Longer Looks: The Psychology Of Voting; Overexcited Neurons And Artificial Intelligence; And More – Kaiser Health News

Each week, KHN finds interesting reads from around the Web.

FiveThirtyEight: Does Knowing Whom Others Might Vote For Change Whom You'll Vote For? When a presidential race that was supposed to be won by a mainstream moderate instead ends up being captured by a far-right gadfly, you better believe pollsters are gonna get some scrutiny. But when this situation took place in the first round of French elections in 2002, bumping the incumbent prime minister from the final round, it wasn't just the failure of prediction that led to a polling protest. Instead, people were concerned that opinion polling, itself, had caused the outcome. Twenty-four years earlier, France had muzzled opinion polling, banning the publication of poll results for a week before any election out of fear that voters were following the polls, rather than the other way around. (Koerth, 12/5)

Wired: How Overexcited Neurons Might Affect How You Age. A thousand seemingly insignificant things change as an organism ages. Beyond the obvious signs like graying hair and memory problems are myriad shifts both subtler and more consequential: Metabolic processes run less smoothly; neurons respond less swiftly; the replication of DNA grows faultier. But while bodies may seem to just gradually wear out, many researchers believe instead that aging is controlled at the cellular and biochemical level. They find evidence for this in the throng of biological mechanisms that are linked to aging but also conserved across species as distantly related as roundworms and humans. Whole subfields of research have grown up around biologists' attempts to understand the relationships among the core genes involved in aging, which seem to connect highly disparate biological functions, like metabolism and perception. (Greenwood, 11/30)

Undark: Unpacking The Black Box In Artificial Intelligence For Medicine. In clinics around the world, a type of artificial intelligence called deep learning is starting to supplement or replace humans in common tasks such as analyzing medical images. Already, at Massachusetts General Hospital in Boston, "every one of the 50,000 screening mammograms we do every year is processed through our deep learning model, and that information is provided to the radiologist," says Constance Lehman, chief of the hospital's breast imaging division. In deep learning, a subset of a type of artificial intelligence called machine learning, computer models essentially teach themselves to make predictions from large sets of data. The raw power of the technology has improved dramatically in recent years, and it's now used in everything from medical diagnostics to online shopping to autonomous vehicles. (Bender, 12/4)

The New York Times: The Champion Who Picked A Date To Die. Champagne flutes were hastily unpacked from boxes, filled to their brims and passed around the room. Dozens of people stood around inside Marieke Vervoort's cramped apartment, unsure of what to say or do. This was a celebration, Vervoort had assured her guests. But it did not feel like one. Eleven years earlier, Vervoort had obtained the paperwork required to undergo doctor-assisted euthanasia. Since her teenage years she had been battling a degenerative muscle disease that stole away the use of her legs, stripped her of her independence, and caused her agonizing, unrelenting pain. The paperwork had returned some sense of control. Under Belgian law, she was free to end her life anytime she chose. (Addario, 12/5)

The Atlantic: Your Morning Routine Doesn't Have To Be Perfect. My mornings are the messiest part of my day. I do not rise and shine. Instead, I hit snooze on the alarm and throw the covers over my head. As I hear the early bus shuffle through my stop outside my window, my mind fills with thoughts from the night before, with to-do lists and deadlines. The alarm goes off again, and I repeat the snooze cycle twice more. By the time I roll out of bed, I'm a tangle of anxiety. (Koren, 12/2)


52 ideas that changed the world: 26. Artificial intelligence – The Week UK

In this series, The Week looks at the ideas and innovations that permanently changed the way we see the world. This week, the spotlight is on artificial intelligence:

Artificial intelligence (AI), sometimes referred to as machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence of humans.

AI is the ability of a computer program or a machine to think and learn, so that it can work on its own without being encoded with commands. The term was first coined by American computer scientist John McCarthy in 1955.

"Human intelligence is the combination of many diverse abilities," says Encyclopaedia Britannica. AI, it says, has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception and using language.

AI is currently used for understanding human speech, competing in games such as chess and Go, self-driving cars and interpreting complex data.

Some people are wary of the rise of artificial intelligence, with the New Yorker highlighting that a number of scientists and engineers fear that, once we build an artificial intelligence smarter than we are, a form of AI known as artificial general intelligence (AGI), doomsday may follow.

In The Age of Spiritual Machines, American inventor and futurist Ray Kurzweil writes that as AI develops and machines have the capacity to learn more quickly, they will appear to have their own free will, while Stephen Hawking declared that AGI will be "either the best, or the worst thing, ever to happen to humanity."

The Tin Man from The Wizard of Oz also represented people's fascination with robotic intelligence, with the humanoid robot that impersonated Maria in Metropolis also displaying characteristics of AI.

But in the real world, the British computer scientist Alan Turing published a paper in 1950 in which he argued that a thinking machine was actually possible.

The first actual AI research began following a conference at Dartmouth College, USA in 1956.

In 1955, John McCarthy set about organising what would become the Dartmouth Summer Research Project on Artificial Intelligence. The conference took the form of a six- to eight-week brainstorming session, with attendees including scientists and mathematicians with an interest in AI.

According to AI: The Tumultuous History of the Search for Artificial Intelligence by Canadian researcher Daniel Crevier, one attendee at the conference wrote: "Within a generation... the problem of creating artificial intelligence will substantially be solved."

It was at the conference that McCarthy was credited with first using the phrase artificial intelligence.

Following the conference, science website livescience.com reports that the US Department of Defense became interested in AI, but after several reports criticising progress in AI, government funding and interest in the field dropped off. The period from 1974 to 1980, it says, became known as the AI winter.

Interest in AI was revived in the 1980s, when the British government started funding it again in part to compete with efforts by the Japanese. From 1982-1990, the Japanese government invested $400m with the goal of revolutionising computer processing and improving artificial intelligence, according to Harvard University research.

Research into the field started to increase and by the 1990s many of the landmark goals of artificial intelligence had been achieved. In 1997, IBM's Deep Blue became the first computer to beat a chess champion when it defeated Russian grandmaster Garry Kasparov.

This was surpassed in 2011, when IBM's question-answering system Watson won the US quiz show Jeopardy! by beating the show's reigning champions Brad Rutter and Ken Jennings.

In 2012, a talking computer chatbot called Eugene Goostman tricked judges into believing that it was human in a Turing Test. The test was devised by Turing in the 1950s. He thought that if a human could not tell the difference between another human and a computer, that computer must be as intelligent as a human.

Forbes highlights that AI is currently being deployed in services such as mobile phones (for example, Apple's Siri app), Amazon's Alexa, self-driving Tesla cars and Netflix's film recommendation service.

The Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab has developed an AI model that can work out the exact amount of chemotherapy a cancer patient needs to shrink a brain tumour.

AI is already changing the world and looks set to define the future too.

According to Harvard researchers, we can expect to see AI-powered driverless cars on the road within the next 20 years, while machine calling is already a day-to-day reality.

Looking beyond driverless cars, the ultimate ambition is general intelligence, that is, a machine that surpasses human cognitive abilities in all tasks. If this is developed, a future of humanoid robots is not impossible to envision.

As the likes of Stephen Hawking have warned, though, some fear the rise of an AI-dominated future.

Tech entrepreneur Elon Musk has warned that AI could become "an immortal dictator from which we would never escape," signing a letter alongside Hawking and a number of AI experts calling for research into the potential pitfalls and societal impacts of widespread AI use.


Artificial intelligence: How to measure the I in AI – TechTalks

Image credit: Depositphotos

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Last week, Lee Se-dol, the South Korean Go champion who lost in a historic matchup against DeepMind's artificial intelligence algorithm AlphaGo in 2016, declared his retirement from professional play.

"With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one through frantic efforts," Lee told the Yonhap news agency. "Even if I become the number one, there is an entity that cannot be defeated."

Predictably, Se-dol's comments quickly made the rounds across prominent tech publications, some of them using sensational headlines with AI dominance themes.

Since the dawn of AI, games have been one of the main benchmarks to evaluate the efficiency of algorithms. And thanks to advances in deep learning and reinforcement learning, AI researchers are creating programs that can master very complicated games and beat the most seasoned players across the world. Uninformed analysts have been picking up on these successes to suggest that AI is becoming smarter than humans.

But at the same time, contemporary AI fails miserably at some of the most basic tasks that every human can perform.

This raises the question: does mastering a game prove anything? And if not, how can you measure the level of intelligence of an AI system?

Take the following example. In the picture below, you're presented with three problems and their solution. There's also a fourth task that hasn't been solved. Can you guess the solution?

You're probably going to think that it's very easy. You'll also be able to solve different variations of the same problem with multiple walls, and multiple lines, and lines of different colors, just by seeing these three examples. But currently, there's no AI system, including the ones being developed at the most prestigious research labs, that can learn to solve such a problem with so few examples.

The above example is from "The Measure of Intelligence," a paper by François Chollet, the creator of the Keras deep learning library. Chollet published this paper a few weeks before Lee Se-dol declared his retirement. In it, he provided many important guidelines on understanding and measuring intelligence.

Ironically, Chollet's paper did not receive a fraction of the attention it deserves. Unfortunately, the media is more interested in covering exciting AI news that gets more clicks. The 62-page paper contains a lot of invaluable information and is a must-read for anyone who wants to understand the state of AI beyond the hype and sensation.

But I will do my best to summarize the key recommendations Chollet makes on measuring AI systems and comparing their performance to that of human intelligence.

"The contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games," Chollet writes, adding that solely measuring skill at any given task falls short of measuring intelligence.

In fact, the obsession with optimizing AI algorithms for specific tasks has entrenched the community in narrow AI. As a result, work in AI has drifted away from the original vision of developing thinking machines that possess intelligence comparable to that of humans.

"Although we are able to engineer systems that perform extremely well on specific tasks, they have still stark limitations, being brittle, data-hungry, unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators, and unable to repurpose themselves to deal with novel tasks without significant involvement from human researchers," Chollet notes in the paper.

Chollet's observations are in line with those made by other scientists on the limitations and challenges of deep learning systems. These limitations manifest themselves in many ways:

Here's an example: OpenAI's Dota-playing neural networks needed 45,000 years' worth of gameplay to reach a professional level. The AI is also limited in the number of characters it can play, and the slightest change to the game rules will result in a sudden drop in its performance.

The same can be seen in other fields, such as self-driving cars. Despite millions of hours of road experience, the AI algorithms that power autonomous vehicles can make stupid mistakes, such as crashing into lane dividers or parked firetrucks.

One of the key challenges that the AI community has struggled with is defining intelligence. Scientists have debated for decades over a clear definition that allows us to evaluate AI systems and determine what is intelligent or not.

Chollet borrows the definition by DeepMind cofounder Shane Legg and AI scientist Marcus Hutter: "Intelligence measures an agent's ability to achieve goals in a wide range of environments."
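For readers who want the formal version, Legg and Hutter make this informal definition precise as a weighted sum of the value an agent earns across all computable environments. The following is a sketch of their universal intelligence measure from memory, so the notation may differ slightly from the original paper:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here, pi is the agent, E the set of environments, K(mu) the Kolmogorov complexity of environment mu (so simpler environments carry more weight), and V is the expected reward the agent achieves in that environment. The "wide range of environments" in the quote is exactly the sum over E.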

Key here is "achieve goals" and "wide range of environments." Most current AI systems are pretty good at the first part, which is to achieve very specific goals, but bad at doing so in a wide range of environments. For instance, an AI system that can detect and classify objects in images will not be able to perform some other related task, such as drawing images of objects.

Chollet then examines the two dominant approaches in creating intelligence systems: symbolic AI and machine learning.

Early generations of AI research focused on symbolic AI, which involves creating an explicit representation of knowledge and behavior in computer programs. This approach requires human engineers to meticulously write the rules that define the behavior of an AI agent.

"It was then widely accepted within the AI community that the problem of intelligence would be solved if only we could encode human skills into formal rules and encode human knowledge into explicit databases," Chollet observes.

But rather than being intelligent by themselves, these symbolic AI systems manifest the intelligence of their creators in creating complicated programs that can solve specific tasks.

The second approach, machine learning systems, is based on providing the AI model with data from the problem space and letting it develop its own behavior. The most successful machine learning structure so far is artificial neural networks, which are complex mathematical functions that can create complex mappings between inputs and outputs.

For instance, instead of manually coding the rules for detecting cancer in x-ray slides, you feed a neural network with many slides annotated with their outcomes, a process called training. The AI examines the data and develops a mathematical model that represents the common traits of cancer patterns. It can then process new slides and output how likely it is that the patient has cancer.
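As a concrete illustration of that training-and-inference loop, here is a minimal sketch in Keras (the library Chollet created). The random arrays stand in for annotated slides, and the tiny network is purely illustrative rather than a real diagnostic model:

```python
# Minimal sketch: train a small convolutional network to map "slides" to a
# cancer / no-cancer probability. Random noise stands in for real annotated data.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in dataset: 200 grayscale 64x64 "slides" with binary labels.
x_train = np.random.rand(200, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(200,))

model = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # outputs a probability between 0 and 1
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=32)  # this is the "training" step

# Inference: the trained model outputs how likely it is that a new slide shows cancer.
new_slide = np.random.rand(1, 64, 64, 1).astype("float32")
print("Predicted probability:", float(model.predict(new_slide)[0, 0]))
```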

Advances in neural networks and deep learning have enabled AI scientists to tackle many tasks that were previously very difficult or impossible with classic AI, such as natural language processing, computer vision and speech recognition.

Neural network-based models, also known as connectionist AI, are named after their biological counterparts. They are based on the idea that the mind is a blank slate (tabula rasa) that turns experience (data) into behavior. Therefore, the general trend in deep learning has become to solve problems by creating bigger neural networks and providing them with more training data to improve their accuracy.

Chollet rejects both approaches because none of them has been able to create generalized AI that is flexible and fluid like the human mind.

"We see the world through the lens of the tools we are most familiar with. Today, it is increasingly apparent that both of these views of the nature of human intelligence, either a collection of special-purpose programs or a general-purpose Tabula Rasa, are likely incorrect," he writes.

Truly intelligent systems should be able to develop higher-level skills that can span many tasks. For instance, an AI program that masters Quake 3 should be able to play other first-person shooter games at a decent level. Unfortunately, the best that current AI systems achieve is "local generalization," a limited maneuvering room within their own narrow domain.

In his paper, Chollet argues that the "generalization" or "generalization power" of any AI system is its ability to handle situations (or tasks) that differ from previously encountered situations.

Interestingly, this is a missing component of both symbolic and connectionist AI. The former requires engineers to explicitly define its behavioral boundary and the latter requires examples that outline its problem-solving domain.

Chollet also goes further and speaks of developer-aware generalization, which is the ability of an AI system to handle situations that neither the system nor the developer of the system have encountered before.

This is the kind of flexibility you would expect from a robo-butler that could perform various chores inside a home without having explicit instructions or training data on them. An example is Steve Wozniak's famous coffee test, in which a robot would enter a random house and make coffee without knowing in advance the layout of the home or the appliances it contains.

Elsewhere in the paper, Chollet makes it clear that AI systems that cheat their way toward their goal by leveraging priors (rules) and experience (data) are not intelligent. For instance, consider Stockfish, the best rule-based chess-playing program. Stockfish, an open-source project, is the result of contributions from thousands of developers who have created and fine-tuned tens of thousands of rules. A neural network-based example is AlphaZero, the multi-purpose AI that has conquered several board games by playing them millions of times against itself.

Both systems have been optimized to perform a specific task by making use of resources that are beyond the capacity of the human mind. The brightest human can't memorize tens of thousands of chess rules. Likewise, no human can play millions of chess games in a lifetime.

"Solving any given task with beyond-human level performance by leveraging either unlimited priors or unlimited data does not bring us any closer to broad AI or general AI, whether the task is chess, football, or any e-sport," Chollet notes.

This is why it's totally wrong to compare Deep Blue, AlphaZero, AlphaStar or any other game-playing AI with human intelligence.

The same goes for other AI models: Aristo, the program that can pass an eighth-grade science test, does not possess the same knowledge as a middle school student. It owes its supposed scientific abilities to the huge corpora of knowledge it was trained on, not its understanding of the world of science.

(Note: Some AI researchers, such as computer scientist Rich Sutton, believe that the true direction for artificial intelligence research should be methods that can scale with the availability of data and compute resources.)

In the paper, Chollet presents the Abstraction and Reasoning Corpus (ARC), a dataset intended to evaluate the efficiency of AI systems and compare their performance with that of human intelligence. ARC is a set of problem-solving tasks that are tailored for both AI and humans.

One of the key ideas behind ARC is to level the playing field between humans and AI. It is designed so that humans can't take advantage of their vast background knowledge of the world to outmaneuver the AI. For instance, it doesn't involve language-related problems, which AI systems have historically struggled with.

On the other hand, it's also designed in a way that prevents the AI (and its developers) from cheating their way to success. The system does not provide access to vast amounts of training data. As in the example shown at the beginning of this article, each concept is presented with a handful of examples.
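To make the setup concrete, here is a sketch of how an ARC-style task can be represented in code. The shape of the data (a few "train" demonstration pairs plus a "test" input) mirrors the JSON layout of the public ARC repository, but the grids and the toy rule below are invented for illustration:

```python
# An ARC-style task: a handful of demonstration pairs plus a test input, with grids
# encoded as lists of lists of small integers (each integer standing for a color).
from typing import Dict, List

Grid = List[List[int]]

task: Dict[str, List[Dict[str, Grid]]] = {
    "train": [  # the "handful of examples" that demonstrate the concept
        {"input": [[0, 0], [0, 1]], "output": [[0, 0], [0, 2]]},
        {"input": [[1, 0], [0, 0]], "output": [[2, 0], [0, 0]]},
        {"input": [[0, 1], [1, 0]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [  # the solver must infer the rule and produce the missing output grid
        {"input": [[1, 1], [0, 1]]},
    ],
}

def solve(train_pairs: List[Dict[str, Grid]], test_input: Grid) -> Grid:
    """A hand-written 'solver' for this made-up task: recolor every 1 to a 2.
    A real ARC solver would have to infer that rule from the train pairs alone."""
    return [[2 if cell == 1 else cell for cell in row] for row in test_input]

print(solve(task["train"], task["test"][0]["input"]))  # [[2, 2], [0, 2]]
```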

The AI developers must build a system that can handle various concepts such as object cohesion, object persistence, and object influence. The AI system must also learn to perform tasks such as scaling, drawing, connecting points, rotating and translating.

Also, the test dataset, the problems that are meant to evaluate the intelligence of the developed system, is designed in a way that prevents developers from solving the tasks in advance and hard-coding their solutions in the program. Optimizing for evaluation sets is a popular cheating method in data science and machine learning competitions.

According to Chollet, ARC only assesses a general form of fluid intelligence, with a focus on reasoning and abstraction. This means that the test favors program synthesis, the subfield of AI that involves generating programs that satisfy high-level specifications. This approach is in contrast with current trends in AI, which are inclined toward creating programs that are optimized for a limited set of tasks (e.g., playing a single game).

In his experiments with ARC, Chollet has found that humans can fully solve ARC tests. But current AI systems struggle with the same tasks. "To the best of our knowledge, ARC does not appear to be approachable by any existing machine learning technique (including Deep Learning), due to its focus on broad generalization and few-shot learning," Chollet notes.

While ARC is a work in progress, it can become a promising benchmark to test the level of progress toward human-level AI. "We posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone (only requiring a handful of demonstrations to specify a complex task) to do a wide range of human-relatable tasks of a kind that would normally require human-level, human-like fluid intelligence," Chollet observes.


Artificial Intelligence and National Security, and More from CRS – Secrecy News

The 2019 defense authorization act directed the Secretary of Defense to produce a definition of artificial intelligence (AI) by August 13, 2019 to help guide law and policy. But that was not done.

"Therefore no official U.S. government definition of AI yet exists," the Congressional Research Service observed in a newly updated report on the subject.

But plenty of other unofficial and sometimes inconsistent definitions do exist. And in any case, CRS noted, AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles. Already, AI has been incorporated into military operations in Iraq and Syria.

The Central Intelligence Agency alone has around 140 projects in development that leverage AI in some capacity to accomplish tasks such as image recognition and predictive analytics. CRS surveys the field in Artificial Intelligence and National Security, updated November 21, 2019.

* * *

The 2018 financial audit of the Department of Defense, which was the first such audit ever, cost a stunning $413 million to perform. Its findings were assessed by CRS in another new report. See Department of Defense First Agency-wide Financial Audit (FY2018): Background and Issues for Congress, November 27, 2019.

* * *

The Arctic region is increasingly important as a focus of security, environmental and economic concern. So it is counterintuitive and likely counterproductive that the position of U.S. Special Representative for the Arctic has been left vacant since January 2017. In practice it has been effectively eliminated by the Trump Administration. See Changes in the Arctic: Background and Issues for Congress, updated November 27, 2019.

* * *

Other noteworthy new and updated CRS reports include the following (which are also available through the CRS public website at crsreports.congress.gov).

Resolutions to Censure the President: Procedure and History, updated November 20, 2019

Immigration: Recent Apprehension Trends at the U.S. Southwest Border, November 19, 2019

Air Force B-21 Raider Long Range Strike Bomber, updated November 13, 2019

Precision-Guided Munitions: Background and Issues for Congress, November 6, 2019

Space Weather: An Overview of Policy and Select U.S. Government Roles and Responsibilities, November 20, 2019

Intelligence Community Spending: Trends and Issues, updated November 6, 2019


Baidu Leads the Way in Innovation with 5712 Artificial Intelligence Patent Applications – GlobeNewswire

Top 10 AI Patent Applicants in China as of 2019

Source: China Industrial Control Systems Cyber Emergency Response Team, Ministry of Industry and Information Technology

Baidu USA, LLC

BEIJING, Dec. 07, 2019 (GLOBE NEWSWIRE) -- Baidu, Inc. (NASDAQ: BIDU) has filed the most AI-related patent applications in China, a recognition of the company's long-term commitment to driving technological advancement, a recent study from the research unit of China's Ministry of Industry and Information Technology (MIIT) has shown.

Baidu filed a total of 5,712 AI-related patent applications as of October 2019, ranking No. 1 in China for the second consecutive year. Baidu's patent applications were followed by Tencent (4,115), Microsoft (3,978), Inspur (3,755), and Huawei (3,656), according to the report issued by the China Industrial Control Systems Cyber Emergency Response Team, a research unit under the MIIT.

"Baidu retained the top spot for AI patent applications in China because of our continuous research and investment in developing AI, as well as our strategic focus on patents," said Victor Liang, Vice President and General Counsel of Baidu.

"In the future, we will continue to increase our investments into securing AI patents, especially for high-value and high-quality patents, to provide a solid foundation for Baidu's AI business and for our development of world-leading technology," he said.

The report showed that Baidu is the patent application leader in several key areas of AI. These include deep learning (1,429), natural language processing (938), and speech recognition (933). Baidu also leads in the highly competitive area of intelligent driving, with 1,237 patent applications, a figure that surpasses leading Chinese universities and research institutions, as well as many international automotive companies. With the launch of the Apollo open source autonomous driving platform and other intelligent driving innovations, Baidu has been committed to pioneering the intelligent transformation of the mobility industry.

After years of research, Baidu has developed a comprehensive AI ecosystem and is now at the forefront of the global AI industry. Moving forward, Baidu will continue to conduct research in the core areas of AI, contribute to scientific and technological innovation in China, and actively push forward the application of AI into more vertical industries. Baidu is positioned to be a global leader in a wave of innovation that will transform industries.

About Baidu
Baidu, Inc. is the leading Chinese language Internet search provider. Baidu aims to make the complicated world simpler for users and enterprises through technology. Baidu's ADSs trade on the NASDAQ Global Select Market under the symbol BIDU. Currently, ten ADSs represent one Class A ordinary share.

Media Contact: Intlcomm@baidu.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/300812be-c1a6-44c4-af8c-fbc0a0613019


Pondering the Ethics of Artificial Intelligence in Health Care Kansas City Experts Team Up on Emerging – Flatland


Published December 4th, 2019 at 9:59 AM

Artificial Intelligence (AI), the ability of machines to make decisions that normally require human expertise, already is changing our world in countless ways, from self-driving cars to facial-recognition technology.

But the best, and maybe the worst, is yet to come.

AI is being used increasingly in health care, including the possibility of a radiology tool that might eliminate the need for tissue samples. Knowing that, the people leading a new project called Ethical-AI for the Center for Practical Bioethics (CPB) are trying to make sure that AI health care tools will be created and used in ethical ways.

The ethical questions the project is raising should have been considered in a systematic way years ago, of course. But the good news is that the recommendations produced by this effort may be able to prevent misconstruction or misuse of AI health care tools.

"We've been excited about technology since we landed on the moon," says Lindsey Jarrett, a researcher for Cerner Corp., a Kansas City-based global health care technology company. "That has put us into a fast pace that I don't think we were prepared for. Now we're looking and saying, OK, wait, hold on. Maybe we should re-evaluate this."

Jarrett is working with Matthew Pjecha, a CPB program associate, to produce a series of ethical guidelines for how AI should and shouldn't be used in health care.

"When we're talking about (AI in) health care, the stakes get really high really fast," Pjecha says. "What we're hoping comes from this project is a robust set of recommendations (about ethics) for people who are designing, implementing and using AI in health care."

Pjecha, Jarrett and CPB leaders, such as CPB President John G. Carney, worry that if AI tools are created without first thinking about ethical issues, the results can be disastrous for lots of people.

In 2018, for instance, Pjecha gave a presentation at a symposium, attended by Jarrett, in which he looked at an AI instrument used in Arkansas to allocate Medicaid benefits. Because that AI tool was flawed by a failure to include data from a broad segment of the population, it deployed an algorithm that threw many eligible Medicaid recipients off the program, resulting in severe problems.

Pjecha and Jarrett later decided to work together under the CPB umbrella to make sure future AI health care tools were designed properly and ethically.

Once an AI tool has been created, Pjecha says, "if you get outcomes from them that you're not sure about or uncomfortable with, it's not easy to go back and find out why you got those." So it's vital to make sure that the data that goes into creating AI tools is reliable and not biased in some way.

"What we have learned," Pjecha says, "is that AI will express the biases that their creators have."

One way in which technology is affecting health care is through the growing use of wearable activity monitors, which track our daily movements and bodily reactions.

But, says Jarrett, "If someone is making really big clinical decisions based on the watch that you're wearing every day, there are lots of times when that device doesn't catch everything you need to know."

Pjecha adds: "I could wear a Fitbit every day of my life and I don't think a picture of my life would really be captured in it. But those are the numbers. And we have a kind of fascination with the role that numbers play in the provision of health care."

Without broadly accepted ethical guidelines for AI's creation and use in health care, Pjecha says, 10 years down the road "we would find ourselves with a health care system that is less relatable and less compassionate and less human. We know that AI systems are quickly going to start outpacing human physicians in certain types of tasks. A good example is recognizing anomalies in imaging."

AI tools, for instance, already can find imaging irregularities at the pixel level, which human eyes can't see. "We need to figure out what it means when providers deploy a certain tool that is better qualified to make a type of call than they are," Pjecha says. "I'm really interested in what happens when one of these systems hypothetically makes a certain determination and a human physician disagrees with it. What kind of trust are we placing in these tools? A lot of these questions are just open."

And, adds Jarrett, another worry is that big companies such as Amazon and Google are entering the health care space without knowing much about health care. That may add to the lack of ethical considerations required to make sure AI tools are fair.

So once again, we risk science and technology moving more quickly than our human capacity to understand and control them.

CPB and Cerner both are funding this project, though CPB continues to seek additional investments to support it.

Bill Tammeus, a Presbyterian elder and former award-winning Faith columnist for The Kansas City Star, writes the daily Faith Matters blog for The Star's website and columns for The Presbyterian Outlook and formerly for The National Catholic Reporter. His latest book is The Value of Doubt: Why Unanswered Questions, Not Unquestioned Answers, Build Faith. Email him at wtammeus@gmail.com.



Opinion | The artificial intelligence frontier of economic theory – Livemint

Until recently, two big impediments limited what research economists could learn about the world with the powerful methods that mathematicians and statisticians, starting in the early 19th century, developed to recognize and interpret patterns in noisy data: Data sets were small and costly, and computers were slow and expensive. So it is natural that as gains in computing power have dramatically reduced these impediments, economists have rushed to use big data and artificial intelligence to help them spot patterns in all sorts of activities and outcomes.

Data summary and pattern recognition are big parts of the physical sciences as well. The physicist Richard Feynman once likened the natural world to a game played by the gods: "You don't know the rules of the game, but you're allowed to look at the board from time to time, in a little corner, perhaps. And from these observations, you try to figure out what the rules are."

Feynman's metaphor is a literal description of what many economists do. Like astrophysicists, we typically acquire non-experimental data generated by processes we want to understand. The mathematician John von Neumann defined a game as (1) a list of players; (2) a list of actions available to each player; (3) a list of how pay-offs accruing to each player depend on the actions of all players; and (4) a timing protocol that tells who chooses what when. This elegant definition includes what we mean by a "constitution" or an "economic system": a social understanding about who chooses what when.
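To show how compact that definition is, here is an illustrative sketch (my own, not from the article) that encodes von Neumann's four ingredients as a data structure, with a toy two-player coordination game as an example:

```python
# Von Neumann's four-part definition of a game, written as a data structure:
# (1) players, (2) actions available to each player, (3) payoffs as a function of
# everyone's actions, and (4) a timing protocol saying who chooses what, when.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Game:
    players: List[str]
    actions: Dict[str, List[str]]
    payoffs: Callable[[Dict[str, str]], Dict[str, float]]
    timing: List[str]

# Toy example: two players who simply prefer to choose the same action.
def coordination_payoffs(profile: Dict[str, str]) -> Dict[str, float]:
    match = profile["A"] == profile["B"]
    return {"A": 1.0 if match else 0.0, "B": 1.0 if match else 0.0}

game = Game(
    players=["A", "B"],
    actions={"A": ["left", "right"], "B": ["left", "right"]},
    payoffs=coordination_payoffs,
    timing=["A and B move simultaneously"],
)

print(game.payoffs({"A": "left", "B": "left"}))  # {'A': 1.0, 'B': 1.0}
```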

Like Feynman's metaphorical physicist, our task is to infer a "game", which for economists is the structure of a market or system of markets, from observed data.

But then we want to do something that physicists don't: think about how different "games" might produce improved outcomes. That is, we want to conduct experiments to study how a hypothetical change in the rules of the game or in a pattern of observed behaviour by some "players" (say, government regulators or a central bank) might affect patterns of behaviour by the remaining players.

Thus, "structural model builders" in economics seek to infer from historical patterns of behaviour a set of invariant parameters for hypothetical (often historically unprecedented) situations in which a government or regulator follows a new set of rules. "The government has strategies, and the people have counter-strategies," according to a Chinese proverb.

Structural models" seek such invariant parameters in order to help regulators and market designers understand and predict data patterns under historically unprecedented situations. The challenging task of building structural models will benefit from rapidly developing branches of artificial intelligence (AI) that dont involve more than pattern recognition. A great example is AlphaGo. The team of computer scientists that created the algorithm to play the Chinese game Go combined a suite of tools that had been developed by specialists in statistics, simulation, decision theory, and game theory communities.

Many of the tools used in just the right proportions to make an outstanding artificial Go player are also economists' bread-and-butter tools for building structural models to study macroeconomics and industrial organization.

Of course, economics differs from physics in a crucial respect. Whereas Pierre-Simon Laplace regarded "the present state of the universe as the effect of its past and the cause of its future," the reverse is true in economics: what we expect other people to do later causes what we do now.

We typically use personal theories about what other people want to forecast what they will do. When we have good theories of other people, what they are likely to do determines what we expect them to do. This line of reasoning, sometimes called "rational expectations", reflects a sense in which "the future causes the present" in economic systems. Taking this into account is at the core of building "structural" economic models.

For example, I will join a run on a bank if I expect that other people will. Without deposit insurance, customers have incentives to avoid banks vulnerable to runs. With deposit insurance, customers don't care and won't run. On the other hand, if governments insure deposits, bank owners will want their assets to become as big and as risky as possible, while depositors won't care.
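The bank-run logic can be made concrete with a toy payoff table. The numbers below are invented for illustration; the point is only that "run" becomes a best response once you expect others to run, so expectations select the outcome:

```python
# Toy two-depositor "bank run" game with made-up payoffs: if both stay, the bank
# survives and each earns 2; if both run, the bank fails and each salvages 1;
# if only one runs, the runner recovers 1 and the one who stayed is left with 0.
from itertools import product

ACTIONS = ["stay", "run"]

def payoff(me: str, other: str) -> float:
    if me == "stay" and other == "stay":
        return 2.0
    if me == "run" and other == "run":
        return 1.0
    return 1.0 if me == "run" else 0.0

def best_response(other: str) -> str:
    return max(ACTIONS, key=lambda action: payoff(action, other))

# Pure-strategy equilibria: profiles where each action is a best response to the other.
equilibria = [
    (a, b) for a, b in product(ACTIONS, repeat=2)
    if a == best_response(b) and b == best_response(a)
]
print(equilibria)  # [('stay', 'stay'), ('run', 'run')] -- what I expect you to do decides
```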

There are similar trade-offs with unemployment and disability insurance (insuring people against bad luck may weaken their incentive to provide for themselves) and for official bailouts of governments and firms.

More broadly, my reputation is what others expect me to do. I face choices about whether to confirm or disappoint those expectations. Those choices will affect how others behave in the future. Central bankers think about that a lot.

Like physicists, we economists use models and data to learn. We don't learn new things until we appreciate that our old models cannot explain new data. We then construct new models in light of how their predecessors failed.

This explains how we have learned from past depressions and financial crises. And with big data, faster computers and better algorithms, we might see patterns where once we heard only noise.

*Thomas J. Sargent is professor of economics at New York University and senior fellow at the Hoover Institution

2019/Project Syndicate
