Category Archives: Artificial Intelligence

How Google is making music with artificial intelligence – Science Magazine

Posted: August 9, 2017 at 5:11 am

A musician improvises alongside A.I. Duet, software developed in part by Google's Magenta

By Matthew Hutson, Aug. 8, 2017, 3:40 PM

Can computers be creative? That's a question bordering on the philosophical, but artificial intelligence (AI) can certainly make music and artwork that people find pleasing. Last year, Google launched Magenta, a research project aimed at pushing the limits of what AI can do in the arts. Science spoke with Douglas Eck, the team's lead in San Francisco, California, about the past, present, and future of creative AI. This interview has been edited for brevity and clarity.

Q: How does Magenta compose music?

A: Learning is the key. We're not spending any effort on classical AI approaches, which build intelligence using rules. We've tried lots of different machine-learning techniques, including recurrent neural networks, convolutional neural networks, variational methods, adversarial training methods, and reinforcement learning. Explaining all of those buzzwords is too much for a short answer. What I can say is that they're all different techniques for learning by example to generate something new.
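
None of Magenta's code is reproduced here, but the core idea of learning by example and then sampling something new can be shown with a deliberately simple stand-in: a first-order Markov chain over notes, far cruder than the neural networks Eck lists. The toy corpus below is invented.

```python
import random

# Toy corpus of melodies as note names. Magenta trains neural networks on
# large MIDI collections; this Markov chain is a minimal stand-in that
# still "learns by example to generate something new."
corpus = [
    ["C4", "E4", "G4", "E4", "C4"],
    ["C4", "D4", "E4", "F4", "G4"],
    ["G4", "E4", "D4", "C4", "C4"],
]

# Learn: count which note tends to follow which.
transitions = {}
for melody in corpus:
    for cur, nxt in zip(melody, melody[1:]):
        transitions.setdefault(cur, []).append(nxt)

# Generate: start from a seen note and sample forward one step at a time.
def generate(start="C4", length=8):
    notes = [start]
    while len(notes) < length and transitions.get(notes[-1]):
        notes.append(random.choice(transitions[notes[-1]]))
    return notes

print(generate())  # e.g. ['C4', 'E4', 'G4', 'E4', 'C4', 'D4', ...]
```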

Q: What examples does Magenta learn from?

A: We trained the NSynth algorithm, which uses neural networks to synthesize new sounds, on notes generated by different instruments. The SketchRNN algorithm was trained on millions of drawings from our Quick, Draw! game. Our most recent music algorithm, Performance RNN, was trained on classical piano performances captured on a modern player piano [listen below]. I'd like musicians to be able to easily train models on their own musical creations, then have fun with the resulting music, further improving it.

Q: How has computer composition changed over the years?

A: Currently the focus is on algorithms which learn by example, i.e., machine learning, instead of using hard-coded rules. I also think there's been increased focus on using computers as assistants for human creativity rather than as a replacement technology, such as our work and Sony's Daddy's Car [a computer-composed song inspired by The Beatles and fleshed out by a human producer].

Q: Do the results of computer-generated music ever surprise you?

A: Yeah. All the time. I was really surprised at how expressive the short compositions were from Ian Simon and Sageev Oore's recent Performance RNN algorithm. Because they trained on real performances captured in MIDI on Disklavier pianos, their model was able to generate sequences with realistic timing and dynamics.

Q: What else is Magenta doing?

A: We did a summer internship around joke telling, but we didn't generate any funny jokes. We're also working on image generation and drawing generation [see example below]. In the future, I'd like to look more at areas related to design. Can we provide tools for architects or web page creators?

Magenta software can learn artistic styles from human paintings and apply them to new images.

Fred Bertsch

Q: How do you respond to art that you know comes from a computer?

A: When I was on the computer science faculty at the University of Montreal [in Canada], I heard some computer music by a music faculty member, Jean Piché. He'd written a program that could generate music somewhat like that of the jazz pianist Keith Jarrett. It wasn't nearly as engaging as the real Keith Jarrett! But I still really enjoyed it, because programming the algorithm is itself a creative act. I think knowing Jean and attributing this cool program to him made me much more responsive than I would have been otherwise.

Q: If abilities once thought to be uniquely human can be aped by an algorithm, should we think differently about them?

A: I think differently about chess now that machines can play it well. But I don't see that chess-playing computers have devalued the game. People still love to play! And computers have become great tools for learning chess. Furthermore, I think it's interesting to compare and contrast how chess masters approach the game versus how computers solve the problem: visualization and experience versus brute-force search, for example.

Q: How might people and machines collaborate to be more creative?

A: I think it's an iterative process. Every new technology that made a difference in art took some time to figure out. I love to think of Magenta like an electric guitar. Rickenbacker and Gibson electrified guitars with the purpose of being loud enough to compete with other instruments onstage. Jimi Hendrix and Joni Mitchell and Marc Ribot and St. Vincent and a thousand other guitarists who pushed the envelope on how this instrument can be played were all using the instrument the wrong way, some said: retuning, distorting, bending strings, playing upside-down, using effects pedals, etc. No matter how fast machine learning advances in terms of generative models, artists will work faster to push the boundaries of what's possible there, too.

Excerpt from:

How Google is making music with artificial intelligence - Science Magazine

Posted in Artificial Intelligence | Comments Off on How Google is making music with artificial intelligence – Science Magazine

The Real Threat of Artificial Intelligence – The New York Times

Posted: at 5:11 am

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs, mostly lower-paying jobs, but some higher-paying ones, too.

This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?

Part of the answer will involve educating or retraining people in tasks A.I. tools aren't good at. Artificial intelligence is poorly suited for jobs involving creativity, planning and cross-domain thinking, for example, the work of a trial lawyer. But these skills are typically required by high-paying jobs that may be hard to retrain displaced workers to do. More promising are lower-paying jobs involving the people skills that A.I. lacks: social workers, bartenders, concierges, professions requiring nuanced human interaction. But here, too, there is a problem: How many bartenders does a society really need?

The solution to the problem of mass unemployment, I suspect, will involve service jobs of love. These are jobs that A.I. cannot do, that society needs and that give people a sense of purpose. Examples include accompanying an older person to visit a doctor, mentoring at an orphanage and serving as a sponsor at Alcoholics Anonymous or, potentially soon, Virtual Reality Anonymous (for those addicted to their parallel lives in computer-generated simulations). The volunteer service jobs of today, in other words, may turn into the real jobs of the future.

Other volunteer jobs may be higher-paying and professional, such as compassionate medical service providers who serve as the human interface for A.I. programs that diagnose cancer. In all cases, people will be able to choose to work fewer hours than they do now.

Who will pay for these jobs? Here is where the enormous wealth concentrated in relatively few hands comes in. It strikes me as unavoidable that large chunks of the money created by A.I. will have to be transferred to those whose jobs have been displaced. This seems feasible only through Keynesian policies of increased government spending, presumably raised through taxation on wealthy companies.

As for what form that social welfare would take, I would argue for a conditional universal basic income: welfare offered to those who have a financial need, on the condition they either show an effort to receive training that would make them employable or commit to a certain number of hours of "service of love" voluntarism.

To fund this, tax rates will have to be high. The government will not only have to subsidize most people's lives and work; it will also have to compensate for the loss of individual tax revenue previously collected from employed individuals.

This leads to the final and perhaps most consequential challenge of A.I. The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes. But what about other countries?

They face two insurmountable problems. First, most of the money being made from artificial intelligence will go to the United States and China. A.I. is an industry in which strength begets strength: The more data you have, the better your product; the better your product, the more data you can collect; the more data you can collect, the more talent you can attract; the more talent you can attract, the better your product. It's a virtuous circle, and the United States and China have already amassed the talent, market share and data to set it in motion.

For example, the Chinese speech-recognition company iFlytek and several Chinese face-recognition companies such as Megvii and SenseTime have become industry leaders, as measured by market capitalization. The United States is spearheading the development of autonomous vehicles, led by companies like Google, Tesla and Uber. As for the consumer internet market, seven American or Chinese companies (Google, Facebook, Microsoft, Amazon, Baidu, Alibaba and Tencent) are making extensive use of A.I. and expanding operations to other countries, essentially owning those A.I. markets. It seems American businesses will dominate in developed markets and some developing markets, while Chinese companies will win in most developing markets.

The other challenge for many countries that are not China or the United States is that their populations are increasing, especially in the developing world. While a large, growing population can be an economic asset (as in China and India in recent decades), in the age of A.I. it will be an economic liability because it will comprise mostly displaced workers, not productive ones.

So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software (China or the United States) to essentially become that country's economic dependent, taking in welfare subsidies in exchange for letting the parent nation's A.I. companies continue to profit from the dependent country's users. Such economic arrangements would reshape today's geopolitical alliances.

One way or another, we are going to have to start thinking about how to minimize the looming A.I.-fueled gap between the haves and the have-nots, both within and between nations. Or to put the matter more optimistically: A.I. is presenting us with an opportunity to rethink economic inequality on a global scale. These challenges are too far-ranging in their effects for any nation to isolate itself from the rest of the world.

Kai-Fu Lee is the chairman and chief executive of Sinovation Ventures, a venture capital firm, and the president of its Artificial Intelligence Institute.

A version of this op-ed appears in print on June 25, 2017, on Page SR4 of the New York edition with the headline: The Real Threat of Artificial Intelligence.

Visit link:

The Real Threat of Artificial Intelligence - The New York Times

Posted in Artificial Intelligence | Comments Off on The Real Threat of Artificial Intelligence – The New York Times

Artificial intelligence is inevitable. Will you embrace or resist it in your practice? – Indiana Lawyer

Posted: at 5:11 am

Growing up, Kightlinger and Gray LLP attorney Adam Ira can recall members of his family, many of whom were factory workers, expressing concern about the prospect of automated machines taking their jobs. Now, Ira said, similar concerns are creeping into his work as a lawyer, as the rise of artificial intelligence in the practice of law has begun automating legal tasks previously performed by humans.

As the number of available AI products grows, attorneys have begun to gravitate toward tools that enable them to do their work more quickly and efficiently. Artificial intelligence can come in multiple forms, legal tech experts say, from simple document automation to more complex intelligence using algorithms to predict legal outcomes.

In recent months, several new AI products have been introduced with the promise of automating the mundane tasks of being a lawyer, leaving attorneys with more time to focus on the complex legal questions raised by their clients.

For example, Seattle-based TurboPatent Corp. launched an AI tool in mid-July known as RoboReview. Through RoboReview, patent attorneys can upload a patent application into the AI software, which then scans the document and assesses it for similarities to previous patent applications and uses the level of similarity to predict patent eligibility. RoboReview can also make other predictions about the patent process, such as how long the process might take or what actions the U.S. Patent and Trademark Office may take with the application, said Dave Billmaier, TurboPatent vice president of product marketing.
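
TurboPatent hasn't published RoboReview's internals, so the snippet below is only a rough sketch of the general approach the article describes: score a new application's textual similarity to prior filings and use that score as one signal for an eligibility prediction. The sample filings are invented, and a production system would go far beyond TF-IDF.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical prior filings; a real tool would index millions of documents.
prior_filings = [
    "A method for wireless charging of a mobile device using inductive coils.",
    "A system for compressing video streams with adaptive bit-rate selection.",
]
new_application = "An apparatus for inductively charging portable electronics."

vectorizer = TfidfVectorizer().fit(prior_filings + [new_application])
similarity = cosine_similarity(
    vectorizer.transform([new_application]),
    vectorizer.transform(prior_filings),
)[0]

# A high maximum similarity flags potential overlap with existing art.
print(max(similarity))
```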

Shortly after RoboReview went public, Amy Wan, attorney and founder and CEO of Bootstrap Legal, introduced an AI product that automates the process of drafting legal paperwork for people trying to raise capital for a real estate project of $2 million or less. As a former real estate securities attorney, Wan said she witnessed firsthand how inefficient the process of drafting such documents could be, especially considering that much of the work involved routine tasks such as copying and pasting from previous documents.

With Wan's AI product, users answer questions about their real estate project, and the software uses those answers to develop the necessary legal documents, which are returned to the user within 48 hours. Such technology expedites drafting the documents, a process she said could otherwise take 20 to 25 hours to complete, while also cutting the costs associated with raising real estate capital. Wan said her company and AI product are based on the principle that cost considerations should not prevent people from accessing legal services.
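
Bootstrap Legal's software is proprietary; as a loose illustration of the questionnaire-to-document pattern described above, the sketch below fills an invented clause template from a dictionary of answers. All names and numbers are hypothetical, and choosing which clauses apply is where the harder automation would live.

```python
from string import Template

# Hypothetical clause template; real offering documents are far longer.
clause = Template(
    "The Company, $company, seeks to raise $$${amount} to acquire the "
    "property at $address, offering investors an $pref% preferred return."
)

# Answers collected from the questionnaire (invented for this example).
answers = {
    "company": "Maple Street Holdings LLC",
    "amount": "1,500,000",
    "address": "12 Maple Street, Springfield",
    "pref": "8",
}

print(clause.substitute(answers))
```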

Saving time and cutting costs are AI advantages that serve as the key selling points for legal tech developers, as clients have come to expect their attorneys to use modern technology to perform efficient work at the lowest possible cost, said Jason Houdek, a patent and intellectual property attorney with Taft Stettinius & Hollister LLP. Though RoboReview is new, Houdek said he has been using similar AI tools to determine patent eligibility, ensure application quality and predict patent examiner behavior for several years.

Similarly, Haley Altman, CEO of Indianapolis-based Doxly Inc., said most legal tech entrepreneurs like her are trying to develop AI tools that take large sets of data or documents and extrapolate the relevant information lawyers are looking for, thus reducing the amount of time they spend combing through documents. The Doxly software, which is designed to automate the legal transaction process, uses AI to mimic a transactional attorney's natural workflow, making the software feel natural, she said.

Job security?

Despite these benefits, some attorneys are concerned that continued use of AI in the practice of law could put them out of a job. Further, Jim Billmaier, TurboPatent CEO, said the old guard of attorneys, those who have been in practice for many years, can be inclined to resist artificial intelligence tools because they go against traditional practice.

There may be some legitimacy to those concerns, the attorneys and legal tech experts said. For example, attorneys at large firms that still employ the typical billable-hour model could see a drop in their hours as a result of AI products, said Dan Carroll with vrsus LLC, a rebranded version of legal tech company CasePacer. The vrsus technology utilizes AI to enable attorneys at plaintiffs firms to reach outcomes for their clients as quickly as possible, rather than focusing on how many hours they are able to bill, Carroll said.

Similarly, certain practice areas that are more transactional in nature, such as bankruptcy or tax law, might be more susceptible to automation, Ira said.

But such automation is now inevitable, as further AI development is a matter of when, not if, Houdek said. Jim Billmaier agreed and noted that attorneys who are resistant to AI advancements will find themselves underperforming if they choose not to take advantage of tools that increase efficiency.

While technological advancements might be inevitable, they do not have to be uncontrollable, said Ira Smith, vrsus chief strategy officer. Few attorneys fully understand the nuances of what makes AI work, Smith said, yet few tech developers, such as IBM, understand the nuances of practicing law.

As a result, attorneys and legal tech companies should focus less on how new artificial intelligence products might change their work and instead try to mold whatever AI tools are currently on the market to improve the product of their work, Smith said. He encouraged attorneys to be product agnostic and focus less on the technological platform and more on technologys possible benefits.

"Why would it matter whether (IBM's) Watson is utilizing my data as long as I can take that and serve it back to my clients?" Smith said.

Human advantage

Even as legal tech and other companies offer new and ever more advanced AI products, attorneys said the human mind will always be needed in the practice of law.

For example, even if a computer becomes intelligent enough to draw up contracts on its own, lawyers will still need to review and finalize them, Altman said. Ira agreed and noted that use of AI can create ethical issues, as attorneys must ensure the automated documents they produce reflect accurate and competent work.

Further, the power of persuasion is a trait that is uniquely human, and one that is critical to the practice of law, Ira said. Though an intelligent computer might one day be able to cobble together a legal argument (an advancement he thinks is still at least 10 to 15 years off), it could never speak to a judge or a jury in a manner meant to persuade and effectively advocate on behalf of a client.

Similarly, judges will always be needed to use their minds and legal training to decide the outcome of cases, Houdek said, and human juries will always be needed to decide cases.

Though some human jobs or billable hours might decrease as a result of advancements in artificial intelligence, the legal tech experts said AI is more of a benefit than a threat because it allows legal professionals to use their minds and training for the creative work that comes with being an attorney.

"AI technology isn't taking their jobs," Altman said. "The whole point of it is to enable them to do the work that they really want to be focusing on."

Read more:

Artificial intelligence is inevitable. Will you embrace or resist it in your practice? - Indiana Lawyer

Posted in Artificial Intelligence | Comments Off on Artificial intelligence is inevitable. Will you embrace or resist it in your practice? – Indiana Lawyer

The Race to Cyberdefense, Artificial Intelligence and the Quantum Computer – Government Technology

Posted: at 5:11 am

I've been following cybersecurity startups and hackers for years, and I suddenly discovered how hackers are always ahead of the rest of us: they have a better business model funding them in their proof of concept (POC) stage of development.

To even begin protecting ourselves from their well-funded advances and attacks, cyberdefense and artificial intelligence (AI) technologies must be funded at the same level in the POC stage.

Today, however, traditional investors not only want your technology running, they also need assurances that you already have a revenue stream, which stifles potential new technology discovery at the POC level. And in some industries, this is dangerous.

Consider the fast-paced world of cybersecurity, in which companies are offered traditional funding avenues as they promote their product's tech capabilities so people will invest. This promotion and disclosure of their technology, however, gives hackers a road map to the new cyberdefense technologies and a window of time to gain knowledge on how to exploit them.

This same road map exists for technologies covered in detail when standards groups, universities, governments and private labs publish white papers, documents that essentially assist hackers by giving them advance notice of cyberdefense techniques.

In addition to this, some hackers receive immediate funding through nation-states that coordinate cyberwarfare like a traditional military, and others are involved in organized secret groups that fund the use of ransomware and DDoS attacks. These hackers get immediate funding and then throw their technology on the Internet for POC discovery.

One project that strongly makes the case for rapidly funding cyberdefense technologies in an effort to keep up with hackers is the U.S. Department of Homeland Security's (DHS) $5.7 billion EINSTEIN cyberdefense system, which was deemed obsolete upon its deployment for failing to detect 94 percent of security vulnerabilities. As this situation illustrates, the traditional method of funding cyberdefense, taking years of bureaucratic analysis and vendor contracts, does not work in the fast-moving world of cyberdefense technology discovery. After the EINSTEIN project failure, DHS decided to conduct an assessment; it's currently working to understand if it's making the right investments in dealing with the ever-changing cyberenvironment.

But it also has other roadblocks, as even large technology companies and contractors with which DHS does business have their own bureaucracies and investments that ultimately deter the department from getting the best in cyberdefense technologies. And once universities, standards groups, regulation and funding approvals are added to these processes, you're pretty much assured to be headed for another disaster.

But DHS doesn't need to develop these technologies itself. The department needs to support public- and private-sector POCs to rapidly mature and deploy new cyberdefense technologies. This suggestion is supported by what other countries are successfully doing, including our adversaries.

The same two things that have motivated mankind all through history, immediate power and money, are now motivating hackers, and cyberdefense technologies are taking years to be deployed. So I'll say it again: The motivational and funding model of cyberdefense technologies must change. The key to successful cyberdefense technology development is making it as aggressive as the hackers that attack it. And this needs to be done at the conceptual POC level.

The concern in cyberdefense (and really all AI) is the race to the quantum computer.

Quantum computer technologies can't be hacked, and in theory, their processing power can break all encryption. The computational physics behind quantum computing also offers remarkable capabilities that will drastically change all current AI and cyberdefense technologies. This is a winner-takes-all technology, one that offers absolute security along with capabilities we can now only imagine.

The most recent funding source for hackers is Bitcoin, which uses the decentralized and secure blockchain technology. It has even been used to support POC funding in what is called an Initial Coin Offering (ICO), the intent of which is to crowdfund early startup companies at the development or POC level by bypassing traditional and lengthy funding avenues. Because this type of startup seed offering has been clouded with scams, it is now in regulatory limbo.

Some states have passed laws that make it difficult to legally present and offer an ICO. While the U.S. seems to be pushing ICO regulation, other countries are still deciding what to do. But like ICOs or not, they offer first-time startups an avenue of fast-track funding at the concept level, where engineers and scientists can jump on newer technologies by focusing seed money on testing their concepts. Bogging ICOs down with regulatory laws will both slow down legitimate POC innovation in the U.S. and give other countries a competitive edge.

Another barrier to cyberdefense POC funding is the size and technological control of a handful of tech companies. Google, Facebook, Amazon, Microsoft and Apple have become enormous concentrations of wealth and data, drawing the attention of economists and academics who warn they're growing too powerful. Now as big as major American cities, these companies are mega centers of both money and technology. They are so large and control so much of the market that many are beginning to view them as in violation of the Sherman Antitrust Act. So how can small startups compete with these tech giants and potentially fund POCs in areas such as cyberdefense and AI? By aligning with giant companies in industries that have the most need for cyberdefense and AI technologies: critical infrastructure.

The industries that are most vulnerable, and could cause the most devastation if hacked, are those involved in critical infrastructure. These large industries have the resources to fund cyberdefense technologies at the concept level, and they would obtain superior cyberdefense technologies in doing so.

Cyberattacks on critical infrastructure could devastate entire national economies, so it must be protected by the most advanced cyberdefense. Quantum computing and artificial intelligence will initiate game-changing technology in both cyberdefense and the new intellectual property deriving from quantum sciences. Entering these new technologies at the POC level is like being a Microsoft or Google years ago. Funding the development of these new technologies in cyberdefense and AI is needed soon, but what about today?

Future quantum computer capabilities will also demand immediate short-term fixes in current cyberdefense and AI. New quantum-ready compressed encryption and cyberdefense deep learning AI must be funded and tested now at the concept level. The power grid, oil and gas, and even existing telecoms are perfect targets for this funding and development. Investing today would offer current cyberdefense and business intelligence protection while creating new profit centers in the licensing and sale of these leading-edge technologies. This is true for many other industries, all differing in their approach and requiring specialized cyberdefense capabilities and new intelligence gathering that will shape their future.

So we must find creative ways of rapidly funding cyberdefense technologies at the conceptual level. If this is what hackers do and it's why they're always one step ahead, shouldn't we work to surpass them?

Link:

The Race to Cyberdefense, Artificial Intelligence and the Quantum Computer - Government Technology

Posted in Artificial Intelligence | Comments Off on The Race to Cyberdefense, Artificial Intelligence and the Quantum Computer – Government Technology

AI Vs. Bioterrorism: Artificial Intelligence Trained to Detect Anthrax by Scientists – Newsweek

Posted: at 5:11 am

South Korean scientists have been able to train artificial intelligence to detect anthrax at high speed, potentially dealing a blow to bioterrorism.

Hidden in letters, the biological agent killed five Americans and infected 17 more in the year following the 9/11 attacks, and the threat of a biological attack remains a top concern of Western security services as radicals such as the Islamic State militant group (ISIS) seek new ways to attack the West.

Researchers from the Korea Advanced Institute of Science and Technology have now created an algorithm that is able to study bacterial spores and quickly identify the biological agent, according to a paper published last week for the Science Advances journal.

The new training of AI to identify the bacteria using microscopic images could drastically decrease the time it takes to detect anthrax, from a day to mere seconds. It is also accurate 95 percent of the time.
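
The paper's actual network isn't reproduced here; the sketch below only shows the general shape of such a classifier: a small convolutional network mapping a spore image to a probability. The layer sizes are arbitrary, and the training data is a random placeholder standing in for labeled holographic images.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data: 64x64 single-channel image patches with 0/1 labels
# (1 = anthrax spore). Real training would use labeled holographic images.
x_train = np.random.rand(100, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(100,))

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the spore is anthrax
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)

# After training, classification is a single forward pass taking milliseconds,
# which is where the seconds-versus-a-day speedup comes from.
print(model.predict(x_train[:1], verbose=0))
```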

Anthrax contaminates the body when spores enter it, mostly through inhalation, multiplying and spreading an illness that could be fatal. Skin infections of anthrax are less deadly.

Spores from the Sterne strain of anthrax bacteria (Bacillus anthracis) are pictured in this handout scanning electron micrograph obtained by Reuters May 28, 2015. Reuters/Center for Disease Control/Handout

"This study showed that holographic imaging and deep learning can identify anthrax in a few seconds," YongKeun Paul Park, associate professor of physics at the Korea Advanced Institute of Science and Technology, told the IEEE Spectrum blog.

"Conventional approaches such as bacterial culture or gene sequencing would take several hours to a day," he added.

Park is working with the South Korean agency responsible for developing the country's defense capabilities amid fears that North Korea may plan a biological attack against its archenemy across their shared border.

North Korea's regime is no stranger to chemical agents. South Korea has accused operatives linked to Pyongyang of responsibility for the assassination of North Korean leader Kim Jong Un's half brother, Kim Jong Nam, using a VX agent at Malaysia's Kuala Lumpur International Airport in February.

Contamination by anthrax has a death rate of 80 percent, so detection of the bacteria is crucial.

Spreading anthrax far and wide in an attack would mean that thousands would die if contaminated. So Western security services fear that hostile parties, such as ISIS sympathizers or regimes such as North Korea, will make attempts to develop a capability to cause a mass-casualty attack.

The researchers say the AI innovation could bring advances elsewhere, too, including the potential to detect other bacteria, such as those that cause food poisoning and kill more than a quarter of a million people every year.

See the original post:

AI Vs. Bioterrorism: Artificial Intelligence Trained to Detect Anthrax by Scientists - Newsweek

Posted in Artificial Intelligence | Comments Off on AI Vs. Bioterrorism: Artificial Intelligence Trained to Detect Anthrax by Scientists – Newsweek

When artificial intelligence goes wrong – Livemint

Posted: at 5:11 am

Even as artificial intelligence and machine learning continue to break new ground, there is enough evidence to indicate how easy it is for bias to creep into even the most advanced algorithms. Photo: iStockphoto

Bengaluru: Last year, for the first time ever, an international beauty contest was judged by machines. Thousands of people from across the world submitted their photos to Beauty.AI, hoping that their faces would be selected by an advanced algorithm free of human biases, in the process accurately defining what constitutes human beauty.

In preparation, the algorithm had studied hundreds of images of past beauty contests, training itself to recognize human beauty based on the winners. But what was supposed to be a breakthrough moment that would showcase the potential of modern self-learning, artificially intelligent algorithms rapidly turned into an embarrassment for the creators of Beauty.AI, as the algorithm picked the winners solely on the basis of skin colour.

"The algorithm made a fairly non-trivial correlation between skin colour and beauty. A classic example of bias creeping into an algorithm," says Nisheeth K. Vishnoi, an associate professor at the School of Computer and Communication Sciences at the Switzerland-based École Polytechnique Fédérale de Lausanne (EPFL). He specializes in issues related to algorithmic bias.
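
A small synthetic experiment, invented here for illustration, shows how such bias needs no malice: if the historical labels a model learns from were themselves influenced by a protected attribute, a standard classifier will faithfully pick that dependence up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# One legitimate feature plus a protected attribute (encoded 0/1). The
# historical labels are biased: they depend on the attribute, not just merit.
merit = rng.normal(size=n)
protected = rng.integers(0, 2, size=n)
labels = ((merit + 2.0 * protected) > 1.0).astype(int)

X = np.column_stack([merit, protected])
model = LogisticRegression().fit(X, labels)

# The model dutifully learns the bias: the protected attribute gets a large
# positive weight, much as Beauty.AI latched onto skin colour.
print(model.coef_)
```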

A widely cited piece titled "Machine Bias" from US-based investigative journalism organization ProPublica in 2016 highlighted another disturbing case.

It cited an incident involving a black teenager named Brisha Borden who was arrested for riding an unlocked bicycle she found on the road. The police estimated the value of the item was about $80.

In a separate incident, a 41-year-old Caucasian man named Vernon Prater was arrested for shoplifting goods worth roughly the same amount. Unlike Borden, Prater had a prior criminal record and had already served prison time.

Yet, when Borden and Prater were brought for sentencing, a self-learning program determined Borden was more likely to commit future crimes than Prater, exhibiting the sort of racial bias computers were not supposed to have. Two years later, it was proved wrong when Prater was charged with another crime, while Borden's record remained clean.

And who can forget Tay, the infamous racist chatbot that Microsoft Corp. developed last year?

Even as artificial intelligence and machine learning continue to break new ground, there is enough evidence to indicate how easy it is for bias to creep into even the most advanced algorithms. Given the extent to which these algorithms are capable of building deeply personal profiles about us from relatively trivial information, the impact that this can have on personal privacy is significant.

This issue caught the attention of the US government, which in October 2016 published a comprehensive report titled "Preparing for the future of artificial intelligence", turning the spotlight on the issue of algorithmic bias. It raised concerns about how machine-learning algorithms can discriminate against people or sets of people based on the personal profiles they develop of all of us.

"If a machine learning model is used to screen job applicants, and if the data used to train the model reflects past decisions that are biased, the result could be to perpetuate past bias. For example, looking for candidates who resemble past hires may bias a system toward hiring more people like those already on a team, rather than considering the best candidates across the full diversity of potential applicants," the report says.

"The difficulty of understanding machine learning results is at odds with the common misconception that complex algorithms always do what their designers choose to have them do, and therefore that bias will creep into an algorithm if and only if its developers themselves suffer from conscious or unconscious bias. It is certainly true that a technology developer who wants to produce a biased algorithm can do so, and that unconscious bias may cause practitioners to apply insufficient effort to preventing bias," it says.

Over the years, social media platforms have been using similar self-learning algorithms to personalize their services, offering content better suited to the preferences of their users, based solely on their past behaviour on the site in terms of what they liked or the links they clicked on.

"What you are seeing on platforms such as Google or Facebook is extreme personalization, which is basically when the algorithm realizes that you prefer one option over another. Maybe you have a slight bias towards (US President Donald) Trump versus Hillary (Clinton) or (Prime Minister Narendra) Modi versus other opponents; that's when you get to see more and more articles which are confirming your bias. The trouble is that as you see more and more such articles, it actually influences your views," says EPFL's Vishnoi.

"The opinions of human beings are malleable. The US election is a great example of how algorithmic bots were used to influence some of these very important historical events of mankind," he adds, referring to the impact of fake news on recent global events.

Experts, however, believe that these algorithms are rarely the product of malice. "It's just a product of careless algorithm design," says Elisa Celis, a senior researcher along with Vishnoi at EPFL.

How does one detect bias in an algorithm? It bears mentioning that machine-learning algorithms and neural networks are designed to function without human involvement. "Even the most skilled data scientist has no way to predict how his algorithms will process the data provided to them," said Mint columnist and lawyer Rahul Matthan in a recent research paper on the issue of data privacy published by the Takshashila Institution, titled "Beyond consent: A new paradigm for data protection".

One solution is black-box testing, which determines whether an algorithm is working as effectively as it should without peering into its internal structure. "In a black-box audit, the actual algorithms of the data controllers are not reviewed. Instead, the audit compares the input algorithm to the resulting output to verify that the algorithm is in fact performing in a privacy-preserving manner. This mechanism is designed to strike a balance between the auditability of the algorithm on the one hand and the need to preserve proprietary advantage of the data controller on the other. Data controllers should be mandated to make themselves and their algorithms accessible for a black box audit," says Matthan, who is also a fellow with Takshashila's technology and policy research programme.
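
Matthan's paper doesn't prescribe a specific protocol; the sketch below is one assumed minimal form a black-box audit could take: probe the model with paired inputs that differ only in a protected attribute and measure how often the decision changes. The loan model is a hypothetical stand-in for the data controller's system.

```python
def audit(model_fn, test_cases, protected_key):
    """Share of paired decisions that flip when only the protected
    attribute changes. The model is treated strictly as a black box."""
    flips = 0
    for case in test_cases:
        flipped = dict(case, **{protected_key: 1 - case[protected_key]})
        if model_fn(case) != model_fn(flipped):
            flips += 1
    return flips / len(test_cases)

# Hypothetical scoring function standing in for the audited system.
def loan_model(applicant):
    return int(applicant["income"] > 50 or applicant["group"] == 1)

cases = [{"income": i, "group": 0} for i in range(30, 80, 5)]
print(audit(loan_model, cases, "group"))  # nonzero => attribute sways outcomes
```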

He suggests the creation of a class of technically skilled personnel, or "learned intermediaries," whose sole job will be to protect data rights. Learned intermediaries will be technical personnel trained to evaluate the output of machine-learning algorithms and detect bias on the margins, and legitimate auditors who must conduct periodic reviews of the data algorithms with the objective of making them stronger and more privacy-protective. They should be capable of indicating appropriate remedial measures if they detect bias in an algorithm. "For instance, a learned intermediary can introduce an appropriate amount of noise into the processing so that any bias caused over time due to a set pattern is fuzzed out," Matthan explains.
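
Matthan doesn't specify the noise mechanism; one plausible reading, sketched below with an arbitrary scale parameter, borrows the Laplace noise used in differential privacy: enough randomness that a fixed discriminatory pattern cannot be expressed reliably, while average behaviour is preserved.

```python
import numpy as np

rng = np.random.default_rng(1)

def fuzzed_score(raw_score, scale=0.05):
    # Zero-mean Laplace noise: scores stay unbiased on average, but any
    # set pattern in individual outputs is blurred.
    return raw_score + rng.laplace(loc=0.0, scale=scale)

scores = [0.62, 0.64, 0.61]  # suspiciously uniform outputs for one group
print([round(fuzzed_score(s), 3) for s in scores])
```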

That said, there still remain significant challenges in removing the bias once it is discovered.

"If you are talking about removing biases from algorithms and developing appropriate solutions, this is an area that is still largely in the hands of academia, and removed from the broader industry. It will take time for the industry to adopt these solutions on a larger scale," says Animesh Mukherjee, an associate professor at the Indian Institute of Technology, Kharagpur, who specializes in areas such as natural language processing and complex algorithms.

This is the first in a four-part series. The next part will focus on consent as the basis of privacy protection.

A nine-judge Constitution bench of the Supreme Court is currently deliberating whether or not Indian citizens have the right to privacy. At the same time, the government has appointed a committee under the chairmanship of retired Supreme Court judge B.N. Srikrishna to formulate a data protection law for the country. Against this backdrop, a new discussion paper from the Takshashila Institution has proposed a model of privacy particularly suited for a data-intense world. Over the course of this week we will take a deeper look at that model and why we need a new paradigm for privacy. In that context, we examine the increasing reliance on software to make decisions for us, assuming that dispassionate algorithms will ensure a level of fairness that we are denied because of human frailties. But algorithms have their own shortcomings, and those can pose a serious threat to our personal privacy.

Continue reading here:

When artificial intelligence goes wrong - Livemint

Posted in Artificial Intelligence | Comments Off on When artificial intelligence goes wrong – Livemint

Artificial Intelligence will lead to the human soul, not …

Posted: August 8, 2017 at 4:11 am

Elon Musk famously equated artificial intelligence with "summoning the demon" and sounds the alarm that AI is advancing faster than anyone realizes, posing an existential threat to humanity. Stephen Hawking has warned that AI could take off and leave the human race, limited by evolution's slow pace, in the dust. Bill Gates counts himself in the camp concerned about superintelligence. And, although Mark Zuckerberg is dismissive of AI's potential threat, Facebook recently shut down an AI engine after reportedly discovering that it had created a new language humans can't understand.

Concerns about AI are entirely logical if all that exists is physical matter. If so, it'd be inevitable that AI -- designed by our intelligence but built on a better platform than biochemistry -- would exceed human capabilities that arise by chance.

In fact, in a purely physical world, fully realized AI should be recognized as the appropriate outcome of natural selection; we humans should benefit from it while we can. After all, sooner or later, humanity will cease to exist, whether from the sun running out or something more mundane, including AI-driven extinction. Until then, wouldn't it be better to maximize human flourishing with the help of AI rather than forgoing its benefits in hopes of extending humanity's end date?

As possible as all this might seem, in actuality, what we know about the human mind strongly suggests that full AI will not happen. Physical matter alone is not capable of producing whole, subjective experiences, such as watching a sunset while listening to sea gulls, and the mechanisms proposed to address the known shortfalls of matter vs. mind, such as emergent properties, are inadequate and falsifiable. Therefore, it is highly probable that we have immaterial minds.

Granted, forms of AI are already achieving impressive results. These use brute force, huge and fast memory, rules-based automation, and layers of pattern matching to perform their extraordinary feats. But this processing is not aware, perceiving, feeling cognition. The processing doesn't go beyond its intended activities even if the outcomes are unpredictable. Technology based on this level of AI will often be quite remarkable and definitely must be managed well to avoid dangerous repercussions. However, in and of itself, this AI cannot lead to a true replication of the human mind.

Full AI -- that is, artificial intelligence capable of matching and perhaps exceeding the human mind -- cannot be achieved unless we discover, via material means, the basis for the existence of immaterial minds, and then learn how to confer that on machines. In philosophy the underlying issue is known as the qualia problem. Our awareness of external objects and colors; our self-consciousness; our conceptual understanding of time; our experiences of transcendence, whether simple awe in front of beauty or mathematical truth; or our mystical states -- all clearly point to something that is qualitatively different from the material world. Anyone with a decent understanding of physics, computer science and the human mind ought to be able to know this, especially those most concerned about AI's possibilities.

That those who fear AI don't see its limitations indicates that even the best minds fall victim to their biases. We should be cautious about believing that exceptional achievements in some areas translate to exceptional understanding in others. For too many -- including some in the media -- the mantra "question everything" applies only within certain boundaries. They never question methodological naturalism -- the belief that there is nothing that exists outside the material world -- which blinds them to other possibilities. Even with what seems like more open-minded thinking, some people seem to suffer from a lack of imagination or will. For example, Peter Thiel believes that the human mind and computers are deeply different yet doesn't acknowledge that this implies the mind comprises more than physical matter. Thomas Nagel believes that consciousness could not have arisen via materialistic evolution yet explicitly limits the implications of that because he doesn't want God to exist.

Realizing that we have immaterial minds, i.e. genuine souls, is far more important than just speculating on AI's future. Without immaterial minds, there is no sustainable basis for believing in human exceptionalism. When human life is viewed only through a materialistic lens, it gets valued based on utility. No wonder the young "nones" -- young Americans who don't identify with a religion -- think their lives are meaningless and some begin to despair. It is time to understand that evolution is not a strictly material process but one in which the immaterial mind plays a major role in the adaptation and selection of humans, and probably all sentient creatures.

Deep down, we all know we're more than biological robots. That's why almost everyone rebels against materialism's implications. We don't act as though we believe everything is ultimately meaningless.

We're spiritual creatures, here by intent, living in a world where the supernatural is the norm; each and every moment of our lives is our souls in action. Immaterial ideas shape the material world and give it true meaning, not the other way around.

In the end, the greatest threat that humans face is a failure to recognize what we really are.

If were lucky, what people learn in the pursuit of full AI will lead us to the re-discovery of the human soul, where it comes from, and the important understanding that goes along with that.

Bruce Buff is a management consultant and the author of the scientific-spiritual thriller "The Soul of the Matter" (Howard Books, September 13, 2016).

See the rest here:

Artificial Intelligence will lead to the human soul, not ...

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence will lead to the human soul, not …

Artificial Intelligence And Its Impact On Legal Technology (Part II … – Above the Law

Posted: at 4:11 am

Artificial intelligence (AI) is quickly coming into its own in terms of use by the legal industry. We are on the cusp of a revolution in the legal profession, led by the adoption of AI throughout the legal industry, but in particular by in-house lawyers. Much like how email changed the way we do business every day, AI will become ubiquitous, an indispensable assistant to practically every lawyer. But what is the future of AI in the legal industry? A bigger question is whether AI will actually replace lawyers, as seems to be implied above (a scary thought if you are new to the profession vs. an old-timer like me). And if so, are there ethical or moral dilemmas that should be considered regarding AI and the legal industry? When considering the future of AI in the industry, a few things are for sure. First, those who do not adopt and embrace the change will get left behind in some manner, and second, those who do embrace AI will ultimately find themselves freed up to do the two things there always seems to be too little time for: thinking and advising. Welcome to the second of a four-part series on AI; this article discusses whether lawyers should be concerned that AI will replace them.

Robot Lawyer Army?

In the first installment of this series, I wrote about what AI is, how it works, and its general impact on the legal industry and legal technology. In this article, I will tackle the question of whether AI will replace lawyers.

I am sorry to disappoint anyone who had visions of unleashing a horde of mechanical robot lawyers to lay waste to their enemies via a mindless rampage of bone-chilling logic and robo-litigation. That isn't happening, but it does paint a pretty cool picture of the robot lawyer army I've always wanted. Instead, what is most likely to happen comes down to three things.

1) Some legal jobs will be eliminated, e.g., those which involve the sole task of searching documents or other databases for information and coding that information are most at risk.

2) Jobs will be created, including managing and developing AI (legal engineers), writing algorithms for AI, and reviewing AI-assisted work product (because lawyers can never concede the final say or the provision of legal advice to AI).

3) Most lawyers will be freed from the mundane task of data gathering for the value-added tasks of analyzing results, thinking, and advising their clients. These are roles that will always require the human touch. AI will just be a tool to help lawyers do all of this better, faster, and more cost-effectively.

For more about the future of AI for in-house counsel, see the full version of this article. Or visit the larger Legal Department 2025 Resource Center from Thomson Reuters.

Sterling Miller spent over 20 years as in-house counsel, including being general counsel for Sabre Corporation and Travelocity. He currently serves as Senior Counsel for Hilgers Graben PLLC focusing on litigation, contracts, data privacy, compliance, and consulting with in-house legal departments. He is CIPP/US certified in data privacy.

View post:

Artificial Intelligence And Its Impact On Legal Technology (Part II ... - Above the Law

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence And Its Impact On Legal Technology (Part II … – Above the Law

Microsoft’s New Artificial Intelligence Mission Is Nothing To Dismiss – Seeking Alpha

Posted: at 4:11 am

Just when you thought you were getting to know Microsoft (MSFT), it goes and changes personalities.

Actually, the new-and-improved Microsoft has been making itself known for quite some time with a minimal amount of fanfare - it only became officially official last week. Given that the shift is apt to make an increasingly big difference in the company's results, though, fans and followers of the company would be wise to take a closer look at what Microsoft has become.

And what is this new focal point for CEO Satya Nadella? Take it with a grain of salt, because corporate slogans are as much a sales pitch as they are an ambition anymore. But, per the company's most recent annual filing with the SEC, Microsoft is now an "AI (artificial intelligence) first" outfit. Previous annual reports had suggested its focal point was mobile... a mission that ended with mixed results. While Microsoft has a strong presence in the mobility market in the sense that many of its cloud services are accessible via mobile devices, its smartphone dreams turned into nightmares.

It does beg the question though - what exactly does an AI-focused Microsoft look like when artificial intelligence was never a priority before?

The company did tout its AI acquisitions, though in light of the fact that AI is now the big new hot button, the deals Microsoft has made to date weren't touted enough (and certainly not framed within the context of its new mission).

As a quick recap, the more prescient artificial intelligence deals Nadella has made:

1. SwiftKey

Back in early 2016, Microsoft ponied up a reported $250 million to get its hands on a technology that predicts the word you're typing into your smartphone or tablet before you have to tap all the letters out. Some find it annoying because the word it guesses isn't always the one you want... a problem solved just by continuing to type. Others love the idea of not being forced to finish typing a word.

At first blush it seems superfluous, and truth be told, it is. It's not quite as meaningless as some have made it out to be though, in that users have largely come to expect such a feature from most of their electronics.
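
SwiftKey's engine is proprietary (and, in later versions, neural); the sketch below shows only the barest form of the idea, completing a prefix from the user's own typing history. The history string is invented for the example.

```python
from collections import Counter

# Words the user has previously typed; frequency is the ranking signal.
history = "the meeting is moved to monday the meeting room is booked".split()
frequency = Counter(history)

def complete(prefix, k=3):
    # Rank known words that extend the prefix by how often they were typed.
    matches = [w for w in frequency if w.startswith(prefix)]
    return sorted(matches, key=frequency.get, reverse=True)[:k]

print(complete("m"))  # e.g. ['meeting', 'moved', 'monday']
```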

2. Genee

Just a few months after acquiring SwiftKey last year, it bought chatbot specialist Genee, primarily to make its office productivity programs more powerful and easy to use. Users can simply speak into their computer to manipulate apps like Office 365. Its claim to fame is the ability to schedule meetings on a calendar just by understanding the context of an e-mail.

The tool in itself isn't the proverbial "killer app." In fact, Microsoft shut down Genee shortly after it bought it. It just didn't shut it down after ripping out the most marketable pieces of the platform and adding them to its bigger chatbot machine.

Microsoft has struggled with AI chat in the past - like Tay, which quickly learned to be racist - but it's getting very, very good at conversational instructions. And the establishment of a 100-member department aimed solely at improving artificial intelligence strongly suggests the company is going to keep working on its chat technologies until it gets them right.

3. Maluuba

It's arguably the most game-changing artificial intelligence acquisition Microsoft has made to date, even though it's the furthest away from being useful.

Maluuba was the Canadian artificial intelligence outfit Microsoft bought in January of this year. It was billed as a general AI company, which could mean a lot of different things. For Maluuba though, that meant building systems that could read (and comprehend) words, understand dialog, and perform common-sense reasoning.

A completely impractical but amazingly impressive use of that technology: Maluuba's technology was the platform that allowed a computer to beat the notoriously difficult Ms. Pac-Man video game for the Atari 2600. Even more interesting is how it happened. Microsoft essentially arranged a committee of different digital thought patterns with different priorities. That is, one AI's priority was to score as many points as possible. Another AI's priority was to eat the game's ghosts when they were edible. Yet another AI's purpose was avoiding those ghosts. All of the different 'committee' members negotiated each move Ms. Pac-Man made at any given time, based on the risk or reward of a particular (and ever-changing) scenario in the game.

The end result: The artificial intelligence achieved the best-ever known score for the game.
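
Microsoft has described this approach as a hybrid reward architecture; the sketch below is a loose, hand-made illustration of the committee vote, not Maluuba's system. The per-objective scores and weights are invented; in the real agent, each member's scores were learned.

```python
import numpy as np

ACTIONS = ["up", "down", "left", "right"]

# Each "committee member" scores the possible moves against its own
# objective; these numbers are hand-set stand-ins for learned values.
q_pellets = np.array([0.2, 0.1, 0.9, 0.3])   # score as many points as possible
q_eat     = np.array([0.0, 0.0, 0.1, 0.8])   # chase edible ghosts
q_avoid   = np.array([0.5, 0.9, -1.0, 0.2])  # flee dangerous ghosts

# Negotiate the move: weight each member's vote by its current importance
# (avoidance dominates when a ghost is close) and take the best action.
weights = np.array([1.0, 1.0, 2.0])
combined = weights @ np.vstack([q_pellets, q_eat, q_avoid])

print(ACTIONS[int(np.argmax(combined))])  # -> 'down' with these numbers
```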

It remains to be seen how that premise will be applied in the future, but it's got a lot of potential. It's one of the few artificial intelligence platforms that had to reason its way through a problem created by an outside, third-party source rather than one that was built from the ground up to perform a very specific, limited function.

Getting a bead on the nascent artificial intelligence market is tough. There's no shortage of outlooks. There's just a shortage of history and understanding about what artificial intelligence really is and how it can be practically commercialized.

To the extent AI's potential can be quantified, though, PricewaterhouseCoopers thinks it will create an additional $16 trillion worth of commerce over the course of the coming ten years... that's above and beyond what would have been created without it.

In other words, that's not the likely market size for artificial intelligence software, hardware and services - that figure will be smaller. Tractica thinks the actual amount of spending on AI services and hardware will be on the order of $16 billion by 2025... a number that seems reasonable and rational, though also somehow seems small relative to the value artificial intelligence will have to enterprises. In fact, others think (when factoring in the underlying software and related services that will mature with AI) the artificial intelligence market will be worth $59 billion by 2025.

Whatever's in the cards, it's a worthy market to address, and Microsoft is surprisingly almost as well equipped to run the race as its peers and rivals. Though meaningful revenue is still a few years off, the new Microsoft mantra is one that matters, in that it's a viable growth engine for the company.

In other words, take Microsoft's AI ambitions as seriously as you should have taken its cloud-computing ambitions a couple of years ago.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

See the original post here:

Microsoft's New Artificial Intelligence Mission Is Nothing To Dismiss - Seeking Alpha

Posted in Artificial Intelligence | Comments Off on Microsoft’s New Artificial Intelligence Mission Is Nothing To Dismiss – Seeking Alpha

Six disturbing predictions about how artificial intelligence will transform life on Earth by 2050 – Mirror.co.uk

Posted: at 4:11 am

We all know that the world is being transformed by technology, but a leading artificial intelligence expert has made a series of predictions that put these changes into harsh perspective.

In his new book, It's Alive!: Artificial Intelligence from the Logic Piano to Killer Robots, Professor Toby Walsh paints a horrifying picture of life in 2050.

From autonomous vehicles to robot managers, humans will be at the mercy of artificially intelligent computers that will control almost every aspect of our lives.

As people's role in society diminishes, they will retreat further and further into virtual worlds, where they will be able to live out their darkest fantasies without fear of recrimination.

"By 2050, the year 2000 will look as quaintly old-fashioned as the horse drawn era of 1900 did to people in 1950," said Walsh, who is professor of artificial intelligence at the University of New South Wales in Australia.

Here are some of his most bone-chilling predictions about life in 2050:

Work is already underway to build cars that can drive themselves, but by 2050, Professor Walsh predicts that humans will be banned from driving altogether.

The vast majority of road accidents are caused by human error, he argues, so autonomous vehicles will make the roads inherently safer and less congested.

As self-driving cars become more ubiquitous, most people will lose their driving skills, and street parking will disappear.

Eventually, ships, planes and trains will also become autonomous, allowing goods to be transported all over the world without human intervention.

"If we can take the human out of the loop, we can make our roads much safer," said Professor Walsh.

As computers become more "intelligent", AI systems will increasingly manage how you work - from scheduling your tasks and approving holidays to monitoring and rewarding your performance.

They could even be put in charge of hiring and firing employees, looking at qualifications and skill sets to match people with jobs.

Professor Walsh points out that matching people with jobs is no more complicated than matching people with each other - something that we already rely on dating sites to do for us.

However, he admits there are some decisions that machines should not be allowed to make.

"We will have to learn when to say to computers: 'Sorry, I can't let you do that,'" he said.

If you're not answering to a computer, then you've probably been replaced by one.

Robots are already replacing humans in many factories and customer service roles, but by 2050, the same technology will have eliminated many middle-class "white collar" jobs.

The news will be written by artificially intelligent computers and presented by avatars and chatbots, which will tailor content to viewers' personal preferences.

Robots will surpass athletes on the sports field, exhibiting greater speed, accuracy and stamina than their human counterparts, and data scientists will be some of the best paid members of football clubs.

Even doctors will be largely replaced by AI physicians that will continually monitor your blood pressure, sugar levels, sleep and exercise, and record your voice for signs of a cold, dementia or a stroke.

"Our personal AI physician will have our life history, it will know far more about medicine than any single doctor, and it will stay on top of all the emerging medical literature," Professor Walsh said.

As society becomes less and less reliant on human input, people will become increasingly absorbed in virtual worlds that merge the best elements of Hollywood and the computer games industry.

Viewers will have complete control over the course of events, and avatars can be programmed to act and talk like anyone they choose - including long-dead celebrities like Marilyn Monroe.

However, there will be increasing concern about the seductive nature of these virtual worlds, and the risk of addicts abandoning reality in order to spend every waking moment in them.

They could also give people the opportunity to behave in distasteful or illegal ways, or live out their darkest fantasies without fear of recrimination.

"This problem will likely trouble our society greatly," Professor Walsh said. "There will be calls that behaviours which are illegal in the real world should be made illegal or impossible in the virtual."

Governments already rely heavily on hacking and cyber surveillance to gather intelligence about foreign enemies, but they will increasingly use these tools to carry out attacks.

Artificial intelligence will quickly surpass human hackers, and the only defence will be other AI programs, so governments will be forced to enter a cyber arms race with other nation states.

As these tools make their way onto the dark web and into the hands of cyber criminals, they will also be used to attack companies and financial institutions.

"Banks will have no choice but to invest more and more in sophisticated AI systems to defend themselves from attack," said Professor Walsh.

Humans will become further and further removed from these crimes, making tracking down the perpetrators increasingly difficult for law enforcement authorities.

If you thought that death would be sweet relief from this dystopian vision of the future, you can think again.

In 2050, humans will live on as artificially intelligent chatbots after they die, according to Professor Walsh.

These chatbots will draw from social media and other sources to mimic the way you talk, recount the story of your life and comfort your family when you die.

Some people might even give their chatbot the task of reading their will, settling old scores, or relieving grief through humour.

This will of course raise all kinds of ethical questions, such as whether humans have a right to know if they're interacting with a computer rather than a real person, and who can switch off your bot after you die.

"It will be an interesting future," said Professor Walsh.

Read the original here:

Six disturbing predictions about how artificial intelligence will transform life on Earth by 2050 - Mirror.co.uk

Posted in Artificial Intelligence | Comments Off on Six disturbing predictions about how artificial intelligence will transform life on Earth by 2050 – Mirror.co.uk
