The Artificially Intelligent Investor: AI And The Future Of Stock Picking – Forbes

A computer can recognize a cat. Can it spot a bargain stock?

Sitting in a business school lecture on hedge funds four years ago, Chidananda Khatua got the inspiration to answer this question. A veteran Intel engineer working on a nights-and-weekends M.B.A. at UC Berkeley, Khatua imagined that something powerful might come out of the ability to blend precise financial data with the fuzzier information to be found in annual reports and news articles.

For most of their history on Wall Street, computers have been strictly quantitative: dividing, say, prices by earnings and ranking the results. But that is destined to change. A dramatic demonstration of silicon's verbal potential came in 2011, when an IBM system called Watson bested two human champions at Jeopardy! To accomplish this feat the computer had to grasp not just numbers but genealogical relationships, time, proximity, causality, taxonomy and a lot of other connections.

Put that kind of artificial intelligence to work and it could do a lot more than win TV game shows. It might function as a physician's assistant, as a recommender of products to consumers or as a detector of credit card fraud. Maybe it could manage portfolios.

Khatua, now 44, enlisted two B-school classmates in his venture. Arthur Amador, 35, had spent much of his career at Fidelity Investments advising wealthy families. Christopher Natividad, 37, was a money manager for corporations.

They didn't have any illusions that a computer would have understanding the way humans do. But it could have knowledge. It could glean facts, a mountain of them, and search for patterns and trends in the securities markets. Perhaps it could make up in brute force what it lacked in intuition.

The trio chipped in savings of their own and $735,000 from angel investors to create EquBot, an advisor to exchange-traded funds. IBM, eager to showcase its artificial intelligence offerings, gave the entrepreneurs a $120,000 credit toward software and hardware bills.

Two years ago EquBot opened up AI Powered Equity ETF, with a portfolio updated daily on instruction from computers. In 2018 it added AI Powered International Equity.

Chief Executive Khatua presides over a tiny staff in San Francisco and 17 programmers and statisticians in Bangalore, India. The system swallows 1.3 million texts a day: news, blogs, social media, SEC filings. IBM's Watson system digests the language, picking up facts to feed into a knowledge graph of a million nodes.

Each of those dots to be connected could be a company (one of 15,000), a keyword (like "FDA") or an economic factor (like the price of oil). There are a trillion potential arrows to link them. After trial and error inside a neural network, which mimics the neuronal connections in a brain, the computer weights the few arrows that matter. Thus does the system grope its way toward knowing which ripples in input data are felt a week, a month or a year later in stock prices.
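The idea of weighted arrows between nodes can be made concrete with a toy sketch. This is not EquBot's system; the node names, tickers and weights below are invented for illustration, and the one-hop propagation stands in for what would really be a trained neural network.

```python
# Toy knowledge graph: nodes are companies, keywords or economic factors;
# edges carry learned weights estimating how strongly a signal at one node
# ripples into another. All names and weights here are hypothetical.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # edges[source][target] = weight assigned by training
        self.edges = defaultdict(dict)

    def link(self, source, target, weight):
        self.edges[source][target] = weight

    def signal(self, node, strength=1.0):
        """Propagate a one-hop signal from `node` to its neighbors."""
        return {t: strength * w for t, w in self.edges[node].items()}

g = KnowledgeGraph()
g.link("FDA approval", "AMRN", 0.8)    # regulatory news -> drug stock
g.link("store closings", "V", 0.3)     # retail news -> card volume
print(g.signal("FDA approval"))
```

In a real system the weights would be fitted by the network against later price moves, and signals would propagate across many hops rather than one.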

On a busy day EquBot is doing half a quadrillion calculations. Thank goodness for Nvidia's graphics chips. These slivers of silicon were designed to keep gamers happy by simultaneously processing different pieces of a moving image. They turned out to be ideal for the intensely parallel computational streams of neural networks, and they power the computer centers that Amazon rents out to EquBot and other AI researchers.

Last year EquBots software picked up a buzz around Amarin Corp., an Irish drug company with a prescription-only diet supplement that uses omega-3 fatty acids. The international ETF got in below $3, well before the regulatory nod that sent the stock to $15. Another move involved adding Visa to the domestic fund after the system measured ripples leading from announcements of chain-store closings toward higher credit card volume.

The computer has its share of duds. It fell in love with NetApp and New Relic, perhaps reacting to a flurry of excitement in cloud computing. The stocks sank. Not to worry, says Khatua. Neural networks learn from mistakes.

It's too early to say whether EquBot, which manages only $120 million, will succeed. So far its U.S. fund has lagged behind the S&P 500 by an annualized 3 percentage points, while the international one is running 6 points ahead of its index.

EquBot, which says its funds are the only actively managed ETFs using AI, won't have this turf to itself for long. IBM is selling AI up and down Wall Street. Donna Dillenberger, an IBM scientist in Yorktown Heights, New York, is working on a stock market model with millions of nodes, and she says billion-node systems are around the corner.

An equally large threat comes from those human analysts Khatua is trying to put out of work. They can track drug trials or notice that Amazon doesn't take cash. What EquBot has in its favor is the explosion in digitized data and a comparable growth in chip power. Humans can't keep up with all the connections.

"Ninety percent of the data in existence was created in the past two years," says Art Amador, EquBot's chief operating officer. "In two years that will still be true."

Defining the Scope of an Artificial Intelligence Project – Toolbox

A final consideration in project selection is determining the appropriate size of the project. It must be matched with available resources. Will there be enough time, money, people, or development equipment? What would be the attitude of the domain experts? If favorable, would they be able to devote the effort the project would require of them? How many of the modules are available off the shelf as SaaS or open source? Predicting the availability of required resources realistically is an important aspect of project selection.

An attraction of AI technology is its effectiveness in solving problems that contain uncertainty, ambiguity, or complexity. However, it is still necessary to put some bounds on these factors to have a successful project. If the bounds cannot be determined accurately, particularly for early AI projects, a different application should be considered. The same comment applies to applications where the knowledge base may be incomplete. In such cases, would a partial solution be useful or acceptable? On the other hand, it is tempting to incorporate too much knowledge in the system. Even though the addition of knowledge increases the performance of the system, potential problems with redundancies and inefficiencies could be encountered. Either of these circumstances would substantially increase the scope and cost of the project.

As noted previously, there are many good applications of AI technology which do not have the goal of replacing human experts. Rather, the intent is to assist the experts to do a better job or to improve their work environment. Limiting, at least initially, the extent of assistance to the user enables a more accurate estimate of project size. Another aid in limiting the scope of a system is to prescribe the range of problems that it is intended to solve. For example, a diagnostic system could be designed to handle the 20 percent of potential faults that cause 80 percent of the problems.
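The 80/20 scoping rule above can be sketched as a simple selection procedure: rank fault types by observed frequency and keep only enough of them to cover the target share of problems. The fault names and counts below are invented for illustration.

```python
# Hypothetical sketch of 80/20 scoping: choose the smallest set of fault
# types that together account for a target fraction of observed problems.
def scope_faults(fault_counts, coverage=0.8):
    total = sum(fault_counts.values())
    chosen, covered = [], 0
    # Consider the most frequent faults first.
    for fault, count in sorted(fault_counts.items(),
                               key=lambda kv: kv[1], reverse=True):
        if covered / total >= coverage:
            break
        chosen.append(fault)
        covered += count
    return chosen

counts = {"sensor drift": 50, "loose cable": 30,
          "firmware bug": 15, "power fault": 5}
print(scope_faults(counts))  # the few fault types covering 80% of problems
```

With these example counts, two of the four fault types already cover 80 percent of the problems, so the initial diagnostic system could be scoped to handle only those.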

Facebook's Jerome Pesenti Explains the Limitations of Artificial Intelligence Research – NullTX

Major developments continue to take place in the artificial intelligence industry. Facebook's Jerome Pesenti thinks the current model of deep learning is reaching its limits, however.

Dozens of companies are in the process of exploring the potential of artificial intelligence.

Virtually all of these companies have scientists and engineers pushing the boundaries of deep learning.

The development of new algorithms has allowed for some intriguing insights and developments over the years.

Unfortunately, it would appear that the current strategy involving deep learning may hit a glass ceiling sooner rather than later.

Those are the findings of Jerome Pesenti, head of artificial intelligence at Facebook.

In a recent interview with Wired, Pesenti acknowledges how deep learning and current artificial intelligence have severe limitations.

Achieving human intelligence, while still an attainable goal, will not happen any time soon.

Thankfully, there is still progress being made to address some limitations.

Taking into account how the artificial intelligence space is still evolving and growing in 2019, there are still millions of options left unexplored.

One aspect no one can ignore is how the compute power required to research advanced AI continues to increase twofold every three years or so.
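The cited growth rate compounds quickly, which is worth making explicit. Taking the article's "twofold every three years" figure at face value, the required compute after n years is 2 raised to n/3 times today's level:

```python
# Back-of-the-envelope arithmetic on the growth claim: if required compute
# doubles every three years, demand after `years` is 2 ** (years / 3)
# times today's level. The doubling period is the article's figure.
def compute_multiple(years, doubling_period=3):
    return 2 ** (years / doubling_period)

print(compute_multiple(3))   # one doubling period -> 2x
print(compute_multiple(12))  # four doubling periods -> 16x
```

At that rate a research budget that is affordable today becomes sixteen times larger within twelve years, which is the sustainability problem Pesenti points to.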

Pesenti confirms this problem exists, and highlights the need for scaling if any more progress is to be made.

At the same time, he is convinced the rate of progress for advanced artificial intelligence is not sustainable through this model.

Rising costs make it rather unattractive to conduct these levels of experiments today.

Aural Analytics Joins Consumer Technology Association Initiative to Set New Standards for Artificial Intelligence in Healthcare – Business Wire

SCOTTSDALE, Ariz.--(BUSINESS WIRE)--Aural Analytics, Inc., a privately held digital health company developing the world's most advanced speech analytics platform, today announced its participation in the Consumer Technology Association (CTA) initiative to develop new standards and best practices for the use of artificial intelligence (AI) in healthcare.

The CTA AI in Healthcare Working Group, which comprises more than 45 organizations, from major tech companies to health care industry leaders, aims to ultimately enhance health outcomes, improve efficiencies and reduce health care costs.

"Aural Analytics is pleased to be working alongside an impressive roster of innovators from across the ecosystem to define standards governing how all modalities, including voice, will be used in healthcare," said Visar Berisha, Ph.D., co-founder, chief analytics officer, Aural Analytics and a member of the working group.

Aural Analytics' proprietary platform tracks and analyzes vocal biomarkers (components of speech) that detect and measure subtle, clinically relevant speech changes in patients with neurological conditions that impact speech and language.

"Advancing the tremendous potential of artificial intelligence within healthcare requires a rigorous approach and a common understanding of the challenges such as privacy and confidentiality," said Daniel Jones, co-founder, chief executive officer, Aural Analytics. "We support CTA and its strategic approach to setting standards in voice and other important modalities that will have far-reaching impact within the context of healthcare."

"AI has an increasingly significant role in health care today by improving diagnosis, treatment and care," said Rene Quashie, vice president, digital health policy and regulatory affairs, CTA. "Across the sector, we are seeing life-changing tech revolutionize health care and some great examples of that will be seen at CES 2020. We convened this group of industry experts to address the challenges of using AI in health care and build an informed framework. We're excited to have Aural Analytics participate in the initiative and provide their expertise to this important work."

About Aural Analytics, Inc.

Aural Analytics, Inc. is a privately held digital health company developing the world's most advanced speech analytics platform, built on a foundation of 25 years of speech neuroscience research and data. The Company's platform technology is based on pioneering research from Arizona State University and reinforced by multiple high-caliber peer-reviewed publications. Winner of the 2017 Global SCRIP Award for Best Technology in Clinical Trials, Aural Analytics' first-to-market technology platform powers health applications all over the world. The Company maintains headquarters in Scottsdale, Ariz. For more information, please visit auralanalytics.com or follow Aural Analytics on Twitter, LinkedIn, Medium and Facebook.

TECH 2019: stalls related to technology, artificial intelligence a big draw – The Hindu

After two successful editions, the Transforming Education Conference for Humanity (TECH) 2019 held its third edition at Hotel Novotel. The three-day conference that commenced on Tuesday was packed with sessions from academicians and entrepreneurs. Apart from that, the conference also had stalls put up by start-ups from across the country related to technology and artificial intelligence.

The venue had over 13 stalls that brought together the advancements in technology and their application in education. Happy Adda, a Bengaluru-based startup, helped children learn the basics of English and the application of numbers through games that can be downloaded on smartphones and tablets. The app, available for free on Google Play, aims to help children build cognitive skills in a fun way. "Technology is changing the world around us and it is essential that we adapt to the changes and use it as an advantage. Learning need not essentially be a boring process; several start-ups like ours are working towards making studies fun," said Raja Sekhar Vasa, co-founder of Happy Adda. The app, through its specially crafted games, tries to improve English language skills, focussing ability and reasoning skills in its users.

The largest crowd gathered near Bibox's stall, which was educating visitors about artificial intelligence. As the founder of the start-up, Sandeep Senan, explained the need to develop artificial intelligence, robots made by his company entertained people by walking around the venue and doing stunts. "It is necessary that we encourage our children to experiment, as only through experimenting will they learn better and move beyond theoretical knowledge," said Mr. Senan.

The conference also had on display science experiments made by students of APSWER School Centre of Excellence, Madhurawada. The students made models from everyday items to solve issues like a warning system for open manholes or cheaper alternatives to expensive farm equipment.

Artificial intelligence will affect Utah more than other states, new study says – Deseret News

SALT LAKE CITY – Utah's economy could be more affected by artificial intelligence than those of other states, according to a new study by the Brookings Institution, a Washington, D.C.-based think tank.

The study predicted that Salt Lake and Ogden-Clearfield would be among the 10 top regions in the United States for workforces impacted by artificial intelligence.

Up to this point, most research surrounding the impact of technology on employment has focused on the effect of automation on blue-collar jobs, like clerical, manufacturing or construction jobs.

But this report predicted that in the future, artificial intelligence will actually have the most profound impact on white-collar fields like law, engineering and science. That includes tech-based economies like Utah's Silicon Slopes.

"Among the most AI-exposed large metro areas are San Jose, California; Seattle; Salt Lake City; and Ogden, Utah – all high-tech centers," the study states.

What explains this shift in the types of jobs affected by artificial intelligence, and how will Utah and other states across the country be affected by it?

Who could be affected?

Much of the discussion about artificial intelligence's potential impact on the future, whether optimistic or apocalyptic, lumps it in with other forms of automation, including robotics and software, the report states.

But the role of artificial intelligence in the future economy should be considered on its own, said Mark Knold, senior and supervising economist for the Utah Department of Workforce Services.

That's because artificial intelligence involves programming computers to perform tasks which, if done by humans, would require intelligence, such as learning, reasoning, problem-solving, prediction or planning, as defined by the report.

This means that artificial intelligence could potentially replace jobs which involve not just what humans can do, but how humans think. That's why artificial intelligence could threaten jobs that are typically considered white collar, because they primarily deal with human reasoning and problem solving, said Knold.

"Artificial intelligence will be a significant factor in the future work lives of relatively well-paid managers, supervisors and analysts," the report states.

But Dan Ventura, a professor of computer science at Brigham Young University, said that right now, artificial intelligence's primary strength is in tasks involving pattern recognition, such as facial recognition or medical diagnostics, and even those advancements have been subject to criticism for inaccuracy and racial bias.

"AI is getting really good at pattern recognition and finding patterns in data, better than humans in some cases," said Ventura. "I can say with some level of confidence that the types of jobs that involve that kind of work are potentially vulnerable to being displaced by AI."

But artificial intelligence is nowhere near being able to take on such complex tasks as making judgments and complex decisions, said Ventura. For example, artificial intelligence could detect the presence of a tumor but it would take a human doctor to decide whether to operate, perhaps in concert with discussions with the patient or the patients family.

"The kinds of jobs where there's a lot more judgment, subjectivity, human impact, they aren't even in the ballpark of being able to do something like that right now," said Ventura. "I don't think those kinds of jobs are in any kind of danger in the near future."

The upshot

Both Ventura and Knold say AI shouldn't be viewed only through the lens of fear.

While some industries are likely to be disrupted and some jobs will become obsolete, Ventura predicts, artificial intelligence could also actually create new jobs.

Those jobs could be complementary to work performed by artificial intelligence, such as quality control, or could involve making decisions about information produced through artificial intelligence. In some professions, artificial intelligence could speed up or take care of the busy work, said Ventura, leaving the human professionals more time and resources to focus on decision-making or qualitative analysis.

Knold added that demographic trends indicate that as the baby boomer generation ages out of the workforce, the younger generation, which is less populous, won't supply enough workers to replace the jobs the older generation has vacated.

"In the future, when you have less human brains around, artificial brains could become more valuable and more profitable," he said.

Artificial intelligence could help companies thrive even with fewer human workers, he said.

"The fear is that AI will replace workers and you'll have higher unemployment," said Knold. "But I think what it will do is help replace missing workers and not displace existing work."

Ventura said that while making such predictions is important to help people start thinking about what careers and skills to build to be prepared for the future, the technology is still rapidly developing, and it's very difficult to know how it might actually affect the workforce.

"It's important to take this kind of analysis with a grain of salt," said Ventura. "Predicting the future is notoriously difficult."

China Will Outpace US Artificial Intelligence Capabilities, But Will It Win The Race? Not If We Care About Freedom – Forbes

We've all heard that China is preparing itself to outpace not only the United States but every global economy in Artificial Intelligence (AI). China is graduating two to three times as many engineers each year as any other nation, the government is investing to accelerate AI initiatives, and, according to Kai-Fu Lee in a recent Frontline documentary, China is now producing more than 10x more data than the United States. And if data is the new oil, then China, according to Lee, has become the new Saudi Arabia.

It's clear that China is taking the steps necessary to lead the world in AI. The question that needs to be asked is: will they win the race?

According to the Frontline documentary, China's goal is to catch up to the United States by 2025 and lead the world by 2030. If things stay the way they are, I do believe China will outpace the United States in technical capabilities, available talent, and data (if they're not already). However, I also believe that eventually the Chinese system will either implode, not be adopted outside of China, or both. Why? Let me explain.

A recent report from Freedom House shows that freedom on the internet is declining, and has been for quite some time. Study after study shows that when we know we're being surveilled, our behaviors change. Paranoia creeps in. The comfort of being ourselves is lost. And, ultimately, society is corralled into a state of learned helplessness where, like dogs with shock collars, our invisible limits are not clearly understood or defined but learned over time through pain and fear. This has been shown to lead to systemic mental illness ranging from mass depression to symptoms of PTSD and beyond.

Not so ironically, we're seeing a realization of these impacts within society, especially among the tech-literate, younger generations. A recent study from Axios found that those age 18-34 are least likely to believe "It's appropriate for an employer to routinely monitor employees using technology" and most likely to "change their behavior if they know their employer was monitoring them." A deeper impact of this type of surveillance, what Edward Snowden has deemed our "Permanent Record," can be read about in a recent New York Times article about Cancel Culture among teens. People, especially the younger generations, don't want to be surveilled or to have their past mistakes held against them in perpetuity. And if they're forced into it, they'll find ways around it.

In the Freedom House report, China is listed as one of the worst nations in the world for this type of behavior. This is also spoken to in the Frontline documentary, where it was mentioned that China's powerful new social credit score is changing behavior by operationalizing an Orwellian future. Some places in China have gone as far as requiring facial recognition to get access to toilet paper in public restrooms. Is this the life we want to live?

If the system continues this way, people will change their behavior. They will game the system. They will put their devices down and do things offline when they don't want to be tracked. They will enter false information to spoof the algorithms when they're forced to give up information. They will wear masks or create new technologies to hide from facial recognition systems. In short, they will do everything possible to not be surveilled. And in doing so, they will provide mass amounts of low-quality, if not entirely false, data, poisoning the overarching system.

If China continues on its current path of forced compliance through mass surveillance, the country will poison its own data pool. This will lead to a brittle AI system that only works for compliant Chinese citizens. Over time, their system will cripple.

Great AI requires a lot of data, yes, but the best AI will be trained on the diversity of life. Its data will include dissenting opinions. It will learn from, adapt to, and unconditionally support outliers. It will form to, and shepherd, the communities it beholds, not the other way around. And if it does not, those of us living in a democratic world will push back. No sane, democratic society will adopt such a system unless forced into it through predatory economic activity, war, or both. We are already seeing an uprising against the surveillance systems in Europe and the United States, where privacy concerns are becoming mainstream news and policies are now being put into place to protect the people, whether tech companies like it or not.

If our democratic societies decide to go down the same path as China because they're afraid we won't keep up with societies that don't regulate, then we're all bound to lose. A race to the bottom, a race to degrade privacy and abuse humanity in favor of profit and strategic dominance, is not one we will win. Nor is it a race we should care to win. Our work must remain focused on human rights and democratic processes if we hope to win. It cannot come down to an assault on humanity in the form of pure logic and numbers. If it does, we, as well as our democratic societies, will lose.

So what's the moral of this story? China will outpace the United States in Artificial Intelligence capabilities. But will it win the race? Not if we care about freedom.

Artificial intelligence apps, Parkinson's and me – BBC News

In my work as a journalist I am lucky enough to meet some brilliant people and learn about exciting advances in technology - along with a few duds.

But every now and then I come across something that resonates in a deeply personal way.

So it was in October 2018, when I visited a company called Medopad, based high up in London's Millbank Tower.

This medical technology firm was working with the Chinese tech giant Tencent on a project to use artificial intelligence to diagnose Parkinson's Disease.

This degenerative condition affects something like 10 million people worldwide. It has a whole range of symptoms and is pretty difficult to diagnose and then monitor as it progresses.

Medopad's work involves monitoring patients via a smartphone app and wearable devices. It then uses a machine learning system to spot patterns in the data rather than trying to identify them by human analysis.

In its offices we found one of its staff being filmed as he rapidly opened and closed his fingers - stiffness in these kinds of movements is one of the symptoms of Parkinson's.

As we filmed him being filmed, I stood there wondering whether I should step in front of the camera and try the same exercise.

For some months, I had been dragging my right foot as I walked and experiencing a slight tremor in my right hand.

I had first dismissed this as just part of getting older, but had eventually gone to see my GP.

She had referred me to a consultant neurologist, but at the time of filming I was still waiting for my appointment.

As we left Medopad, I clenched and unclenched my fingers in the lift and reflected on what I had seen. A few days later my coverage of the project appeared on the BBC website.

Three months on, in January this year, I finally met the consultant.

She confirmed what I had long suspected - I was probably suffering from idiopathic Parkinson's Disease. The "idiopathic" means the cause is unknown.

As I got to grips with the condition and started a course of medication, I quickly found out that there are all sorts of unknowns for people with Parkinson's.

Why did I get it? How quickly will the various symptoms develop? What are the hopes of a cure?

There are no reliable answers.

My response has been to take a great interest in how the technology and pharmaceutical industries are investigating the condition.

Developments in artificial intelligence, coupled with the availability of smartphones, are opening up new possibilities, and this week I returned to Medopad to see how far it had progressed.

I asked the firm's chief executive, Dan Vahdat, whether he had noticed anything that suggested I might have a special interest in Parkinson's when I first visited.

"I don't think we noticed anything specifically," he said.

"But - and that's weird for me to tell you this - I had this intuition that I wanted to get you to do the test."

That, of course, did not happen but over the last year there has been a clinical trial involving London's King's College Hospital.

People with Parkinson's have been given a smartphone app, which their relatives use to record not just that hand-clenching exercise but other aspects of the way they move.

"We think this technology can help to quantify the disease," Dan explained.

"And if you can quantify the disease, it means you can see how the disease progresses.

"It gives you lots of opportunities, in terms of treatment adjustments, interventions at the right time, potentially screening a larger cohort of patients with the technology in ways that were not possible before."

This made me think about my own situation.

Since February, I have been prescribed Sinemet - one of the most common Parkinson's drugs - in the form of two tablets taken three times a day.

While some patients see an instant impact, I cannot say I notice much effect.

If anything, my main symptom, dragging my right foot, has got slightly worse. When I see my consultant every four months we discuss whether the prescription should be adjusted, but it is difficult for me to quantify my symptoms.

Dan told me this was exactly the kind of scenario they are trying to address.

"We think you will end up having a more continuous observation via machine and the doctors can look at it remotely. And with that they will be able to adjust your treatment, if needed, because potentially right now you're either overdosing or underdosing."

I am now going to get access to the trial app and look forward to finding out what it says about me.

This is just one of many projects run by a variety of companies where real-time data is collected from people with Parkinson's and other conditions via their handsets.

The search for a cure to Parkinson's goes on. We appear to be a long way off, but in the meantime quantifying a condition like mine could do a lot to improve how I and many others cope with the symptoms.

What is exciting to me is that the smartphone revolution, which I have documented since watching Steve Jobs unveil the iPhone in 2007, now promises to change healthcare just as it has transformed many other aspects of our lives.

And I hope to continue reporting on that revolution for many more years.

Will the next Mozart or Picasso come from artificial intelligence? No, but here’s what might happen instead – Ladders

As artificial intelligence has slowly become a more mainstream term, a question has been rumbling in the art community:

Will AI replace creativity?

It's a fantastic question, to tell you the truth, and it certainly shows what sorts of problems we're wrestling with as a society in today's day and age.

First, it's important to consider what our definition of art is in the first place. A very broad definition within the art world would be, "Anything created by a human to please someone else." That's what makes something art. In this sense, photography is an art. Videography is an art. Painting, music, drawing, sculpture, all of these things are done to evoke an emotion, to please someone else, created by one human and enjoyed by another.

Stage one: AI became a trendy marketing phrase used by everyone from growth hackers to technologists, with the intention of getting more eyeballs on their work, faster. So the term AI actually made its way into the digital art world faster than the technology itself, since people would use the term to make what they were building seem more cutting-edge than anything else in the space, regardless of whether or not it was actually utilizing true artificial intelligence.

Stage two: Companies saw the potential artificial intelligence had in being able to provide people (in a wide range of industries) with tools to solve critical problems. For example, we use data science at Skylum to help photographers and digital content creators be more efficient when performing complex operations, like retouching photos, replacing backgrounds, etc. We use AI to make the process of creating the art more efficient, automating the boring or tedious tasks so that artists can focus more time and energy on the result instead of the process.
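To make the background-replacement idea concrete, here is a minimal sketch (not Skylum's actual pipeline, and the function name is hypothetical): once an AI model has produced a per-pixel segmentation mask separating subject from background, swapping in a new background reduces to a simple alpha composite.

```python
import numpy as np

def replace_background(image, new_background, mask):
    """Blend `image` over `new_background` using `mask` (1.0 = subject, 0.0 = background)."""
    mask = mask[..., np.newaxis]  # add a channel axis so the mask broadcasts over RGB
    return mask * image + (1.0 - mask) * new_background

# Tiny 2x2 RGB example: the left column is "subject", the right column gets swapped out.
subject = np.full((2, 2, 3), 0.8)            # bright subject pixels
sky = np.zeros((2, 2, 3))                    # dark replacement background
mask = np.array([[1.0, 0.0], [1.0, 0.0]])    # mask as a segmentation model might emit it

result = replace_background(subject, sky, mask)
# Subject pixels keep their value (0.8); masked-out pixels take the new background (0.0).
```

The hard part, of course, is producing the mask itself, which is where the trained model comes in; the compositing step above is the easy, mechanical tail end of the workflow.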

There's a great article in Scientific American titled "Is Art Created by AI Really Art?" And the answer is both yes and no.

It's not that artificial intelligence will fundamentally replace human artists. It's that AI will lower the barrier to entry in terms of skill, and give the world access to more creative minds because of what can be easily achieved using digital tools. Art will still require a human vision; however, the way that vision is executed will become easier, more convenient, less taxing, and so on.

For example, if you are only spending one day in Paris, and you want to capture a particular photograph of the Eiffel Tower, that day might not be the best day for your photo. The weather might be terrible, there might be thousands of people around, etc. Well, you can use artificial intelligence to not only remove people from the photograph but even replace the Eiffel Tower with a higher-resolution picture of the tower (from a separate data set), or change the sky, the weather, etc.

The vision is yours, but suddenly you are not limited by the same constraints to execute your vision.

Digital art tools are built to make the process as easy as possible for the artist. If you consider the history of photography as an art, back in the film days far more time was spent developing film than actually taking pictures. This is essentially the injustice technologists are looking to solve. The belief in the digital art community is that more time shouldn't be spent doing all the boring things required for you to do what you love. Your time should be spent doing what you love and executing your vision, exclusively.

Taking this a step further, a photographer today usually spends 20-30% of their time giving a photo the look and feel they want, but 70% of their time selecting an object in Photoshop or whichever program they're using, cutting things out, creating a mask, adding new layers, etc. In this sense, the artist is more focused on the process of creating their vision, which is what creates a hurdle for other artists and potentially very creative individuals to even get into digital art creation. They have to learn these processes and these skills in order to participate, when in actuality they may be highly capable of delivering a truly remarkable result, if only they weren't limited, either by their skills, their environment, or some other challenge.

So, artificial intelligence isn't here to replace the everyday artist. If anything, the goal of technology is to allow more people to express their own individual definition of art.

There may be more Mozarts and Picassos in our society than we realize.

This article first appeared on Minutes Magazine.

52 ideas that changed the world: 26. Artificial intelligence – The Week UK

In this series, The Week looks at the ideas and innovations that permanently changed the way we see the world. This week, the spotlight is on artificial intelligence:

Artificial intelligence (AI), sometimes referred to as machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence of humans.

AI is the ability of a computer program or a machine to think and learn, so that it can work on its own without being encoded with commands. The term was first coined by American computer scientist John McCarthy in 1955.

"Human intelligence is the combination of many diverse abilities," says Encyclopaedia Britannica. AI research, it says, has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception and using language.

AI is currently used for understanding human speech, competing in games such as chess and Go, driving cars autonomously and interpreting complex data.

Some people are wary of the rise of artificial intelligence, with the New Yorker highlighting that a number of scientists and engineers fear that, once we build an artificial intelligence smarter than we are, a form of AI known as artificial general intelligence (AGI), doomsday may follow.

In The Age of Spiritual Machines, American inventor and futurist Ray Kurzweil writes that as AI develops and machines gain the capacity to learn more quickly, they will appear to have their own free will, while Stephen Hawking declared that AGI will be "either the best, or the worst thing, ever to happen to humanity".

The Tin Man from The Wizard of Oz represented people's early fascination with robotic intelligence, while the humanoid robot that impersonated Maria in Metropolis also displayed characteristics of AI.

But in the real world, the British computer scientist Alan Turing published a paper in 1950 in which he argued that a "thinking machine" was actually possible.

The first concerted AI research began following a conference at Dartmouth College in the US in 1956.

In 1955, John McCarthy set about organising what would become the Dartmouth Summer Research Project on Artificial Intelligence. The conference took the form of a six- to eight-week brainstorming session, with attendees including scientists and mathematicians with an interest in AI.

According to AI: The Tumultuous History of the Search for Artificial Intelligence by Canadian researcher Daniel Crevier, one attendee at the conference wrote: "Within a generation... the problem of creating artificial intelligence will substantially be solved."

It was at the conference that McCarthy was credited with first using the phrase "artificial intelligence".

Following the conference, science website livescience.com reports, the US Department of Defense became interested in AI, but after several reports criticising progress in the field, government funding and interest dropped off. The period from 1974 to 1980, it says, became known as the "AI winter".

Interest in AI was revived in the 1980s, when the British government started funding it again, in part to compete with efforts by the Japanese. From 1982 to 1990, the Japanese government invested $400m with the goal of revolutionising computer processing and improving artificial intelligence, according to Harvard University research.

Research into the field started to increase and by the 1990s many of the landmark goals of artificial intelligence had been achieved. In 1997, IBM's Deep Blue became the first computer to beat a reigning world chess champion when it defeated Russian grandmaster Garry Kasparov.

This was surpassed in 2011, when IBM's question-answering system Watson won the US quiz show Jeopardy! by beating the show's champions Brad Rutter and Ken Jennings.

In 2012, a talking computer chatbot called Eugene Goostman tricked judges into believing that it was human in a Turing Test. The test was devised by Turing in the 1950s: he reasoned that if a human could not tell the difference between another human and a computer, that computer must be considered as intelligent as a human.

Forbes highlights that AI is currently being deployed in services such as mobile phones (for example, Apple's Siri), Amazon's Alexa, self-driving Tesla cars and Netflix's film recommendation service.

The Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab has developed an AI model that can work out the exact amount of chemotherapy needed to shrink a brain tumour.

AI is already changing the world and looks set to define the future too.

According to Harvard researchers, we can expect to see AI-powered driverless cars on the road within the next 20 years, while machine calling is already a day-to-day reality.

Looking beyond driverless cars, the ultimate ambition is general intelligence, that is, a machine that surpasses human cognitive abilities in all tasks. If this is developed, a future of humanoid robots is not impossible to envision.

However, as figures such as Stephen Hawking have warned, some fear the rise of an AI-dominated future.

Tech entrepreneur Elon Musk has warned that AI could become "an immortal dictator from which we would never escape", signing a letter alongside Hawking and a number of AI experts calling for research into the potential pitfalls and societal impacts of widespread AI use.
