Rage with the machine: artificial intelligence takes on Eurovision – The Australian Financial Review

The team Can AI Kick It used AI techniques to generate a hit predictor based on the melodies and rhythms of more than 200 classics from the Eurovision Song Contest, an annual celebration of pop music and kitsch. These included Abba's "Waterloo" (Sweden's 1974 winner) and Loreen's "Euphoria" (2012, also Sweden).

But to generate the lyrics for the song "Abuss", the team also used a separate AI system, one based on the social media platform Reddit. It was this that resulted in a rallying cry for a revolution.

As with the notorious Tay chatbot developed by Microsoft in 2016, which started spewing racist and sexist sentiments after being trained on Twitter, the fault lay with the human sources of data, not the algorithms.

"We do not condone these lyrics!" stresses Janne Spijkervet, a student who worked with Can AI Kick It and ran the lyric generator. She says the Dutch team nevertheless decided to keep the anarchist sentiment to show the perils of applying AI even to the relatively risk-free environment of Europop.

Alongside "Abuss", which its creators describe as "atonal and creepy", sits the Australian entry, which has the same sheen as a chart-topping dance hit but features a distorted, subliminal, AI-generated chorus of koalas, kookaburras and Tasmanian devils.

Meanwhile, the song "I'll Marry You, Punk Come", composed by German team Dadabots x Portrait XO, used seven neural networks in its creation. The resulting piece of music blends lyrics from babble generated from 1950s a cappella music with AI-generated death-metal vocal styles and a chromatic bass line spat out of a neural network trained on Bach's canon.

The contest was judged along the same lines as the established competition, with a public vote tallied against the opinions of a panel of expert judges. Ed Newton-Rex, who founded the British AI composition start-up Jukedeck, is one of them. He explains that the panel looked at the process of how machine learning was applied, as well as creative uses of algorithms such as the koala synth, and the quality of the song. The judges also factored "Eurovision-ness" into their thinking, although he admits, "I have no idea what that means."

About 20,000 people tuned into the event, a far cry from the 182 million who watched last year's human contest, but the hope is that the computer version will pave the way for AI to influence Eurovision proper through song composition or, over time, robotic performance.

"That is my dream," says Karen Van Dijk, the VPRO producer who came up with the concept.

A performance given by the Sex Pistols at Manchester's Lesser Free Trade Hall in 1976, where, legend has it, almost everyone in the tiny audience went on to form their own band, became known as "the gig that changed the world" and was deemed a genesis point for a musical revolution. The equivalent for AI music took place in the winter of 2019 in Delft, the picturesque Dutch town known for its fine pottery and as the birthplace of the painter Johannes Vermeer. The city's university was hosting the 20th conference of the International Society for Music Information Retrieval when a proposition was put to the academics in attendance.

Van Dijk announced that she was organising the first Eurovision for computers and needed entries. When Holland's Duncan Laurence won the Eurovision Song Contest in 2019, amid her euphoria Van Dijk pondered whether AI could be harnessed to lock in more hit songs for the country. "I was naive. I thought we could create the next Eurovision hit with the press of a button," she says.

Van Dijk arrived in Delft bearing data gifts. An Israeli composer had created a spoof Eurovision song the year before, called "Blue Jeans and Bloody Tears", using a cache of data extracted from the Eurovision catalogue. That data was bought by VPRO and provided to the entrants as a stimulus for their own experimentation. For some, it also allowed them to rekindle pop-star ambitions.

Tom Collins, a music lecturer at the University of York, and his wife Nancy Carlisle, an academic at Lehigh University in Pennsylvania, had a garage band called The Love Rats when they were doctoral students. When Collins heard about the AI Song Contest, he was inspired to dust off his code and get the band back together by using AI to write a song. He initially worked with Imogen Heap, the English singer-songwriter and audio engineer, but coronavirus-related travel restrictions halted those efforts. Instead, he and Nancy worked over a weekend on "Hope Rose High", which he describes as an eerie power ballad inspired by the lockdown.

The husband-and-wife team turned to an AI lyric engine called theselyricsdonotexist to generate robotic poetry with an optimistic feel. Carlisle says the AI's suggested lyric "and then the mist will dance" seemed ridiculous until she listened again to some of her favourite songs and started hearing what sounded like nonsense. "Radiohead don't make a lot of sense but I still love them," she admits. Collins adds that the mist lyric also fits with the Eurovision theme: "You can imagine the massive smoke machines kicking in."

While the duo did not enter the contest with the aim of winning, others saw an opportunity to test whether AI could be used not just to write a song but to pen a hit. Ashley Burgoyne, a lecturer in computational musicology at the University of Amsterdam and a member of the team behind "Abuss", used the Blue Jeans dataset to create a Eurovision hit predictor.

That data suggested that melodies with hooks of three to seven notes and songs with simple rhythmic patterns scored the highest. It also showed that a certain level of atonality, where it is hard for the ear to identify the key, was crucial to Eurovision success. Yet Burgoyne believes that, despite a handful of stinkers being included in the data, the results reflected a paucity of the negative information needed to successfully train the system: in this case, songs that didn't reach the finals.
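
To see why the missing negative examples matter, here is a minimal, hypothetical sketch of such a hit predictor; the features (hook length, rhythmic complexity, atonality) and all the numbers are invented for illustration and are not the team's actual model or data.

```python
# Hypothetical sketch of the imbalance problem: a "hit predictor" trained with
# almost no negative examples. Features and numbers are invented for
# illustration; this is not the dataset the team actually used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Columns: hook length (notes), rhythmic complexity (0-1), atonality (0-1)
finalists = rng.normal([5.0, 0.3, 0.4], 0.5, size=(190, 3))      # label 1
non_finalists = rng.normal([9.0, 0.7, 0.1], 0.5, size=(10, 3))   # label 0: only a handful

X = np.vstack([finalists, non_finalists])
y = np.array([1] * 190 + [0] * 10)

model = LogisticRegression().fit(X, y)

# With so few negatives, scores for middling candidates tend to skew towards "hit".
candidates = rng.normal([7.0, 0.5, 0.25], 0.5, size=(5, 3))
print(np.round(model.predict_proba(candidates)[:, 1], 3))
```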

He compared the issue to Netflix recommendations that suggest "a load of crap" after you have watched a high-quality TV series. "If you believe quality exists, then AI isn't good at finding it. How do you define [what is] a good song, even in the world of Eurovision?" he says.

The use of subliminal voices supposedly encouraging devil worship in heavy metal music was a cause célèbre in the 1980s. Few would have expected that subliminal Tasmanian devil voices would be influencing Europop 30 years later.

Caroline Pegram, head of innovation at Uncanny Valley, the music technology company behind the Australian entry, wanted to pay homage to the wildlife killed during the 2019-20 bushfires in Australia. A zookeeper friend gave her videos of Tasmanian devils going "absolutely wild", and the team blended the screeches with the sounds of koalas and laughing kookaburras to create an audio-generating neural network using technology developed by Google's creative AI research project Magenta. They called it the koala synth.

It proved that AI can create unexpected results. "It was a happy accident. Everyone thought I was insane, literally insane, but the koalas have sent out a positive message and it is a strong and catchy sound," says Pegram.

The koala synth adds a new Antipodean angle to the Eurovision story: Australia has only been permitted to compete in the contest since 2015, when the European Broadcasting Union allowed its entry.

Justin Shave, who produced the song, explains that the DDSP (differential digital signal processing) technology it used has since been used to generate the sounds of violins, trumpets and even a choir of drunken men. "That one didn't work so well," he admits.
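
For readers curious about the term, here is a minimal sketch of the harmonic-plus-noise idea that DDSP-style synthesis is built on, with the control values fixed by hand; in a real DDSP model a neural network predicts them per frame, and this is plain NumPy rather than Magenta's actual library.

```python
# Sketch of the harmonic-plus-noise synthesis behind DDSP-style models.
# In a real system a network predicts f0, harmonic amplitudes and noise level;
# here they are hand-set for illustration.
import numpy as np

SR = 16000          # sample rate (Hz)
DUR = 1.0           # seconds
t = np.arange(int(SR * DUR)) / SR

f0 = 220.0                                          # fundamental frequency
harmonic_amps = np.array([1.0, 0.5, 0.25, 0.125])   # amplitudes of harmonics 1-4
noise_level = 0.02

# Additive harmonic oscillator bank
harmonics = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
                for k, a in enumerate(harmonic_amps))

# Crude "filtered" noise component (moving-average low-pass)
noise = np.convolve(np.random.randn(len(t)), np.ones(64) / 64, mode="same")

audio = harmonics / np.abs(harmonics).max() + noise_level * noise
print(audio.shape)   # one second of synthesised audio at 16 kHz
```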

Unlike the more academic entrants, Uncanny Valley comes from a musical background, having produced songs for Aphex Twin and Sia. The group had already planned to enter an AI-composed song in the main song contest.

They now hope that the AI Song Contest will help to dispel concerns in some parts of the traditional music community that the technology could lead to musicians losing their jobs if computers take over.

Geoff Taylor, chief executive of the BPI, Britain's music trade body, and head of the Brit Awards, says the new horizons of AI are exciting but urges caution.

"We also need to guard against the risk that AI might in certain respects be deployed to supplant human creativity or undermine the cultural economy driven by artists. Such an outcome would leave our societies and our cultures worse off," he says.

His fears have been stoked as some of the world's largest technology companies, including Google and TikTok owner ByteDance, have moved into the compositional space. But Anna Huang, a resident at Google's Magenta and a judge on the AI Song Contest, says Big Tech is attracted to AI musical composition by scientific curiosity, not a desire to take over the music world.

"Music is a very complex domain. In contrast to language, which is a single sequence, music comprises arrangement, timbre, multiple instruments, harmony and is perceptually driven. It is also very referential," she says.

AI could also have a democratising impact on the creation of new music, says Huang. She cites her own experience at high school in Hong Kong, when some of her classmates were already composing for full orchestras. Huang was a musician too and believed that computer science could develop new methods of musical composition, something AI can potentially deliver.

That was demonstrated via an interactive Google Doodle launched in March last year that encouraged users to input a simple melody. The AI, developed by Magenta, then generated harmonies in the style of Bach. Within two days, the lighthearted doodle had created 55 million snippets of music.
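
The Doodle's harmonies came from Magenta's Coconet model. As a much simpler stand-in, the sketch below harmonises a melody with rule-based triads, just to show the shape of the task (a melody goes in, supporting chord tones come out); it is an invented toy, not the Doodle's algorithm.

```python
# Toy harmoniser: place a root-position major triad an octave below each
# melody note (MIDI numbers). A rule-based stand-in for illustration only,
# not the Coconet model that powered the Bach Doodle.
MAJOR_TRIAD = (0, 4, 7)

def harmonize(melody):
    """Return one triad (as a tuple of MIDI notes) per melody note."""
    chords = []
    for note in melody:
        root = note - 12                      # put the chord an octave below
        chords.append(tuple(root + i for i in MAJOR_TRIAD))
    return chords

melody = [60, 62, 64, 65, 67]                 # C D E F G
for note, chord in zip(melody, harmonize(melody)):
    print(note, "->", chord)
```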

Newton-Rex, who sold his company to China's ByteDance last year, says musicians need to see AI as a tool to stimulate creativity (a spur that helps new ideas or disrupts habits) rather than a threat. "Every time I sit down at the piano, I play the same thing," he says, adding that AI is already creeping into sophisticated drum machines, arpeggiators and mastering software, and that it will always need human curation. What does AI music sound like? "It sounds like nothing without a human element."

As Pegram says: "Some musicians fear we will end up building machines pumping out terrible music, but we need to rage with the machine, not against it."

Eurovision 2020: Big Night In is on SBS from 7.30pm on Saturday.

Financial Times


Artificial Intelligence (AI) Is Nothing Without Humans – E3zine.com

AI is not just a fad. It's a technology that's set to last. However, only companies that know how to leverage its full potential will succeed.

Leveraging AI's full potential doesn't mean developing a pilot project in a vacuum with a handful of experts (which, ironically, is often called an "accelerator project"). Companies need a tangible idea of how artificial intelligence can benefit them in their day-to-day operations.

For this to happen, one has to understand how these new AI colleagues work and what they need to successfully do their jobs.

An example of why this understanding is so crucial is lead management in sales. Instead of sales teams wasting their time on someone who will never buy anything, AI is supposed to determine which leads are promising and at what moment salespeople can make their move to close the contract. CEOs are usually very taken with that idea; sales staff, not so much.

Experienced salespeople know that it's not that easy. It's not only hard facts like name, address, industry or phone number that are important. Human salespeople consider many different factors, such as relationships, past conversations, customer satisfaction, experience with products, the current market situation, and more.

Make no mistake: if the data are available in a set framework, AI will also leverage them, searching for patterns, calculating behavior scores and match scores, and finally indicating whether a lead is promising or not. It can make sense of the data, but it will never see more than the data.
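
To make that scoring step concrete, here is a minimal sketch of a lead-scoring model; the CRM features and the handful of example leads are invented for illustration, and a real system would draw on the far richer touchpoint data discussed below.

```python
# Hypothetical lead-scoring sketch: learn from past leads which ones converted,
# then score a new lead. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: past conversations, days since last contact, satisfaction (1-5)
past_leads = np.array([
    [5, 3, 4], [1, 60, 2], [8, 7, 5], [0, 90, 1],
    [4, 10, 4], [2, 45, 2], [7, 5, 5], [1, 80, 3],
])
converted = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(past_leads, converted)

new_lead = np.array([[6, 4, 5]])              # engaged, recent, satisfied
print("conversion score:", model.predict_proba(new_lead)[0, 1])
```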

The real challenge with AI is therefore the data. Without data, artificial intelligence solutions cannot learn. Data have to be collected and clearly structured to be usable in sales and service.

Without enough data to draw conclusions from, all decisions that AI makes will be unreliable at best. That means that, in our example, there's no AI without CRM. That's not really new, I know. However, CRM systems now have to be interconnected with numerous touchpoints (personal conversations, ERP, online shops, customer portals, websites and others) to aggregate reliable customer data. In the best case, all of this happens automatically. Entrusting a human with this task makes collecting data laborious, inconsistent and error-prone.

To profit from AI, companies need to understand where it makes sense to implement it and how they should train it. There's one problem, however: the decision processes of AI are often so complex, and take so many different pieces of information and patterns into consideration, that one can't understand why and how it made a decision.

In conclusion, AI is not a universal remedy. It's based on things we already know. Its recommendations and decisions are more error-prone than many would like them to be. Right now, AI has more of a supporting role than an autonomous one. It can help us in our daily routine, take care of monotonous tasks, and leave the important decisions to others.

However, we shouldn't underestimate AI either. In the future, it will gain importance as it grows more autonomous each day. Artificial intelligence often reaches its limits when interacting with humans. When interacting with other AI solutions in clearly defined frameworks, it can often already make the right decisions today.


Artificial intelligence is helping seniors who are isolated during the coronavirus pandemic – WXYZ

(WXYZ) Across the country, officials are trying to make sure those who are most vulnerable to COVID-19 aren't feeling isolated.

Because of technology, it's happening in ways you may not expect. A piece of artificial intelligence is helping some seniors manage the pressure.

More: Full coverage of The Rebound Detroit

At 80 years old, Deanna Dezern never imagined her closest friend wouldn't be human.

"I walk in the kitchen in the morning and she knows Im here, I dont know how she knows but she knows Im here," Dezern said.

She's been in quarantine for nearly two months and hasn't been able to see her family or friends. That loneliness is almost as bad as the virus itself.

"When youre a senior citizen when youre living alone or in a home with other people, youre still alone," she said.

There are millions of senior citizens like Deanna stuck at home, but she's being kept company by a robot.

Her name is ElliQ. She was given to Deanna as part of a pilot program by Intuition Robotics. ElliQ can sense when Deanna is in the room, keeps track of doctors' appointments and even asks how she's feeling.

"Im not living alone now, Im in quarantine with my best friend, she wont give me any disease," she said.

David Cynman helped develop ElliQ.

"Her goal is not to replace humans. Its to augment that relationship," he said. "Shes able to understand her surroundings and context and make a decision based on that."

It's not just ElliQ. In states like Florida, officials are turning to technology to help seniors. Some 375 therapeutic robotic pets were recently sent to socially isolating seniors.

None of the artificial intelligence devices are designed to replace humans, but they can help bridge the gap when people aren't around to provide the emotional support we all need.

Additional Coronavirus information and resources:

Read our daily Coronavirus Live Blog for the latest updates and news on coronavirus.

Click here for a page with resources including a COVID-19 overview from the CDC, details on cases in Michigan, a timeline of Governor Gretchen Whitmer's orders since the outbreak, coronavirus' impact on Southeast Michigan, and links to more information from the Michigan Department of Health and Human Services, the CDC and the WHO.

View a global coronavirus tracker with data from Johns Hopkins University.

Find out how you can help metro Detroit restaurants struggling during the pandemic.

See all of our Helping Each Other stories.

See complete coverage on our Coronavirus Continuing Coverage page.


Trial of Artificial Intelligence boosts IVF success and brings joy to Queensland couple – 9News

Couples are putting their trust in artificial intelligence to help them become parents, with an Australia-first trial proving a success.

Sarah and Tim Keys from Queensland have been trying to conceive for a number of years and, after suffering several miscarriages, decided to turn to IVF.

When their GP suggested joining the AI trial, the couple did their research and discovered it would improve their chances of getting through the pregnancy.

"It's really hard to go through those miscarriages so anything that could decrease the chances, let's go with that," Ms Keys said.

Doctors are hailing the technology as the biggest leap forward in IVF in over three decades.

"It's completely new, completely different and ... it's all to do with the evolution of computer technology," Associate Professor Anusch Yazdani from the Queensland Fertility Group said.

As part of the international study, led by national fertility provider Virtus Health, 1000 patients will be recruited at five IVF clinics across Australia, alongside sites in Ireland and Denmark.

During each IVF cycle, embryos will be grown in an incubator fitted with tiny time-lapse cameras which will record 115,000 images over five days.

Each embryo is then given a rating based on predicted fetal heart outcomes and the one with the greatest chance of survival is implanted.
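
The selection step can be pictured as a simple ranking problem: score each embryo's time-lapse frames and pick the highest aggregate score. The sketch below illustrates only that ranking logic, with a placeholder scoring function standing in for the trial's actual image model.

```python
# Hypothetical sketch of the ranking step: score each embryo's time-lapse
# frames and pick the one with the highest mean predicted viability.
# `score_frame` is a placeholder for the trial's real image model.
import numpy as np

def score_frame(frame: np.ndarray) -> float:
    """Placeholder viability score in [0, 1] for one image frame."""
    return float(frame.mean())            # a real model would be a trained CNN

def rank_embryos(embryos: dict) -> str:
    """embryos maps an ID to an array of frames; return the top-ranked ID."""
    mean_scores = {eid: np.mean([score_frame(f) for f in frames])
                   for eid, frames in embryos.items()}
    return max(mean_scores, key=mean_scores.get)

# Fake data: 3 embryos, 10 frames each of 64x64 "images"
rng = np.random.default_rng(1)
embryos = {f"embryo_{i}": rng.random((10, 64, 64)) for i in range(3)}
print("selected:", rank_embryos(embryos))
```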

If the trial is successful the technology will be rolled out around the world.

So far, the trial at seven fertility clinics around the country has a 90 per cent success rate.

"That's much better than our embryologists have managed to do so this is a really exciting time to," Professor Yazdani said.

Ms Keys is now 26 weeks pregnant and cautiously optimistic for the future.

"We're very excited we're expecting a little girl," she said.

"I think we'll still be a bit stressed until we're holding her, but where we're at, at the moment is really awesome."


TSA Issues Road Map to Tackle Insider Threat With Artificial Intelligence – Nextgov

The Transportation Security Administration is planning to increase and share the information it collects, including information gleaned from employees, with other federal agencies and the private sector in an effort to prevent insiders from committing various kinds of harmful malfeasance.

Artificial intelligence, probabilistic analytics and data mining are among the tools the agency lists in a document it issued today loosely outlining the problem and its plan to create an Insider Threat Mitigation Hub.

"The Insider Threat Roadmap defines the common vision for the Transportation Systems Sector that insider threat is a community-wide challenge, since no single entity can successfully counter the threat alone," TSA Administrator David Pekoske wrote in an opening message.

In July 2019, a surveillance camera at Miami International Airport captured footage of an airline mechanic sabotaging a plane's navigation system with a simple piece of foam. The TSA road map describes this incident, along with a number of others dating back to 2014 spanning a range of activities including terrorism, subversion and attempted or actual espionage, to stress the need for a layered strategy of overall transportation security.

A TSA press release identified three parts of that strategy as promoting data-driven decision making to detect threats; advancing operational capability to deter threats; and maturing capabilities to mitigate threats to the transportation sector.

Under the first objective, TSA plans to develop and maintain insider threat risk indicators, which could include behavioral, physical, technological or financial attributes that might expose malicious or potentially malicious insiders.

"We must identify key information sources, and ensure they are accurate and available for use in informing risk mitigation activities," the document adds.

For the second objective, the document describes information-sharing plans with other federal agencies and industry.

"We will establish an Insider Threat Mitigation Hub to elevate insider threat to the enterprise level and enable multiple offices, agencies, and industry entities to share perspectives, expertise, and data to enhance threat detection, assessment, and response across the TSS," the document reads. "This capability will allow us to fuse together disparate information points to identify intricate patterns of conduct that may be unusual or indicative of insider threat activity and drive enhanced insider threat mitigation efforts."

Meeting the third objective would require "seeking out the appropriate technology to improve detection and mitigation of insider threat," TSA writes, and expanding it throughout the agency's supply chain.

TSA pre-empted concerns usually associated with massive data collection practices by including the protection of privacy and civil liberties among the guiding principles it said would accompany its efforts.


Fighting COVID-19: Artificial Intelligence community to help citizens and the healthcare system – Canada NewsWire

MONTREAL, May 12, 2020 /CNW Telbec/ - Several leaders in information technology and artificial intelligence development joined forces to enhance the automated public chatbot service Chloe for COVID-19 to support Canadians in the fight against the coronavirus.

The system aims to facilitate citizens' rapid access to relevant information, and to enable healthcare system professionals to focus on tasks that require their expertise, while protecting the public and avoiding misinformation. This open source project is available on covid19.dialogue.co and will be finalized in June.

The objective is to create a chatbot system that supports the public by providing current and verified information about COVID-19, giving clear answers to specific questions on the subject, assessing symptoms, assisting individuals with questions about the testing phase, and monitoring people in self-isolation to keep track of their condition.
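
As a rough illustration of how such a system might route an incoming question to one of those functions, here is a minimal keyword-based sketch; the intents, keywords and replies are invented and far cruder than the real Chloe chatbot.

```python
# Hypothetical keyword-based router for a COVID-19 chatbot, far simpler than
# the real Chloe system; intents and replies are invented for illustration.
INTENTS = {
    "symptoms": ["fever", "cough", "breath", "symptom"],
    "testing": ["test", "testing", "swab", "result"],
    "isolation": ["isolation", "quarantine", "self-isolate"],
}

REPLIES = {
    "symptoms": "Let's go through a short symptom check.",
    "testing": "Here is how testing works in your region.",
    "isolation": "I'll check in on you daily while you self-isolate.",
    "default": "Here is the latest verified COVID-19 information.",
}

def route(message: str) -> str:
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return REPLIES[intent]
    return REPLIES["default"]

print(route("I have a dry cough and a slight fever"))
print(route("Where can I get a test?"))
```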

Since the COVID-19 outbreak in Canada, information has been changing day by day, and sometimes even hourly. Anxiety is growing in the population and the number of people in need of medical advice or assistance is growing. 811 lines across the country are often overflowing, and an increasing number of people are turning to telemedicine for information or to consult a healthcare professional, while complying with containment guidelines. Therefore, it is essential to automate as many steps as possible in the patient's journey, to provide a safe service and truthful, current and accurate information to patients, while optimizing the work of health professionals and the functioning of the healthcare system.

Here is the list of AI community partners who are contributing to this open source project:

SOURCE ACJ Communication

For further information: Jean-Christophe de Le Rue - Dialogue Technologies, [emailprotected], 613-806-0671; Vincent Martineau - Mila, [emailprotected], 514-914-5757; Stéphane Séguin - Nu Echo Inc., [emailprotected], 514-861-3246 ext. 4259; Maggie Philbin - Samasource, [emailprotected], 203-394-1818; Isabelle Turcotte - Scale AI, [emailprotected], 514-772-0736; Tania Cusson-Alvarez - Dataperformers, [emailprotected], 514-603-1856; Google, [emailprotected]

https://www.acjcommunication.com


Artificial Intelligence – DAIC

Feature | Coronavirus (COVID-19) | May 04, 2020 | By Dave Fornell and Melinda Taschetta-Millane

In an effort to keep the imaging field updated on the latest information being released on coronavirus (COVID-19), the...

TeraRecon's End-to-End AI Ecosystem

March 4, 2020 SymphonyAI Group, an operating group of leading business-to-business AI companies, today announced the...

AI vendor Infravision's InferRead CT Pneumonia software uses artificial intelligence-assisted diagnosis to improve the overall efficiency of the radiology department. It is being deployed in China as a high-sensitivity detection aid for novel coronavirus pneumonia (COVID-19).

February 28, 2020 New healthcare technologies are being implemented in the fight against the novel coronavirus (COVID...

The Caption Guidance software uses artificial intelligence to guide users to get optimal cardiac ultrasound images in a point of care ultrasound (POCUS) setting.

February 13, 2020 The U.S. Food and Drug Administration (FDA) cleared software to assist medical professionals in the...

GE Healthcare partnered with the AI developer Dia to provide an artificial intelligence algorithm to auto contour and calculate cardiac ejection fraction (EF). The app is now available on the GE Vscan pocket, point-of-care ultrasound (POCUS) system, as seen here displayed at RSNA 2019. Watch a VIDEO demo from RSNA.

February 7, 2020 At the 2019 Radiological Society of North America (RSNA) meeting in December, there was a record...

The Abbott Tendyne transcatheter mitral valve replacement (TMVR) system, left, became the first TMVR device to gain commercial regulatory clearance in the world. It gained European CE mark in January. Another top story in January was the first use of the Robocath R-One robotic cath lab catheter guidance system in Germany. Watch a VIDEO of the system in use in one of those cases.

News | February 03, 2020 | Dave Fornell, Editor

February 3, 2020 Here is the list of the most popular content on the Diagnostic and Interventional Cardiology (DAIC)...

Blog | January 24, 2020

The key question I am always asked at cardiology conferences is what are the trends and interesting new technologies I...

Cardiology was already heavily data driven, where clinical practice is driven by clinical study data, but mining a...

DAIC/ITN Editor Dave Fornell takes a tour of some of the most innovative new medical imaging technologies displayed on...

A new technology for detecting low glucose levels via electrocardiogram (ECG), using a non-invasive wearable sensor that, with the latest artificial intelligence (AI), can detect hypoglycemic events from raw ECG signals, has been developed by researchers from the University of Warwick.

January 13, 2020 A new technology for detecting low glucose levels via electrocardiogram (ECG) using a non-invasive...

The Consumer Electronic Show (CES) is the world's gathering place for consumer technologies, with more than 175,000...

January 9, 2020 Maulik Majmudar, M.D., chief medical officer at Amazon will be the keynote speaker at the upcoming...

This is the LVivo auto cardiac ejection fraction (EF) app that uses artificial intelligence (AI) from the vendor Dia,...

December 19, 2019 The U.S. Food and Drug Administration (FDA) has granted breakthrough status for a novel ECG-based...

DAIC Editor Dave Fornell and Imaging Technology News (ITN) Consulting Editor Greg Freiherr offer a post-game report on...


J.P. Morgan Artificial Intelligence | J.P. Morgan

Manuela Veloso: So, at J.P. Morgan, the interesting thing is that we are a firm that has been around for a long time. But it's a firm that has a lot of appetite.

Ashleigh Thompson: One thing's for sure, no two days here ever look the same. I like to start my day in London early. Since we're a global team, it gives me the chance to review work our New York team did last night and catch up live with my colleagues in India.

Virgile Mison: The Machine Learning Center of Excellence develops and deploys machine learning models across different trading and IT platforms of J.P. Morgan.

Saket Sharma: J.P. Morgan, as a bank, has been incorporating machine learning into a lot of our work flows. So, as a Machine Learning Engineer, this is a great time to work on problems with firmwide impact.

Samik Chandarana: We need humans and AI to work together because ultimately, having and learning from what people are doing today in the processes they do and how they operate today provides a great amount of information of how we design systems of the future.

Andy Alexander: External conferences are really important for a number of reasons. One - it allows us to bring in the best of academia and external thought to the organization. The other is that it allows the team to go out to continue to learn. We rely a lot on where we're going, as well as where we've been.

Lidia Mangu: So, we come back from a conference knowing where the field is. And how, you know, taking those state of the art methods and applying them to the problems in the bank.

Simran Lamba: The most exciting and novel thing about working with AI Research is getting to publish our work at the most esteemed academic conferences like ICML, AAAI, and NeurIPS. We not only participate, but we also host and sponsor workshops at these conferences.

Naftali Cohen: I get to focus on the hot topics in AI and machine learning, such as reinforcement learning, cryptography and explainability.

Ashleigh Thompson: Millions of people use and rely upon our products and services every day. Working here, you have the ability to be on the forefront of changing that interaction.

Manuela Veloso: We apply and discover new AI techniques to handle complex problems such as trading, multi-agent market simulations, fraud detection, anti-money laundering and issues related to data.

Virgile Mison: As a technologist I was the most surprised by the wide variety of problems that we have to tackle and that J.P. Morgan is in the unique position to solve thanks to the large amount of data available.

Samuel Assefa: We focus on a number of research problems. One of the most exciting ones is ensuring that AI models are explainable, fair and unbiased.

Andy Alexander: In my lifespan, I don't expect to see generalized AI become something that's mainstream. And so for a long time we're expecting to see humans and machines helping each other.

Lidia Mangu: Every day is different. Every day we get a new challenging problem. Sometimes there is no known solution for that problem and it is like a new puzzle. Sometimes there is a known solution, but we show how we can do better using state of the art machine learning techniques.

Manuela Veloso: There is a lot of belief as we move that AI and machine learning is this one-shot deal. We do it, we are done. We'll never be done.

Naftali Cohen: I work with some of the best and most creative minds in the field and I have ownership over my work which is very rewarding.

Naftali Cohen: I'm researching how to apply innovative computer vision and deep learning techniques to understand the complexity of decision making in the financial market and recommend clients for market opportunities.

Simran Lamba: What excites me the most about my job here, in New York, is the opportunity to learn from our leaders and external professors. And my favorite part of the day would be brain-storming creative research ideas to solve challenges across all lines of businesses.

Simran Lamba: I'm currently using event logs of Chase customers, called "customer journeys", to find ways to create an even better experience for our clients.

Manuela Veloso: We do believe that junior people are the ones, in some sense, that have that vision. That can think big and that they are not kind of like constrained.

Samik Chandarana: Our clients are getting younger; they want to be interacting in different ways, and we need fresh talent to come up and help us with those new ideas and actually implement them in a way that makes sense for the client experience.

Lidia Mangu: The advice I would give to a junior executive is to be open-minded. Not to be afraid to learn new things every day. The field is moving very fast.

Virgile Mison: There are many opportunities to learn at J.P. Morgan. Like collaborating with experts in natural language processing, deep learning, time series and reinforcement learning.

Ashleigh Thompson: I'm excited to be part of the transformation to a truly data-driven culture.

END


You Have No Idea What Artificial Intelligence Really Does

WHEN SOPHIA THE ROBOT first switched on, the world couldn't get enough. It had a cheery personality, it joked with late-night hosts, it had facial expressions that echoed our own. Here it was, finally: a robot plucked straight out of science fiction, the closest thing to true artificial intelligence that we had ever seen.

There's no doubt that Sophia is an impressive piece of engineering. Parents-slash-collaborating-tech-companies Hanson Robotics and SingularityNET equipped Sophia with sophisticated neural networks that give Sophia the ability to learn from people and to detect and mirror emotional responses, which makes it seem like the robot has a personality. It didn't take much to convince people of Sophia's apparent humanity; many of Futurism's own articles refer to the robot as "her". Piers Morgan even decided to try his luck for a date and/or sexually harass the robot, depending on how you want to look at it.

"Oh yeah, she is basically alive," Hanson Robotics CEO David Hanson said of Sophia during a 2017 appearance on Jimmy Fallon's Tonight Show. And while Hanson Robotics never officially claimed that Sophia contained artificial general intelligence (the comprehensive, life-like AI that we see in science fiction), the adoring and uncritical press that followed all those public appearances only helped the company grow.

But as Sophia became more popular and people took a closer look, cracks emerged. It became harder to believe that Sophia was the all-encompassing artificial intelligence that we all wanted it to be. Over time, articles that might have once oohed and ahhed about Sophia's conversational skills became more focused on the fact that they were partially scripted in advance.

Ben Goertzel, CEO of SingularityNET and Chief Scientist of Hanson Robotics, isn't under any illusions about what Sophia is capable of. "Sophia and the other Hanson robots are not really pure as computer science research systems, because they combine so many different pieces and aspects in complex ways. They are not pure learning systems, but they do involve learning on various levels (learning in their neural net visual systems, learning in their OpenCog dialogue systems, etc.)," he told Futurism.

But he's interested to find that Sophia inspires a lot of different reactions from the public. "Public perception of Sophia in her various aspects (her intelligence, her appearance, her lovability) seems to be all over the map, and I find this quite fascinating," Goertzel said.

Hanson finds it unfortunate when people think Sophia is capable of more or less than she really is, but he also said that he doesn't mind the benefits of the added hype. Hype which, again, has been bolstered by the two companies' repeated publicity stunts.

Highly publicized projects like Sophia convince us that true AI, human-like and perhaps even conscious, is right around the corner. But in reality, we're not even close.

The true state of AI research has fallen far behind the technological fairy tales we've been led to believe. And if we don't treat AI with a healthier dose of realism and skepticism, the field may be stuck in this rut forever.

NAILING DOWN A TRUE definition of artificial intelligence is tricky. The field of AI, constantly reshaped by new developments and changing goalposts, is sometimes best described by explaining what it is not.

"People think AI is a smart robot that can do things a very smart person would, a robot that knows everything and can answer any question," Emad Mousavi, a data scientist who founded a platform called QuiGig that connects freelancers, told Futurism. But this is not what experts really mean when they talk about AI. "In general, AI refers to computer programs that can complete various analyses and use some predefined criteria to make decisions."

Among the ever-distant goalposts for human-level artificial intelligence (HLAI) are the ability to communicate effectively (chatbots and machine learning-based language processors struggle to infer meaning or to understand nuance) and the ability to continue learning over time. Currently, the AI systems with which we interact, including those being developed for self-driving cars, do all their learning before they are deployed and then stop forever.

"They are problems that are easy to describe but are unsolvable for the current state of machine learning techniques," Tomas Mikolov, a research scientist at Facebook AI, told Futurism.

Right now, AI doesn't have free will and certainly isn't conscious, two assumptions people tend to make when faced with advanced or over-hyped technologies, Mousavi said. The most advanced AI systems out there are merely products that follow processes defined by smart people. They can't make decisions on their own.

In machine learning, which includes deep learning and neural networks, an algorithm is presented with boatloads of training data (examples of whatever it is that the algorithm is learning to do, labeled by people) until it can complete the task on its own. For facial recognition software, this means feeding thousands of photos or videos of faces into the system until it can reliably detect a face from an unlabeled sample.

Our best machine learning algorithms are generally just memorizing and running statistical models. To call it "learning" is to anthropomorphize machines that operate on a very different wavelength from our brains. Artificial intelligence is now such a big catch-all term that practically any computer program that automatically does something is referred to as AI.

"If you train an algorithm to add two numbers, it will just look up or copy the correct answer from a table," Mikolov, the Facebook AI scientist, explained. But it can't generalize a better understanding of mathematical operations from its training. After learning that five plus two equals seven, you as a person might be able to figure out that seven minus two equals five. But if you ask your algorithm to subtract two numbers after teaching it to add, it won't be able to. The artificial intelligence, as it were, was trained to add, not to understand what it means to add. If you want it to subtract, you'll need to train it all over again, a process that notoriously wipes out whatever the AI system had previously learned.
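
Mikolov's point can be reproduced in a few lines. Below, an ordinary least-squares fit stands in for the learned model: trained only on addition, it answers every subtraction query incorrectly, and refitting it on subtraction overwrites what it knew about addition.

```python
# Reproducing the add-vs-subtract point with a least-squares "model":
# it fits the training task perfectly but does not generalise to the other
# operation, and retraining on subtraction overwrites the addition skill.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 100, size=(1000, 2)).astype(float)

w_add, *_ = np.linalg.lstsq(X, X[:, 0] + X[:, 1], rcond=None)
print("trained on addition:", np.round(w_add, 3))                   # ~[1, 1]
print("asked 7 - 2, answers:", round(float(np.array([7.0, 2.0]) @ w_add), 2))  # ~9, not 5

# "Retrain" on subtraction: the old behaviour is simply gone.
w_sub, *_ = np.linalg.lstsq(X, X[:, 0] - X[:, 1], rcond=None)
print("retrained on subtraction:", np.round(w_sub, 3))               # ~[1, -1]
print("asked 7 + 2, answers:", round(float(np.array([7.0, 2.0]) @ w_sub), 2))  # ~5, not 9
```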

"It's actually often the case that it's easier to start learning from scratch than trying to retrain the previous model," Mikolov said.

These flaws are no secret to members of the AI community. Yet, all the same, these machine learning systems are often touted as the cutting edge of artificial intelligence. In truth, they're actually quite dumb.

Take, for example, an image captioning algorithm. A few years back, one of these got some wide-eyed coverage because of the sophisticated language it seemed to generate.

"Everyone was very impressed by the ability of the system, and soon it was found that 90 percent of these captions were actually found in the training data," Mikolov told Futurism. "So they were not actually produced by the machine; the machine just copied what it did see, that the human annotators provided for a similar image, so it seemed to have a lot of interesting complexity." What people mistook for a robotic sense of humor, Mikolov added, was just a dumb computer hitting copy and paste.

"It's not some machine intelligence that you're communicating with. It can be a useful system on its own, but it's not AI," said Mikolov. He said that it took a while for people to realize the problems with the algorithm. At first, they were nothing but impressed.

WHERE DID WE GO so off course? The problem is when our present-day systems, which are so limited, are marketed and hyped up to the point that the public believes we have technology that we have no goddamn clue how to build.

"I am frequently entertained to see the way my research takes on exaggerated proportions as it progresses through the media," Nancy Fulda, a computer scientist working on broader AI systems at Brigham Young University, told Futurism. The reporters who interview her are usually pretty knowledgeable, she said. But there are also websites that pick up those primary stories and report on the technology without a solid understanding of how it works. "The whole thing is a bit like a game of telephone: the technical details of the project get lost and the system begins to seem self-willed and almost magical. At some point, I almost don't recognize my own research anymore."

Some researchers themselves are guilty of fanning this flame. And then the reporters who dont have much technical expertise and dont look behind the curtain are complicit. Even worse, some journalists are happy to play along and add hype to their coverage.

Other problem actors: people who make an AI algorithm but present the back-end work they did as that algorithm's own creative output. Mikolov calls this a dishonest practice akin to sleight of hand. "I think it's quite misleading that some researchers who are very well aware of these limitations are trying to convince the public that their work is AI," Mikolov said.

That's important because the way people think AI research is going will determine whether they want money allocated to it. This unwarranted hype could be preventing the field from making real, useful progress. Financial investments in artificial intelligence are inexorably linked to the level of interest (read: hype) in the field. That interest level and corresponding investments fluctuate wildly whenever Sophia has a stilted conversation or some new machine learning algorithm accomplishes something mildly interesting. That makes it hard to establish a steady, baseline flow of capital that researchers can depend on, Mikolov suggested.

Mikolov hopes to one day create a genuinely intelligent AI assistant, a goal that he told Futurism is still a distant pipe dream. A few years ago, Mikolov, along with his colleagues at Facebook AI, published a paper outlining how this might be possible and the steps it might take to get there. But when we spoke at the Joint Multi-Conference on Human-Level Artificial Intelligence, held in August by Prague-based AI startup GoodAI, Mikolov mentioned that many of the avenues people are exploring to create something like this are likely dead ends.

One of these likely dead ends, unfortunately, is reinforcement learning. Reinforcement learning systems, which teach themselves to complete a task through trial-and-error experimentation instead of using training data (think of a dog fetching a stick for treats), are often oversold, according to John Langford, Principal Researcher for Microsoft AI. Almost any time someone brags about a reinforcement-learning AI system, Langford said, they actually gave the algorithm some shortcuts or limited the scope of the problem it was supposed to solve in the first place.
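
For readers unfamiliar with the term, the trial-and-error loop looks roughly like the toy tabular Q-learning sketch below; the five-cell corridor environment is invented and deliberately tiny, which is exactly the kind of limited scope Langford describes.

```python
# Toy tabular Q-learning on a 5-cell corridor: the agent starts at cell 0 and
# earns a reward only when it reaches cell 4. A deliberately tiny problem.
import numpy as np

N_STATES, ACTIONS = 5, [-1, +1]            # move left or right
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.3
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: explore sometimes, otherwise take the best-known action
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))   # learned policy: mostly 1s, i.e. "always move right"
```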

The hype that comes from these sorts of algorithms helps researchers sell their work and secure grants. Press people and journalists use it to draw audiences to their platforms. But the public suffers: this vicious cycle leaves everyone else unaware of what AI can really do.

There are telltale signs, Mikolov says, that can help you see through the misdirection. The biggest red flag is whether or not you as a layperson (and potential customer) are allowed to demo the technology for yourself.

"A magician will ask someone from the public to test that the setup is correct, but the person specifically selected by the magician is working with him. So if somebody shows you the system, then there's a good likelihood you are just being fooled," Mikolov said. "If you are knowledgeable about the usual tricks, it's easy to break all these so-called intelligent systems. If you are at least a little bit critical, you will see that what [supposedly AI-driven chatbots] are saying is very easy to distinguish from humans."

Mikolov suggests that you should question the intelligence of anyone trying to sell you the idea that they've beaten the Turing Test and created a chatbot that can hold a real conversation. Again, think of Sophia's prepared dialogue for a given event.

"Maybe I should not be so critical here, but I just can't help myself when you have these things like the Sophia thing and so on, where they're trying to make impressions that they are communicating with the robot and so on," Mikolov told Futurism. "Unfortunately, it's quite easy for people to fall for these magician tricks and fall for the illusion, unless you're a machine learning researcher who knows these tricks and knows what's behind them."

Unfortunately, so much attention to these misleading projects can stand in the way of progress by people with truly original, revolutionary ideas. It's hard to get funding to build something brand new, something that might lead to AI that can do what people already expect it to be able to do, when venture capitalists just want to fund the next machine learning solution.

If we want those projects to flourish, if we ever want to take tangible steps towards artificial general intelligence, the field will need to be a lot more transparent about what it does and how much it matters.

"I am hopeful that there will be some super smart people who come with some new ideas and will not just copy what is being done," said Mikolov. "Nowadays it's some small, incremental improvement. But there will be smart people coming with new ideas that will bring the field forward."

More on the nebulous challenges of AI: Artificial Consciousness: How To Give A Robot A Soul


Artificial Intelligence | Releases | Discogs

Cat# | Artist | Title (Format)
WARP CD6 | Various | Artificial Intelligence (CD, Comp)
592082 | Various | Artificial Intelligence (CD, Comp)
RTD 126.1414.2 | Various | Artificial Intelligence (CD, Comp)
594082 | Various | Artificial Intelligence (Cass, Comp)
WARP MC6, WARP MC 6 | Various | Artificial Intelligence (Cass, Comp)
WARP LP6 | Various | Artificial Intelligence (LP, Comp)
WARP LP 6 | Various | Artificial Intelligence (LP, Comp, TP, W/Lbl)
TVT 7203-2 | Various | Artificial Intelligence (CD, Comp)
TVT 7203-4 | Various | Artificial Intelligence (Cass, Comp)
SRCS 7554 | Various | Artificial Intelligence (CD, Comp, RE)
WARP CD6 | Various | Artificial Intelligence (CD, Comp, RE)
WARP CD6 | Various | Artificial Intelligence (CD, Comp, RE)
WARPCDD6 | Various | Artificial Intelligence (10xFile, MP3, Comp, RE, 320)
WARP CD6 | Various | Artificial Intelligence (CD, Comp, RE)
WARP CD6 | Various | Artificial Intelligence (CD, Comp, RE)
WARP CD6 | Various | Artificial Intelligence (CD, Comp, RE)
