A ‘potentially deadly’ mushroom-identifying app highlights the dangers of bad AI – The Verge

There's a saying in the mushroom-picking community: all mushrooms are edible, but some mushrooms are only edible once.

That's why, when news spread on Twitter of an app that used "revolutionary AI" to identify mushrooms from a single picture, mycologists and fungi foragers were worried. They called it "potentially deadly," and said that if people used it to try to identify edible mushrooms, they could end up very sick, or even dead.

Part of the problem, explains Colin Davidson, a mushroom forager with a PhD in microbiology, is that you can't identify a mushroom just by looking at it. "The most common mushroom near me is something called the yellow stainer," he told The Verge, "and it looks just like an edible horse mushroom from above and the side." But if you eat a yellow stainer there's a chance you'll be violently ill or even hospitalized. "You need to pick it up and scratch it or smell it to actually tell what it is," explains Davidson. "It will bruise bright yellow or it will smell carbolic."

And this is only one example. There are plenty of edible mushrooms with toxic lookalikes, and identifying them means studying multiple angles to find features like gills and rings, while considering things like whether recent rainfall might have discolored the cap. Davidson adds that there are plenty of mushrooms that live up to their names, like the destroying angel or the death cap.

"One eighth of a death cap can kill you," he says. "But the worst part is, you'll feel sick for a while, then you might feel better and get on with your day, but then your organs will start failing. It's really horrible."

The app in question was developed by Silicon Valley designer Nicholas Sheriff, who says it was only ever intended to be used as a rough guide to mushrooms. When The Verge reached out to Sheriff to ask him about the app's safety and how it works, he said the app "wasn't built for mushroom hunters, it was for moms in their backyard trying to ID mushrooms." Sheriff added that he's currently pivoting to turn the app into a platform for chefs to buy and sell truffles.

When we tried the iOS-only software this morning, we found that Sheriff had changed its preview picture on the App Store to say "identify truffles instantly with just a pic." However, the name of the app remains "Mushroom Instant Mushroom Plants Identification," and the description contains the same claim that so worried Davidson and others: "Simply point your phone at any mushroom and snap a pic, our revolutionary AI will instantly identify mushrooms, flowers, and even birds."

In our own tests, though, the app was unable to identify either common button or chestnut mushrooms, and crashed repeatedly. Motherboard also tried the app and found it couldn't identify a shiitake mushroom. Sheriff says he is planning on adding more data to improve the app's precision, and tells The Verge that his intention was never to replace experts, but to supplement their expertise.

And, of course, if you search the iOS or Android app stores, you'll find plenty of mushroom-identifying apps, most of which are catalogues of pictures and text. What's different about this one is that it claims to use machine vision and "revolutionary AI" to deliver its results: terms that seem specifically chosen to give people a false sense of confidence. If you're selling an app to identify flowers, then this sort of language is merely disingenuous; when it's mushrooms you're spotting, it becomes potentially dangerous.

As Davidson says: "I'm absolutely enthralled by the idea of it. I would love to be able to go into a field and point my phone at a mushroom and find out what it is. But I would want quite a lot of convincing that it would be able to work." So far, we're not convinced.

Elon Musk says AI harbors ‘vastly more risk than North Korea’ – CNET

Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.

He's worried. Very worried.

The mention of several place-names currently invokes shudders.

Whether it be North Korea, Venezuela or even Charlottesville, Virginia, it's easy to get a shivering feeling that something existentially unpleasant might happen, with North Korea still topping many people's lists.

For Tesla and SpaceX CEO Elon Musk, however, there's something far bigger that should be worrying us: artificial intelligence.

In a Friday afternoon tweet, he offered, "If you're not concerned about AI safety, you should be. Vastly more risk than North Korea."

He accompanied this with a poster of a worried woman and the words, "In the end, the machines will win."

The machines always win, don't they? Look how phones have turned us into neck-craning zombies. And, lo, here was Musk also tweeting on Friday about a bot created by OpenAI -- the nonprofit he backs -- beating real humans at eSports.

Still, Musk thinks humanity can do something to fight the robots.

Indeed, he followed his North Korea message with a renewed call for AI regulation: "Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too."

Musk brought up this idea last month at a meeting of the National Governors Association. On Friday, he explained in the Twitter comments that AI really does pose an immediate threat.

"Biggest impediment to recognizing AI danger are those so convinced of their own intelligence they can't imagine anyone doing what they can't," he tweeted.

You really can't trust humans to do good, even supposedly intelligent humans.

Especially in these times when few appear to agree what good even looks like.

Hollywood Is Banking That a Robot Named Erica Can Be the Next Movie Star – Variety

She can't get sick or be late to the set, and her hair and makeup needs are minimal: Her name is Erica, and Hollywood is hoping that a sophisticated robot can be its next big star. The synthetic actor has been cast in "b," a $70 million science-fiction movie which producer Sam Khoze describes as a "James Bond meets Mission Impossible" story with heart.

Scribe Tarek Zohdy ("1st Born") says the story is about scientists who create an AI robot named Erica and who quickly realize the danger of this top-secret program, which is trying to perfect a human through a non-human form.

Variety caught up with filmmakers Zohdy and Khoze to discuss "b," the $70 million film that plans to finish shooting next year, after a director and human star have been brought on.

Tarek Zohdy: The producers, Sam Khoze and Anoush Sadegh, in association with professors Hiroshi Ishiguro and Kohei Ogawa of the University of Osaka and Telecommunication Research Institute, took on the task of training Erica to act.

We wanted to create a story, and we wanted to do it in a revolutionary way. A robot doesn't have life experiences, so they created this persona around those experiences, and we taught her how to act.

We found her to be the most capable of performing as an actor. Erica has the ability for natural interaction with people by integrating various technologies such as voice recognition, human tracking, and natural motion generation. She is almost human. Visually, her human-like appearance made her the best candidate to play this character in the movie.

We are artists, and we are artists of color, who are able to do something with our art. We want to have a very diverse cast, and as a diverse filmmaking group, I think that is essential.

Sam Khoze: We went through several rewrites. VFX supervisor Eric Pham (Sin City) joined us later to help develop the final version of the story. It's a really beautiful story because, at its heart, Erica's father, who spent his life developing her, wants her to serve humanity and change the way people look at AI and robots.

Khoze: It took about two years. She's 23 years old. She has experience. She goes to museums once a month to meet people, so she's a fun robot.

Khoze: When we started this project in 2018, we had a director who wasn't comfortable with having VFX in the movie.

We went off and started researching, and we found Erica's creator. We wanted to experiment to see if she could learn acting, so we started training her, and she has been performing flawlessly. She's probably the closest AI ever made to being an artist.

We don't want to replace actors with AI, but it's an interesting opportunity for the entertainment industry to look at AI and robots in Hollywood.

By creating her, we've learned she's fully capable of communicating and interacting with people.

What does that mean? We've created this algorithm to digitally preserve people. So, if an actor doesn't want to risk their life, this allows you to create a digital version of that human being, and she has her own personality without anyone needing to program her. So now we're using this algorithm to bring actors to set using this AI technology.

Will AI-as-a-Service Be the Next Evolution of AI? — The Motley Fool – Motley Fool

With all the excitement surrounding the advent of artificial intelligence (AI), there are still a great many things we don't know. Could it lead to the frightening futures depicted in films like Ex Machina, Terminator, and 2001: A Space Odyssey, or might we see less threatening iterations like Data on Star Trek: The Next Generation, Samantha in Her, or TARS from Interstellar?

The current reality of AI is much less cinematic -- it possesses the learned ability to sift through reams of data in short order and recognize patterns. This has led to breakthroughs in the areas of image recognition, language translation, and beating humans at the age-old game of Go. Some of the biggest advances are ongoing in the areas of medical imaging, cancer research, and self-driving cars.

Still, with plenty of developments thus far, it's hard to know what will be the next groundbreaking application of the technology.

Will AI-as-a-Service be the next killer app? Image source: Getty Images.

Small Canadian start-up Element AI believes it has the answer: It wants to democratize AI by offering "AI-as-a-Service" to businesses that can't afford to develop the systems themselves. Tech giants Microsoft Corp. (NASDAQ:MSFT), Intel Corp. (NASDAQ:INTC), and NVIDIA Corp. (NASDAQ:NVDA) believe that Element is on the right track and have invested millions to back up that belief.

Currently, AI requires massive quantities of data to train a system. Element AI wants to improve on this by reducing the size of the data sets required, which would make the technology accessible to a wider range of businesses, not just those with massive budgets. Element is building on the AI technique of leveraging a previously trained system: by starting from an existing model and then introducing smaller data sets, the system applies what it learned previously to the new data.
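What the article calls "leveraging" is more widely known as transfer learning. A minimal numpy sketch of the idea follows; the data, network sizes, and training details are all invented for illustration and are not Element AI's actual system. A small network is pretrained on a large synthetic dataset, then its features are frozen and only a new output layer is fit on a much smaller, related dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- "Large" source dataset: plenty of labelled examples ---
n_big, dim, hidden = 2000, 20, 16
w_true = rng.normal(size=dim)
X_big = rng.normal(size=(n_big, dim))
y_big = (X_big @ w_true > 0).astype(float)

# Pretrain a one-hidden-layer network on the big dataset.
W1 = rng.normal(scale=0.1, size=(dim, hidden))   # input -> hidden
w2 = np.zeros(hidden)                            # hidden -> output
lr = 0.5
for _ in range(300):
    H = np.tanh(X_big @ W1)          # hidden features
    p = sigmoid(H @ w2)
    g = (p - y_big) / n_big          # gradient of logistic loss w.r.t. logits
    w2 -= lr * (H.T @ g)
    dH = np.outer(g, w2) * (1.0 - H ** 2)
    W1 -= lr * (X_big.T @ dH)

# --- "Small" target dataset: a related task with few examples ---
n_small = 60
X_small = rng.normal(size=(n_small, dim))
y_small = (X_small @ w_true + 0.3 * X_small[:, 0] > 0).astype(float)

# Transfer: freeze the pretrained features (W1) and fit only a new
# output head on the small dataset.
H_small = np.tanh(X_small @ W1)
w_head = np.zeros(hidden)
for _ in range(500):
    p = sigmoid(H_small @ w_head)
    w_head -= 1.0 * (H_small.T @ (p - y_small)) / n_small

acc = float(((sigmoid(H_small @ w_head) > 0.5) == y_small).mean())
print(f"target-task training accuracy: {acc:.2f}")
```

Because the frozen features already encode structure learned from the large dataset, the head can be fit from only 60 examples, which is the economics Element AI is betting on.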

Element is currently working on a consulting basis with a very small group of large companies that want to leverage AI without developing the systems in-house. In this way, the company can strategically choose its initial customers and train its systems on the larger data sets, which it will later leverage for smaller clients.

The major players investing in AI have primarily been applying the tech to augment their principal businesses. Microsoft has used the technology to improve its Bing search and to power its Cortana virtual assistant, and has built AI into its Azure cloud computing services. Intel has been working to develop an AI-based CPU and has made numerous acquisitions in the field, hoping to get a leg up.

NVIDIA is the only one to date that has been able to quantify the value of AI to its business, as its GPUs have been used to accelerate the training of AI systems. In its most recent quarter, NVIDIA saw revenue of $1.9 billion, which grew 48% year over year, on the back of a 186% increase in its AI-centric data-center revenue.

Element is providing a novel approach to the AI trend. Image source: Pixabay.

Still, none has emerged as a pure play, selling AI-as-a-Service. Element hopes to change that by being the first company of its kind to provide predictive modeling, recommendation systems, and consumer-engagement optimization to any business, without customers having to start their AI efforts from scratch. Providing access to experts who can analyze a business and determine how best to apply AI to specific problems will benefit a wide range of companies without their own AI resources. By filling that void, Element AI hopes to make its mark.

International Business Machines Corp. (NYSE:IBM) provides the closest example, pivoting from its legacy hardware and consulting businesses to selling cloud and cognitive computing solutions via its AI-based Watson supercomputer. Thus far, these newer growth technologies haven't been able to compensate for the shortfall in its legacy business, though the company is applying AI to a wide variety of business processes and has assembled an impressive array of big-name partners. By casting its net into cybersecurity, tax preparation, and a variety of healthcare-related applications, IBM hopes to capitalize on this emerging trend.

It is still early days in AI research and technology, and how the future plays out is yet to be determined. Element AI is taking a unique approach -- and the backing of these three godfathers of tech shows that it might be on the right track.

Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors; LinkedIn is owned by Microsoft. Danny Vena has the following options: long January 2018 $25 calls on Intel. The Motley Fool owns shares of and recommends Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.

Baidu Leads the Way in Innovation With 5,712 AI Patent Applications – AiThority

Baidu, Inc. has filed the most AI-related patent applications in China, a recognition of the company's long-term commitment to driving technological advancement, a recent study from the research unit of China's Ministry of Industry and Information Technology (MIIT) has shown.

Baidu filed a total of 5,712 AI-related patent applications as of October 2019, ranking No.1 in China for the second consecutive year. Baidus patent applications were followed by Tencent (4,115), Microsoft (3,978), Inspur (3,755), and Huawei (3,656), according to the report issued by the China Industrial Control Systems Cyber Emergency Response Team, a research unit under the MIIT.

"Baidu retained the top spot for AI patent applications in China because of our continuous research and investment in developing AI, as well as our strategic focus on patents," said Victor Liang, vice president and general counsel of Baidu.

"In the future, we will continue to increase our investments into securing AI patents, especially high-value and high-quality patents, to provide a solid foundation for Baidu's AI business and for our development of world-leading technology," he said.

The report showed that Baidu is the patent application leader in several key areas of AI. These include deep learning (1,429), natural language processing (938), and speech recognition (933). Baidu also leads in the highly competitive area of intelligent driving, with 1,237 patent applications, a figure that surpasses leading Chinese universities and research institutions, as well as many international automotive companies. With the launch of the Apollo open source autonomous driving platform and other intelligent driving innovations, Baidu has been committed to pioneering the intelligent transformation of the mobility industry.

After years of research, Baidu has developed a comprehensive AI ecosystem and is now at the forefront of the global AI industry. Moving forward, Baidu will continue to conduct research in the core areas of AI, contribute to scientific and technological innovation in China, and actively push forward the application of AI into more vertical industries. Baidu is positioned to be a global leader in a wave of innovation that will transform industries.

5 pitfalls AI healthcare start-ups need to avoid – – pharmaphorum

Artificial intelligence (AI) has now moved beyond its initial hype towards becoming a key part of the pharma industry, with many companies looking to partner with AI drug discovery start-ups.

Pharma and healthcare are data-rich industries and AI helps by turning data into actionable insights, allowing us to solve complex, intricate problems. Using machine learning, AI algorithms can generate patterns that will enable us to predict toxicity, find potential combination treatments, identify and predict new drugs and expand usage of current drugs in other diseases.

However, only a handful of companies from the swarm of AI start-ups have successfully gained traction within the pharma industry.

Over the past month, I've had conversations with several growing AI drug discovery companies and have analysed some critical strategic shortcomings that can frustrate the upwards journey of these start-ups:

1) Living in a technology bubble and failing to understand the language of pharma

AI cannot be built in isolation without understanding the nuances and the complexity of the business needs it will address. A constant challenge faced by all AI drug discovery companies is sanitisation of the unstructured but useful data in life sciences.

Drug discovery is inherently a high-risk endeavour due to poorly understood mechanisms of diseases and lack of negative data (on experiments that do not work) in the public domain.

With high reproducibility issues in life sciences, nuances such as the validation of experimental data, complex networks of interacting proteins, poor quality publications etc., add another layer of difficulty.

Companies will spend much of their time filtering the essential insights from a pile of useless data.

Diversifying the team and having pharma executives on board who can understand the relevance of a dataset and the ontologies created by the platform is critical to success.

Because of a lack of thorough understanding, start-ups often fail to attract investors.

The AI drug discovery ecosystem has been only partially successful in addressing this challenge so far. AI depends critically on the quality and standardisation of the data used. "Garbage in, garbage out" holds true for AI.
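The "garbage in, garbage out" point can be seen in a toy experiment; the data here is entirely synthetic and invented for illustration, standing in for a curated discovery dataset. The same simple classifier is trained once on clean labels and once on heavily corrupted ones, then scored on a correctly labelled held-out set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a discovery dataset: each row is a "compound"
# described by 10 numeric features, labelled active (1) or inactive (0).
n, dim = 400, 10
w_true = rng.normal(size=dim)
X = rng.normal(size=(n, dim))
y_clean = (X @ w_true > 0).astype(float)

# "Garbage in": flip 45% of the training labels at random, mimicking
# unvalidated or irreproducible annotations.
y_noisy = y_clean.copy()
flip = rng.random(n) < 0.45
y_noisy[flip] = 1.0 - y_noisy[flip]

def train_logreg(X, y, steps=500, lr=0.5):
    """Plain full-batch logistic regression in numpy."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Held-out test set with correct labels.
X_test = rng.normal(size=(1000, dim))
y_test = (X_test @ w_true > 0).astype(float)

def accuracy(w):
    return float(((X_test @ w > 0) == y_test).mean())

acc_clean = accuracy(train_logreg(X, y_clean))
acc_noisy = accuracy(train_logreg(X, y_noisy))
print(f"trained on clean labels: {acc_clean:.2f}")
print(f"trained on noisy labels: {acc_noisy:.2f}")
```

The model itself never changes; only the label quality does, which is why start-ups end up spending so much of their time on data sanitisation rather than on the algorithm.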

2) Prioritising technology over business strategy

AI alone is not enough to succeed. A start-up at the interface of technology and pharma needs a solid business strategy to thrive.

Often these start-ups focus on how powerful and transformative their platform is, and lack an overall business workflow. Most start-ups struggle to find the right positioning in the market, whether that's providing data analysis as a service, licensing the AI platform, acting as a consultant, or developing a new drug or repurposing drugs, either alone or in partnership.

Only a handful of these start-ups have internal discovery programmes, while the rest of the companies struggle to find the right business model, often taking a mix of several approaches.

Narrowing down the business strategy while understanding the core strengths early in the journey can address challenges of efficient resource allocation. Likewise, identifying target markets and an effective sales and marketing approach can help clearly define success criteria with defined milestones in the journey.

3) Getting stuck in a never-ending development loop

AI is very expensive to build and maintain, owing to the need for big data and computational infrastructure. The scarcity of data scientists with domain knowledge in pharma and the large overheads required to hire them adds to the cost.

Many start-ups keep working on improving prediction scores with more data, only to realise that this adds marginal value for the customer. The benefits of moving to market quickly often outweigh the downsides: with early entry, companies can get immediate and frequent feedback from the end users of the platform, and they can work with academic and industry end users early in the journey to validate the platform and generate the required proof of concept.

Rather than staying in a continuous development loop, taking one application to market while you build the rest is a stronger approach.

4) Assuming customers are technologically savvy

Developers should keep the end user in mind when designing the UI/UX of the platform while still addressing the key technical components.

End users in pharma are often biologists, with minimal exposure to developer tools. While some computational biologists write scripts, most drug discovery scientists are habituated to one-click software. Moreover, there are certain standard ways of data plotting that scientists are habituated to, owing to requirements in international peer reviewed journals.

An easy, graphical results sheet can communicate information better than the strings and scores preferred by software developers.

5) Taking a do-it-all approach over finding a niche

Start-ups often get starry-eyed about the capabilities of their platforms and want to do everything.

The majority of these algorithms can be repurposed for diverse functions depending on the training data and the required outcomes. As the Russian proverb goes, "If you chase two rabbits, you will not catch either one."

Building new functions requires resources and attention to detail. Start-ups should begin by pioneering one functionality and then building in other directions. Prioritise offerings based on core strengths, get client feedback and revenue flowing and then continue building on other applications.

While a solid foundation has been laid, the AI community will have to be patient and not interpret pharma's conservatism as a lack of interest. They need to engage continuously with biopharma leaders and regulatory bodies to build a multi-dimensional strategy involving innovation, privacy, compliance, standardisation, and behavioural change management.

About the author

Amandeep Singh is a life science consultant at MP Advisors.

AI technology will soon replace error-prone humans all over the world but here’s why it could set us all free – Gulf Today

It has often been quoted, albeit humorously, that the ideal of medicine is the elimination of the physician. The emergence and encroachment of artificial intelligence (AI) on the field of medicine, however, puts an inconvenient truth behind that witticism. Over the span of their professional lives, a pathologist may review 100,000 specimens, a radiologist even more; AI can perform this undertaking in days rather than decades.

Visualise your last trip to an NHS hospital; the experience was either one of romanticism or repudiation: the hustle and bustle in the corridors, or the agonising waiting time in A&E; the empathic human touch, or the dissatisfaction of a rushed consultation; a seamless referral, or delays and cancellations.

Contrary to this, our experience of hospitals in the future will be slick and uniform; the human touch all but erased, in favour of complete and utter digitalisation. Envisage an almost automated hospital: cleaning droids, self-portered beds, medical robotics. "Fiction of today is the fact of tomorrow" doesn't quite apply in this situation, since all of the above-mentioned AI currently exists in some form or other. But then, what becomes of the antiquated, human doctor in our future world? Well, they can take consolation that their unemployment would be part of a global trend: the creation displacing the creator, the mechanisation of the workforce leading to mass unemployment. This analogy of our friend the doctor speaks volumes; medicine is cherished for championing human empathy, and if doctors aren't safe, nobody is. The solution: socialism.

Open revolt against machinery seems a novel concept set in some futuristic dystopian land, though the reality can be found in history: the Luddites of Nottinghamshire, a radical faction of skilled textile workers who protected their employment through machine destruction and riots during the industrial revolution of the early 19th century. The now satirised term "Luddite" may be more appropriately directed at your father's fumbled attempt at unlocking his iPhone than at a militia.

What lessons are to be learnt from the Luddites? Much. Firstly, the much-fictionalised fight for dominance between man and machine is just that: fictionalised. The real fight is within mankind. The Luddites' fight was always against the manufacturer, not the machine; machine destruction simply acted as the receptacle of dissidence. Secondly, government feeling towards the Luddites is exemplified by the 12,000 British soldiers deployed against them, far exceeding the personnel deployed against Napoleon's forces in the Iberian Peninsula in the same year.

Though it provides clues, the future struggle against AI and its wielders will be tangibly different from the Luddite struggle of the 19th century. This time it's personal; it's about soul. Our higher cognitive faculties will be replaced: the diagnostic expertise of the doctor, the decision-making ability of the manager, and (if we're lucky) political matters too.

The monopolising of AI will lead to mass unemployment and mass welfare, reverberating globally. AI efficiency and efficacy will soon replace the error-prone human. It must be the case that AI is to be socialised and the means of production, the AI, redistributed: in other words, brought under public ownership. Perhaps, the emergence of co-operative groups made up of experienced individuals will arise to undertake managerial functions in their previous, now automated, workplace. Whatever the structure, such an undertaking will require the full intervention of the state; on a moral basis not realised in the Luddite struggle.

Envisaging an economic system of nationalised AI machinery performing laborious as well as lively tasks shan't be feared. This economic model, one of abundance, provides a platform for the fullest creative expression and artistic flair of mankind. Humans can pursue leisurely passions. Imagine the doctor dedicating copious amounts of time to the golf course, the manager pursuing artistic talents. And what of the politician? Well, that's anyone's guess.

An abundance economy is one of sustenance rather than subsistence, initiating an old form of socialism fit for a futuristic age. AI will transform the labour market by destroying it, along with the feudalistic structure inherent to it.

Thought-provoking questions do arise: what is to become of human aspiration? What exactly will it mean to be human in this world of AI?

Ironically, perhaps it will be the machine revolution that gives us the resolution to age-old problems in society.

Will Artificial Intelligence Ever Live Up to Its Hype? – Scientific American

When I started writing about science decades ago, artificial intelligence seemed ascendant. IEEE Spectrum, the technology magazine for which I worked, produced a special issue on how AI would transform the world. I edited an article in which computer scientist Frederick Hayes-Roth predicted that AI would soon replace experts in law, medicine, finance and other professions.

That was in 1984. Not long afterward, the exuberance gave way to a slump known as an "AI winter," when disillusionment set in and funding declined. Years later, doing research for my book The Undiscovered Mind, I tracked Hayes-Roth down to ask how he thought his predictions had held up. He laughed and replied, "You've got a mean streak."

AI had not lived up to expectations, he acknowledged. Our minds are hard to replicate, because "we are very, very complicated systems that are both evolved and adapted through learning to deal well and differentially with dozens of variables at one time." Algorithms that can perform a specialized task, like playing chess, cannot be easily adapted for other purposes. It is an example of what is called "nonrecurrent engineering," Hayes-Roth explained.

That was 1998. Today, according to some measures, AI is booming once again. Programs such as voice and face recognition are embedded in cell phones, televisions, cars and countless other consumer products. Clever algorithms help me choose a Christmas present for my girlfriend, find my daughter's building in Brooklyn and gather information for columns like this one. Venture-capital investments in AI doubled between 2017 and 2018 to $40 billion, according to WIRED. A Price Waterhouse study estimates that by 2030 AI will boost global economic output by more than $15 trillion, more than the current output of China and India combined.

In fact, some observers fear that AI is moving too fast. New York Times columnist Farhad Manjoo calls an AI-based reading and writing program, GPT-3, "amazing," "spooky," "humbling" and "more than a little terrifying." Someday, he frets, he might be put out to pasture by a machine. Neuroscientist Christof Koch has suggested that we might need computer chips implanted in our brains to help us keep up with intelligent machines.

Elon Musk made headlines in 2018 when he warned that superintelligent AI, much smarter than we are, represents the "single biggest existential crisis that we face." (Really? Worse than climate change? Nuclear weapons? Psychopathic politicians? I suspect that Musk, who has invested in AI, is trying to promote the technology with his over-the-top fearmongering.)

Experts are pushing back against the hype, pointing out that many alleged advances in AI are based on flimsy evidence. Last January, for example, a team from Google Health claimed in Nature that their AI program had outperformed humans in diagnosing breast cancer. In October, a group led by Benjamin Haibe-Kains, a computational genomics researcher, criticized the Google Health paper, arguing that the lack of details about the methods and algorithm code undermines its scientific value.

Haibe-Kains complained to Technology Review that the Google Health report is "more an advertisement for cool technology" than a legitimate, reproducible scientific study. The same is true of other reported advances, he said. Indeed, artificial intelligence, like biomedicine and other fields, has become mired in a replication crisis. Researchers make dramatic claims that cannot be tested, because researchers, especially those in industry, do not disclose their algorithms. One recent review found that only 15 percent of AI studies shared their code.

There are also signs that investments in AI are not paying off. Technology analyst Jeffrey Funk recently examined 40 start-up companies developing AI for health care, manufacturing, energy, finance, cybersecurity, transportation and other industries. Many of them were not nearly as valuable to society as all the hype would suggest, Funk reports in IEEE Spectrum. Advances in AI are unlikely to be nearly as disruptive, for companies, for workers, or for the economy as a whole, as many observers have been arguing.

Science reports that core progress in AI has stalled in some fields, such as information retrieval and product recommendation. A study of algorithms used to improve the performance of neural networks found no clear evidence of performance improvements over a 10-year period.

The longstanding goal of general artificial intelligence, possessing the broad knowledge and learning capacity to solve a variety of real-world problems, as humans do, remains elusive. We have machines that learn in a very narrow way, Yoshua Bengio, a pioneer in the AI approach called deep learning, recently complained in WIRED. They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.

Writing in The Gradient, an online magazine devoted to tech, AI entrepreneur and writer Gary Marcus accuses AI leaders as well as the media of exaggerating the field's progress. AI-based autonomous cars, fake news detectors, diagnostic programs and chatbots have all been oversold, Marcus contends. He warns that if and when the public, governments, and investment community recognize that they have been sold an unrealistic picture of AI's strengths and weaknesses that doesn't match reality, a new AI winter may commence.

Another AI veteran and writer, Erik Larson, questions the myth that one day AI will inevitably equal or surpass human intelligence. In The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, scheduled to be released by Harvard University Press in April, Larson argues that success with narrow applications gets us not one step closer to general intelligence.

Larson says the actual science of AI (as opposed to the pseudoscience of Hollywood and science fiction novelists) has uncovered a very large mystery at the heart of intelligence, which no one currently has a clue how to solve. Put bluntly: all evidence suggests that human and machine intelligence are radically different. And yet the myth of inevitability persists.

When I first started writing about science, I believed the myth of AI. One day, surely, researchers would achieve the goal of a flexible, supersmart, all-purpose artificial intelligence, like HAL. Given rapid advances in computer hardware and software, it was only a matter of time. And who was I to doubt authorities like Marvin Minsky?

Gradually, I became an AI doubter, as I realized that our minds, in spite of enormous advances in neuroscience, genetics, cognitive science and, yes, artificial intelligence, remain as mysterious as ever. Here's the paradox: machines are becoming undeniably smarter, and humans, it seems lately, more stupid, and yet machines will never equal, let alone surpass, our intelligence. They will always remain mere machines. That's my guess, and my hope.

Further Reading:

How Would AI Cover an AI Conference?

Do We Need Brain Implants to Keep Up with Robots?

The Many Minds of Marvin Minsky (R.I.P.)

The Singularity and the Neural Code

Who Wants to Be a Cyborg?

Mind-Body Problems

SimCam 1S security camera review: Superb AI features are the highlight of this affordable camera – TechHive

The SimCam 1S is a relative rarity: a home security camera chockablock with advanced AI features, no cloud subscription requirement for them to work, and a modest price tag. That it works as well as it does almost seems like gravy. The 1S is not only a big improvement over the original Kickstarter-funded SimCam, its features and performance put it in a league with cameras from leading, and more expensive, brands such as Nest.

The SimCam 1S looks nearly identical to the original SimCam I reviewed at its launch. A spherical body houses a 1080p camera with a 120-degree field of view. An LED indicator is set into the face of the camera above the lens, and a light sensor and microphone are beneath that. Ten infrared LED emitters ringing the lens provide up to 50 feet of night vision. A speaker takes up much of the back of the camera, and beneath it is a panel concealing a microSD card slot and a reset button.

The camera body can rotate 360 degrees and tilt 22 degrees on its base when you pan it or enable the automatic tracking feature. On the back of the camera are the power cord port and a slot to attach the wall mount. The camera can be used outside, but it carries only an IP54 rating, so you might want to keep it away from too much dust and direct water exposure (click this link for our in-depth explanation of IP codes).

The 1S includes person, animal, and vehicle detection.

Also like the original SimCam, the SimCam 1S's main attraction is its system of AI-powered smart alerts. The camera can detect people at up to 60 feet, vehicles at up to 20 feet, and animals at up to 10 feet. It can also recognize faces at up to 18 feet. You can further hone detection by setting activity zones and object monitoring areas.

The SimCam 1S performs all AI processing on board, so the manufacturer doesn't burden users with the additional ongoing costs of a cloud subscription. That means all event-detected video clips are stored locally as well. The camera comes with a 16GB microSD card installed, but it supports cards up to 128GB.

The camera works with Amazon Alexa and Google Assistant, so you can use voice commands to view your feed on smart displays that support those digital assistants. Additionally, you can automate many of the SimCam 1S's features and integrate it with other smart devices using IFTTT applets.

The SimCam 1S has excellent image quality and its companion app enables easy control of its AI features.

SimCam's companion app walks you through the setup, but it was still one of the more bothersome I've encountered. That's largely because it starts with having to access the camera's reset button, which is secured behind a panel on the back of the camera stand. You must unscrew and remove this panel and then press the recessed button for five seconds to begin the connection process. SimCam provides a small Allen wrench and a reset pin for this task, but I still had to employ a butter knife to pry off the panel (users with long fingernails might fare better with this step).

It gets easier after that. Once you've pressed the reset button, you're asked to log in to your Wi-Fi and scan a QR code in the app, with the camera's voice prompts providing status updates. The SimCam 1S connected immediately and was up and running in minutes.

The camera appears on the SimCam app's home screen as a still shot of its current view, overlaid with a Play button to activate its live stream. Buttons beneath this are used to activate privacy mode, turn off motion detection and Wi-Fi, and access the camera's other settings. The Settings menu is the first place you should go, as this is where you turn on the various forms of AI detection and automatic tracking, choose a video clip length (15, 30, or 60 seconds), set a working schedule (the camera is active 24/7 by default), and set up activity zones and object monitoring areas.

Enabling the various detection features is as simple as toggling a switch in the app for each of them. Facial recognition requires you to enter familiar faces into a database by taking four pictures of a person's visage, two frontal and a three-quarter profile of the left and right sides of their face, and then entering their name and role (Mother, Father, Visitor, and so on). To create activity zones, you'll place at least three points on the screen by pressing your finger on it; the app automatically connects the dots into a shape and will detect activity only within those areas. Object monitoring operates in a similar fashion, but you simply create a bounding box over the object you wish to monitor by dragging your finger over it. After that, the app will notify you if the object moves or is moved from that area. You can also opt to have activity zones and monitoring areas visible in the live feed.
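SimCam doesn't document how the app turns those tapped points into a detection region, but the behavior described, connecting at least three points into a shape and alerting only inside it, matches a standard point-in-polygon test. A minimal sketch, with hypothetical coordinates:

```python
def point_in_zone(point, zone):
    """Return True if point (x, y) falls inside the polygon defined by
    zone, a list of at least three (x, y) vertices, using ray casting."""
    x, y = point
    inside = False
    n = len(zone)
    for i in range(n):
        x1, y1 = zone[i]
        x2, y2 = zone[(i + 1) % n]
        # Count crossings of a horizontal ray extending right from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A triangular zone drawn with three taps; only detections inside it alert.
zone = [(0, 0), (10, 0), (5, 10)]
print(point_in_zone((5, 3), zone))    # inside the triangle: True
print(point_in_zone((20, 20), zone))  # outside the triangle: False
```

An odd number of edge crossings means the detection point lies inside the user-drawn shape, which is how an alert would be gated to the zone.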

While you can view the camera's feed in the home screen's streaming pane, you must enter fullscreen mode to see the camera's controls. Across the bottom are buttons for muting the speaker, recording video and taking screenshots, triggering the camera's microphone, and sounding the camera's siren to ward off an intruder.

You can pan the camera using swiping gestures on your phone's screen. Each swipe only moves the camera a few inches, though, and there's no way to pan continuously. That makes using this feature a slow and somewhat noisy affair, as the camera's motor is audible with each swipe.

You can easily add familiar faces to the SimCam's database to have them identified in alerts.

That small criticism was really the only shortcoming I encountered. The SimCam 1S's smart detection and alerts worked well in my tests, as did automatic tracking. I created an object monitoring area around my dog's bed so that I could be alerted when he strayed from it and use the two-way talk feature to tell him to return. It worked perfectly. The camera's image quality was excellent, displaying bright, accurate color in day mode and strong contrast and illumination in night mode.

I found the SimCam 1S to be a much more consistent performer than the original SimCam. Though its panning feature could be improved, its AI functions are top-notch, and this camera is a steal at its current retail price. All that plus superb image quality makes it easy to recommend.

Riiid raises $41.8 million to expand its AI test prep apps – VentureBeat

Riiid, a Seoul, South Korea-based startup developing AI test prep solutions, today closed a $41.8 million pre-series D financing round, bringing its total venture capital raised to date to $70.2 million. CEO YJ Jang says the funding will be used to advance Riiid's technology, which offers personalized study solutions based on big data analysis, and to bolster the company's expansion across the U.S., South America, and the Middle East as it establishes an R&D lab, Riiid Labs, in Silicon Valley.

The pandemic has forced the shutdown of schools in countries around the world; cramped indoor classrooms are seen as a major threat vector. Despite inequities with regard to internet access and the widening achievement gap, educators believe the health pros outweigh the cons. Riiid, which offers its services exclusively online, has benefited from the shift. The company claims sales have grown more than 200% since 2017 as over a million users joined its community.

Riiid's platform, Santa, is a mobile study aid for the Test of English for International Communication (TOEIC) English proficiency exam. (Unlike the better-known TOEFL or IELTS tests, which are used by Western universities and colleges as part of their admissions process, TOEIC is primarily used by employers to assess the English proficiency of prospective hires.) Leveraging AI and machine learning algorithms, Santa analyzes responses to predict scores and recommend personalized review plans. A meta-learning log with over 100 million pools of information supplies insights to support Santa, as well as Riiid's other systems.

Santa primarily lives on the web, but it's also available as a chatbot for smart speakers from Kakao. In Japan, Riiid teamed up with game developer KLab Langoo to design a mobile-optimized version of Santa.

According to Jang, the goal is to help achieve learning objectives through continuous evaluation and feedback rather than specific prep. "The engineers called the app Santa because it collects data on student performance in the way that Santa Claus famously keeps track of children's good deeds and bad," he told VentureBeat via email. "We launched Santa in the Korean market, focusing on preparing students for the TOEIC exam because it was an easy target that would validate or invalidate our research findings. More than a million students in Korea and Japan have now used the Santa app and I can proudly report that it works. We raise scores by an average of 129 points out of a possible 990 on the TOEIC exam at a fraction of the time and cost it takes with traditional test-prep courses or personal tutors."

After launching Santa in Korea, Japan, and Vietnam, Riiid plans to pivot to backend curriculum solutions for companies, school districts, and education ministries. Earlier this year, the company published EdNet, a data set of all student-system interactions collected over two years by Santa. Riiid is also expanding its platform to standardized tests like the SAT and ACT, and it claims it's signed memorandums of understanding with customers in the Middle East and U.S. (for example, private education center company Point Avenue) to develop programs for specific courses of study.

Riiids latest funding round included an investment from the state-run Korea Development Bank (KDB), NVESTOR, and Intervest, as well as from existing investor IMM Investment. According to LinkedIn data, the startup employs about 80 people.

AI in Agriculture market Comprehensive Analysis On Size (Value & Volume), Application | The Climate Corporation Agribotix LLC, Tule Technologies,…

This market report has been structured with the careful use of established and advanced tools such as SWOT analysis and Porter's Five Forces analysis. The meticulous work of skilled forecasters, well-versed analysts and knowledgeable researchers yields a premium AI in Agriculture market research report such as this one. The report helps unearth general market conditions and existing trends and tendencies in the industry. The market study conducted in this report analyzes the market status, market share, growth rate, future trends, market drivers, opportunities and challenges, risks and entry barriers, sales channels, and distributors in the industry.

This AI in Agriculture market report studies the market and the industry thoroughly by considering several aspects. According to this report, the global market is anticipated to observe a moderately higher growth rate during the forecast period. This makeover can be attributed to the moves of key players or brands, including developments, product launches, joint ventures, and mergers and acquisitions, that in turn change the face of the global industry. With the actionable market insights included in this report, businesses can craft sustainable and cost-effective strategies. This AI in Agriculture market report provides an all-inclusive study of production capacity, consumption, and import and export for all the major regions across the world.

According to the report published by Data Bridge Market Research, the AI in Agriculture market size is expected to reach USD XX billion in the forecast period. This report provides an in-depth study of the AI in Agriculture market using SWOT analysis, i.e., the Strengths, Weaknesses, Opportunities and Threats facing the organization.

Get an Exclusive FREE Sample Report with All Related Graphs & Charts @ https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-ai-agriculture-market

Key questions answered in the report:

What is the growth potential of this market?

Which product segment will grab a lion's share?

Which regional market will emerge as a frontrunner in the coming years?

Which application segment will grow at a robust rate?

What are the growth opportunities that may emerge in this industry in the years to come?

What are the key challenges that the global market may face in the future?

Which are the leading companies in the global market?

Which are the key trends positively impacting the market growth?

Which are the growth strategies considered by the players to sustain hold in the global market?

Report includes Competitors Landscape:

Major trends and growth projections by region and country

Key winning strategies followed by the competitors

Who are the key competitors in this industry?

What shall be the potential of this industry over the forecast tenure?

What are the factors propelling the demand for this Industry?

What are the opportunities that shall aid in significant proliferation of the market growth?

What are the regional and country-wise regulations that shall either hamper or boost the demand for this industry?

How has COVID-19 impacted the growth of the market?

Has the supply chain disruption caused changes in the entire value chain?

Market segmentation

By Offering (Hardware, Software, Service, AI-as-a-Service), By Technology (Predictive Analytics, Machine Learning, Computer Vision), By Application (Livestock Monitoring, Precision Farming, Agriculture Robots, Drone Analytics), By Geography (North America, Europe, Asia-Pacific, South America, Middle East and Africa)

NOTE: Our report highlights the major issues and hazards that companies might come across due to the unprecedented outbreak of COVID-19.

The assessment provides a 360-degree view and insights, outlining the key outcomes of the industry as the current scenario witnesses a slowdown, and the study examines the unique strategies followed by key players. These insights also help business decision-makers formulate better business plans and make informed decisions for improved profitability. In addition, the study helps venture or private players understand the companies more precisely and make better-informed decisions. Some of the key players in the global AI in Agriculture market are M, Microsoft Corporation, Descartes Labs, Deere & Company, Granular, aWhere, The Climate Corporation, Agribotix LLC, Tule Technologies, Prospera, Mavrx Inc., Cropx, Harvest Croo, Farmbot, Trace Genomics, Spensa Technologies Inc., Resson, Vision Robotics and Autonomous Tractor Corporation, among others.

We can add or profile a new company in the report per client need. Final confirmation will be provided by the research team depending on the difficulty of the survey.

Why Are COVID-19 AI in Agriculture Research Insights Interesting?

This report covers the current slowdown due to the coronavirus and the growth prospects of AI in Agriculture for the period. The study is a professional and in-depth one, with around n tables and figures, which provides key statistics on the state of the industry and is a valuable source of guidance and direction for companies and individuals interested in the domain who want to better understand how players are coping with and preparing against COVID-19.

Regional Analysis:

This segment of the report covers the analysis of AI in Agriculture consumption, import, export, market value, revenue, market share and growth rate, market status and SWOT analysis, and price and gross margin by region. It includes data about several parameters related to the regional contribution. From the available data, we identify which region has the largest share of the market. At the same time, we compare this data with other regions to understand the demand in other countries. Market analysis by region: North America (United States, Canada and Mexico), Europe (Germany, France, UK, Russia and Italy), Asia-Pacific (China, Japan, Korea, India and Southeast Asia), South America (Brazil, Argentina, etc.), Middle East & Africa (Saudi Arabia, Egypt, Nigeria and South Africa)

Answers That The Report Acknowledges:

To know more about this research, you can click @https://www.databridgemarketresearch.com/reports/global-ai-agriculture-market

Research objectives:

To study and analyze the global AI in Agriculture market size by key regions/countries, product type and application, history data, and forecast period

To understand the structure of the AI in Agriculture market by identifying its various subsegments.

To focus on the key global AI in Agriculture players, defining, describing and analyzing their value, market share, competitive landscape, SWOT analysis and development plans over the next few years.

To analyze the AI in Agriculture market with respect to individual growth trends, future prospects, and contribution to the total market.

To share detailed information about the key factors influencing the growth of the market (growth potential, opportunities, drivers, industry-specific challenges and risks)

To project the size of AI in Agriculture submarkets, with respect to key regions (along with their respective key countries)

To analyze competitive developments such as expansions, agreements, new product launches and acquisitions in the market.

To strategically profile the key players and comprehensively analyze their growth strategies

Key Pointers Covered within the Global AI in Agriculture Market Industry Trends and Forecast

And more... Get the Detailed Free TOC @ https://www.databridgemarketresearch.com/toc/?dbmr=global-ai-agriculture-market

Data Bridge presents itself as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market.

Contact:

Tel: +1-888-387-2818

Email: Corporatesales@databridgemarketresearch.com

Tech companies are using AI to mine our digital traces – STAT

Imagine sending a text message to a friend. As your fingers tap the keypad, words and the occasional emoji appear on the screen. Perhaps you write, "I feel blessed to have such good friends 🙂" Every character conveys your intended meaning and emotion.

But other information is hiding among your words, and companies eavesdropping on your conversations are eager to collect it. Every day, they use artificial intelligence to extract hidden meaning from your messages, such as whether you are depressed or diabetic.

Companies routinely collect the digital traces we leave behind as we go about our daily lives. Whether we're buying books on Amazon (AMZN), watching clips on YouTube, or communicating with friends online, evidence of nearly everything we do is compiled by the technologies that surround us: messaging apps record our conversations, smartphones track our movements, social media monitors our interests, and surveillance cameras scrutinize our faces.

What happens with all that data? Tech companies feed our digital traces into machine learning algorithms and, like modern-day alchemists turning lead into gold, transform seemingly mundane information into sensitive and valuable health data. Because the links between digital traces and our health emerge unexpectedly, I call this practice mining for emergent medical data.

A landmark study published in June, which used AI to analyze nearly 1 million Facebook posts containing more than 20 million words, shows how invasive this practice can be. According to the authors, Facebook status updates can predict many health conditions such as diabetes, hypertension, and gastrointestinal disorders.

Many of the words and phrases analyzed were not related to health. For instance, the presence of diabetes was predicted by religious language, including words related to God, family, and prayer. Those words often appeared in non-medical phrases such as, "I am blessed to spend all day with my daughter."
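The study's core finding, that seemingly non-medical words carry predictive signal for a health label, can be illustrated with a deliberately tiny sketch. The posts, labels, and scoring scheme below are invented for illustration; the actual research used far larger models trained on nearly a million Facebook posts:

```python
from collections import Counter

# Invented training posts labeled 1 (condition present) or 0 (absent).
posts = [
    ("i am blessed to spend all day with my daughter", 1),
    ("god and family keep me going pray for us", 1),
    ("great workout at the gym this morning", 0),
    ("watching the game with friends tonight", 0),
]

# Count how often each word appears in positive vs. negative posts.
pos, neg = Counter(), Counter()
for text, label in posts:
    (pos if label else neg).update(text.split())

def predict(text):
    """Predict 1 if the post's words lean toward the positive class."""
    score = sum(pos[w] - neg[w] for w in text.split())
    return 1 if score > 0 else 0

print(predict("feeling blessed with my family today"))  # leans positive: 1
```

Even in this toy version, words like "blessed" and "family", with no medical meaning at all, are what tip the prediction, which is exactly the pattern the researchers reported.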

Throughout history, medical information flowed directly from people with health conditions to those who cared for them: their family members, physicians, and spiritual advisers. Mining for emergent medical data circumvents centuries of well-established social norms and creates new opportunities for discrimination and oppression.

Facebook analyzes even the most mundane user-generated content to determine when people feel suicidal. Google is patenting a smart home that collects digital traces from household members to identify individuals with undiagnosed Alzheimer's disease, influenza, and substance use disorders. Similar developments may be underway at Amazon, which recently announced a partnership with the United Kingdom's National Health Service.

Medical information that is revealed to health care providers is protected by privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA). In contrast, emergent medical data receives virtually no legal protection. By mining it, companies sidestep privacy and antidiscrimination laws to obtain information most people would rather not disclose.

Why do companies go to such lengths? Facebook says it performs a public service by mining digital traces to identify people at risk for suicide. Google says its smart home can detect when people are getting sick. Though these companies may have good intentions, their explanations also serve as smoke screens that conceal their true motivation: profit.

In theory, emergent medical data could be used for good. People with undiagnosed Alzheimer's disease could be referred to physicians for evaluation; those with substance use disorders could be served ads for evidence-based recovery centers. But doing so without explicit consent violates individuals' privacy and is overly paternalistic.

Emergent medical data is extremely valuable because it can be used to manipulate consumers. People with chronic pain or substance use disorders can be targeted with ads for illicit opioids; those with eating disorders can be served ads for stimulants and laxatives; and those with gambling disorders can be tempted with coupons for casino vacations.

Informing and influencing consumers with traditional advertising is an accepted part of commerce. However, manipulating and exploiting them through behavioral ads that leverage their medical conditions and related susceptibilities is unethical and dangerous. It can trap people in unhealthy cycles of behavior and worsen their health. Targeted individuals and society suffer while corporations and their advertising partners prosper.

Emergent medical data can also promote algorithmic discrimination, in which automated decision-making exploits vulnerable populations such as children, seniors, people with disabilities, immigrants, and low-income individuals. Machine learning algorithms use digital traces to sort members of these and other groups into health-related categories called market segments, which are assigned positive or negative weights. For instance, an algorithm designed to attract new job candidates might negatively weight people who use wheelchairs or are visually impaired. Based on their negative ratings, the algorithm might deny them access to the job postings and applications. In this way, automated decision-making screens people in negatively weighted categories out of life opportunities without considering their desires or qualifications.
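A minimal sketch of the screening mechanism described above; the segment names, weights, and threshold are hypothetical, invented to show how negatively weighted categories can silently exclude people:

```python
# Hypothetical segment weights assigned by an automated decision system.
SEGMENT_WEIGHTS = {
    "frequent_traveler": +2,
    "wheelchair_user": -3,    # negatively weighted, as in the example above
    "visually_impaired": -3,
    "recent_graduate": +1,
}

def passes_screen(segments, threshold=0):
    """Return True if the candidate's summed segment weights meet the
    threshold; otherwise the system screens them out, with no appeal
    and no consideration of their actual qualifications."""
    return sum(SEGMENT_WEIGHTS.get(s, 0) for s in segments) >= threshold

print(passes_screen(["recent_graduate"]))                     # shown the job: True
print(passes_screen(["recent_graduate", "wheelchair_user"]))  # screened out: False
```

The point of the sketch is that the exclusion happens inside a score, so the person filtered out never sees a decision they could contest.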

Last year, in a high-profile case of algorithmic discrimination, the Department of Housing and Urban Development (HUD) accused Facebook of disability discrimination when it allowed advertisers to exclude people from receiving housing-related ads based on their disabilities. But in a surprising turn, HUD recently proposed a rule that would make it more difficult to prove algorithmic discrimination under the Fair Housing Act.

Because emergent medical data are mined secretly and fed into black-box algorithms that increasingly make important decisions, they can be used to discriminate against consumers in ways that are difficult to detect. On the basis of emergent medical data, people might be denied access to housing, jobs, insurance, and other important resources without even knowing it. HUD's new rule will make that easier to do.

One section of the rule allows landlords to defeat claims of algorithmic discrimination by identifying the inputs used in the model and showing that these inputs are not substitutes for a protected characteristic. This section gives landlords a green light to mine emergent medical data, because its inputs, our digital traces, have little or no apparent connection to health conditions or disabilities. Few would consider the use of religious language on Facebook a substitute for having diabetes, which is a protected characteristic under the Fair Housing Act. But machine learning is revealing surprising connections between digital traces and our health.

To close gaps in health privacy regulation, Sens. Amy Klobuchar (D-Minn.) and Lisa Murkowski (R-Alaska) introduced the Protecting Personal Health Data Act in June. The bill aims to protect health information collected by fitness trackers, wellness apps, social media sites, and direct-to-consumer DNA testing companies.

Though the bill has some merit, it would put consumers at risk by creating an exception for emergent medical data: one section excludes products on which personal health data is derived solely from other information that is not personal health data, such as Global Positioning System [GPS] data. If passed, the bill would allow companies to continue mining emergent medical data to spy on people's health with impunity.

Consumers can do little to protect themselves other than staying off the grid. Shopping online, corresponding with friends, and even walking along public streets can expose any of us to technologies that collect digital traces. Unless we do something soon to counteract this trend, we risk permanently discarding centuries of health privacy norms. Instead of healing people, emergent medical data will be used to control and exploit them.

Just as we prohibit tech companies from spying on our health records, we must prevent them from mining our emergent medical data. HUDs new rule and the Klobuchar-Murkowski bill are steps in the wrong direction.

Mason Marks, M.D., is an assistant professor of law at Gonzaga University and an affiliate fellow at Yale Law School's Information Society Project.

Google’s Deep Mind Explained! – Self Learning A.I.

AI won’t kill you, but ignoring it might kill your business, experts say … – Chicago Tribune

Relax. Artificial intelligence is making our lives easier, but it won't be a threat to human existence, according to a panel of practitioners in the space.

"One of the biggest misconceptions today about autonomous robots is how capable they are," said Brenna Argall, faculty research scientist at the Rehabilitation Institute of Chicago, during a Chicago Innovation Awards event Wednesday.

"We see a lot of videos online showing robots doing amazing things. What isn't shown is the hours of footage where they did the wrong thing," she said. "The reality is that robots spend most of their time not doing what they're supposed to be doing."

The event at Studio Xfinity drew about 200 people, who mingled among tech exhibits before contemplating killer robot overlords.

Stephen Pratt, a former IBM employee who was then responsible for the global implementation of Watson, also was quick to swat down the notion that machines are poised to run the world.

The tech instead gives better ways to improve services, products and business, he said, besting humans in applications dealing with demand predictions, pricing, inventory, retail promotion, logistics and preventive maintenance.

"Amplifying human intelligence, and overcoming human cognitive biases: I think that's where it fits," said Pratt, founder and CEO of business consultancy Noodle.ai. "Humans are really bad probabilistic thinkers and statisticians. That's where cognitive bias creeps in and, therefore, inefficiencies and lost profit."

But machines won't replace humans when it comes to big-picture decisions, he said.

"Those algorithms are not going to set the strategy for the company. It'll help you make the decision once I come up with the idea," Pratt said. "But any executive that doesn't have a supercomputer in the mix now on their side, and they're stuck in the spreadsheet era, your jobs are going to be in jeopardy in a few years."

It'll be up to machines to decipher those spreadsheets anyway, as so much data is being collected that it would be overwhelming for humans to understand, said Kris Hammond, co-founder of Chicago AI company Narrative Science.

"We're no longer looking at a world with a spreadsheet with 20 columns and 50 rows. We're now looking at spreadsheets of thousands of columns and millions of rows," said Hammond, founder of the University of Chicago's Artificial Intelligence Laboratory. "The only way we can actually understand what's going on in the world is to have systems that look at that data, understand what they mean and then turn it into something we can understand."

Mike Shelton, technical director for Microsoft's Azure Data Services, said it's also a time saver.

"What I see every day is it's giving time back," he said. "Through an AI interface, I can ask a question in speech or text and get a response through that without having to go search for a web page or hunt for information."

Julie Friedman Steele, CEO of the World Future Society, said her organization is focusing on the advances that could be made using AI in education, where teachers in crowded classrooms can't give much attention to students individually.

"As a human, can you actually learn all the knowledge that you might have a student interested in learning?" said Steele, who's also CEO and founder of The 3D Printer Experience. "I'm not talking about there not being a human in the room and it's all robots. I'm just saying that there's an opportunity in education with artificial intelligence so that if a teacher doesn't know something, it's OK."

Cheryl V. Jackson is a freelance writer. Twitter @cherylvjackson


Total partners with Google to deploy AI-powered solar energy tool – The Hindu


French energy company Total has developed a tool to estimate the solar energy potential of rooftops. Built in partnership with Google Cloud, the tool will help popularise the deployment of solar energy panels in households.

The tool, Solar Mapper, uses an artificial intelligence (AI) algorithm to extract data from satellite images. AI facilitates sharper and quicker estimation of solar energy potential than present tools, the company said in an official statement.
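The arithmetic behind such an estimate is simple once the AI has extracted the roof geometry from imagery. As a rough illustration only (the function, constants and figures below are hypothetical assumptions, not Total's actual model):

```python
# Back-of-the-envelope version of the estimate a tool like Solar Mapper
# produces once AI has extracted usable roof area from satellite imagery.
# All names and constants here are illustrative, not Total's real model.

def annual_solar_yield_kwh(roof_area_m2, irradiance_kwh_m2_year,
                           panel_efficiency=0.20, usable_fraction=0.7):
    """Rough annual energy yield for a rooftop installation:
    usable area x local irradiance x panel efficiency."""
    return roof_area_m2 * usable_fraction * irradiance_kwh_m2_year * panel_efficiency
```

For a 40 m2 roof at roughly 1,200 kWh/m2/year of irradiance, this sketch gives on the order of 6,700 kWh per year; the value of the AI step is producing the area and shading inputs accurately and at scale.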

The tool will also guide households to understand what technology would need to be installed depending on solar energy requirements.

Researchers from Total and Google Cloud took 6 months to devise the programme. At present, Solar Mapper is said to provide nearly 90% coverage in France.


The tool will soon be made available across Europe and the rest of the world, the team said. Solar Mapper will also expand its application to industrial and commercial buildings.

Total said this will help further its goal of reaching net-zero emissions by 2050.

In September, Google's CEO Sundar Pichai said in a video message that the company has ended its carbon legacy, making it the first major company to do so.


AI computing will enter the ‘land of humans’ in the 2020s: The promise and the peril | TheHill – The Hill

Indisputably, computers in their myriad forms helped improve our lives in the last century, and especially in the past decade. Much of our interaction with computers, however, has long been stilted and unnatural.

The means of natural interaction we evolved for human communication generally were not of much use in dealing with computers. We had to enter their land to get our work done, be it typing, clicking buttons or editing spreadsheets. While our productivity increased, so did the time we spent in these unnatural modes of interaction. Communicating with computers is sometimes such a soul-draining activity that, over time, we even created special classes of computer data-entry positions.

Thanks to recent strides in artificial intelligence (AI), especially in perceptual intelligence, this is going to change drastically in coming years, with computers entering our land instead of the other way around. They will be able to hear us, to speak back to us, to see us and to show us back. In an ironic twist, these "advanced" capabilities finally will allow us to be ourselves, and to have computers deal with us in modes of interaction that are natural to us.

We won't need to type to them or to speak in stilted, halting voices. This will make computer assistants and decision-support systems infinitely more human-friendly, as witnessed by the increasing popularity of "smart speakers." As computers enter the land of humans, we might even reclaim some of our lost arts, such as cursive script, since it will become as easy for computers to recognize handwriting as it is for humans.

Granted, the current recognition technology still has many limitations, but the pace of improvement has been phenomenal. Despite having done an undergraduate thesis on speech recognition, I have scrupulously avoided almost all dictation/transcription technologies. Recently, however, the strides in voice transcription have been quite remarkable, even for someone with my accent. In fact, I used Pixel 4 Recorder to transcribe my thoughts for this article!

Beyond the obvious advantages of easy communication with computer assistants, their entry into our land has other important benefits.

For a long time now, computers have foisted a forced homogenization among the cultures and languages of the world. Whatever your mother tongue, you had to master some pidgin English to enter the land of computers. In the years to come, however, computers can unify us in all our diversity, without forcing us to lose our individuality. We can expect to see a time when two people can speak in their respective mother tongues and understand each other, thanks to real-time AI transcription technology that rivals the mythical Babel Fish from "The Hitchhiker's Guide to the Galaxy." Some baby steps towards this goal are already being taken. I have a WeChat account to keep in touch with friends from China; they all communicate in Chinese, and I still get a small percentage of their communications thanks to the "translate" button.

Seeing and hearing the world as we do will allow computers to take part in many other quotidian aspects of our lives beyond human-machine communication. While self-driving cars still may not be here this coming decade, we certainly will have much more intelligent cars that see the road and the obstacles, hear and interpret sounds and directions, the way we do, and thus provide much better assistance to us in driving. Similarly, physicians will have access to intelligent diagnostic technology that can see and hear the way they themselves do, thus making their jobs much easier and less time-consuming (and giving them more time for interaction with patients!).

Of course, to get computers to go beyond recognition and see the world the way we do, we still have some hard AI problems to solve, including giving computers the common sense that we humans share, and the ability to model the mental states of those humans who are in the loop. The current pace of progress makes me optimistic that we will make important breakthroughs on these problems within this decade.

There is, of course, a flip side. Until now, it was fairly easy for us to figure out whether we were interacting with a person or a computer, be it by the stilted prose or robotic voice of the latter. As computers enter our land with natural interaction modalities, they can have a significant impact on our perception of reality and human relations. As a species, we already are acutely susceptible to the sin of anthropomorphization. Computer scientist and MIT professor Joseph Weizenbaum is said to have shut down his Eliza chatbot when he became concerned that the office secretaries were typing their hearts out to it. Already, modern chatbots such as Woebot are rushing onto the ground where Weizenbaum feared to tread.

Imagine the possibilities when our AI-enabled assistants don't rely on us typing but, instead, can hear, see and talk back to us.

There also are the myriad possibilities of synthetic reality. In order to give us some ability to tell whether we are interacting with a computer or the reality it generated, there are calls to have AI assistants voluntarily identify themselves as such when interacting with humans. This is ironic, considering all of the technological steps we took to get computers into our land in the first place.

Thanks to the internet of things (IoT) and 5G communication technologies, computers that hear and see the world the way we do can also be weaponized to provide surveillance at scale. Surveillance in the past required significant human power. With improved perceptual recognition capabilities, computers can provide massive surveillance capabilities without requiring much human power.

It's instructive to remember a crucial difference between computers and humans: when we learn a skill, there is no easy way to instantly transfer it to others; we don't have USB connectors to our brains. In contrast, computers do, and thus when they enter our land, they enter all at once.

Even an innocuous smart speaker in our home can invade our privacy. This alarming trend is already seen in some countries such as China, where the idea of privacy in the public sphere is becoming increasingly quaint. Countering this trend will require significant vigilance and regulatory oversight from civil society.

After a century of toiling in the land of computers, we finally will have them come to our land, on our terms. If language is the soul of a culture, our computers will start having first glimpses of our human culture. The coming decade will be a test of how we will balance the many positive impacts of this capability on productivity and quality of life with its harmful or weaponized aspects.

Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and chief AI officer for AI Foundation, which focuses on the responsible development of AI technologies. He served as president and is now past president of the Association for the Advancement of Artificial Intelligence and was a founding board member of the Partnership on AI. He can be followed on Twitter @rao2z.


AI expert calls for end to UK use of racially biased algorithms – The Guardian

An expert on artificial intelligence has called for all algorithms that make life-changing decisions in areas from job applications to immigration into the UK to be halted immediately.

Prof Noel Sharkey, who is also a leading figure in a global campaign against killer robots, said algorithms were so infected with biases that their decision-making processes could not be fair or trusted.

A moratorium must be imposed on all life-changing decision-making algorithms in Britain, he said.

Sharkey has suggested testing AI decision-making machines in the same way as new pharmaceutical drugs are rigorously checked before they are allowed on to the market.

In an interview with the Guardian, the Sheffield University robotics/AI pioneer said he was deeply concerned over a series of examples of machine-learning systems being loaded with bias.

On inbuilt bias in algorithms, Sharkey said: There are so many biases happening now, from job interviews to welfare to determining who should get bail and who should go to jail. It is quite clear that we really have to stop using decision algorithms, and I am someone who has always been very light on regulation and always believed that it stifles innovation.

But then I realised eventually that some innovations are well worth stifling, or at least holding back a bit. So I have come down on the side of strict regulation of all decision algorithms, which should stop immediately.

There should be a moratorium on all algorithms that impact on peoples lives. Why? Because they are not working and have been shown to be biased across the board.

Sharkey said he had spoken to the biggest global social media and computing corporations, such as Google and Microsoft, about the innate bias problem. They know it's a problem and they've been working, in fairness, to find a solution over the last few years, but none so far has been found.

Until they find that solution, what I would like to see is large-scale pharmaceutical-style testing. Which in reality means that you test these systems on millions of people, or at least hundreds of thousands of people, in order to reach a point that shows no major inbuilt bias. These algorithms have to be subjected to the same rigorous testing as any new drug produced that ultimately will be for human consumption.

As well as numerous examples of racial bias in machine-led decisions on, for example, who gets bail in the US or on healthcare allocation, Sharkey said his work on autonomous weapons, or killer robots, also illuminated how bias infects algorithms.

There is this fantasy among people in the military that these weapons can select individual targets on their own. These move beyond the drone strikes, which humans aren't great at already, with operatives moving the drone by remote control and targeting individual faces via screens from bases thousands of miles away, he said.

Now the new idea that you could send autonomous weapons out on their own, with no direct human control, and find an individual target via facial recognition is more dangerous. Because what we have found out from a lot of research is that the darker the skin, the harder it is to properly recognise the face.

In the laboratory you get a 98% recognition rate for white males without beards. It's not very good with women and it's even worse with darker-skinned people. In the latter case, the laboratory results have shown it comes to the point where the machine cannot even recognise that you have a face.

So, this exposes the fantasy of facial recognition being used to directly target enemies like al-Qaida, for instance. They are not middle-class men without beards, of whom there is a 98% recognition rate in the lab. They are darker-skinned people and AI-driven weapons are really rubbish at that kind of recognition under the current technology. The capacity for innocent people being killed by autonomous weapons using a flawed facial recognition algorithm is enormous.
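The kind of disparity Sharkey describes becomes measurable once a system's identifications are labelled by demographic group. A minimal sketch of such an audit (the group names, data layout and functions below are hypothetical, not any vendor's actual benchmark):

```python
# Sketch: auditing a recognition model for demographic accuracy gaps.
# `results` maps a (hypothetical) group label to (predicted_id, true_id) pairs.

def accuracy_by_group(results):
    """Fraction of correct identifications per demographic group."""
    return {
        group: sum(pred == true for pred, true in pairs) / len(pairs)
        for group, pairs in results.items()
    }

def max_disparity(accuracies):
    """Largest accuracy gap between any two groups: a crude bias signal."""
    return max(accuracies.values()) - min(accuracies.values())
```

The pharmaceutical-style testing Sharkey proposes would, in effect, run this kind of audit at the scale of hundreds of thousands of subjects and reject any system whose disparity exceeds an agreed threshold.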

Sharkey said weapons like these should not be in the planning stage, let alone ever deployed. In relation to decision-making algorithms generally, these flaws in facial recognition are yet another argument, along with all the other biases, that they too should be shut down, albeit temporarily, until they are tested just like any new drug should be.


Facebook’s Director Of AI Research On Why It’s Critical To Take Career Risks – Forbes


As a 25-year-old Computer Science PhD student in France, Yann LeCun chose to study a field that was in its infancy: machine learning. Many skeptics questioned his decision to pursue machine learning, an application of artificial intelligence that ...


AI will have a huge impact on your healthcare. But there are still big obstacles to overcome – ZDNet

Healthcare has been one of the most promising testing grounds for artificial intelligence, thanks largely to the vast amounts of data, in the forms of medical records and scans, that these smart systems can analyse. But while there are plenty of AI projects underway, there are still barriers to rolling out the benefits further.

Moorfields Eye Hospital in London has been working with DeepMind and Google Health to develop an algorithm that interprets scans of the back of the eye, which are known as optical coherence tomography scans.

The impact of this AI-led innovation is potentially revolutionary, says Peter Thomas, director of digital innovation at Moorfields Eye Hospital. The algorithm supports automated interpretation of patient scans and gives hospital staff access to excellent diagnostic information.


Yet despite all this promise, the impact of AI isn't as wide as it could be, at least not yet.

If you deploy AI in a hospital, you're using the technology in a place where you already have a department full of clinical experts. Yes, they'll be able to use the interpretation the AI produces, but they'd probably have come up with a similar diagnostic decision themselves.

Thomas, who spoke at the recent virtual HETT Reset event, says that AI will have a bigger impact when you can apply it to a situation where the level of expertise is different, like in the optometry practice on your high street.

However, that's a big challenge because, at present, the technical infrastructures to support the use of those algorithms in opticians do not exist.

Infrastructure issues aren't the only barrier to the development of more effective healthcare treatment through AI. Another key challenge is finding ways to bring together data from multiple clinical sources.

Right now, AI is usually applied to single decisions. Thomas gives the example of diabetic retinopathy screening in his own hospital, where every patient with diabetes gets an annual eye scan that determines the level of follow-up care. "We know that AI can deal with that single workflow pretty well," says Thomas.
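The shape of such a single-decision workflow is easy to sketch: one model score per annual scan, mapped to a follow-up pathway. The thresholds and pathway labels below are invented for illustration, not Moorfields' actual screening protocol:

```python
# Sketch of a single-decision screening workflow: one model score per
# annual eye scan, mapped to a follow-up action. Thresholds and labels
# are illustrative assumptions, not a real clinical protocol.

def triage(model_score):
    """Map a retinopathy model score in [0, 1] to a follow-up action."""
    if model_score >= 0.8:
        return "urgent referral"
    if model_score >= 0.4:
        return "routine clinic review"
    return "rescreen in 12 months"
```

The simplicity is the point: one input, one output, one decision boundary to validate, which is why Thomas says AI handles this workflow well, and why chaining many such decisions across data sources is so much harder.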

Things get more complicated when hospital staff and their AI-based assistants need to go beyond a single source of data. That's a big issue, as effective healthcare for most patients relies on more than a single data source and usually involves a complex range of information.

If we fast-forward a few more years, says Thomas, and we anticipate a point at which there are multiple autonomous decision-making systems that might be involved in a single patient's healthcare journey, then there's going to be a lot of complexity around how staff are going to implement that information in hospitals and how they're going to monitor that data effectively.

"Each algorithm will need to be monitored for bias and performance as it changes. And there's the potential for complex interaction patterns when you have multiple algorithms involved in a single patient's care," says Thomas, who says the result is clear: the impact of AI in healthcare could be revolutionary, but we're not there yet.

"We're still a distance from being at a point where we can start deploying automated clinical management that goes beyond a single decision or a single interpretation. There's a lot of work to do in terms of getting the right workforce, expertise and structures within the hospital to support that."

Other experts agree. James Teo, clinical director of AI and data science, and consultant neurologist at Guy's and St Thomas' NHS Foundation Trust, joined Thomas at the HETT event and says one of the things his team has discovered through its research work is that "big data is really, really big".

Automated analysis by AI not only feeds the big-data beast but also sends it off in a new direction.

As people become more aware of automation, their expectations are raised. That hope creates more demand for AI systems, which might be implemented before the key use case around improving patient outcomes is actually identified.

"One fear I have is that the process of operating AI and data-driven technologies is that we'll create an even greater hunger for data, and we'll end up spending all our time clicking on menus and checkboxes. And that, I think, is the wrong way to travel. I think we need systems that allow us to capture data in a more human-friendly way," says Teo.

Moorfields' Thomas agrees, suggesting the main accelerator for AI in healthcare must be clinical usefulness. He says there's a tendency for healthcare providers to create AI-based point solutions. Startup companies target particular healthcare problems, but those aren't necessarily the key issues patients face and, as a result, the tech fails to create benefits.

Teo says the result of this badly thought-through deployment process is too many point solutions that need to be managed and maintained, and that's unfeasible for healthcare organisations, especially when you add in the risk that the startups that create these point solutions might disappear with their products a few years from now.

The answer, suggests Teo, is to create common platforms, or at least common standards, for handling these point solutions. Vendors need to sign up to these standards and the aim for hospital administrators and tech suppliers alike must be to avoid reinventing the wheel.

Indra Joshi, AI director at digital transformation unit NHSX, says her organisation has plans in this direction. It set up the NHS AI Lab in 2019, a £250m programme that aims to accelerate the safe and ethical development and deployment of AI into the health and care system.


One of the Lab's key programmes of work is about creating projects that take a problem-focused approach to the healthcare challenges that organisations face, rather than simply focusing on the AI products that currently exist.

"We've flipped the traditional approach on its head. We ask, 'what problems are you facing and how can we take some of those problems and develop a solution?' And if we fail, that's OK, because AI might not be the solution to every problem," says Joshi.

The AI Lab recently worked with Kettering General Hospital to develop a process-automation tool to help staff produce complex situational reports that have to be filled out during the coronavirus pandemic. The system automatically reduces complexity, collecting information from a variety of sources, such as frontline capacity records and patient data, and frees up staff to focus on patient care rather than reporting.
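At its core, this kind of report automation is about pulling figures from separate feeds and assembling them into one document without manual re-keying. A minimal sketch (the feed names, fields and report layout below are hypothetical, not Kettering's actual system):

```python
# Sketch of situational-report automation: combine figures from separate
# data feeds into a single report. All feed names, field names and the
# report format are hypothetical illustrations.

def build_sitrep(capacity_feed, patient_feed):
    """Combine a bed-capacity record and a patient list into one report."""
    total = capacity_feed["beds_total"]
    occupied = capacity_feed["beds_occupied"]
    return {
        "beds_free": total - occupied,
        "covid_inpatients": sum(1 for p in patient_feed if p["covid_positive"]),
        "occupancy_pct": round(100 * occupied / total, 1),
    }
```

Even a sketch this small shows where the staff time goes when the process is manual: locating each source, recomputing derived figures, and doing it again for every reporting cycle.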

This kind of data-enabled automation goes to show how the technology can boost staff productivity and patient healthcare. While AI can have a huge impact on diagnostics and decision-making processes, the biggest impact for now is likely to be around operational processes and that's something to celebrate, too.

"People often get excited about the clinical aspects of what AI can do; people always love to talk about how AI can really help in diagnosis. But actually, there's quite a lot of great work happening in the back-end processes," says Joshi.


Ai Weiwei Is Building Fences All Over NYC In A Powerful Public Art Project – HuffPost

One of the world's most famous living artists is headed to New York City this fall, and he's bringing a massive public art project with him.

Ai Weiwei, the prolific Chinese artist and activist famously profiled in the documentary Never Sorry, is behind the ambitious Good Fences Make Good Neighbors project set to take over NYC this October. Commissioned by the Public Art Fund, the five-borough exhibition will involve over 300 locations and hundreds of individual artworks, turning the sprawling city into an unconventional canvas for his collage-like experiment.

According to a statement announcing the project's specific locations on Tuesday, Ai's upcoming intervention was inspired by the international migration crisis and current global geopolitical landscape. The exhibition will use the concept of a security fence, something long touted by President Donald Trump, as its central visual element.

Ai Weiwei's work is extraordinarily timely, but it's not reducible to a single political gesture, Nicholas Baume, the director and chief curator of Public Art Fund, told HuffPost. The exhibition grows out of his own life and work, including his childhood experience of displacement during the Cultural Revolution, his formative years as an immigrant and student in NYC in the 1980s, and his more recent persecution as an artist-activist in China. It reflects his profound empathy with other displaced people, particularly migrants, refugees and victims of war.

The exhibition has been in development for several years, he added, so the election of President Trump has only added to its relevance.

In an earlier interview with The New York Times, Ai explained more directly that the work is a reaction to a retreat from the essential attitude of openness in American politics, though he did not explicitly mention Trump's desire to erect a wall on the border between Mexico and the U.S.

The fence has always been a tool in the vocabulary of political landscaping and evokes associations with words like border, security, and neighbor, Ai said in a statement on Tuesday. But what's important to remember is that while barriers have been used to divide us, as humans we are all the same. Some are more privileged than others, but with that privilege comes a responsibility to do more.

The name Good Fences Make Good Neighbors comes from the Robert Frost poem Mending Wall, which Baume sent to Ai early in the project's development. The poem includes the ambiguous phrase Ai used as his title, as well as the lines, Before I built a wall I'd ask to know / What I was walling in or walling out / And to whom I was like to give offence.

He loved the clarity and directness of Frost's writing, and the subtle irony of this famous refrain, Baume added.

Physically, the exhibition will involve large-scale, site-specific, freestanding works, some described as sculptural interventions, that will be installed in public spaces like Central Park, Washington Square Park and Flushing Meadows-Corona Park, as well as on private walls and buildings. Beyond the sculptures, Ai will display a series of 200 two-dimensional works on lamppost banners and 100 documentary images on bus shelters and newsstands. The photos were taken during the artist's travels to research the international refugee crisis, and they will be coupled with text about displaced people around the world.

This is clearly not an exhibition of conventional, off-the-shelf fences, Baume said. [Ai] has taken the familiar and utilitarian material of metal fencing, which has many forms, as a basic motif. He has created multiple variations on that theme, exploring the potential of the material as a sculptural element, adapted to different locations in very site-responsive ways. Some installations are more straightforward, some more complex, but they all share this basic DNA.

Good Fences will open to the public on Oct. 12 and will run until Feb. 11, marking the Public Art Fund's 40th anniversary. Since its inception, the organization's mission has revolved around providing public access to contemporary art, a goal Baume said is more relevant than ever. In the past, the Fund has organized projects like Anish Kapoor's Sky Mirror (2006) at the Rockefeller Center and Tatzu Nishi's Discovering Columbus (2012) at Columbus Circle.

See a detailed list of the locations for Good Fences by downloading the available press release on Public Art Fund's website. Ai Weiwei: Good Fences Make Good Neighbors will be on view from Oct. 12, 2017, to Feb. 11, 2018.
