The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Artificial Intelligence
"Perhaps Even More Dangerous than Nuclear Bombs": Tech Expert Toby Walsh on Artificial Intelligence – DER SPIEGEL International Edition
Posted: August 6, 2022 at 7:31 pm
DER SPIEGEL: Would it even still be realistic at all to outlaw AI-controlled weapons, for instance through a counterpart to the Nuclear Non-Proliferation Treaty, as you suggest in your new book "Machines Behaving Badly"?
Walsh: Well, outlawing them may not always work perfectly, but it can prevent worse. There are quite a few examples of weapons that were initially used but were later outlawed. Think of the widespread use of poison gas in World War I. Or think of blinding lasers, which can blind soldiers. They were outlawed by a United Nations protocol in 1998 and have almost never appeared on battlefields since, even though civilian laser technology is, as we know, widely used. For anti-personnel mines, the ban doesn't work as well, but at least 40 million of them have been destroyed due to outlawing protocols, saving the lives of many children. It's a similar story with cluster munitions: About 99 percent of the stockpile has been destroyed, even though they were used again in Syria. We can ensure that autonomous weapons become unacceptable by stigmatizing them.
DER SPIEGEL: Just four years ago, you predicted a glorious future for AI in your bestseller "It's Alive." What led to your change of heart?
Walsh: Reality happened! We've just seen a lot of unpleasant side effects of AI. Gradually, it became clearer and clearer the extent to which targeted election advertising was being used to sort of hack people's brains into voting for Donald Trump or Brexit, which often goes against their own interests. And through self-learning programs, these attacks have swelled into a perfect storm.
DER SPIEGEL: Does it give you hope that the European Union is currently working on a directive on "Trusted AI"?
Walsh: The EU is really leading the way when it comes to regulating AI. And the European market is big enough that it's worthwhile for global corporations to adapt their AI products to European rules. However, the devil is in the details. Formulating rules is one thing, but the question is how vigorously compliance with the rules will then actually be enforced.
DER SPIEGEL: There are already considerable differences of opinion in the preliminary stages, for example on the question of transparency. Can AI really be transparent and comprehensible, or isn't it always, by definition, partly a black box?
Walsh: Transparency is overrated. People aren't transparent either yet we often trust them in our everyday lives. I trust my doctor, for example, even though I'm not a medical professional and can't understand her decisions in detail. And even though I have no idea what's going on inside her. But I do trust the institutions that monitor my doctor.
DER SPIEGEL: How can we make sure that an AI is working according to the rules, even though we don't know its code in detail?
Walsh: This is a tricky problem, but it's not limited to AI. Modern companies are also a form of superhuman intelligence. Not even the smartest person on the planet could build an iPhone all by themself. No one is smart enough to design a power plant by themself. Every large corporation interconnects the intelligence of tens of thousands of moderately intelligent employees to form a superhumanly smart collective: in other words, a kind of artificial intelligence, as it were.
DER SPIEGEL: Couldn't we just pull the plug on an AI system that misbehaves and that would be the end of it?
Walsh: No way! We can't just turn off the computers of the global banking system; the global economy would collapse. We can't turn off the computers of air traffic control either; air traffic would collapse. We also can't turn off the computers of power plants, because then we would experience a blackout. We are totally dependent on computers, already today, and this dependence is only increasing with AI. We can't get rid of it. We can only try to ensure that the values of AI are in harmony with the values of our society.
DER SPIEGEL: You once correctly predicted that a self-driving car would cause a fatal accident with a cyclist and pedestrian, which is exactly what happened one year later. What do you predict for the next five years?
Walsh: With automatic facial recognition, we will see scandals. The American startup Clearview AI has scraped millions of photos without the consent of the people involved. The company was sued for that, but just keeps going. It's incredible they haven't already been sued into bankruptcy. And one more prediction: deep fakes (i.e., movies and photos manipulated with the help of AI) on the internet will increase. In a few years, deep fakes will decide an election or trigger a war, or even both.
High Five: Artificial Intelligence-Generated Campaigns and Experiments | LBBOnline – Little Black Book – LBBonline
Posted: at 7:31 pm
I can't stop playing with Midjourney. It may signal the end of human creativity or the start of an exciting new era, but here's me, like a monkey at a typewriter, chucking random words into the algorithm for an instant hit of this-shouldn't-be-as-good-as-it-is art.
For those who don't know, Midjourney is one of a number of image-generating AI algorithms that can turn written prompts into unworldly pictures. It, along with OpenAI's DALL-E 2, has been having something of a moment in the last month as people get their hands on them and try to push them to their limits. Craiyon - formerly DALL-E mini - is an older, less refined and very much wobblier platform to try too. It's worth having a go just to get a feel for what these algorithms can and can't do - though be warned, the dopamine hit of seeing some silly words turn into something strange, beautiful, terrifying or cool within seconds is quite addictive. A confused dragon playing chess. A happy apple. A rat transcends and perceives the oneness of the universe, pulsing with life. Yes Sir, I can boogie.
Within the LBB editorial team, we've been having lots of discussions about the implications of these art-generating algorithms. What are the legal and IP ramifications for those artists whose works are mined and drawn into the data set? (On my Midjourney server, Klimt and HR Giger seem to be the most popular artists to replicate, but what of more contemporary artists?) Will the industry use this to find unexpected new looks that go beyond the human creative habits and rules - or will we see content pulled directly from the algorithm? How long will it take for the algorithms to iron out the wonky weirdness that can sometimes take the human face way beyond the uncanny valley to a nightmarish, distorted abyss? What are the keys to writing prompts when you are after something very specific? Why does the algorithm seem to struggle when two different objects are requested in the same image?
Unlike other technologies that have shaken up the advertising industry, these image-generating algorithms are relatively accessible and easy to use (DALL-E 2's waitlist aside). The results are almost instant - and the possibilities, for now, seem limitless. We've already seen a couple of brands have a go with campaigns that are definitely playing on the novelty and PR angle of this new technology - and a few really intriguing art projects too...
Agency: Rethink
The highest profile commercial campaign of the bunch is Rethink's new Heinz campaign. It's a follow-up to a previous campaign, in which humans were asked to draw a bottle of ketchup and ended up all drawing a bottle of Heinz. This time around, the team asked DALL-E 2 - and the algorithm, like its human predecessors, couldn't help but create images that looked like Heinz-branded bottles (albeit with a funky AI spin). In this case, the AI is used to reinforce and revisit the original idea - but how long will it take before we're using AIs to generate ideas for boards or pitch images?
Agency: 10 Days
Animation: Jeremy Higgins
This artsy animated short by art director and designer Jeremy Higgins is a delight and shows how a sequence of similar AI-generated images can serve as frames in a film. The flickering effect ironically gives the animation a very hand-made stop-motion style, reminding me of films that use individual oil paintings as frames. It's a really vivid encapsulation of what it feels like to be sucked into a Midjourney rabbit hole too... I also have to tip my hat to Stefan Sagmeister, who shared this film on his Instagram account.
For the latest issue of Cosmopolitan, creative Karen X Cheng used DALL-E 2 to create a dramatic and imposing cover - using the prompt: 'a strong female president astronaut warrior walking on the planet Mars, digital art synthwave'. There's a deep dive into the creative process, which also examines some of the potential ramifications of the technology, on the Cosmopolitan website that's well worth a read.
Studio: T&DA
Here's a cheeky sixth entry to High Five. This execution is part of a wider summer platform for BT Sport, centred around belief - in this case, football pundit Robbie Savage is served up a DALL-E 2 image of striker Aleksandar Mitrović lifting the golden boot. Fulham has just been promoted to the Premier League - but though Robbie can see it, he can't quite believe it.
Some Idiot Asked The Dall.E mini Artificial Intelligence Program What The Last Selfies Of Humans Will Look Like And Good News, We’re Definitely Headed…
Posted: at 7:31 pm
Metro - A TikToker asked Dall.E mini, the popular Artificial Intelligence (AI) image generator, what the last selfies on earth would look like, and the results are chilling.
In a series of videos titled "Asking an Ai to show the last selfie ever taken in the apocalypse," a TikTok account called @robotoverloards shared the disturbing images.
Each image shows a person taking a selfie set against an apocalyptic background featuring scenes of a nuclear wasteland and catastrophic weather, along with cities burning and even zombies.
Dall.E mini, now renamed to Craiyon, is an AI model that can draw images from any text prompt.
The image generator uses artificial intelligence to make photos based on the text you put in.
The image generator is connected to an artificial intelligence that has, for some time, been scraping the web for images to learn what things are. Often it will draw this from the captions attached to the pictures.
What's up everybody? I'm back with my weekly "old man screaming at the clouds" rant about how artificial intelligence is going to wipe our species clean off the planet and it's blatantly telling us this and we continue to ignore it.
Look at this shit.
Does this look like a good time to anybody that's not "metal as fuck"?
No. Absolutely not.
What's worse than the disfiguration in all these beauties' selfies is the devastation in the landscapes behind them.
That shit looks like straight-up nuclear winter to my virgin eyes.
Call it Skynet, Boston Dynamics, Dall.E mini, whatever the fuck you want. Bottom line is it's robot scum, and our man Stephen Hawking told us years ago, and Elon Musk is telling us now, that A.I. is going to be the end-all be-all of Homo sapiens. That's us. And that's a fucking wrap.
p.s. - the only thing that could make a nuclear/zombie apocalypse worse is this song playing on repeat in your head
Can artificial intelligence really help us talk to the animals? – The Guardian
Posted: July 31, 2022 at 8:18 pm
A dolphin handler makes the signal for "together" with her hands, followed by "create." The two trained dolphins disappear underwater, exchange sounds and then emerge, flip on to their backs and lift their tails. They have devised a new trick of their own and performed it in tandem, just as requested. "It doesn't prove that there's language," says Aza Raskin. "But it certainly makes a lot of sense that, if they had access to a rich, symbolic way of communicating, that would make this task much easier."
Raskin is the co-founder and president of Earth Species Project (ESP), a California non-profit group with a bold ambition: to decode non-human communication using a form of artificial intelligence (AI) called machine learning, and make all the knowhow publicly available, thereby deepening our connection with other living species and helping to protect them. A 1970 album of whale song galvanised the movement that led to commercial whaling being banned. What could a Google Translate for the animal kingdom spawn?
The organisation, founded in 2017 with the help of major donors such as LinkedIn co-founder Reid Hoffman, published its first scientific paper last December. The goal is to unlock communication within our lifetimes. "The end we are working towards is, can we decode animal communication, discover non-human language," says Raskin. "Along the way and equally important is that we are developing technology that supports biologists and conservation now."
Understanding animal vocalisations has long been the subject of human fascination and study. Various primates give alarm calls that differ according to predator; dolphins address one another with signature whistles; and some songbirds can take elements of their calls and rearrange them to communicate different messages. But most experts stop short of calling it a language, as no animal communication meets all the criteria.
Until recently, decoding has mostly relied on painstaking observation. But interest has burgeoned in applying machine learning to deal with the huge amounts of data that can now be collected by modern animal-borne sensors. "People are starting to use it," says Elodie Briefer, an associate professor at the University of Copenhagen who studies vocal communication in mammals and birds. "But we don't really understand yet how much we can do."
Briefer co-developed an algorithm that analyses pig grunts to tell whether the animal is experiencing a positive or negative emotion. Another, called DeepSqueak, judges whether rodents are in a stressed state based on their ultrasonic calls. A further initiative, Project CETI (which stands for the Cetacean Translation Initiative), plans to use machine learning to translate the communication of sperm whales.
Yet ESP says its approach is different, because it is not focused on decoding the communication of one species, but all of them. While Raskin acknowledges there will be a higher likelihood of rich, symbolic communication among social animals (for example, primates, whales and dolphins), the goal is to develop tools that could be applied to the entire animal kingdom. "We're species agnostic," says Raskin. "The tools we develop can work across all of biology, from worms to whales."
The motivating intuition for ESP, says Raskin, is work that has shown that machine learning can be used to translate between different, sometimes distant human languages without the need for any prior knowledge.
This process starts with the development of an algorithm to represent words in a physical space. In this many-dimensional geometric representation, the distance and direction between points (words) describes how they meaningfully relate to each other (their semantic relationship). For example, "king" has a relationship to "man" with the same distance and direction that "woman" has to "queen." (The mapping is not done by knowing what the words mean but by looking, for example, at how often they occur near each other.)
It was later noticed that these shapes are similar for different languages. And then, in 2017, two groups of researchers working independently found a technique that made it possible to achieve translation by aligning the shapes. To get from English to Urdu, align their shapes and find the point in Urdu closest to the word's point in English. "You can translate most words decently well," says Raskin.
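The geometry behind that claim can be sketched with a toy example. The snippet below uses invented two-dimensional embeddings, the classic king/queen analogy, and an orthogonal Procrustes rotation to stand in for the alignment step; real systems learn hundreds of dimensions from large corpora, so this illustrates only the idea, not the published method.

```python
# A toy sketch of the "shape alignment" idea, using invented 2-D embeddings.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Tiny "English" embedding space (coordinates are made up for illustration).
en = {
    "man":   np.array([1.0, 0.2]),
    "woman": np.array([1.0, 0.9]),
    "king":  np.array([2.0, 0.3]),
    "queen": np.array([2.0, 1.0]),
}

# The classic analogy: king - man + woman lands nearest to queen.
analogy = en["king"] - en["man"] + en["woman"]
print("king - man + woman ~", max(en, key=lambda w: cosine(en[w], analogy)))

# A second "language" whose embedding space has the same shape, just rotated.
theta = np.deg2rad(40)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
other = {w: rot @ v for w, v in en.items()}

# Align the two shapes with an orthogonal rotation, then translate by
# nearest neighbour -- the alignment step the article alludes to.
A = np.stack([other[w] for w in en])   # points in the "other language"
B = np.stack([en[w] for w in en])      # points in "English"
R, _ = orthogonal_procrustes(A, B)
for i, w in enumerate(en):
    mapped = A[i] @ R
    nearest = max(en, key=lambda t: cosine(en[t], mapped))
    print(f"{w!r} in the other space maps back to {nearest!r}")
```

Because the second space here is an exact rotation of the first, the recovered rotation maps every word back to itself; with real languages the shapes only roughly match, which is why translations are "decent" rather than perfect.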
ESP's aspiration is to create these kinds of representations of animal communication, working on both individual species and many species at once, and then explore questions such as whether there is overlap with the universal human shape. "We don't know how animals experience the world," says Raskin, "but there are emotions, for example grief and joy, it seems some share with us and may well communicate about with others in their species. I don't know which will be the more incredible: the parts where the shapes overlap and we can directly communicate or translate, or the parts where we can't."
He adds that animals don't only communicate vocally. Bees, for example, let others know of a flower's location via a "waggle dance." There will be a need to translate across different modes of communication too.
The goal is "like going to the moon," acknowledges Raskin, but the idea also isn't to get there all at once. Rather, ESP's roadmap involves solving a series of smaller problems necessary for the bigger picture to be realised. This should see the development of general tools that can help researchers trying to apply AI to unlock the secrets of species under study.
For example, ESP recently published a paper (and shared its code) on the so-called "cocktail party problem" in animal communication, in which it is difficult to discern which individual in a group of the same animals is vocalising in a noisy social environment.
"To our knowledge, no one has done this end-to-end detangling [of animal sound] before," says Raskin. The AI-based model developed by ESP, which was tried on dolphin signature whistles, macaque coo calls and bat vocalisations, worked best when the calls came from individuals that the model had been trained on; but with larger datasets it was able to disentangle mixtures of calls from animals not in the training cohort.
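For a feel of what "detangling" means, the sketch below separates two synthetic, overlapping "calls" with classical independent component analysis from scikit-learn. ICA is a decades-old baseline, not ESP's neural model, and the signals here are invented; it only shows what it means to recover individual sources from mixed recordings.

```python
# Minimal illustration of the cocktail-party problem using classical ICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
call_a = np.sin(2 * np.pi * 440 * t)            # a smooth "whistle"
call_b = np.sign(np.sin(2 * np.pi * 97 * t))    # a buzzy "coo"
sources = np.c_[call_a, call_b] + 0.02 * rng.standard_normal((t.size, 2))

mixing = np.array([[0.7, 0.3],
                   [0.4, 0.6]])                 # two overlapping recordings
recorded = sources @ mixing.T

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(recorded)         # estimated individual calls

# Each recovered signal should correlate strongly with exactly one original
# call (up to sign and ordering, which ICA cannot determine).
corr = np.corrcoef(np.c_[sources, recovered].T)[:2, 2:]
print(np.round(np.abs(corr), 2))
```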
Another project involves using AI to generate novel animal calls, with humpback whales as a test species. The novel calls, made by splitting vocalisations into micro-phonemes (distinct units of sound lasting a hundredth of a second) and using a language model to "speak" something whale-like, can then be played back to the animals to see how they respond. "If the AI can identify what makes a random change versus a semantically meaningful one, it brings us closer to meaningful communication," explains Raskin. "It is having the AI speak the language, even though we don't know what it means yet."
A further project aims to develop an algorithm that ascertains how many call types a species has at its command by applying self-supervised machine learning, which does not require any labelling of data by human experts to learn patterns. In an early test case, it will mine audio recordings made by a team led by Christian Rutz, a professor of biology at the University of St Andrews, to produce an inventory of the vocal repertoire of the Hawaiian crow, a species that, Rutz discovered, has the ability to make and use tools for foraging and is believed to have a significantly more complex set of vocalisations than other crow species.
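The counting step in such a project can be illustrated with a crude stand-in: extract simple spectral features from synthetic calls and let a model-selection criterion choose the number of clusters. Everything below (the synthetic calls, the spectral features, the BIC criterion) is an assumption for illustration, not the self-supervised approach the article describes.

```python
# Rough stand-in for "how many call types does a species have?"
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
SAMPLE_RATE = 16_000          # Hz, assumed for the synthetic calls

def fake_call(f0, duration=0.1):
    """A short noisy tone around fundamental frequency f0 (a stand-in for a call)."""
    t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
    return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

# Pretend the species really has three call types.
true_types = rng.choice([500.0, 1200.0, 2600.0], size=120)
calls = [fake_call(f) for f in true_types]

def features(call):
    """Two coarse spectral features: dominant frequency and log energy."""
    spectrum = np.abs(np.fft.rfft(call))
    freqs = np.fft.rfftfreq(call.size, d=1 / SAMPLE_RATE)
    return [freqs[np.argmax(spectrum)], np.log(np.sum(spectrum**2))]

X = np.array([features(c) for c in calls])

# Fit mixtures with 1..6 components and let BIC pick the repertoire size.
bic = {k: GaussianMixture(n_components=k, covariance_type="diag",
                          random_state=0).fit(X).bic(X)
       for k in range(1, 7)}
print("estimated number of call types:", min(bic, key=bic.get))   # expect 3
```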
Rutz is particularly excited about the project's conservation value. The Hawaiian crow is critically endangered and only exists in captivity, where it is being bred for reintroduction to the wild. It is hoped that, by taking recordings made at different times, it will be possible to track whether the species's call repertoire is being eroded in captivity (specific alarm calls may have been lost, for example), which could have consequences for its reintroduction; that loss might be addressed with intervention. "It could produce a step change in our ability to help these birds come back from the brink," says Rutz, adding that detecting and classifying the calls manually would be labour intensive and error prone.
Meanwhile, another project seeks to understand automatically the functional meanings of vocalisations. It is being pursued with the laboratory of Ari Friedlaender, a professor of ocean sciences at the University of California, Santa Cruz. The lab studies how wild marine mammals, which are difficult to observe directly, behave underwater, and runs one of the world's largest tagging programmes. Small electronic "biologging" devices attached to the animals capture their location, type of motion and even what they see (the devices can incorporate video cameras). The lab also has data from strategically placed sound recorders in the ocean.
ESP aims to first apply self-supervised machine learning to the tag data to automatically gauge what an animal is doing (for example, whether it is feeding, resting, travelling or socialising) and then add the audio data to see whether functional meaning can be given to calls tied to that behaviour. (Playback experiments could then be used to validate any findings, along with calls that have been decoded previously.) This technique will be applied to humpback whale data initially; the lab has tagged several animals in the same group, so it is possible to see how signals are given and received. Friedlaender says he was "hitting the ceiling" in terms of what currently available tools could tease out of the data. "Our hope is that the work ESP can do will provide new insights," he says.
But not everyone is as gung-ho about the power of AI to achieve such grand aims. Robert Seyfarth is a professor emeritus of psychology at the University of Pennsylvania who has studied social behaviour and vocal communication in primates in their natural habitat for more than 40 years. While he believes machine learning can be useful for some problems, such as identifying an animal's vocal repertoire, there are other areas, including the discovery of the meaning and function of vocalisations, where he is sceptical it will add much.
The problem, he explains, is that while many animals can have sophisticated, complex societies, they have a much smaller repertoire of sounds than humans. The result is that the exact same sound can be used to mean different things in different contexts, and it is only by studying the context (who the individual calling is, how they are related to others, where they fall in the hierarchy, who they have interacted with) that meaning can hope to be established. "I just think these AI methods are insufficient," says Seyfarth. "You've got to go out there and watch the animals."
There is also doubt about the concept that the shape of animal communication will overlap in a meaningful way with human communication. Applying computer-based analyses to human language, with which we are so intimately familiar, is one thing, says Seyfarth. But it can be quite different doing it to other species. "It is an exciting idea, but it is a big stretch," says Kevin Coffey, a neuroscientist at the University of Washington who co-created the DeepSqueak algorithm.
Raskin acknowledges that AI alone may not be enough to unlock communication with other species. But he refers to research that has shown many species communicate in ways "more complex than humans have ever imagined." The stumbling blocks have been our ability to gather sufficient data and analyse it at scale, and our own limited perception. "These are the tools that let us take off the human glasses and understand entire communication systems," he says.
Artificial Intelligence Has a ‘Last Mile’ Problem, and Machine Learning Operations Can Solve It – Built In
Posted: at 8:18 pm
With headlines emerging about artificial intelligence (AI) reaching sentience, it's clear that the power of AI remains both revered and feared. For any AI offering to reach its full potential, though, its executive sponsors must first be certain that the AI is a solution to a real business problem.
And as more enterprises and startups alike develop their AI capabilities, we're seeing a common roadblock emerge known as AI's "last mile" problem. Generally, when machine learning engineers and data scientists refer to the last mile, they're referencing the steps required to take an AI solution and make it available for generalized, widespread use.
"The last mile describes the short geographical segment of delivery of communication and media services or the delivery of products to customers located in dense areas. Last mile logistics tend to be complex and costly to providers of goods and services who deliver to these areas." (Source: Investopedia)
Democratizing AI involves both the logistics of deploying the code or model as well as using the appropriate approach to track the model's performance. The latter becomes especially challenging, however, since many models function as black boxes in terms of the answers that they provide. Therefore, determining how to track a model's performance is a critical part of surmounting the last-mile hurdle. With less than half of AI projects ever reaching a production win, it's evident that optimizing the processes that comprise the last mile will unlock significant innovation.
The biggest difficulty developers face comes after they build an AI solution. Tracking its performance can be incredibly challenging, as it's both context-dependent and varies based on the type of AI model. For instance, while we must compare the results of predictive models to a benchmark, we can examine outputs from less deterministic models, such as personalization models, with respect to their statistical characteristics. This also requires a deep understanding of what a good result actually entails. For example, during my time working on Google News, we created a rigorous process to evaluate AI algorithms. This involved running experiments in production and determining how to measure their success. The latter required looking at a series of metrics (long vs. short clicks, source diversity, authoritativeness, etc.) to determine if in fact the algorithm was a win. Another metric that we tracked on Google News is new source diversity in personalized feeds. In local development and experiments, the results might appear good, but at scale and as models evolve, the results may skew.
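A minimal sketch of that kind of post-deployment check is below: it compares the source diversity of a candidate recommendation feed against a baseline using Shannon entropy. The feeds, the metric choice and the tolerance threshold are all invented for illustration; this is not Google News's actual pipeline.

```python
# Sketch of a "last mile" monitoring check: has source diversity dropped?
import math
from collections import Counter

def source_entropy(sources):
    """Shannon entropy (bits) of the distribution of sources in a feed."""
    counts = Counter(sources)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

baseline_feed = ["outlet_a", "outlet_b", "outlet_c", "outlet_a", "outlet_d"]
candidate_feed = ["outlet_a", "outlet_a", "outlet_a", "outlet_b", "outlet_a"]

baseline = source_entropy(baseline_feed)
candidate = source_entropy(candidate_feed)
print(f"baseline diversity: {baseline:.2f} bits, candidate: {candidate:.2f} bits")

# Flag the experiment if diversity drops by more than an agreed tolerance.
if candidate < baseline - 0.5:
    print("candidate model skews toward fewer sources -- hold the rollout")
```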
The solution, therefore, is two-fold:
Machine learning operations (MLOps) is becoming a new category of products necessary to adopt AI. MLOps is needed to establish good patterns and the tools required to increase confidence in AI solutions. Once AI needs are established, decision-makers must weigh the fact that while developing in-house may look attractive, it can be a costly affair given that the approach is still nascent.
Looking ahead, cloud providers will start offering AI platforms as a commodity. In addition, innovators will consolidate more robust tooling, and the same rigors that we see with traditional software development will be standardized and operationalized within the AI industry. Nonetheless, tooling is only a piece of the puzzle. There is significant work required to improve how we take an AI solution from idea to test to reality and ultimately measure success. We'll get there more quickly when AI's business value and use case is determined from the outset.
‘Alternative physics’ discovered by artificial intelligence – TweakTown
Posted: at 8:18 pm
A study on the physics discovery titled "Automated discovery of fundamental variables hidden in experimental data" has been published in the journal Nature Computational Science.
Researchers from Columbia Engineering have developed a new artificial intelligence (AI) program that could derive the fundamental variables of physics from video footage of physical phenomena. The program analyzed videos of systems like the swinging double pendulum, which researchers already know has four "state variables": the angle and angular velocity of each arm. Within a few hours, the AI determined there were 4.7 variables at play.
"We thought this answer was close enough. Especially since all the AI had access to was raw video footage, without any knowledge of physics or geometry. But we wanted to know what the variables actually were, not just their number," said Hod Lipson, director of the Creative Machines Lab in the Department of Mechanical Engineering.
Two of the variables it identified correlated with the angles of each arm. However, the other two were unclear, as the program interprets and visualizes the variables differently from how humans intuitively understand them. Nevertheless, as the AI was making accurate predictions about the system, it is clear it managed to identify four valid variables. The researchers then tested the AI on systems we don't fully understand, like a lava lamp and a fireplace, identifying 8 and 24 variables, respectively.
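A rough sense of how "counting hidden state variables" can work comes from a much simpler, purely linear stand-in: simulate a single pendulum (two true state variables), scramble its state into many redundant noisy channels, and count the principal components needed to explain the variance. The published method is nonlinear and works on raw video; this sketch only illustrates the counting idea.

```python
# Toy linear stand-in for estimating the number of hidden state variables.
import numpy as np

# Simulate a pendulum: d(theta)/dt = omega, d(omega)/dt = -sin(theta).
dt, steps = 0.01, 5000
theta, omega = 1.2, 0.0
states = np.empty((steps, 2))
for i in range(steps):
    omega += -np.sin(theta) * dt
    theta += omega * dt
    states[i] = (theta, omega)

# Embed the 2-D state into 10 noisy, redundant "sensor" channels.
rng = np.random.default_rng(0)
mixing = rng.standard_normal((2, 10))
observations = states @ mixing + 0.01 * rng.standard_normal((steps, 10))

# Count components needed to explain 99% of the variance (expect 2).
centered = observations - observations.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained = np.cumsum(singular_values**2) / np.sum(singular_values**2)
print("estimated state variables:", int(np.searchsorted(explained, 0.99)) + 1)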
"I always wondered, if we ever met an intelligent alien race, would they have discovered the same physics laws as we have, or might they describe the universe in a different way? Perhaps some phenomena seem enigmatically complex because we are trying to understand them using the wrong set of variables. In the experiments, the number of variables was the same each time the AI restarted, but the specific variables were different each time. So yes, there are alternative ways to describe the universe and it is quite possible that our choices aren't perfect," said Lipson.
You can read more from the study here.
Elon Musk and Silicon Valley’s Overreliance on Artificial Intelligence – The Wire
Posted: at 8:18 pm
When the richest man in the world is being sued by one of the most popular social media companies, it's news. But while most of the conversation about Elon Musk's attempt to cancel his $44 billion contract to buy Twitter is focusing on the legal, social, and business components, we need to keep an eye on how the discussion relates to one of the tech industry's most buzzy products: artificial intelligence.
The lawsuit shines a light on one of the most essential issues for the industry to tackle: What can and can't AI do, and what should and shouldn't AI do? The Twitter v. Musk contretemps reveals a lot about the thinking about AI in tech and startup land and raises issues about how we understand the deployment of the technology in areas ranging from credit checks to policing.
At the core of Musk's claim for why he should be allowed out of his contract with Twitter is an allegation that the platform has done a poor job of identifying and removing spam accounts. Twitter has consistently claimed in quarterly filings that less than 5% of its active accounts are spam; Musk thinks it's much higher than that. From a legal standpoint, it probably doesn't really matter if Twitter's spam estimate is off by a few percent, and Twitter's been clear that its estimate is subjective and that others could come to different estimates with the same data. That's presumably why Musk's legal team lost in a hearing on July 19 when they asked for more time to perform detailed discovery on Twitter's spam-fighting efforts, suggesting that likely isn't the question on which the trial will turn.
Regardless of the legal merits, it's important to scrutinise the statistical and technical thinking from Musk and his allies. Musk's position is best summarised in his filing from July 15, which states: "In a May 6 meeting with Twitter executives, Musk was flabbergasted to learn just how meager Twitter's process was. Namely: Human reviewers randomly sampled 100 accounts per day (less than 0.00005% of daily users) and applied unidentified standards to somehow conclude every quarter for nearly three years that fewer than 5% of Twitter users were false or spam." The filing goes on to express the flabbergastedness of Musk by adding, "That's it. No automation, no AI, no machine learning."
Perhaps the most prominent endorsement of Musk's argument here came from venture capitalist David Sacks, who quoted it while declaring, "Twitter is toast." But there's an irony in Musk's complaint here: If Twitter were using machine learning for the audit as he seems to think they should, and only labeling spam that was similar to old spam, it would actually produce a lower, less-accurate estimate than it has now.
There are three components to Musk's assertion that deserve examination: his basic statistical claim about what a representative sample looks like, his claim that the spam-level auditing process should be automated or use AI or machine learning, and an implicit claim about what AI can actually do.
On the statistical question, this is something any professional anywhere near the machine learning space should be able to answer (so can many high school students). Twitter uses a daily sampling of accounts to scrutinise a total of 9,000 accounts per quarter (averaging about 100 per calendar day) to arrive at its under-5% spam estimate. Though that sample of 9,000 users per quarter is, as Musk notes, a very small portion of the 229 million active users the company reported in early 2022, a statistics professor (or student) would tell you that that's very much not the point. Statistical significance isn't determined by what percentage of the population is sampled but simply by the actual size of the sample in question. As Facebook whistleblower Sophie Zhang put it, you can make the comparison to soup: "It doesn't matter if you have a small or giant pot of soup; if it's evenly mixed you just need a spoonful to taste-test."
The whole point of statistical sampling is that you can learn most of what you need to know about the variety of a larger population by studying a much smaller but decently sized portion of it. Whether the person drawing the sample is a scientist studying bacteria, or a factory quality inspector checking canned vegetables, or a pollster asking about political preferences, the question isn't "what percentage of the overall whole am I checking?" but rather "how much should I expect my sample to look like the overall population for the characteristics I'm studying?" If you had to crack open a large percentage of your cans of tomatoes to check for their quality, you'd have a hard time making a profit, so you want to check the fewest possible to get within a reasonable range of confidence in your findings.
While this thinking does go against the grain of certain impulses (there's a reason why many people make this mistake), there is also a way to make this approach to sampling more intuitive. Think of the goal in setting sample size as getting a reasonable answer to the question, "If I draw another sample of the same size, how different would I expect it to be?" A classic approach to explaining this problem is to imagine you've bought a great mass of marbles that are supposed to come in a specific ratio: 95% purple marbles and 5% yellow marbles. You want to do a quality inspection to ensure the delivery is good, so you load them into one of those bingo game hoppers, turn the crank, and start counting the marbles you draw, in each color. Let's say your first sample of 20 marbles has 19 purple and one yellow; should you be confident that you got the right mix from your vendor? You can probably intuitively understand that the next 20 random marbles you draw could end up being very different, with zero yellows or seven. But what if you draw 1,000 marbles, around the same as the typical political poll? What if you draw 9,000 marbles? The more marbles you draw, the more you'd expect the next drawing to look similar, because it's harder to hide random fluctuations in larger samples.
There are online calculators that can let you run the numbers yourself. If you only draw 20 marbles and get one yellow, you can have 95% confidence that the yellows would be between 0.13% and 24.9% of the total: not very exact. If you draw 1,000 marbles and get 50 yellows, you can have 95% confidence that yellows would be between 3.7% and 6.5% of the total; closer, but perhaps not something you'd sign your name to in a quarterly filing. At 9,000 marbles with 450 yellow, you can have 95% confidence the yellows are between 4.56% and 5.47%; you're now accurate to within a range of less than half a percent, and at that point Twitter's lawyers presumably told them they'd done enough for their public disclosure.
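The same numbers can be reproduced with a standard exact (Clopper-Pearson) binomial confidence interval; the snippet below assumes the statsmodels library and a 95% confidence level.

```python
# Reproducing the marble numbers with an exact binomial confidence interval.
from statsmodels.stats.proportion import proportion_confint

for yellows, drawn in [(1, 20), (50, 1000), (450, 9000)]:
    low, high = proportion_confint(yellows, drawn, alpha=0.05, method="beta")
    print(f"{yellows}/{drawn} yellow: 95% CI {low:.2%} to {high:.2%}")

# The interval narrows from roughly 0.1%-25% at n=20 to about 4.6%-5.5% at
# n=9,000: what matters is the absolute sample size, not the fraction of
# the population sampled.
```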
This reality, that statistical sampling works to tell us about large populations based on much smaller samples, underpins every area where statistics is used, from checking the quality of the concrete used to make the building you're currently sitting in, to ensuring the reliable flow of internet traffic to the screen you're reading this on.
It's also what drives all current approaches to artificial intelligence today. Specialists in the field almost never use the term "artificial intelligence" to describe their work, preferring to use "machine learning." But another common way to describe the entire field as it currently stands is "applied statistics." Machine learning today isn't really computers thinking in anything like what we assume humans do (to the degree we even understand how humans think, which isn't a great degree); it's mostly pattern-matching and -identification, based on statistical optimisation. If you feed a convolutional neural network thousands of images of dogs and cats and then ask the resulting model to determine if the next image is of a dog or a cat, it'll probably do a good job, but you can't ask it to explain what makes a cat different from a dog on any broader level; it's just recognising the patterns in pictures, using a layering of statistical formulas.
Stack up statistical formulas in specific ways, and you can build a machine learning algorithm that, fed enough pictures, will gradually build up a statistical representation of edges, shapes, and larger forms until it recognises a cat, based on the similarity to thousands of other images of cats it was fed. There's also a way in which statistical sampling plays a role: You don't need pictures of all the dogs and cats, just enough to get a representative sample, and then your algorithm can infer what it needs to about all the other pictures of dogs and cats in the world. And the same goes for every other machine learning effort, whether it's an attempt to predict someone's salary using everything else you know about them, with a boosted random forests algorithm, or to break down a list of customers into distinct groups with a clustering algorithm such as k-means.
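To make the "pattern matching, not understanding" point concrete, the toy sketch below trains a plain logistic regression on invented 8x8 "images" whose classes differ only in whether the top or bottom half is bright, standing in for cats and dogs. Everything here is illustrative; the takeaway is that the model confidently labels an input unlike anything it was trained on, simply because that input matches a learned surface pattern.

```python
# Toy illustration of pattern matching without understanding.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def toy_image(kind):
    """Random 8x8 image; 'cat' means a bright top half, 'dog' a bright bottom half."""
    img = rng.random((8, 8)) * 0.3
    if kind == "cat":
        img[:4] += 0.7
    else:
        img[4:] += 0.7
    return img.ravel()

labels = ["cat", "dog"] * 200
X = np.stack([toy_image(k) for k in labels])
model = LogisticRegression(max_iter=1000).fit(X, labels)

# Works fine on inputs that follow the training pattern.
sample = toy_image("cat")
print("in-distribution:", model.predict([sample])[0],
      round(float(model.predict_proba([sample]).max()), 2))

# An input unlike anything it has seen (one extremely bright top row) still
# gets a confident "cat" label, because it matches the learned surface pattern.
weird = np.zeros((8, 8))
weird[0, :] = 10.0
print("out-of-distribution:", model.predict([weird.ravel()])[0],
      round(float(model.predict_proba([weird.ravel()]).max()), 2))
```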
You don't absolutely have to understand statistics as well as a student who's recently taken a class in order to understand machine learning, but it helps. Which is why the statistical illiteracy paraded by Musk and his acolytes here is at least somewhat surprising.
But more important, in order to have any basis for overseeing the creation of a machine-learning product, or to have a rationale for investing in a machine-learning company, it's hard to see how one could be successful without a decent grounding in the rudiments of machine learning, and where and how it is best applied to solve a problem. And yet, team Musk here is suggesting they do lack that knowledge.
Once you understand that all machine learning today is essentially pattern-matching, it becomes clear why you wouldn't rely on it to conduct an audit such as the one Twitter performs to check for the proportion of spam accounts. "They're hand-validating so that they ensure it's high-quality data," explained security professional Leigh Honeywell, who's been a leader at firms like Slack and Heroku, in an interview. She added, "any data you pull from your machine learning efforts will by necessity be not as validated as those efforts." If you only rely on patterns of spam you've already identified in the past and already engineered into your spam-detection tools, in order to find out how much spam there is on your platform, you'll only recognise old spam patterns, and fail to uncover new ones.
Where Twitter should be using automation and machine learning to identify and remove spam is outside of this audit function, which the company seems to do. It wouldn't otherwise be possible to suspend half a million accounts every day and lock millions of accounts each week, as CEO Parag Agrawal claims. In conversations I've had with cybersecurity workers in the field, it's quite clear that large amounts of automation is used at Twitter (though machine learning specifically is actually relatively rare in the field because the results often aren't as good as other methods, marketing claims by allegedly AI-based security firms to the contrary).
At least in public claims related to this lawsuit, prominent Silicon Valley figures are suggesting they have a different understanding of what machine learning can do, and when it is and isn't useful. This disconnect between how many nontechnical leaders in that world talk about AI, and what it actually is, has significant implications for how we will ultimately come to understand and use the technology.
The general disconnect between the actual work of machine learning and how it's touted by many company and industry leaders is something data scientists often chalk up to marketing. It's very common to hear data scientists in conversation among themselves declare that AI is "just a marketing term." It's also quite common to have companies using no machine learning at all describe their work as AI to investors and customers, who rarely know the difference or even seem to care.
This is a basic reality in the world of tech. In my own experience talking with investors who make investments in AI technology, it's often quite clear that they know almost nothing about these basic aspects of how machine learning works. I've even spoken to CEOs of rather large companies that rely at their core on novel machine learning efforts to drive their product, who also clearly have no understanding of how the work actually gets done.
Not knowing or caring how machine learning works, what it can or can't do, and where its application can be problematic could lead society to significant peril. If we don't understand the way machine learning actually works (most often by identifying a pattern in some dataset and applying that pattern to new data), we can be led deep down a path in which machine learning wrongly claims, for example, to measure someone's face for trustworthiness (when this is entirely based on surveys in which people reveal their own prejudices), or that crime can be predicted (when many hyperlocal crime numbers are highly correlated with more police officers being present in a given area, who then make more arrests there), based almost entirely on a set of biased data or wrong-headed claims.
If we're going to properly manage the influence of machine learning on our society, on our systems and organisations and our government, we need to make sure these distinctions are clear. It starts with establishing a basic level of statistical literacy, and moves on to recognising that machine learning isn't magic, and that it isn't, in any traditional sense of the word, intelligent: that it works by pattern-matching to data, that the data has various biases, and that the overall project can produce many misleading and/or damaging outcomes.
It's an understanding one might have expected, or at least hoped, to find among some of those investing most of their life, effort, and money into machine-learning-related projects. If even people that deep aren't making those efforts to sort fact from fiction, it's a poor omen for the rest of us, and the regulators and other officials who might be charged with keeping them in check.
This article was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.
Artificial Intelligence (AI) in Drug Discovery Market worth $4.0 billion by 2027 – Exclusive Report by MarketsandMarkets – PR Newswire
Posted: at 8:18 pm
Browse in-depth TOC on "Artificial Intelligence (AI) in Drug Discovery Market": 177 Tables, 33 Figures, 198 Pages
Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=151193446
North America is expected to dominate the Artificial Intelligence in Drug Discovery Market in 2022.
North America accounted for the largest share of the global AI in drug discovery market in 2021 and is also expected to grow at the highest CAGR during the forecast period. North America, which comprises the US, Canada, and Mexico, forms the largest market for AI in drug discovery. These countries have been early adopters of AI technology in drug discovery and development. The presence of key established players, a well-established pharmaceutical and biotechnology industry, a high focus on R&D, and substantial investment are some of the key factors responsible for the large share and high growth rate of this market.
Prominent players in Artificial Intelligence in Drug Discovery Market:
Players adopted organic as well as inorganic growth strategies such as product upgrades, collaborations, agreements, partnerships, and acquisitions to increase their offerings, cater to the unmet needs of customers, increase their profitability, and expand their presence in the global AI in Drug Discovery Industry.
Request Sample Pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=151193446
AI in Drug Discovery Market Dynamics
What benefits does AI show in the drug discovery and development process?
Drug discovery is a very costly and lengthy process, owing to which there is a need for alternative tools for discovering new drugs. Drug discovery and development are commonly conducted through in vivo and in vitro methods, which are very costly and time-consuming. Furthermore, it takes ~10 years on average for a new drug to enter the market at a cost of ~USD 2.6 billion (Source: Biopharmaceutical Research and Development.org). Several players operating in this market are developing platforms that can help in the rapid discovery of drugs. For instance, Insilico Medicine (US) developed an AI-based drug discovery system, GENTRL, with which it could develop six experimental novel molecules within 21 days.
How and why is the AI workforce shortage an important restraining factor holding back the growth of the market?
AI is a complex system, and companies require a workforce with specific skill sets to design, manage, and implement AI systems. Personnel dealing with AI systems should be familiar with and aware of technologies such as machine intelligence, deep learning, cognitive computing, image recognition, and other AI technologies. Additionally, integrating AI technologies into existing systems is a challenging task that necessitates substantial data processing in order to replicate human brain behavior. Even slight errors might cause system failure and have a negative impact on the desired outcome. The absence of professional standards and certifications in AI/ML technologies is restraining the growth of this market.
What are the emerging markets for Artificial Intelligence in Drug Discovery?
Emerging economies such as India, China, and countries in the Middle East are expected to offer potential growth opportunities for players operating in the AI in drug discovery market. In most of these countries, the demand for pharmaceuticals is expected to increase significantly, owing to the rising incidence of chronic and infectious diseases, increasing income levels, and improving healthcare infrastructure. As a result, these markets are very attractive for companies whose profit margins are affected by stagnation in mature markets, the patent expiration of drugs, and increasing regulatory hurdles.
Speak to Analyst: https://www.marketsandmarkets.com/speaktoanalystNew.asp?id=151193446
Scope of the Artificial Intelligence (AI) in Drug Discovery Market Report:
- Market size available for years: 2020-2027
- Base year considered: 2021
- Forecast period: 2022-2027
- Forecast units: Value (USD Billion)
- Segments covered: Offering, Technology, Application, End User, and Region
- Geographies covered: North America (US and Canada), Europe (Germany, France, UK, Italy, and the RoE), Asia Pacific (Japan, China, India, and RoAPAC), and RoW
- Companies covered: NVIDIA Corporation (US), Microsoft (US), Google (US), Exscientia (UK), Schrödinger (US), Atomwise, Inc. (US), BenevolentAI (UK), NuMedii (US), BERG LLC (US), Cloud Pharmaceuticals (US), Insilico Medicine (US), Cyclica (Canada), Deep Genomics (Canada), IBM (US), BIOAGE (US), Valo Health (US), Envisagenics (US), twoXAR (US), Owkin, Inc. (US), XtalPi (US), Verge Genomics (US), Biovista (US), Evaxion Biotech (Denmark), Iktos (France), Standigm (South Korea), and BenchSci (Canada)
Browse Adjacent Markets: Healthcare IT Market Research Reports & Consulting
Browse Related Reports:
Drug Discovery Services Market by Process (Target Selection, Validation, Hit-to-lead), Type (Chemistry, Biology), Drug Type (Small molecules, Biologics), Therapeutic Area (Oncology, Neurology), End User (Pharma, Biotech) - Global Forecast to 2026
Artificial Intelligence in Genomics Market by Offering (Software, Services), Technology (Machine Learning, Computer Vision), Functionality (Genome Sequencing, Gene Editing), Application (Diagnostics), End User (Pharma, Research) - Global Forecasts to 2025
About MarketsandMarkets
MarketsandMarkets provides quantified B2B research on 30,000 high-growth niche opportunities/threats which will impact 70% to 80% of worldwide companies' revenues. It currently services 7,500 customers worldwide, including 80% of global Fortune 1000 companies as clients. Almost 75,000 top officers across eight industries worldwide approach MarketsandMarkets for their pain points around revenue decisions.
Our 850 full-time analysts and SMEs at MarketsandMarkets are tracking global high-growth markets following the "Growth Engagement Model (GEM)". The GEM aims at proactive collaboration with the clients to identify new opportunities, identify the most important customers, write "Attack, avoid and defend" strategies, and identify sources of incremental revenues for both the company and its competitors. MarketsandMarkets is now coming up with 1,500 MicroQuadrants (positioning top players across leaders, emerging companies, innovators, and strategic players) annually in high-growth emerging segments. MarketsandMarkets is determined to benefit more than 10,000 companies this year for their revenue planning and help them take their innovations/disruptions early to the market by providing them research ahead of the curve.
MarketsandMarkets' flagship competitive intelligence and market research platform, "Knowledge Store," connects over 200,000 markets and entire value chains for a deeper understanding of the unmet insights along with market sizing and forecasts of niche markets.
Contact: Mr. Aashish Mehra, MarketsandMarkets Inc., 630 Dundee Road, Suite 430, Northbrook, IL 60062, USA: +1-888-600-6441, Email: [emailprotected]
Research Insight: https://www.marketsandmarkets.com/ResearchInsight/ai-in-drug-discovery-market.asp
Visit Our Web Site: https://www.marketsandmarkets.com
Content Source: https://www.marketsandmarkets.com/PressReleases/ai-in-drug-discovery.asp
Photo: https://mma.prnewswire.com/media/1868356/AI_DRUG_DISCOVERY.jpg Logo: https://mma.prnewswire.com/media/660509/MarketsandMarkets_Logo.jpg
SOURCE MarketsandMarkets
U.S. Army Research Lab Expands Artificial Intelligence and Machine Learning Contract with Palantir for $99.9M – Business Wire
Posted: at 8:18 pm
DENVER--(BUSINESS WIRE)--Palantir Technologies Inc. (NYSE: PLTR) today announced that it will expand its work with the U.S. Army Research Laboratory to implement data and artificial intelligence (AI)/machine learning (ML) capabilities for users across the combatant commands (COCOMs). The contract totals $99.9 million over two years.
Palantir first partnered with the Army Research Lab to provide those on the frontlines with state-of-the-art operational data and AI capabilities in 2018. Palantir's platform has supported the integration, management, and deployment of relevant data and AI model training to all of the Armed Services, COCOMs, and special operators. This extension grows Palantir's operational RDT&E work to more users globally.
"Maintaining a leading edge through technology is foundational to our mission and partnership with the Army Research Laboratory," said Akash Jain, President of Palantir USG. "Our nation's armed forces require best-in-class software to fulfill their missions today while rapidly iterating on the capabilities they will need for tomorrow's fight. We are honored to support this critical work by teaming up to deliver the most advanced operational AI capabilities available with dozens of commercial and public sector partners."
"By working with the U.S. Army Research Lab, integrating with partner vendors, and iterating with users on the front lines, Palantir's software platforms will continue to quickly implement advanced AI capabilities against some of DOD's most pressing problem sets. We're looking forward to fielding our newest ML, Edge, and Space technologies alongside our U.S. military partners," said Shannon Clark, Senior Vice President of Innovation, Federal. "These technologies will enable operators in the field to leverage AI insights to make decisions across many fused domains. From outer space to the sea floor, and everything in between."
About Palantir Technologies Inc.
Foundational software of tomorrow. Delivered today. Additional information is available at https://www.palantir.com.
Forward-Looking Statements
This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, Palantir's expectations regarding the amount and the terms of the contract and the expected benefits of our software platforms. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond our control. These risks and uncertainties include our ability to meet the unique needs of our customer; the failure of our platforms to satisfy our customer or perform as desired; the frequency or severity of any software and implementation errors; our platforms' reliability; and our customer's ability to modify or terminate the contract. Additional information regarding these and other risks and uncertainties is included in the filings we make with the Securities and Exchange Commission from time to time. Except as required by law, we do not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.
More:
Posted in Artificial Intelligence
Comments Off on U.S. Army Research Lab Expands Artificial Intelligence and Machine Learning Contract with Palantir for $99.9M – Business Wire
What is Artificial Intelligence? Guide to AI | eWEEK – eWeek
Posted: July 29, 2022 at 5:21 pm
By any measure, artificial intelligence (AI) has become big business.
According to Gartner, customers worldwide will spend $62.5 billion on AI software in 2022. And it notes that 48 percent of CIOs have either already deployed some sort of AI software or plan to do so within the next twelve months.
All that spending has attracted a huge crop of startups focused on AI-based products. CB Insights reported that AI funding hit $15.1 billion in the first quarter of 2022 alone. And that came right after a quarter that saw investors pour $17.1 billion into AI startups. Given that data drives AI, it's no surprise that related fields like data analytics, machine learning and business intelligence are all seeing rapid growth.
But what exactly is artificial intelligence? And why has it become such an important and lucrative part of the technology industry?
Also see: Top AI Software
In some ways, artificial intelligence is the opposite of natural intelligence. If living creatures can be said to be born with natural intelligence, man-made machines can be said to possess artificial intelligence. So from a certain point of view, any thinking machine has artificial intelligence.
And in fact, one of the early pioneers of AI, John McCarthy, defined artificial intelligence as "the science and engineering of making intelligent machines."
In practice, however, computer scientists use the term artificial intelligence to refer to machines doing the kind of thinking that humans have taken to a very high level.
Computers are very good at making calculations: taking inputs, manipulating them, and generating outputs as a result. But in the past they have not been capable of other kinds of work that humans excel at, such as understanding and generating language, identifying objects by sight, creating art, or learning from past experience.
But thats all changing.
Today, many computer systems have the ability to communicate with humans using ordinary speech. They can recognize faces and other objects. They use machine learning techniques, especially deep learning, in ways that allow them to learn from the past and make predictions about the future.
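To make that concrete, here is a minimal, illustrative sketch, not taken from the article, of what "learning from the past" looks like in practice. It assumes Python with the scikit-learn library, and the usage figures and renewal labels are entirely hypothetical:

from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: hours of weekly product usage -> did the customer renew?
past_usage = [[1.0], [2.5], [3.0], [8.0], [9.5], [12.0]]
renewed = [0, 0, 0, 1, 1, 1]

# "Learning from the past": fit a model to the historical examples.
model = LogisticRegression()
model.fit(past_usage, renewed)

# "Predicting the future": estimate the outcome for an unseen case.
print(model.predict([[7.0]]))  # predicts whether a 7-hour-per-week user is likely to renew

Deep learning swaps the simple model above for many-layered neural networks, but the underlying pattern is the same: fit to past examples, then predict new ones.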
So how did we get here?
Also see: How AI is Altering Software Development with AI-Augmentation
Many people trace the history of artificial intelligence back to 1950, when Alan Turing published "Computing Machinery and Intelligence." Turing's essay began, "I propose to consider the question, 'Can machines think?'" It then laid out a scenario that came to be known as the Turing Test. Turing proposed that a computer could be considered intelligent if a person could not distinguish the machine from a human being.
In 1956, John McCarthy and Marvin Minsky hosted the first artificial intelligence conference, the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). It convinced computer scientists that artificial intelligence was an achievable goal, setting the foundation for several decades of further research. And early forays into AI technology developed bots that could play checkers and chess.
The 1960s saw the development of robots and several problem-solving programs. One notable highlight was the creation of ELIZA, a program that simulated psychotherapy and provided an early example of human-machine communication.
In the 1970s and '80s, AI development continued but at a slower pace. The field of robotics in particular saw significant advances, such as robots that could see and walk. And Mercedes-Benz introduced the first (extremely limited) autonomous vehicle. However, government funding for AI research decreased dramatically, leading to a period some refer to as the "AI winter."
Interest in AI surged again in the 1990s. The Artificial Linguistic Internet Computer Entity (ALICE) chatbot demonstrated that natural language processing could lead to human-computer communication that felt far more natural than what had been possible with ELIZA. The decade also saw a surge in the analytic techniques that would form the basis of later AI development, as well as the development of the first recurrent neural network architecture. This was also the decade when IBM rolled out its Deep Blue chess AI, the first computer to defeat a reigning world chess champion.
The first decade of the 2000s saw rapid innovation in robotics. The first Roombas began vacuuming rugs, and robots launched by NASA explored Mars. Closer to home, Google was working on a driverless car.
The years since 2010 have been marked by unprecedented increases in AI technology. Both hardware and software developed to a point where object recognition, natural language processing, and voice assistants became possible. IBM's Watson won Jeopardy. Siri, Alexa, and Cortana came into being, and chatbots became a fixture of modern retail. Google DeepMind's AlphaGo beat human Go champions. And enterprises in all industries have begun deploying AI tools to help them analyze their data and become more successful.
Now AI is truly beginning to evolve past some of the narrow and limited types into more advanced implementations.
Also see:The History of Artificial Intelligence
Different groups of computer scientists have proposed different ways of classifying the types of AI. One popular classification uses three categories:
- Narrow AI (also called weak AI), which performs a single, well-defined task
- General AI (also called strong AI), which would match human-level ability across a wide range of tasks
- Superintelligent AI, which would surpass human capabilities
Another popular classification uses four different categories:
- Reactive machines, which respond to inputs but retain no memory
- Limited memory systems, which draw on past data to inform decisions
- Theory of mind systems, which would model the beliefs and intentions of others
- Self-aware systems, which would have a sense of their own internal state
While these classifications are interesting from a theoretical standpoint, most organizations are far more interested in what they can do with AI. And that brings us to the aspect of AI that is generating a lot of revenue: the AI use cases.
Also see: Three Ways to Get Started with AI
The possible applications for artificial intelligence are nearly limitless. Some of today's most common AI use cases include voice assistants and chatbots, recommendation engines, fraud detection, predictive maintenance, and computer vision for tasks such as facial recognition and medical imaging.
Of course, these are just some of the more widely known use cases for AI. The technology is seeping into daily life in so many ways that we often aren't fully aware of them.
Also see: Best Machine Learning Platforms
So where is AI headed? Clearly, it is reshaping both consumer and business markets.
The technology that powers AI continues to progress at a steady rate. Future advances like quantum computing may eventually enable major new innovations, but for the near term, it seems likely that the technology itself will continue along a predictable path of constant improvement.
What's less clear is how humans will adapt to AI. That question looms large over human life in the decades ahead.
Many early AI implementations have run into major challenges. In some cases, the data used to train models has allowed bias to infect AI systems, rendering them unusable.
In many other cases, businesses have not seen the financial results they hoped for after deploying AI. The technology may be mature, but the business processes surrounding it are not.
"The AI software market is picking up speed, but its long-term trajectory will depend on enterprises advancing their AI maturity," said Alys Woodward, senior research director at Gartner.
"Successful AI business outcomes will depend on the careful selection of use cases," Woodward added. "Use cases that deliver significant business value, yet can be scaled to reduce risk, are critical to demonstrate the impact of AI investment to business stakeholders."
Organizations are turning to approaches like AIOps to help them better manage their AI deployments. And they are increasingly looking for human-centered AI that harnesses artificial intelligence to augment rather than to replace human workers.
In a very real sense, the future of AI may be more about people than about machines.
Also see: The Future of Artificial Intelligence
Read more:
What is Artificial Intelligence? Guide to AI | eWEEK - eWeek
Posted in Artificial Intelligence
Comments Off on What is Artificial Intelligence? Guide to AI | eWEEK – eWeek