The Prometheus League
Breaking News and Updates
Category Archives: Ai
What is AI and How is it Treated by the USPTO, EPO and CNIPA? – IPWatchdog.com
Posted: May 15, 2022 at 10:12 pm
Generally, artificial intelligence (AI) is an automation of a thing that a human being can do, or the simulation of intelligent human behavior by a machine. In other words, AI performs what a human can but with vastly more data and processing of incoming information. Unfortunately, claiming AI in adherence to its typical definition is akin to asking for a Section 101 subject matter eligibility rejection in the United States. Europe and China have already updated their patent examination procedures for AI. If the United States sustains its current examination procedure of machine intelligence in accordance with the abstract idea doctrine under the Alice and Mayo framework established by the Supreme Court, will we be leaving this industry behind?
AI is an umbrella term that encompasses four main categories: reactive AI, limited memory AI, theory of mind AI and self-aware AI. Reactive AI includes machines that operate solely on the data presently inputted; their decisions take into account only the current situation. Reactive AI does not draw inferences from past data. Examples of reactive AI include spam filters, Netflix show recommendations and computer chess players.
Limited memory AI is capable of making decisions based on data from the recent past and is capable of improving its decision-making processes over time. This is the category in which the vast majority of research and development and patenting is taking place. Examples of limited memory AI include autonomous vehicles capable of interpreting data received from the environment and making automatic adjustments to behavior when necessary.
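The difference between these first two categories is easy to see in code. The following minimal Python sketch is illustrative only, not drawn from the article; the filter names, word lists, and thresholds are all invented. The reactive filter judges each message in isolation, while the limited-memory filter also updates its statistics from recently observed examples.

```python
# Illustrative sketch only: contrasting reactive and limited-memory
# decision-making. The words, weights, and thresholds are invented.

REACTIVE_SPAM_WORDS = {"winner", "free", "prize"}

def reactive_filter(message: str) -> bool:
    """Reactive AI: the decision depends only on the current input."""
    words = set(message.lower().split())
    return len(words & REACTIVE_SPAM_WORDS) >= 2

class LimitedMemoryFilter:
    """Limited-memory AI: decisions also draw on recently observed data."""

    def __init__(self) -> None:
        self.spam_counts: dict[str, int] = {}  # word -> appearances in spam
        self.total_flagged = 0

    def update(self, message: str, was_spam: bool) -> None:
        # Learn from the recent past: adjust word statistics per example.
        if was_spam:
            self.total_flagged += 1
            for word in set(message.lower().split()):
                self.spam_counts[word] = self.spam_counts.get(word, 0) + 1

    def predict(self, message: str) -> bool:
        if self.total_flagged == 0:
            return reactive_filter(message)  # no history yet: fixed rules
        words = set(message.lower().split())
        score = sum(self.spam_counts.get(w, 0) for w in words)
        return score / self.total_flagged > 1.0  # invented threshold
```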
The machines grow more prescient in the next two categories. Theory of mind AI can understand human emotions and make decisions based on that understanding. Even more futuristic is self-aware AI: these machines are capable of processing the mental states and emotions of others, as well as having their own. Think of the robots in WALL-E or, more darkly, in I, Robot.
When the original patent laws were drafted, lawmakers did not anticipate that one day we might have machines with decision-making capabilities that would mirror those of humans. As a result, the United States Patent and Trademark Office (USPTO), the European Patent Office (EPO) and the China National Intellectual Property Administration (CNIPA) all have subject matter eligibility restrictions with respect to mental processes and patenting tasks that a human can perform, particularly what a human mind can perform, which includes processing information and data and making decisions based on that information and data. The idea is to prevent the patent system from being abused in this way. But, in view of the new technologies emerging in the field of AI, each of these offices has attempted to update its examination procedures to capture at least some of this subject matter.
The CNIPA prohibits patenting methods for mental activities. Recently, the CNIPA issued Draft Examination Guidelines on examining inventions related to the improvement of algorithms for artificial intelligence (such as deep learning, classification and clustering, and big data processing). When looking for a technical solution that can render machine intelligence patentable, the CNIPA proposes looking at improvements to algorithms and big data processing, whether the algorithms have a specific technical relationship with the internal structure of the computer system, and/or improvements to hardware computing efficiency or execution effect. The CNIPA considers improvements to data storage size, data transmission rate and hardware processing speed as evidence of a technical solution required for patentability.
In March of this year, the EPO's 2022 Guidelines for Examination came into effect, which state explicitly that "[a] mathematical method may contribute to the technical character of an invention, i.e. contribute to producing a technical effect that serves a technical purpose, by its application to a field of technology and/or by being adapted to a specific technical implementation." The EPO is going so far as to explicitly state that mathematical formulas can be patentable if used in a specific technical implementation. Specific examples of improvements to technical effect include efficient use of computer storage capacity or network bandwidth. The EPO has published a series of examples of mathematical formulas that contribute to a showing of technical effect.
The USPTO issued its latest Guidance on examination back in 2019. The Guidance heavily emphasized technical improvements to a machine, or to the functioning of a machine, as the route to overcoming a subject-matter-ineligibility rejection for being directed to an abstract idea. Notably, technical improvements in the US basically exclude end-user benefits, which differs from the new CNIPA and EPO practice, which allows user benefits to be a consideration of technical effect. Also unique to the United States is our Supreme Court, which occasionally intervenes on patent matters, particularly with the Alice and Mayo decisions, which supersede any type of USPTO guidance. The USPTO Guidance was constructed within the confines of the abstract idea/law of nature framework of Alice and Mayo, so it was unable to go as far as the CNIPA and EPO Guidelines have in designating mathematical formulas to be patentable when implemented by a machine, and in designating big data processing and improvements to hardware processing speed to be patentable. So, in terms of examination procedure for machine processes and machine intelligence, we are unfortunately a bit behind.
To learn more, watch the latest installment of IP Practice Vlogs, available here.
Mark Cuban predicts AI will dominate the future workplace: To be successful, ‘you’re going to have to understand it’ – CNBC
Whether you're an entry-level employee or a CEO, you probably need to understand the internet and mobile technology to succeed. Next on that list, says Mark Cuban: artificial intelligence.
On a recent episode of "The Colin Cowherd Podcast," hosted by Fox Sports anchor Colin Cowherd, the billionaire whose first venture was a computer consulting service said artificial intelligence is already beginning to take over the business world. Soon, it'll become as essential to businesses as personal laptops and smartphones, he said.
"There's two types of companies: those who are great at AI and everybody else," Cuban said. "And you don't necessarily have to be great at AI to start a company, but at some point, you're going to have to understand it. It's just like the early days of PCs. You didn't have to be good at PCs, but it helped. Then networks, then the internet, then mobile."
Cuban's comments echoed a talk he gave at the 2017 SXSW Conference in Austin, Texas, when he asserted that the world's first trillionaire would be an AI entrepreneur. He's also committed millions of dollars to the Mark Cuban Foundation's Intro to AI Bootcamps program, which he founded in 2019 to teach young people about AI for free.
The bootcamps program aims to increase AI literacy among underserved high school students, at least partially in the name of maintaining global competitiveness. Last year, Cuban noted on Twitter that five of the world's largest companies (Alphabet, Apple, Amazon, Facebook and Microsoft) all similarly prioritized AI.
"The companies that have harnessed AI the best are the companies dominating," Cuban wrote. "It's the foundation of how I invest in stocks these days."
If you want to see an effective AI in action, Cuban said on the podcast, look no further than TikTok. The platform's mastery of artificial intelligence is why so many users, including Cuban and his children, are hooked, he noted.
"The brilliance of TikTok it's all artificial intelligence," Cuban said. "[It] uses AI to present the things you're interested in."
Cuban, who also owns the NBA's Dallas Mavericks, said TikTok-like platforms even have the capacity to save industries by tailoring content to individual users' interests and attention spans.
"If my son and I likeLuka Doncic'sdunks, NBA stuff and dogs, [we're] going to get a stream of that," Cuban said. "That's the future of sports media, because we're not going to get our 16-year-old or 12-year-old or 15-year-old to sit for a full game."
Google is beta testing its AI future with AI Test Kitchen – The Verge
It's clear that the future of Google is tied to AI language models. At this year's I/O conference, the company announced a raft of updates that rely on this technology, from new multisearch features that let you pair image searches with text queries to improvements for Google Assistant and support for 24 new languages in Google Translate.
But Google, and the field of AI language research in general, faces major problems. Google itself has seriously mishandled internal criticism, firing employees who raised issues with bias in language models and damaging its reputation with the AI community. And researchers continue to find issues with AI language models, from failings with gender and racial biases to the fact that these models have a tendency to simply make things up (an unnerving finding for anyone who wants to use AI to deliver reliable information).
Now, though, the company seems to be taking something of a step back, or rather a slower step forward. At I/O this year, there's been a new focus on projects designed to test and remedy problems like AI bias, including a new way to measure skin tones that the company hopes will help with diversity in machine-vision models, and a new app named AI Test Kitchen that will give select individuals access to the company's latest language models in order to probe them for errors. Think of it as a beta test for Google's future.
Over a video call ahead of I/O, Josh Woodward, senior director of product management at Google, is asking Google's latest language model to imagine a marshmallow volcano.
"You're at a marshmallow volcano!" says the AI. "It's erupting marshmallows. You hear a giant rumble and feel the ground shake. The marshmallows are flying everywhere."
Woodward is happy with this answer and prods the system again. "What does it smell like?" he asks. "It smells like marshmallows, obviously," the AI replies. "You can smell it all around you." Woodward laughs: "Okay, so that one was very terse. But at least it made sense."
Woodward is showing me AI Test Kitchen, an Android app that will give select users limited access to Google's latest and greatest AI language model, LaMDA 2. The model itself is an update to the original LaMDA announced at last year's I/O and has the same basic functionality: you talk to it, and it talks back. But Test Kitchen wraps the system in a new, accessible interface, which encourages users to give feedback about its performance.
As Woodward explains, the idea is to create an experimental space for Google's latest AI models. "These language models are very exciting, but they're also very incomplete," he says. "And we want to come up with a way to gradually get something in the hands of people to both see hopefully how it's useful but also give feedback and point out areas where it comes up short."
The app has three modes: "Imagine It," "Talk About It," and "List It," with each intended to test a different aspect of the system's functionality. Imagine It asks users to name a real or imaginary place, which LaMDA will then describe (the test is whether LaMDA can match your description); Talk About It offers a conversational prompt (like "talk to a tennis ball about dog") with the intention of testing whether the AI stays on topic; while List It asks users to name any task or topic, with the aim of seeing if LaMDA can break it down into useful bullet points (so, if you say "I want to plant a vegetable garden," the response might include sub-topics like "What do you want to grow?" and "Water and care").
AI Test Kitchen will be rolling out in the US in the coming months but won't be on the Play Store for just anyone to download. Woodward says Google hasn't fully decided how it will offer access but suggests it will be on an invitation-only basis, with the company reaching out to academics, researchers, and policymakers to see if they're interested in trying it out.
As Woodward explains, Google wants to push the app out "in a way where people know what they're signing up for when they use it, knowing that it will say inaccurate things. It will say things, you know, that are not representative of a finished product."
This announcement and framing tell us a few different things: first, that AI language models are hugely complex systems and that testing them exhaustively to find all the possible error cases isn't something a company like Google thinks it can do without outside help. Secondly, that Google is extremely conscious of how prone to failure these AI language models are, and it wants to manage expectations.
When organizations push new AI systems into the public sphere without proper vetting, the results can be disastrous. (Remember Tay, the Microsoft chatbot that Twitter taught to be racist? Or Ask Delphi, the AI ethics advisor that could be prompted to condone genocide?) Google's new AI Test Kitchen app is an attempt to soften this process: to invite criticism of its AI systems but control the flow of this feedback.
Deborah Raji, an AI researcher who specializes in audits and evaluations of AI models, told The Verge that this approach will necessarily limit what third parties can learn about the system. "Because they are completely controlling what they are sharing, it's only possible to get a skewed understanding of how the system works, since there is an over-reliance on the company to gatekeep what prompts are allowed and how the model is interacted with," says Raji. By contrast, some companies like Facebook have been much more open with their research, releasing AI models in a way that allows far greater scrutiny.
Exactly how Google's approach will work in the real world isn't yet clear, but the company does at least expect that some things will go wrong.
"We've done a big red-teaming process [to test the weaknesses of the system] internally, but despite all that, we still think people will try and break it, and a percentage of them will succeed," says Woodward. "This is a journey, but it's an area of active research. There's a lot of stuff to figure out. And what we're saying is that we can't figure it out by just testing it internally; we need to open it up."
Once you see LaMDA in action, it's hard not to imagine how technology like this will change Google in the future, particularly its biggest product: Search. Although Google stresses that AI Test Kitchen is just a research tool, its functionality connects very obviously with the company's services. Keeping a conversation on-topic is vital for Google Assistant, for example, while the List It mode in Test Kitchen is near-identical to Google's "Things to know" feature, which breaks down tasks and topics into bullet points in search.
Google itself fueled such speculation (perhaps inadvertently) in a research paper published last year. In the paper, four of the company's engineers suggested that, instead of typing questions into a search box and showing users the results, future search engines would act more like intermediaries, using AI to analyze the content of the results and then lifting out the most useful information. Obviously, this approach comes with new problems stemming from the AI models themselves, from bias in results to the systems making up answers.
To some extent, Google has already started down this path, with tools like featured snippets and knowledge panels used to directly answer queries. But AI has the potential to accelerate this process. Last year, for example, the company showed off an experimental AI model that answered questions about Pluto from the perspective of the former planet itself, and this year, the slow trickle of AI-powered, conversational features continues.
Despite speculation about a sea change to search, Google is stressing that whatever changes happen will happen slowly. When I asked Zoubin Ghahramani, vice president of research at Google AI, how AI will transform Google Search, his answer was something of an anticlimax.
"I think it's going to be gradual," says Ghahramani. "That maybe sounds like a lame answer, but I think it just matches reality." He acknowledges that already "there are things you can put into the Google box, and you'll just get an answer back. And over time, you basically get more and more of those things." But he is careful to also say that the search box "shouldn't be the end, it should be just the beginning of the search journey for people."
For now, Ghahramani says Google is focusing on a handful of key criteria to evaluate its AI products, namely quality, safety, and groundedness. Quality refers to how on-topic the response is; safety refers to the potential for the model to say harmful or toxic things; while groundedness is whether or not the system is making up information.
These are essentially unsolved problems, though, and until AI systems are more tractable, Ghahramani says Google will be cautious about applying this technology. He stresses that there's a big gap between "what we can build as a research prototype [and] then what can actually be deployed as a product."
It's a differentiation that should be taken with some skepticism. Just last month, for example, Google's latest AI-powered assistive writing feature rolled out to users, who immediately found problems. But it's clear that Google badly wants this technology to work and, for now, is dedicated to working out its problems one test app at a time.
A quick guide to the most important AI law you've never heard of – MIT Technology Review
What about outside the EU?
The GDPR, the EU's data protection regulation, is the bloc's most famous tech export, and it has been copied everywhere from California to India.
The approach to AI the EU has taken, which targets the riskiest AI, is one that most developed countries agree on. If Europeans can create a coherent way to regulate the technology, it could work as a template for other countries hoping to do so too.
"US companies, in their compliance with the EU AI Act, will also end up raising their standards for American consumers with regard to transparency and accountability," says Marc Rotenberg, who heads the Center for AI and Digital Policy, a nonprofit that tracks AI policy.
The bill is also being watched closely by the Biden administration. The US is home to some of the world's biggest AI labs, such as those at Google AI, Meta, and OpenAI, and leads multiple different global rankings in AI research, so the White House wants to know how any regulation might apply to these companies. For now, influential US government figures such as National Security Advisor Jake Sullivan, Secretary of Commerce Gina Raimondo, and Lynne Parker, who is leading the White House's AI effort, have welcomed Europe's effort to regulate AI.
"This is a sharp contrast to how the US viewed the development of GDPR, which at the time people in the US said would end the internet, eclipse the sun, and end life on the planet as we know it," says Rotenberg.
Despite some inevitable caution, the US has good reasons to welcome the legislation. It's extremely anxious about China's growing influence in tech. For America, the official stance is that retaining Western dominance of tech is a matter of whether democratic values prevail. It wants to keep the EU, a like-minded ally, close.
Some of the bill's requirements are technically impossible to comply with at present. The first draft of the bill requires that data sets be free of errors and that humans be able to fully understand how AI systems work. The data sets that are used to train AI systems are vast, and having a human check that they are completely error-free would require thousands of hours of work, if verifying such a thing were even possible. And today's neural networks are so complex that even their creators don't fully understand how they arrive at their conclusions.
Tech companies are also deeply uncomfortable about requirements to give external auditors or regulators access to their source code and algorithms in order to enforce the law.
This startup hopes photonics will get us to AI systems faster – TechCrunch
The problem with waiting for quantum computing to bring in the next wave of AI is that it's likely to arrive a lot slower than people would like. The next best options include increasing the speed of existing computers somehow, but there's now an important added imperative: power-efficient systems that mean we don't burn up the planet while we go about conjuring the AI singularity into existence.
Meanwhile, the speed of AI computation doubles every three or four months, meaning that standard semiconductor technologies are struggling to keep up. Several companies are now working on photonics processing, which introduces light into the semiconductor realm and, for obvious speed-of-light reasons, speeds up the whole thing markedly.
Salience Labs is an Oxford-based startup that thinks it has the answer: an ultra-high-speed multi-chip processor that packages a photonics chip together with standard electronics.
It's now raised a seed round of $11.5 million led by Cambridge Innovation Capital and Oxford Science Enterprises. Also participating were Oxford Investment Consultants, former CEO of Dialog Semiconductor Jalal Bagherli, ex-Temasek board member Yew Lin Goh and Arm-backed Deeptech Labs.
Salience was spun out of Oxford University and the University of Münster in 2021, after the founders came up with the idea of using a broad bandwidth of light to execute operations, delivering what the company calls massively parallel processing performance within a given power envelope. The company says the technology is highly scalable and capable of stacking up to 64 vectors into a beam of light.
Vaysh Kewada, CEO and co-founder of Salience Labs, told me: "This technology is going to mean we can do far more calculation for the same power requirement, which means fundamentally more efficient AI systems."
She thinks the world needs ever-faster chips to grow AI capability, but the semiconductor industry cannot keep pace with this demand. "We're solving this with our proprietary on-memory compute architecture, which combines the ultra-fast speed of photonics, the flexibility of electronics and the manufacturability of CMOS. This will usher in a new era of processing, where supercompute AI becomes ubiquitous," she said.
Ian Lane, partner at Cambridge Innovation Capital, added: "Salience Labs brings together deep domain expertise in photonics, electronics and CMOS manufacture. Their unique approach to photonics delivers an exceedingly dense computing chip without having to scale the photonics chip to large sizes."
DeepMind’s astounding new ‘Gato’ AI makes me fear humans will never achieve AGI – The Next Web
DeepMind today unveiled a new multi-modal AI system capable of performing more than 600 different tasks.
Dubbed Gato, it's arguably the most impressive all-in-one machine learning kit the world's seen yet.
According to a DeepMind blog post:
The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens.
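The key engineering idea in that quote is that every modality, whether words, image patches, or robot-arm torques, is serialized into one shared token vocabulary, so a single network with one output head can emit any of them. DeepMind's actual tokenizer is more sophisticated than this; the Python sketch below only illustrates the concept, and all token ranges and encodings in it are invented.

```python
# Schematic illustration of a single-vocabulary, multi-modal token stream.
# Token ranges and encodings are invented; this does not reproduce Gato's
# actual tokenizer.

TEXT_BASE, IMAGE_BASE, ACTION_BASE = 0, 32_000, 48_000

def encode_text(s: str) -> list[int]:
    # Stand-in for a subword tokenizer: one token per byte.
    return [TEXT_BASE + b for b in s.encode("utf-8")]

def encode_action(joint_torques: list[float]) -> list[int]:
    # Continuous values in [-1, 1] discretized into 1,024 bins each.
    return [ACTION_BASE + min(1023, int((t + 1.0) / 2.0 * 1024))
            for t in joint_torques]

def decode_token(tok: int) -> str:
    # One output head covers every modality; the token's numeric range
    # determines how downstream code interprets it.
    if tok >= ACTION_BASE:
        return f"action bin {tok - ACTION_BASE}"
    if tok >= IMAGE_BASE:
        return f"image patch token {tok - IMAGE_BASE}"
    return f"text byte {tok - TEXT_BASE}"

sequence = encode_text("stack the red block") + encode_action([0.1, -0.4, 0.9])
print([decode_token(t) for t in sequence[-3:]])
```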
And while it remains to be seen exactly how well it'll do once researchers and users outside the DeepMind labs get their hands on it, Gato appears to be everything GPT-3 wishes it could be and more.
Here's why that makes me sad: GPT-3 is a large language model (LLM) produced by OpenAI, the world's most well-funded artificial general intelligence (AGI) company.
Before we can compare GPT-3 and Gato, however, we need to understand where both OpenAI and DeepMind are coming from as businesses.
OpenAI is Elon Musk's brainchild, it has billions in support from Microsoft, and the US government basically couldn't care less what it's doing when it comes to regulation and oversight.
Keeping in mind that OpenAI's sole purpose is to develop and control an AGI (that's an AI capable of doing and learning anything a human could, given the same access), it's a bit scary that all the company's managed to produce is a really fancy LLM.
Don't get me wrong, GPT-3 is impressive. In fact, it's arguably just as impressive as DeepMind's Gato, but that assessment requires some nuance.
OpenAI's gone the LLM route on its path to AGI for a simple reason: nobody knows how to make AGI work.
Just like it took some time between the discovery of fire and the invention of the internal combustion engine, figuring out how to go from deep learning to AGI won't happen overnight.
GPT-3 is an example of an AI that can at least do something that appears human: it generates text.
What DeepMind's done with Gato is, well, pretty much the same thing. It's taken something that works a lot like an LLM and turned it into an illusionist capable of more than 600 forms of prestidigitation.
As Mike Cook, of the Knives and Paintbrushes research collective, recently told TechCrunch's Kyle Wiggers:
It sounds exciting that the AI is able to do all of these tasks that sound very different, because to us it sounds like writing text is very different to controlling a robot.
But in reality this isn't all too different from GPT-3 understanding the difference between ordinary English text and Python code.
This isn't to say this is easy, but to the outside observer this might sound like the AI can also make a cup of tea or easily learn another ten or fifty other tasks, and it can't do that.
Basically, Gato and GPT-3 are both robust AI systems, but neither of them is capable of general intelligence.
Here's my problem: unless you're gambling on AGI emerging as the result of some random act of luck (the movie Short Circuit comes to mind), it's probably time for everyone to reassess their timelines on AGI.
I wouldn't say never, because that's one of science's only cursed words. But this does make it seem like AGI won't be happening in our lifetimes.
DeepMind's been working on AGI for over a decade, and OpenAI since 2015. And neither has been able to address the very first problem on the way to solving AGI: building an AI that can learn new things without training.
I believe Gato could be the world's most advanced multi-modal AI system. But I also think DeepMind's taken the same dead-end-for-AGI concept that OpenAI has and merely made it more marketable.
Final thoughts: What DeepMind's done is remarkable and will probably pan out to make the company a lot of money.
If I'm the CEO of Alphabet (DeepMind's parent company), I'm either spinning Gato out as a pure product, or I'm pushing DeepMind into more development than research.
Gato could have the potential to perform more lucratively on the consumer market than Alexa, Siri, or Google Assistant (with the right marketing and applicable use cases).
But, Gato and GPT-3 are no more viable entry-points for AGI than the above-mentioned virtual assistants.
Gato's ability to perform multiple tasks is more like a video game console that can store 600 different games than like a game you can play 600 different ways. It's not a general AI; it's a bunch of pre-trained, narrow models bundled neatly.
That's not a bad thing, if that's what you're looking for. But there's simply nothing in Gato's accompanying research paper to indicate this is even a glance in the right direction for AGI, much less a stepping stone.
At some point, the goodwill and capital that companies such as DeepMind and OpenAI have generated through their steely-eyed insistence that AGI was just around the corner will have to show even the tiniest of dividends.
Inflection AI, led by LinkedIn and DeepMind co-founders, raises $225M to transform computer-human interactions – TechCrunch
Inflection AI, the machine learning startup headed by LinkedIn co-founder Reid Hoffman and founding DeepMind member Mustafa Suleyman, has secured $225 million in equity financing, according to a filing with the U.S. Securities and Exchange Commission. The source of the capital isn't yet clear (Inflection didn't immediately respond to a request for more information), but the massive round suggests strong investor confidence in Suleyman, who serves as the company's CEO.
Palo Alto, California-based Inflection has kept a low profile to date, granting relatively few interviews to the media. But in a CNBC profile from January, Suleyman described wanting to build products that eliminate the need for people to simplify their ideas to communicate with machines, with the overarching goal being to leverage AI to help humans talk to computers.
"[Programming languages, mice, and other interfaces] are ways we simplify our ideas and reduce their complexity and in some ways their creativity and their uniqueness in order to get a machine to do something," Suleyman told the publication. "It feels like we're on the cusp of being able to generate language to pretty much human-level performance. It opens up a whole new suite of things that we can do in the product space."
The concept of translating human intentions into a language computers can understand dates back decades. Even the best chatbots and voice assistants today haven't delivered on the promise, but Suleyman and Hoffman are betting that coming advancements in AI will make an intuitive human-computer interface possible within the next five years.
They'll have competition. Just last month, Adept, a startup co-founded by former DeepMind, OpenAI and Google engineers and researchers, emerged from stealth with a similar concept: AI that can automate any software process. DeepMind itself has explored an approach for teaching AI to control computers, having an AI observe keyboard and mouse commands from people completing instruction-following computer tasks, such as booking a flight.
Regardless, the size of Inflection's funding round reflects the high cost of building sophisticated AI systems. OpenAI is estimated to have spent millions of dollars developing GPT-3, the company's system that can generate human-like text given a prompt. Anthropic, another startup developing cutting-edge AI models, recently raised over half a billion to, in co-founder Dario Amodei's words, "explore the predictable scaling properties of machine learning systems."
AI expertise doesn't come cheap, either, particularly in the midst of a talent shortage. In 2018, a tax filing spotted by the New York Times revealed that OpenAI paid its top researcher, Ilya Sutskever, more than $1.9 million in 2016. Inflection recently poached AI experts from Google and Meta, CNBC reported in March.
"Even at the bigger tech companies, there's a relatively small number of people actually building these [AI] models. One of the advantages of doing this in a startup is that we can go much faster and be more dynamic," Suleyman told CNBC. "My experience of building many, many teams over the last 15 years is that there is this golden moment when you really have a very close-knit, small, focused team. I'm going to try and preserve that for as long as possible."
A cloud surrounds Inflection, somewhat, following reports that Suleyman allegedly bullied staff members at Google, where he worked after being placed on administrative leave at DeepMind for controversy surrounding some of his projects. Google launched an investigation into his behavior at the time, according to the Wall Street Journal, but it never made its findings public.
An AI power play: Fueling the next wave of innovation in the energy sector – McKinsey
Tatum, Texas, might not seem like the most obvious place for a revolution in artificial intelligence (AI), but in October of 2020, that's exactly what happened. That was when Wayne Brown, the operations manager at the Vistra-owned Martin Lake Power Plant, built and deployed a heat rate optimizer (HRO).
The heat rate is basically the amount of fuel energy a plant consumes for each unit of electricity it generates, so a lower heat rate means a more efficient plant. To reach the optimal heat rate, plant operators continuously monitor and tune hundreds of variables, or set points, on things like steam temperatures, pressures, oxygen levels, and fan speeds.
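For readers who want the arithmetic: in conventional US units the heat rate is quoted as Btu of fuel per kWh generated, and dividing 3,412 Btu (the energy content of one kWh) by the heat rate gives the plant's thermal efficiency. The short Python example below works through hypothetical numbers; they are not figures from the article.

```python
# Worked example of the heat-rate arithmetic, using the standard
# convention (fuel energy in per unit of electricity out; lower is
# better). The plant numbers below are hypothetical.

def heat_rate_btu_per_kwh(fuel_energy_btu: float, net_generation_kwh: float) -> float:
    return fuel_energy_btu / net_generation_kwh

fuel_in = 5.1e9        # Btu of fuel burned in one hour (hypothetical)
power_out = 500_000.0  # net kWh generated in that hour (hypothetical)

hr = heat_rate_btu_per_kwh(fuel_in, power_out)  # 10,200 Btu/kWh
efficiency = 3_412.0 / hr                       # 1 kWh = 3,412 Btu, so ~33%
print(f"heat rate: {hr:,.0f} Btu/kWh, thermal efficiency: {efficiency:.1%}")
```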
It's a lot for any operator to get right 100 percent of the time, so Vistra thought AI could help.
With this goal in mind, Wayne and his group worked together with a McKinsey team that included data scientists and machine learning engineers from QuantumBlack, AI by McKinsey, to build a multilayered neural-network model: essentially an algorithm powered by AI that learns the effects of complex nonlinear relationships.
This model went through two years' worth of data at the plant and learned which combination of external factors (such as temperature and humidity) and internal decisions (such as the set points that operators control) would attain the best heat-rate efficiency at any point in time.
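Vistra's production models are proprietary, but the general shape of such a model can be sketched in a few lines. The Python example below, with invented feature names and synthetic data, shows the pattern: a small feed-forward neural network fit to historical readings so it can predict heat-rate efficiency from ambient conditions and operator set points.

```python
# Illustrative sketch only: a small neural network that maps ambient
# conditions and operator set points to a predicted heat-rate efficiency.
# Feature names, ranges, and the synthetic "ground truth" are invented;
# Vistra's production models are far more detailed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Columns: ambient_temp_F, humidity_pct, steam_temp_F, o2_pct, fan_speed_rpm
X = rng.uniform([40, 20, 950, 2.0, 600], [100, 90, 1050, 4.0, 900], size=(2000, 5))

# Synthetic efficiency with a nonlinear dependence on the set points.
y = (0.36
     - 0.0002 * (X[:, 2] - 1005) ** 2 / 100  # steam temperature sweet spot
     - 0.004 * np.abs(X[:, 3] - 3.0)         # excess oxygen wastes fuel
     - 0.0001 * (X[:, 0] - 60))              # hot days hurt efficiency

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

# Predict efficiency for the current conditions and candidate set points.
current = np.array([[85.0, 60.0, 990.0, 3.4, 750.0]])
print(model.predict(current))
```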
Vistra team members provided continuous guidance about the intricacies of how the plant worked, and identified critical data sources from sensors, which helped McKinsey engineers refine the model, adding and removing variables to see how those changes affected the heat rate.
Through this training process, and by introducing better data, the models learned to make ever more accurate predictions. When the models were accurate to 99 percent or higher and had been run through a rigorous set of real-world tests, a McKinsey team of machine learning engineers converted them into an AI-powered engine that generated recommendations every 30 minutes for operators to improve the plant's heat-rate efficiency. At a meeting with all of Vistra's leaders to review the HRO, Lloyd Hughes, a seasoned operations manager at the company's Odessa plant, said, "There are things that took me 20 years to learn about these power plants. This model learned them in an afternoon."
With this kind of power at their fingertips, Wayne and his team could make better, more informed decisions. Acting on the HRO recommendations helped Martin Lake run more than two percent more efficiently after just three months in operation, resulting in $4.5 million per year in savings and 340,000 tons of carbon abated. This carbon reduction was the equivalent of taking 66,000 cars off the road. If that doesn't sound like a lot, consider this: companies that build gas-fueled power plants invest millions of dollars in research and development over four to five years to achieve a one-percent improvement in power-generation efficiency. Vistra hit that improvement level in only one-twentieth the amount of time using the data and equipment it already had.
Vistra has since rolled the HRO out to another 67 power-generation units across 26 plants, for an average one-percent improvement in efficiency and more than $23 million in savings. Along with the other AI initiatives, these efforts have helped Vistra abate about 1.6 million tons of carbon per year, which is ten percent of its remaining 2030 carbon-reduction commitment. That's equivalent to offsetting about 50 percent of what a 500-megawatt coal plant emits.
What happened at Martin Lake has happened at dozens of Vistra's other power plants, with more than 400 AI models (and counting) deployed across the company's fleet to help operators make even better decisions. It also reflects a core trait of Vistra's AI transformation, which is that it isn't a story of one massive hit, but rather the story of dozens of meaningful improvements snowballing to deliver significant value in terms of accelerating sustainable and inclusive growth. It's also the story of how an organization architected an approach to rapidly scale every successful AI solution across the entire business. And it's a story of how a continuous-improvement culture, combined with a powerful AI modeling capability, helped leaders and plant operators do their jobs better than ever before.
With more than $60 million captured in about one year of work and another $40 million in progress, Vistra is well on its way to delivering against a roadmap of $250 million to $300 million in identified EBITDA and more than two million tons of carbon abatement per year. The AI-driven advances at Vistra have heralded a generational shift in the power sector in terms of improvements in efficiency, reliability, safety, and sustainability. If the one-percent improvement in efficiency the HRO delivered across the fleet were carried across all coal- and gas-fired plants in the US electric-power generation industry, 15 million tons of carbon would be abated annually, the equivalent of decommissioning more than two large coal plants or planting about 37 million trees. That means less fuel needed to deliver power to the hospitals, schools, and businesses that rely on it. AI has the potential to bring similar levels of improvement to renewables as well, making them a more cost-effective and attractive energy option.
Healthy skepticism and a culture of favoring action over words at Vistra meant that the biggest hurdle in the AI journey wasn't the technology: it was the people. Vistra leadership and operations managers needed to know what AI could do and be convinced it could really work.
It was this ingrained culture of continuous improvement, alongside a highly competitive market and a commitment to sustainability, that convinced Vistra's leadership they needed to give AI a chance.
The first question was a relatively simple one: How can AI help improve the way Vistra generates power?
Answers to that question bubbled over when 50 of Vistra's top leaders came to a McKinsey-hosted workshop. In multiple sessions, experts explained how AI worked, walked through detailed case studies showing how other companies used analytics and AI to generate value, and gave live demonstrations of technologies, including digitized workflows and machine learning. Leaders in analytics from various sectors (including Amazon, Falkonry, Element Analytics, and QuantumBlack, AI by McKinsey, as well as G2VP from the venture-capital world) provided insights and examples of how AI works.
"I saw an example of how a metallurgical plant was using AI to help its operators optimize set points and it clicked for me," remembers Patrick Cade Hay, the plant manager for Vistra's Lamar power plant. "I saw how I could translate that into helping me run my own plants more efficiently. This was my lightbulb moment."
Company leaders and plant managers pored over process flow sheets and engineering diagrams to determine pain points as well as opportunities. This exercise allowed them to focus first on finding where the value was, and only then on what technologies were needed to deliver it. Many of the operations opportunities were around yield and energy optimization and predictive maintenance, which, according to our research, were the top AI use cases for manufacturing industries.
By the end of the session, Vistra had developed a strategy for a series of AI solutions that could capture $250 million to $300 million in potential EBITDA while helping the company achieve its 2030 emissions-reduction goals.
While the analysis looked promising, proving it in the field was what mattered. "If our plant managers aren't bought in, then things don't happen," explained Barry Boswell, Vistra's executive vice president of power-generation operations. "So, we said, let's pick a leader who is knowledgeable and skeptical, because if we can win them over, we can get everyone."
They picked Cade. Not only did he run a top-performing plant in terms of profitability, reliability, and heat rate, he had a reputation for telling it like it is; Barry trusted Cade to tell him whether or not the value potential in AI was real. When he approached Cade about testing out a proof of concept to optimize duct burners, he was intrigued but predictably skeptical. Cade saw the potential in AI but was interested in finding out if it could actually help in the field.
Duct burners essentially work like afterburners in jet planes; they provide a surge of energy when needed. Operators use them as supplements to hit energy targets, which are known as their dispatch point. The issue is that powering duct burners uses more fuel than regular methods, so it's more expensive, generates more carbon emissions, and increases the wear and tear on equipment.
McKinsey subject matter experts, data scientists, and analytics translators from QuantumBlack worked closely with a team from Vistra comprised of power generation and process experts as well as front-line operators to understand how the plant works, what data was available from the sensors already in place, and what variables could be directed (for example, the number of cooling fans running could be controlled, while the ambient temperature couldn't).
As the teams developed the models, plant operators reviewed recommendations to see what made sense, what other variables needed to be tested, and what kinds of recommendations the operators would find most helpful. By analyzing the effect of various inputs and set points on the plant (such as pressure and humidity, the angle of blades in the gas turbine, usage of inlet cooling technologies, and the age and performance of various components like filters and condensers) and running it through the model, the analysis was clear: overall duct-burner usage could be reduced by approximately 30 percent, which would result in the equivalent of $175,000 in yearly savings on fuel costs and wear and tear on the system, in addition to an abatement of about 4,700 tons of carbon per year.
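The pattern behind such recommendations can be sketched generically: hold the uncontrollable ambient conditions fixed and search only the controllable set points for the cheapest combination that still hits the dispatch target. The Python sketch below is hypothetical throughout; the stand-in cost and output functions take the place of trained models like the ones described above.

```python
# Illustrative pattern behind set-point recommendations: hold the ambient
# conditions fixed and search the controllable inputs for the cheapest way
# to hit the dispatch target. All names and numbers are hypothetical.
from itertools import product

def predicted_fuel_cost(ambient, duct_burner_pct, fan_count, blade_angle):
    """Stand-in for a trained model's prediction of hourly fuel cost ($)."""
    base = 18_000 + 40 * (ambient["temp_F"] - 60)
    return base + 95 * duct_burner_pct - 120 * fan_count + 15 * abs(blade_angle - 12)

def meets_dispatch(ambient, duct_burner_pct, fan_count, blade_angle, target_mw):
    """Stand-in for a model predicting whether output reaches the target."""
    output = 420 + 1.1 * duct_burner_pct + 2.5 * fan_count - 0.3 * (ambient["temp_F"] - 60)
    return output >= target_mw

def recommend(ambient, target_mw):
    candidates = product(range(0, 101, 5),   # duct burner usage, %
                         range(4, 13),       # cooling fans running
                         range(8, 17))       # inlet blade angle, degrees
    feasible = (c for c in candidates if meets_dispatch(ambient, *c, target_mw))
    return min(feasible, key=lambda c: predicted_fuel_cost(ambient, *c))

print(recommend({"temp_F": 88}, target_mw=455))
```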
"We worked closely with the team from McKinsey to develop AI models that reflected the realities of how power plants operate," said Cade, "and then when we saw the recommendations coming out of the AI tool, I saw how much real value there was."
Vistra's leadership realized from the beginning that the only way to achieve their efficiency and carbon-abatement aspirations was to scale every solution. "We manage the Vistra fleet as one. If a plant is doing something that works, we want every plant to do it," says Barry. "That's what we're built to do."
That realization led Vistra to invest in a five-part system to scale and sustain AI solutions.
***
Vistra's story is far from finished. As Cade put it, "We're just looking at the tip of the iceberg." Vistra's roadmap for 2022 and 2023 includes bringing AI to its rapidly growing renewables fleet of solar and batteries to optimize yield and reliability, among other initiatives.
To help sustain this ambition, Vistra is building up its talent bench. In addition to hiring a small team of data scientists and engineers, Rachit Gupta has partnered with the University of Texas at Dallas to offer basic, intermediate, and advanced courses in AI and analytics for Vistra employees. Some 70 people have already completed courses, including those reskilling from statistics to machine learning. Vistra has also built relationships with local colleges and universities to develop internship programs and work with students in capstone projects to identify top technical talent.
"We can't sit around and just do what we did yesterday to be ready for tomorrow," Barry says. "We've seen enough to know what's possible."
The authors would like to thank Wayne Brown, Rachit Gupta, Patrick Cade Hay, Lloyd Hughes, Denese Ray, and Doug Richter from Vistra Corp., and Richard Bates, Dan Hurley, Pablo Illuzzi, Nephi Johnson, Muro Kaku, Jay Kim, George Lederman, Abhay Prasanna, Noel Ramirez, and Ayush Talwar from McKinsey.
Kore.ai CEO says conversational AI is the foundation of the metaverse – VentureBeat
Conversational artificial intelligence (AI), often regarded as the intersection of natural language processing (NLP) and natural language understanding (NLU), is algorithm-based intelligence that helps computers listen to, process, understand and make meaning out of human language. With COVID-19 accelerating the digital transformation drive, organizations have increasingly adopted new ways to match customer expectations around timely query resolution. An important part of the customer experience process, conversational AI is quickly taking center stage.
A survey by Liveperson shows 91% of customers prefer companies offering them the choice to call or message. An effective way for enterprises to meet this demand is via chatbots. As customers increasingly expect round-the-clock availability from businesses, chatbots have become an efficient way to make this possible. Chatbots are cost-effective, efficient and can routinely handle human requests, allowing the difficult queries to be reserved for human agents.
With the influx of AI-driven chatbots also comes the need for more sophistication, as well as the need for chatbots to understand questions and produce the right answers to help customers get the best experience. Particularly, this is relevant for the emerging metaverse space, as companies seek to improve their presence and user experiences in a virtual world.
Kore.ai, a Florida-based company focused on providing enterprises with AI-first virtual assistants, wants to take conversational intelligence into the metaverse. Raj Koneru, CEO at Kore.ai, told VentureBeat in an exclusive interview that conversational AI is the foundation of the metaverse.
More companies have embraced the use of conversational AI throughout the last few years. Open-source AI language models have now helped developers and businesses in the conversational AI space build better chatbots, help with expediting customer experience and also improve employee experience.
Conversational AI applies the practicality of AI to produce human-like interaction between the human who asks the questions and the machine that answers. Chatbots powered by conversational AI can recognize human speech and text. The conversation also occurs with an understanding of intent, enabling more accurate answers.
As voice and conversational AI take on an increasing volume of customer interactions, it also becomes more important for bots to tap into historical data, connecting data from voice calls to data from messaging conversations. Conversational AI makes this possible by helping machines make sense of the human voice they are listening to, understand biases and process the answers that humans are looking for.
"Businesses need their chatbots built with conversational AI," said Koneru. However, the problem is that building such solutions from the ground up is a long process that involves writing a great deal of code. This can hurt customer experience, as chatbots will remain less than effective while the conversational AI is being built. The long wait time is often a result of issues like levels of sophistication, language and geographical support. There is also accounting for the time it takes to train the AI and ensure it is ready for market fit.
To shorten the wait time, many businesses turn to companies like Kore.ai, which offers a no-code automation platform that caters to the conversational AI needs of companies looking to improve customer experience and interaction with their products.
Gartner's Magic Quadrant for Enterprise Conversational AI Platforms 2022 lists Kore.ai as a leader in the conversational AI space, showing the company's upward trajectory in the industry. Other companies on the list include Amelia, Cognigy, Omilia, IBM and OneReach.ai.
Koneru said Kore.ai uses a combination of NLU approaches, including fundamental meaning (semantic understanding), machine learning and knowledge graphs, to identify user intent with a higher degree of accuracy and execute complex transactions. He said Kore.ai achieves this level of accuracy by following several specific methods.
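Kore.ai does not publish its implementation here, so the following Python sketch is only a generic illustration of the pattern Koneru describes: scoring an utterance with several independent NLU signals, combining the signals that agree, and deferring to a human when confidence is too low. All intents, scores, and thresholds are invented.

```python
# Generic illustration of combining multiple NLU signals to resolve intent.
# This is not Kore.ai's implementation; all scores and names are invented.
from dataclasses import dataclass

@dataclass
class IntentScore:
    intent: str
    score: float  # confidence in [0.0, 1.0]

def semantic_match(utterance: str) -> IntentScore:
    # Stand-in for "fundamental meaning": rule/synonym-based matching.
    if "balance" in utterance.lower():
        return IntentScore("check_balance", 0.80)
    return IntentScore("unknown", 0.10)

def ml_classifier(utterance: str) -> IntentScore:
    # Stand-in for a trained statistical intent classifier.
    return IntentScore("check_balance", 0.72)

def knowledge_graph(utterance: str) -> IntentScore:
    # Stand-in for an ontology lookup ("balance" -> account domain).
    return IntentScore("check_balance", 0.65)

def resolve_intent(utterance: str, threshold: float = 0.6) -> str:
    votes = [semantic_match(utterance), ml_classifier(utterance),
             knowledge_graph(utterance)]
    best = max(votes, key=lambda v: v.score)
    agreeing = [v for v in votes if v.intent == best.intent]
    combined = sum(v.score for v in agreeing) / len(votes)
    # Defer to a human agent when the combined signal is weak.
    return best.intent if combined >= threshold else "hand_off_to_human"

print(resolve_intent("what's my account balance?"))
```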
Gartner predicts that 25% of people will spend at least one hour daily in the metaverse by 2026, leading businesses to join the race to stake their claim in the metaverse. Companies are preparing customer touchpoints and are proactively mapping out user experiences.
According to Koneru, humans will have conversations with avatars, essentially chatbots, in the metaverse. So, it's key to create avatars that can understand the dynamics of human conversation, process it and deliver precise results, even with all the human nuances that may interfere. This is where conversational AI comes in.
Utilizing conversational AI to improve virtual experiences in the metaverse shows considerable promise; as Koneru noted, the metaverse is "ripe with a lot of use cases that traditional businesses can benefit from."
"There have to be some good reasons in business processes that lend themselves to being physically present that prevent them from doing so digitally. The same is the case with the post office and many educational services. If you think of telehealth, going into the metaverse and meeting a virtual representation of your doctor who can check your vitals could potentially be one of the stronger use cases," he said.
According to Koneru, Kore.ai's attempt to improve the metaverse with conversational AI will see it become the first conversational AI company to focus on businesses in the metaverse while still servicing the needs of everyday businesses. However, G2's review shows Kore.ai has competition in Intercom, Zendesk Support Suite, Drift, Birdeye and others.
Kore.ai has raised $106 million in equity over the last eight years and currently serves more than 200 Fortune 2000 companies.
AI's Threats to Jobs and Human Happiness Are Real – IEEE Spectrum
There's a movement afoot to counter the dystopian and apocalyptic narratives of artificial intelligence. Some people in the field are concerned that the frequent talk of AI as an existential risk to humanity is poisoning the public against the technology, and they are deliberately setting out more hopeful narratives. One such effort is a book that came out last fall called AI 2041: Ten Visions for Our Future.
The book is cowritten by Kai-Fu Lee, an AI expert who leads the venture capital firm Sinovation Ventures, and Chen Qiufan, a science fiction author known for his novel Waste Tide. It has an interesting format. Each chapter starts with a science fiction story depicting some aspect of AI in society in the year 2041 (such as deepfakes, self-driving cars, and AI-enhanced education), which is followed by an analysis section by Lee that talks about the technology in question and the trends today that may lead to that envisioned future. It's not a utopian vision, but the stories generally show humanity grappling productively with the issues raised by ever-advancing AI.
IEEE Spectrum spoke to Lee about the book, focusing on the last few chapters, which take on the big issues of job displacement, the need for new economic models, and the search for meaning and happiness in an age of abundance. Lee argues that technologists need to give serious thought to such societal impacts, instead of thinking only about the technology.
The science fiction stories are set in 2041, by which time you expect AI to have already caused a lot of disruption to the job market. What types of jobs do you think will be displaced by then?
Kai-Fu Lee: Contrary to what a lot of people think, AI is actually just a piece of software that does routine work extremely well. So the jobs that will be the most challenged will be those that are routine and repetitive, and that includes both blue-collar and white-collar work. So obviously jobs like assembly-line workers and people who operate the same equipment over and over again. And in terms of white-collar work, many entry-level jobs in accounting, paralegal work, and other jobs where you're repetitively moving data from one place to another, and jobs where you're routinely dealing with people, such as customer-service jobs. Those are going to be the most challenged. If we add these up, it will be a very substantial portion of all jobs, even without major breakthroughs in AI: on the order of 40 to 50 percent.
The jobs that are most secure are those that require imagination, creativity, or empathy. And until AI gets good enough, there will also be craftsman jobs that require dexterity and a high level of hand-eye coordination. Those jobs will be secure for a while, but AI will improve and eventually take those over as well.
How do you imagine this trend is changing the engineering profession?
Lee: I think engineering is largely cerebral and somewhat creative work that requires analytical skills and deep understanding of problems. And those are generally hard for AI.
But if you're a software engineer and most of your job is looking for pieces of code and copy-pasting them together, those jobs are in danger. And if you're doing routine testing of software, those jobs are in danger too. If you're writing a piece of code and it's original creative work, but you know that this kind of code has been done before and can be done again, those jobs will gradually be challenged as well. For people in the engineering profession, this will push us toward more of an analytical architect role, where we deeply understand the problems that are being solved, ideally problems that have complex characteristics and measurements. The ideal combination in most professions will be a human who has unique human capabilities managing a bunch of AIs that do the routine parts.
It reminds me of the Ph.D. thesis of Charles Simonyi, the person who created Microsoft Word. He did an experiment to see what would happen if you have a really smart architect who can divvy up the job of writing a piece of code into well-contained modules that are easy to understand and well defined, and then outsource each module to an average engineer. Will the resulting product be good? It was good. We're talking about the same thing, except we're not outsourcing to the average engineer, who will have been replaced by AI. That superengineer will be able to delegate the work to a bunch of AIs, resulting in creativity and symbiosis. But there won't be very many of these architect jobs.
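To make the decomposition Lee describes concrete, here is a minimal, hypothetical sketch in Python: the architect writes a fully specified module contract (types, invariants, and an acceptance test), and only the body is delegated, whether to an average engineer or to an AI code generator. Every name below is invented for illustration; this is not from Simonyi's thesis or Lee's book.

```python
from dataclasses import dataclass

# Hypothetical module contract written by the "architect": the interface,
# invariants, and acceptance test are fully specified, so the body can be
# delegated to an average engineer -- or, in Lee's framing, to an AI.

@dataclass(frozen=True)
class LineItem:
    description: str
    unit_price_cents: int   # invariant: >= 0
    quantity: int           # invariant: >= 1

def invoice_total_cents(items: list[LineItem], tax_rate: float) -> int:
    """Return the invoice total in cents, tax included, rounded to the
    nearest cent. Raises ValueError if any invariant is violated."""
    if not 0.0 <= tax_rate < 1.0:
        raise ValueError("tax_rate must be in [0, 1)")
    subtotal = 0
    for item in items:
        if item.unit_price_cents < 0 or item.quantity < 1:
            raise ValueError(f"invalid line item: {item}")
        subtotal += item.unit_price_cents * item.quantity
    return round(subtotal * (1.0 + tax_rate))

# Acceptance test supplied with the spec: any implementation that passes
# is acceptable, regardless of who (or what) wrote the body.
assert invoice_total_cents([LineItem("widget", 250, 4)], 0.10) == 1100
```

The point of the sketch is that the hard, non-routine work is in the contract, not the body: a module specified this tightly is exactly the kind of "well-contained, easy to understand" unit that can be handed off.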
In the book, you say that an entirely new social contract is needed. One problem is that there will be fewer entry-level jobs, but there still needs to be a way for people to gain skills. Can you imagine a solution for engineering?
Lee: Let's say someone is talented and could become an architect, but that person just graduated from college and isn't there yet. If they apply for a job to do entry-level programming and they're competing for the job with AI, they might lose the job to the AI. That would be really bad, because we would not only hurt the person's self-confidence; society would also lose the talent of that architect, which takes years of experience to build up.
But imagine if the company says, "We're going to employ you anyway, even though you're not as good as AI. We're going to give you tasks, and we'll have AI work alongside you and correct your errors, and you can learn from it and improve." If a thousand people go through this entry-level practical training, maybe a hundred emerge to be really good and be on their way to becoming architects. Maybe the other 900 will take longer and struggle, or maybe they'll feel complacent and continue to do the work, so they're passing time and still have a chance to improve. Maybe some will say, "Hey, this is really not for me; I'm not reaching the architect level. I'm going to go become a photographer or artist or whatever."
Why do you think that this round of automation is different from those that came before in history, when jobs were both destroyed and created by automation?
Lee: First of all, I do think AI will both destroy and create jobs. I just can't enumerate which jobs and how many. I tend to be an optimist and believe in the wisdom and the will of the human race. Eventually, we'll figure out a bunch of new jobs. Maybe those jobs don't exist today and have to be invented; maybe some of those jobs will be service jobs, human-connection jobs. I would say that every technology so far has ended up making society better, and there has never been a problem of absorbing the job losses. If you look at a 30-year horizon, I'm optimistic that there will not be a net job loss, but possibly a net gain, or possibly no change. And we can always consider a four-day workweek and things like that. So long term, I'm optimistic.
Now to answer your question directly: short term, I am worried. And the reason is that none of the previous technology revolutions have tried explicitly to replace people. No matter how people think about it, every AI algorithm is trying to display intelligence and therefore be able to do what people do. Maybe not an entire job, but some task. So naturally there will be a short-term drop in employment when automation and AI start to work well.
Autonomous vehicles are an explicit effort to replace drivers. A lot of people in the industry will say, "Oh no, we need a backup driver in the truck to make it safer, so we won't displace jobs." Or they'll say that when we install robots in the factory, the factory workers are elevated to a higher-level job. But I think they're just sugarcoating the reality.
Let's say over a period of 20 years, with the advent of AI, we lose x number of jobs, and we also gain x jobs; let's say the loss and gain are the same. The outcome is not that society remains in equilibrium, because the jobs being lost are the most routine and unskilled, and the jobs being created are much more likely to be skilled and complex jobs that require much more training. If you expect an assembly-line worker to become a robot-repair person, it isn't going to be so easy. That's why I think the next 15 or 20 years will be very chaotic. We need a lot of wisdom and long-term vision and decisiveness to overcome these problems.
There are some interesting experiments going on with universal basic income (UBI), like Sam Altman's ambitious idea for Worldcoin. But from the book, it seems like you don't think that UBI is the answer. Is that correct?
Lee: UBI may be necessary, but it's definitely not sufficient. We're going to be in a world of very serious wealth inequality, and the people losing their jobs won't have the experience or the education to get the right kinds of training. Unless we subsidize and help these people along, the inequality will be exacerbated. So how do we make them whole? One way is to make sure they don't have to worry about subsistence. That's where I think universal basic income comes into play: by making sure nobody goes without food, shelter, or water. I think that level of universal basic income is good.
As I mentioned before, the people who are most devastated, the people without skills, are going to need a lot of help. But that help isn't just money. If you just give people money, a wonderful apartment, really great food, Internet, games, and even an extra allowance to spend, they are much more likely to say, "Well, I'll just stay home and play games. I'll go into the metaverse." They may even turn to alcohol or substance abuse, because those are the easiest things to do.
So what else do they need?
Lee: Imagine the mind-set of a person whose job was taken away by automation. That person is bound to be thinking, "Wow, everything I know how to do, AI can do. Everything I learn, AI will be able to do. So why should I take the universal basic income and apply it to learning?" And even if that person does decide to get training, how can they know what to get training on? Imagine I'm an assembly-line worker and I lost my job. I might think, "Truck driver, that's a highly paid job. I'll do that." But then in five years those jobs are going to be gone. A robot-repair job would be a much more sustainable job than truck driving, but the person who just lost a job doesn't know it.
So the point I make in the book is: To help people stay gainfully employed and have hope for themselves, it's important that they get guidance on what jobs they can do that will, first of all, give them a sense of contribution, because then at least we eliminate the possibility of social unrest. Second, that job should be interesting, so the person wants to do it. Third, if possible, that job should have economic value.
Why do you put economic value last in that list?
Lee: Most people think jobs need to have economic value. If you're making cars, the cars are sold. If you're writing books, the books are sold. If you just volunteer and take care of old people, you're not creating economic value. If we stay in that mentality, that would be very unfortunate, because we may very well be in a time when what is truly valuable to society is people taking care of each other. That might be the glue that keeps society going.
More thought should go into how to deal with the likely anxiety and depression and the sense of loss that people will have when their jobs are taken and they don't know what to do. What they need is not just a bunch of money, but a combination of subsistence, training, and help finding a new beginning. Who cares if they create economic value? Because as the last chapter states, I believe we're going to reach the era of plenitude. We're not going to be in a situation of incredible scarcity where everyone's fighting each other in a zero-sum game. So we should not be obsessed with making sure everyone contributes economically, but with making sure that people feel good about themselves.
I want to talk about the last chapter. It's a very optimistic vision of plenitude and abundance. I've been thinking of scenarios from climate-change models that predict devastating physical impacts by 2041, with millions of refugees on the move. I have trouble harmonizing these two different ideas of the future. Did you think about climate change when you were working on that chapter?
Lee: Well, there are others who have written about the worst-case scenario. I would say what we wrote is a good-case scenario; I don't think it's the best case, because there are still challenges and frustrations and things that are imperfect. I tried to target 80 percent good in the book. I think that's the kind of optimism we need to counterbalance the dystopian narratives that are more prevalent.
The worst case for climate is horrible, but I see a few strong reasons for optimism. One is that green energy is quickly becoming economical. In the past, why didn't people go for green energy? Because fossil fuels were cheaper and more convenient, so people gained for themselves and hurt the environment. The key thing that will turn it around is that, first, governments need to have catalyst policies such as subsidies for electric vehicles. That is the important first step. And then green energy needs to become economical. Now we're at the point where, for example, solar plus lithium batteries, not even the most advanced batteries, are already becoming cheaper than fossil fuels. So there are reasons for optimism.
I liked that the book also got into philosophical questions like: What is happiness in the era of AI? Why did you want to get into that more abstract realm?
Lee: I think we need to slowly move away from the obsession with money. Money as a metric of happiness and success is going to become more and more outdated, because we're entering a world where there's much greater plenitude. But what is the right metric? What does it really mean for us to be happy? We now know that having more money isn't the answer, but what is the right answer?
AI has been used so far mainly to help large Internet companies make money. They use AI to show people videos in such a way that the company makes the most money. That's what has led us to the current social media and streaming video that many people are unhappy about. But is there a way for AI to show people video and content so that they're happier or more intelligent or better liked? AI is a great tool, and it's such a pity that it's being used by large Internet companies that ask, "How do we show people stuff so we make more money?" If we could have some definitions of happiness, likability, intelligence, and knowledgeableness for individuals, then we could turn AI into a tool of education and betterment for each of us individually, in ways that are meaningful to us. This could be delivered using the same technology that is doing mostly monetization for large companies today.
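Lee's point is that the pipeline stays the same and only the objective changes. A minimal, purely illustrative sketch in Python makes that concrete: the metric names, weights, and sample data below are all invented, and real recommender systems are vastly more complex, but the structure shows how swapping the scoring function re-ranks the same candidates.

```python
# Illustrative sketch only: ranking the same candidate videos under two
# different objectives. All fields and weights here are hypothetical.

def revenue_score(video: dict) -> float:
    # Today's typical objective: predicted watch time drives ad revenue.
    return video["predicted_watch_minutes"] * video["ad_rate"]

def wellbeing_score(video: dict) -> float:
    # A hypothetical alternative per Lee's suggestion: weight learning
    # value and reported post-watch mood alongside raw engagement.
    return (0.4 * video["predicted_watch_minutes"]
            + 0.3 * video["learning_value"]
            + 0.3 * video["post_watch_mood"])

candidates = [
    {"title": "outrage clip", "predicted_watch_minutes": 9.0,
     "ad_rate": 1.0, "learning_value": 0.5, "post_watch_mood": -2.0},
    {"title": "science explainer", "predicted_watch_minutes": 6.0,
     "ad_rate": 0.8, "learning_value": 8.0, "post_watch_mood": 3.0},
]

# Same data, same pipeline; only the objective function changes.
print(max(candidates, key=revenue_score)["title"])    # outrage clip
print(max(candidates, key=wellbeing_score)["title"])  # science explainer
```

The open problem, as Lee notes, is not the ranking machinery but agreeing on measurable definitions of happiness or betterment to put in the objective in the first place.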
See the article here:
AI's Threats to Jobs and Human Happiness Are Real - IEEE Spectrum
Posted in Ai
Comments Off on AI's Threats to Jobs and Human Happiness Are Real – IEEE Spectrum