Category Archives: Artificial General Intelligence
Vitalik Buterin and Sandeep Nailwal Headline Decentralized AGI Summit @ ETHDenver Tackling Threats of Centralized AI – Grit Daily
Posted: February 24, 2024 at 12:01 pm
Denver, USA, February 23rd, 2024, Chainwire
The Decentralized AGI Summit, organized by Sentient and Symbolic Capital, will bring together top thought leaders in Decentralized AI like Vitalik Buterin, Sandeep Nailwal, Illia Polosukhin, and Sreeram Kannan.
As the development of artificial general intelligence (AGI) systems accelerates, there are growing concerns that centralized AI controlled by a small number of actors poses a major threat to humanity. The inaugural Decentralized AGI Summit will bring together top experts in AI and blockchain, including Vitalik Buterin, Sandeep Nailwal, Illia Polosukhin, and Sreeram Kannan, to explore how decentralized, multi-stakeholder governance models enabled by blockchain technology can help make the development of AGI safer, more transparent, and aligned with the greater good.
"The rapid acceleration of centralized AI and its integration into everyday life has led humanity to a crossroads between two future worlds," says Sandeep Nailwal. "On the one hand, we have the choice of a Closed World. This world is controlled by few, closed-source models run by massive mega corporations. On the other hand, we have the choice of an Open World. In this world, models are default open-source, inference is verifiable, and value flows back to the stakeholders. The Open World is the world we want to live in, but it is only possible by leveraging blockchain to make AI more transparent and just."
The Decentralized AGI Summit will take place on Monday, February 26th from 3-9pm MST. It is free and open to the public to attend at: https://decentralizedagi.org/.
"We are excited to help facilitate this important discussion around the development of safe and ethical AGI systems that leverage decentralization and multi-stakeholder governance," said Kenzi Wang, Co-Founder and General Partner at Symbolic Capital. "Bringing luminaries across both the AI and web3 domains together will help push forward thinking on this critical technological frontier."
Featured keynote speakers include:
Vitalik Buterin, Co-Founder of Ethereum Foundation
Sandeep Nailwal, Co-Founder of Polygon Labs
Illia Polosukhin, Co-Founder of Near Foundation
Sreeram Kannan, Founder of Eigenlayer
Topics will span technical AI safety research, governance models for AGI systems, ethical considerations, and emerging use cases at the intersection of AI and blockchain. The summit aims to foster collaboration across academic institutions, industry leaders and the decentralized AI community.
For more details and to register, visit https://decentralizedagi.org/.
About Sentient
Sentient is building a decentralized AGI platform. Sentient's team comprises leading web3 founders, builders, researchers, and academics who are committed to creating trustless and open artificial intelligence models.
Learn more about Sentient here: https://sentient.foundation/
About Symbolic Capital
Symbolic Capital is a people-driven investment firm supporting the best web3 projects globally. Our team has founded and led some of the most important blockchain companies in the world, and we leverage this background to provide unparalleled support to the companies in our portfolio.
Learn more about Symbolic Capital here: https://www.symbolic.capital/
Contact: Sam Lehman
What is AI? A-to-Z Glossary of Essential AI Terms in 2024 – Tech.co
Posted: February 22, 2024 at 7:59 pm
A for Artificial General Intelligence (AGI)
AGI is a theoretical type of AI that exhibits human-like intelligence and is generally considered to be as smart as or smarter than humans. While the term's origins can be traced back to 1997, the concept of AGI has entered the mainstream in recent years as AI developers continue to push the frontier of the technology forward.
For instance, in November 2023 OpenAI revealed it was working on a new AI superintelligence model codenamed Project Q*, which could bring the company closer to realizing AGI. It should be emphasized, however, that AGI is still a hypothetical concept, and many experts are confident the type of AI will not be developed anytime soon, if ever.
Big data refers to large, high-volume datasets that traditional data processing methods struggle to manage. Big data and AI go hand in hand. The gigantic pool of raw information is vital for AI decision-making, while sophisticated AI algorithms can analyze patterns in datasets and identify valuable insights. Working together, they help users make more insightful discoveries, much faster than through traditional methods.
AI bias occurs when an algorithm produces results that are systematically prejudiced against certain types of people. Unfortunately, AI systems have consistently been shown to reflect biases within society by upholding harmful beliefs and encouraging negative stereotypes relating to race, gender, and national identity.
These biases were highlighted in a now-deleted article by Buzzfeed, which displayed AI-generated Barbies from all over the world. The images reinforced a variety of racial stereotypes, featuring oversexualized Caribbean dolls, white-washed Barbies from the global south, and Asian dolls with inaccurate cultural outfits.
You've probably heard of this one, but it's still important to mention as no AI glossary can be considered complete without a nod to the generative AI chatbot that changed the game when it launched back in November 2022.
In short, ChatGPT is the product that has shifted the AI debate from the server room into the living room. It has done for artificial intelligence what the iPhone did for the mobile phone, bringing the technology into the public eye by virtue of its widely accessible model.
As we recently revealed in our Impact of Technology in the Workplace report, ChatGPT is easily the most widely used AI tool by businesses and may even be the key to unlocking the 4-day workweek.
Its influence may fade over time, but the world of AI will always be viewed through the prism of before and after ChatGPT's birth.
Short for 'computing power', compute refers to the computational resources required to train AI models to perform tasks like data processing and making predictions. Typically, the more computing power used to train an LLM, the better it can perform.
Computing power requires a lot of energy, however, which is sparking concern among environmental activists. For instance, research has revealed that it takes 1 GWh of energy to power ChatGPT's responses each day, which is enough to power 30,000 US households.
Diffusion models represent a new tier of machine learning, capable of generating superior AI-generated images. These models work by adding noise to a dataset before learning to reverse this process.
By understanding the concept of abstraction behind an image and creating content in a new way, diffusion models produce images that are sharper and more refined than those made by traditional AI models, and they are currently being deployed in a range of AI image tools like DALL-E and Stable Diffusion.
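To make the "add noise, then learn to reverse it" idea concrete, here is a minimal sketch of the forward noising step on a toy one-dimensional signal. The noise schedule and array sizes are illustrative assumptions, and the learned denoising network that reverses the process is omitted.

```python
import numpy as np

# Toy forward diffusion: progressively corrupt a tiny 1-D "image" with Gaussian
# noise according to a schedule. A full diffusion model would then train a
# neural network to reverse these steps; that learned denoiser is omitted here.
rng = np.random.default_rng(0)
x0 = np.linspace(-1.0, 1.0, 8)          # stand-in for an image
betas = np.linspace(1e-4, 0.2, 50)      # illustrative noise schedule
alphas_bar = np.cumprod(1.0 - betas)    # cumulative signal-retention factor

def q_sample(x0, t):
    """Draw a noised sample x_t given the clean signal x_0 (closed form)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

for t in (0, 24, 49):
    print(f"t={t:2d}", np.round(q_sample(x0, t), 2))
```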
Emergent behavior takes place when an AI model produces an unanticipated response outside of its creators' intentions. Much of AI is so complex that its decision-making processes still can't be fully understood by humans, even its creators. With AI models as prominent as GPT-4 recently exhibiting emergent capabilities, AI researchers are making an increased effort to understand the how and the why behind AI models.
Facial recognition technology relies on AI, machine learning algorithms, and computer vision techniques to process stills and videos of human faces. Since AI can identify intricate facial details more efficiently than manual methods, most facial recognition systems use an artificial neural network called a convolutional neural network (CNN) to enhance their accuracy.
Generative AI is a catch-all term that describes any type of AI that produces original content like text, images, and audio clips. Generative AI uses information from LLMs and other AI models to create outputs, and it powers the responses made by chatbots like ChatGPT, Gemini, and Grok.
Chatbots don't always produce correct or sensible responses. Oftentimes, AI models generate incorrect information but present it as fact. This is called AI hallucination. Hallucinations take place when the AI model makes predictions based on the dataset it was trained on, instead of retrieving actual facts.
Most AI hallucinations are minor and may even be overlooked by the average user. However, sometimes hallucinations can have dangerous consequences, as false responses produced by ChatGPT have previously been exploited by scammers to trick developers into downloading malicious code.
Bearing similarities to AGI, the intelligence explosion is a hypothetical scenario where AI development becomes uncontrollable and poses a threat to humanity as a result. Also referred to as the singularity, the term represents an existential threat felt by many about the rapid and largely unchecked advancement of the technology.
Jailbreaking is a form of hacking with the goal of bypassing the ethical safeguards of AI models. Specifically, by entering certain prompts into chatbots, users can get them to operate free of any restrictions.
Interestingly, a recent study by Brown University found that using languages like Hmong, Zulu, and Scottish Gaelic was an effective way to jailbreak ChatGPT. Learn how to jailbreak ChatGPT here.
As AI continues to automate manual processes previously performed by humans, the technology is sparking widespread job insecurity among workers. While most workers shouldn't have anything to worry about, our Tech.co Impact of Technology on the Workplace report recently found out that supply chain optimization, legal research, and financial analysis roles are the most likely to be replaced by AI in 2024.
LLMs are a specialist type of AI model that harnesses natural language processing (NLP) to understand and produce natural, humanlike responses. In simple terms, LLMs make tools like ChatGPT sound less like a bot and more like you and me.
Unlike generative AI, LLMs have been designed specifically to handle language-related tasks. Popular examples of LLMs you may have heard of include GPT-4, PaLM 2, and Gemini.
Machine learning is a field of artificial intelligence that allows systems to learn and improve from experience, in a similar way to humans. Specifically, it focuses on the use of data and algorithms in AI, and aims to improve the way AI models can autonomously learn and make decisions in real-world environments.
While the term is often used interchangeably with AI, machine learning is part of the wider AI umbrella, and requires minimal human intervention.
A neural network (NN) is a machine learning model designed to mimic the structure and function of the human brain. An artificial neural network comprises multiple layers and consists of units called artificial neurons, which loosely imitate the neurons found in the brain.
Also referred to as deep neural networks, NNs have a variety of useful applications and can be used to improve image recognition, predictive modeling, and natural language processing.
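As a concrete illustration, here is a minimal two-layer forward pass written with plain NumPy; the layer sizes and random weights are illustrative assumptions, and a real network would learn its weights from data.

```python
import numpy as np

# A minimal two-layer neural network forward pass: each artificial neuron
# computes a weighted sum of its inputs followed by a nonlinearity.
rng = np.random.default_rng(1)
x = rng.standard_normal(4)                          # 4 input features
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)   # hidden layer: 8 neurons
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)   # output layer: 3 classes

hidden = np.maximum(0.0, W1 @ x + b1)               # ReLU activation
logits = W2 @ hidden + b2
probs = np.exp(logits) / np.exp(logits).sum()       # softmax over classes
print(np.round(probs, 3))
```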
Open-source AI refers to AI technology with freely available source code. The ultimate aim of open-source AI is to create a culture of collaboration and transparency within the artificial intelligence community that gives companies and developers greater freedom to innovate with the technology.
Many currently available open-source AI products are variations of existing applications, and common product categories include chatbots, machine translation tools, and large language models.
If you're somehow still unfamiliar with tools like Gemini and ChatGPT, a prompt is an instruction or query you enter into chatbots to gain a targeted response. They can exist as stand-alone commands or can be the starting point for longer conversations with AI models.
AI prompts can take any form the user desires, but we found that longer form, detailed input generates the best responses. Using emotive language is another way to generate high-quality answers, according to a recent study by Microsoft.
Find out how to make your work life easier with these 40 ChatGPT prompts designed to save you time at the workplace.
In AI, a parameter is a value that shapes the behavior of a machine-learning model. In this context, each parameter acts as a variable, determining how the model converts an input into an output. Parameter count is one of the most common ways to measure AI capability, and generally speaking, the more parameters an AI model has, the better it will be able to understand complex data patterns and produce accurate responses.
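To show what "counting parameters" means in practice, here is a short sketch using the toy layer shapes from the neural network example above; the shapes are illustrative assumptions.

```python
# Counting parameters: every weight and every bias is one learnable value.
# Layer shapes are illustrative (outputs, inputs) pairs; real LLMs repeat the
# same bookkeeping across billions of values.
layers = [(8, 4), (3, 8)]
n_params = sum(out * inp + out for out, inp in layers)
print(n_params)  # 8*4 + 8 + 3*8 + 3 = 67
```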
Quantum AI is the use of quantum computing for the computation of machine learning algorithms. Compared to classical computing, which processes information through 1s and 0s, quantum computing uses a unit called qubits, which represents both 1s and 0s at once. Theoretically, this process could speed up computing speeds dramatically.
In the case of quantum AI, the use of qubits could potentially help produce much more powerful AI models, although many experts believe we are still a long way from achieving this.
Red teaming is a structured testing system that aims to find flaws and vulnerabilities in AI models. The cybersecurity term essentially refers to an ethical hacking practice where actors try and simulate an actual cyber attack, to identify potential weak spots in a system and to improve its defenses in the long run.
In the case of AI red teaming, no actual hacking attempt may take place, and red teamers may instead try to test the security of the system by prompting it in a certain way that bypasses any guardrails developers have placed on it, in a similar way to jailbreaking.
There are two basic approaches to AI learning: supervised learning and unsupervised learning. Also known as supervised machine learning, supervised learning is a training method in which algorithms learn from input data that has been labeled with a specific output. Performance is then measured by how accurately the algorithm performs on unlabeled data, and the process strives to improve the overall accuracy of AI systems as a whole.
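A minimal supervised-learning example, assuming scikit-learn is installed: the model is trained on labeled data, then its accuracy is checked on data it has never seen.

```python
# Supervised learning sketch: fit a classifier on labeled examples, then
# evaluate it on a held-out test split.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                    # features and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on unseen data:", model.score(X_test, y_test))
```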
In simple terms, training data is an extremely vast input dataset used to train a machine learning model. Training data is used to teach prediction models, via algorithms, how to extract features that are relevant to specific user goals, and it's the initial set of data that can then be complemented by subsequent data called testing sets.
It is fundamental to the way AI and machine learning work, and without training data, AI models wouldn't be able to learn, extract useful information, and make predictions, or put simply, exist.
Contrary to supervised learning, unsupervised learning is a type of machine learning where models are given unlabeled data and encouraged to discover patterns and insights without any specific framework.
Unsupervised learning models are used for three main tasks: clustering, a data mining technique for grouping unlabeled data; association, another learning method that uses different rules to find relationships between variables; and dimensionality reduction, a technique deployed when the number of dimensions in a dataset is too high.
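A minimal unsupervised-learning sketch, again assuming scikit-learn: clustering groups unlabeled points, and dimensionality reduction compresses the feature space.

```python
# Unsupervised learning sketch: k-means clustering plus PCA, with the labels
# deliberately ignored so the algorithms see only unlabeled data.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_2d = PCA(n_components=2).fit_transform(X)   # 4 dimensions down to 2
print(clusters[:10], X_2d.shape)
```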
X-risk stands for existential risk. More specifically, the term relates to the existential risk posed by the rapid development of AI. People warning about a potential X-risk event believe that the progress being made in the field of AI may result in human extinction or global catastrophe if left unchecked.
X-risk isn't a fringe belief, though. In fact, in 2023 several tech leaders, including Demis Hassabis, CEO of DeepMind; Ilya Sutskever, Co-Founder and Chief Scientist at OpenAI; and Bill Gates, signed a letter warning AI developers about the existential threat posed by AI.
Zero-shot learning is a deep learning problem setup where an AI model is tasked with completing a task without receiving any training examples. In machine learning, zero-shot learning is used to build models for classes that have not yet been labeled for training.
The two stages of zero-shot learning are the training stage, where knowledge is captured, and the inference stage, where that knowledge is used to classify examples into a new set of classes.
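A toy illustration of the zero-shot idea: an example is assigned to a class the model never saw during training by comparing it to class descriptions in a shared embedding space. The tiny hand-made embeddings below are assumptions for illustration only.

```python
import numpy as np

# Zero-shot classification sketch: score an unseen example against class
# description vectors using cosine similarity.
class_names = ["zebra", "truck"]
class_vecs = np.array([[0.9, 0.1, 0.8],   # striped / animal / four-legged
                       [0.1, 0.9, 0.2]])  # metal / vehicle / wheeled
example = np.array([0.85, 0.05, 0.9])     # an unseen, unlabeled example

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(example, c) for c in class_vecs]
print("predicted class:", class_names[int(np.argmax(scores))])
```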
With Sora, OpenAI highlights the mystery and clarity of its mission | The AI Beat – VentureBeat
Posted: at 7:59 pm
Last Thursday, OpenAI released a demo of its new text-to-video model, Sora, which can generate videos up to a minute long while maintaining visual quality and adherence to the user's prompt.
Perhaps you've seen one, two or 20 examples of the video clips OpenAI provided, from the litter of golden retriever puppies popping their heads out of the snow to the couple walking through the bustling Tokyo street. Maybe your reaction was wonder and awe, or anger and disgust, or worry and concern, depending on your view of generative AI overall.
Personally, my reaction was a mix of amazement, uncertainty and good old-fashioned curiosity. Ultimately I, and many others, want to know: what is the Sora release really about?
Here's my take: with Sora, OpenAI offers what I think is a perfect example of the company's pervasive air of mystery around its constant releases, particularly just three months after CEO Sam Altman's firing and quick comeback. That enigmatic aura feeds the hype around each of its announcements.
Of course, OpenAI is not open. It offers closed, proprietary models, which makes its offerings mysterious by design. But think about it: millions of us are now trying to parse every word around the Sora release, from Altman and many others. We wonder or opine on how the black-box model really works, what data it was trained on, why it was suddenly released now, what it will really be used for, and the consequences of its future development on the industry, the global workforce, society at large, and the environment. All for a demo that will not be released as a product anytime soon; it's AI hype on steroids.
At the same time, Sora also exemplifies the very un-mysterious, transparent clarity OpenAI has around its mission to develop artificial general intelligence (AGI) and ensure that it benefits all of humanity.
After all, OpenAI said it is sharing Sora's research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon. The title of the Sora technical report, "Video generation models as world simulators," shows that this is not a company looking to simply release a text-to-video model for creatives to work with. Instead, this is clearly AI researchers doing what AI researchers do: pushing against the edges of the frontier. In OpenAI's case, that push is toward AGI, even if there is no agreed-upon definition of what that means.
That strange duality, the mysterious alchemy of OpenAI's current efforts and the unwavering clarity of its long-term mission, often gets overlooked and under-analyzed, I believe, as more of the general public becomes aware of its technology and more businesses sign on to use its products.
The OpenAI researchers working on Sora are certainly concerned about the present impact and are being careful about deployment for creative use. For example, Aditya Ramesh, an OpenAI scientist who co-created DALL-E and is on the Sora team, told MIT Technology Review that OpenAI is worried about misuses of fake but photorealistic video. "We're being careful about deployment here and making sure we have all our bases covered before we put this in the hands of the general public," he said.
But Ramesh also considers Sora a stepping stone. "We're excited about making this step toward AI that can reason about the world like we do," he posted on X.
In January 2023, I spoke to Ramesh for a look back at the evolution of DALL-E on the second anniversary of the original DALL-E paper.
I dug up my transcript of that conversation and it turns out that Ramesh was already talking about video. When I asked him what interested him most about working on DALL-E, he said that the aspects of intelligence that are bespoke to vision and what can be done in vision were what he found the most interesting.
"Especially with video," he added. "You can imagine how a model that would be capable of generating a video could plan across long time horizons, think about cause and effect, and then reason about things that have happened in the past."
Ramesh also talked, I felt, from the heart about the OpenAI duality. On the one hand, he felt good about exposing more people to what DALL-E could do. "I hope that over time, more and more people get to learn about and explore what can be done with AI, and that sort of opens up this platform where people who want to do things with our technology can easily access it through our website and find ways to use it to build things that they'd like to see."
On the other hand, he said that his main interest in DALL-E as a researcher was to push this as far as possible. That is, the team started the DALL-E research project "because we had success with GPT-2 and we knew that there was potential in applying the same technology to other modalities, and we felt like text-to-image generation was interesting because we wanted to see if we trained a model to generate images from text well enough, whether it could do the same kinds of things that humans can in regard to extrapolation and so on."
In the short term, we can look at Sora as a potential creative tool with lots of problems to be solved. But don't be fooled: to OpenAI, Sora is not really about video at all.
Whether you think Sora is a data-driven physics engine that is a simulation of many worlds, real or fantastical, like Nvidia's Jim Fan, or you think modeling the world for action by generating pixels is as wasteful and doomed to failure as the largely abandoned idea of analysis by synthesis, like Yann LeCun, I think it's clear that looking at Sora simply as a jaw-dropping, powerful video application that plays into all the anger and fear and excitement around today's generative AI misses the duality of OpenAI.
OpenAI is certainly running the current generative AI playbook, with its consumer products, enterprise sales, and developer community-building. But it's also using all of that as a stepping stone toward developing power over whatever it believes AGI is, could be, or should be defined as.
So for everyone out there who wonders what Sora is good for, make sure you keep that duality in mind: OpenAI may currently be playing the video game, but it has its eye on a much bigger prize.
Why, Despite All the Hype We Hear, AI Is Not One of Us – Walter Bradley Center for Natural and Artificial Intelligence
Posted: at 7:59 pm
Artificial Intelligence (AI) systems are inferencing systems. They make decisions based on information. That's not a particularly controversial point: inference is central to thinking. If AI performs the right types of inference, at the right time, on the right problem, we should view them as thinking machines.
The problem is, AI currently performs the wrong type of inference, on problems selected precisely because this type of inference works well. I've called this Big Data AI, because the problems AI currently solves can only be cracked if very large repositories of data are available to solve them. ChatGPT is no exception; in fact, it drives the point home. It's a continuation of previous innovations of Big Data AI taken to an extreme. The AI scientist's dream of general intelligence, often referred to as Artificial General Intelligence (AGI), remains as elusive as ever.
Computer scientists who were not specifically trained in mathematical or philosophical logic probably don't think in terms of inference. Still, it pervades everything we do. In a nutshell, inference in the scientific sense is: given what I know already, and what I see or observe around me, what is proper to conclude? The conclusion is known as the inference, and for any cognitive system it's ubiquitous.
For humans, inferring something is like a condition of being awake; we do it constantly, in conversation (what does she mean?), when walking down a street (do I turn here?), and indeed in having any thought where there's an implied question at all. If you try to pay attention to your thoughts for one day, or even one hour, you'll quickly discover you can't count the number of inferences your brain is making. Inference is cognitive intelligence. Cognitive intelligence is inference.
What difference have 21st-century innovations made?
In the last decade, the computer science community innovated rapidly, and dramatically. These innovations are genuine and importantmake no mistake. In 2012, a team at the University of Toronto led by neural network guru Geoffrey Hinton roundly defeated all competitors at a popular photo recognition competition called ImageNet. The task was to recognize images from a dataset curated from fifteen million high resolution images on Flickr and representing twenty-two thousand classes, or varieties of photos (caterpillars, trees, cars, terrier dogs, etc.).
The system, dubbed AlexNet after Hinton's graduate student Alex Krizhevsky, who largely developed it, used a souped-up version of an old technology: the artificial neural network (ANN), or just neural network. Neural networks were developed in rudimentary form in the 1950s, when AI had just begun. They had been gradually refined and improved over the decades, though they were generally thought to be of little value for much of AI's history.
Moore's Law gave them a boost. As many know, Moore's Law isn't a law, but an observation made by Intel co-founder and CEO Gordon Moore in 1965: the number of transistors on a microchip doubles roughly every two years (the other part is that the cost of computers is also halved during that time). Neural networks are computationally expensive on very large datasets, and the catch-22 for many years was that very large datasets are the only datasets they work well on.
But by the 2010s the roughly accurate Moore's Law had made deep neural networks, known at that time as convolutional neural networks (CNNs), computationally practical. CPUs were swapped for the more mathematically powerful GPUs, also used in computer game engines, and suddenly CNNs were not just an option, but the go-to technology for AI. Though all the competitors at ImageNet contests used some version of machine learning, a subfield of AI that is specifically inductive because it learns from prior examples or observations, the CNNs were found wholly superior once the hardware was in place to support the gargantuan computational requirements.
The second major innovation occurred just two years later, when a well-known limitation of neural networks in general was solved, or at least partially solved: the limitation of overfitting. Overfitting happens when the neural network fits its training data but doesn't adequately generalize to its unseen, or test, data. Overfitting is bad; it means the system isn't really learning the underlying rule or pattern in the data. It's like someone memorizing the answers to the test without really understanding the questions. The overfitting problem bedeviled early attempts at using neural networks for problems like image recognition (CNNs are also used for face recognition, machine translation between languages, autonomous navigation, and a host of other useful tasks).
In 2014, Geoff Hinton and his team developed a technique known as dropout, which helped solve the overfitting problem. While the public consumed the latest smartphones and argued, flirted, and chatted away on myriad social networks and technologies, real innovations on an old AI technology were taking place, all made possible by the powerful combination of talented scientists and engineers and increasingly powerful computing resources.
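For readers curious what dropout actually does, here is a minimal sketch of the idea (not Hinton's original implementation): randomly zeroing hidden units during training discourages the network from memorizing its training data.

```python
import numpy as np

# Inverted-dropout sketch: each hidden unit is zeroed with probability p during
# training, and the survivors are rescaled so expected activations are unchanged.
rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    if not training:
        return activations                       # no dropout at test time
    mask = rng.random(activations.shape) >= p    # keep each unit with prob 1-p
    return activations * mask / (1.0 - p)        # rescale to preserve expectation

h = rng.standard_normal(10)
print(np.round(dropout(h, p=0.5), 2))
```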
There was a catch, however.
Black Boxes and Blind Inferences
Actually, there were two catches. One, it takes quite an imaginative computer scientist to believe that the neural network knows what it's classifying or identifying. It's a bunch of math in the background, and relatively simple math at that: mostly matrix multiplication, a technique learned by any undergraduate math student. There are other mathematical operations in neural networks, but it's still not string theory. It's the computation of the relatively simple math equations that counts, along with the overall design of the system. Thus, neural networks were performing cognitive feats while not really knowing they were performing anything at all.
This brings us to the second problem, which ended up spawning an entire field itself, known as Explainable AI.
Next: Because AIs don't know why they make decisions, they can't explain them to programmers.
What is Artificial General Intelligence (AGI) and Why It’s Not Here Yet: A Reality Check for AI Enthusiasts – Unite.AI
Posted: at 7:59 pm
Artificial Intelligence (AI) is everywhere. From smart assistants to self-driving cars, AI systems are transforming our lives and businesses. But what if there was an AI that could do more than perform specific tasks? What if there was a type of AI that could learn and think like a human or even surpass human intelligence?
This is the vision of Artificial General Intelligence (AGI), a hypothetical form of AI that has the potential to accomplish any intellectual task that humans can. AGI is often contrasted with Artificial Narrow Intelligence (ANI), the current state of AI that can only excel at one or a few domains, such as playing chess or recognizing faces. AGI, on the other hand, would have the ability to understand and reason across multiple domains, such as language, logic, creativity, common sense, and emotion.
AGI is not a new concept. It has been the guiding vision of AI research since the earliest days and remains its most divisive idea. Some AI enthusiasts believe that AGI is inevitable and imminent and will lead to a new technological and social progress era. Others are more skeptical and cautious and warn of the ethical and existential risks of creating and controlling such a powerful and unpredictable entity.
But how close are we to achieving AGI, and does it even make sense to try? This is, in fact, an important question whose answer may provide a reality check for AI enthusiasts who are eager to witness the era of superhuman intelligence.
AGI stands apart from current AI in its capacity to perform any intellectual task that humans can, if not surpass them. This distinction rests on several key features.
While these features are vital for achieving human-like or superhuman intelligence, they remain hard to capture for current AI systems.
Current AI predominantly relies on machine learning, a branch of computer science that enables machines to learn from data and experiences. Machine learning operates through supervised, unsupervised, and reinforcement learning.
Supervised learning involves machines learning from labeled data to predict or classify new data. Unsupervised learning involves finding patterns in unlabeled data, while reinforcement learning centers around learning from actions and feedback, optimizing for rewards, or minimizing costs.
Despite achieving remarkable results in areas like computer vision and natural language processing, current AI systems are constrained by the quality and quantity of training data, predefined algorithms, and specific optimization objectives. They often struggle with adaptability, especially in novel situations, and with explaining their reasoning transparently.
In contrast, AGI is envisioned to be free from these limitations and would not rely on predefined data, algorithms, or objectives but instead on its own learning and thinking capabilities. Moreover, AGI could acquire and integrate knowledge from diverse sources and domains, applying it seamlessly to new and varied tasks. Furthermore, AGI would excel in reasoning, communication, understanding, and manipulating the world and itself.
Realizing AGI poses considerable challenges encompassing technical, conceptual, and ethical dimensions.
For example, defining and measuring intelligence, including components like memory, attention, creativity, and emotion, is a fundamental hurdle. Additionally, modeling and simulating the human brain's functions, such as perception, cognition, and emotion, present complex challenges.
Moreover, critical challenges include designing and implementing scalable, generalizable learning and reasoning algorithms and architectures. Ensuring the safety, reliability, and accountability of AGI systems in their interactions with humans and other agents and aligning the values and goals of AGI systems with those of society is also of utmost importance.
Various research directions and paradigms have been proposed and explored in the pursuit of AGI, each with strengths and limitations. Symbolic AI, a classical approach using logic and symbols for knowledge representation and manipulation, excels in abstract and structured problems like mathematics and chess but struggles to scale and to integrate sensory and motor data.
Likewise, Connectionist AI, a modern approach employing neural networks and deep learning to process large amounts of data, excels in complex and noisy domains like vision and language but struggles with interpretability and generalization.
Hybrid AI combines symbolic and connectionist AI to leverage their strengths and overcome their weaknesses, aiming for more robust and versatile systems. Similarly, Evolutionary AI uses evolutionary algorithms and genetic programming to evolve AI systems through natural selection, seeking novel and optimal solutions unconstrained by human design.
Lastly, Neuromorphic AI utilizes neuromorphic hardware and software to emulate biological neural systems, aiming for more efficient and realistic brain models and enabling natural interactions with humans and agents.
These are not the only approaches to AGI, but they are some of the most prominent and promising ones. Each approach has advantages and disadvantages, and none has yet achieved the generality and intelligence that AGI requires.
While AGI has not been achieved yet, some notable examples of AI systems exhibit certain aspects or features reminiscent of AGI, contributing to the vision of eventual AGI attainment. These examples represent strides toward AGI by showcasing specific capabilities:
AlphaZero, developed by DeepMind, is a reinforcement learning system that autonomously learns to play chess, shogi and Go without human knowledge or guidance. Demonstrating superhuman proficiency, AlphaZero also introduces innovative strategies that challenge conventional wisdom.
Similarly, OpenAI's GPT-3 generates coherent and diverse texts across various topics and tasks. Capable of answering questions, composing essays, and mimicking different writing styles, GPT-3 displays versatility, although within certain limits.
Likewise, NEAT, an evolutionary algorithm created by Kenneth Stanley and Risto Miikkulainen, evolves neural networks for tasks such as robot control, game playing, and image generation. NEAT's ability to evolve network structure and function produces novel and complex solutions not predefined by human programmers.
While these examples illustrate progress toward AGI, they also underscore existing limitations and gaps that necessitate further exploration and development in pursuing true AGI.
AGI poses scientific, technological, social, and ethical challenges with profound implications. Economically, it may create opportunities and disrupt existing markets, potentially increasing inequality. While improving education and health, AGI may introduce new challenges and risks.
Ethically, it could promote new norms, cooperation, and empathy and introduce conflicts, competition, and cruelty. AGI may question existing meanings and purposes, expand knowledge, and redefine human nature and destiny. Therefore, stakeholders must consider and address these implications and risks, including researchers, developers, policymakers, educators, and citizens.
AGI stands at the forefront of AI research, promising a level of intellect surpassing human capabilities. While the vision captivates enthusiasts, challenges persist in realizing this goal. Current AI, which excels only in specific domains, still falls short of AGI's expansive potential.
Numerous approaches, from symbolic and connectionist AI to neuromorphic models, strive for AGI realization. Notable examples like AlphaZero and GPT-3 showcase advancements, yet true AGI remains elusive. With economic, ethical, and existential implications, the journey to AGI demands collective attention and responsible exploration.
Future of Artificial Intelligence: Predictions and Impact on Society – Medriva
Posted: at 7:59 pm
As we stand at the cusp of a new era, Artificial Intelligence (AI) is not just a buzzword in the tech industry but a transformative force anticipated to reshape various aspects of society by 2034. From attaining Artificial General Intelligence (AGI) to the fusion of quantum computing and AI, and the application of AI to neural interface technology, the future of AI promises an exciting blend of advancements and challenges.
By 2034, AI is expected to achieve AGI, meaning it will be capable of learning to perform any job just by being instructed. This evolution represents a significant milestone as it signifies a shift from AIs current specialized applications to a more generalized approach. Furthermore, the fusion of quantum computing and AI, referred to as Quantum AI, is anticipated to usher in a new era of supercomputing and scientific discovery. This fusion will result in unprecedented computational power, enabling us to solve complex problems that are currently beyond our reach.
Another promising area of AI development lies in its application to neural interface technology. AI's potential to enhance cognitive capabilities could revolutionize sectors like healthcare, education, and even our daily lives. For instance, AI algorithms combined with computer vision have greatly improved medical imaging and diagnostics. The global computer vision in healthcare market is projected to surge to US $56.1 billion by 2034, driven by precision medicine and the demand for computer vision systems.
AI's integration into robotics is expected to transform our daily lives. From performing household chores to providing companionship and manual work, robotics and co-bots are poised to become an integral part of our society. In public governance and justice systems, AI raises questions about autonomy, ethics, and surveillance. As AI continues to permeate these sectors, addressing these ethical concerns will be critical.
The automotive industry is another sector where AI is set to make a significant impact. Artificial Intelligence, connectivity, and software-defined vehicles are expected to redefine the future of cars. The projected growth of connected and software-defined vehicles is estimated at a compound annual growth rate of 21.1% between 2024 and 2034, reaching a value of US $700 billion. This growth opens up new revenue streams, including AI assistants offering natural interactions with the vehicle's systems and in-car payment systems using biometric security.
AI's impact extends beyond technology and industry, potentially reshaping societal norms and structures. A significant area of discussion is the potential effect of AI on the concept of meritocracy. As AI continues to evolve, it might redefine merit and meritocracy in ways we can only begin to imagine. However, it also poses challenges in terms of potential disparities, biases, and issues of accountability and data hegemony.
As we look forward to the next decade, the future of AI presents both opportunities and challenges. It is an intricate dance of evolution and ethical considerations, of technological advancements and societal impact. As we embrace this future, it is crucial to navigate these waters with foresight and responsibility, ensuring that the benefits of AI are reaped while minimizing its potential adverse effects.
Bret Taylor and Clay Bavor talk AI startups, AGI, and job disruptions – Semafor
Posted: at 7:59 pm
Q: When you decided you wanted to start a company, did you know what you were going to do?
Bret: We knew we wanted to empower businesses with this technology. One of our principles was, it's hard, even in San Francisco, to follow the pace of progress. Every day, there's a new research paper, a new this, a new that. Imagine being a big consumer brand. How do you actually take advantage of this technology?
It's not like you can read research papers every week as the CEO of Weight Watchers. So we knew we wanted to enable businesses to consume this technology in a push-button way, a solution to bring this to every company in the world. We called on a lot of CIOs, CTOs, and CEOs of companies we worked with in the past. We talked to them about the problems they were facing. Through that, we got excited about one concept, which is the future of digital customer experiences, thinking of this asset, which is the AI agent, as being a really important new concept.
It wasn't just about customer service; it was something bigger than that. We always talked about people's websites and apps; that's their digital identity. In the future, every company will need an agent. "Can you update the agent with the new policies?" That sentence will come from a CEO's mouth at some point. What we loved about it: when you're starting a company, you want to imagine yourself doing it for 20-plus years. So it's a big commitment. We love that there was a short-term demand in customer service where we could improve something very expensive that no one likes much. So it's a great application of AI.
Clay: One of the things that we've been very focused on from the beginning is being intensely customer-led. Back to your question on how we started, it was through a series of conversations rather than taking this new technology and being the hammer trying to find the proverbial nails.
Q: Would you want to develop your own models?
Bret: We converged on a technical approach that we should not be pre-training models. Our area of AI research is around autonomous agents, and it's really thriving in the open-source community. There's a lot of people making an AI that's answering all my emails; it's an energetic, open-source community that is fun to watch.
There are a couple of reasons why we really believe in it. One is customer benefits. Imagine it's Black Friday, Cyber Monday, and you want to extend your return period to past New Year's, which is a common thing to do. If you're building your own pre-trained model, you're like, "we can update the policy in like three weeks. And that's going to cost $100,000." The agility that you get from using a constellation of models, retrieval-augmented generation, all the common techniques in agentic-style AI.
Similarly, it means we can serve a much broader range of customers, because it's not like you have to build an expensive model for every customer. And like Reid Hoffman's characterization, there are frontier models, like GPT-4 and Gemini Ultra. And then there are foundation models, which is just this broad range of open source and others. The foundation models are sort of a commodity now. It makes a lot of sense to focus on fine-tuning and post-training and say, we can start with these great open-source models or other people's foundation models, and just add value, which is unique to our business. For example, we have a model that detects "are you giving medical advice?" We have a model that detects "are you hallucinating?" The pre-training part of that isn't particularly differentiated.
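For readers unfamiliar with the retrieval-augmented generation Taylor mentions, here is a minimal sketch of the pattern under stated assumptions: policy text lives outside the model and is retrieved at answer time, so updating a return policy means editing a snippet rather than retraining. The snippets, the word-overlap retriever, and the `call_llm` placeholder are illustrative, not Sierra's actual implementation.

```python
# Hypothetical policy snippets; a real deployment would load these from a
# knowledge base that support teams can edit without touching any model.
policies = [
    "Standard returns are accepted within 30 days of purchase.",
    "Orders placed between Black Friday and Cyber Monday may be returned until January 15.",
    "Gift cards are non-refundable.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank snippets by word overlap; a production system would use embeddings."""
    q_words = set(question.lower().split())
    ranked = sorted(policies, key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return prompt  # in practice this prompt would be sent to an LLM, e.g. call_llm(prompt)

print(answer("Can I return a Black Friday order after New Year's?"))
```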
So our view is that there's a Moore's Law level of investment in these foundation models. We'd rather benefit from that rising tide lifting our boat, rather than burning our own capital doing what is a relatively undifferentiated part of the AI supply chain, and really focus on what makes our platform unique. If you squint, it's like the cloud market. How many startups build their own data centers now? For some companies, it might make sense, but you should have a very specialized use case. Otherwise, licensing the server from Amazon or Azure probably makes a ton more sense. I think the same is true of these foundation models.
Q: Has the process of building an autonomous agent been more challenging than you thought it would be?
Clay: It's been incredibly fun exploring this territory because you can anthropomorphize how humans think, reason, and recall things, and those really apply to so many things in developing agents. For instance, what does it take to respond effectively to customers and solve problems? You have to plan. How should the agent go about planning?
So we have specialized models that are experts in planning and thinking through the next steps. How do you answer a factual question about the company? You recall a memory of something you read previously. So we've figured out how to give our agent access to, in essence, a reference library that it can read through in an instant, pull out the right bits, and use those to summarize and synthesize an answer. How do we make sure that answers are factual or the action that the agent is taking is correct?
We have another module within our agent architecture that we affectionately call the supervisor. Before a message is sent to a user or an action is taken, the supervisor will basically review the agent's work and say, "actually, I think you need to make a little change here; try again and get back to me." Only after the initial process has revised its work will the action be taken or the response sent.
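As a rough illustration of the supervisor pattern Bavor describes, here is a minimal sketch in which a second check reviews every draft reply and requests a revision before anything is sent; `draft_reply` and `review` are hypothetical stand-ins for model calls, not Sierra's architecture.

```python
# Supervisor-loop sketch: draft, review, revise, then send.
def draft_reply(question: str, feedback: str | None = None) -> str:
    base = f"Draft answer to: {question}. We will refund you immediately."
    return base if feedback is None else f"Draft answer to: {question}. ({feedback})"

def review(reply: str) -> tuple[bool, str]:
    """Return (approved, feedback); a real supervisor would be another model."""
    if "refund" in reply.lower():
        return False, "remove the promised compensation; offer to check order status instead"
    return True, ""

def respond(question: str, max_revisions: int = 2) -> str:
    reply = draft_reply(question)
    for _ in range(max_revisions):
        approved, feedback = review(reply)
        if approved:
            break
        reply = draft_reply(question, feedback)   # revise using the feedback
    return reply

print(respond("Where is my order?"))
```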
On what's been hard, there are a number of really important challenges that, if you're going to put AI directly in front of your customers, you need to mitigate and overcome. For hallucinations, large language models can synthesize answers and facts that aren't, in fact, factual. So we built a layered approach to ensuring that we can mitigate hallucinations, and there's no guarantee because AI is non-deterministic. We're using supervisory layers, giving it access to knowledge provided by the company. We're providing audit and inspection tools, and quality assurance tools, so that our customers can review conversations and, in essence, coach the AI in the right direction through this feedback mechanism.
No matter how smart one of these frontier models is, it's not going to know, Reed, where your order is, or when I bought my shoes and whether or not they're eligible for returns. So you have to be able to integrate safely, securely, and reliably with the systems that you use to run your business. And we've built some really important protections there, where all actions taken when you're interacting with customer or company data are completely deterministic. They use good old-fashioned if-then-else statements, and don't rely on LLMs and their unpredictability to manage things like access controls, security, and so on.
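A sketch of the determinism point: the decision about who may see an order is plain conditional logic, with no language model involved. The field names and rules below are illustrative assumptions, not Sierra's actual schema.

```python
# Deterministic access control: good old-fashioned if-then-else, no LLM.
def can_view_order(requester_id: str, order: dict) -> bool:
    if order.get("customer_id") != requester_id:   # only the owner may look
        return False
    if order.get("status") == "deleted":           # hide soft-deleted records
        return False
    return True

order = {"customer_id": "cust-42", "status": "shipped", "items": ["trail shoes"]}
print(can_view_order("cust-42", order), can_view_order("cust-99", order))
```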
The last interesting challenge has been: of course you want an AI agent representing your company to be able to do stuff, to answer questions, to be able to solve problems. But you also want it to be a good ambassador of your brand and of your company. So one of the most interesting challenges has been, how do we imbue a company's AI agent with its values and its voice, its way of being?
One of our design partners, OluKai, is a Hawaiian-inspired retailer. They wanted to make sure that their AI agent interacts with what they call the Aloha experience. So we've imbued it with tone, language, and some knowledge of the Hawaiian language. We've even had it throw the shaka emoji at a customer who was particularly friendly toward the end of an interaction.
One of our other customers has what they refer to as the language of luxury, a kind of refined way of interacting with customers with really excellent manners. These are some of the challenges that we've had to overcome. They've been hard but really interesting.
Q: When people think of automated customer service, the thought is, how do I get to a real person in the quickest way? Are you seeing evidence that people might enjoy talking to a robot more than a person?
Bret: That's definitely our ambition. So Weight Watchers, the AI in their app is handling over 70% of conversations completely autonomously. And it's a 4.6 out of 5-star customer satisfaction score, which is remarkable. OluKai, over Black Friday, Cyber Monday, we handled over half their cases with a 4.5 out of 5 customer satisfaction score. The joke we all say is, if you surveyed anybody, "Do you like talking to a customer support chatbot?", you could not find a person who says yes.
I think if you survey people about ChatGPT, you get the inverse. Everyone loves it, even with its flaws and hallucinations. It's delightful. It's fun. That's why it's so popular. One of our big challenges will be to shift the perception of chatting with an AI. At our company, we don't use the word bot, because we've found that consumers associate it with the old technology.
So our customers get to name their agent, but we usually refer to it as an AI or an agent or a virtual agent, to try to make sure that the brand association is: hey, it's this new thing, it's this fun, delightful, empathetic thing, not that old, robotic thing. But it'll be an interesting challenge.
Our AI agents are always on, faster, more delightful than having to wait on hold, not because the agent on the other side is bad. But you don't have to wait on hold. It's instantaneous. It's faster. I hope that we end up where people are like, "don't you have an AI I can talk to? Are you kidding me? I have to talk to a real person?" I don't think we're there, and I think there'll be a bit of a cultural shift. We've even talked about, how do you actually know you're talking to one of the good ones versus the old bad ones? Because they kind of look the same. But you know it when you see it.
Q: There are some really heavy hitters in this space trying to do something similar. How do you differentiate yourselves?
Bret: We're really focused on driving real success with real scaled consumer brands like Sonos, Sirius XM, Weight Watchers, and OluKai. We really recognize that it's very easy to make a demo in this space, but to get something to work at scale, that's where the hard stuff is. When companies decide who they want to partner with, they'll look at: who are the customers? Do I respect them?
We want to be focused on the enterprise. We believe that the needs of enterprise consumer brands are pretty distinct, or higher scale. They have really strict regulatory requirements that smaller companies don't have. That produces a platform where we have a lot of enterprise features around protecting personal identifiable information, compliance, things that are an important category of enterprise software that I think will set us apart.
We also have a really great business model. We call it outcome-based pricing. Our customers only pay us when we fully resolve the issue. It means that they are only paying us when we're saving them money. It will be competitive, and execution really matters. The company hasn't even existed for 11 months and we've got live paying customers.
Very few people remember AltaVista, but those of us at Google at the time do. Very few people remember Buy.com; they remember Amazon. We're aware that in these periods of technology innovation, execution matters a lot.
Q: Just to make sure I understand: if I'm a customer and I go to a human, then that company doesn't have to pay you, because the agent did not resolve the issue?
Bret: That's right.
Q: You're a startup. You have no time to be distracted. But then you became chair of the OpenAI board. What was that like for you two then?
Bret: The reason I agreed to join the board was a sense of the gravity and importance of OpenAI. I had this genuine fear that the OpenAI that had produced so much of the innovation that inspired Clay and me to quit our jobs might cease to exist in its current form. I was in a unique position to help facilitate an outcome where OpenAI could be preserved, and I felt a sense of obligation to do it.
When I talked to Clay, the conversation was like, is this going to take too much time? Is it going to be a distraction? Both of us were like, OpenAI is really important. You're not going to sit around 10 years from now and say it was a bad use of time to help preserve the mission of ensuring Artificial General Intelligence benefits all of humanity. I've served on public boards before, including some high-profile ones. I've been pretty good at time management, and I work a lot. We've been able to manage it pretty well. At the end of the day, we're technologists.
It's funny. Now people ask, is it competitive? It's like asking, is the internet a market? I don't think it is. If I have to articulate the AI market, there's infrastructure, there's foundation model providers, there's tools, and then there's solutions. We're a solution. We're in a different part of the supply chain of AI.
Clay: As we do with everything, we talked it through. And I really felt, and I think Bret felt, that there was an element of civic duty. It's fair to say that Bret was in a literally unique position to make a difference, given his experience, given his great mind, and perhaps most importantly, given his values and judgment. As for the impact on Sierra, Bret has done a remarkable job balancing everything, and I'm really proud to, from a step removed, be a part of preserving this really important organization.
Q: I think every company in crisis now is going to call you to be on their board.
Bret: I'm trying to figure out the reputation I have now. Am I like Harvey Keitel from Pulp Fiction or something? I don't know.
Q: Is the drama over, by the way? I know there's an ongoing investigation.
Bret: Nothing to share at this point. But over the coming months, we'll be super transparent about all of that.
Q: Speaking of AGI, I know you're not developing it. But has this experience of trying to meld all these different models, and fine-tune them to build something more intelligent, made you think about the path to AGI any differently?
Bret: I'm not an expert in AGI, so take this as a slightly outside, slightly inside perspective. I do think that composing different models to produce something greater is a really interesting technique. If you have a model that's wrong 10% of the time and right 90% of the time, and another model that can detect when it's wrong with the same level of accuracy, you can compose them and make something that's right 99% of the time. It's also slower and more expensive, though; you end up with a pipeline of intelligence. There are both time and cost limits to it. But it's really interesting architecturally.
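To make the arithmetic behind that claim concrete, here is a minimal, hypothetical simulation (not a description of Sierra's actual system) of a 90%-accurate generator composed with a 90%-accurate checker that escalates flagged answers to a human:

```python
import random

# Hypothetical simulation of the composition described above: a generator
# model that is right 90% of the time and a checker model that labels
# answers correctly 90% of the time. Answers the checker flags are
# escalated to a person instead of being sent automatically.
GEN_ACCURACY = 0.90    # P(generator's answer is correct)
CHECK_ACCURACY = 0.90  # P(checker labels the answer correctly)

def run_trial(rng: random.Random) -> str:
    answer_correct = rng.random() < GEN_ACCURACY
    checker_correct = rng.random() < CHECK_ACCURACY
    # The checker flags the answer whenever it believes the answer is wrong.
    flagged = (not answer_correct and checker_correct) or (
        answer_correct and not checker_correct
    )
    if flagged:
        return "escalated"  # handed to a person, at extra time and cost
    return "sent_correct" if answer_correct else "sent_wrong"

rng = random.Random(0)
results = [run_trial(rng) for _ in range(100_000)]
total = len(results)
print("escalated:", results.count("escalated") / total)        # ~0.18
print("sent and wrong:", results.count("sent_wrong") / total)  # ~0.01
# A wrong answer reaches the customer only when the generator errs AND the
# checker misses it: 0.10 * 0.10 = 0.01, so the composed system is right
# (or escalates) about 99% of the time, though roughly 18% of traffic now
# takes the slower, more expensive path.
```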
The biggest trend change that Clay and I have talked about is this: three years ago (ancient history in this field), AI was sort of the domain of machine learning. You meet a data scientist, and their workflows are very different from engineers'. It's like notebooks and lots of data. Source control is optional. It's culturally very different from traditional software engineering. Now, particularly with agent-oriented models, you can use models off the shelf, you can wire them together, and AI has moved to the domain of engineering.
You use it almost like you think of spinning up a database or something: oh, yeah, we'll use this model for that and use this model for this. I'm not sure of its impact on AGI, which has a lot of connotation, but certainly as it relates to building an intelligence into all the products we use on a daily basis, I think it's been democratized.
LLMs just enable transfer learning. Essentially, when you train on all of human knowledge, it's very easy to get it to do something smart at the tail end of that, kind of reductively. As a consequence, that's so interesting, because now everyday full-stack developers can incorporate next-generation intelligence into their product. You used to have to be Google.
Now it's like, everyday programmers have these at their disposal. And I still think we haven't seen the end of that. The first generation of iPhone apps were like a flashlight. I think the early AI applications were sort of thin wrappers on top of ChatGPT. We haven't gotten yet to the WhatsApps and the Ubers.
Q: I also wonder if there's an element of the early internet here, where there's an infrastructure bottleneck. You can't use a frontier model for every part of this. It's too slow, too expensive. So, do you try to make your software efficient for today's models, or make it a little inefficient in anticipation of the infrastructure layer improving?
Bret: Our approach internally with research is to use overpowered models to prove out a concept and then specialize afterwards. And I really think that style of development is great. It's like vertical integration: you can get it working, prove it out, and then say, "Okay, can we build specialized models?" There's been a lot of research on this, including a Microsoft paper whose name I can't remember, on using very large parameter models to make lower-parameter models that are really effective.
Q: "Textbooks Are All You Need."
Bret: That was the paper. This area is fascinating. One of the things we've talked about is that Sierra was the name of a game software company in the '90s whose games both of us played. I remember hearing stories of the game developers in the '90s, where they'd make a game for a computer that didn't exist yet. Moore's Law was moving at such a blistering pace at that point that making a game for the current generation just didn't make sense; you'd make it for the next one.
When we think about Sierra, we think about two forms of this. One is that you can build with lower-parameter, cheaper models that make it faster and cheaper. Similarly, even the current generation of models will be cheaper and faster a year from now, even if you did nothing. So there's this interesting thing as you're building a business and thinking about your gross margins: the present becomes the past so quickly that talking about it is almost incorrect. You really should be thinking about Moore's Law the way a '90s game developer thought about the PC.
It makes it very hard to form a business plan, by the way, because you almost have to bet on the outcome, but you don't have all the information. We know a multimodal model that supports x is going to exist by the end of this year, with like a 90% likelihood. What decision do you make as a technologist at this point to optimize for that? It's fun, but it's chaotic.
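As a rough sketch of the workflow Bret describes, prototype with an overpowered model and then specialize, one common pattern is to have the large model generate high-quality targets, fine-tune a smaller model on them, and keep the large model as a fallback. The names below (big_model, the fine-tuning step, the confidence threshold) are placeholders for whatever stack a team actually uses, not any specific vendor's API.

```python
# Hypothetical sketch: prove the task with a large model, then specialize.
from typing import Callable

def big_model(prompt: str) -> str:
    """Stand-in for an expensive frontier-model call; wire up a real provider here."""
    raise NotImplementedError

def collect_distillation_data(tasks: list[str]) -> list[dict]:
    # Step 1: use the overpowered model to produce high-quality targets.
    return [{"input": t, "target": big_model(t)} for t in tasks]

def fine_tune_small_model(examples: list[dict]) -> Callable[[str], tuple[str, float]]:
    # Step 2: run a supervised fine-tuning job on a lower-parameter model
    # using the (input, target) pairs. Details depend entirely on the stack.
    raise NotImplementedError

def serve(request: str,
          small_model: Callable[[str], tuple[str, float]],
          confidence_threshold: float = 0.8) -> str:
    # Step 3: serve the cheap model and fall back to the big one when it is
    # unsure, so average cost drops while quality is largely preserved.
    answer, confidence = small_model(request)
    return answer if confidence >= confidence_threshold else big_model(request)
```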
Q: It sounds like getting that exactly right might be the thing that makes you win.
Clay: Being able to read the trend lines and how quickly these new capabilities will come from being just over the horizon, to on the horizon, to available and usable for building new products with, that's part of the art here. We both often fall asleep reading research papers at night. So we're up to speed on the latest. Our hope is that we can read those research papers and hire the PhDs so that our customers don't have to, and we can enable every one of them to build this AI agent version of themselves.
Q: You've said that this will put some call center workers out of business, but it will also create new jobs. I agree, but do you have any ideas of what those new jobs will be?
Bret: At one of our design partners, the customer experience team, which sits in the operations part of the customer service organization, was doing quality assurance on the agent, both before and after launch, reporting issues with live conversations. They refer to themselves now as the AI architects, and their main job is actually shaping and changing the behavior of the AI. We've embraced that.
With our new customers, we talk about how you need to have some people adopt this AI architect role. The exciting part for me is: what is the webmaster of AI? Not the computer science person who's making the hardcore HTTP server, but the person whose actual job it is to help a company get their stuff up and running, and maintain it.
We love this idea of an AI architect, but I think it requires technology companies to create tools that are accessible to people who are not technologists so that they can be a part of this. I actually was really inspired by the role of Salesforce administrator. It would surprise me if it weren't one of the top 10 jobs on Indeed still to this day. And the role of a Salesforce administrator is a low-code, no-code job to set up Salesforce for people.
If you talk to Salesforce administrators, 99% of them made a mid-career transition to that role. Everything from manicurists to the accidental admin whose boss says, "Hey, we have this Salesforce thing, do you mind maintaining it?" Ten years later, they have a higher salary and they're part of this ecosystem.
It's important that, as technology companies, we're creating those opportunities: on-ramps for people in operational roles around service to benefit from the rising tide of all the investment in this space. It will be disruptive, though. I don't know the history of the automated teller machine very well. I imagine there was a point where it was disruptive. And it's very easy to say now that the number of bank employees didn't go down. But what about the week you put it in? Was that moment disruptive? It probably was.
We shouldn't be insensitive to the fact that when you start answering 70% of conversations with an AI, there's probably a person on the other side that's getting less traffic. That's something we need to be accountable to and sensitive to. But the average tenure of a contact center agent is way less than two years. It's not a career people seek out. It's not necessarily the most pleasant work. If you visit a call center, you'll see people with eight chat windows open at the same time, with a requirement for how many conversations they have to handle per hour. It's a challenging job.
So I'm hopeful that the jobs that come out of this will be better and more fulfilling. But the transition could be awkward, and that's something we need to be sensitive to, and it's something Clay and I talk a ton about.
See original here:
Bret Taylor and Clay Bavor talk AI startups, AGI, and job disruptions - Semafor
3 scary breakthroughs AI will make in 2024 – Livescience.com
Posted: January 2, 2024 at 5:48 am
Artificial intelligence (AI) has been around for decades, but this year was a breakout for the spooky technology, with OpenAI's ChatGPT creating accessible, practical AI for the masses. AI, however, has a checkered history, and today's technology was preceded by a short track record of failed experiments.
For the most part, innovations in AI seem poised to improve things like medical diagnostics and scientific discovery. One AI model can, for example, detect whether you're at high risk of developing lung cancer by analyzing an X-ray scan. During COVID-19, scientists also built an algorithm that could diagnose the virus by listening to subtle differences in the sound of people's coughs. AI has also been used to design quantum physics experiments beyond what humans have conceived.
But not all the innovations are so benign. From killer drones to AI that threatens humanity's future, here are some of the scariest AI breakthroughs likely to come in 2024.
We don't know why exactly OpenAI CEO Sam Altman was dismissed and reinstated in late 2023. But amid corporate chaos at OpenAI, rumors swirled of an advanced technology that could threaten the future of humanity. That OpenAI system, called Q* (pronounced Q-star), may embody the potentially groundbreaking realization of artificial general intelligence (AGI), Reuters reported. Little is known about this mysterious system, but should the reports be true, it could kick AI's capabilities up several notches.
AGI is a hypothetical tipping point, also known as the "Singularity," in which AI becomes smarter than humans. Current generations of AI still lag in areas in which humans excel, such as context-based reasoning and genuine creativity. Most, if not all, AI-generated content is just regurgitating, in some way, the data used to train it.
But AGI could potentially perform particular jobs better than most people, scientists have said. It could also be weaponized and used, for example, to create enhanced pathogens, launch massive cyber attacks, or orchestrate mass manipulation.
The idea of AGI has long been confined to science fiction, and many scientists believe we'll never reach this point. For OpenAI to have reached this tipping point already would certainly be a shock, but not beyond the realm of possibility. We know, for example, that Sam Altman was already laying the groundwork for AGI in February 2023, outlining OpenAI's approach to AGI in a blog post. We also know experts are beginning to predict an imminent breakthrough, including Nvidia's CEO Jensen Huang, who said in November that AGI is in reach within the next five years, Barron's reported. Could 2024 be the breakout year for AGI? Only time will tell.
One of the most pressing cyber threats is that of deepfakes: entirely fabricated images or videos of people that might misrepresent them, incriminate them, or bully them. AI deepfake technology hasn't yet been good enough to be a significant threat, but that might be about to change.
AI can now generate real-time deepfakes (live video feeds, in other words), and it is now becoming so good at generating human faces that people can no longer tell the difference between what's real and what's fake. Another study, published in the journal Psychological Science on Nov. 13, unearthed the phenomenon of "hyperrealism," in which AI-generated content is more likely to be perceived as "real" than actually real content.
This would make it practically impossible for people to distinguish fact from fiction with the naked eye. Although tools could help people detect deepfakes, these aren't in the mainstream yet. Intel, for example, has built a real-time deepfake detector that works by using AI to analyze blood flow. But FakeCatcher, as it's known, has produced mixed results, according to the BBC.
As generative AI matures, one scary possibility is that people could deploy deepfakes to attempt to swing elections. The Financial Times (FT) reported, for example, that Bangladesh is bracing itself for a January election that could be disrupted by deepfakes. As the U.S. gears up for a presidential election in November 2024, there's a possibility that AI and deepfakes could shift the outcome of this critical vote. UC Berkeley is monitoring AI usage in campaigning, for example, and NBC News also reported that many states lack the laws or tools to handle any surge in AI-generated disinformation.
Governments around the world are increasingly incorporating AI into tools for warfare. The U.S. government announced on Nov. 22 that 47 states had endorsed a declaration on the responsible use of AI in the military first launched at The Hague in February. Why was such a declaration needed? Because "irresponsible" use is a real and terrifying prospect. We've seen, for example, AI drones allegedly hunting down soldiers in Libya with no human input.
AI can recognize patterns, self-learn, make predictions or generate recommendations in military contexts, and an AI arms race is already underway. In 2024, it's likely we'll not only see AI used in weapons systems but also in logistics and decision support systems, as well as research and development. In 2022, for instance, AI generated 40,000 novel, hypothetical chemical weapons. Various branches of the U.S. military have ordered drones that can perform target recognition and battle tracking better than humans. Israel, too, used AI to rapidly identify targets at least 50 times faster than humans can in the latest Israel-Hamas war, according to NPR.
But one of the most feared development areas is that of lethal autonomous weapon systems (LAWS) or killer robots. Several leading scientists and technologists have warned against killer robots, including Stephen Hawking in 2015 and Elon Musk in 2017, but the technology hasn't yet materialized on a mass scale.
That said, some worrying developments suggest this year might be a breakout for killer robots. For instance, in Ukraine, Russia allegedly deployed the Zala KYB-UAV drone, which could recognize and attack targets without human intervention, according to a report from The Bulletin of the Atomic Scientists. Australia, too, has developed Ghost Shark, an autonomous submarine system that is set to be produced "at scale," according to the Australian Financial Review. The amount countries around the world are spending on AI is also an indicator, with China raising AI expenditure from a combined $11.6 million in 2010 to $141 million by 2019, according to Datenna, Reuters reported. This is because, the publication added, China is locked in a race with the U.S. to deploy LAWS. Combined, these developments suggest we're entering a new dawn of AI warfare.
See the rest here:
3 scary breakthroughs AI will make in 2024 - Livescience.com
What Is Artificial Intelligence (AI)? – Council on Foreign Relations
Posted: at 5:47 am
Introduction
Artificial intelligence (AI) has been around for decades, but new advancements have brought the technology to the fore. Experts say its rise could mirror previous technological revolutions, adding billions of dollars' worth of productivity to the global economy while introducing a slew of new risks that could upend the global geopolitical order and the nature of society itself.
Managing these risks will be essential, and a global debate over AI governance is raging as major powers such as the United States, China, and the European Union (EU) take increasingly divergent approaches toward regulating the technology. Meanwhile, AI's development and deployment continue to proceed at an exponential rate.
While there is no single definition, artificial intelligence generally refers to the ability of computers to perform tasks traditionally associated with human capabilities. The term's origins trace back to the 1950s, when Stanford University computer scientist John McCarthy used the term "artificial intelligence" to describe the science and engineering of making intelligent machines. For McCarthy, the standard for intelligence was the ability to solve problems in a constantly changing environment.
Since 2022, the public availability of so-called generative AI tools, such as the chatbot ChatGPT, has raised the technology's profile. Generative AI models draw from massive amounts of training data to generate statistically probable outcomes in response to specific prompts. Tools powered by such models generate humanlike text, images, audio, and other content.
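To make "statistically probable outcomes" concrete, here is a toy next-token sampler. Real generative models do essentially this, except the probabilities come from a neural network over a vocabulary of tens of thousands of tokens rather than a hand-written table; the tiny distribution below is invented purely for illustration.

```python
import random

# Toy "language model": for each two-token context, a made-up probability
# distribution over the next token. Real models compute these probabilities
# with a neural network trained on massive amounts of text.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "ran": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(prompt: list[str], max_new_tokens: int = 4, seed: int = 0) -> str:
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        context = tuple(tokens[-2:])          # condition on the last two tokens
        dist = NEXT_TOKEN_PROBS.get(context)
        if dist is None:
            break                             # no known continuation
        words, weights = zip(*dist.items())
        tokens.append(rng.choices(words, weights=weights, k=1)[0])  # sample
    return " ".join(tokens)

print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"
```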
Another commonly referenced form of AI, known as artificial general intelligence (AGI), or strong AI, refers to systems that would learn and apply knowledge like humans do. However, these systems do not yet exist, and experts disagree on what exactly they would entail.
Researchers have been studying AI for eighty years, with mathematicians Alan Turing and John von Neumann considered to be among the discipline's founding fathers. In the decades since they taught rudimentary computers binary code, software companies have used AI to power tools such as chess-playing computers and online language translators.
In the countries that invest the most in AI, development has historically relied on public funding. In China, AI research is predominantly funded by the government, while the United States for decades drew on research by the Defense Advanced Research Projects Agency (DARPA) and other federal agencies. In recent years, U.S. AI development has largely shifted to the private sector, which has poured hundreds of billions of dollars into the effort.
In 2022, U.S. President Joe Biden signed the CHIPS and Science Act, which refocuses U.S. government spending on technology research and development. The legislation directs $280 billion in federal spending toward semiconductors, the advanced hardware capable of supporting the massive processing and data-storage capabilities that AI requires. In January 2023, ChatGPT became the fastest-growing consumer application of all time.
The arrival of AI marks "a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies," Eurasia Group President Ian Bremmer and Inflection AI CEO Mustafa Suleyman write for Foreign Affairs.
Companies and organizations across the world are already implementing AI tools into their offerings. Driverless-car manufacturers such as Tesla have been using AI for years, as have investment banks that rely on algorithmic models to conduct some trading operations, and technology companies that use algorithms to deliver targeted advertising. But after the arrival of ChatGPT, even businesses that are less technology-oriented began turning to generative AI tools to automate systems such as those for customer service. One-third of firms around the world that were surveyed by consultancy McKinsey in April 2023 claimed to be using AI in some capacity.
Widespread adoption of AI could speed up technological innovation across the board. Already, the semiconductor industry has boomed; Nvidia, the U.S.-based company that makes the majority of all AI chips, saw its stock more than triple in 2023, to a total valuation of more than $1 trillion, amid skyrocketing global demand for semiconductors.
Many experts foresee a massive boon to the global economy as the AI industry grows, with global gross domestic product (GDP) predicted to increase by an additional $7 trillion annually within the next decade. "Economies that refuse to adopt AI are going to be left behind," CFR expert Sebastian Mallaby said on an episode of the Why It Matters podcast. "Everything from strategies to contain climate change, to medical challenges, to making something like nuclear fusion work, almost any cognitive challenge you can think of is going to become more soluble thanks to artificial intelligence."
Like many other large-scale technological changes in history, AI could breed a trade-off between increased productivity and job loss. But unlike previous breakthroughs, which predominantly eliminated lower-skill jobs, generative AI could put white-collar jobs at risk, and perhaps supplant jobs across many industries more quickly than ever before. One quarter of jobs around the world are at a high risk of being replaced by AI automation, according to the Organization for Economic Cooperation and Development (OECD). These jobs tend to rely on tasks that generative AI could perform at a similar level of quality as a human worker, such as information gathering and data analysis, a Pew Research Center study found. Workers with high exposure to replacement by AI include accountants, web developers, marketing professionals, and technical writers.
The rise of generative AI has also raised concerns over inequality, as the most high-skilled jobs appear to be the safest from disruptions related to the technology, according to the OECD. But other analysis suggests that low-skilled workers could benefit by drawing on AI tools to boost productivity: a 2023 study by researchers at the Massachusetts Institute of Technology (MIT) and Stanford University found that less-experienced call center operators doubled the productivity gains of their more-experienced colleagues after both groups began using AI.
AI's relationship with the environment heralds both peril and promise. While some experts argue that generative AI could catalyze breakthroughs in the fight against climate change, others have raised alarms about the technology's massive carbon footprint. Its enormous processing power requires energy-intensive data centers; these systems already produce greenhouse gas emissions equivalent to those from the aviation industry, and AI's energy consumption is only expected to rise with future advancements.
AI advocates contend that developers can use renewable energy to mitigate some of these emissions. Tech firms including Apple, Google, and Meta run their data centers using self-produced renewable energy, and they also buy so-called carbon credits to offset emissions from any energy use that relies on fossil fuels.
There are also hopes that AI can help reduce emissions in other industries by enhancing research on renewables and using advanced data analysis to optimize energy efficiency. In addition, AI can improve climate adaptation measures. Scientists in Mozambique, for example, are using the technology to better predict flooding patterns, bolstering early warning systems for impending disasters.
Many experts have framed AI development as a struggle for technological primacy between the United States and China. The winner of that competition, they say, will gain both economic and geopolitical advantage. So far, U.S. policymakers seem to have operated with this framework in mind. In 2022, Biden banned exports of the most powerful semiconductors to China and encouraged U.S. allies to do the same, citing national security concerns. One year later, Biden proposed an outright ban on several streams of U.S. investment into China's AI sector, and the Department of Commerce announced a raft of new restrictions aimed at curbing Chinese breakthroughs in artificial intelligence. Most experts believe the United States has outpaced China in AI development to date, but that China will quickly close the gap.
AI could also have a more direct impact on U.S. national security: the Department of Defense expects the technology to transform the very character of war by empowering autonomous weapons and improving strategic analysis. (Some experts have pushed for a ban on autonomous lethal weapons.) In Ukraine's war against Russia, Kyiv is deploying autonomously operated AI-powered drones, marking the first time a major conflict has involved such technology. Warring parties could also soon rely on AI systems to accelerate battlefield decisions or to automatically attack enemy infrastructure. Some experts fear these capabilities could raise the possibility of nuclear weapons use.
Furthermore, AI could heighten the twin threats of disinformation and propaganda, issues that are gaining particular relevance as the world approaches a year in which more people are set to vote than ever before: more than seventy countries, representing half the global population, will hold national elections in 2024. Generative AI tools are making deepfakes easier to create, and the technology is already appearing in electoral campaigns across the globe. Experts also cite the possibility that bad actors could use AI to create sophisticated phishing attempts that are tailored to a target's interests to gain access to election systems. (Historically, phishing has been a way into these systems for would-be election hackers; Russia used the method to interfere in the 2016 U.S. election, according to the Department of Justice.)
Together, these risks could lead to "a nihilism about the existence of objective truth that threatens democracy," said Jessica Brandt, policy director for the Brookings Institution's Artificial Intelligence and Emerging Technology Initiative, on the podcast The President's Inbox.
Some experts say that it's not yet accurate to call AI intelligent, as it doesn't involve human-level reasoning. They argue that it doesn't create new knowledge, but instead aggregates existing information and presents it in a digestible way.
But that could change. OpenAI, the company behind ChatGPT, was founded as a nonprofit dedicated to ensuring that AGI benefits humanity as a whole, and its cofounder, Sam Altman, has argued that it is not possible or desirable to stop the development of AGI; in 2023, Google DeepMind CEO Demis Hassabis said AGI could arrive within five years. Some experts, including CFR Senior Fellow Sebastian Mallaby, contend that AI has already surpassed human-level intelligence on some tasks. In 2020, DeepMind used AI to solve protein folding, widely considered until then to be one of the most complex, unresolved biological mysteries.
Many AI experts seem to think AI could pose an existential threat to humanity. In May 2023, hundreds of AI leaders, including the CEOs of Anthropic, Google DeepMind, and OpenAI, signed a one-sentence letter that read, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
One popular theory for how extinction could happen posits that a directive to optimize a certain task could lead a super-intelligent AI to accomplish its goal by diverting resources away from something humans need to live. For example, an AI tasked with reducing the amount of harmful algae in the oceans could suck oxygen out of the atmosphere, leading humans to asphyxiate. While many AI researchers see this theory as alarmist, others say the example accurately illustrates the risk that powerful AI systems could cause vast, unintentional harm in the course of carrying out their directives.
Skeptics of this debate argue that focusing on such far-off existential risks obfuscates more immediate threats, such as authoritarian surveillance or biased data sets. Governments and companies around the world are expanding facial-recognition technology, and some analysts worry that Beijing in particular is using AI to supercharge repression. Another risk occurs when AI training data contains elements that are over- or underrepresented; tools trained on such data can produce skewed outcomes. This can exacerbate discrimination against marginalized groups, such as when AI-powered tenant-screening algorithms trained on biased data disproportionately deny housing to people of color. Generative AI tools can also facilitate chaotic public discourse, hallucinating false information that chatbots present as true, or polluting search engines with dubious AI-generated results.
Almost all policymakers, civil society leaders, academics, independent experts, and industry leaders agree that AI should be governed, but they are not on the same page about how. Internationally, governments are taking different approaches.
The United States escalated its focus on governing AI in 2023. The Biden administration followed up its 2022 AI Bill of Rights by announcing a pledge from fifteen leading technology companies to voluntarily adopt shared standards for AI safety, including by offering their frontier models for government review. In October 2023, Biden issued an expansive executive order aimed at producing a unified framework for safe AI use across the executive branch. And one month later, a bipartisan group of senators proposed legislation to govern the technology.
EU lawmakers are moving ahead with legislation that will introduce transparency requirements and restrict AI use for surveillance purposes. However, some EU leaders have expressed concerns that the law could hinder European innovation, raising questions of how it will be enforced. Meanwhile, in China, the ruling Chinese Communist Party has rolled out regulations that include antidiscrimination requirements as well as the mandate that AI reflect Socialist core values.
Some governments have sought to collaborate on regulating AI at the international level. At the Group of Seven (G7) summit in May 2023, the bloc launched the so-called Hiroshima Process to develop a common standard on AI governance. In October 2023, the United Nations formed an AI Advisory Board, which includes both U.S. and Chinese representatives, to coordinate global AI governance. The following month, twenty-eight governments attended the first ever AI Safety Summit, held in the United Kingdom. Delegates, including envoys from the United States and China, signed a joint declaration warning of AI's potential to cause catastrophic harm and resolving to work together to ensure human-centric, trustworthy, and responsible AI. China has also announced its own AI global governance effort for countries in its Belt and Road Initiative.
AI's complexity makes it unlikely that the technology could be governed by any one set of principles, CFR Senior Fellow Kat Duffy says. Proposals run the gamut of policy options with many levels of potential oversight, from total self-regulation to various types of public-policy guardrails.
Some analysts acknowledge that AIs risks have destabilizing consequences but argue that the technologys development should proceed. They say that regulators should place limits on compute, or computing power, which has increased by five billion times over the past decade, allowing models to incorporate more of their training data in response to human prompts. Others say governance should focus on immediate concerns such as improving the publics AI literacy and creating ethical AI systems that would include protections against discrimination, misinformation, and surveillance.
Some experts have called for limits on open-source models, which can increase access to the technology, including for bad actors. Many national security experts and leading AI companies are in favor of such rules. However, some observers warn that extensive restrictions could reduce competition and innovation by allowing the largest AI companies to entrench their power within a costly industry. Meanwhile, there are proposals for a global framework for governing AIs military uses; one such approach would be modeled after the International Atomic Energy Agency, which governs nuclear technology.
The U.S.-China relationship looms large over AI governance: as Beijing pursues a national strategy aimed at making China the global leader in AI theories, technologies, and applications by 2030, policymakers in Washington are struggling with how to place guardrails around AI development without undermining the United States' technological edge.
Meanwhile, AI technology is rapidly advancing. Computing power has doubled every 3.4 months since 2012, and AI scientists expect models to use one hundred times more compute by 2025.
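As a back-of-the-envelope check (my arithmetic, not a figure from the article), doubling every 3.4 months compounds to roughly an elevenfold increase per year, which is consistent with expecting on the order of one hundred times more compute over about two years:

```python
# Back-of-the-envelope: what does doubling every 3.4 months imply per year?
DOUBLING_PERIOD_MONTHS = 3.4

per_year = 2 ** (12 / DOUBLING_PERIOD_MONTHS)
print(f"growth per year:     ~{per_year:.1f}x")       # ~11.6x
print(f"growth over 2 years: ~{per_year ** 2:.0f}x")  # ~134x, roughly the
# "one hundred times more compute" order of magnitude cited for 2025
```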
In the absence of robust global governance, companies that control AI development are now exercising power typically reserved for nation-states, ushering in a "technopolar" world order, Bremmer and Suleyman write. They argue that these companies have become geopolitical actors themselves, and thus they need to be involved in the design of any global rules.
AI's transformative potential means the stakes are high. "We have a chance to fix huge problems," Mallaby says. With proper safeguards in place, he says, AI systems can catalyze scientific discoveries that cure deadly diseases, ward off the worst effects of climate change, and inaugurate an era of global economic prosperity. "I'm realistic that there are significant risks, but I'm hopeful that smart people of goodwill can help to manage them."
Go here to see the original:
What Is Artificial Intelligence (AI)? - Council on Foreign Relations
6. AI in Everyday Life: How Artificial Intelligence is Impacting Society – Medium
Posted: at 5:47 am
Artificial intelligence (AI) is the science and technology of building computers and systems capable of reasoning, learning, making decisions, and solving problems: tasks that typically require human intellect. The availability of vast volumes of data, the creation of potent computer hardware and software, and the invention of novel algorithms and models have all contributed to AI's recent rapid advancement. Artificial intelligence has been implemented in a number of fields and sectors, including industry, banking, education, and entertainment. But one of the most widespread and significant uses of AI is in our daily lives, where it may facilitate our work and personal lives, increase our comfort and convenience, and improve our overall experience and happiness. This post will provide you with further information on artificial intelligence (AI), including its workings, advantages and disadvantages, and instances of its application in real-world situations both now and in the future.
Artificial intelligence operates through the use of algorithms and models that, without explicit programming or instruction, may learn from data and gradually improve their performance. Two categories of AI exist: narrow and general. Using AI for specific, well-defined tasks, like playing chess, recognizing faces, or translating languages, is known as narrow AI. General AI refers to the application of AI to any activity that a person can perform, including understanding natural language, reasoning logically, and expressing emotion. While narrow AI is currently a reality and affects our daily lives, general AI remains a theoretical objective that has not yet been attained. Additionally, AI may be separated into two further categories: symbolic and sub-symbolic. Symbolic AI represents and processes knowledge and information by manipulating symbols and rules, such as those found in logic, mathematics, or linguistics. Sub-symbolic AI learns from data and patterns, using techniques such as neural networks, genetic algorithms, or fuzzy logic, to model and simulate intricate and dynamic events and systems.
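To show the distinction in miniature (a deliberately simplified sketch, not a claim about how production systems are built): a symbolic approach encodes hand-written rules over symbols, while a sub-symbolic approach learns numeric parameters from labeled examples.

```python
# Symbolic AI in miniature: explicit, hand-written rules over symbols.
def symbolic_spam_filter(subject: str) -> bool:
    rules = ["free money", "act now", "winner"]
    return any(phrase in subject.lower() for phrase in rules)

# Sub-symbolic AI in miniature: a single perceptron whose weights are learned
# from labeled examples instead of being written down by a person.
def train_perceptron(examples, epochs=20, lr=0.1):
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            err = label - pred
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

# Toy features: [message mentions "free", message mentions "meeting"]
data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)]
weights, bias = train_perceptron(data)
print(symbolic_spam_filter("WINNER: free money inside"))  # True, by rule
print(weights, bias)  # parameters learned from data, not hand-written
```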
See the original post here:
6. AI in Everyday Life: How Artificial Intelligence is Impacting Society - Medium