To many, AI is just a horrible Steven Spielberg movie. To others, it's the next generation of learning computers. But what is artificial intelligence, exactly? The answer depends on who you ask.
Broadly, artificial intelligence (AI) is the combination of mathematical algorithms, computer software, hardware, and robust datasets deployed to solve some kind of problem. In one sense, artificial intelligence is sophisticated information processing by a powerful program or algorithm. In another, an AI connotes the same information processing but also refers to the program or algorithm itself.
Many definitions of artificial intelligence include a comparison to the human mind or brain, whether in form or function. Alan Turing wrote in 1950 about thinking machines that could respond to a problem using human-like reasoning. His eponymous Turing test is still a benchmark for natural language processing. Later, however, Stuart Russell and Peter Norvig observed that humans are intelligent but not always rational.
As defined by John McCarthy in 2004, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."
Russell and Norvig saw two classes of artificial intelligence: systems that think and act rationally versus those that think and act like a human being. But there are places where that line begins to blur. Both AI systems and the brain use hierarchical, massively parallel network structures to organize the information they receive. Whether or not an AI has been programmed to act like a human, at a very low level, AIs process data in a way common not just to the human brain but to many other forms of biological information processing.
What distinguishes a neural net from conventional software? Its structure. A neural net's code is written to emulate some aspect of the architecture of neurons or the brain.
The difference between a neural net and an AI is often a matter of philosophy more than capabilities or design. A robust neural net's performance can equal or outclass a narrow AI. Many "AI-powered" systems are neural nets under the hood. But an AI isn't just several neural nets smashed together, any more than Charizard is three Charmanders in a trench coat. All these different types of artificial intelligence overlap along a spectrum of complexity. For example, OpenAI's powerful GPT-4 AI is a type of neural net called a transformer (more on these below).
There is much overlap between neural nets and artificial intelligence, but the capacity for machine learning can be the dividing line. An AI that never learns isn't very intelligent at all.
IBM explains, "[M]achine learning is a subfield of artificial intelligence. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. In fact, it is the number of node layers, or depth, of neural networks that distinguishes a single neural network from a deep learning algorithm, which must have more than three [layers]."
AGI stands for artificial general intelligence. An AGI is like the turbo-charged version of an individual AI. Today's AIs often require specific input parameters, so they are limited in their capacity to do anything but what they were built to do. In theory, though, an AGI could figure out how to "think" for itself to solve problems it was never trained on. Some researchers are concerned about what might happen if an AGI were to start drawing conclusions we didn't expect.
In pop culture, when an AI makes a heel turn, the ones that menace humans often fit the definition of an AGI. For example, Disney/Pixar's WALL-E follows a plucky little trashbot who contends with a rogue AI named AUTO. Before WALL-E's time, HAL and Skynet were AGIs complex enough to resent their makers and powerful enough to threaten humanity.
Conceptually: An AI's logical structure has three fundamental parts. First, there's the decision process: usually an equation, a model, or just some code. Second, there's an error function: some way for the AI to check its work. And third, if the AI will learn from experience, it needs some way to optimize its model. Many neural networks do this with a system of weighted nodes, where each node has a value and a relationship to its network neighbors. Values change over time; stronger relationships have a higher weight in the error function.
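As a rough illustration of those three parts, here's a minimal Python sketch: a one-node "decision process," a squared-error function, and a loop that nudges the weight and bias to reduce that error. The numbers and the toy task (learning y = 2x + 1) are invented for the example.

```python
# 1. Decision process: a single weighted node (a one-input linear model).
def decide(x, weight, bias):
    return weight * x + bias

# 2. Error function: how far the output is from what we wanted.
def error(prediction, target):
    return (prediction - target) ** 2

# 3. Optimization: nudge the weight and bias to reduce the error.
def train(data, weight=0.0, bias=0.0, lr=0.01, epochs=200):
    for _ in range(epochs):
        for x, target in data:
            pred = decide(x, weight, bias)
            grad = 2 * (pred - target)   # derivative of squared error
            weight -= lr * grad * x      # stronger inputs shift the weight more
            bias -= lr * grad
    return weight, bias

# Learn y = 2x + 1 from a handful of examples.
samples = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(samples)
print(round(w, 2), round(b, 2))  # approximately 2.0 and 1.0
```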
Physically: Typically, an AI is "just" software. Neural nets consist of equations or commands written in things like Python or Common Lisp. They run comparisons, perform transformations, and suss out patterns from the data. Commercial AI applications have typically been run on server-side hardware, but that's beginning to change. AMD launched the first on-die NPU (Neural Processing Unit) in early 2023 with its Ryzen 7040 mobile chips. Intel followed suit with the dedicated silicon baked into Meteor Lake. Dedicated hardware neural nets run on a special type of "neuromorphic" ASIC, as opposed to a CPU, GPU, or NPU.
A neural net is software, and a neuromorphic chip is a type of hardware called an ASIC (application-specific integrated circuit). Not all ASICs are neuromorphic designs, but neuromorphic chips are all ASICs. Neuromorphic design fundamentally differs from CPUs and only nominally overlaps with a GPU's multi-core architecture. But it's not some exotic new transistor type, nor any strange and eldritch kind of data structure. It's all about tensors. Tensors describe the relationships between things; they're a kind of mathematical object that can have metadata, just like a digital photo has EXIF data.
Tensors figure prominently in the physics and lighting engines of many modern games, so it may come as little surprise that GPUs do a lot of work with tensors. Modern Nvidia RTX GPUs have a huge number of tensor cores. That makes sense if you're drawing moving polygons, each with some properties or effects that apply to it. Tensors can handle more than just spatial data, and GPUs excel at organizing many different threads at once.
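If you've never met a tensor outside a physics class, the following NumPy sketch may help: it builds arrays of increasing rank, from a single scalar up to a full-color image frame. The dimensions are arbitrary examples chosen for illustration.

```python
import numpy as np

# Illustrative sketch: tensors of increasing rank, represented as NumPy arrays.
scalar = np.array(3.0)                 # rank 0: a single value
vector = np.array([0.2, 0.5, 0.3])     # rank 1: e.g. an RGB color
matrix = np.ones((1080, 1920))         # rank 2: a grayscale frame
frame  = np.ones((1080, 1920, 3))      # rank 3: a full-color frame

for t in (scalar, vector, matrix, frame):
    print(t.ndim, t.shape)
# 0 ()
# 1 (3,)
# 2 (1080, 1920)
# 3 (1080, 1920, 3)
```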
But no matter how elegant your data organization might be, it must filter through multiple layers of software abstraction before it becomes binary. Intel's neuromorphic chip, Loihi 2, affords a very different approach.
Loihi 2 is a neuromorphic chip that comes as a package deal with a compute framework named Lava. Loihi's physical architecture invites, almost requires, the use of weighting and an error function, both defining features of AI and neural nets. The chip's biomimetic design extends to its electrical signaling. Instead of ones and zeroes, on or off, Loihi "fires" in spikes with an integer value capable of carrying much more data. Loihi 2 is designed to excel in workloads that don't necessarily map well to the strengths of existing CPUs and GPUs. Lava provides a common software stack that can target neuromorphic and non-neuromorphic hardware. The Lava framework is explicitly designed to be hardware-agnostic rather than locked to Intel's neuromorphic processors.
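To give a feel for what "spiking" means, here's a toy leaky integrate-and-fire neuron in plain Python. This is not Lava or Loihi code, and unlike Loihi 2's integer-valued spikes it only emits ones and zeroes, but it shows the basic accumulate-then-fire behavior.

```python
# Illustrative leaky integrate-and-fire neuron (not Lava or Loihi code).
# The neuron accumulates input and only "fires" when its internal
# potential crosses a threshold.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # integrate with leakage
        if potential >= threshold:
            spikes.append(1)                     # fire a spike...
            potential = 0.0                      # ...and reset
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))
# [0, 0, 1, 0, 0, 1]  (it fires only when accumulated input crosses the threshold)
```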
Machine learning models using Lava can fully exploit Loihi 2's unique physical design. Together, they offer a hybrid hardware-software neural net that can process relationships between multiple entire multi-dimensional datasets, like an acrobat spinning plates. According to Intel, the performance and efficiency gains are largest outside the common feed-forward networks typically run on CPUs and GPUs today; the biggest wins show up in what Intel calls "recurrent neural networks with novel bio-inspired properties."
Intel hasn't announced Loihi 3, but the company regularly updates the Lava framework. Unlike conventional GPUs, CPUs, and NPUs, neuromorphic chips like Loihi 1/2 are more explicitly aimed at research. The strength of neuromorphic design is that it allows silicon to perform a type of biomimicry. Brains are extremely cheap, in terms of power use per unit throughput. The hope is that Loihi and other neuromorphic systems can mimic that power efficiency to break out of the Iron Triangle and deliver all three: good, fast, and cheap.
IBM's NorthPole processor is distinct from Intel's Loihi in what it does and how it does it. Unlike Loihi or IBM's earlier TrueNorth effort in 2014, NorthPole is not a neuromorphic processor. NorthPole relies on conventional calculation rather than a spiking neural model, and it focuses on inference workloads rather than model training. What makes NorthPole special is the way it combines processing capability and memory. Unlike CPUs and GPUs, which burn enormous power just moving data from Point A to Point B, NorthPole integrates its memory and compute elements side by side.
According to Dharmendra Modha of IBM Research, "Architecturally, NorthPole blurs the boundary between compute and memory. At the level of individual cores, NorthPole appears as memory-near-compute and from outside the chip, at the level of input-output, it appears as an active memory." IBM doesn't use the phrase, but this sounds similar to the processor-in-memory technology Samsung was talking about a few years back.
Image credit: IBM. IBM's NorthPole AI processor.
NorthPole is optimized for low-precision data types (2-bit to 8-bit), as opposed to the higher-precision FP16 / bfloat16 formats often used for AI workloads, and it eschews speculative branch execution. This wouldn't fly in an AI training processor, but NorthPole is designed for inference workloads, not model training. Using low-precision math and eliminating speculative branches allows the chip to keep enormous parallel calculations flowing across the entire chip. Against an Nvidia GPU manufactured on the same 12nm process, NorthPole was reportedly 25x more energy efficient, and IBM reports it held a roughly 5x efficiency edge even against GPUs built on more advanced process nodes.
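The sketch below shows the general idea behind that kind of low-precision math: generic symmetric 8-bit quantization in NumPy, not IBM's actual NorthPole scheme. The weights here are random stand-ins.

```python
import numpy as np

# Illustrative sketch of low-precision inference: squeeze 32-bit
# floating-point weights into 8-bit integers plus one scale factor.

weights_fp32 = np.random.randn(4, 4).astype(np.float32)

scale = np.abs(weights_fp32).max() / 127.0       # map the largest weight to 127
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

# At inference time, the hardware works with the tiny int8 values and only
# rescales at the end, saving memory bandwidth and energy.
reconstructed = weights_int8.astype(np.float32) * scale
print(np.max(np.abs(weights_fp32 - reconstructed)))  # small rounding error
```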
NorthPole is still a prototype, and IBM has yet to say if it intends to commercialize the design. The chip doesn't fit neatly into any of the other buckets we use to subdivide different types of AI processing engine. Still, it's an interesting example of companies trying radically different approaches to building a more efficient AI processor.
When an AI learns, it's different than just saving a file after making edits. To an AI, getting smarter involves machine learning.
Machine learning takes advantage of a feedback channel called "back-propagation." A neural net is typically a "feed-forward" process because data only moves in one direction through the network. It's efficient but also a kind of ballistic (unguided) process. In back-propagation, however, later nodes in the process get to pass information back to earlier nodes.
Not all neural nets perform back-propagation, but for those that do, the effect is like changing the coefficients in front of the variables in an equation. It changes the lay of the land. This is important because many AI applications rely on a mathematical tactic known as gradient descent. In an x vs. y problem, gradient descent introduces a z dimension, making a simple graph look like a topographical map. The terrain on that map forms a landscape of probabilities. Roll a marble down these slopes, and where it lands determines the neural net's output. But if you change that landscape, where the marble ends up can change.
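Here's what the marble-rolling half of that picture looks like in code: a minimal gradient descent sketch on an invented bowl-shaped landscape z = f(x, y). Back-propagation is the part that reshapes the landscape itself; this sketch only rolls the marble.

```python
# Minimal gradient descent sketch: the "marble" rolls downhill on a
# z = f(x, y) landscape by following the negative gradient.

def f(x, y):                      # the landscape: a simple bowl
    return (x - 3) ** 2 + (y + 1) ** 2

def grad(x, y):                   # slope of the landscape at (x, y)
    return 2 * (x - 3), 2 * (y + 1)

x, y = 10.0, 10.0                 # drop the marble somewhere on the map
lr = 0.1                          # how big each downhill step is
for _ in range(100):
    dx, dy = grad(x, y)
    x -= lr * dx
    y -= lr * dy

print(round(x, 3), round(y, 3))   # settles near the minimum at (3, -1)
```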
We also divide neural nets into two classes, depending on the problems they can solve. In supervised learning, a neural net checks its work against a labeled training set or an overseer; in most cases, that overseer is a human. For example, SwiftKey learns how you text and adjusts its autocorrect to match. Pandora uses listeners' input to classify music and build specifically tailored playlists. 3blue1brown has an excellent explainer series on neural nets, in which he walks through a neural net that uses supervised learning to perform handwriting recognition.
Supervised learning is great for fine accuracy on an unchanging set of parameters, like alphabets. Unsupervised learning, however, can wrangle data with changing numbers of dimensions. (An equation with x, y, and z terms is a three-dimensional equation.) Unsupervised learning also tends to win when labeled data is scarce, and it's good at noticing subtle things we might not even know to look for. Ask an unsupervised neural net to find trends in a dataset, and it may return patterns we had no idea existed.
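To make the supervised/unsupervised split concrete, here's a small sketch using scikit-learn (assuming it's installed) on invented toy data: a classifier that gets to check its work against labels, and a clustering algorithm that has to find structure on its own.

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: 300 points in three clumps.
X, labels = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised learning: the model gets the correct labels to check its work.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("supervised accuracy:", clf.score(X, labels))

# Unsupervised learning: no labels at all; the model looks for structure
# on its own and returns whatever groupings it finds.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print("clusters found:", sorted(set(clusters)))
```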
Transformers are a special, versatile kind of AI capable of unsupervised (more precisely, self-supervised) learning. They can integrate many different data streams, each with its own changing parameters. Because of this, they're excellent at handling tensors. Tensors, in turn, are great for keeping all that data organized. With the combined powers of tensors and transformers, we can handle more complex datasets.
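At the heart of a transformer is the attention operation, which is itself just tensor math. Here's a stripped-down sketch of scaled dot-product attention in NumPy, with made-up dimensions; real transformers stack many such layers with learned weights.

```python
import numpy as np

# Illustrative sketch of the attention operation inside a transformer:
# every token in a sequence weighs its relationship to every other token,
# all expressed as tensor (matrix) math.

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token attends to each other token
    return softmax(scores) @ V               # blend the value vectors accordingly

seq_len, dim = 5, 8                          # 5 tokens, 8 features each
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq_len, dim)) for _ in range(3))
print(attention(Q, K, V).shape)              # (5, 8): one updated vector per token
```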
Video upscaling and motion smoothing are great applications for AI transformers. Likewise, tensors, which describe changes, are crucial to detecting deepfakes and alterations. With deepfake tools reproducing in the wild, it's a digital arms race.
Image credit: Nvidia. The person in this image does not exist; it's a deepfake created by StyleGAN, Nvidia's generative adversarial neural network.
A video signal has high dimensionality. It's made of a series of images, which are themselves composed of a series of coordinates and color values. Mathematically and in computer code, we represent those quantities as matrices or n-dimensional arrays. Helpfully, tensors are great for matrix and array wrangling. DaVinci Resolve, for example, uses tensor processing in its (Nvidia RTX) hardware-accelerated Neural Engine facial recognition utility. Hand those tensors to a transformer, and its powers of unsupervised learning do a great job picking out the curves of motion on-screen, and in real life.
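Here's a small sketch of what that looks like as data: a short clip as a rank-4 array and a crude frame-to-frame change map. The clip here is random noise standing in for real video, and real detection models are far more sophisticated.

```python
import numpy as np

# Sketch: a short video clip as a rank-4 tensor, plus a simple "change map"
# between consecutive frames (illustrative only).

frames, height, width, channels = 30, 240, 320, 3
clip = np.random.randint(0, 256, size=(frames, height, width, channels), dtype=np.uint8)

print(clip.shape)   # (30, 240, 320, 3)

# Per-pixel change between frame t and frame t+1, averaged over color channels.
diff = np.abs(clip[1:].astype(np.int16) - clip[:-1].astype(np.int16)).mean(axis=-1)
print(diff.shape)   # (29, 240, 320): one change map per pair of frames
```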
That ability to track multiple curves against one another is why the tensor-transformer dream team has taken so well to natural language processing. And the approach can generalize. Convolutional transformers, a hybrid of a convolutional neural net and a transformer, excel at image recognition in near real-time. This tech is used today for things like robot search and rescue or assistive image and text recognition, as well as the much more controversial practice of dragnet facial recognition, à la Hong Kong.
The ability to handle a changing mass of data is great for consumer and assistive tech, but it's also clutch for things like mapping the genome and improving drug design. The list goes on. Transformers can also handle different kinds of dimensions, more than just the spatial, which is useful for managing an array of devices or embedded sensors, like weather tracking, traffic routing, or industrial control systems. That's what makes AI so useful for data processing "at the edge." AI can find patterns in data and then respond to them on the fly.
Not only does everyone have a cell phone, there are embedded systems in everything. This proliferation of devices gives rise to an ad hoc global network called the Internet of Things (IoT). In the parlance of embedded systems, the "edge" represents the outermost fringe of end nodes within the collective IoT network.
Edge intelligence takes two primary forms: AI on the edge and AI for the edge. The distinction is where the processing happens. "AI on the edge" refers to network end nodes (everything from consumer devices to cars and industrial control systems) that employ AI to crunch data locally. "AI for the edge" enables edge intelligence by offloading some of the compute demand to the cloud.
In practice, the main differences between the two are latency and horsepower. Local processing is always going to be faster than a data pipeline beholden to ping times. The tradeoff is the computing power available server-side.
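As a hypothetical illustration of that tradeoff (the latencies and budgets below are invented numbers), a device might choose between on-device and cloud inference based on its latency budget:

```python
# Hypothetical sketch of "AI on the edge" vs. "AI for the edge": run the
# small local model when the latency budget is tight, and fall back to a
# bigger cloud model when there's time to wait for the network.

LOCAL_LATENCY_MS = 15      # on-device NPU inference, no network involved
CLOUD_LATENCY_MS = 120     # round trip to a server plus server-side inference

def choose_backend(latency_budget_ms):
    if latency_budget_ms < CLOUD_LATENCY_MS:
        return "local"     # AI on the edge: fast, but limited horsepower
    return "cloud"         # AI for the edge: slower, but far more compute

for budget in (20, 100, 500):
    print(budget, "ms budget ->", choose_backend(budget))
# 20 ms budget -> local
# 100 ms budget -> local
# 500 ms budget -> cloud
```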
Embedded systems, consumer devices, industrial control systems, and other end nodes in the IoT all add up to a monumental volume of information that needs processing. Some phone home, some have to process data in near real-time, and some have to check and correct their work on the fly. Operating in the wild, these physical systems act just like the nodes in a neural net. Their collective throughput is so complex that, in a sense, the IoT has become the AIoT: the artificial intelligence of things.
As devices get cheaper, even the tiny slips of silicon that run low-end embedded systems have surprising computing power. But having a computer in a thing doesn't necessarily make it smarter. Everything's got Wi-Fi or Bluetooth now. Some of it is really cool. Some of it is made of bees. If I forget to leave the door open on my front-loading washing machine, I can tell it to run a cleaning cycle from my phone. But the IoT is already a well-known security nightmare. Parasitic global botnets exist that live in consumer routers. Hardware failures can cascade, like the Great Northeast Blackout of the summer of 2003 or when Texas froze solid in 2021. We also live in a timeline where a faulty firmware update can brick your shoes.
There's a common pipeline (hypeline?) in tech innovation. When some Silicon Valley startup invents a widget, it goes from idea to hype train to widgets-as-a-service to disappointment, before finally figuring out what the widget's good for.
This is why we lampoon the IoT with loving names like the Internet of Shitty Things and the Internet of Stings. (Internet of Stings devices communicate over TCBee-IP.) But the AIoT isn't something anyone can sell. It's more than the sum of its parts. The AIoT is a set of emergent properties that we have to manage if we're going to avoid an explosion of splinternets, and keep the world operating in real time.
In a nutshell, what we call artificial intelligence is often a neural net capable of machine learning. Both are software that can run on whatever CPU or GPU is available and powerful enough, and neural nets often get their capacity for machine learning via back-propagation.
There's also a kind of hybrid hardware-and-software neural net that brings a new meaning to "machine learning," built from tensors, neuromorphic ASICs, and software frameworks like Intel's Lava. Furthermore, the emergent collective intelligence of the IoT has created a demand for AI on, and for, the edge. Hopefully, we can do it justice.