OpenAI revives its robotic research team, plans to build dedicated AI – Interesting Engineering

OpenAI being in the news is not a novelty at all. This time it's making headlines for restarting its robotics research group after three years. The ChatGPT developer confirmed the move in an interview with Forbes.

It has been almost four years since OpenAI disbanded the team that researched ways of using AI to teach robots new tasks.

According to media reports, OpenAI is now on the verge of developing a host of multimodal large language models for robotics use cases. A multimodal model is a neural network capable of processing various types of input, not just text. For instance, it can handle data from a robot's onboard sensors.
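To make that concrete, here is a minimal, hypothetical sketch (in PyTorch) of how a single model can accept both text tokens and a robot's sensor readings: each modality is projected into a shared embedding space and processed by one transformer. It is purely illustrative; the class, dimensions and sensor layout are invented, and this is not OpenAI's architecture.

```python
# Illustrative only: a tiny multimodal policy that fuses text and sensor input.
import torch
import torch.nn as nn

class TinyMultimodalPolicy(nn.Module):
    def __init__(self, vocab_size=1000, sensor_dim=12, d_model=64, n_actions=8):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)   # instruction tokens -> vectors
        self.sensor_proj = nn.Linear(sensor_dim, d_model)     # joint angles, forces, etc. -> vectors
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, n_actions)      # score a small set of discrete actions

    def forward(self, text_tokens, sensor_readings):
        # text_tokens: (batch, words); sensor_readings: (batch, timesteps, sensor_dim)
        fused = torch.cat([self.text_embed(text_tokens),
                           self.sensor_proj(sensor_readings)], dim=1)
        return self.action_head(self.encoder(fused).mean(dim=1))

policy = TinyMultimodalPolicy()
logits = policy(torch.randint(0, 1000, (1, 6)), torch.randn(1, 4, 12))
print(logits.shape)  # torch.Size([1, 8])
```

The point is simply that once every input type is mapped into the same vector space, one sequence model can reason over all of it at once.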

OpenAI had bid goodbye to its original robotics research group. Co-founder Wojciech Zaremba said, "I actually believe quite strongly in the approach that the robotics [team] took in that direction, but from the perspective of AGI [artificial general intelligence], I think that there was actually some components missing. So when we created the robotics [team], we thought that we could go very far with self-generated data and reinforcement learning."

According to a report in Forbes, OpenAI has been hiring again for its robotics team and has been actively on the lookout for a research robotics engineer. It is seeking an individual skilled in "training multimodal robotics models to unlock new capabilities for our partners' robots, researching and developing improvements to our core models, exploring new model architectures, collecting robotics data, and conducting evaluations."

"We're looking for candidates with a strong research background and experience in shipping AI applications," the company stated.

Earlier this year, OpenAI also invested in humanoid developer Figure AI's Series B fundraising round. The investment highlights OpenAI's clear interest in robotics.

Over the past year, OpenAI has invested significantly in the robotics field through its startup fund, pouring millions into companies like Figure AI, 1X Technologies, and Physical Intelligence. These investments underscore OpenAI's keen interest in advancing humanoid robots. In February, OpenAI hinted at a renewed focus on robotics when Figure AI secured additional funding. Shortly after, Figure AI released a video showcasing a robot with basic speech and reasoning skills, powered by OpenAI's model.

Peter Welinder, OpenAI's vice president and a member of the original robotics team, stated, "We've always planned to return to robotics, and we see a path with Figure to explore the potential of humanoid robots powered by highly capable multimodal models."

According to the report, OpenAI doesn't intend to compete directly with other robotics companies. Instead, it aims to develop AI technology that other manufacturers can integrate into their robots. Job listings indicate that new engineers will collaborate with external partners to train advanced AI models. It remains unclear whether OpenAI will venture into creating its own robotics hardware, a challenge it has faced in the past. For now, the focus seems to be on leveraging its AI expertise to enhance robotic functionalities.

Apart from this, Apple has also reportedly been collaborating with OpenAI to incorporate ChatGPT technology into its iOS 18 operating system for iPhones, according to various media outlets.

The integration of ChatGPT, the advanced AI developed by OpenAI under Sam Altman's leadership, is set to revolutionize how Siri comprehends and responds to complex queries. This partnership, anticipated to be officially announced at this year's Worldwide Developers Conference (WWDC), has been in the works for several months and has faced internal challenges and resistance from both companies.


The AI revolution is coming to robots: how will it change them? – Nature.com

For a generation of scientists raised watching Star Wars, there's a disappointing lack of C-3PO-like droids wandering around our cities and homes. Where are the humanoid robots, fuelled with common sense, that can help around the house and workplace?

Rapid advances in artificial intelligence (AI) might be set to fill that hole. "I wouldn't be surprised if we are the last generation for which those sci-fi scenes are not a reality," says Alexander Khazatsky, a machine-learning and robotics researcher at Stanford University in California.

From OpenAI to Google DeepMind, almost every big technology firm with AI expertise is now working on bringing the versatile learning algorithms that power chatbots, known as foundation models, to robotics. The idea is to imbue robots with common-sense knowledge, letting them tackle a wide range of tasks. Many researchers think that robots could become really good, really fast. "We believe we are at the point of a step change in robotics," says Gerard Andrews, a marketing manager focused on robotics at technology company Nvidia in Santa Clara, California, which in March launched a general-purpose AI model designed for humanoid robots.

At the same time, robots could help to improve AI. Many researchers hope that bringing an embodied experience to AI training could take the field closer to the dream of artificial general intelligence (AGI): AI that has human-like cognitive abilities across any task. "The last step to true intelligence has to be physical intelligence," says Akshara Rai, an AI researcher at Meta in Menlo Park, California.

But although many researchers are excited about the latest injection of AI into robotics, they also caution that some of the more impressive demonstrations are just that: demonstrations, often made by companies eager to generate buzz. "It can be a long road from demonstration to deployment," says Rodney Brooks, a roboticist at the Massachusetts Institute of Technology in Cambridge, whose company iRobot invented the Roomba autonomous vacuum cleaner.

There are plenty of hurdles on this road, including scraping together enough of the right data for robots to learn from, dealing with temperamental hardware and tackling concerns about safety. "Foundation models for robotics should be explored," says Harold Soh, a specialist in human-robot interactions at the National University of Singapore. But he is sceptical, he says, that this strategy will lead to the revolution in robotics that some researchers predict.

The term "robot" covers a wide range of automated devices, from the robotic arms widely used in manufacturing, to self-driving cars and drones used in warfare and rescue missions. Most incorporate some sort of AI to recognize objects, for example. But they are also programmed to carry out specific tasks, work in particular environments or rely on some level of human supervision, says Joyce Sidopoulos, co-founder of MassRobotics, an innovation hub for robotics companies in Boston, Massachusetts. Even Atlas, a robot made by Boston Dynamics (a robotics company in Waltham, Massachusetts) that famously showed off its parkour skills in 2018, works by carefully mapping its environment and choosing the best actions to execute from a library of built-in templates.

For most AI researchers branching into robotics, the goal is to create something much more autonomous and adaptable across a wider range of circumstances. This might start with robot arms that can pick and place any factory product, but evolve into humanoid robots that provide company and support for older people, for example. "There are so many applications," says Sidopoulos.

The human form is complicated and not always optimized for specific physical tasks, but it has the huge benefit of being perfectly suited to the world that people have built. A human-shaped robot would be able to physically interact with the world in much the same way that a person does.

However, controlling any robot, let alone a human-shaped one, is incredibly hard. Apparently simple tasks, such as opening a door, are actually hugely complex, requiring a robot to understand how different door mechanisms work, how much force to apply to a handle and how to maintain balance while doing so. The real world is extremely varied and constantly changing.

The approach now gathering steam is to control a robot using the same type of AI foundation models that power image generators and chatbots such as ChatGPT. These models use brain-inspired neural networks to learn from huge swathes of generic data. They build associations between elements of their training data and, when asked for an output, tap these connections to generate appropriate words or images, often with uncannily good results.

Likewise, a robot foundation model is trained on text and images from the Internet, providing it with information about the nature of various objects and their contexts. It also learns from examples of robotic operations. It can be trained, for example, on videos of robot trial and error, or videos of robots that are being remotely operated by humans, alongside the instructions that pair with those actions. A trained robot foundation model can then observe a scenario and use its learnt associations to predict what action will lead to the best outcome.
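As a rough, hedged illustration of that training recipe, the sketch below does behaviour cloning on (camera frame, instruction, action) triples of the kind collected from teleoperated demonstrations. The dataset here is random placeholder data and the architecture is deliberately tiny; it shows the shape of the pipeline, not any lab's actual model.

```python
# Illustrative behaviour cloning: imitate the action a human teleoperator took.
import torch
import torch.nn as nn

class DemoDataset(torch.utils.data.Dataset):
    """Stand-in for a corpus of teleoperation episodes (random data here)."""
    def __init__(self, n=256):
        self.frames = torch.randn(n, 3, 64, 64)            # robot-eye-view images
        self.instructions = torch.randint(0, 500, (n, 8))  # tokenised commands
        self.actions = torch.randn(n, 7)                   # e.g. 7-DoF end-effector deltas

    def __len__(self):
        return len(self.actions)

    def __getitem__(self, i):
        return self.frames[i], self.instructions[i], self.actions[i]

class BCPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.text = nn.EmbeddingBag(500, 16)
        self.head = nn.Linear(32, 7)

    def forward(self, frame, instruction):
        return self.head(torch.cat([self.vision(frame), self.text(instruction)], dim=-1))

policy = BCPolicy()
loader = torch.utils.data.DataLoader(DemoDataset(), batch_size=32)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for frame, instruction, action in loader:
    loss = nn.functional.mse_loss(policy(frame, instruction), action)
    opt.zero_grad(); loss.backward(); opt.step()  # nudge the policy towards the demonstrated action
```

At deployment, the same network is given a live camera frame and a new instruction and asked to predict the next action, which is the "observe a scenario and predict the best action" step described above.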

Google DeepMind has built one of the most advanced robotic foundation models, known as Robotic Transformer 2 (RT-2), which can operate a mobile robot arm built by its sister company Everyday Robots in Mountain View, California. Like other robotic foundation models, it was trained on both the Internet and videos of robotic operation. Thanks to the online training, RT-2 can follow instructions even when those commands go beyond what the robot has seen another robot do before. For example, it can move a drink can onto a picture of Taylor Swift when asked to do so, even though Swift's image was not in any of the 130,000 demonstrations that RT-2 had been trained on.

In other words, knowledge gleaned from Internet trawling (such as what the singer Taylor Swift looks like) is being carried over into the robot's actions. "A lot of Internet concepts just transfer," says Keerthana Gopalakrishnan, an AI and robotics researcher at Google DeepMind in San Francisco, California. This radically reduces the amount of physical data that a robot needs to have absorbed to cope in different situations, she says.

But to fully understand the basics of movements and their consequences, robots still need to learn from lots of physical data. And therein lies a problem.

Although chatbots are trained on billions of words from the Internet, there is no equivalently large data set for robotic activity. "This lack of data has left robotics in the dust," says Khazatsky.

Pooling data is one way around this. Khazatsky and his colleagues have created DROID, an open-source data set that brings together around 350 hours of video data from one type of robot arm (the Franka Panda 7DoF robot arm, built by Franka Robotics in Munich, Germany) as it was being remotely operated by people in 18 laboratories around the world. The robot-eye-view camera has recorded visual data in hundreds of environments, including bathrooms, laundry rooms, bedrooms and kitchens. This diversity helps robots to perform well on tasks with previously unencountered elements, says Khazatsky.

When prompted to "pick up extinct animal", Google's RT-2 model selects the dinosaur figurine from a crowded table. Credit: Google DeepMind

Gopalakrishnan is part of a collaboration of more than a dozen academic labs that is also bringing together robotic data, in its case from a diversity of robot forms, from single arms to quadrupeds. The collaborators' theory is that learning about the physical world in one robot body should help an AI to operate another, in the same way that learning in English can help a language model to generate Chinese, because the underlying concepts about the world that the words describe are the same. This seems to work. The collaboration's resulting foundation model, called RT-X, which was released in October 2023, performed better on real-world tasks than did models that the researchers trained on one robot architecture.

Many researchers say that having this kind of diversity is essential. "We believe that a true robotics foundation model should not be tied to only one embodiment," says Peter Chen, an AI researcher and co-founder of Covariant, an AI firm in Emeryville, California.

Covariant is also working hard on scaling up robot data. The company, which was set up in part by former OpenAI researchers, began collecting data in 2018 from 30 variations of robot arms in warehouses across the world, all of which run on Covariant software. Covariant's Robotics Foundation Model 1 (RFM-1) goes beyond collecting video data to encompass sensor readings, such as how much weight was lifted or force applied. This kind of data should help a robot to perform tasks such as manipulating a squishy object, says Gopalakrishnan; in theory, it could help a robot to know, for example, how not to bruise a banana.

Covariant has built up a proprietary database that includes hundreds of billions of tokens, units of real-world robotic information, which Chen says is roughly on a par with the scale of data that trained GPT-3, the 2020 version of OpenAI's large language model. "We have way more real-world data than other people, because that's what we have been focused on," Chen says. RFM-1 is poised to roll out soon, says Chen, and should allow operators of robots running Covariant's software to type or speak general instructions, such as "pick up apples from the bin".
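Covariant has not published how RFM-1 represents its sensor streams, but a common, simple way to fold continuous signals such as force and lifted weight into a token sequence is to quantise each signal into discrete bins, so it can sit alongside text and image tokens. The sketch below is a hypothetical illustration of that idea only; the ranges, bin counts and offsets are invented.

```python
# Illustrative only: turning continuous robot signals into discrete "tokens".
import numpy as np

def tokenize_signal(values, low, high, n_bins=256, offset=0):
    """Quantise a 1-D signal into integer tokens in [offset, offset + n_bins)."""
    clipped = np.clip(np.asarray(values, dtype=float), low, high)
    bins = ((clipped - low) / (high - low) * (n_bins - 1)).astype(int)
    return (bins + offset).tolist()

force_newtons = [0.0, 2.3, 5.1, 4.8, 0.2]       # gripper force over one pick
weight_kg     = [0.00, 0.00, 0.41, 0.41, 0.00]  # payload weight over the same pick

sequence = (tokenize_signal(force_newtons, 0.0, 20.0, offset=0) +
            tokenize_signal(weight_kg, 0.0, 5.0, offset=256))
print(sequence)  # one integer stream, ready to interleave with text and image tokens
```

Once everything is a token, the same transformer machinery used for language can be trained on the robot's own experience.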

Another way to access large databases of movement is to focus on a humanoid robot form, so that an AI can learn by watching videos of people, of which there are billions online. Nvidia's Project GR00T foundation model, for example, is ingesting videos of people performing tasks, says Andrews. Although copying humans has huge potential for boosting robot skills, doing so well is hard, says Gopalakrishnan. For example, robot videos generally come with data about context and commands; the same isn't true for human videos, she says.

A final and promising way to find limitless supplies of physical data, researchers say, is through simulation. Many roboticists are working on building 3D virtual-reality environments, the physics of which mimic the real world, and then wiring those up to a robotic brain for training. Simulators can churn out huge quantities of data and allow humans and robots to interact virtually, without risk, in rare or dangerous situations, all without wearing out the mechanics. "If you had to get a farm of robotic hands and exercise them until they achieve [a high] level of dexterity, you will blow the motors," says Nvidia's Andrews.

But making a good simulator is a difficult task. "Simulators have good physics, but not perfect physics, and making diverse simulated environments is almost as hard as just collecting diverse data," says Khazatsky.

Meta and Nvidia are both betting big on simulation to scale up robot data, and have built sophisticated simulated worlds: Habitat from Meta and Isaac Sim from Nvidia. In them, robots gain the equivalent of years of experience in a few hours, and, in trials, they then successfully apply what they have learnt to situations they have never encountered in the real world. "Simulation is an extremely powerful but underrated tool in robotics, and I am excited to see it gaining momentum," says Rai.
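Even a toy example shows why simulation scales: the throwaway "simulator" below resets a one-dimensional reaching task ten thousand times with randomised friction, something no physical arm could survive, to search for a controller gain that works across conditions. It is a deliberately trivial stand-in; real platforms such as Isaac Sim and Habitat do the equivalent with full 3D physics and photorealistic scenes.

```python
# Toy illustration: cheap, randomised trials in simulation instead of on hardware.
import random

def simulate_episode(policy_gain, friction):
    """One simulated attempt to move a gripper to a random target along one axis."""
    position, target = 0.0, random.uniform(-1.0, 1.0)
    for _ in range(50):                              # 50 control steps per episode
        velocity = policy_gain * (target - position)
        position += velocity * (1.0 - friction)
    return abs(target - position)                    # final error; lower is better

best_gain, best_error = None, float("inf")
for _ in range(10_000):                              # trivial in simulation, ruinous on motors
    gain = random.uniform(0.01, 1.0)
    # Domain randomisation: vary friction so the chosen gain transfers more robustly.
    errors = [simulate_episode(gain, random.uniform(0.0, 0.4)) for _ in range(20)]
    mean_error = sum(errors) / len(errors)
    if mean_error < best_error:
        best_gain, best_error = gain, mean_error

print(f"best gain {best_gain:.3f}, mean error {best_error:.4f}")
```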

Many researchers are optimistic that foundation models will help to create general-purpose robots that can replace human labour. In February, Figure, a robotics company in Sunnyvale, California, raised US$675 million in investment for its plan to use language and vision models developed by OpenAI in its general-purpose humanoid robot. A demonstration video shows a robot giving a person an apple in response to a general request for something to eat. The video on X (the platform formerly known as Twitter) has racked up 4.8 million views.

Exactly how this robot's foundation model has been trained, along with any details about its performance across various settings, is unclear (neither OpenAI nor Figure responded to Nature's requests for an interview). Such demos should be taken with a pinch of salt, says Soh. The environment in the video is conspicuously sparse, he says. Adding a more complex environment could potentially confuse the robot, in the same way that such environments have fooled self-driving cars. "Roboticists are very sceptical of robot videos for good reason, because we make them and we know that out of 100 shots, there's usually only one that works," Soh says.

As the AI research community forges ahead with robotic brains, many of those who actually build robots caution that the hardware also presents a challenge: robots are complicated and break a lot. "Hardware has been advancing," Chen says, "but a lot of people looking at the promise of foundation models just don't know the other side of how difficult it is to deploy these types of robots."

Another issue is how far robot foundation models can get using the visual data that make up the vast majority of their physical training. Robots might need reams of other kinds of sensory data, for example from the sense of touch or proprioception, a sense of where their body is in space, says Soh. Those data sets don't yet exist. "There's all this stuff that's missing, which I think is required for things like a humanoid to work efficiently in the world," he says.

Releasing foundation models into the real world comes with another major challenge: safety. In the two years since they started proliferating, large language models have been shown to come up with false and biased information. They can also be tricked into doing things that they are programmed not to do, such as telling users how to make a bomb. Giving AI systems a body brings these types of mistake and threat to the physical world. "If a robot is wrong, it can actually physically harm you or break things or cause damage," says Gopalakrishnan.

Valuable work going on in AI safety will transfer to the world of robotics, says Gopalakrishnan. In addition, her team has imbued some robot AI models with rules that layer on top of their learning, such as not even to attempt tasks that involve interacting with people, animals or other living organisms. "Until we have confidence in robots, we will need a lot of human supervision," she says.

Despite the risks, there is a lot of momentum in using AI to improve robots and using robots to improve AI. Gopalakrishnan thinks that hooking up AI brains to physical robots will improve the foundation models, for example giving them better spatial reasoning. Meta, says Rai, is among those pursuing the hypothesis that true intelligence can only emerge when an agent can interact with its world. That real-world interaction, some say, is what could take AI beyond learning patterns and making predictions, to truly understanding and reasoning about the world.

What the future holds depends on who you ask. Brooks says that robots will continue to improve and find new applications, but their eventual use is nowhere near as sexy as humanoids replacing human labour. But others think that developing a functional and safe humanoid robot that is capable of cooking dinner, running errands and folding the laundry is possible, but could just cost hundreds of millions of dollars. "I'm sure someone will do it," says Khazatsky. "It'll just be a lot of money, and time."


Can AI ever be smarter than humans? | Context – Context

What's the context?

"Artificial general intelligence" (AGI) - the benefits, the risks to security and jobs, and is it even possible?

LONDON - When researcher Jan Leike quit his job at OpenAI last month, he warned the tech firm's "safety culture and processes (had) taken a backseat" while it trained its next artificial intelligence model.

He voiced particular concern about the company's goal to develop "artificial general intelligence", a supercharged form of machine learning that it says would be "smarter than humans".

Some industry experts say AGI may be achievable within 20 years, but others say it will take many decades, if it happens at all.

But what is AGI, how should it be regulated and what effect will it have on people and jobs?

OpenAI defines AGI as a system "generally smarter than humans". Scientists disagree on what this exactly means.

"Narrow" AI includes ChatGPT, which can perform a specific, singular task. This works by pattern matching, akin to putting together a puzzle without understanding what the pieces represent, and without the ability to count or complete logic puzzles.

"The running joke, when I used to work at Deepmind (Google's artificial intelligence research laboratory), was AGI is whatever we don't have yet," Andrew Strait, associate director of the Ada Lovelace Institute, told Context.

IBM has suggested that artificial intelligence would need at least seven critical skills to reach AGI, including visual and auditory perception, making decisions with incomplete information, and creating new ideas and concepts.

Narrow AI is already used in many industries, but has been responsible for many issues, like lawyers citing "hallucinated" - made up - legal precedents and recruiters using biased services to check potential employees.

AGI still lacks definition, so experts find it difficult to describe the risks that it might pose.

It is possible that AGI will be better at filtering out bias and incorrect information, but it is also possible new problems will arise.

One "very serious risk", Strait said, was an over-reliance on the new systems, "particularly as they start to mediate more sensitive human-to-human relationships".

AI systems also need huge amounts of data to train on and this could result in a massive expansion of surveillance infrastructure. Then there are security risks.

"If you collect (data), it's more likely to get leaked," Strait said.

There are also concerns over whether AI will replace human jobs.

Carl Frey, a professor of AI and work at the Oxford Internet Institute, said an AI apocalypse was unlikely and that "humans in the loop" would still be needed.

But there may be downward pressure on wages and middle-income jobs, especially with developments in advanced robotics.

"I don't see a lot of focus on using AI to develop new products and industries in the ways that it's often being portrayed. All applications boil down to some form of automation," Frey told Context.

As AI develops, governments must ensure there is competition in the market, as there are significant barriers to entry for new companies, Frey said.

There also needs to be a different approach to what the economy rewards, he added. It is currently in the interest of companies to focus on automation and cut labour costs, rather than create jobs.

"One of my concerns is that the more we emphasise the downsides, the more we emphasise the risks with AI, the more likely we are to get regulation, which means that we restrict entry and that we solidify the market position of incumbents," he said.

Last month, the U.S. Department of Homeland Security announced a board comprised of the CEOs of OpenAI, Microsoft, Google, and Nvidia to advise the government on AI in critical infrastructure.

"If your goal is to minimise the risks of AI, you don't want open source. You want a few incumbents that you can easily control, but you're going to end up with a tech monopoly," Frey said.

AGI does not have a precise timeline. Jensen Huang, the chief executive of Nvidia, predicts that today's models could advance to the point of AGI within five years.

Huang's definition of AGI would be a program that can beat humans on logic quizzes and exams by 8%.

OpenAI has indicated that a breakthrough in AI is coming soon with Q* (pronounced Q-Star), a secretive project reported in November last year.

Microsoft researchers have said that GPT-4, one of OpenAI's generative AI models, has "sparks of AGI". However, it does not "(come) close to being able to do anything that a human can do", nor does it have "inner motivation and goals" - another key aspect in some definitions of AGI.

But Microsoft President Brad Smith has rejected claims of a breakthrough.

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said in November.

Frey suggested there would need to be significant innovation to get to AGI, due to both limitations in hardware and the amount of training data available.

"There are real question marks around whether we can develop AI on the current path. I don't think we can just scale up existing models (with) more compute, more data, and get to AGI."


Responsible AI needs further collaboration – Chinadaily.com.cn – China Daily

Wang Lei (standing), chairman of Wenge Tech Corporation, talks to participants at the World Summit on the Information Society. For China Daily

Further efforts are needed to build responsible artificial intelligence by promoting technological openness, fostering collaboration and establishing consensus-driven governance to fully unleash AI's potential to boost productivity across various industries, an executive said.

The remarks were made by Wang Lei, chairman of Wenge Tech Corporation, a Beijing-based AI company recognized by the Ministry of Industry and Information Technology as a "little giant" firm, a designation for novel and elite small and medium-sized enterprises that specialize in niche markets. Wang delivered his speech at the recently concluded World Summit on the Information Society.

"AI has made extraordinary progress in recent years. Innovations like ChatGPT and hundreds of other large language models (LLMs) have captured global attention, profoundly transforming how we work and live," said Wang.

"Now we are entering a new era of Artificial General Intelligence (AGI). Enterprise AI has proven to create significant value for customers in fields such as government operations, ESGs, supply chain management, and defense intelligence, excelling in analysis, forecasting, decision-making, optimization, and risk monitoring," he added.

A recent report from venture capital firm a16z and market research firm IDC reveals that global enterprise investments in AI have surged from an average of $7 million to $18 million, a 2.5-fold increase. In China, the number of LLMs grew from 16 to 318 last year, with over 80 percent focusing on industry-specific applications, Wang noted.

He predicted a promising future for Enterprise AI, with decision intelligence being the ultimate goal. "Complex problems will be broken down into smaller tasks, each resolved by different AI models. AI agents and multi-agent collaboration frameworks will optimize decision-making strategies and action planning, integrating AI into workflows, data streams, and decision-making processes within industry-specific scenarios."
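As a hedged sketch of the multi-agent pattern Wang describes, the example below decomposes one business request into smaller tasks, routes each to a specialised "agent" (here just ordinary functions), and merges the results into a single recommendation. The agent names, routing rules and thresholds are invented for illustration and do not describe Wenge's product.

```python
# Illustrative multi-agent decomposition for an enterprise decision workflow.
from dataclasses import dataclass

@dataclass
class Task:
    kind: str       # e.g. "forecast" or "risk"
    payload: dict

def forecast_agent(task: Task) -> dict:
    return {"forecast": task.payload["history"][-1] * 1.05}   # naive trend estimate

def risk_agent(task: Task) -> dict:
    return {"risk_flag": task.payload.get("exposure", 0.0) > 0.5}

AGENTS = {"forecast": forecast_agent, "risk": risk_agent}

def decompose(request: dict) -> list:
    """Split one business question into per-agent tasks."""
    return [Task("forecast", {"history": request["sales_history"]}),
            Task("risk", {"exposure": request["supplier_exposure"]})]

def decide(request: dict) -> dict:
    results = {}
    for task in decompose(request):
        results.update(AGENTS[task.kind](task))     # each specialised agent handles its own task
    results["recommendation"] = ("diversify suppliers" if results["risk_flag"]
                                 else "maintain current plan")
    return results

print(decide({"sales_history": [100, 110, 120], "supplier_exposure": 0.7}))
```

In a production system the individual agents would be backed by trained models rather than hard-coded rules, but the decompose-route-merge structure is the same.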

Wang proposed a three-step methodology for successful Enterprise AI transformation: data engineering, model engineering, and domain engineering.

"To build responsible AI, we must address several challenges head-on," he emphasized. "Promoting technological openness can reduce regional and industrial imbalances, fostering collaboration can mitigate unfair usage restrictions, and establishing consensus-driven governance can significantly enhance AI safety."


OpenAI says it’s charting a "path to AGI" with its next frontier AI model – ITPro

OpenAI has revealed that it recently started work on training its next frontier large language model (LLM).

The first version of OpenAI's ChatGPT debuted back in November 2022 and became an unexpected breakthrough hit that launched generative AI into the public consciousness.

Since then, there have been a number of updates to the underlying model. The first version of ChatGPT was built on GPT-3.5, which finished training in early 2022, while GPT-4 arrived in March 2023. The most recent, GPT-4o, arrived in May this year.

Now OpenAI is working on a new LLM and said it anticipates the system will "bring us to the next level of capabilities on our path to AGI [artificial general intelligence]".

AGI is a hotly contested concept whereby an AI would, like humans, be good at adapting to many different tasks, including ones it has never been trained on, rather than being designed for one particular use.

AI researchers are split on whether AGI could ever exist or whether the search for it may even be based on a misunderstanding of how intelligence works.

OpenAI provided no details of what the next model might do, but as its LLMs have evolved, the capabilities of the underlying models have expanded.


While GPT-3 could only deal with text, GPT-4 is able to accept images as well, while GPT-4o has been optimized for voice communication. Context windows have also increased markedly with each iteration, although the size of the models and other technical details remain secret.

Sam Altman, CEO at OpenAI, has stated that GPT-4 cost more than $100 million to train, per Wired, and the model is rumored to have more than one trillion parameters. This would make it one of the biggest, if not the biggest, LLMs currently in existence.

That doesn't necessarily mean the next model will be even larger; Altman has previously suggested the race for ever-bigger models may be coming to an end.

Smaller models working together might be a more useful way of using generative AI, he has said.

And even if OpenAI has started training its next model, don't expect to see its impact very soon. Training can take many months, and that is just the first step: it took six months of testing after training finished before OpenAI released GPT-4.

The company also said it will create a new Safety and Security Committee led by OpenAI directors Bret Taylor, Adam D'Angelo, Nicole Seligman, and Altman. This committee will be responsible for making recommendations to the board on critical safety and security decisions for OpenAI projects and operations.

One of its first tasks will be to evaluate and develop OpenAI's processes and safeguards over the next 90 days. After that, the committee will share its recommendations with the board.

Some may raise eyebrows at the safety committee being made up of members of OpenAI's existing board.

Dr Ilia Kolochenko, CEO at ImmuniWeb and adjunct professor of cyber security at Capital Technology University, questioned whether the move will actually deliver positive outcomes as far as AI safety is concerned.

"Being safe does not necessarily imply being accurate, reliable, fair, transparent, explainable and non-discriminative: the absolutely crucial characteristics of GenAI solutions," Kolochenko said. "In view of the past turbulence at OpenAI, I am not sure that the new committee will make a radical improvement."

The launch of the safety committee comes amid greater calls for more rigorous regulation and oversight of LLM development. Most recently, a former OpenAI board member argued that self-governance isn't the right approach for AI firms and that a strong regulatory framework is needed.

OpenAI has made public efforts to calm AI safety fears in recent months. It was among a host of major industry players to sign up to a safe development pledge at the Seoul AI Summit that could see them pull the plug on their own models if they cannot be built or deployed safely.

But these commitments are voluntary and come with plenty of caveats, leading some experts to call for stronger legislation and requirements for tougher testing of LLMs.

Because of the potentially large risks associated with the technology, critics argue that AI companies should be subject to a regulatory framework similar to that applied to pharmaceutical companies, in which firms have to meet standards set by regulators who make the final decision on if and when a product can be released.


Why AI Won’t Take Over The World Anytime Soon – Bernard Marr

In an era where artificial intelligence features prominently in both our daily lives and our collective imagination, it's common to hear concerns about these systems gaining too much power or even becoming autonomous rulers of our future. Yet, a closer look at the current state of AI technology reveals that these fears, while popular in science fiction, are far from being realized in the real world. Here's why we're not on the brink of an AI takeover.

The majority of AI systems we encounter daily are examples of "narrow AI." These systems are masters of specialization, adept at tasks such as recommending your next movie on Netflix, optimizing your route to avoid traffic jams or even more complex feats like writing essays or generating images. Despite these capabilities, they operate under strict limitations, designed to excel in a particular arena but incapable of stepping beyond those boundaries.

This is true even of the generative AI tools that are dazzling us with their ability to create content across multiple modalities. They can draft essays, recognize elements in photographs, and even compose music. However, at their core, these advanced AIs are still just making mathematical predictions based on vast datasets; they do not truly "understand" the content they generate or the world around them.

Narrow AI operates within a predefined framework of variables and outcomes. It cannot think for itself, learn beyond what it has been programmed to do, or develop any form of intention. Thus, despite the seeming intelligence of these systems, their capabilities remain tightly confined. If you fear your GPS might one day lead you on a rogue mission to conquer the world, you can rest easy. Your navigation system is not plotting global domination; it is simply calculating the fastest route to your destination, oblivious to the broader implications of its computations.

The concept of artificial general intelligence, an AI capable of understanding, learning and applying knowledge across a broad spectrum of tasks just like a human, remains a distant goal. Today's most sophisticated AIs struggle with tasks that a human child performs intuitively: recognizing objects in a messy room or grasping the subtleties of a conversation.

Transitioning from narrow AI to AGI isn't merely a matter of incremental improvements but requires foundational breakthroughs in how AI learns and interprets the world. Researchers are still deciphering the basic principles of cognition and machine learning, and the challenge of developing a machine that genuinely understands context or displays common sense is still a significant scientific hurdle.

Another factor is that current AI systems have an insatiable appetite for data, requiring vast amounts to learn and function effectively. This dependency on large datasets is one of the primary bottlenecks in AI development. Unlike humans, who can learn from a few examples or even from a single experience, AI systems need thousands, or even millions, of data points to master even simple tasks. This difference highlights a fundamental gap in how humans and machines process information.

The data needs of AI are not just extensive but also specific, and in many domains, such high-quality, large-scale datasets simply do not exist. For instance, in specialized medical fields or in areas involving rare events, the requisite data to train AI effectively can be scarce or non-existent, limiting the applicability of AI in these fields.

All of this means that the notion of AI systems spontaneously evolving to outsmart humans is more than just unlikely; with today's technology, it is simply not plausible.

While AI continues to evolve and integrate deeper into our lives and industries, the infrastructure around its development is simultaneously maturing. This dual progression helps ensure that oversight keeps pace as AI capabilities grow. As AI technology progresses, so does the imperative for dynamic regulatory frameworks. The tech community is increasingly proficient at implementing safety and ethical guidelines. However, these measures must evolve in lockstep with AI's rapid developments to ensure robust, safe, and controlled operations.

By proactively adapting regulations, we can effectively anticipate and mitigate potential risks and unintended consequences, securing AI's role as a powerful tool for positive advancement rather than a threat. This continued focus on safe and ethical AI development is crucial for harnessing its potential while avoiding the pitfalls depicted in dystopian narratives. AI is here to assist and augment human capabilities, not to replace them. So, for now, the world remains very much in human hands.


OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics – TechRadar

OpenAI, the tech company behind ChatGPT, has announced that it's formed a Safety and Security Committee that's intended to make the firm's approach to AI more responsible and consistent in terms of security.

It's no secret that OpenAI and CEO Sam Altman - who will be on the committee - want to be the first to reach AGI (artificial general intelligence), which is broadly understood to mean artificial intelligence that resembles human intelligence and can teach itself. Having recently debuted GPT-4o to the public, OpenAI is already training the next-generation GPT model, which it expects to be one step closer to AGI.

GPT-4o debuted to the public on May 13 as a next-level multimodal generative AI model (one capable of processing multiple modes of input), able to accept and respond with audio, text, and images. It was met with a generally positive reception, but more discussion has since arisen regarding its actual capabilities, implications, and the ethics around technologies like it.

Just over a week ago, OpenAI confirmed to Wired that its previous team responsible for overseeing the safety of its AI models had been disbanded and reabsorbed into other existing teams. This followed the notable departures of key company figures like OpenAI co-founder and chief scientist Ilya Sutskever, and Jan Leike, co-lead of the AI safety "superalignment" team. Their departure was reportedly related to their concerns that OpenAI, and Altman in particular, was not doing enough to develop its technologies responsibly and was forgoing due diligence.

This has seemingly given OpenAI a lot to reflect on, and it's formed the oversight committee in response. In the announcement post about the committee, OpenAI also states that it welcomes "a robust debate at this important moment". The first job of the committee will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days, and then share recommendations with the company's board.

The recommendations that are subsequently adopted will be shared publicly "in a manner that is consistent with safety and security".

The committee will be made up of chairman Bret Taylor, Quora CEO Adam D'Angelo, and Nicole Seligman, a former executive of Sony Entertainment, alongside six OpenAI employees, including Sam Altman, as mentioned, and John Schulman, a researcher and co-founder of OpenAI. According to Bloomberg, OpenAI stated that it will also consult external experts as part of this process.


I'll reserve my judgment for when OpenAI's adopted recommendations are published and I can see how they're implemented, but intuitively, I don't have the greatest confidence that OpenAI (or any major tech firm) is prioritizing safety and ethics as much as it is trying to win the AI race.

That's a shame, and it's unfortunate that, generally speaking, those who strive to be the best no matter what are often slow to consider the costs and effects of their actions, and how they might impact others in a very real way - even when large numbers of people could be affected.

I'll be happy to be proven wrong, and I hope I am. In an ideal world, all tech companies, whether they're in the AI race or not, should prioritize the ethics and safety of what they're doing at the same level that they strive for innovation. So far in the realm of AI, that does not appear to be the case from where I'm standing, and unless there are real consequences, I don't see companies like OpenAI being swayed much to change their overall ethos or behavior.


What is artificial general intelligence, and is it a useful concept? – New Scientist

If you take even a passing interest in artificial intelligence, you will inevitably have come across the notion of artificial general intelligence. AGI, as it is often known, has ascended to buzzword status over the past few years as AI has exploded into the public consciousness on the back of the success of large language models (LLMs), a form of AI that powers chatbots such as ChatGPT.

That is largely because AGI has become a lodestar for the companies at the vanguard of this type of technology. ChatGPT creator OpenAI, for example, states that its mission is "to ensure that artificial general intelligence benefits all of humanity". Governments, too, have become obsessed with the opportunities AGI might present, as well as possible existential threats, while the media (including this magazine, naturally) report on claims that we have already seen "sparks of AGI" in LLM systems.

Despite all this, it isn't always clear what AGI really means. Indeed, that is the subject of heated debate in the AI community, with some insisting it is a useful goal and others that it is a meaningless figment that betrays a misunderstanding of the nature of intelligence and our prospects for replicating it in machines. "It's not really a scientific concept," says Melanie Mitchell at the Santa Fe Institute in New Mexico.

Artificial human-like intelligence and superintelligent AI have been staples of science fiction for centuries. But the term AGI took off around 20 years ago when it was used by the computer scientist Ben Goertzel and Shane Legg, cofounder of


22 jobs artificial general intelligence (AGI) may replace and 10 jobs it could create – Livescience.com

The artificial intelligence (AI) revolution is here, and it's already changing our lives in a wide variety of ways. From chatbots to sat-nav, AI has revolutionized the technological space, but in doing so it may be set to take over a wide variety of jobs, particularly those involving labor-intensive manual tasks.

But it's not all bad news: as with most new technologies, the hypothetical advent of artificial general intelligence (AGI), where machines are smarter than humans and can apply what they learn across multiple disciplines, could also lead to new roles. So what might the job market of the near future look like, and could your job be at risk?

One of the most mind-numbing and tedious jobs around today, data entry will surely be one of the first roles supplanted by AI. Instead of a human laboring over endless data sets and fiddly forms for hours on end, AI systems will be able to input and manage large amounts of data quickly and seamlessly, hopefully freeing up human workers for much more productive tasks.

You might already have endured robotic calls asking if you have been the victim of an accident that wasn't your fault, or whether you're keen to upgrade your long-distance calling plan, but this could be just a taste of things to come. AI services could easily take over the work of a whole call center, automatically dialling hundreds, if not thousands, of unsuspecting victims to spread the word, whether you like it or not.

On the friendlier side, AI customer service agents are already a common sight on the websites of many major companies. Often in the form of chatbots, these agents offer a first line of support, before deferring to a human where needed. In the not too distant future, though, expect the AI to take over completely, walking customers through their complaints or queries from start to finish.

Restaurant bookings can be a hassle, as overworked staff or maître d's try to juggle existing reservations with no-shows and chancers trying their luck at getting a last-minute slot. Booking a table will soon be a whole lot easier, however, with an entirely computerized system able to allocate slots and spaces with ease, and even juggle late cancellations or alterations without the need for anyone to lose their spot.

Although image generation has grabbed much of the headlines, AI voice creation has become a growing presence in the entertainment and creative world. Offering potentially unlimited customization options, directors and producers can now create a voice in whatever tone, style or accent they require, which is then able to say whatever they desire, without the need for costly retakes or ever getting tired.

Text generation has quickly become one of the most-used aspects of AI technology, with copilots and other tools able to quickly generate large amounts of text based on a simple prompt. Whether you're looking to fill your new website with business-focused copy or offer more detail on your latest product launch, AI text generation provides a quick and easy way to do whatever you need.

In a similar vein, many of the leading website builder services today offer a fully AI-powered service, allowing you to create the page of your dreams simply by entering a few prompts. From start-ups to sole traders and all the way to big business, there's no need to fiddle around with templates: simply tell the platform what you're after, and a personalized website will be yours to customize or publish in moments.

This one may still sound a bit more like the realm of science fiction, but with cars getting smarter by the year, fully AI-powered driving is not too much of a pipe dream any more. Far from the basic autopilot tools on offer today, the cars of the future may well be able to not just operate independently, but provide their passengers with a fully-curated experience, from air conditioning at just the right level, to your favorite radio station.

Another position that is based around humans taking in huge amounts of data and creating reports, accounting is set for an AI revolution that could see many roles replaced. No need to spend hours collating receipts and entering numbers into a spreadsheet when AI can quickly scan, identify and upload all the information needed, taking the stress out of tax season and answering any queries or questions with ease.

The legal industry is another that is dominated by large amounts of data and paperwork, as well as by role-specific processes and even language. This makes it another prime candidate for AI, which will be able to automate the lengthy data analysis and entry actions undertaken by paralegals and legal assistants today, although given the scale or importance of the case involved, it may still be wise to retain some kind of human element.

Signing in for an appointment or a meeting is another job that many believe can easily be done by AI platforms. Rather than needing to bother or distract a human from their job, simply check in on a display screen, with your visitor's badge or meeting confirmation registered in seconds, allowing you (and everyone else) to get on with your day.

Similar to AI drivers, autonomous vehicles and robots powered by AI systems could soon be taking the role of delivery people. After scanning the list of destinations for any given day, the vehicle or platform would be able to quickly calculate the most efficient route, ensuring no waiting around all day for your package, as well as being able to instantly flag any issues or missed deliveries.

In a boost to current spell checking tools, it may be that AI systems eventually graduate from suggesting or writing content to helping check it for mistakes. Once trained on a style guide or content guidelines, an AI editor could quickly scan through articles, documents and filings to spot any issues (a particularly handy speed boost in highly regulated industries such as banking, insurance or healthcare) before flagging possible problems to a human supervisor.

Away from the written word, AI-powered platforms could soon be helping compose the next great pieces of music. Taking inspiration from vast libraries of existing pieces, these futuristic musicians could quickly dream up everything from film soundtracks to radio jingles, once again meaning companies or organizations would no longer need to pay human performers for day-long sessions consisting of multiple takes.

Another area which relies on quickly spotting trends and patterns among huge tranches of data, the statistics field could be quickly swamped by AI platforms. Whether it is at a business level, where companies could look to spot potential growth opportunities or risky situations, all the way down to the sports stats used by commentators and fans alike, AI can quickly come up with the figures needed.

A job that has already declined in importance over the past few years thanks to the emergence and widespread adoption of centralized collaboration tools, the role of project manager is another sure-fire target for AI. Rather than having a designated manager trying to keep tabs on the work being done by a number of disparate teams, an AI-powered central solution could collate all the progress in a single location, allowing everyone to view the latest updates and stay on top of their work.

We're already seeing the beginning of AI taking over the image design and generation space, with animation set to be one of the first fields to feel the effect. As more and more advanced AI programs emerge, creating any kind of customized animation will soon be easier than ever, with production studios able to easily create the movies, TV shows and other media they require.

In a similar vein to the entertainment industry, creating designs for new products, advertising campaigns and more will doubtless soon be another field dominated by AI. With a simple prompt, companies will be able to create the graphics they need, with potentially endless customization options that can be carried out instantly, with no need for back-and-forth with human designers.

Keeping track of potential security risks is another task that could be easily handled by AI, which will be able to continuously monitor multiple data fields and sensors to spot issues or threats before they take hold. Once detected, the systems would hopefully be able to take proactive action to lock down valuable data or company platforms, while alerting human agents and managers to ensure everything remains protected.

Many of us are perfectly comfortable booking and scheduling our vacations independently, but sometimes you want all of the stress of planning taken off your hands. Rather than leaving it to a human agent, AI travel service platforms could gather all of your requirements and come up with a tailored solution or itinerary exactly sculpted to your needs, without endless back and forth, taking all of the hassle out of your vacation planning.

Making assessments on the viability of insurance applications can be a lengthy process, with agents needing to take into consideration a huge number of potential risks and other criteria, often via specific formulae or structures. Rather than a human needing to spend all this time, AI agents could quickly scan through all the information provided, coming up with a decision much faster and more effectively.

One final field that is again dominated by analyzing huge amounts of data, past knowledge, and spotting upcoming trends and actions before they happen, stock trading could also quickly become dominated by AI. AI systems will be able to speedily act to make the best deals for financial firms in the blink of an eye, outpacing and outperforming human traders with ease, and possibly leading to even bigger profits.

First, and perhaps most obviously, will be an increase in roles for people looking to advise businesses exactly what kind of AI they should be utilizing. Simply grabbing as many AI tools and services as possible may have a tremendously destabilizing effect on a business, so having an expert who is able to outline the exact benefits and risks of specific technologies will become increasingly important for companies of all sizes.

In a similar vein, getting the most out of your company's new AI tools will be vital, so having trainers skilled in the right services will be absolutely critical. The ability to suggest to workers at all levels what they can utilize AI for will be incredibly useful for businesses everywhere, walking employees through the various platforms and educating them about any possible ill effects.

With chatbots and virtual agents becoming the main entry point for people encountering AI, knowing just how to communicate with such systems is going to be vital to making the relationship productive. Having experts who know the best way to talk to models such as ChatGPT, especially when it comes to phrasing specific questions or prompts, will be increasingly important as our dependence on AI models increases.

Once we're happy with how we communicate with AI models, the next big obstacle might be understanding what keeps them happy, or at least productive. We may soon see experts who, much like human therapists, are engaged with AI models to try to understand what makes them tick, including why they might show bias or toxicity, in order to make our relationships with them more effective overall.

On the occasions that something does go wrong, whether that's a poorly worded corporate email or an advertising campaign that features an embarrassing slip-up, there will be a need for crisis managers who can step in and look to quickly defuse the situation. This may become increasingly important in situations where AI may put sensitive data or even lives at risk, although hopefully such incidents will be rare.

The next step along from a crisis involving AI agents or systems may be lawyers or legal experts who specialize in dealing with non-human creators. The ability to represent a defendant who isn't physically present in a courtroom may become increasingly valuable as the role of AI in everyday life, and the risks it poses, becomes more prevalent, especially as business data or personal information gets involved.

With AI set to push the limits of what can be done with analysis and data processing, it may be that some companies looking to adopt new tools are simply not equipped to handle the new technology. Stress testers will be able to evaluate the status of your tech stack and network to make sure that any AI tools your business is set to use don't have the opposite effect and push everything to breaking point.

With content creation becoming an increasingly important role for AI, we're likely to see such images, audio and video appearing more frequently in everyday life. But we're already seeing backlash against obviously AI-generated content littered with errors, like extra fingers on humans, or nonsense alphabets in advertising. Having a human editor that is able to audit this content and ensure it is accurate, and fit for human consumption, could be a vital new role.

In a similar vein, AI-generated content may also need a human sense-checking it before it hits the public domain. Similar to the work currently being done by proofreaders and editors on human-produced content around the world, making sure that AI documents flow properly and sound legitimate will be another crucial consideration, and should lead to a growth in these sorts of roles.

Finally, despite the efficiency and effectiveness of AI-generated content, there will still always be room for the human touch. Much like we already have authentic artists, or artisans who specialize in handmade goods, it may soon be that we have creators and painters who strive for their work to be authentically human, setting them apart from the AI hordes.

See the rest here:

22 jobs artificial general intelligence (AGI) may replace and 10 jobs it could create - Livescience.com

OpenAI departures: Why cant former employees talk, but the new ChatGPT release can? – Vox.com

Editor's note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman's tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.

On Monday, OpenAI announced exciting new product news: ChatGPT can now talk like a human.

It has a cheery, slightly ingratiating feminine voice that sounds impressively non-robotic, and a bit familiar if you've seen a certain 2013 Spike Jonze film. "Her," tweeted OpenAI CEO Sam Altman, referencing the movie in which a man falls in love with an AI assistant voiced by Scarlett Johansson.

But the product release of GPT-4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company's co-founder and chief scientist, Ilya Sutskever, who also led its superalignment team, as well as that of his co-team leader Jan Leike (who we put on the Future Perfect 50 list last year).

The resignations didn't come as a total surprise. Sutskever had been involved in the boardroom revolt that led to Altman's temporary firing last year, before the CEO quickly returned to his perch. Sutskever publicly regretted his actions and backed Altman's return, but he's been mostly absent from the company since, even as other members of OpenAI's policy, alignment, and safety teams have departed.

But what really stirred speculation was the radio silence from former employees. Sutskever posted a pretty typical resignation message, saying "I'm confident that OpenAI will build AGI that is both safe and beneficial. I am excited for what comes next."

Leike ... didn't. His resignation message was simply: "I resigned." After several days of fervent speculation, he expanded on this on Friday morning, explaining that he was worried OpenAI had shifted away from a safety-focused culture.

Questions arose immediately: Were they forced out? Is this delayed fallout of Altman's brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.

It turns out there's a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI "due to losing confidence that it would behave responsibly around the time of AGI," has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

While nondisclosure agreements aren't unusual in highly competitive Silicon Valley, putting an employee's already-vested equity at risk for declining or violating one is. For workers at startups like OpenAI, equity is a vital form of compensation, one that can dwarf the salary they make. Threatening that potentially life-changing money is a very effective way to keep former employees quiet.

OpenAI did not respond to a request for comment in time for initial publication. After publication, an OpenAI spokesperson sent me this statement: "We have never canceled any current or former employee's vested equity, nor will we if people do not sign a release or nondisparagement agreement when they exit."

Sources close to the company I spoke to told me that this represented a change in policy as they understood it. When I asked the OpenAI spokesperson if that statement represented a change, they replied, "This statement reflects reality."

On Saturday afternoon, a little more than a day after this article published, Altman acknowledged in a tweet that there had been a provision in the company's off-boarding documents about potential equity cancellation for departing employees, but said the company was in the process of changing that language.

All of this is highly ironic for a company that initially advertised itself as OpenAI (that is, as committed in its mission statements to building powerful systems in a transparent and accountable manner).

OpenAI long ago abandoned the idea of open-sourcing its models, citing safety concerns. But now it has shed the most senior and respected members of its safety team, which should inspire some skepticism about whether safety is really the reason why OpenAI has become so closed.

OpenAI has spent a long time occupying an unusual position in tech and policy circles. Their releases, from DALL-E to ChatGPT, are often very cool, but by themselves they would hardly attract the near-religious fervor with which the company is often discussed.

What sets OpenAI apart is the ambition of its mission: to ensure that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity. Many of its employees believe that this aim is within reach; that with perhaps one more decade (or even less) and a few trillion dollars, the company will succeed at developing AI systems that make most human labor obsolete.

Which, as the company itself has long said, is as risky as it is exciting.

"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems," a recruitment page for Leike and Sutskever's team at OpenAI states. "But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade."

Naturally, if artificial superintelligence in our lifetimes is possible (and experts are divided), it would have enormous implications for humanity. OpenAI has historically positioned itself as a responsible actor trying to transcend mere commercial incentives and bring AGI about for the benefit of all. And they've said they are willing to do that even if that requires slowing down development, missing out on profit opportunities, or allowing external oversight.

"We don't think that AGI should be just a Silicon Valley thing," OpenAI co-founder Greg Brockman told me in 2019, in the much calmer pre-ChatGPT days. "We're talking about world-altering technology. And so how do you get the right representation and governance in there? This is actually a really important focus for us and something we really want broad input on."

OpenAI's unique corporate structure (a capped-profit company ultimately controlled by a nonprofit) was supposed to increase accountability. "No one person should be trusted here. I don't have super-voting shares. I don't want them," Altman assured Bloomberg's Emily Chang in 2023. "The board can fire me. I think that's important." (As the board found out last November, it could fire Altman, but it couldn't make the move stick. After his firing, Altman made a deal to effectively take the company to Microsoft, before being ultimately reinstated with most of the board resigning.)

But there was no stronger sign of OpenAI's commitment to its mission than the prominent roles of people like Sutskever and Leike, technologists with a long history of commitment to safety and an apparently genuine willingness to ask OpenAI to change course if needed. When I said to Brockman in that 2019 interview, "You guys are saying, 'We're going to build a general artificial intelligence,'" Sutskever cut in. "We're going to do everything that can be done in that direction while also making sure that we do it in a way that's safe," he told me.

Their departure doesn't herald a change in OpenAI's mission of building artificial general intelligence; that remains the goal. But it almost certainly heralds a change in OpenAI's interest in safety work; the company hasn't announced who, if anyone, will lead the superalignment team.

And it makes it clear that OpenAI's concern with external oversight and transparency couldn't have run all that deep. If you want external oversight and opportunities for the rest of the world to play a role in what you're doing, making former employees sign extremely restrictive NDAs doesn't exactly follow.

This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize, tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?

The company's leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world's input into how to do it justly and wisely.

But when there's real money at stake (and there are astounding sums of real money at stake in the race to dominate AI), it becomes clear that they probably never intended for the world to get all that much input. Their process ensures former employees, those who know the most about what's happening inside OpenAI, can't tell the rest of the world what's going on.

The website may have high-minded ideals, but their termination agreements are full of hard-nosed legalese. It's hard to exercise accountability over a company whose former employees are restricted to saying "I resigned."

ChatGPT's new cute voice may be charming, but I'm not feeling especially enamored.

Update, May 18, 7:30 pm ET: This story was published on May 17 and has been updated multiple times, most recently to include Sam Altman's response on social media.

A version of this story originally appeared in the Future Perfect newsletter.

Continued here:

OpenAI departures: Why cant former employees talk, but the new ChatGPT release can? - Vox.com

Meta AI Head: ChatGPT Will Never Reach Human Intelligence – PYMNTS.com

Meta's chief AI scientist thinks large language models will never reach human intelligence.

Yann LeCun asserts that artificial intelligence (AI) large language models (LLMs) such as ChatGPT have a limited grasp on logic, the Financial Times (FT) reported Wednesday (May 21).

These models, LeCun told the FT, "do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan ... hierarchically."

He argued against depending on LLMs to reach human-level intelligence, as these models need the right training data to answer prompts correctly, thus making them intrinsically unsafe.

LeCun is instead working on a totally new cohort of AI systems that aim to power machines with human-level intelligence, though this could take 10 years to achieve.

The report notes that this is a potentially risky gamble, as many investors are hoping for quick returns on their AI investments. Meta recently saw its value shrink by almost $200 billion after CEO Mark Zuckerberg pledged to up spending and turn the tech giant into the leading AI company in the world.

Meanwhile, other companies are moving forward with enhanced LLMs in hopes of creating artificial general intelligence (AGI), or machines whose cognition surpasses humans.

For example, this week saw AI firm Scale raise $1 billion in a Series F funding round that valued the startup at close to $14 billion, with founder Alexandr Wang discussing the company's AGI ambitions in the announcement.

Hours later, the French startup called H revealed it had raised $220 million, with CEO Charles Kantor telling Bloomberg News the company is working toward "full-AGI."

However, some experts question AI's ability to think like humans. Among them is Akli Adjaoute, who has spent 30 years in the AI field and recently authored the book "Inside AI."

Rather than speculating about whether the technology will think and reason, he views AI's role as an effective tool, stressing the importance of understanding AI's roots in data and its limitations in replicating human intelligence.

"AI does not have the ability to understand the way that humans understand," Adjaoute told PYMNTS CEO Karen Webster.

"It follows patterns. As humans, we look for patterns. For example, when I recognize the number 8, I don't see two circles. I see one. I don't need any extra power or cognition. That's what AI is based on. It's the recognition of algorithms and that's why they're designed for specific tasks."

Go here to read the rest:

Meta AI Head: ChatGPT Will Never Reach Human Intelligence - PYMNTS.com

Will superintelligent AI sneak up on us? New study offers reassurance – Nature.com

Some researchers think that AI could eventually achieve general intelligence, matching and even exceeding humans on most tasks. Credit: Charles Taylor/Alamy

Will an artificial intelligence (AI) superintelligence appear suddenly, or will scientists see it coming and have a chance to warn the world? That's a question that has received a lot of attention recently, with the rise of large language models, such as ChatGPT, which have achieved vast new abilities as their size has grown. Some findings point to "emergence," a phenomenon in which AI models gain intelligence in a sharp and unpredictable way. But a recent study calls these cases mirages (artefacts arising from how the systems are tested) and suggests that innovative abilities instead build more gradually.

"I think they did a good job of saying nothing magical has happened," says Deborah Raji, a computer scientist at the Mozilla Foundation who studies the auditing of artificial intelligence. "It's a really good, solid, measurement-based critique."

The work was presented last week at the NeurIPS machine-learning conference in New Orleans.

Large language models are typically trained using huge amounts of text, or other information, which they use to generate realistic answers by predicting what comes next. Even without explicit training, they manage to translate language, solve mathematical problems and write poetry or computer code. The bigger the model (some have more than a hundred billion tunable parameters), the better it performs. Some researchers suspect that these tools will eventually achieve artificial general intelligence (AGI), matching and even exceeding humans on most tasks.


The new research tested claims of emergence in several ways. In one approach, the scientists compared the abilities of four sizes of OpenAI's GPT-3 model to add up four-digit numbers. Looking at absolute accuracy, performance jumped between the third and fourth sizes of model from nearly 0% to nearly 100%. But this trend is less extreme if the number of correctly predicted digits in the answer is considered instead. The researchers also found that they could dampen the curve by giving the models many more test questions; in this case, the smaller models answer correctly some of the time.
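To make the measurement point concrete, here is a minimal, self-contained sketch (not the study's code; the per-digit accuracies are invented) of how an all-or-nothing exact-match score can look like a sudden leap even when the underlying per-digit accuracy improves smoothly with model size:

```python
# Simulate eight "model sizes" whose per-digit accuracy improves smoothly, then score
# them two ways: exact match (all four digits right) and partial credit per digit.
import numpy as np

rng = np.random.default_rng(0)
per_digit_accuracy = np.linspace(0.3, 0.99, 8)   # smooth improvement across model sizes
n_questions, n_digits = 1000, 4

for p in per_digit_accuracy:
    digits_right = rng.random((n_questions, n_digits)) < p
    exact_match = digits_right.all(axis=1).mean()    # discrete metric: whole answer right or wrong
    partial_credit = digits_right.mean()             # continuous metric: fraction of digits right
    print(f"per-digit {p:.2f} -> exact match {exact_match:.2f}, partial credit {partial_credit:.2f}")
```

Run as-is, the partial-credit column climbs steadily while the exact-match column sits near zero for the smaller "models" and accelerates sharply at the top of the range, which is the shape the emergence debate turns on.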

Next, the researchers looked at the performance of Google's LaMDA language model on several tasks. The ones for which it showed a sudden jump in apparent intelligence, such as detecting irony or translating proverbs, were often multiple-choice tasks, with answers scored discretely as right or wrong. When, instead, the researchers examined the probabilities that the models placed on each answer (a continuous metric), signs of emergence disappeared.

Finally, the researchers turned to computer vision, a field in which there are fewer claims of emergence. They trained models to compress and then reconstruct images. By merely setting a strict threshold for correctness, they could induce apparent emergence. "They were creative in the way that they designed their investigation," says Yejin Choi, a computer scientist at the University of Washington in Seattle who studies AI and common sense.

Study co-author Sanmi Koyejo, a computer scientist at Stanford University in Palo Alto, California, says that it wasn't unreasonable for people to accept the idea of emergence, given that some systems exhibit abrupt phase changes. He also notes that the study can't completely rule it out in large language models, let alone in future systems, but adds that "scientific study to date strongly suggests most aspects of language models are indeed predictable."

Raji is happy to see the community pay more attention to benchmarking, rather than to developing neural-network architectures. She'd like researchers to go even further and ask how well the tasks relate to real-world deployment. For example, does acing the LSAT exam for aspiring lawyers, as GPT-4 has done, mean that a model can act as a paralegal?

The work also has implications for AI safety and policy. "The AGI crowd has been leveraging the emerging-capabilities claim," Raji says. Unwarranted fear could lead to stifling regulations or divert attention from more pressing risks. "The models are making improvements, and those improvements are useful," she says. "But they're not approaching consciousness yet."

Originally posted here:

Will superintelligent AI sneak up on us? New study offers reassurance - Nature.com

Amazon reportedly preparing paid Alexa version powered by its own Titan AI model – SiliconANGLE News

Amazon.com Inc. engineers are reportedly working on a new, more capable version of Alexa that is expected to become available through a paid subscription.

Sources familiar with the project told CNBC today that the service is set to roll out later this year. Apple Inc. is also expected to introduce a new version of Siri, its competing artificial intelligence assistant, in the coming months. Both the iPhone maker and Amazon reportedly plan to include new generative AI features in their respective product updates.

The upcoming paid version of Alexa will reportedly run on an algorithm from the Amazon Titan series of large language models. Introduced last year by the company's cloud unit, the series comprises three LLMs with varying capabilities and pricing.

The most advanced Titan model, Amazon Titan Text Premier, can process prompts that contain up to 32,000 tokens worth of information. A token is a unit of data that comprises a few letters or numbers. The model includes a RAG, or retrieval-augmented generation, feature that allows it to incorporate information from external applications into prompt responses.
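For readers unfamiliar with the term, the sketch below shows the general retrieval-augmented generation pattern only; it is not Amazon's implementation, the documents and the scoring rule are invented, and production systems typically rank documents with vector embeddings rather than keyword overlap:

```python
# Minimal RAG flow: pick the document most relevant to the query, then splice it
# into the prompt so the language model can ground its answer in that context.
documents = [
    "Return policy: items may be returned within 30 days of delivery.",
    "Shipping: standard orders arrive in 3-5 business days.",
]

def retrieve(query: str) -> str:
    q = set(query.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query)
    return f"Use the context to answer the question.\nContext: {context}\nQuestion: {query}"

print(build_prompt("How long do I have to return an item?"))
```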

On the other end of the price range is Amazon Titan Text Lite. Positioned as the Titan series' entry-level offering, the model supports prompts with up to 4,000 tokens and is geared toward relatively simple text processing tasks. It's unclear if Amazon plans to power the next version of Alexa with an existing Titan model or a yet-unannounced future addition to the series.

According to today's report, the generative AI model that will underpin Alexa costs two cents per query to run. For comparison, generating 1,000 tokens of output with the entry-level Titan Text Lite model costs 100 times less for Amazon Web Services Inc. customers. That suggests the LLM in the upgraded version of Alexa features a significantly more advanced architecture.

Amazon will reportedly charge for the AI assistant's upgraded version to offset the cost of the underlying LLM. According to one of CNBC's sources, the company is considering asking $20 per month, the price at which OpenAI sells ChatGPT Plus. Another tipster indicated that the Alexa subscription might become available for a single-digit dollar amount.
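As a rough back-of-the-envelope illustration using only the figures in the report (the two-cents-per-query running cost, the rumored $20 price point, and the "100 times less" comparison, which is per 1,000 tokens rather than per query, so this gives only a sense of scale):

```python
# Illustrative arithmetic only; the underlying figures come from the report above.
cost_per_query = 0.02          # reported cost to run the upgraded Alexa model, per query
subscription = 20.00           # monthly price reportedly under consideration
titan_lite_per_1k_tokens = cost_per_query / 100   # "100 times less," per the report

print(f"Queries per month covered by the fee: {subscription / cost_per_query:.0f}")
print(f"Implied Titan Text Lite cost per 1,000 output tokens: ${titan_lite_per_1k_tokens:.4f}")
```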

The team that develops the AI assistant has reportedly undergone a massive reorganization as part of an effort by Amazon to streamline its business operations. It's believed that many members of the team, which comprises thousands of employees, now focus on developing artificial general intelligence. This is a term for a hypothetical future type of AI that can perform a wide range of tasks with human-like accuracy.


See the original post here:

Amazon reportedly preparing paid Alexa version powered by its own Titan AI model - SiliconANGLE News

AI consciousness: scientists say we urgently need answers – Nature.com

A standard method to assess whether machines are conscious has not yet been devised. Credit: Peter Parks/AFP via Getty

Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows, and they are expressing concern about the lack of inquiry into the question.

In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use?

Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK, and one of the authors of the comments. Nor did US President Joe Biden's executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes.

"With everything that's going on in AI, inevitably there's going to be other adjacent areas of science which are going to need to catch up," Mason says. "Consciousness is one of them."

The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany.

It is unknown to science whether there are, or will ever be, conscious AI systems. Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines, Mason says. Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress, says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.


Such concerns are no longer just science fiction. Companies such as OpenAI, the firm that created the chatbot ChatGPT, are aiming to develop artificial general intelligence, a deep-learning system that's trained to perform a wide range of intellectual tasks similar to those humans can do. Some researchers predict that this will be possible in 5-20 years. Even so, the field of consciousness research is very undersupported, says Mason. He notes that, to his knowledge, there has not been a single grant offer in 2023 to study the topic.

The resulting information gap is outlined in the AMCS leaders submission to the UN High-Level Advisory Body on Artificial Intelligence, which launched in October and is scheduled to release a report in mid-2024 on how the world should govern AI technology. The AMCS leaders submission has not been publicly released, but the body confirmed to the authors that the groups comments will be part of its foundational material documents that inform its recommendations about global oversight of AI systems.

Understanding what could make AI conscious, the AMCS researchers say, is necessary to evaluate the implications of conscious AI systems to society, including their possible dangers. Humans would need to assess whether such systems share human values and interests; if not, they could pose a risk to people.

But humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? If we don't recognize that an AI system has become conscious, we might inflict pain on a conscious entity, Long says: "We don't really have a great track record of extending moral consideration to entities that don't look and act like us." Wrongly attributing consciousness would also be problematic, he says, because humans should not spend resources to protect systems that don't need protection.


Some of the questions raised by the AMCS comments to highlight the importance of the consciousness issue are legal: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.

And then there is the need for scientists to educate others. As companies devise ever-more capable AI systems, the public will wonder whether such systems are conscious, and scientists need to know enough to offer guidance, Mason says.

Other consciousness researchers echo this concern. Philosopher Susan Schneider, the director of the Center for the Future Mind at Florida Atlantic University in Boca Raton, says that chatbots such as ChatGPT seem so human-like in their behaviour that people are justifiably confused by them. Without in-depth analysis from scientists, some people might jump to the conclusion that these systems are conscious, whereas other members of the public might dismiss or even ridicule concerns over AI consciousness.

To mitigate the risks, the AMCS comments call on governments and the private sector to fund more research on AI consciousness. It wouldn't take much funding to advance the field: despite the limited support to date, relevant work is already underway. For example, Long and 18 other researchers have developed a checklist of criteria to assess whether a system has a high chance of being conscious. The paper, published in the arXiv preprint repository in August and not yet peer reviewed, derives its criteria from six prominent theories explaining the biological basis of consciousness.

"There's lots of potential for progress," Mason says.

See the article here:

AI consciousness: scientists say we urgently need answers - Nature.com

AI Technologies Set to Revolutionize Multiple Industries in Near Future – Game Is Hard

According to Nvidia CEO Jensen Huang, the world is on the brink of a transformative era in artificial intelligence (AI) that will see it rival human intelligence within the next five years. While AI is already making significant strides, Huang believes that the true breakthrough will come in the realm of artificial general intelligence (AGI), which aims to replicate the range of human cognitive abilities.

Nvidia, a prominent player in the tech industry known for its high-performance graphics processing units (GPUs), has experienced a surge in business as a result of the growing demand for its GPUs in training AI models and handling complex workloads across various sectors. In fact, the company's fiscal third-quarter revenue tripled, reaching an impressive $9.24 billion.

An important milestone for Nvidia was the recent delivery of the world's first AI supercomputer to OpenAI, an AI research lab co-founded by Elon Musk. This partnership with Musk, who has shown great interest in AI technology, signifies the immense potential of AI advancements. Huang expressed confidence in the stability of OpenAI, despite recent upheavals, emphasizing the critical role of effective corporate governance in such ventures.

Looking ahead, Huang envisions a future where the competitive landscape of the AI industry will foster the development of off-the-shelf AI tools tailored for specific sectors such as chip design, drug discovery, and radiology. While current limitations exist, including the inability of AI to perform multistep reasoning, Huang remains optimistic about the rapid advancements and forthcoming capabilities of AI technologies.

Nvidia's success in 2023 has exceeded expectations, as the company consistently surpassed earnings projections and witnessed its stock rise by approximately 240%. The impressive third-quarter revenue of $18.12 billion further solidifies investor confidence in the promising AI market. Analysts maintain a positive outlook on Nvidia's long-term potential in the AI and semiconductor sectors, despite concerns about sustainability. The future of AI is undoubtedly bright, with transformative applications expected across various industries in the near future.

FAQ:

Q: What is the transformative era in artificial intelligence (AI) that Nvidia CEO Jensen Huang mentions? A: According to Huang, the transformative era in AI will see it rival human intelligence within the next five years, particularly in the realm of artificial general intelligence (AGI).

Q: Why has Nvidia experienced a surge in business? A: Nvidia's high-performance graphics processing units (GPUs) are in high demand for training AI models and handling complex workloads across various sectors, leading to a significant increase in the company's revenue.

Q: What is the significance of Nvidia delivering the world's first AI supercomputer to OpenAI? A: Nvidia's partnership with OpenAI and the delivery of the AI supercomputer highlight the immense potential of AI advancements, as well as the confidence in OpenAI's stability and the critical role of effective corporate governance in such ventures.

Q: What is Nvidia's vision for the future of the AI industry? A: Nvidia envisions a future where the competitive landscape of the AI industry will lead to the development of off-the-shelf AI tools tailored for specific sectors such as chip design, drug discovery, and radiology.

Q: What are the current limitations and future capabilities of AI technologies according to Huang? A: While there are still limitations, such as the inability of AI to perform multistep reasoning, Huang remains optimistic about the rapid advancements and forthcoming capabilities of AI technologies.

Key Terms:

Artificial intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems, to perform tasks that typically require human intelligence.

Artificial general intelligence (AGI): AI that can perform any intellectual task that a human being can do.

Graphics processing unit (GPU): A specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.


Continued here:

AI Technologies Set to Revolutionize Multiple Industries in Near Future - Game Is Hard

The Impact of OpenAIs GPT 5. A New Era of AI | by Courtney Hamilton | Dec, 2023 – Medium

Introduction

OpenAI has recently made an exciting announcement that they are working on GPT 5, the next generation of their groundbreaking language model. This news comes hot on the heels of the release of GPT-4 Turbo, showcasing the rapid pace of AI development and OpenAI's commitment to pushing boundaries. GPT models have proven to be revolutionary, consistently delivering jaw-dropping improvements with each iteration. With OpenAI's evident enthusiasm for GPT 5 and CEO Sam Altman's interview, it is clear that this next model will be nothing short of mind-blowing.

One of the most intriguing aspects of GPT 5 is the potential for video generation from text prompts. This capability could have a profound impact on various fields, from education to creative industries. Just imagine being able to transform a simple text description into high-quality video content. The possibilities are endless.

OpenAI plans to achieve this wizardry by focusing on scale. GPT 5 will require a vast amount of data and computing power to reach its full potential. It will analyze a wide range of data sets, including text, images, and audio. This multidimensional approach will allow GPT 5 to excel across different modalities. OpenAI is partnering with NVIDIA for its cutting-edge GPUs and leveraging Microsoft's cloud infrastructure to ensure it has the necessary computational resources.

While an official release date for GPT 5 has not been announced, experts predict it could be launched sometime around mid to late 2024. OpenAI will undoubtedly take the time needed to meet their standards before releasing the model to the public. The wait may feel long, but rest assured, it will be worth it. Each iteration of GPT has shattered expectations, and GPT 5 promises to be the most powerful AI system yet.

However, with great power comes great responsibility. OpenAI recognizes the need for safeguards and constraints to prevent harmful outcomes. As GPT 5 potentially approaches the level of artificial general intelligence, questions arise about its autonomy and control. Balancing the potential benefits of increased intelligence with the risks it poses to society is an ongoing debate.

See the rest here:

The Impact of OpenAIs GPT 5. A New Era of AI | by Courtney Hamilton | Dec, 2023 - Medium

What Is Artificial Intelligence? From Software to Hardware, What You Need to Know – ExtremeTech

To many, AI is just a horrible Steven Spielberg movie. To others, it's the next generation of learning computers. But what is artificial intelligence, exactly? The answer depends on who you ask.

Broadly, artificial intelligence (AI) is the combination of mathematical algorithms, computer software, hardware, and robust datasets deployed to solve some kind of problem. In one sense, artificial intelligence is sophisticated information processing by a powerful program or algorithm. In another, an AI connotes the same information processing but also refers to the program or algorithm itself.

Many definitions of artificial intelligence include a comparison to the human mind or brain, whether in form or function. Alan Turing wrote in 1950 about thinking machines that could respond to a problem using human-like reasoning. His eponymous Turing test is still a benchmark for natural language processing. Later, however, Stuart Russell and Peter Norvig observed that humans are intelligent but not always rational.

As defined by John McCarthy in 2004, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

Russell and Norvig saw two classes of artificial intelligence: systems that think and act rationally versus those that think and act like a human being. But there are places where that line begins to blur. AI and the brain use a hierarchical, profoundly parallel network structure to organize the information they receive. Whether or not an AI has been programmed to act like a human, on a very low level, AIs process data in a way common to not just the human brain but many other forms of biological information processing.

What distinguishes a neural net from conventional software? Its structure. A neural net's code is written to emulate some aspect of the architecture of neurons or the brain.

The difference between a neural net and an AI is often a matter of philosophy more than capabilities or design. A robust neural net's performance can equal or outclass a narrow AI. Many "AI-powered" systems are neural nets under the hood. But an AI isn't just several neural nets smashed together, any more than Charizard is three Charmanders in a trench coat. All these different types of artificial intelligence overlap along a spectrum of complexity. For example, OpenAI's powerful GPT-4 AI is a type of neural net called a transformer (more on these below).

There is much overlap between neural nets and artificial intelligence, but the capacity for machine learning can be the dividing line. An AI that never learns isn't very intelligent at all.

IBM explains, "[M]achine learning is a subfield of artificial intelligence. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. In fact, it is the number of node layers, or depth, of neural networks that distinguishes a single neural network from a deep learning algorithm, which must have more than three [layers]."

AGI stands for artificial general intelligence. An AGI is like the turbo-charged version of an individual AI. Today's AIs often require specific input parameters, so they are limited in their capacity to do anything but what they were built to do. But in theory, an AGI can figure out how to "think" for itself to solve problems it hasn't been trained to do. Some researchers are concerned about what might happen if an AGI were to start drawing conclusions we didn't expect.

In pop culture, when an AI makes a heel turn, the ones that menace humans often fit the definition of an AGI. For example, Disney/Pixar's WALL-E followed a plucky little trashbot who contends with a rogue AI named AUTO. Before WALL-E's time, HAL and Skynet were AGIs complex enough to resent their makers and powerful enough to threaten humanity.

Conceptually: An AI's logical structure has three fundamental parts. First, there's the decision process: usually an equation, a model, or just some code. Second, there's an error function: some way for the AI to check its work. And third, if the AI will learn from experience, it needs some way to optimize its model. Many neural networks do this with a system of weighted nodes, where each node has a value and a relationship to its network neighbors. Values change over time; stronger relationships have a higher weight in the error function.
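A toy sketch of those three parts, pared down to a single weight so each piece is visible (the data and learning rate are invented, and a real network would have millions of weights):

```python
# Decision process: predict y as w * x. Error function: mean squared error.
# Optimization: nudge the weight against the gradient of that error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # inputs x with targets y = 2x

w = 0.0                     # the lone "weighted node"
learning_rate = 0.05

for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad

print(f"learned weight: {w:.3f}")   # settles near 2.0
```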

Physically: Typically, an AI is "just" software. Neural nets consist of equations or commands written in things like Python or Common Lisp. They run comparisons, perform transformations, and suss out patterns from the data. Commercial AI applications have typically been run on server-side hardware, but that's beginning to change. AMD launched the first on-die NPU (Neural Processing Unit) in early 2023 with its Ryzen 7040 mobile chips. Intel followed suit with the dedicated silicon baked into Meteor Lake. Dedicated hardware neural nets run on a special type of "neuromorphic" ASIC as opposed to a CPU, GPU, or NPU.

A neural net is software, and a neuromorphic chip is a type of hardware called an ASIC (application-specific integrated circuit). Not all ASICs are neuromorphic designs, but neuromorphic chips are all ASICs. Neuromorphic design fundamentally differs from CPUs and only nominally overlaps with a GPU's multi-core architecture. But it's not some exotic new transistor type, nor any strange and eldritch kind of data structure. It's all about tensors. Tensors describe the relationships between things; they're a kind of mathematical object that can have metadata, just like a digital photo has EXIF data.

Tensors figure prominently in the physics and lighting engines of many modern games, so it may come as little surprise that GPUs do a lot of work with tensors. Modern Nvidia RTX GPUs have a huge number of tensor cores. That makes sense if you're drawing moving polygons, each with some properties or effects that apply to it. Tensors can handle more than just spatial data, and GPUs excel at organizing many different threads at once.

But no matter how elegant your data organization might be, it must filter through multiple layers of software abstraction before it becomes binary. Intel's neuromorphic chip, Loihi 2, affords a very different approach.

Loihi 2 is a neuromorphic chip that comes as a package deal with a compute framework named Lava. Loihi's physical architecture invites, almost requires, the use of weighting and an error function, both defining features of AI and neural nets. The chip's biomimetic design extends to its electrical signaling. Instead of ones and zeroes, on or off, Loihi "fires" in spikes with an integer value capable of carrying much more data. Loihi 2 is designed to excel in workloads that don't necessarily map well to the strengths of existing CPUs and GPUs. Lava provides a common software stack that can target neuromorphic and non-neuromorphic hardware. The Lava framework is explicitly designed to be hardware-agnostic rather than locked to Intel's neuromorphic processors.

Machine learning models using Lava can fully exploit Loihi 2's unique physical design. Together, they offer a hybrid hardware-software neural net that can process relationships between multiple entire multi-dimensional datasets, like an acrobat spinning plates. According to Intel, the performance and efficiency gains are largest outside the common feed-forward networks typically run on CPUs and GPUs today. In Intel's own performance chart, the colored dots toward the upper right represent the highest performance and efficiency gains in what Intel calls "recurrent neural networks with novel bio-inspired properties."

Intel hasn't announced Loihi 3, but the company regularly updates the Lava framework. Unlike conventional GPUs, CPUs, and NPUs, neuromorphic chips like Loihi 1/2 are more explicitly aimed at research. The strength of neuromorphic design is that it allows silicon to perform a type of biomimicry. Brains are extremely cheap, in terms of power use per unit throughput. The hope is that Loihi and other neuromorphic systems can mimic that power efficiency to break out of the Iron Triangle and deliver all three: good, fast, and cheap.

IBM's NorthPole processor is distinct from Intel's Loihi in what it does and how it does it. Unlike Loihi or IBM's earlier TrueNorth effort in 2014, Northpole is not a neuromorphic processor. NorthPole relies on conventional calculation rather than a spiking neural model, focusing on inference workloads rather than model training. What makes NorthPole special is the way it combines processing capability and memory. Unlike CPUs and GPUs, which burn enormous power just moving data from Point A to Point B, NorthPole integrates its memory and compute elements side by side.

According to Dharmendra Modha of IBM Research, "Architecturally, NorthPole blurs the boundary between compute and memory," Modha said. "At the level of individual cores, NorthPole appears as memory-near-compute and from outside the chip, at the level of input-output, it appears as an active memory." IBM doesn't use the phrase, but this sounds similar to the processor-in-memory technology Samsung was talking about a few years back.

IBM's NorthPole AI processor. Credit: IBM

NorthPole is optimized for low-precision data types (2-bit to 8-bit) as opposed to the higher-precision FP16 / bfloat16 standard often used for AI workloads, and it eschews speculative branch execution. This wouldn't fly in an AI training processor, but NorthPole is designed for inference workloads, not model training. Using 2-bit precision and eliminating speculative branches allows the chip to keep enormous parallel calculations flowing across the entire chip. Against an Nvidia GPU manufactured on the same 12nm process, NorthPole was reportedly 25x more energy efficient. IBM reports it was 5x more energy efficient.
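The low-precision idea is easier to see with numbers. The sketch below is not NorthPole's scheme, just generic post-training int8 quantization (assuming NumPy), to show what trading a little precision for less memory and energy looks like:

```python
# Map float32 weights onto 8-bit integers with one shared scale factor, then
# reconstruct them to measure how much accuracy the rounding cost.
import numpy as np

weights = np.array([0.12, -0.53, 0.91, -0.07, 0.44], dtype=np.float32)
scale = np.abs(weights).max() / 127.0            # one scale factor for the whole tensor

q_weights = np.round(weights / scale).astype(np.int8)    # 8-bit integer weights
restored = q_weights.astype(np.float32) * scale          # dequantize for comparison

print("int8 weights:", q_weights)
print("max reconstruction error:", float(np.abs(weights - restored).max()))
```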

NorthPole is still a prototype, and IBM has yet to say if it intends to commercialize the design. The chip doesn't fit neatly into any of the other buckets we use to subdivide different types of AI processing engine. Still, it's an interesting example of companies trying radically different approaches to building a more efficient AI processor.

When an AI learns, it's different than just saving a file after making edits. To an AI, getting smarter involves machine learning.

Machine learning takes advantage of a feedback channel called "back-propagation." A neural net is typically a "feed-forward" process because data only moves in one direction through the network. It's efficient but also a kind of ballistic (unguided) process. In back-propagation, however, later nodes in the process get to pass information back to earlier nodes.

Not all neural nets perform back-propagation, but for those that do, the effect is like changing the coefficients in front of the variables in an equation. It changes the lay of the land. This is important because many AI applications rely on a mathematical tactic known as gradient descent. In an x vs. y problem, gradient descent introduces a z dimension, making a simple graph look like a topographical map. The terrain on that map forms a landscape of probabilities. Roll a marble down these slopes, and where it lands determines the neural net's output. But if you change that landscape, where the marble ends up can change.
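Here is the marble-on-a-landscape metaphor as a few lines of code, using an invented two-variable surface with a single minimum; it shows the mechanics of gradient descent, not a realistic loss landscape:

```python
# f(x, y) = (x - 3)^2 + (y + 1)^2 is the "terrain"; its gradient points uphill,
# so stepping against it rolls the marble toward the minimum at (3, -1).
def grad(x, y):
    return 2 * (x - 3), 2 * (y + 1)

x, y, step = 0.0, 0.0, 0.1
for _ in range(100):
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy

print(f"marble settles near ({x:.3f}, {y:.3f})")   # close to (3, -1)
```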

We also divide neural nets into two classes, depending on the problems they can solve. In supervised learning, a neural net checks its work against a labeled training set or an overwatch; in most cases, that overwatch is a human. For example, SwiftKey learns how you text and adjusts its autocorrect to match. Pandora uses listeners' input to classify music to build specifically tailored playlists. 3blue1brown has an excellent explainer series on neural nets, where he discusses a neural net using supervised learning to perform handwriting recognition.

Supervised learning is great for fine accuracy on an unchanging set of parameters, like alphabets. Unsupervised learning, however, can wrangle data with changing numbers of dimensions. (An equation with x, y, and z terms is a three-dimensional equation.) Unsupervised learning tends to win with small datasets. It's also good at noticing subtle things we might not even know to look for. Ask an unsupervised neural net to find trends in a dataset, and it may return patterns we had no idea existed.
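A compact way to see the contrast, assuming scikit-learn is installed and using classic non-neural models for brevity (the four data points are invented): the supervised model checks its work against labels, while the clustering model gets no labels at all:

```python
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]]   # two obvious groups of points
y = [0, 0, 1, 1]                                       # labels, used only by the supervised model

clf = LogisticRegression().fit(X, y)            # supervised: learns from labeled examples
print("supervised prediction:", clf.predict([[0.95, 0.9]]))

km = KMeans(n_clusters=2, n_init=10).fit(X)     # unsupervised: finds structure without labels
print("clusters found:", km.labels_)
```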

Transformers are a special, versatile kind of AI capable of unsupervised learning. They can integrate many different data streams, each with its own changing parameters. Because of this, they're excellent at handling tensors. Tensors, in turn, are great for keeping all that data organized. With the combined powers of tensors and transformers, we can handle more complex datasets.

Video upscaling and motion smoothing are great applications for AI transformers. Likewise, tensorswhich describe changesare crucial to detecting deepfakes and alterations. With deepfake tools reproducing in the wild, it's a digital arms race.

The person in this image does not exist; it is a deepfake image created by StyleGAN, Nvidia's generative adversarial neural network. Credit: Nvidia

Video signal has high dimensionality, or bit depth. It's made of a series of images, which are themselves composed of a series of coordinates and color values. Mathematically and in computer code, we represent those quantities as matrices or n-dimensional arrays. Helpfully, tensors are great for matrix and array wrangling. DaVinci Resolve, for example, uses tensor processing in its (Nvidia RTX) hardware-accelerated Neural Engine facial recognition utility. Hand those tensors to a transformer, and its powers of unsupervised learning do a great job picking out the curves of motion on-screen and in real life.
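To make the dimensionality point concrete, here is a small sketch (assuming NumPy, and using random pixels in place of a real clip) of a video as a four-dimensional array, with frame differencing as a crude motion signal:

```python
import numpy as np

# 30 frames of 72x128 RGB "video": shape is (frames, height, width, color channels)
video = np.random.randint(0, 256, size=(30, 72, 128, 3), dtype=np.uint8)
print("video tensor shape:", video.shape)

# Differences between consecutive frames highlight whatever moved
motion = np.abs(np.diff(video.astype(np.int16), axis=0))
print("mean per-pixel change:", float(motion.mean()))
```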

That ability to track multiple curves against one another is why the tensor-transformer dream team has taken so well to natural language processing. And the approach can generalize. Convolutional transformers (a hybrid of a convolutional neural net and a transformer) excel at image recognition in near real-time. This tech is used today for things like robot search and rescue or assistive image and text recognition, as well as the much more controversial practice of dragnet facial recognition, à la Hong Kong.

The ability to handle a changing mass of data is great for consumer and assistive tech, but it's also clutch for things like mapping the genome and improving drug design. The list goes on. Transformers can also handle different kinds of dimensions, more than just the spatial, which is useful for managing an array of devices or embedded sensors, like weather tracking, traffic routing, or industrial control systems. That's what makes AI so useful for data processing "at the edge." AI can find patterns in data and then respond to them on the fly.

Not only does everyone have a cell phone, there are embedded systems in everything. This proliferation of devices gives rise to an ad hoc global network called the Internet of Things (IoT). In the parlance of embedded systems, the "edge" represents the outermost fringe of end nodes within the collective IoT network.

Edge intelligence takes two primary forms: AI on edge and AI for edge. The distinction is where the processing happens. "AI on edge" refers to network end nodes (everything from consumer devices to cars and industrial control systems) that employ AI to crunch data locally. "AI for the edge" enables edge intelligence by offloading some of the compute demand to the cloud.

In practice, the main differences between the two are latency and horsepower. Local processing is always going to be faster than a data pipeline beholden to ping times. The tradeoff is the computing power available server-side.

Embedded systems, consumer devices, industrial control systems, and other end nodes in the IoT all add up to a monumental volume of information that needs processing. Some phone home, some have to process data in near real-time, and some have to check and correct their work on the fly. Operating in the wild, these physical systems act just like the nodes in a neural net. Their collective throughput is so complex that, in a sense, the IoT has become the AIoT: the artificial intelligence of things.

As devices get cheaper, even the tiny slips of silicon that run low-end embedded systems have surprising computing power. But having a computer in a thing doesn't necessarily make it smarter. Everything's got Wi-Fi or Bluetooth now. Some of it is really cool. Some of it is made of bees. If I forget to leave the door open on my front-loading washing machine, I can tell it to run a cleaning cycle from my phone. But the IoT is already a well-known security nightmare. Parasitic global botnets exist that live in consumer routers. Hardware failures can cascade, like the Great Northeast Blackout of the summer of 2003 or when Texas froze solid in 2021. We also live in a timeline where a faulty firmware update can brick your shoes.

There's a common pipeline (hypeline?) in tech innovation. When some Silicon Valley startup invents a widget, it goes from idea to hype train to widgets-as-a-service to disappointment, before finally figuring out what the widget's good for.

This is why we lampoon the IoT with loving names like the Internet of Shitty Things and the Internet of Stings. (Internet of Stings devices communicate over TCBee-IP.) But the AIoT isn't something anyone can sell. It's more than the sum of its parts. The AIoT is a set of emergent properties that we have to manage if we're going to avoid an explosion of splinternets, and keep the world operating in real time.

In a nutshell, artificial intelligence is often the same as a neural net capable of machine learning. They're both software that can run on whatever CPU or GPU is available and powerful enough. Neural nets often have the power to perform machine learning via back-propagation.

There's also a kind of hybrid hardware-and-software neural net that brings a new meaning to "machine learning." It's made using tensors, ASICs, and neuromorphic engineering by Intel. Furthermore, the emergent collective intelligence of the IoT has created a demand for AI on, and for, the edge. Hopefully, we can do it justice.

The rest is here:

What Is Artificial Intelligence? From Software to Hardware, What You Need to Know - ExtremeTech

Forget Dystopian Scenarios AI Is Pervasive Today, and the Risks Are Often Hidden – The Good Men Project

By Anjana Susarla, Michigan State University

The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, and rehiring him just four days later, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks.

The OpenAI board stated that Altman's termination was for lack of candor, but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI's remarkable growth (products such as ChatGPT and Dall-E have acquired hundreds of millions of users worldwide) has hindered the company's ability to focus on catastrophic risks posed by AGI.

OpenAI's goal of developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.

As a researcher of information systems and responsible AI, I study how these everyday algorithms work and how they can harm people.

AI plays a visible part in many people's daily lives, from face recognition unlocking your phone to speech recognition powering your digital assistant. It also plays roles you might be vaguely aware of, for example, shaping your social media and online shopping sessions, guiding your video-watching choices and matching you with a driver in a ride-sharing service.

AI also affects your life in ways that might completely escape your notice. If you're applying for a job, many employers use AI in the hiring process. Your bosses might be using it to identify employees who are likely to quit. If you're applying for a loan, odds are your bank is using AI to decide whether to grant it. If you're being treated for a medical condition, your health care providers might use it to assess your medical images. And if you know someone caught up in the criminal justice system, AI could well play a role in determining the course of their life.

Many of the AI systems that fly under the radar have biases that can cause harm. For example, machine learning methods use inductive logic, which starts with a set of premises, to generalize patterns from training data. A machine learning-based resume screening tool was found to be biased against women because the training data reflected past practices when most resumes were submitted by men.

The use of predictive methods in areas ranging from health care to child welfare could exhibit biases such as cohort bias that lead to unequal risk assessments across different groups in society. Even when legal practices prohibit discrimination based on attributes such as race and gender (for example, in consumer lending), proxy discrimination can still occur. This happens when algorithmic decision-making models do not use characteristics that are legally protected, such as race, and instead use characteristics that are highly correlated or connected with the legally protected characteristic, like neighborhood. Studies have found that risk-equivalent Black and Latino borrowers pay significantly higher interest rates on government-sponsored enterprise securitized and Federal Housing Authority insured loans than white borrowers.
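A fully synthetic sketch of proxy discrimination (the numbers are invented and the "model" is deliberately trivial): the scoring rule never sees the protected attribute, yet outcomes still diverge because the neighborhood feature is correlated with it:

```python
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)                   # protected attribute; the model never sees it
neighborhood = rng.random(10_000) < (0.2 + 0.6 * group)   # proxy feature correlated with the group
approved = neighborhood.astype(int)                       # trivial "model" that scores only the proxy

for g in (0, 1):
    print(f"approval rate for group {g}: {approved[group == g].mean():.2f}")
```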

Another form of bias occurs when decision-makers use an algorithm differently from how the algorithm's designers intended. In a well-known example, a neural network learned to associate asthma with a lower risk of death from pneumonia. This was because asthmatics with pneumonia are traditionally given more aggressive treatment that lowers their mortality risk compared to the overall population. However, if the outcome from such a neural network is used in hospital bed allocation, then those with asthma and admitted with pneumonia would be dangerously deprioritized.

Biases from algorithms can also result from complex societal feedback loops. For example, when predicting recidivism, authorities attempt to predict which people convicted of crimes are likely to commit crimes again. But the data used to train predictive algorithms is actually about who is likely to get re-arrested.
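A small, assumption-laden simulation can show why that distinction matters: if two groups reoffend at the same true rate but one is policed more heavily, the re-arrest labels used for training make that group look roughly twice as risky. The rates below are invented for illustration only.

```python
# Hypothetical sketch of a label-bias feedback loop: the training label is
# "re-arrested," which depends on policing intensity, not just on reoffending.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

group = rng.integers(0, 2, size=n)      # two equally sized groups
reoffends = rng.random(n) < 0.30        # identical true reoffense rate

# Group 1 is policed more heavily, so its reoffenses are detected more often.
detection = np.where(group == 1, 0.6, 0.3)
rearrested = reoffends & (rng.random(n) < detection)

for g in (0, 1):
    print(f"group {g}: true rate 0.30, observed re-arrest rate "
          f"{rearrested[group == g].mean():.2f}")
# A model trained on these labels scores group 1 as about twice as "risky,"
# which can then be used to justify even heavier policing of that group.
```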

The Biden administration's recent executive order and enforcement efforts by federal agencies such as the Federal Trade Commission are the first steps in recognizing and safeguarding against algorithmic harms.

And though large language models, such as the GPT-3.5 model that powers ChatGPT, and multimodal large language models, such as GPT-4, are steps on the road toward artificial general intelligence, they are also algorithms people are increasingly using in school, work and daily life. It's important to consider the biases that result from widespread use of large language models.

For example, these models could exhibit biases resulting from negative stereotyping involving gender, race or religion, as well as biases in representation of minorities and disabled people. As these models demonstrate the ability to outperform humans on tests such as the bar exam, I believe that they require greater scrutiny to ensure that AI-augmented work conforms to standards of transparency, accuracy and source crediting, and that stakeholders have the authority to enforce such standards.

Ultimately, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about understanding who is vulnerable when algorithmic decision-making is ubiquitous.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

***


Original post:

Forget Dystopian Scenarios AI Is Pervasive Today, and the Risks Are Often Hidden - The Good Men Project

The Era of AI: 2023’s Landmark Year – CMSWire


As we approach the end of another year, it's becoming increasingly clear that we are navigating through the burgeoning era of AI, a time that is reminiscent of the early days of the internet, yet poised with a transformative potential far beyond. While we might still be at what could be called the "AOL stages" of AI development, the pace of progress has been relentless, with new applications and capabilities emerging daily, reshaping every facet of our lives and businesses.

In a manner once attributed to divine influence and later to the internet itself, AI has become a pervasive force: it touches everything it changes and, indeed, changes everything it touches. This article will recap the events that shaped the world of AI in 2023, including the evolution and growth of AI, regulations, legislation and petitions, the saga of Sam Altman, and the pursuit of Artificial General Intelligence (AGI).

The latest chapter in the saga of AI began late last year, on Nov. 30, 2022, when OpenAI announced the release of ChatGPT, powered by GPT-3.5, a language model capable of generating human-like text, which signified a major step in improving how we communicate with machines. Since then, it's been a very busy year for AI, and there has rarely been a week that hasn't seen some announcement relating to it.

The first half of 2023 was marked by a series of significant developments in the field of AI, reflecting the rapid pace of innovation and its growing impact across various sectors. So far, the rest of the year hasn't shown any signs of slowing down. In fact, the emergence of AI applications across industries seems to have picked up pace. Here is an abbreviated timeline of the major AI news of the year:

February 13, 2023: Stanford scholars developed DetectGPT, the first in a forthcoming line of tools designed to differentiate between human and AI-generated text, addressing the need for oversight in an era where discerning the source of information is crucial. The tool came after the release of ChatGPT 3.5 prompted teachers and professors to become alarmed at the potential of ChatGPT to be used for cheating.

February 23, 2023: The launch of an open-source project called AgentGPT, which runs in a browser and uses OpenAI's ChatGPT to execute complex tasks, further demonstrated the versatility and practical applications of AI.

February 24, 2023: Meta, formerly known as Facebook, launched LLaMA, a family of large language models with up to 65 billion parameters, setting new benchmarks in the AI industry.

March 14, 2023: OpenAI released GPT-4, a significantly enhanced model over its predecessor, GPT-3.5, prompting discussions in the AI community about whether Artificial General Intelligence (AGI) might be achieved inadvertently.

March 20, 2023: Studies examined the responses of GPT-3.5 and GPT-4 to clinical questions, highlighting the need for refinement and evaluation before relying on AI language models in healthcare. GPT-4 outperformed previous models, achieving average scores of 86.65% and 86.7% on the Self-Assessment and Sample Exam portions of the USMLE, while GPT-3.5 achieved 53.61% and 58.78%, respectively.

March 21, 2023: Google opened access to Bard, a ChatGPT competitor, alongside significant announcements about its forthcoming large language models and their integration into Google Workspace and Gmail, reflecting the company's sharpened focus on AI.

March 21, 2023: Nvidia's announcement of Picasso Cloud Services for creating large language and visual models, aimed at larger enterprises, underscored the increasing interest of major companies in AI technologies.

March 23, 2023: OpenAI's launch of plugins for ChatGPT expanded the capabilities of its GPT models, allowing them to connect to third-party services via an API.

March 30, 2023: AutoGPT was released, with the capability to execute and refine its responses to prompts autonomously. This advancement showcased a significant step toward greater autonomy in AI systems, and the tool could be installed on users' local PCs, giving individuals an AI agent they could run from home (though it still relies on OpenAI's hosted models via an API key).

April 4, 2023: An unsurprising study discovered that participants could only differentiate between human and AI-generated text with about 50% accuracy, similar to random chance.

April 13, 2023: AWS announced Bedrock, a service making foundation models from various AI labs accessible via an API, streamlining the development and scaling of generative AI-based applications.

May 23, 2023: OpenAI revealed plans to enhance ChatGPT with web browsing powered by Microsoft Bing, along with additional plugins, features that would initially be available to ChatGPT Plus subscribers.

July 18, 2023: In a study, ChatGPT, particularly when running GPT-4, was found to outperform medical students in responding to complex clinical care exam questions.

August 6, 2023: The EU AI Act, one of the world's first comprehensive legal frameworks for AI, saw major developments and negotiations in 2023, with potential global implications, though its final text was still being hashed out in mid-December.

September 8, 2023: A study revealed that AI detectors, designed to identify AI-generated content, exhibit low reliability, especially for content created by non-native English speakers, raising ethical concerns. This has been an ongoing concern for both teachers and students, as these tools regularly present original content as being produced by AI, and AI-generated content as being original.

September 21, 2023: OpenAI announced that Dall-E 3, its text-to-image generation tool, would soon be available to ChatGPT Plus users.

November 4, 2023: Elon Musk announced the latest addition to the world of generative AI: Grok. Musk said Grok would "break the mold of conventional AI," responding with provocative answers and insights and welcoming all manner of queries.

November 21, 2023: Microsoft unveiled Bing Chat 2.0 (now called Copilot), a major upgrade to its chatbot platform that leverages a hybrid approach, combining generative and retrieval-based models to provide more accurate and diverse responses.

November 22, 2023: With the release of Claude 2.1, Anthropic announced an expansion in Claude's capabilities, enabling it to analyze large volumes of text rapidly, a development favorably compared to the capabilities of ChatGPT.

December 6, 2023: Google announced Gemini, its OpenAI rival, a multimodal model that can generalize across and seamlessly understand, operate on and combine different types of information, including text, images, audio, video and code.

These were only a small portion of 2023's AI achievements and events, as nearly every week a new generative AI-driven application was announced, including specialized AI-driven chatbots for specific use cases, applications and industries. There was also frequent news of interactions with and uses of AI, AI jailbreaks, predictions about the potential dystopian future it may bring, proposals for regulations, legislation and guardrails, and petitions to stop developing the technology.

Shubham A. Mishra, co-founder and global CEO at AI marketing pioneer Pixis, told CMSWire that in 2023, the world focused on building the technology and democratizing it. "We saw people use it, consume it, and transform it into the most effective use cases to the point that it has now become a companion for them," said Mishra. "It has become such an integral part of its user's day-to-day functions that they don't even realize they are consuming it."

"Many view 2023 as the year of generative AI, but we are only beginning to tap into the potential applications of the technology. We are still trying to harness the full potential of generative AI across different use cases. In 2024, the industry will witness major shifts, be it a rise or fall in users and applications," said Mishra. "There may be a rise in the number of users, but there will also be a second wave of Generative AI innovations where there will be an incremental rise in its applications."


Anthony Yell, chief creative officer at interactive agency Razorfish, told CMSWire that he and his team have seen generative AI stand out by democratizing creativity, making it more accessible and enhancing the potential for those with skills and experience to reach new creative heights. "This technology has introduced the concept of a 'creative partner' or 'creative co-pilot,' revolutionizing our interaction with creative processes."

Yell believes that this era is about marrying groundbreaking creativity with responsible innovation, ensuring that AI's potential is harnessed in a way that respects brand identity and maintains consumer trust. This desire for responsibility and trust is core to the acceptance of what has been, and will continue to be, a very disruptive technology. As such, 2023 included many milestones in the quest for AI responsibility, safety, regulation, ethics and control. Here are some of the most impactful regulatory AI events of 2023.

February 28, 2023: Former Google engineer Blake Lemoine, who was fired in 2022 for going to the press with claims that Google's LaMDA is sentient, was back in the news doubling down on his claim.

March 22, 2023: A group of technology and business leaders, including Elon Musk, Steve Wozniak and tech leaders from Meta, Google and Microsoft, signed an open letter hosted by the Future of Life Institute urging AI organizations to pause new developments in AI, citing risks to society. The letter stated: "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

May 16, 2023: Sam Altman, CEO and co-founder of OpenAI, urged members of Congress to regulate AI, citing the inherent risks posed by the technology.

May 30, 2023: AI industry leaders and researchers signed a statement hosted by the Center for AI Safety warning of the "extinction risk posed by AI." The statement said that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," and was signed by OpenAI CEO Sam Altman, Geoffrey Hinton, Google DeepMind and Anthropic executives and researchers, Microsoft CTO Kevin Scott, and security expert Bruce Schneier.

October 31, 2023: President Biden signed the sweeping Executive Order on Artificial Intelligence, designed to establish new standards for AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world.

November 14, 2023: The DHS Cybersecurity and Infrastructure Security Agency (CISA) released its initial Roadmap for Artificial Intelligence, leading the way to ensure safe and secure AI development in the future. The CISA AI roadmap came in response to President Biden's October 2023 Executive Order on Artificial Intelligence.

December 11, 2023: The European Commission and the bloc's 27 member countries reached a deal on the world's first comprehensive AI rules, opening the door for the legal oversight of AI technology.

Rubab Rizvi, chief data scientist at Brainchild, a media agency affiliated with the Publicis Groupe, told CMSWire that from predictive analytics to seamless automation, the rapid embrace of AI has not only elevated efficiency but has also opened new frontiers for innovation, shaping a dynamic landscape that keeps us on our toes and fuels the excitement of what's to come.

"The generative AI we've come to embrace in 2023 hasn't just been about enhancing personalization," Rizvi said. "It's becoming your digital best friend, offering tailored experiences that elevate brand engagement to a new level. This calls for proper governance and guardrails. As generative AI can potentially expose new, previously inaccessible data, we must ensure that we are disciplined in protecting ourselves and our unstructured data." Rizvi aptly reiterated what many have said throughout the year: "Don't blindly trust the machine."


OpenAI was the organization that officially ushered in the era of AI with the introduction of ChatGPT in late 2022. In the year that followed, OpenAI worked ceaselessly to continue the evolution of AI, and it has been no stranger to conspiracy theories and controversies. These came to a head late in the year, when the organization surprised everyone with news regarding its CEO, Sam Altman.

November 17, 2023: The board of OpenAI fired co-founder and CEO Sam Altman, stating that a board review found he was "not consistently candid in his communications" and that "the board no longer has confidence in his ability to continue leading OpenAI."

November 20, 2023: Microsoft hired former OpenAI CEO Sam Altman and co-founder Greg Brockman, with Microsoft CEO Satya Nadella announcing that the two would join to lead Microsoft's new advanced AI research team, and that Altman would become CEO of the new group.

November 22, 2023: OpenAI rehired Sam Altman as its CEO, stating that it had "reached an agreement in principle for Sam Altman to return to OpenAI as CEO," along with significant changes in its non-profit board.

November 24, 2023: It was reported that, prior to Altman's firing, OpenAI researchers had sent a letter to the board of directors warning of a new AI discovery that posed potential risks to humanity. The discovery, referred to as Project Q*, was said to be a breakthrough in the pursuit of AGI, and it reportedly influenced the board's firing of Altman amid concerns that he was rushing to commercialize the new advancement without fully understanding its implications.

AGI, which Microsoft has since said could take decades to achieve, is an advanced form of AI characterized by self-learning capabilities and proficiency across a wide range of tasks, and its pursuit stands as a cornerstone objective of the field. Work toward AGI seeks to develop machines that mirror human intelligence, with the ability to understand, learn and adeptly apply knowledge across diverse contexts, potentially surpassing human performance in various domains.

Reflecting on 2023, we have witnessed a landmark year in AI, marked by groundbreaking advancements. Amidst these innovations, the year has also been pivotal in addressing the ethical, safety, and regulatory aspects of AI. As we conclude the year, the progress in AI not only showcases human ingenuity but also sets the stage for future challenges and opportunities, emphasizing the need for responsible stewardship of this transformative yet disruptive technology.

The rest is here:

The Era of AI: 2023's Landmark Year - CMSWire

OpenAI’s six-member board will decide ‘when we’ve attained AGI’ – VentureBeat


According to OpenAI, the six members of its nonprofit board of directors will determine when the company has attained AGI, which it defines as "a highly autonomous system that outperforms humans at most economically valuable work." Because the for-profit arm is legally bound to pursue the Nonprofit's mission, once the board decides AGI, or artificial general intelligence, has been reached, such a system will be excluded from IP licenses and other commercial terms with Microsoft, which apply only to pre-AGI technology.

But as the very definition of artificial general intelligence is far from agreed-upon, what does it mean to have a half-dozen people deciding on whether or not AGI has been reached for OpenAI, and therefore, the world? And what will the timing and context of that possible future decision mean for its biggest investor, Microsoft?

The information was included in a thread on X over the weekend by OpenAI developer advocate Logan Kilpatrick. Kilpatrick was responding to a comment by Microsoft president Brad Smith, who, at a recent panel with Meta chief scientist Yann LeCun, tried to frame OpenAI as more trustworthy because of its nonprofit status, even though the Wall Street Journal recently reported that OpenAI is seeking a new valuation of up to $90 billion in a sale of existing shares.

Smith said: "Meta is owned by shareholders. OpenAI is owned by a non-profit. Which would you have more confidence in? Getting your technology from a non-profit or a for-profit company that is entirely controlled by one human being?"


In his thread, Kilpatrick quoted from the "Our structure" page on OpenAI's website, which offers details about OpenAI's complex nonprofit/capped-profit structure. According to the page, OpenAI's for-profit subsidiary is fully controlled by the OpenAI nonprofit (which is registered in Delaware). While the for-profit subsidiary, OpenAI Global, LLC (which appears to have superseded the limited partnership OpenAI LP, announced in 2019, about three years after the founding of the original OpenAI nonprofit), is permitted to make and distribute profit, it is subject to the nonprofit's mission.

It certainly sounds like once OpenAI achieves its stated mission of reaching AGI, Microsoft will be out of the loop, even though at last week's OpenAI DevDay, OpenAI CEO Sam Altman told Microsoft CEO Satya Nadella, "I think we have the best partnership in tech ... I'm excited for us to build AGI together."

And in a new interview with the Financial Times, Altman said the OpenAI/Microsoft partnership was working "really well" and that he expected "to raise a lot more over time." Asked whether Microsoft would keep investing further, Altman said: "I'd hope so ... there's a long way to go, and a lot of compute to build out between here and AGI ... training expenses are just huge."

From the beginning, OpenAI's structure page says, Microsoft "accepted our capped equity offer and our request to leave AGI technologies and governance for the Nonprofit and the rest of humanity."

An OpenAI spokesperson told VentureBeat that OpenAI's "mission is to build AGI that is safe and beneficial for everyone. Our board governs the company and consults diverse perspectives from outside experts and stakeholders to help inform its thinking and decisions. We nominate and appoint board members based on their skills, experience and perspective on AI technology, policy and safety."

Currently, the OpenAI nonprofit board of directors is made up of chairman and president Greg Brockman, chief scientist Ilya Sutskever and CEO Sam Altman, as well as non-employees Adam D'Angelo, Tasha McCauley and Helen Toner.

D'Angelo, who is CEO of Quora, as well as tech entrepreneur McCauley and Toner, who is director of strategy for the Center for Security and Emerging Technology at Georgetown University, have all been tied to the Effective Altruism movement, which came under fire earlier this year for its ties to Sam Bankman-Fried and FTX, as well as its dangerous take on AI safety. And OpenAI has long had its own ties to EA: for example, in March 2017, OpenAI received a grant of $30 million from Open Philanthropy, which is funded by effective altruists. And Jan Leike, who leads OpenAI's superalignment team, reportedly identifies with the EA movement.

The OpenAI spokesperson said that "none of our board members are effective altruists," adding that non-employee board members "are not effective altruists; their interactions with the EA community are focused on topics related to AI safety or to offer the perspective of someone not closely involved in the group."

Suzy Fulton, who provides outsourced general counsel and legal services to startups and emerging companies in the tech sector, told VentureBeat that while in many circumstances it would be unusual to have a board make this kind of AGI determination, OpenAI's nonprofit board owes its fiduciary duty to supporting its mission of providing safe AGI that is broadly beneficial.

"They believe the nonprofit board's beneficiary is humanity, whereas the for-profit one serves its investors," she explained. "Another safeguard that they are trying to build in is having the board majority independent, where the majority of the members do not have equity in OpenAI."

"Was this the right way to set up an entity structure and a board to make this critical determination? We may not know the answer until their board calls it," Fulton said.

Anthony Casey, a professor at The University of Chicago Law School, agreed that having the board decide something as operationally specific as AGI is unusual, but he did not think there is any legal impediment.

It should be fine, he said, to specifically identify certain issues that must be decided at the board level. "Indeed, if an issue is important enough, corporate law generally imposes a duty on the directors to exercise oversight on that issue, particularly mission-critical issues."

Not all experts believe, however, that artificial general intelligence is coming anytime soon, while some question whether it is even possible.

According to Merve Hickok, president of the Center for AI and Digital Policy, which filed a complaint with the FTC in March arguing that the agency should investigate OpenAI and order the company to halt the release of GPT models until necessary safeguards are established, OpenAI as an organization suffers from a lack of diversity of perspectives. Its focus on AGI, she explained, has ignored the current impact of AI models and tools.

However, she disagreed with debating the size or diversity of the OpenAI board in the context of who gets to determine whether OpenAI has attained AGI, saying it distracts from discussions about whether the underlying mission and claim are even legitimate.

"This would shift the focus, and de facto legitimize the claims that AGI is possible," she said.

But does OpenAI's lack of a clear definition of AGI, or of whether there will even be one AGI, skirt the issue? For example, an OpenAI blog post from February 2023 said "the first AGI will be just a point along the continuum of intelligence." And in a January 2023 LessWrong interview, CEO Sam Altman said that "the future I would like to see is where access to AI is super democratized, where there are several AGIs in the world that can help allow for multiple viewpoints and not have anyone get too powerful."

Still, it's hard to say what OpenAI's vague definition of AGI will really mean for Microsoft, especially without full details of the operating agreement between the two companies. For example, Casey said, OpenAI's structure and relationship with Microsoft could lead to a big dispute if OpenAI is sincere about its nonprofit mission.

"There are a few nonprofits that own for-profits," he pointed out, the most notable being the Hershey Trust. "But they wholly own the for-profit. In that case, it is easy because there is no minority shareholder to object," he explained. "But here Microsoft's for-profit interests could directly conflict with the non-profit interest of the controlling entity."

The cap on profits is easy to implement, he added, but the hard question is what to do if maximizing profit conflicts with the mission of the nonprofit. Casey noted that default rules would say that hitting the profit target is the priority and that managers have to put it first (subject to broad discretion under the business judgment rule).

Perhaps, he continued, Microsoft said, "Don't worry, we are good either way. You don't owe us any duties." But "that just doesn't sound like the way Microsoft would negotiate."


Visit link:

OpenAI's six-member board will decide 'when we've attained AGI' - VentureBeat