
Category Archives: Artificial Super Intelligence

Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News – Forbes

Posted: April 20, 2023 at 11:42 am

(Photo by Justin Sullivan/Getty Images)

The world's wealthiest billionaires are drawing battle lines over who will control AI, Elon Musk said in an interview with Tucker Carlson on Fox News that aired this week.

Musk explained that he cofounded ChatGPT-maker OpenAI in reaction to Google cofounder Larry Page's lack of concern over the danger of AI outsmarting humans.

He said the two were once close friends and that he would often stay at Page's house in Palo Alto, where they would talk late into the night about the technology. Page was such a fan of Musk's that in Jan. 2015, Google, along with Fidelity Investments, invested $1 billion in SpaceX for a 10% stake. "He wants to go to Mars. That's a worthy goal," Page said in a March 2014 TED Talk.

But Musk was concerned over Google's acquisition of DeepMind in Jan. 2014.

"Google and DeepMind together had about three-quarters of all the AI talent in the world. They obviously had a tremendous amount of money and more computers than anyone else. So I'm like, we're in a unipolar world where there's just one company that has close to a monopoly on AI talent and computers," Musk said. "And the person in charge doesn't seem to care about safety. This is not good."

Musk said he felt Page was seeking to build a digital super intelligence, "a digital god."

"He's made many public statements over the years that the whole goal of Google is what's called AGI, artificial general intelligence, or artificial super intelligence," Musk said.

Google CEO Sundar Pichai has not disagreed. In his 60 Minutes interview on Sunday, while speaking about the company's advancements in AI, Pichai said that Google Search was only one to two percent of what Google can do. The company has been teasing a number of new AI products it's planning to roll out at its developer conference, Google I/O, on May 10.

Musk said Page stopped talking to him over OpenAI, a nonprofit with the stated mission of ensuring that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity, which Musk cofounded in Dec. 2015 with Y Combinator CEO Sam Altman and PayPal alums LinkedIn cofounder Reid Hoffman and Palantir cofounder Peter Thiel, among others.

"I haven't spoken to Larry Page in a few years because he got very upset with me over OpenAI," said Musk, explaining that when OpenAI was created, it shifted things from a unipolar world in which Google controlled most of the world's AI talent to a bipolar world. "And now it seems that OpenAI is ahead," he said.

But even before OpenAI, as SpaceX was announcing the Google investment in late Jan. 2015, Musk had given $10 million to the Future of Life Institute, a nonprofit organization dedicated to reducing existential risks from advanced artificial intelligence. That organization was founded in March 2014 by AI scientists from DeepMind, MIT, Tufts and UCSC, among others; it issued the petition calling for a pause in AI development that Musk signed last month.

In 2018, citing potential conflicts with his work with Tesla, Musk resigned his seat on the board of OpenAI.

"I put a lot of effort into creating this organization to serve as a counterweight to Google, and then I kind of took my eye off the ball. Now they are closed source, and obviously for-profit, and they're closely allied with Microsoft. In effect, Microsoft has a very strong say in, if not directly controls, OpenAI at this point," Musk said.

Ironically, it's Musk's longtime friend Hoffman who is the link to Microsoft. The two hit it big together at PayPal, and it was Musk who recruited Hoffman to OpenAI in 2015. Hoffman sold LinkedIn to Microsoft for more than $26 billion in 2016 and became an independent director at Microsoft in 2017; in 2019, Microsoft invested its first billion dollars in OpenAI. Microsoft is currently OpenAI's biggest backer, having invested as much as $10 billion more this past January. Hoffman stepped down from OpenAI's board only recently, on March 3, to enable him to start investing in the OpenAI startup ecosystem, he said in a LinkedIn post. Hoffman is a partner in the venture capital firm Greylock Partners and a prolific angel investor.

All appear on the Forbes Real-Time Billionaires List. As of April 17 at 5 p.m. ET, Musk was the world's second-richest person, valued at $187.4 billion; Page was eleventh at $90.1 billion. Google cofounder Sergey Brin was in the 12th spot at $86.3 billion. Thiel ranked 677th with a net worth of $4.3 billion, and Hoffman ranked 1,570th with a net worth of $2 billion.

Musk said he thinks Page believes all consciousness should be treated equally, while Musk disagrees, especially if the digital consciousness decides to curtail the biological intelligence. Like Pichai, Musk is advocating for government regulation of the technology and says at minimum there should be a physical off switch to cut power and connectivity to server farms in case administrative passwords stop working.

Pretty sure I've seen that movie.

Musk told Carlson that he's considering naming his new AI company TruthGPT.

"I will create a third option, although it's starting very late in the game," he said. "Can it be done? I don't know."

The entire interview will be available to view on Fox Nation starting April 19 at 7 a.m. ET. Here are some excerpts, which include his thoughts on encrypting Twitter DMs.

Tech and trending reporter with bylines in Bloomberg, Businessweek, Fortune, Fast Company, Insider, TechCrunch and TIME; syndicated in leading publications around the world. Fox 5 DC commentator on consumer trends. Winner CES 2020 Media Trailblazer award. Follow on Twitter @contentnow.


This is a war and artificial intelligence is more dangerous than a T-80 tank. Unlike a tank it's in e… – The US Sun


A GERMAN magazine's world-exclusive interview with paralysed F1 legend Michael Schumacher. Fake.

A stunning photograph given first place and handed a prestigious Sony World Photography Award. Never taken.

And a banger of a new song called "Heart On My Sleeve", featuring Drake and The Weeknd, dropped on streaming services. Never recorded.

Welcome to another crazy 24 hours in the world of artificial intelligence, where truth and disinformation collide.

Die Aktuelle, a weekly German gossip magazine, splashed a Schumacher "interview" across its cover when its content was actually created by an AI chatbot designed to respond as Schumacher might.

Berlin artist Boris Eldagsen revealed his photo submitted to a high-profile photography competition was dreamt up by artificial intelligence.

This came just after a new song purportedly by Drake was pulled from streaming services by Universal Music Group for infringing content created with generative AI.

These controversies followed on from provocative AI-generated images of France's President Emmanuel Macron being arrested and of an incandescent Donald Trump being manhandled by American police.

All beamed around the world to a believing audience.

That's not to mention a super-realistic shot of the Pope resplendent in a massive white puffer coat.

This one even fooled broadcaster and seasoned journalist Andrew Marr, as I found out in a recent conversation with him.

Such images are created by AI technology with the simple push of a button, with entire scenes generated from nothing.

The growing threat posed by generative artificial intelligence technologies is upon us.

Not long ago, it would have been simple to distinguish between real and fake images, but it is now almost impossible to spot the difference.

The simplicity of producing these photographs, interviews, songs and, soon, videos means that platforms that don't put measures in place against them will be flooded.

These technologies and deepfakes are clear and present threats to democracy and are being seized upon by propagandist regimes to supercharge their agenda and drown out truth.

You could fake an entire political movement, for example.

This is a new war we need to fight, a war on artificial truth and the inequality of truth around the world.

It is time to restore trust. Soon, we will lose the ability to have reasonable online discourse if we can't have a shared sense of reality.

These forgeries are so sophisticated that millions of people globally could be simultaneously watching and believing a speech that Joe Biden never gave.

Nation states will have to reimagine how they govern in a world where their communication to the public will be, by default, disbelieved.

One of the biggest issues we have in social media is that content is user-uploaded and it is nearly impossible to track its origin.

Was the upload taken by an iPhone? Was it heavily Photoshopped? Was it a complete fabrication generated by AI? We don't know its veracity.

Information warfare is now a front, right alongside conventional warfare.

During the Ukraine conflict, we have been faced with a barrage of manipulated media.

There have been deepfake videos of President Zelensky where he says he is resigning and surrendering. It doesn't get more serious than that.

These are dangerous weapons which can have devastating consequences.

And unlike T-80 tanks, the weapons of this front are in everyones hands.

To counter all of this, a number of us computer scientists are creating technologies that help build trust.

Ours is FrankliApp.com, a content platform where we can definitively say that every piece of photography and video has not been edited, faked or touched up in any way.

We need more of this and the right regulation to ensure it happens.

As investor Ian Hogarth told Radio 4 yesterday: "There's currently more regulation on selling a Pret sandwich than there is in building super-intelligence."

AI companies should be forced to open source their models and allow anyone to check if a piece of content was created by their service.

We also need regulations that make platforms disclose a particular photo or video's digital provenance.

There is some precedent for this: France requires disclosure of edits to fashion photos. We need this in all sectors.
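As a simplified illustration of one ingredient such provenance rules could build on: a platform can record a cryptographic fingerprint of each file at upload time, so that any later copy can be checked byte-for-byte against what was registered. (This is a minimal sketch; real provenance schemes, such as cryptographically signed capture metadata, involve much more than a hash.)

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest identifying this exact file content."""
    return hashlib.sha256(data).hexdigest()

# Record the fingerprint when a photo is first uploaded...
original = b"...raw bytes of the photo..."
registered = fingerprint(original)

# ...then any later copy can be checked against the register.
assert fingerprint(original) == registered         # untouched copy verifies
assert fingerprint(original + b"x") != registered  # any edit breaks the match
```

A hash proves a file is unmodified since registration; it cannot by itself prove how the original was captured, which is why disclosure rules matter too.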

The conjured images of Trump, Macron and many others have now been seen and believed by millions worldwide on platforms that don't care whether what they are promoting is real or not.

That's just plain wrong.

The world needs a solution to this tsunami of distortion.

We must shine a light on the truth, and nothing but the truth, delivering authenticity in this age of disinformation.


Elon Musk says he will launch rival to Microsoft-backed ChatGPT – Reuters


SAN FRANCISCO, April 17 (Reuters) - Billionaire Elon Musk said on Monday he will launch an artificial intelligence (AI) platform that he calls "TruthGPT" to challenge the offerings from Microsoft (MSFT.O) and Google (GOOGL.O).

He accused Microsoft-backed OpenAI, the firm behind chatbot sensation ChatGPT, of "training the AI to lie" and said OpenAI has now become a "closed source", "for-profit" organisation "closely allied with Microsoft".

He also accused Larry Page, co-founder of Google, of not taking AI safety seriously.

"I'm going to start something which I call 'TruthGPT', or a maximum truth-seeking AI that tries to understand the nature of the universe," Musk said in an interview with Fox News Channel's Tucker Carlson aired on Monday.

He said TruthGPT "might be the best path to safety" that would be "unlikely to annihilate humans".

"It's simply starting late. But I will try to create a third option," Musk said.

Musk, OpenAI, Microsoft and Page did not immediately respond to Reuters' requests for comment.

Musk has been poaching AI researchers from Alphabet Inc's (GOOGL.O) Google to launch a startup to rival OpenAI, people familiar with the matter told Reuters.

Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm listed Musk as the sole director and Jared Birchall, the managing director of Musk's family office, as a secretary.

The move came even after Musk and a group of artificial intelligence experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, citing potential risks to society.

Musk also reiterated his warnings about AI during the interview with Carlson, saying "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production," according to the excerpts.

"It has the potential of civilizational destruction," he said.

He said, for example, that a super-intelligent AI can write incredibly well and potentially manipulate public opinion.

He tweeted over the weekend that he had met with former U.S. President Barack Obama when he was president and told him that Washington needed to "encourage AI regulation".

Musk co-founded OpenAI in 2015, but he stepped down from the company's board in 2018. In 2019, he tweeted that he left OpenAI because he had to focus on Tesla and SpaceX.

He also tweeted at that time that other reasons for his departure from OpenAI were, "Tesla was competing for some of the same people as OpenAI & I didn't agree with some of what OpenAI team wanted to do."

Musk, CEO of Tesla and SpaceX, has also become CEO of Twitter, a social media platform he bought for $44 billion last year.

In the interview with Fox News, Musk said he recently valued Twitter at "less than half" of the acquisition price.

In January, Microsoft Corp (MSFT.O) announced a further multi-billion dollar investment in OpenAI, intensifying competition with rival Google and fueling the race to attract AI funding in Silicon Valley.

Reporting by Hyunjoo Jin; Editing by Chris Reese

Our Standards: The Thomson Reuters Trust Principles.


Researchers at UTSA use artificial intelligence to improve cancer … – UTSA


Patients undergoing radiotherapy are currently given a computed tomography (CT) scan to help physicians see where the tumor is on an organ, for example a lung. A treatment plan to remove the cancer with targeted radiation doses is then made based on that CT image.

Rad says that cone-beam computed tomography (CBCT) is often integrated into the process after each dosage to see how much a tumor has shrunk, but CBCTs are low-quality images that are time-consuming to read and prone to misinterpretation.

UTSA researchers used domain adaptation techniques to integrate information from CBCT and initial CT scans for tumor evaluation accuracy. Their Generative AI approach visualizes the tumor region affected by radiotherapy, improving reliability in clinical settings.

This improved approach enables physicians to more accurately see how much a tumor has decreased week by week and to plan the following week's radiation dose with greater precision. Ultimately, the approach could lead clinicians to better target tumors while sparing the surrounding critical organs and healthy tissue.

Nikos Papanikolaou, a professor in the Departments of Radiation Oncology and Radiology at UT Health San Antonio, provided the patient data that enabled the researchers to advance their study.

UTSA and UT Health San Antonio have a shared commitment to deliver the best possible health care to members of our community, Papanikolaou said. This study is a wonderful example of how artificial intelligence can be used to develop new personalized treatments for the benefit of society.

The American Society for Radiation Oncology stated in a 2020 report that between half and two-thirds of people diagnosed with cancer were expected to receive radiotherapy treatment. According to the American Cancer Society, the number of new cancer cases in the U.S. in 2023 is projected to be nearly two million.

Arkajyoti Roy, UTSA assistant professor of management science and statistics, says he and his collaborators have been interested in using AI and deep learning models to improve treatments over the last few years.

"Besides just building more advanced AI models for radiotherapy, we also are super interested in the limitations of these models," he said. "All models make errors, and for something like cancer treatment it's very important not only to understand the errors but to try to figure out how we can limit their impact; that's really the goal of this project from my perspective."

The researchers' study included 16 lung cancer patients whose pre-treatment CT and weekly mid-treatment CBCT images were captured over a six-week period. Results show that the new approach improved tumor-shrinkage predictions for weekly treatment plans, with significant improvement in lung dose sparing. The approach also reduced radiation-induced pneumonitis (lung damage) by up to 35%.

"We're excited about this direction of research, which will focus on making sure that cancer radiation treatments are robust to AI model errors," Roy said. "This work would not be possible without the interdisciplinary team of researchers from different departments."


Fears of artificial intelligence overblown – Independent Australia


While AI is still a developing technology and not without its limitations, robotic world domination is far from something we need to fear, writes Bappa Sinha.

THE UNPRECEDENTED popularity of ChatGPT has turbocharged the artificial intelligence (AI) hype machine. We are being bombarded daily by news articles announcing AI as humankind's greatest invention. AI is "qualitatively different", "transformational", "revolutionary" and "will change everything", they say.

OpenAI, the company behind ChatGPT, announced a major upgrade of the technology behind ChatGPT, called GPT-4. Already, Microsoft researchers are claiming that GPT-4 shows "sparks" of artificial general intelligence, or human-like intelligence: the holy grail of AI research. Fantastic claims are made about reaching the point of "AI singularity", of machines equalling and surpassing human intelligence.

The business press talks about hundreds of millions of job losses as AI replaces humans in a whole host of professions. Others worry about a sci-fi-like near future where super-intelligent AI goes rogue and destroys or enslaves humankind. Are these predictions grounded in reality, or is this just the over-the-top hype that the tech industry and the venture capitalist hype machine are so good at selling?

The current breed of AI models is based on things called neural networks. While the term "neural" conjures up images of an artificial brain simulated using computer chips, the reality of AI is that neural networks are nothing like how the human brain actually works. These so-called neural networks bear no resemblance to the network of neurons in the brain. The terminology was, however, a major reason artificial neural networks became popular and widely adopted, despite their serious limitations and flaws.

The machine learning algorithms currently used are an extension of statistical methods, one that lacks theoretical justification. Traditional statistical methods have the virtue of simplicity. It is easy to understand what they do, and when and why they work. They come with mathematical assurances that the results of their analysis are meaningful, assuming very specific conditions.

Since the real world is complicated, those conditions never hold. As a result, statistical predictions are seldom accurate. Economists, epidemiologists and statisticians acknowledge this, then use intuition to apply statistics to get approximate guidance for specific purposes in specific contexts.

These caveats are often overlooked, leading to the misuse of traditional statistical methods, sometimes with catastrophic consequences, as in the 2008 Global Financial Crisis or the Long-Term Capital Management blowup in 1998, which almost brought down the global financial system. Remember Mark Twain's famous quote: "Lies, damned lies and statistics."

Machine learning relies on the complete abandonment of the caution which should be associated with the judicious use of statistical methods. The real world is messy and chaotic, hence impossible to model using traditional statistical methods. So the answer from the world of AI is to drop any pretence at theoretical justification on why and how these AI models, which are many orders of magnitude more complicated than traditional statistical methods, should work.

Freedom from these principled constraints makes AI models more powerful. They are effectively elaborate and complicated curve-fitting exercises that empirically fit observed data without any understanding of the underlying relationships.
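The curve-fitting point can be made concrete with a toy sketch (illustrative only; the data, noise and polynomial here stand in for no particular AI system). A polynomial with as many parameters as observations reproduces every data point exactly, yet captures nothing of the underlying relationship, and fails badly the moment it is asked about a point outside the data it was fitted to:

```python
import math

# Ten noisy "observations" of an underlying sin() relationship.
xs = [i / 9 for i in range(10)]
ys = [math.sin(2 * math.pi * x) + 0.05 * (-1) ** i  # small measurement noise
      for i, x in enumerate(xs)]

def fitted_curve(xq):
    """Evaluate the unique degree-9 polynomial through all 10 points (Lagrange form)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (xq - xj) / (xi - xj)
        total += term
    return total

# In-sample, the fitted curve reproduces every observation exactly...
fit_err = max(abs(fitted_curve(x) - y) for x, y in zip(xs, ys))

# ...but just outside the data it diverges wildly from the true relationship.
extrap_err = abs(fitted_curve(2.0) - math.sin(2 * math.pi * 2.0))
print(fit_err, extrap_err)  # fit_err is ~0; extrap_err is enormous
```

The fit is perfect for the wrong reason: the polynomial has merely memorised the ten points, noise included, which is the sense in which elaborate curve-fitting differs from understanding.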

But it's also true that these AI models can sometimes do things that no other technology can do at all. Some outputs are astonishing, such as the passages ChatGPT can generate or the images that DALL-E can create. This is fantastic at wowing people and creating hype. The reason they work so well is the mind-boggling quantity of training data: enough to cover almost all text and images created by humans.

Even with this scale of training data and billions of parameters, the AI models don't work spontaneously but require kludgy ad hoc workarounds to produce desirable results.

Even with all the hacks, the models often develop spurious correlations. In other words, they work for the wrong reasons. For example, it has been reported that many vision models work by exploiting correlations pertaining to image texture, background, angle of the photograph and specific features. These vision AI models then give bad results in uncontrolled situations.

For example, a leopard-print sofa would be identified as a leopard. The models don't work when a tiny amount of fixed-pattern noise undetectable by humans is added to the images, or when the images are rotated, say in the case of a post-accident upside-down car. ChatGPT, for all its impressive prose, poetry and essays, is unable to do simple multiplication of two large numbers, which a calculator from the 1970s can do easily.
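The leopard-print-sofa failure is easy to reproduce in miniature. In the toy sketch below (the data records and the learned rule are entirely hypothetical, for illustration only), a classifier has latched onto a spurious cue, texture, rather than the subject itself, so it is right on typical photos for the wrong reason and wrong as soon as the cue appears without the animal:

```python
# Hypothetical "learned" rule: training photos of leopards all had spotted
# texture, so the model keys on texture rather than on the animal itself.
def learned_classifier(image):
    return "leopard" if image["texture"] == "spotted" else "not a leopard"

leopard_photo = {"subject": "leopard", "texture": "spotted"}
leopard_sofa = {"subject": "sofa", "texture": "spotted"}

print(learned_classifier(leopard_photo))  # "leopard": right, for the wrong reason
print(learned_classifier(leopard_sofa))   # "leopard": the spurious cue misfires
```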

The AI models do not have any level of human-like understanding but are great at mimicry, fooling people into believing they are intelligent by parroting the vast trove of text they have ingested. For this reason, computational linguist Emily Bender called large language models such as ChatGPT and Google's Bard and BERT "stochastic parrots" in a 2021 paper. Her Google co-authors Timnit Gebru and Margaret Mitchell were asked to take their names off the paper. When they refused, they were fired by Google.

This criticism is not just directed at the current large language models but at the entire paradigm of trying to develop artificial intelligence this way. We don't get good at things just by reading about them. That comes from practice, from seeing what works and what doesn't. This is true even for purely intellectual tasks such as reading and writing. Even in formal disciplines such as maths, one can't get good without practising.

These AI models have no purpose of their own. They, therefore, can't understand meaning or produce meaningful text or images. Many AI critics have argued that real intelligence requires social situatedness.

Doing physical things in the real world requires dealing with complexity, non-linearity and chaos. It also involves practice in actually doing those things. It is for this reason that progress has been exceedingly slow in robotics. Current robots can only handle fixed, repetitive tasks involving identical rigid objects, such as on an assembly line. Even after years of hype about driverless cars and vast amounts of funding for their research, fully automated driving still doesn't appear feasible in the near future.

Current AI development based on detecting statistical correlations using neural networks, which are treated as black boxes, promotes a pseudoscience-based myth of creating intelligence at the cost of developing a scientific understanding of how and why these networks work. Instead, they emphasise spectacles such as creating impressive demos and scoring in standardised tests based on memorised data.

The only significant commercial use cases of the current versions of AI are advertisements: targeting buyers for social media and video streaming platforms. This does not require the high degree of reliability demanded from other engineering solutions they just need to be good enough. Bad outputs, such as the propagation of fake news and the creation of hate-filled filter bubbles, largely go unpunished.

Perhaps a silver lining in all this is that, given the bleak prospects of AI singularity, the fear of super-intelligent malicious AIs destroying humankind is overblown. However, that is of little comfort to those at the receiving end of AI decision systems. We already have numerous examples of AI decision systems the world over denying people legitimate insurance claims, medical and hospitalisation benefits, and state welfare benefits.

AI systems in the United States have been implicated in sentencing minorities to longer prison terms. There have even been reports of the withdrawal of parental rights from minority parents based on spurious statistical correlations, which often boil down to their not having enough money to properly feed and take care of their children. And, of course, AI systems have been implicated in fostering hate speech on social media.

As noted linguist Noam Chomsky wrote in a recent article:

"ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation."

Bappa Sinha is a veteran technologist interested in the impact of technology on society and politics.

This article was produced by Globetrotter.

Support independent journalism: Subscribe to IA.


Genie won't go back in the bottle on AI, says security minister – Yahoo Finance UK


Calls to suspend or stop the development of artificial intelligence due to fears about the new technology are misguided, the security minister has suggested.

Tom Tugendhat, addressing the CyberUK conference in Belfast, said he understands fears about the potential danger of AI but added that "the genie won't go back in the bottle".

Italy said last month that it will temporarily block the artificial intelligence software ChatGPT amid global debate about the power of such new tools.

The AI systems powering such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.

There is also significant debate about the potential of the new technology, and Mr Tugendhat said the UK can become a leader in the area if the Government and private sector can work together.

However, he acknowledged that criminals and cyber attackers are aware of the uses of AI.

"Cyber attacks work when they find vulnerabilities. AI will cut the cost and complications of cyber attacks by automating the hunt for the chinks in our armour," he said.

"Already AI can confuse and copy, spreading lies and committing fraud."

"Natural language models can mimic credible news sources, pushing disingenuous narratives at huge scale, and AI image and video generation will get better."

The security minister also acknowledged the threat posed by Russia, as well as China's interest in AI.

"Given the stakes, we can all understand the calls to stop AI development altogether," he said. "But the genie won't go back in the bottle any more than we can write laws against maths."

"(Russian President Vladimir) Putin has a longstanding strategic interest in AI and has commented that whoever becomes leader in this sphere will rule the world."

"China, with its vast datasets and fierce determination, is a strong rival."


"But AI also threatens authoritarian controls. Other than the United States, the UK is one of only a handful of liberal democratic countries that can credibly lead the world in AI development."

"We can stay ahead, but it will demand investment and co-operation, and not just by government."

"As for the safety of the technology itself, it's essential that, by the time we reach the development of AGI (artificial general intelligence), we are confident that it can be safely controlled and aligned to our values and interests."

"Solving this issue of alignment is where our efforts must lie, not in some King Canute-like attempt to stop the inevitable but in a national mission to ensure that, as super-intelligent computers arrive, they make the world safer and more secure."

Mr Tugendhat followed several senior officials and ministers to have addressed the annual conference, which has been dominated by debates about the challenge posed by China and the cyber threat posed by Russian-aligned groups.

Lindy Cameron, head of the National Cyber Security Centre, warned earlier this week that more needs to be done to protect the UK from the threat posed by Russia-aligned cyber groups.

And Chancellor of the Duchy of Lancaster Oliver Dowden stressed the danger that a cyber equivalent of the Wagner group poses to critical infrastructure.


Some Glimpse AGI in ChatGPT. Others Call It a Mirage – WIRED


Sébastien Bubeck, a machine learning researcher at Microsoft, woke up one night last September thinking about artificial intelligence and unicorns.

Bubeck had recently gotten early access to GPT-4, a powerful text generation algorithm from OpenAI and an upgrade to the machine learning model at the heart of the wildly popular chatbot ChatGPT. Bubeck was part of a team working to integrate the new AI system into Microsoft's Bing search engine. But he and his colleagues kept marveling at how different GPT-4 seemed from anything they'd seen before.

GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input. But to Bubeck, the system's output seemed to do so much more than just make statistically plausible guesses.
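The statistical idea itself is simple, even though GPT-4 applies it at enormous scale with billions of parameters. As a drastically simplified sketch (a word-level bigram counter, nothing like the model's actual architecture), next-word prediction reduces to counting which word most often follows which in the training text:

```python
from collections import Counter, defaultdict

# Tiny stand-in "training corpus".
corpus = "the unicorn is a mythical creature and the unicorn has a single horn".split()

# Count, for each word, which words follow it and how often.
successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word):
    """Predict the statistically most likely next word."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "unicorn": the most common continuation in the corpus
```

GPT-4 conditions on long contexts rather than single words and learns its statistics in a neural network rather than a count table, but the objective, predicting plausible continuations, is the same in spirit.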


That night, Bubeck got up, went to his computer, and asked GPT-4 to draw a unicorn using TikZ, a relatively obscure programming language for generating scientific diagrams. Bubeck was using a version of GPT-4 that only worked with text, not images. But the code the model presented him with, when fed into TikZ rendering software, produced a crude yet distinctly unicorny image cobbled together from ovals, rectangles, and a triangle. To Bubeck, such a feat surely required some abstract grasp of the elements of such a creature. "Something new is happening here," he says. "Maybe for the first time we have something that we could call intelligence."
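The article does not reproduce the model's code, but TikZ output in that spirit, an image cobbled together from ovals, rectangles, and a triangle, looks something like the following illustrative reconstruction (not GPT-4's actual answer):

```latex
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  \draw (0,0) ellipse (1.2 and 0.7);                   % body: an oval
  \draw (1.4,0.9) ellipse (0.45 and 0.35);             % head: a smaller oval
  \draw (1.7,1.2) -- (1.9,1.9) -- (1.55,1.3) -- cycle; % horn: a small triangle
  \foreach \x in {-0.7,-0.3,0.4,0.8}
    \draw (\x,-0.6) rectangle (\x+0.18,-1.4);          % legs: thin rectangles
\end{tikzpicture}
\end{document}
```

The point of Bubeck's anecdote is that a text-only model, never shown an image, produced geometry of roughly this kind from a verbal request alone.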

How intelligent AI is becoming, and how much to trust the increasingly common feeling that a piece of software is intelligent, has become a pressing, almost panic-inducing, question.

After OpenAI released ChatGPT, then powered by GPT-3, last November, it stunned the world with its ability to write poetry and prose on a vast array of subjects, solve coding problems, and synthesize knowledge from the web. But awe has been coupled with shock and concern about the potential for academic fraud, misinformation, and mass unemployment, and fears that companies like Microsoft are rushing to develop technology that could prove dangerous.

Understanding the potential or risks of AI's new abilities means having a clear grasp of what those abilities are, and are not. But while there's broad agreement that ChatGPT and similar systems give computers significant new skills, researchers are only just beginning to study these behaviors and determine what's going on behind the prompt.

While OpenAI has promoted GPT-4 by touting its performance on bar and med school exams, scientists who study aspects of human intelligence say its remarkable capabilities differ from our own in crucial ways. The model's tendency to make things up is well known, but the divergence goes deeper. And with millions of people using the technology every day and companies betting their future on it, this is a mystery of huge importance.

Bubeck and other AI researchers at Microsoft were inspired to wade into the debate by their experiences with GPT-4. A few weeks after the system was plugged into Bing and its new chat feature was launched, the company released a paper claiming that in early experiments, GPT-4 showed "sparks of artificial general intelligence."

The authors presented a scattering of examples in which the system performed tasks that appear to reflect more general intelligence, significantly beyond previous systems such as GPT-3. The examples show that, unlike most previous AI programs, GPT-4 is not limited to a specific task but can turn its hand to all sorts of problems, a necessary quality of general intelligence.

The authors also suggest that these systems demonstrate an ability to reason, plan, learn from experience, and transfer concepts from one modality to another, such as from text to imagery. "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system," the paper states.

Bubeck's paper, written with 14 others, including Microsoft's chief scientific officer, was met with pushback from AI researchers and experts on social media. Use of the term AGI, a vague descriptor sometimes used to allude to the idea of super-intelligent or godlike machines, irked some researchers, who saw it as a symptom of the current hype.

The fact that Microsoft has invested more than $10 billion in OpenAI suggested to some researchers that the company's AI experts had an incentive to hype GPT-4's potential while downplaying its limitations. Others griped that the experiments are impossible to replicate, because GPT-4 rarely responds in the same way when a prompt is repeated and because OpenAI has not shared details of its design. Of course, people also asked why GPT-4 still makes ridiculous mistakes if it is really so smart.

Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, says Microsoft's paper "shows some interesting phenomena and then makes some really over-the-top claims." Touting that systems are highly intelligent encourages users to trust them even when they're deeply flawed, they say. Ringer also points out that while it may be tempting to borrow ideas from systems developed to measure human intelligence, many have proven unreliable and even rooted in racism.

Bubeck admits that his study has its limits, including the reproducibility issue, and that GPT-4 also has big blind spots. He says use of the term AGI was meant to provoke debate. "Intelligence is by definition general," he says. "We wanted to get at the intelligence of the model and how broad it is, that it covers many, many domains."

But for all of the examples cited in Bubeck's paper, there are many that show GPT-4 getting things blatantly wrong, often on the very tasks Microsoft's team used to tout its success. For example, GPT-4's ability to suggest a stable way to stack a challenging collection of objects (a book, four tennis balls, a nail, a wine glass, a wad of gum, and some uncooked spaghetti) seems to point to a grasp of the physical properties of the world that is second nature to humans, including infants. However, changing the items and the request can result in bizarre failures that suggest GPT-4's grasp of physics is not complete or consistent.

Bubeck notes that GPT-4 lacks a working memory and is hopeless at planning ahead. "GPT-4 is not good at this, and maybe large language models in general will never be good at it," he says, referring to the large-scale machine learning algorithms at the heart of systems like GPT-4. "If you want to say that intelligence is planning, then GPT-4 is not intelligent."

One thing beyond debate is that the workings of GPT-4 and other powerful AI language models do not resemble the biology of brains or the processes of the human mind. The algorithms must be fed an absurd amount of training data, a significant portion of all the text on the internet, far more than a human needs to learn language skills. The experience that imbues GPT-4, and things built with it, with smarts is shoveled in wholesale rather than gained through interaction with the world and didactic dialog. And with no working memory, ChatGPT can maintain the thread of a conversation only by feeding itself the history of the conversation over again at each turn. Yet despite these differences, GPT-4 is clearly a leap forward, and scientists who research intelligence say its abilities need further interrogation.
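The statelessness described above can be sketched in a few lines: the model itself remembers nothing between calls, so the chat application re-sends the entire transcript on every turn. The `fake_model` function below is an illustrative stand-in, not OpenAI's actual interface; its only job is to show that any "memory" lives in the transcript the caller keeps rebuilding.

```python
def fake_model(transcript: str) -> str:
    # Stand-in for a real language model: a real one would generate a
    # continuation of the transcript. This stub just reports how much
    # conversational context it was handed on this call.
    turns = transcript.count("User:")
    return f"(reply given {turns} user turn(s) of context)"

history = []  # the only "memory" lives here, outside the model

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # Every call re-feeds the full history, so earlier turns stay visible
    # to the model even though the model itself is stateless.
    reply = fake_model("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

first = chat("Hello")           # (reply given 1 user turn(s) of context)
second = chat("What did I say?")  # (reply given 2 user turn(s) of context)
```

Because the transcript grows every turn, this design also explains why long conversations eventually hit a model's context-length limit.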

A team of cognitive scientists, linguists, neuroscientists, and computer scientists from MIT, UCLA, and the University of Texas, Austin, posted a research paper in January that explores how the abilities of large language models differ from those of humans.

The group concluded that while large language models demonstrate impressive linguistic skill, including the ability to coherently generate a complex essay on a given theme, that is not the same as understanding language and how to use it in the world. That disconnect may be why language models have begun to imitate the kind of commonsense reasoning needed to stack objects or solve riddles. But the systems still make strange mistakes when it comes to understanding social relationships, how the physical world works, and how people think.

The way these models use language, by predicting the words most likely to come after a given string, is very different from how humans speak or write to convey concepts or intentions. The statistical approach can cause chatbots to follow and reflect back the language of users' prompts to the point of absurdity.

When a chatbot tells someone to leave their spouse, for example, it only comes up with the answer that seems most plausible given the conversational thread. ChatGPT and similar bots will use the first person because they are trained on human writing. But they have no consistent sense of self and can change their claimed beliefs or experiences in an instant. OpenAI also uses feedback from humans to guide a model toward producing answers that people judge as more coherent and correct, which may make the model provide answers deemed more satisfying regardless of how accurate they are.

Josh Tenenbaum, a contributor to the January paper and a professor at MIT who studies human cognition and how to explore it using machines, says GPT-4 is remarkable but quite different from human intelligence in a number of ways. For instance, it lacks the kind of motivation that is crucial to the human mind. "It doesn't care if it's turned off," Tenenbaum says. And he says humans do not simply follow their programming but invent new goals for themselves based on their wants and needs.

Tenenbaum says some key engineering shifts happened between GPT-3 and GPT-4 and ChatGPT that made them more capable. For one, the model was trained on large amounts of computer code. He and others have argued that the human brain may use something akin to a computer program to handle some cognitive tasks, so perhaps GPT-4 learned some useful things from the patterns found in code. He also points to the feedback ChatGPT received from humans as a key factor.

But he says the resulting abilities aren't the same as the general intelligence that characterizes human intelligence. "I'm interested in the cognitive capacities that led humans individually and collectively to where we are now, and that's more than just an ability to perform a whole bunch of tasks," he says. "We make the tasks, and we make the machines that solve them."

Tenenbaum also says it isn't clear that future generations of GPT would gain these sorts of capabilities unless some different techniques are employed. This might mean drawing from areas of AI research that go beyond machine learning. And he says it's important to think carefully about whether we want to engineer systems that way, as doing so could have unforeseen consequences.

Another author of the January paper, Kyle Mahowald, an assistant professor of linguistics at the University of Texas at Austin, says it's a mistake to base any judgments on single examples of GPT-4's abilities. He says tools from cognitive psychology could be useful for gauging the intelligence of such models. But he adds that the challenge is complicated by the opacity of GPT-4. "It matters what is in the training data, and we don't know. If GPT-4 succeeds on some commonsense reasoning tasks for which it was explicitly trained and fails on others for which it wasn't, it's hard to draw conclusions based on that."

Whether GPT-4 can be considered a step toward AGI, then, depends entirely on your perspective. Redefining the term altogether may provide the most satisfying answer. "These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is general, but we have to be a little bit less, you know, hysterical about what AGI means," says Noah Goodman, an associate professor of psychology, computer science, and linguistics at Stanford University.

Unfortunately, GPT-4 and ChatGPT are designed to resist such easy reframing. They are smart but offer little insight into how or why. What's more, the way humans use language relies on having a mental model of an intelligent entity on the other side of the conversation to interpret the words and ideas being expressed. We can't help but see flickers of intelligence in something that uses language so effortlessly. "If the pattern of words is meaning-carrying, then humans are designed to interpret them as intentional, and accommodate that," Goodman says.

The fact that AI is not like us, and yet seems so intelligent, is still something to marvel at. "We're getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self," Goodman says. "That, to me, is just fascinating."

See the original post:

Some Glimpse AGI in ChatGPT. Others Call It a Mirage - WIRED


Control over AI uncertain as it becomes more human-like: Expert – Anadolu Agency | English

Posted: at 11:42 am

ANKARA

Debates are raging over whether artificial intelligence, which has entered many people's lives through video games and is governed by human-generated algorithms, can be controlled in the future.

Beyond questions of ethical standards, it is unknown whether artificial intelligence systems that make decisions on people's behalf may pose a direct threat.

In everyday life, people use only limited and weak artificial intelligence: chatbots, driverless vehicles, and digital assistants that work with voice commands. It is debatable whether algorithms will progress to the level of superintelligence and go beyond merely emulating humans in the future.

According to some experts, the eventual rise of AI above human intelligence paints a positive picture for humanity; according to others, it is the beginning of a disaster.

Wilhelm Bielert, chief digital officer and vice president at Canada-based industrial equipment manufacturer Premier Tech, told Anadolu that the greatest unknown in artificial intelligence is artificial super intelligence, a form of AI that would exceed human intelligence and remains largely speculative among the experts who study it.

He said that while humans build and program algorithms today, the notion of artificial intelligence commanding itself in the future and acting like a living entity is still under consideration. Given the possible risks and rewards, Bielert highlighted the importance of society approaching AI development in a responsible and ethical manner.

Prof. Ahmet Ulvi Turkbag, a lecturer at Istanbul Medipol University's Faculty of Law, argues that one day, when computer technology reaches the level of superintelligence, it may want to redesign the world from top to bottom.

"The reason why it is called a singularity is that there is no example of such a thing until today. It has never happened before. You do not have a section to make an analogy to be taken as an example in any way in history because there is no such thing. It's called a singularity, and everyone is afraid of this singularity," he said.

Vincent C. Muller, professor of Artificial Intelligence Ethics and Philosophy at the University of Erlangen-Nuremberg, told Anadolu it is uncertain whether artificial intelligence will be kept under control, given that it has the capacity to make its own decisions.

"The control depends on what you want from it. Imagine that you have a factory with workers. You can ask yourself: are these people under my control? Now you stand behind a worker and tell the worker Look, now you take the screw, you put it in there and you take the next screw, and so this person is under your control," he said.

Artificial intelligence and the next generation

According to Bielert, artificial intelligence will have a complicated and multidimensional impact on society and future generations.

He noted that it is vital that society address potential repercussions proactively and guarantee that AI is created and utilized responsibly and ethically.

"Nowadays, if you look at how teenagers and younger children live, they live on screens," he said.

He said that artificial intelligence, which has evolved with technology, has profoundly affected the lives of young people and children.

Read the original here:

Control over AI uncertain as it becomes more human-like: Expert - Anadolu Agency | English


Bill Gates Challenges OpenAI to Train AI to Pass AP Biology Exam – Best Stocks

Posted: at 11:42 am

During a recent event in San Diego, technology mogul Bill Gates shared an inspiring challenge he posed to the Microsoft-backed OpenAI team last summer. Gates tasked OpenAI with training an artificial intelligence (AI) to pass an Advanced Placement (AP) Biology exam, a feat that requires more than just memorization of scientific knowledge. The challenge aimed to push the limits of AI's reading and writing abilities, which Gates believes have been lacking until recently, despite its remarkable achievements in speech and image recognition.

Gates envisions the development of super-intelligent AI as an inevitability in our future, and he firmly believes that AI will revolutionize the way we live and work. He sees ChatGPT, a large language model that OpenAI is working on, as a true breakthrough. Gates likened ChatGPT to the advent of the personal computer, and he believes that the development of AI is as groundbreaking as the creation of the microprocessor, personal computers, the internet, and mobile phones.

In a blog post, Gates wrote that AI will have a profound impact on the way people work, learn, travel, get health care, and communicate with each other. He is convinced that AI will change the world as we know it, and he is excited to see what the future holds. If OpenAI can make ChatGPT capable of answering questions it hasn't been specifically trained for, Gates believes that it will be a significant step forward in the development of AI.

On April 20, 2023, MSFT (Microsoft Corporation) opened at 285.99, down from the previous day's close of 288.37. The day's range was between 284.54 and 289.05, with a volume of 17,150,271 shares traded.

Microsoft Corp, or MSFT, has been a consistently strong performer in the technology sector for years. As of April 20, 2023, the median target price for MSFT stock among 42 analysts is $300.00, with a high estimate of $335.00 and a low estimate of $212.00. This represents a +4.01% increase from the last price of $288.44. Investment analysts have been bullish on MSFT for some time, with a current consensus among 49 polled analysts to buy stock in the company. This rating has held steady since March, indicating a strong and consistent outlook for MSFT. MSFT is expected to report earnings per share of $2.24 and sales of $51.1 billion on April 26, 2023. Overall, MSFT is a solid investment option for those looking to invest in the technology sector.

Here is the original post:

Bill Gates Challenges OpenAI to Train AI to Pass AP Biology Exam - Best Stocks


The jobs that will disappear by 2040, and the ones that will survive – inews

Posted: at 11:42 am

Video may have killed the radio star, but it is artificial intelligence that some predict will soon do away with the postie, the web designer, and even the brain surgeon.

With the rise of robots automating roles in manufacturing, and generative AI (algorithms, such as ChatGPT, that can create new content) threatening to replace everyone from customer service assistants to journalists, is any job safe?

A report published by Goldman Sachs last month warned that roughly two-thirds of posts are "exposed to some degree of AI automation" and that the tech could ultimately substitute up to a quarter of current work.

More than half a million industrial robots were installed around the world in 2021, according to the International Federation of Robotics, a 75 per cent increase in the annual rate over five years. In total, there are now almost 3.5 million of them.

60 per cent of 10,000 people surveyed for PwC's Workforce of the Future report think few people will have stable, long-term employment in the future. And in the book Facing Our Futures, published in February, the futurist Nikolas Badminton forecasts that every job will be automated within the next 120 years: translators by 2024, retail workers by 2031, bestselling authors by 2049 and surgeons by 2053.

But not everyone expects the human employee to become extinct. "I really don't think all our jobs are going to be replaced," says Abigail Marks, professor of the future of work at Newcastle University. "Some jobs will change, there will be some new jobs. I think it's going to be more about refinement."

Richard Watson, futurist-in-residence at the University of Cambridge Judge Business School, puts the probability at close to zero. "It's borderline hysteria at the moment," he says. "If you look back at the past 50 or 100 years, very, very few jobs have been fully eliminated."

Anything involving data entry or repetitive, pattern-based tasks is likely to be most at risk. "People who drive forklift trucks in warehouses really ought to retrain for another career," says Watson.

But unlike in previous revolutions, which only affected jobs at the lower end of the salary scale, such as lamplighters and switchboard operators, the professional classes will be in the crosshairs of the machines this time around.

Bookkeepers and database managers may be the first to fall, while what was once seen as a well-remunerated job of the future, the software designer, could be edged out by self-writing computer programs.

This may all fill you with dread, but the majority of us are optimistic about the future, according to the PwC research: 73 per cent described themselves as either excited or confident about the new world of work as it is likely to affect them, with 18 per cent worried and 8 per cent simply uninterested.

Research by the McKinsey Global Institute suggests that all workers will need skills that help them fulfil three criteria: the ability to add value beyond what can be done by automated systems; to operate in a digital environment; and to continually adapt to new ways of working and new occupations.

Watson thinks workers such as plumbers "who do very manual work that's slightly different every single time" will be protected, while "probably the safest job on the planet, pretty much, is a hairdresser. I know there's a hairdressing robot, but it's about the chat as much as the haircut. The other thing that I think is very safe indeed is management. Managing people is something that machines aren't terribly good at and I don't think they ever will be. You need to be human to deal with humans."

Marks can also offer reassurance to carers, nurses, teachers, tax collectors and police officers because "these are the foundations of a civilised society." And she predicts climate change will see us prize more environmentally based jobs, "so there's going to be much more of a focus on countryside management, flood management and ecosystem development." She adds: "Epidemiology is going to be a bigger thing. The pandemic is not going to be a one-off event."

Watson says it is important not to overlook the fundamental human needs that global warming is likely to put into sharper focus. "Water and air are the two most precious resources we've got. We might have water speculators or water traders in the future. If there's a global price for a barrel of water, they could be extremely well-paid."

He also suggests there could be vacancies for longevity coaches (who can help an ageing population focus on improving their healthspan, not just their lifespan), reality counsellors (to support younger people so used to living in a computer-generated universe that they struggle with non-virtual beings), human/machine relationship coaches (teaching older generations how to relate to their robots), data detectives (finding errors and biases in code and analysing black boxes when things go terribly wrong) and pet geneticists (aiding you to clone your cat or order a new puppy with blue fur).

And there may be a human version of this as well. "What if in the future I want Spock ears, can we do that without doing surgery, for my unborn children? It's not impossible. And if we did ever get to some kind of super-intelligence, where robots started to be conscious, which I think is so unlikely, you can imagine a robot rights lawyer, arguing for the rights of machines."

What will be the highest-paid roles? "I think people who are dealing with very large sums of money will always be paid large sums of money," says Watson. The same is true of high-end coders and lawyers, even if paralegals "are going to be replaced by algorithms."

"Funnily enough," he adds, "I think philosophy is an emerging job. I think we're going to see more philosophers employed in very large companies and paid lots of money because a lot of the new tech has ethical questions attached to it, particularly AI and genomics."

And among the maths, science and engineering, there could be space for artists to thrive, he predicts. "It is probably a ludicrous thought and will never happen, but I'd love to think that there will be money for the people who can articulate the human condition in the face of all this changing technology, so, incredibly good writers, painters and animators. And then there will be the metaverse architects."

In this brave new world, more power and money will be eaten up by the tech giants who own the algorithms that control almost every aspect of our lives. For Professor David Spencer, expert on labour economics at Leeds University Business School and author of Making Light Work: An End to Toil in the Twenty-First Century, this will make how we structure society and business even more crucial.

Trading

Water speculators or water traders could emerge as resources become scarce.

Health

Longevity coaches will help an ageing population to focus on improving their healthspan, not just their lifespan.

Mental health

Reality counsellors, who might support younger people so used to living in a computer-generated universe that they struggle with non-virtual beings.

Human/machine

Relationship coaches will teach older generations how to relate to their robots.

Technology

Data detectives will find errors and biases in code and analyse black boxes when things go wrong.

Pet geneticists

They will aid you to clone your cat or order a new puppy with blue fur.

AI philosophers

They will teach companies how to navigate the moral conundrums thrown up by technology developing at warp speed.

Metaverse architects

They'll build our new virtual environments.

"The goal should be to ensure that technology lightens work, in terms of hours and direct toil," he says, "but this will require that technology is operated under conditions where workers have more of a say over its design and use."

"Those who can own technology or have a direct stake in its development are likely to benefit most. Those without any ownership stakes are likely to lose out. This is why we need to talk more about ensuring that its rewards are equally spread. Wage growth for all will depend on workers collectively gaining more bargaining power, and this will depend on creating an economy that is more equal and democratic in nature."

Watson thinks politicians need to catch up fast. "Big tech should be regulated like any other business. If you've created an algorithm or a line of robots that is making loads of money, tax the algorithm, tax the robots, without a shadow of a doubt."

For employees stressed about the imminent disintegration of their careers, Marks argues that the responsibility lies elsewhere. "I don't think the onus should necessarily be on individuals, it should be on organisations and on educational establishments to ensure that people are prepared and future-proofed, and on government to make wise predictions and allocate resources where needed."

Watson points out that we need to upgrade an education system that is still teaching children "precisely the things that computers are inherently terribly good at, things that are based on perfect memory and recall and logic."

But he believes it would also be healthy if everybody did actively ponder on their future, and refine their skills accordingly. "I think employers are really into people that have a level of creativity and particularly curiosity these days, but I think also empathy, being a good person, having a personality. We don't teach that at school."

The advent of AI has led many, including those in the Green Party, to advocate for a universal basic income, a stipend given by the state to every citizen, regardless of their output. But Watson is not convinced that will be necessary or helpful.

"All of this technology is supposed to be creating this leisure society," he says. "Rather weirdly, it seems to make us busier, and it's really unclear as to why that's happened. I think, fundamentally, we like to be busy, we feel useful, it stops us thinking about the human condition. So I'm not sure we're going to accept doing next to nothing."

"The other thing is, I think it would be very bad for society. Work is really quite critical to people's wellbeing. There's a lot of rich people without jobs, and they're not happy. Work is really important to people in terms of socialisation and meaning and purpose and self-image."

"So in a lot of instances, governments should not be allowing technology to take over certain professions, or at least they shouldn't be completely eliminated, because that wouldn't be good for a healthy society."

The machines may be on the march, but don't put your feet up just yet.

Excerpt from:

The jobs that will disappear by 2040, and the ones that will survive - inews

