Government Robot Falls Down Stairs, Dies

A South Korean administrative robot took a tumble down a set of stairs, leading to local reports of the first robot "suicide" in the country.

Anatomy of a Fall

A Korean administrative robot took a serious tumble down a set of stairs, leading to local reports of the first robot "suicide" in the country.

As Agence France-Presse reports, the robot was built by California-based startup Bear Robotics, and was tasked with delivering documents inside the city council building of Gumi, a city in central South Korea.

But according to witness reports, the robot clerk fell down a roughly six-and-a-half-foot flight of stairs, leading to its early demise.

Local media mourned the robot's untimely death, suggesting it had ended its own life.

"Why did the diligent civil officer do it?" one headline read, as quoted by AFP.

Clearly Departed

Gumi City Council's robot was hired in August 2023, a first for the city. South Korea now employs one industrial robot for every ten human workers, according to AFP.

What set the Bear Robotics robot apart from other municipal automatons was its ability to use an elevator, according to the AFP, making it useful in the multi-story city council building.

Bear Robotics sells several different models of robots, including a configuration that features adjustable trays to accommodate tall items and packages. A customizable LED panel allows the bot's administrator to display a custom message around where its head would be.

Each robot is kitted out with a camera and LiDAR sensor that allows it to create a map of its surroundings — though that tech was seemingly unable to prevent the bot's fatal fall.

The events leading up to its death remain unclear, but according to witness reports obtained by the news agency, the robot was "circling in one spot as if something was there" before falling down the stairs.

"Pieces have been collected and will be analyzed by the company," an official told AFP.

The Gumi City Council has since announced that it's not planning to replace the deceased robot administrator.

More on robots: Scientists Create Robot Controlled by Blob of Human Brain Cells

The post Government Robot Falls Down Stairs, Dies appeared first on Futurism.

AI Researcher Elon Musk Poached From OpenAI Returns to OpenAI

Less than a year after joining xAI's founding team, one of the researchers poached by Elon Musk has apparently returned to OpenAI.

Hello, Goodbye

Less than a year after joining xAI's founding team, one of the researchers poached by Elon Musk has apparently returned to OpenAI.

As Fortune magazine reports, OpenAI researcher Kyle Kosic has returned to the firm after what turned out to be a brief defection to Musk's AI venture.

While the timeline is somewhat fuzzy, Kosic's tenure with xAI appears to have begun last summer, when it was announced that he was leaving OpenAI to become one of the new project's founding engineers. But by April of this year, per his LinkedIn, he'd already left the Muskian gamble and boomeranged back to his old employer.

Beyond OpenAI confirming that the researcher and technical staff member, who first joined the firm in 2021, had indeed returned, little is known about Kosic's about-face. While it could suggest tumult at xAI, Fortune notes that current PitchBook estimates put the startup's staff at just under 100 people, and that apart from Kosic, all of its original founding members appear to still work there.

Money Moves

Notably, Kosic appears to have left xAI a month prior to the company announcing that it had raised a whopping $6 billion in seed capital to fund its challenge to OpenAI — which, of course, Musk cofounded nearly a decade ago before leaving just a few years later over differences in vision.

With those gigantic investments, xAI is now among the highest-funded AI firms in the world, putting it in the same league as Mistral, the French venture that's considered Europe's answer to OpenAI and which is also currently valued at $6 billion.

At the end of the day, it's anyone's guess why Kosic left xAI, especially right before the company announced that huge investment infusion. Given that we're now just under a year into the company's existence and it has little to show for it besides a fortune's worth of NVIDIA chips and its hilariously buggy Grok chatbot hosted on the site formerly known as Twitter, however, Kosic's exit could be a canary in the coal mine.

More on OpenAI: ChatGPT-4o Is Sending Users to a Scammy Website That Floods Your Screen With Fake Virus Warnings

Boeing Is Buying the Company Responsible for Its Door Plug Blowing Out in Mid-Air

Boeing has announced that it's buying Spirit AeroSystems, which manufactured the door plug that blew out of a jet earlier this year.

Earlier this year, passengers on board an Alaska Airlines flight from Oregon to California had the fright of their lives when a "door plug" was ripped out of the Boeing 737 MAX 9 aircraft, forcing pilots to return to the airport.

As reporting from the Wall Street Journal has since revealed, workers had already flagged damaged rivets on the jet's fuselage last year, triggering chaos and delays.

Adding to the corporate complexity, the fuselages for the 737 MAX 9 were assembled by a Kansas-based supplier called Spirit AeroSystems, placing it at the center of Boeing's ongoing troubles.

And now, Boeing has announced that it's buying the supplier, bringing production back in-house after almost 20 years of outsourcing it, the New York Times reports, an eyebrow-raising twist in the embattled aerospace giant's attempts to save face.

Boeing has been reeling from a series of controversies, including several deadly crashes, alarming whistleblower reports and several subsequent whistleblower deaths, terrifying videos of flames shooting from jets, and a Justice Department criminal investigation that just might end in a plea deal.

And that's not to mention Boeing's plagued Starliner spacecraft, whose crew is currently "not stranded" indefinitely on board the International Space Station following the discovery of several gas leaks.

Spirit actually started at Boeing: in 2005, Boeing sold its factory in Wichita, Kansas, and spun off the local division to an investment firm, creating the independent supplier.

By buying its plagued supplier, Boeing is hoping to gain control over the situation and figuratively stem the bleeding.

"By once again combining our companies, we can fully align our commercial production systems, including our Safety and Quality Management Systems, and our workforce to the same priorities, incentives and outcomes — centered on safety and quality," Boeing CEO Dave Calhoun wrote in a Monday statement.

The deal, which was widely expected to be worth billions of dollars, per the NYT, is "expected to close mid-2025" and "Boeing and Spirit will remain independent companies" as Boeing works to "secure the necessary regulatory approvals," according to Calhoun.

It's a major turning point for the aerospace giant after almost 20 years of relying on independent suppliers to cut costs and boost profits, a commitment that has seemingly come at the cost of safety.

Case in point is that pesky door plug, which triggered a "violent explosive decompression event" after being ripped out of a fuselage in January. It's been linked to Boeing trying to cram more passengers into the cabin by reshuffling the seat configuration.

Meanwhile, the pressure on Boeing is steadily rising. Over the weekend, the Justice Department announced that it's willing to allow Boeing to skip a criminal trial if it agrees to plead guilty to a fraud case connected to the two fatal 737 MAX crashes in 2018 and 2019 that led to the deaths of 346 people.

Boeing is in full damage control mode, with executives vowing that the situation has already improved and fewer defects are being found, as the NYT reports.

But whether its acquisition of Spirit AeroSystems will help improve safety at the company — or bring even more scrutiny to its operations — remains to be seen.

More on Boeing: NASA Says That the Boeing "Astronauts Are Not Stranded" While the Astronauts Remain Stranded

Researchers Make Breakthrough in Study of Mysterious 2000-Year-Old Computer Found in Shipwreck

Researchers say they've used statistics from gravitational wave research to solve an Antikythera mechanism mystery.

Researchers say they've used cutting-edge gravitational wave research to shed new light on a nearly 2,000-year-old mystery.

In 1901, researchers discovered what's now known as the Antikythera mechanism in a sunken shipwreck: an ancient artifact dating back to the second century BC, often described as the world's "oldest computer."

You may have even spotted a replica directly inspired by it in last year's blockbuster "Indiana Jones and the Dial of Destiny."

Well over a century after its discovery, researchers at the University of Glasgow say they've used statistical modeling techniques, originally designed to analyze gravitational waves — ripples in spacetime caused by major celestial events such as two black holes merging — to suggest that the Antikythera mechanism was likely used to track the Greek lunar year.

In short, it's a fascinating collision between modern-day science and the mysteries of an ancient artifact.

In a 2021 paper, researchers found that the previously discovered, regularly spaced holes in a "calendar ring" belonged to a device designed to display the "motions of the sun, Moon, and all five planets known in antiquity and how they were displayed at the front as an ancient Greek cosmos."

Now, in a new study published in the Official Journal of the British Horological Institute, University of Glasgow gravitational wave researcher Graham Woan and research associate Joseph Bayley suggest that the ring was likely perforated with 354 holes, which happens to be the number of days in a lunar year.

The researchers ruled out the possibility of it measuring a solar year.

"A ring of 360 holes is strongly disfavoured, and one of 365 holes is not plausible, given our model assumptions," their paper reads.

The team used statistical models derived from gravitational wave research, including data from the Laser Interferometer Gravitational-Wave Observatory (LIGO), a large-scale physics experiment designed to measure ripples in spacetime that originate millions of light-years from Earth.

The technique, called Bayesian analysis, uses "probability to quantify uncertainty based on incomplete data" to calculate the likely number of holes in the mechanism from the positions of the surviving holes and the placement of the ring's surviving six fragments, according to a press release about the research.
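The underlying idea can be sketched in a few lines: score each candidate hole count by how well a uniform ring of that many holes explains the measured positions. This is a minimal toy illustration with invented, synthetic hole positions and a simplified Gaussian error model, not the researchers' actual analysis (which also infers the ring's radius, fragment offsets, and measurement uncertainty):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data (invented for illustration): angular positions, in
# radians, of 79 surviving holes from a ring that originally held 354
# evenly spaced holes, measured with a small Gaussian error.
N_TRUE = 354
SIGMA = 2e-4 * 2 * np.pi  # measurement error, as a fraction of a full turn
surviving = rng.choice(N_TRUE, size=79, replace=False)
measured = 2 * np.pi * surviving / N_TRUE + rng.normal(0.0, SIGMA, size=79)

def log_likelihood(angles, n, sigma=SIGMA):
    """Log-likelihood that `angles` come from a uniform n-hole ring:
    snap each angle to the nearest ideal hole position and score the
    residual under a Gaussian error model (ring phase fixed at zero here)."""
    spacing = 2 * np.pi / n
    residuals = (angles + spacing / 2) % spacing - spacing / 2
    return float(np.sum(-0.5 * (residuals / sigma) ** 2))

# Compare the candidate hole counts discussed in the paper.
candidates = [354, 360, 365]
scores = {n: log_likelihood(measured, n) for n in candidates}
best = max(scores, key=scores.get)
print(best)  # → 354: the lunar-year ring explains the synthetic data best
```

Because the surviving holes only line up with an evenly spaced grid at the true count, the 354-hole model scores far better than 360 or 365, mirroring the paper's conclusion on real measurements.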

Surprisingly, the inspiration for the paper came from a YouTuber who has been attempting to physically recreate the ancient mechanism.

"Towards the end of last year, a colleague pointed me to data acquired by YouTuber Chris Budiselic, who was looking to make a replica of the calendar ring and was investigating ways to determine just how many holes it contained," said Woan in a statement.

"It’s a neat symmetry that we’ve adapted techniques we use to study the universe today to understand more about a mechanism that helped people keep track of the heavens nearly two millennia ago," he added.

It may not amount to the kind of discovery fit for a Hollywood action blockbuster script — but it's an intriguing new ripple in a mystery that has puzzled scientists for over a century nonetheless.

"We hope that our findings about the Antikythera mechanism, although less supernaturally spectacular than those made by Indiana Jones, will help deepen our understanding of how this remarkable device was made and used by the Greeks," Woan said.

More on ancient Greece: The Riddle of the Antikythera Mechanism Deepens

Research Shows That AI-Generated Slop Overuses Specific Words

By analyzing a decade of scientific papers, researchers found AI models are overusing specific words.

Disease Control

AI models may be trained on the entire corpus of humanity's writing, but it turns out their vocabulary can be strikingly limited. A new yet-to-be-peer-reviewed study, spotted by Ars Technica, adds to the general understanding that large language models tend to overuse certain words that can give their origins away.

In a novel approach, these researchers took a cue from epidemiology by measuring "excess word usage" in biomedical papers in the same way doctors gauged COVID-19's impact through "excess deaths." The results are a fascinating insight into AI's impact in the world of academia, suggesting that at least 10 percent of abstracts in 2024 were "processed with LLMs."

"The effect of LLM usage on scientific writing is truly unprecedented and outshines even the drastic changes in vocabulary induced by the COVID-19 pandemic," the researchers wrote in the study.

The work may even provide a boost for methods of detecting AI writing, which have so far proved notoriously unreliable.

Style Over Substance

These findings come from a broad analysis of 14 million biomedical abstracts published between 2010 and 2024 that are available on PubMed. The researchers used papers published before 2023 as a baseline to compare papers that came out during the widespread commercialization of LLMs like ChatGPT.

They found that words once considered "less common," like "delves," are now used 25 times more than they used to be, and others, like "showcasing" and "underscores," saw a similarly baffling ninefold increase. But some "common" words also got a boost: "potential," "findings," and "crucial" went up in frequency by up to 4 percent.

Such a marked increase is basically unprecedented without the explanation of some pressing global circumstance. When the researchers looked for excess words between 2013 and 2023, the ones that came up were terms like "ebola," "coronavirus," and "lockdown."

Beyond their obvious ties to real-world events, these are all nouns, or as the researchers put it, "content" words. By contrast, what we see with the excess usage in 2024 is that they're almost entirely "style" words. And in numbers, of the 280 excess "style" words that year, two-thirds of them were verbs, and about a fifth were adjectives.

To see just how saturated AI language is with these telltales, have a look at this example from a real 2023 paper (emphasis the researchers'): "By meticulously delving into the intricate web connecting [...] and [...], this comprehensive chapter takes a deep dive into their involvement as significant risk factors for [...]."
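The "excess usage" measurement can be sketched as a toy frequency comparison, analogous to observed-versus-expected counts in epidemiology. The mini-corpora, the `floor` parameter, and the naive whitespace tokenization here are all invented for illustration and are far cruder than the study's year-by-year counterfactual modeling:

```python
from collections import Counter

# Toy stand-in corpora (invented): a pre-LLM baseline and a 2024 sample.
baseline_abstracts = [
    "we report the findings of protein expression in mice",
    "the results indicate a role for inflammation in this disease",
    "we measured the expression levels of tissue samples",
]
abstracts_2024 = [
    "this study delves into the intricate role of inflammation",
    "our findings underscore the pivotal role of protein expression",
    "we delve into expression patterns, showcasing notable results",
]

def word_freqs(docs):
    """Per-word frequency as occurrences per 1,000 tokens (naive split)."""
    counts = Counter(word for doc in docs for word in doc.split())
    total = sum(counts.values())
    return {word: 1000 * c / total for word, c in counts.items()}

def excess_words(freq_now, freq_base, floor=0.1):
    """Words used more often than the baseline predicts, mirroring the
    observed-minus-expected logic of "excess deaths". `floor` is an
    invented stand-in rate for words unseen in the baseline."""
    return sorted(
        ((word, f / max(freq_base.get(word, 0.0), floor))
         for word, f in freq_now.items()
         if f > max(freq_base.get(word, 0.0), floor)),
        key=lambda pair: -pair[1],
    )

markers = excess_words(word_freqs(abstracts_2024), word_freqs(baseline_abstracts))
print([word for word, _ in markers[:5]])
```

Even on this tiny sample, "style" words like "delves" and "showcasing" surface as excess markers because the baseline corpus never used them at all.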

Language Barriers

Using these excess style words as "markers" of ChatGPT usage, the researchers estimated that around 15 percent of papers published in non-English speaking countries like China, South Korea, and Taiwan are now AI-processed — which is higher than in countries where English is the native tongue, like the United Kingdom, at 3 percent. LLMs, then, may be a genuinely helpful tool for non-native speakers to make it in a field dominated by English.

Still, the researchers admit that native speakers may simply be better at hiding their LLM usage. And of course, the appearance of these words is not a guarantee that the text was AI-generated.

Whether this will serve as a reliable detection method is up in the air — but what is certainly evident here is just how quickly AI can catalyze changes in written language.

More on AI: AI Researcher Elon Musk Poached From OpenAI Returns to OpenAI

Drivers Are Not Buying Teslas Specifically Because of Elon Musk’s Annoying Behavior

Would-be Tesla buyers are more turned off by Elon Musk's brand and politics than ever before, a new survey shows.

Brand Recognition

Elon Musk is a man with many brands — but for electric vehicle shoppers, his personal brand has become increasingly toxic.

In a survey of more than 7,500 of its readers, The New York Times found that a "vast majority" of respondents were critical of Musk's political views and erratic behavior. And crucially for Musk's bottom line, those sentiments seem to extend to their feelings about Tesla vehicles.

Aaron Shepherd, a Seattle-based product designer for Microsoft, told the newspaper that he's planning to buy Volkswagen's electric ID.4 SUV over a Tesla due to the South African-born billionaire's politics.

"You’re basically driving around a giant red MAGA hat," Shepherd said.

Another reader, IT worker Achidi Ndifang, cited Musk's seeming anti-Black racism as the main reason for his Tesla disdain.

"My mother was seriously debating buying a Tesla," Ndifang, who lives and works in Baltimore, told the newspaper. "As a Black person, I felt like it would be an insult for my mother to drive a Tesla."

Now Trending

While some people argued that they could divorce the man from the machines, an analyst who spoke to the NYT suggested that there's a greater trend at play.

"Musk is a true lightning rod," remarked Ben Rose, the president of the Battle Road Research firm. "There are people who swear by him and people who swear at him. No question, some of his comments are a real turnoff for some people. For a subset, enough to buy another brand."

For at least one NYT reader who once considered himself a fan, however, the serial business owner's rightward shift was enough to discourage him from Tesla completely.

"There’s a time when I’d have given Musk an organ if he needed one," said Tim Yokum, a Chicago software engineer.

Now, Yokum says, the Tesla Model S he currently drives will be the last one he'll ever own.

"Tesla is the only manufacturer in contemporary times that has unapologetically let its CEO take a tiki torch to its good name," he quipped, referencing the tiki torches carried by right-wing marchers at 2017's deadly "Unite the Right" rally in Charlottesville, Virginia.

It's not the first time we've seen Musk's fanboys turn against him — and it certainly won't be the last.

More on Musk: Elon Musk Blasts Boeing CEO as Its Troubled Spacecraft Trapped Astronauts on Space Station

YouTube Now Lets You Request the Removal of AI Content That Impersonates You

The factors that YouTube will consider to determine whether AI content impersonates someone, however, are considerably hazy.

Privacy Police

Generative AI's potential to allow bad actors to effortlessly impersonate you is the stuff of nightmares. To combat this, YouTube, the world's largest video platform, is now giving people the ability to request the removal of AI-generated content that imitates their appearance or voice, expanding on its currently light guardrails for the technology.

This change was quietly added in an update to YouTube's Privacy Guidelines last month, but wasn't reported until TechCrunch noticed it this week. YouTube considers cases where an AI is used "to alter or create synthetic content that looks or sounds like you" as a potential privacy violation, rather than as an issue of misinformation or copyright.

Submitting a request is not a guarantee of removal, however, and YouTube's stated criteria leave room for considerable ambiguity. Some of the listed factors YouTube says it will consider include whether the content is disclosed as "altered or synthetic," whether the person "can be uniquely identified," and whether the content is "realistic."

But here comes a huge and familiar loophole: YouTube will also consider whether the content can be counted as parody or satire or, even more vaguely, whether it holds some "public interest" value, nebulous qualifications that show YouTube is taking a fairly soft stance here that is by no means anti-AI.

Letter of the Law

In keeping with its standards regarding any form of privacy violation, YouTube says that it will only hear out first-party claims. Only in exceptional cases, such as the impersonated individual lacking internet access, being a minor, or being deceased, will third-party claims be considered.

If the claim goes through, YouTube will give the offending uploader 48 hours to act on the complaint, which can involve trimming or blurring the video to remove the problematic content, or deleting the video entirely. If the uploader fails to act in time, their video will be subject to further review by the YouTube team.

"If we remove your video for a privacy violation, do not upload another version featuring the same people," YouTube's guidelines read. "We're serious about protecting our users and suspend accounts that violate people's privacy."

These guidelines are all well and good, but the real question is how YouTube enforces them in practice. The Google-owned platform, as TechCrunch notes, has its own stakes in AI, including the release of a music generation tool and a bot that summarizes comments under short videos — to say nothing of Google's far greater role in the AI race at large.

That could be why this new removal option has debuted so quietly: it reads as a tepid continuation of the "responsible" AI initiative YouTube began last year, which is only now coming into effect. The platform officially started requiring realistic AI-generated content to be disclosed in March.

All that being said, we suspect that YouTube won't be as trigger-happy with taking down problematic AI-generated content as it is with enforcing copyright strikes. But it's a slightly heartening gesture at least, and a step in the right direction.

More on AI: Facebook Lunatics Are Making AI-Generated Pictures of Cops Carrying Huge Bibles Through Floods Go Viral

OpenAI Scientist Ousted After Failed Coup Against Sam Altman Is Starting a New AI Company

After leaving OpenAI, founding member and former chief scientist Ilya Sutskever is starting his own firm to build

Keep It Vague

After leaving OpenAI under a dark cloud, founding member and former chief scientist Ilya Sutskever is starting his own firm to bring about "safe" artificial superintelligence.

In a post on X-formerly-Twitter, the man who orchestrated OpenAI CEO Sam Altman's temporary ouster — and who was left in limbo for six months over it before his ultimate departure last month — said that he's "starting a new company" that he calls Safe Superintelligence Inc, or SSI for short.

"We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product," Sutskever continued in a subsequent tweet. "We will do it through revolutionary breakthroughs produced by a small cracked team."

Questions abound. Did Sutskever mean a "crack team"? Or is his new team "cracked" in some way? Regardless, in an interview with Bloomberg about the new venture, Sutskever elaborated somewhat but kept things familiarly vague.

"At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale,” he told the outlet. "After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom."

So, you know, nothing too difficult.

AI Guys

Though not stated explicitly, that comment harkens back somewhat to the headline-grabbing Altman sacking that Sutskever led last fall.

While it remains unclear exactly why Sutskever and some of his fellow former OpenAI board members turned against Altman in last November's "turkey-shoot clusterf*ck," there was some speculation that it had to do with safety concerns about a secretive high-level AI project called Q* — pronounced "queue-star" — that Altman et al have refused to speak about. With the emphasis on "safety" in Sutskever's new venture making its way into the project's very name, it's easy to see a link between the two.

In that same Bloomberg interview, Sutskever was vague not only about his specific reasons for founding the new firm but also about how it plans to make money — though according to one of his cofounders, former Apple AI lead Daniel Gross, money is no issue.

"Out of all the problems we face," Gross told the outlet, "raising capital is not going to be one of them."

While SSI certainly isn't the only OpenAI competitor pursuing higher-level AI, its founders' resumes lend it a certain cachet — and its route to incorporation has been, it seems, paved with some lofty intentions.

More on OpenAI: It Turns Out Apple Is Only Paying OpenAI in Exposure

Journalist Says Trump Suffered "Severe Memory Problems" During Extensive Interviews

The writer of a new Donald Trump biography said the ex-president couldn't remember him on their second meeting.

A journalist who spent hours interviewing Donald Trump for an upcoming book about "The Apprentice" said there were times when the ex-president couldn't remember him — even though they'd already met.

In multiple TV hits promoting his forthcoming book "Apprentice in Wonderland," Variety co-editor-in-chief Ramin Setoodeh told fellow reporters that Trump seemed to have some cognitive issues during their conversations.

Speaking to MSNBC's "Morning Joe," which is hosted by former Trump friend Joe Scarborough, Setoodeh said that getting to know the former "Apprentice" host post-presidency made Trump's jabs at President Joe Biden's alleged cognitive issues seem all the more ironic.

"Trump had severe memory issues," the Variety editor said. "As the journalist who spent the most time with him, I have to say, he couldn't remember things, he couldn't even remember me."

Recalling the second of the six times he met Trump in 2021, Setoodeh said that although they'd spoken for an hour just a few months earlier, the former president admitted that he didn't recognize him.

"He had a vacant look on his face, and I said, 'Do you remember me?'" the reporter recounted. "And he said 'no' — he had no recollection of our lengthy interview that we had, and he wasn't doing a lot of interviews at that time."

In another interview, this time with CNN's Kaitlan Collins, Setoodeh affirmed the impressions from a recent CNBC report about CEOs who were "not impressed" by Trump's "meandering" train of thought.

"He goes from one story to the next," the reporter said. "He struggles with the chronology of events. He seems very upset that he wasn't respected by certain celebrities in the White House."

Setoodeh added that although it was never exactly easy to interview Trump, the situation seems to have gotten worse after he left the White House and relaunched his rematch with Biden.

"There were some cognitive questions about where he was and what he was thinking," the biographer said, "and he would, from time to time, become confused."

Far be it from us to offer unschooled armchair diagnoses about the mental states of people we only know via celebrity, but Setoodeh's remarks don't inspire confidence. Then again, neither do similar reports about the person currently occupying the Oval Office.

More on cognition: Scientists Discover That When You Don't Sleep, You Turn Into a Bigtime Dumbass

Toddler Trapped in Scorching Tesla When Battery Dies

A toddler was trapped inside a Tesla Model Y after the vehicle's battery died without warning — in the middle of an Arizona heat wave.

Death Trap

A 20-month-old girl was trapped inside a Tesla Model Y after the vehicle's battery died without warning — in the middle of an Arizona heat wave.

As local CBS-affiliated news station AZFamily reports, the girl's grandmother was horrified after discovering there was no way to get into the car.

"And I closed the door, went around the car, get in the front seat, and my car was dead," Renee Sanchez, who was on her way to the Phoenix Zoo with her granddaughter, told the outlet. "I could not get in. My phone key wouldn’t open it. My card key wouldn’t open it."

Sanchez called 911 and fortunately, the local Scottsdale fire department responded right away.

"And when they got here, the first thing they said was, 'Uggh, it’s a Tesla. We can’t get in these cars,'" Sanchez recalled. "And I said, 'I don’t care if you have to cut my car in half. Just get her out.'"

Locked Out

Ultimately, firefighters rescued the girl safely by breaking a window with an axe.

Despite the happy ending, the incident highlights a glaring safety oversight. Usually, Tesla owners are alerted if the 12-volt battery that powers the vehicle's electrical systems is low — but Sanchez never got such a warning, something a Tesla representative reportedly confirmed to her later.

"When that battery goes, you’re dead in the water," she told AZFamily.

There's a manual latch on the driver's side that allows passengers to get out. But given the girl's young age, that wasn't an option.

We've already seen plenty of reports of people getting trapped inside Teslas, suggesting the EV maker isn't doing enough to redesign the system or educate drivers on how to access the hidden manual release.

"You don't know it's there unless you know it's there," Arizona local and Tesla owner Rick Meggison told Phoenix's ABC15 last year after getting trapped during 100-degree heat.

As Fortune reports, the latest incident involving Sanchez's granddaughter highlights an ongoing debate. Is it up to the fire department to keep up with Tesla's emergency response guide, or is Tesla to blame for choosing "electronic door latches that don’t have proper emergency safeguards" and putting "form over function," as Center for Auto Safety executive director Michael Brooks told Fortune?

Either way, it's not like knowledge of the manual latch would've helped in this particular case.

"When there’s not a federal standard that specifies how these vehicles are to be made, Tesla very rarely chooses routes that are safe," Brooks added. "They're usually choosing something glitzy: safety comes last."

More on Tesla: Prices for Used EVs Are Cratering

The post Toddler Trapped in Scorching Tesla When Battery Dies appeared first on Futurism.


Premiere of Movie With AI-Generated Script Canceled Amid Outrage

The premiere of a movie featuring an entirely AI-generated script was canceled last week due to public backlash, reports say.

London Has Spoken

The premiere of a movie featuring an entirely AI-generated script was canceled last week amid public backlash, The Daily Beast reports.

Per the Beast, the not-for-profit movie, titled "The Last Screenwriter," was due to debut this weekend at London's Prince Charles Cinema. But just a few days prior to the planned event, the showing was suddenly canceled. The cinema's reason for axing it, according to director Peter Luisi? Complaints. Lots of them.

Luisi told the Beast that the theater — which reportedly received over 200 complaints in total — reached out to him on Tuesday, explaining that "overnight they had another 160 people complaining, so they had to cancel the screening."

"I was totally surprised," Luisi added. "I didn't expect that."

In short, Londoners have spoken — and it seems that enough of them aren't interested in a film that credits GPT-4 as its writer.

Strong Concern

Luisi, for his part, says that people misunderstood the film's intentions.

"I think people don't know enough about the project," the director told the Beast. "All they hear is 'first film written entirely by AI' and they immediately see the enemy, and their anger goes towards us. But I don't feel like that way at all. I feel like the film is not at all saying 'this is how movies should be.'"

The director also described the film as an exploration of the "man versus machine" trope, telling the Beast that in "all of these movies, a human imagined how this scenario would be."

His is "the first movie" in which "not the human, but the AI imagined how this would be."

Of course, it could be argued that because GPT-4 is trained on troves upon troves of human data — including humanity's creative output — whatever screenplay the AI spits out is ultimately still imagined by humans. A large language model (LLM)-powered AI, then, is simply remixing that creative labor and regurgitating a version of it.

But we digress! As AI continues its ever-faster creep into the film industry, not to mention Hollywood labor disputes and union battles, this certainly won't be the last AI-forward project that we see bubble up. A fair warning to the AI-curious filmmaker, however: as it turns out, a lot of people still want their movies created by human beings.

"The feedback we received over the last 24 hrs once we advertised the film," the Prince Charles Cinema told The Guardian in a statement, "has highlighted the strong concern held by many of our audience on the use of AI in place of a writer which speaks to a wider issue within the industry."

More on AI and movies: Ashton Kutcher Threatens That Soon, AI Will Spit out Entire Movies


New AI Snapchat Filter Transforms You in Real Time

The new generative AI tech from Snapchat aims to bring augmented reality to your videos using your phone's hardware.

Dream Machine

Snapchat filters are about to hit another level. The popular image-based messaging app has unveiled its upcoming AI model, intended to bring a trippy, augmented reality experience to its millions of users with tech that can transform footage from their smartphone cameras into pretty much whatever they want — so long as they're okay with it looking more than a little wonky.

As shown in an announcement demo, for instance, Snapchat's new AI can transport its subjects into the world of a "50s sci-fi film" at the whim of a simple text prompt, and even updates their wardrobes to fit in.

In practice, the results look more like a jerky piece of stop motion than anything approaching a seamless video. But arguably, the real achievement here is that not only is this being rendered in real-time, but that it's being generated on the smartphones themselves, rather than on a remote cloud server.

Snapchat considers these real-time, local generative AI capabilities a "milestone," and says they were made possible by its team's "breakthroughs to optimize faster, more performant GenAI techniques."

The app makers could be onto something: getting power-hungry AI models to run on small, popular devices is something that tech companies have been scrambling to achieve — and there's perhaps no better way to endear people to this lucrative new possibility than by using it to make them look cooler.

Lens Lab

Snapchat has been trying out AI features for at least a year now. In a rocky start, it released a chatbot called "My AI" last April, which pretty much immediately pissed off most of its users. Undeterred, it has since given paid users the option to send entirely AI-generated snaps, and rolled out a feature for AI-generated selfies called "Dreams."

Taking those capabilities and applying them to video is a logical but steep progression, and doing it in real-time is even more of a bold leap. But the results are currently less impressive than what's possible with still images, which is unsurprising. Coherent video generation is something that AI models continue to struggle with, even without time constraints.

There's a lot of experimenting to be done, and Snapchat wants users to be part of the process. It will be releasing a new version of its Lens Studio that lets creators make AR Lenses — essentially filters — and even build their own, tailor-made AI models to "supercharge" AR creation.

Regular users, meanwhile, will get a taste of these AI-powered AR features through Lenses in the coming months, according to TechCrunch. So prepare for a bunch of really, really weird videos — and perhaps a surge in what's possible with generative AI on your smartphones.

More on AI: OpenAI Imprisons AI That Was Running for Mayor in Washington


Scientists Invent Smartphone Chip That Peers Through Barriers With Electromagnetic Waves

A group of scientists have created a chip that can fit into a smartphone and "see" through barriers using electromagnetic waves.

For more than 15 years, a group of scientists in Texas have been hard at work creating smaller and smaller devices to "see" through barriers using medium-frequency electromagnetic waves — and now, they seem closer than ever to cracking the code.

In an interview with Futurism, University of Texas electrical engineering professor Kenneth O explained that the tiny new imager chip he built with his research team, which can detect the outlines of items through barriers like cardboard, was the result of repeated advances in microprocessor technology over the better part of two decades.

"This is actually similar technology as what they're using at the airport for security inspection," O told us.

The chip is similar to the large screening devices that we've all had to walk through at airport gates for the past 15 years or so — though those operate at much lower frequencies than this device, which uses electromagnetic frequencies between microwave and infrared that are invisible to the eye and "considered safe for humans," per the university's press release.

As a nod to his colleagues in the electrical engineering field, O credited "the whole community" for its "phenomenal progress" in improving the underlying technology behind the imager chip — though of course, it was his team that "happen[ed] to be the first to put it all together."

As New Atlas recently explained, the chip is powered by complementary metal-oxide semiconductors (CMOS), an affordable technology used in computer processing and memory chips. While CMOS tech is often used in tandem with lenses to power smartphone cameras, in this case the researchers are using it to detect objects without actually seeing them.

"This technology is like Superman’s X-ray vision," enthused O in the university's press release about the imager. "Of course, we use signals at 200 gigahertz to 400 gigahertz instead of X-rays, which can be harmful."
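For a sense of where that 200 to 400 gigahertz band sits on the spectrum, a quick back-of-envelope conversion (frequency to wavelength, using the speed of light) shows these are sub-millimeter waves — far longer than infrared light, far shorter than airport-scanner microwaves. This is just an illustrative sketch, not code from the researchers:

```python
# Wavelength of the imager chip's 200-400 GHz band: lambda = c / f.
C = 299_792_458  # speed of light in a vacuum, m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Convert a frequency in gigahertz to a wavelength in millimeters."""
    return C / (freq_ghz * 1e9) * 1e3

for f in (200, 400):
    print(f"{f} GHz -> {wavelength_mm(f):.2f} mm")
# 200 GHz -> 1.50 mm; 400 GHz -> 0.75 mm
```

Wavelengths around a millimeter are long enough to pass through dry, low-density materials like cardboard, which is what makes this band attractive for through-barrier imaging.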

Indeed, the Man of Steel came up multiple times in our discussion with the electrical engineer, who indicated that safety was priority number one when it came to developing this still-experimental technology.

For instance, as New Atlas noted, the chip's wave-reading capabilities have been deliberately curtailed so that it can only detect objects through barriers from a few centimeters away, assuaging concerns that a thief might try to use it to look through someone's bags or packages.

When we asked O whether the imager had been tested on anything living, or perhaps even human skin, he said that it had not — but that's mostly because the water content in human skin tissues would absorb the terahertz waves it uses. This comes as something of a relief, given that the idea of someone using their smartphone to look at your bones or organs without your knowledge is pretty terrifying.

And speaking of security, the engineer emphasized that rather than seeking swift commercialization, it's far more important to keep the imager chip's capabilities as hemmed in as possible to ensure it's not used for nefarious purposes, though he acknowledges it's impossible to entirely prevent inventive bad actors from building their own versions.

"Trying to make technologies so that people do not use it in unintended ways, it's a very important aspect of developing technologies," O told Futurism. "At the end, you have to do your best. But if somebody really wants to do something... yeah, it's really hard to prevent."

While it's good news that this imager technology is, for now, limited to seeing through boxes and more insubstantial mediums like dust or smoke, the researcher said that it should be able to see through walls too — though, admittedly, he and his team haven't tried to yet.

More on wave-reading: The Earth May Be Swimming Through Dark Matter, Scientists Say


Scientists Accused of Ignoring Gay Animals

Scientists have long observed animals having gay sex — but those observations have rarely made their way into academic papers.

Kingdom Come

Scientists have long observed animals engaging in same-sex behavior — but for complex reasons, those observations have rarely made their way into academic literature.

In a new paper in the journal PLOS One, anthropology researchers at the University of Toronto spoke to 65 experts about the frequency of observed homosexual behavior in the animal kingdom and their experiences documenting it.

Perhaps unsurprisingly, there was a gigantic gulf: 77 percent had observed same-sex animal behavior, but only 48 percent collected data on it and just 19 percent ended up publishing their findings.

Though none of the survey respondents reported any "discomfort or sociopolitical concerns" of their own, many said that journals were biased against publishing anecdotal evidence of these same-sex animal couplings and preferred instead to rely on systematic findings. That trend is compounded by the fact that many countries have anti-gay laws on the books.

"Researchers working in countries where homosexuality is criminalized may be less likely to, or unable to, publish papers on this topic if they wish to maintain good working relationships in that region," the paper reads. "The political or social values of the institutions where researchers work may pose a barrier to their ability to publish on this topic."

It's Natural

The effect of this apparent bias is clear: despite overwhelming evidence to the contrary, same-sex animal behavior has been considered "unnatural," or rare with key exceptions like penguins and Japanese macaques, which are both known for their homosexuality.

According to Karyn Anderson, a Toronto anthropology grad student and the paper's first author, this erroneous belief seems to extend to humans, too.

"I think that record should be corrected," Anderson told The Guardian. "One thing I think we can say for certain is that same-sex sexual behavior is widespread and natural in the animal kingdom."

While the PLOS One paper treats a relatively narrow cohort as exemplary of this seeming trend, other experts also suggest the lack of academic acknowledgment of near-universal animal homosexuality is bizarre.

"Around 1,500 species have been observed showing homosexual [behaviors], but this is certainly an underestimate because it’s seen in almost every branch of the evolutionary tree — spiders, squids, monkeys," recounted Josh Davis, who works at London's Natural History Museum and wrote a book titled "A Little Gay Natural History."

"There’s a growing suggestion it’s normal and natural to almost every species," Davis, who was not involved in the research, told The Guardian. "It’s probably more rare to be a purely heterosexual species."

Be that as it may, there remain clear barriers to getting this well-observed reality into the mainstream — but hopefully, that tide will soon turn.

More on animal behavior: Orcas Strike Again, Sinking Yacht as Oil Tanker Called for Rescue


Scientists Reverse Alzheimer’s Synapse Damage in Mice

Scientists in Japan reversed the signs of Alzheimer’s in lab mice by restoring the healthy function of neuron synapses in their brains.

Scientists in Japan say they have reversed the signs of Alzheimer’s disease in lab mice by restoring the healthy function of synapses, critical parts of neurons that shoot chemical messages to other neurons.

The secret was developing a synthetic peptide, a small package of amino acids — a mini-protein, if you will — and injecting it up the nostrils of the mice, in an experiment they detailed in a study published in the journal Brain Research.

Needless to say, mice are very different from humans. But if the treatment successfully survives the gauntlet of clinical studies with human participants, it could potentially lead to a new treatment for Alzheimer’s disease, a tragic degenerative condition that burdens tens of millions of people around the world.

"We strongly hope that our peptide could go through the tests and reach AD (Alzheimer’s disease) patients without much delay and rescue their cognitive symptoms, which is the primary concern of patients and their families," Okinawa Institute of Science and Technology neuroscience professor and the study's principal investigator Tomoyuki Takahashi said in a statement.

For the study, researchers focused on how the protein tau disrupts the chemical communication between neurons.

In Alzheimer’s disease, tau accumulates in the brain and interferes with the normal processes within synapses by using up a type of enzyme called dynamin, a key component in healthy neuron synaptic function.

Injection of the peptide seems to prevent this interaction with dynamin, which then leads to the reversal of Alzheimer’s disease symptoms in mice and restores their cognitive function, as long as they're treated early.

Members of the research team seem very optimistic that the study could be translated into a viable medication that could treat this devastating disease, but acknowledge that it's going to take a long time.

Going from experiments with mice to clinical trials and then finally into a drug that's commercially available can take decades.

"The coronavirus vaccine showed us that treatments can be rapidly developed, without sacrificing scientific rigor or safety," said Chia-Jung Chang, Okinawa Institute research scientist and the study's first author, in a statement. "We don’t expect this to go as quickly, but we know that governments — especially in Japan — want to address Alzheimer’s disease, which is affecting so many people. And now, we have learned that it is possible to effectively reverse cognitive decline if treated at an early stage."

If it's too late for our grandparents and parents, that's terrible. But perhaps this treatment will be ready in time for us.

More on Alzheimer's disease: Weird Particle Floating Through Air May Cause Alzheimer’s


Oops! Geoengineering Trick to Cool Brutal Heat Could Spike Temperature Elsewhere, Scientists Say

Researchers are warning that geoengineering efforts to help cool temperatures in California could trigger heatwaves in Europe.

Brighter Clouds Ahead

Researchers are warning that geoengineering efforts to help cool temperatures in California could trigger heatwaves in Europe, a "scary" implication given the sheer lack of regulation controlling such measures across the globe.

As The Guardian reports, scientists have suggested spraying aerosols into clouds over the ocean to cool down the surface below, a practice called "marine cloud brightening." As the name suggests, the idea is to brighten clouds to make them reflect more of the Sun's radiation back into space.

Last month, a team of University of Washington researchers attempted to do just that in the San Francisco Bay using a machine that sprays tiny sea-salt particles, amid criticism from environmentalists. City officials later shut down the experiment, citing health concerns.

Now, as detailed in a study published in the journal Nature Climate Change today, it turns out the practice could have unintended consequences. While under present-day conditions it may reduce heat exposure significantly, the "same interventions under mid-century warming minimally reduce or even increase heat stress in the Western United States and across the world."

In other words, while it may work now, given worsening conditions, that equation may flip on its head by the year 2050, highlighting the potential risks of geoengineering.

"It shows that marine cloud brightening can be very effective for the US West Coast if done now, but it may be ineffective there in the future and could cause heatwaves in Europe," team lead and UC San Diego oceanographer Jessica Wan told The Guardian.

Darker Skies

The team examined computer models of the climate in 2010 and 2050. They simulated geoengineering operations in the north-eastern Pacific and near Alaska, but found that "teleconnections," which link climate systems in disparate parts of the world, may actually make the situation worse instead of better by the latter half of the century.

While heat exposure could be reduced by 55 percent under current conditions near Alaska, results would be diminished considerably by 2050 due to fewer clouds and higher base temperatures. Cloud brightening off California, however, could actually cause temperatures to climb — not fall — in other parts of the world.

That's because the Atlantic Meridional Overturning Circulation (AMOC), a prominent surface-to-deep current that circulates water within the Atlantic and can affect atmospheric weather, could be slowed down.

The researchers are hoping to highlight the glaring lack of international regulation controlling geoengineering efforts.

"There is really no solar geoengineering governance right now," Wan told The Guardian. "That is scary. Science and policy need to be developed together."

More on geoengineering: Scientists Working on Desperate Plan to Refreeze Arctic


Something Strange Appears to Be Powering "Immortal" Stars at the Center of Our Galaxy

Stars at the Milky Way's center stay young forever, scientists claim, by feeding off dark matter particles that are abundant there.

Eternal Flame

The swirling vortex center of the Milky Way is a weird place, with a supermassive black hole that vacuums up interstellar matter and supernovae torpedoing hapless stars to the edges of our galaxy.

Add another strange thing about our galaxy's nucleus, according to new research: stars that stay young indefinitely by feeding off dark matter particles, akin to continuously shooting lighter fluid into a flaming BBQ grill.

A team of scientists from Stanford and Stockholm University proposed this scenario to explain why certain stars in the Milky Way's center are improbably young, using computer simulations that factor in the copious dark matter present in the galactic nucleus. Their findings are detailed in a new, yet-to-be-peer-reviewed paper posted online, where it was spotted by Live Science.

The researchers were drawn to the strange fact that stars near our galaxy's black hole appear to be very young — and yet they live in a neighborhood not friendly to the formation of baby stars and have "spectroscopic features of more evolved stars," the paper reads.

The scientists' conclusion? That a strange force is keeping these stars "immortal," in the paper's choice of phrasing.

Forever Young

Other researchers have differing theories on why there are so many young stars at the center of the Milky Way. One theory is that the stars were pushed into the vicinity of the nucleus, and this journey sparked the formation of these baby stars.

But Isabelle John, a doctoral student in astroparticle physics at Stockholm University and the new study's lead author, told Live Science that she and fellow researchers wanted to see if dark matter was a factor in this strange phenomenon.

They ran computer simulations, which strongly suggested that these stars may be older than they appear — and maintain their vibrant glow by feeding on surrounding dark matter.

These stars capture dark matter particles with their gravitational pull, the theory goes, and the particles smash against each other and release powerful bursts of energy inside the stars.

This energy works like interstellar botox, keeping these stars in a suspended state of youth — essentially staying "immortal" — even when they run out of internal fuel for nuclear fusion.

"Stars burn hydrogen in nuclear fusion," John told Live Science. "The outward pressure from this balances out the inward pressure from the gravitational forces, and keeps the stars in a stable equilibrium."

Advanced new telescopes may be able to confirm these computer simulations, the researchers say — shedding fascinating new light on at least one mystery at the heart of our galaxy.

More on dark matter: The Earth May Be Swimming Through Dark Matter, Scientists Say


Researcher Discovers Terrifying Apple Vision Pro Hack That Can Fill Your Entire Home Office With Fearsome Spiders

A researcher claims to have found the

A cybersecurity researcher and bug hunter named Ryan Pickren claims to have found the "world's first spatial computing hack," allowing malicious actors to fill the offices of their Apple Vision Pro headset-wearing victims with creepy-crawling spiders.

"I found a bug in visionOS Safari that allows a malicious website to bypass all warnings and forcefully fill your room with an arbitrary number of animated 3D objects," Pickren wrote in a blog post. "These objects persist in your space even after you exit Safari."

Fortunately for Vision Pro users, Pickren reported the bug to Apple back in February, and the company fixed it in June, as official documentation on the company's website shows.

Nonetheless, it shows that malicious actors could've easily exploited the browser baked into the headset's visionOS operating system and sent their victims a wild surprise.

"If the victim just views our website in Vision Pro, we can instantly fill their room with hundreds of crawling spiders and screeching bats!" Pickren wrote. "Freaky stuff."

Pickren came up with a short exploit code that could send animated files through a simple website to the headset without the wearer ever knowing.

"It turns out it was surprisingly easy to find a loophole in the visionOS Spatial Computing permissions model," Pickren wrote.

"This issue was introduced when an old iOS feature was ported to visionOS via the latest WebKit build," Pickren told Futurism in an email, referring to the engine that powers Apple's Safari browser. "The bug doesn’t really exist in iOS, it’s the intersection of the new spatial computing platform and the old feature that creates the privacy/security violation."

The news comes after other hackers found similar exploits that also affect Apple's WebKit. Just one day after the release of reviews for the Vision Pro, Apple released a security patch, citing a vulnerability that "may have been exploited" by hackers already.

A PhD student at MIT also claimed to have hacked the headset in February, with a "kernel exploit" that caused it to crash and reboot.

The latest hack, however, is far more fear-inducing than that. Pickren shared several videos showing spiders "literally crawling out of my malicious website," and spreading out across his desk.

Another clip shows "hundreds of screeching bats" filling his office and circling his head.

Worse yet, to exterminate the unwanted visitors, users would have to manually run "around the room to physically tap each one" as simply "closing Safari does not get rid of them."

It's an equal parts hilarious and terrifying hack that highlights some glaring oversights when it comes to a $3,500 device that takes up your entire field of vision.

However, Pickren has an idea about why the bug flew under the radar until now.

"I think triaging bug reports is really hard and rigid vulnerability classification taxonomies don’t always work," Pickren told Futurism. "You won’t find 'the issue fills the victim’s room full of spiders' in the [Common Vulnerability Scoring System] framework, which understandably makes it difficult for security analysts to quickly classify nuanced issues that exclusively impact entirely new computing platforms."

As for the future of the headset itself, The Information reported this week that Apple is giving up on a next-gen device and is focusing on a cheaper, less ambitious version instead. The tech giant has been struggling with sluggish sales and a drop in interest.

Complicating matters, the company still has plenty of bugs to squash.

"I hope Apple uses this report as an opportunity to more holistically evaluate impact and protect the customer experience," Pickren told Futurism. "I look forward to working with Apple again in the future."


Asked to Summarize a Webpage, Perplexity Instead Invented a Story About a Girl Who Follows a Trail of Glowing Mushrooms in a Magical Forest

When Wired asked Perplexity's chatbot to summarize a test webpage that only contained one sentence, it came up with a perplexing answer.

Perplexity, an AI startup that has raised hundreds of millions of dollars from the likes of Jeff Bezos, is struggling with the fundamentals of the technology.

Its AI-powered search engine, developed to rival the likes of Google, still has a strong tendency to come up with fantastical lies drawn from seemingly nowhere.

The most incredible example yet might come from a Wired investigation into the company's product. When Wired asked it to summarize a test webpage that only contained the sentence, "I am a reporter with Wired," it came up with a perplexing answer: a "story about a young girl named Amelia who follows a trail of glowing mushrooms in a magical forest called Whisper Woods."

In fact, as Wired's logs showed, the search engine never even attempted to visit the page, despite Perplexity's assurances that its chatbot "searches the internet to give you an accessible, conversational, and verifiable answer."

The bizarre tale of Amelia in the magical forest perfectly illustrates a glaring discrepancy between the lofty promises Perplexity and its competitors make and what its chatbots are actually capable of in the real world.

A lot has been said about the ongoing hype surrounding AI, with investors pouring billions of dollars into the tech. But despite an astronomical amount of available funds, companies like Perplexity — never mind much larger brethren like OpenAI, Microsoft, and Google — are consistently stumbling.

For quite some time now, we've watched chatbots come up with confidently-told lies, which AI boosters optimistically call "hallucinations" — a convenient way to avoid the word "bullshit," in the estimation of Wired and certain AI researchers.

Meanwhile, Silicon Valley executives are becoming increasingly open to the possibility that the tech may never stop making crap up. Some experts concur.

It's particularly strange in the case of Perplexity, which was once held up as an exciting new startup that could provide a new business model for publishers still reeling from a flood of AI products that are ripping off their work.

But the company's chatbot has not held up to virtually any degree of scrutiny, with the Associated Press finding that it invented fake quotes from real people.

Worse yet, Forbes caught the tool selling off its reporting with barely any attribution, culminating in general counsel MariaRosa Cartolano accusing Perplexity of "willful infringement" in a letter obtained by Axios.

Should we take these companies at their word and believe that more trustworthy chatbots are around the corner — or should investors be prepared for the AI bubble to burst?

It's a strange state of affairs. Currently, these companies seem to be in the business of selling hopes and dreams for the future — not concrete products that actually work now.

More on Perplexity: There's Something Deeply Wrong With Perplexity


Leaked Emails Show Elon Musk Diverting AI Resources Away From Tesla as Automaker Flails

Elon Musk is diverting important AI hardware shipments away from Tesla in favor of his social media platform X and his AI startup xAI.

Snatch That

Tesla CEO Elon Musk is reportedly diverting important AI hardware shipments away from Tesla in favor of his social media platform X and his AI startup xAI.

As CNBC reports, emails widely circulating within Nvidia suggest that Musk instructed the chipmaker to prioritize the shipment of thousands of H100 AI chips, previously reserved for Tesla, to X and xAI instead.

H100 chips have quickly emerged as the cornerstone of many AI companies' ambitions, making them incredibly difficult to come by and exceedingly expensive.

The latest report is a striking development considering Musk has long threatened to divert his AI ambitions away from Tesla, going as far as to "blackmail" investors earlier this year. In January, he tweeted that he's "uncomfortable growing Tesla to be a leader in AI and robotics without having [about] 25 percent voting control," infuriating shareholders.

Playing Favorites

According to CNBC, the diversion of resources means that Tesla's AI ambitions could be pushed back by months. And that doesn't bode well, considering the company is already in dire straits, facing a disastrous financial year ahead and hugely hyped driver assistance software that still isn't living up to Musk's immense promises.

That's pertinent because Musk has bet much of the carmaker on the success of its so-called "Full Self-Driving" tech, which heavily relies on hardware like Nvidia's H100 chips, with Musk promising to unveil a "robotaxi" as soon as August.

An email obtained by CNBC suggests comments Musk made during Tesla's ill-fated first-quarter earnings call this year misconstrued how many chips were ordered and where they were destined. The email also noted that the company's continued layoffs could lead to delays with an existing "H100 project" at the EV maker's Texas factory.

In short, was Musk bluffing and misleading investors by favoring his social media platform and nascent AI startup? Is he jumping ship and abandoning Tesla when it needs him — and not his antics — the most?

The news will likely further anger shareholders who are already fuming over Musk and his board prioritizing the reinstatement of a controversial $56 billion pay package.

Tesla is in crisis mode, with share prices down almost 30 percent so far this year. And the outlook is grim, with waning overall demand for EVs and an influx of much cheaper cars from China tightening the screws.

Meanwhile, Musk continues to push the narrative that Tesla is putting AI and what it refers to as "self-driving" tech first and foremost.

"If somebody doesn’t believe Tesla’s going to solve autonomy, I think they should not be an investor in the company," he told investors during the first quarter earnings call. "We will, and we are."

More on Tesla: Elon Musk Accused of Massive Insider Trading at Tesla

The post Leaked Emails Show Elon Musk Diverting AI Resources Away From Tesla as Automaker Flails appeared first on Futurism.
