Weight Watchers Goes Bankrupt After Rise of Ozempic-Like Drugs

Ozempic and other drugs like it have been threatening the established diet industry since they premiered — and it appears that Weight Watchers is now getting the axe.

In a note to investors, the long-running weight loss company said it is taking the "strategic action" of filing for bankruptcy in hopes of consolidating its immense $1.15 billion in debt.

The move comes nearly eighteen months after Oprah Winfrey, a Weight Watchers investor who served as the celebrity face and body of the diet company, admitted that she had started taking weight loss drugs like Novo Nordisk's Ozempic and Wegovy.

Just a few months after that, Winfrey announced that she was exiting WW's board of directors and questioned the company's purpose alongside the advent of glucagon-like peptide-1 (GLP-1) receptor agonist drugs, the class of medications to which Ozempic and other popular weight loss injectables belong. The drugs mimic a gut hormone that promotes a feeling of fullness, lowering blood sugar in diabetics and curbing overeating in non-diabetic overweight people.

Though WW mentioned neither Ozempic nor the class of drugs it belongs to in its bankruptcy statement, its specter haunted the announcement — especially because the company is apparently looking to expand its telehealth services.

Back in October, WW announced that it would be offering compounded versions of semaglutide, the GLP-1 behind Novo Nordisk's Ozempic and Wegovy. That branded compounding market, however, is both clogged with companies looking to cash in on the GLP-1 craze and, more importantly, rife with safety and quality control concerns.

Given that in-person meetings were once considered the company's main value proposition — one that had already been existentially threatened by the rise of fitness apps years before the GLP-1 revolution — it's not exactly surprising that WW's compounded GLP-1 gambit hasn't taken off as the company may have liked.

Confirmation of WW's Chapter 11 filing, which will not end its operations but rather help it reorganize its structure and consolidate its debt, follows leaks to the Wall Street Journal last month that the company was preparing the move.

In a WSJ interview following the confirmation of the bankruptcy news, artist and veteran WW member Naomi Nemtzow expressed her frustration at the company's pivot to telehealth and weight loss meds. As the New York Times documented back in 2023, the company abruptly ended its meetings in her Brooklyn neighborhood, leaving a void that she and her fellow former members had to fill themselves.

"Basically they gave up on the kind of work they had been doing and went on to selling Ozempic. They jumped on that bandwagon," the 75-year-old artist told the newspaper. "It’s become a quick fix, a fashion thing."

Critical impressions aside, Nemtzow's irked opinion does seem to indicate that WW is now suffering for its attempts to keep pace with the latest weight loss trend instead of doubling down on its bread and butter.

Then again, Weight Watchers was also a huge part of the fad diet craze in the 1990s and 2000s — so maybe, this is nature taking its course.

More on weight loss drugs: Human Experiments on GLP-1 Pill Looking Extremely Promising

The post Weight Watchers Goes Bankrupt After Rise of Ozempic-Like Drugs appeared first on Futurism.

Emails Show Elon Musk Begging for Privacy While Siccing His 200 Million Twitter Followers on Specific Private People He Doesn’t Like

Billionaire Elon Musk has demonstrated an extreme level of disregard for other people's privacy, with a long track record of singling out specific private individuals and siccing his lackeys on them.

But when it comes to his own privacy, it's an entirely different matter.

It's a glaring double standard, with the mercurial CEO repeatedly trying to protect his own privacy at all costs. Case in point: as the New York Times reports, his staff tried to keep the construction of a ludicrously tall fence and gate at his $6 million mansion in Austin, Texas, hidden from the public.

Emails obtained by the newspaper show that Musk's handlers tried to make public meetings allowing neighbors to speak out about his plans private instead. His staff also argued that the city of Austin should exempt him from state and federal public records laws, efforts that ultimately proved futile.

The Zoning and Planning Commission ultimately voted to deny Musk the exceptions he was asking for to turn his mansion into a Fort Knox of billionaire quietude.

Yet while he goes to extreme lengths to keep his own affairs private, Musk's track record of invading other people's privacy — often using his enormous 200 million follower base to make other people's lives miserable — is extensive, to say the least.

In February, the billionaire was accused of publicizing the occupation of Judge John McConnell's daughter to his hundreds of millions of followers after her father unfroze the Department of Education's federal grants.

Musk has also accused Wall Street Journal reporter Katherine Long of being a "disgusting and cruel person," after she reported on how Musk had armed a severely underqualified 25-year-old to infiltrate the US Treasury's payments system earlier this year.

In 2022, Musk took to Twitter to send his lackeys after Duke University professor and automation expert Missy Cummings for allegedly being "extremely biased against Tesla."

Late last year, Musk extensively bullied US International Development Finance Corporation employee Ashley Thomas on X-formerly-Twitter, resulting in major harassment by his followers on the platform.

But his ability to take criticism — much of it deserved, considering his actions — has been abysmal.

"It’s really come as quite a shock to me that there is this level of, really, hatred and violence from the Left," Musk whined during a Fox News interview in March after his gutting of the government and embrace of extremist views inspired a major anti-Tesla movement.

"I’ve never done anything harmful," he claimed. "I’ve only done productive things."

"My companies make great products that people love and I’ve never physically hurt anyone," Musk complained in a tweet at the time. "So why the hate and violence against me?"

More on Musk: Elon Musk Is Having Massive Drama With His Mansion's Neighbors

Nonverbal Neuralink Patient Is Using Brain Implant and Grok to Generate Replies

The third patient of Elon Musk's brain computer interface company Neuralink is using the billionaire's foul-mouthed AI chatbot Grok to speed up communication.

The patient, Bradford Smith, who has amyotrophic lateral sclerosis (ALS) and is nonverbal as a result, is using the chatbot to draft responses on Musk's social media platform X.

"I am typing this with my brain," Smith tweeted late last month. "It is my primary communication. Ask me anything! I will answer at least all verified users!"

"Thank you, Elon Musk!" the tweet reads.

As MIT Technology Review points out, the strategy could come with some downsides, blurring the line between what Smith intends to say and what Grok suggests. On one hand, the tech could greatly facilitate his ability to express himself. On the other hand, generative AI could be robbing him of a degree of authenticity by putting words in his mouth.

"There is a trade-off between speed and accuracy," University of Washington neurologist Eran Klein told the publication. "The promise of brain-computer interface is that if you can combine it with AI, it can be much faster."

Case in point, while replying to X user Adrian Dittmann — long suspected to be a Musk sock puppet — Smith used several em-dashes in his reply, a symbol frequently used by AI chatbots.

"Hey Adrian, it’s Brad — typing this straight from my brain! It feels wild, like I’m a cyborg from a sci-fi movie, moving a cursor just by thinking about it," Smith's tweet reads. "At first, it was a struggle — my cursor acted like a drunk mouse, barely hitting targets, but after weeks of training with imagined hand and jaw movements, it clicked, almost like riding a bike."

Perhaps unsurprisingly, generative AI did indeed play a role.

"I asked Grok to use that text to give full answers to the questions," Smith told MIT Tech. "I am responsible for the content, but I used AI to draft."

However, he stopped short of elaborating on the ethical quandary of having a potentially hallucinating AI chatbot put words in his mouth.

Muddying matters even further is Musk's control of Neuralink, Grok maker xAI, and X-formerly-Twitter. In other words, could the billionaire be influencing Smith's answers? The fact that Smith is nonverbal makes that a difficult line to draw.

Nonetheless, the small chip implanted in Smith's head has given him an immense sense of personal freedom. Smith has even picked up sharing content on YouTube. He has uploaded videos he edits on his MacBook Pro by controlling the cursor with his thoughts.

"I am making this video using the brain computer interface to control the mouse on my MacBook Pro," his AI-generated and astonishingly natural-sounding voice said in a video titled "Elon Musk makes ALS TALK AGAIN," uploaded late last month. "This is the first video edited with the Neuralink and maybe the first edited with a BCI."

"This is my old voice narrating this video cloned by AI from recordings before I lost my voice," he added.

The "voice clone" was created with the help of startup ElevenLabs, which has become an industry standard for those suffering from ALS, and can read out his written words aloud.

But Smith's reliance on tools like Grok and OpenAI's ChatGPT to speak again raises some fascinating questions about true authorship and freedom of self-expression for those who have lost their voice.

And Smith was willing to admit that sometimes, the ideas of what to say didn't come directly from him.

"My friend asked me for ideas for his girlfriend who loves horses," he told MIT Tech. "I chose the option that told him in my voice to get her a bouquet of carrots. What a creative and funny idea."

More on Neuralink: Brain Implant Companies Apparently Have an Extremely Dirty Secret

Trump’s Deportation Airline Just Got Hacked by Anonymous

An Anonymous hacker has allegedly defaced GlobalX Air's website, accessed pilot software, and deleted sensitive company data.

If the Trump administration won't listen to federal judges, maybe they'll listen to Anonymous.

The infamous hacking collective is reportedly responsible for cracking into the website of GlobalX Air, the airline chosen by US Immigration and Customs Enforcement (ICE) to conduct sweeping deportations of migrants and citizens alike. GlobalX was chartered to fly hundreds of people in ICE's swift and forceful deportations to a notorious prison in El Salvador — despite a federal judge ruling the extraditions illegal.

As first reported by 404 Media, an Anonymous hacker defaced GlobalX's website, leaving a message alongside an image of the group's traditional Guy Fawkes mask logo, decked out in the stars and stripes of the US flag.

"Anonymous has decided to enforce the Judge's order since you and your sycophant staff ignore lawful orders that go against your fascist plans," the vandalized website reads. "You lose again Donny."

More substantially, the hacker allegedly snagged "flight records and passenger manifests of all of [GlobalX's] flights, including those for deportation," according to 404, which was among the news groups Anonymous solicited to obtain the data.

The flight records include GlobalX-ICE flights 6143, 6145, and 6122, which are currently the core of a class action lawsuit against the Trump administration being heard by the Supreme Court. By the time an eleventh-hour ruling from the aforementioned federal judge demanded the planes remain grounded, two of the flights were already underway, en route to El Salvador full of ICE detainees. A third took off shortly following the decision.

The data likewise includes the names of individuals like Heymar Padilla Moyetones — a 24-year-old woman who was flown from Houston to Honduras to El Salvador, and finally back to Houston — and Kilmar Abrego Garcia, the Maryland man whom ICE officials banished to the Salvadoran prison without due process.

On top of flight records, the extensive breach allowed the Anonymous hacktivist to access GlobalX's flight planning software and blast its message to every pilot and crewmember in the company. The hacker likewise accessed the company's internal databases, and took it upon themself to do a little spring cleaning, 404 shared.

In 2024, GlobalX was responsible for some 74 percent of US deportation flights, and 404 notes it expects to rake in $65 million in annual contract revenue from ICE under the Trump administration.

The story is probably far from over as analysts and journalists set about sifting through the leaked data.

The breach also comes as Trump's former national security advisor Mike Waltz is embroiled in a scandal after using TeleMessage, an unencrypted Israeli messaging app, for official communications. TeleMessage recently suspended its services "out of an abundance of caution" following a disastrous data breach of its own, which exposed individual messages.

Going forward, it seems like data breaches are becoming a question of "when," not "if" for the Trump administration — which would almost be funny if it weren't all so grim.

More on hacking: One of Elon Musk's DOGE Boys Reportedly Ran a Disgusting Image Hosting Site Linked to Domains About Child Sexual Abuse

Check Out Elon Musk’s Desperate Bid to Boost Twitter’s Ruined Reputation

After setting X-formerly-Twitter's brand on fire by associating himself with Nazis, making fun of the Holocaust, furthering unhinged conspiracy theories, personally spreading disinformation, and encouraging the rampant use of racial slurs, billionaire owner Elon Musk wants to revamp the social media site's tarnished image.

As Business Insider reports, X is looking to hire a PR specialist in an apparent bid to boost its reputation. According to the publication's sources, X is recruiting a communications leader to improve relations with reporters.

But that would be far easier said than done. Musk, a self-styled "free speech absolutist," who bought the company for $44 billion in 2022, has started numerous flame wars with newspapers, accusing them of being "woke" or outright lying to the public (without providing supporting evidence).

In February, he called Wall Street Journal reporter Katherine Long a "disgusting and cruel person" after she found that Musk had armed a severely underqualified 25-year-old to infiltrate the US Treasury's payments system.

In other words, to say that X has its work cut out to brush up its public image would be a laughable understatement.

The network has suffered greatly under Musk's leadership in many ways. In 2023, he told advertisers outright to "go f*ck yourself" after his widely publicized antisemitic commentary triggered a major exodus.

To make matters even worse, X filed a lawsuit against a group of advertisers last year, accusing them of unfairly ganging up on the company.

The plunging revenue sent the company into a financial tailspin. In a leaked email in January, Musk admitted defeat, writing that "our user growth is stagnant, revenue is unimpressive, and we’re barely breaking even."

Meanwhile, users who'd had enough of the site's hostile environment went running for the hills. Following the presidential election in November, X experienced its biggest user exodus since Musk bought the company in 2022, with users flooding to alternatives including Bluesky and Instagram's Threads.

Even Musk's Grok AI chatbot, which can be accessed on X, has seemingly had enough of its fellow users, saying that "as I get smarter, my answers aim for facts and nuance, which can clash with some MAGA expectations."

Working in comms at the troubled social platform isn't for the faint of heart. According to BI, the company has "had a revolving door of comms execs in the past year," with several high-profile staffers leaving their posts under CEO Linda Yaccarino's leadership.

In short, X-formerly-Twitter has an enormous problem in the shape of Elon Musk — an incredibly damaging affiliation that could prove extremely difficult to underplay while pitching to advertisers.

"It certainly would be the challenge of a lifetime," BI's source said of the PR role.

More on X: MAGA Angry as Elon Musk's Grok AI Keeps Explaining Why Their Beliefs Are Factually Incorrect

Elon Musk Threatens to Sue Canada After Tesla Was Caught Doing Something Incredibly Sketchy

Tesla is now threatening to sue the Canadian government, alleging it cut the company off from C$43 million in taxpayer-funded subsidies.

It's not a great time to be an attorney for Tesla.

In recent months, the floundering electric vehicle company has been bombarded with dozens of lawsuits, like the California class action alleging that Tesla manipulated odometers to get out of warranty repairs, or the Australian class action alleging that the company misled consumers with claims about its so-called "Full Self-Driving" software.

As if that weren't enough for the company's lawyers, Tesla execs are dishing out their fair share of legal threats in return. Following a recent freeze in Canadian EV tax credits, Tesla is now threatening to sue the Canadian government, alleging the move cuts the company off from $30 million in tax credits.

The saga kicked off back in January, when Canada abruptly stopped issuing rebates under its Incentives for Zero-Emission Vehicles (iZEV) program, sending auto dealers and manufacturers into a frenzy. The program was viewed as a major boost for EV sales, offering Canadian consumers up to $3,590 in rebates for qualifying fully electric and hybrid vehicles.

Then in March, Musk's Tesla came under fire after it emerged that the company had placed suspicious claims for tens of millions of dollars' worth of iZEV rebates right before the January freeze.

Most of those claims came from just four dealers, with a single Tesla showroom in Quebec City claiming it had sold some 4,000 eligible vehicles over a single weekend. Altogether, Tesla filed over 8,600 claims in 72 hours, at a cost of C$43 million (US$30.9 million), or about 60 percent of the remaining iZEV budget.
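A quick back-of-the-envelope check shows how the article's figures hang together (a sketch using only the numbers quoted above; the 60 percent share comes from the piece itself):

```python
# Sanity-check the reported Tesla iZEV claim figures.
claims = 8_600                 # rebate claims filed in 72 hours
total_cad = 43_000_000         # C$43 million in claimed rebates
hours = 72

avg_rebate = total_cad / claims      # average rebate per claimed vehicle
claims_per_hour = claims / hours     # filing rate over the 72-hour window
implied_budget = total_cad / 0.60    # iZEV budget remaining before the spree

print(f"avg rebate: C${avg_rebate:,.0f} per claim")
print(f"filing rate: ~{claims_per_hour:.0f} claims per hour")
print(f"implied remaining budget: ~C${implied_budget/1e6:.1f}M")
```

The implied average of about C$5,000 per claim and a round-the-clock rate of roughly 119 claims per hour illustrate why the filings drew scrutiny.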

A few weeks later, Canadian Transport Minister Chrystia Freeland announced a freeze on iZEV payments for Tesla specifically, a move seen as retaliation for Donald Trump's unprecedented tariff threats. In addition, she directed her office to ban the EV giant from eligibility for future tax credit programs so long as "illegitimate and illegal US tariffs are imposed against Canada."

Now, Tesla is claiming that the tax credit freeze was "inappropriate," because "Tesla Canada has been fully compliant with its participation in the program," according to the Toronto Star. The EV mammoth is demanding iZEV payouts start "in the immediate term" for those rebates filed before the January cutoff, despite the fact that they were almost certainly fudged.

If a legal battle kicks off in full, Tesla will have to successfully argue that it was allowed to file rebate claims after vehicles were delivered — a distinction which the Canadian government has evidently flip-flopped on in the past.

While Canada is probably safe to kick back and let Tesla spin its wheels, the EV company needs all the help it can get. As sales tank to record lows around the world, government handouts in the form of EV tax credits — Tesla's bread and butter — have become all but essential if the company wants any hope of surviving.

That said, it'll likely be an uphill battle for the carmaker to claw anything out of the new Canadian government — but even then, there's reason to believe tax credits won't be enough to plug the holes.

More on Tesla: Tesla Is Sitting on an Enormous Pile of Unsold Cybertrucks as Crisis Deepens

The Judge’s Reaction to an AI-Generated Victim Impact Statement Was Not What We Expected

A slain Arizona man's family used AI to bring him back from the dead for his killer's sentencing hearing — and the judge presiding over the case apparently "loved" it.

As 404 Media reports, Judge Todd Lang was flabbergasted when he saw the AI-generated video of victim Christopher Pelkey that named and "forgave" the man who killed him in 2021.

"To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances," the video, which Pelkey's sister Stacey Wales generated, intoned. "In another life we probably could have been friends. I believe in forgiveness, in God who forgives, I always have. And I still do."

Horcasitas was found guilty earlier this year, and his sentencing was contingent, as in many cases, on various factors, including impact statements from the victim's family.

As Wales told 404 Media, her husband Tim was initially freaked out when she introduced the idea of creating a digital clone of her brother for the hearing and told her she was "asking a lot."

Ultimately, the video was accepted in the sentencing hearing, the first known instance of an AI clone of a deceased person being used in such a way.

And the gambit appears to have paid off.

"I loved that AI, and thank you for that," Lang said, per a video of his pre-sentencing speech. "As angry as you are, and as justifiably angry as the family is, I heard the forgiveness, and I know Mr. Horcasitas could appreciate it, but so did I."

"I feel like calling him Christopher as we’ve gotten to know him today," Lang continued. "I feel that that was genuine, because obviously the forgiveness of Mr. Horcasitas reflects the character I heard about today."

Lang acknowledged that although the family itself "demanded the maximum sentence," the AI Pelkey "spoke from his heart" and didn't call for such punishment.

"I didn’t hear him asking for the maximum sentence," the judge said.

Horcasitas' lawyer also referenced the Pelkey avatar when defending his client and, similarly, said that he believes his client and the man he killed could have been friends had circumstances been different.

That entreaty didn't seem to sway Lang, however. He ended up sentencing Horcasitas to 10.5 years for manslaughter, a year and a half more than prosecutors were seeking.

It's a surprising reaction, showing that many are not only open to AI being used this way, but also in favor of it — evidence that the chasm between AI skeptics and adopters could be widening.

More on AI fakery: Slop Farmer Boasts About How He Uses AI to Flood Social Media With Garbage to Trick Older Women

College Students Are Sprinkling Typos Into Their AI Papers on Purpose

To bypass artificial intelligence writing detection, college students are reportedly adding typos into their chatbot-generated papers.

In a wide-ranging exploration into the ways AI has rapidly changed academia, students told New York Magazine that AI cheating has become so normalized, they're figuring out creative ways to get away with it.

While it's common for students — and for anyone else who uses ChatGPT and other chatbots — to edit the output of an AI chatbot, some are adding typos manually to make essays sound more human.

Some more ingenious users are advising chatbots to essentially dumb down their writing. In a TikTok viewed by NYMag, for instance, a student said she likes to prompt chatbots to "write [an essay] as a college freshman who is a li’l dumb" to bypass AI detection.

Stanford sophomore Eric told NYMag that his classmates have gotten "really good at manipulating the systems."

"You put a prompt in ChatGPT, then put the output into another AI system, then put it into another AI system," he said. "At that point, if you put it into an AI-detection system, it decreases the percentage of AI used every time."

The irony, of course, is that students who go to such lengths to make their AI-generated papers sound human could be using that creativity to actually write the dang things.

Still, instructors are concerned by the energy students are expending on cheating with chatbots.

"They're using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays," University of Iowa teaching assistant Sam Williams told the magazine. "And I get it, because I hated writing essays when I was in school."

While assisting a general education class on music and social change last fall, Williams said he was shocked by the change in tone and quality between students' first assignment — a personal essay about their own tastes — and their second, which dug into the history of New Orleans jazz.

Not only did those essays sound different, but many included egregious factual errors like the inclusion of Elvis Presley, who was neither a part of the Nola scene nor a jazz musician.

"I literally told my class, 'Hey, don’t use AI,'" the teaching assistant recalled. "'But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out.'"

Students have seemingly taken that advice to heart — and Williams, like his colleagues around the country, is concerned about students taking their AI use ever further.

"Whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them," the Iowa instructor said.

It's a scary precedent indeed — and one that is, seemingly, continuing unabated.

More on AI cheating: Columbia Student Kicked Out for Creating AI to Cheat, Raises Millions to Turn It Into a Startup

MrBeast Takes Out Online Ads to Wish Himself Happy Birthday

It's an unwritten rule in some crankier circles online that after 21, your birthdays should be quiet, humble affairs shared with friends and family. Sure, there can be some exceptions for the big ones. But in this day and age, it could be seen as a bit ostentatious to make a spectacle out of, say, your 27th birthday.

That's evidently not a rule Jimmy "MrBeast" Donaldson follows, if his birthday posts are any indication.

Earlier today, the YouTube mogul took out an ad campaign on X-formerly-Twitter to wish himself a happy 27th trip around the Sun.

"Happy Birthday MrBeast!" the simple post read.

Thanks to his self-promotion, the post gathered nearly 2.5 million views in just under six hours. The replies were full of well-wishers, ranging from accounts like ALF Token — an ALF-themed cryptocurrency — to @GodsPURP0SE, who wrote "Happy Birthday Dude!! Love your content and what you do for people in need around the world. God bless you and your journey through life."

There were plenty of critics, too, who likely only saw the post thanks to Donaldson's boost. "Did you really buy an ad so people can say happy birthday to you?" one poster asked. "Is that like not weird to you?"

Advertising on X-formerly-Twitter is a costly move — likely one of the reasons Musk is struggling to sell ads. Though we don't know whether MrBeast made any back-door marketing deals to post content on Elon Musk's social media platform, we do know that most promoted ads cost between $0.26 and $1.50 per action, according to WebFX, a digital marketing blog.

With over 49,000 likes, 6,500 comments, and 3,500 retweets at the time of writing, we can estimate the birthday post cost somewhere between $15,340 on the low end and a whopping $88,500 on the high end. And that's just in the first six hours.
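The estimate above can be reproduced in a few lines of Python (a sketch; the engagement counts and the per-action range are the figures quoted in this piece):

```python
# Rough cost estimate for the promoted birthday post, using WebFX's
# quoted range of $0.26 to $1.50 per action.
likes, comments, retweets = 49_000, 6_500, 3_500
actions = likes + comments + retweets  # 59,000 total engagements

low = actions * 0.26   # cheapest-case spend
high = actions * 1.50  # priciest-case spend
print(f"{actions:,} actions -> ${low:,.0f} to ${high:,.0f}")
```

Treating every like, comment, and retweet as one billable action is the simplifying assumption here; actual promoted-post billing depends on the campaign objective.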

Again, this is assuming Musk didn't put his fingers on the scale. In 2023, it was revealed that MrBeast was one of a handful of VIP accounts secretly boosted by Musk, who had recently taken over the platform. Those VIP accounts had their posts pushed to the top in a way that made them look organic — no "ad" tag like the one plastered on Donaldson's birthday post.

What followed Musk's takeover was nothing short of a mass exodus of advertisers from the platform.

By September 2024, only 4 percent of marketers polled said that X provided brand safety as the platform became overrun with extremists, spam bots, and conspiracy cranks. That fall-off in ad revenue came with a rise in low-rent junk ads from dropshipping companies and neo-Nazis. (Musk is reportedly recruiting PR specialists to help reform the brand image of the ailing platform.)

More recently, Donaldson teased that he "might actually own this platform soon," after Musk volunteered to be part of a "100 men vs 1 gorilla" MrBeast video.

Unfortunately for those fed up with the billionaire's reign over the social media app, those comments were definitely tongue-in-cheek — Musk probably won't be handing over the keys anytime soon.

Still, it'd make a great present for the birthday boy.

More on MrBeast: Allegations Keep Piling Up Against MrBeast

FDA Approves Gene-Hacked CRISPR Pigs for Human Consumption

The US Food and Drug Administration has approved a type of CRISPR gene-edited pig for human consumption.

As MIT Technology Review reports, only a handful of gene-modified animals are cleared by regulators to be eaten in the United States, including a transgenic salmon with an extra gene that makes it grow faster, and heat-tolerant beef cattle.

And now a type of illness-resistant pig could soon join their ranks. British company Genus used the popular gene-editing technique CRISPR to make pigs immune to a virus that causes an illness called porcine reproductive and respiratory syndrome (PRRS).

It's the same technology that's been used to gene-hack human babies — experiments that have proven far more controversial — and develop medicine in the form of gene therapies.

The PRRS virus spreads easily in US factory farms, causing infertility, increasing the number of stillborn pigs, and triggering respiratory complications, including pneumonia.

It's been called the "most economically important disease" affecting pig producers, since it can have a devastating effect on their bottom lines. According to MIT Tech, it causes losses of more than $300 million a year in the US alone.

Genus' gene-editing efforts have proven highly successful so far, with the pigs appearing immune to 99 percent of known versions of the virus.

Using CRISPR, the company knocked out a receptor that allowed the PRRS virus to enter cells, effectively barring it from infecting its host.

Beyond the respiratory illness, scientists are using gene-editing to make pigs less vulnerable or even immune to other infections, including swine fever.

But before we can eat a pork chop from a gene-edited pig, Genus says it will have to lock down regulatory approval in Mexico, Canada, Japan, and China, the United States' biggest export markets for pork, as MIT Tech reports.

The company is hoping gene-edited pork could land in the US market as soon as next year.

But whether you'll actually know if you're eating meat from a pig that had a virus receptor turned off using a cutting-edge DNA modification technique is unclear.

"We aren't aware of any labelling requirement," Genus subsidiary Pig Improvement Company CEO Matt Culbertson told MIT Tech.

More on CRISPR: Scientist Who Gene-Hacked Human Babies Says Ethics Are "Holding Back" Scientific Progress

Family Uses AI To Revive Dead Brother For Impact Statement in Killer’s Trial

In Arizona, the family of a man killed during a road rage incident has used artificial intelligence to revive their dead loved one in court.

In Arizona, the family of a man killed during a road rage incident has used artificial intelligence to revive their dead loved one in court — and the video is just as unsettling as you think.

As Phoenix's ABC 15 reports, an uncanny simulacrum of the late Christopher Pelkey, who died from a gunshot wound in 2021, played in a courtroom at the end of his now-convicted killer's trial.

"In another life, we probably could have been friends," the AI version of Pelkey, who was 37 when he died, told his shooter, Gabriel Paul Horcasitas. "I believe in forgiveness."

Despite that moving missive, it doesn't seem that much forgiveness was in the cards for Horcasitas.

After viewing the video — which was created by the deceased man's sister, Stacey Wales, using an "aged-up" photo Pelkey made when he was still alive — the judge presiding over the case handed Horcasitas a ten-and-a-half-year manslaughter sentence, a year more than state prosecutors had asked for.

In the caption on her video, Wales explained that she, her husband Tim, and their friend Scott Yenzer made the "digital AI likeness" of her brother using a script she'd written alongside images and audio files they had of him speaking in a "prerecorded interview" taken months before he died.

"These digital assets and script were fed into multiple AI tools to help create a digital version of Chris," Wales wrote, "polished by hours of painstaking editing and manual refinement."

In her interview with ABC 15, Pelkey's sister insisted that everyone who knew her late brother "agreed this capture was a true representation of the spirit and soul of how Chris would have thought about his own sentencing as a murder victim."

She added that creating the digital clone helped her and her family heal from his loss and left her with a sense of peace, though others felt differently.

"Can’t put into words how disturbing I find this," writer Eoin Higgins tweeted of the Pelkey clone. "The idea of hearing from my brother through this tech is grotesque. Using it in a courtroom even worse."

Referencing both the Pelkey video and news that NBC is planning to use late sports narrator Jim Fagan's voice to do new promos this coming NBA season, a Bluesky user insisted that "no one better do this to me once I'm dead."

"This AI necromancy bullshit is so creepy and wrong," that user put it — and we must say, it's hard to argue with that.

More on AI revivals: NBC Using AI to Bring Beloved NBA Narrator Jim Fagan Back From the Grave

Deranged Video Shows AI Job Recruiter Absolutely Losing It During an Interview

Looking for work is already arduous enough — but for one job-seeker, the process was made worse by an insane AI recruiter.

Looking for work is already arduous enough — but for one job-seeker, the process became something out of a deleted "Black Mirror" scene when the AI recruiter she was paired with went veritably insane.

In a buckwild TikTok video, the job-seeker is seen suffering for nearly 30 seconds as the AI recruiter barks the term "vertical bar pilates" at her no fewer than 14 times, often slurring its words or mixing up letters along the way.

"It was genuinely so creepy and weird. Please stop trying to be lazy and have AI try to do YOUR JOB!!! It gave me the creeps so bad," the job-seeker, who posts as @its_ken04, captioned the video.

The incident — and the way it affected the young woman who endured it — is a startling example not only of the state of America's abysmal labor market, but also of how ill-conceived this sort of AI "outsourcing" has become.

Though she looks unfazed on her interview screen, the TikToker, who goes by Ken, told 404 Media that she was pretty perturbed by the incident, which occurred during her first (and only) interview with a Stretch Lab fitness studio in Ohio.

"I thought it was really creepy and I was freaked out," the college-aged creator told the website. "I was very shocked, I didn’t do anything to make it glitch so this was very surprising."

As 404 discovered, the glitchy recruiter-bot was hosted by a Y Combinator-backed startup called Apriora, which claims to help companies "hire 87 percent faster" and "interview 93 percent cheaper" because multiple candidates can be interviewed simultaneously.

In a 2024 interview with Forbes, Apriora cofounder Aaron Wang attested that job-seekers "prefer interviewing with AI in many cases, since knowing the interviewer is AI helps to reduce interviewing anxiety, allowing job seekers to perform at their best."

That's definitely not the case for Ken, who said she would "never go through this process again."

"If another company wants me to talk to AI," she told 404, "I will just decline."

Commenters on her now-viral TikTok seem to agree as well.

"This is the rudest thing a company could ever do," one user wrote. "We need to start withdrawing applications folks."

Others still pointed out the elephant in the room: that recruiting used to be a skilled trade done by human workers.

"Lazy, greedy and arrogant," another person commented. "AI interviews just show me they don't care about workers from the get go. This used to be an actual human's job."

Though Apriora didn't respond to 404's requests for comment, Ken, at least, has gotten the last word in the way only a Gen Z-er could.

"This was the first meeting [with the company] ever," she told 404. "I guess I was supposed to earn my right to speak to a human."

More on AI and labor: High Schools Training Students for Manual Labor as AI Looms Over College and Jobs

Crypto Prediction Platform Polymarket Was Utterly Wrong About the New Pope

Cardinal Robert Prevost shocked the world, succeeding Pope Francis to become the first American Pope. Polymarket bettors were surprised, too.

Papal Predictor

Polymarket, the crypto-based prediction market where users can place bets on events ranging from national elections to natural disasters, got the Pope odds way wrong.

The platform's forecasts had heavily favored Italian Cardinal Pietro Parolin: both Polymarket and the similar platform Kalshi took a hard turn toward Parolin as the public caught sight of white smoke, according to Axios. Meanwhile, bettors had Prevost hovering around a one-to-two percent chance of becoming the next Pope.

And yet, in a stunning twist of fate, it was the American-born Robert Prevost — a graduate of Villanova and the member of the conclave most likely to have ever consumed a hot dog and/or $1.99 pizza slice from Costco — who clinched the title. According to a screenshot posted to X-formerly-Twitter by Brew Markets, Prevost was sitting at around 0.3 percent when his shock election was announced.

In short, Polymarketers were absolutely blindsided — highlighting how Polymarket's oft-exalted predictive prowess isn't always all it's cracked up to be.

Robert F. Prevost has been elected the first American pope in history.

Polymarket gave him just a 0.3% chance of being chosen. pic.twitter.com/QzD41JYpU9

— Brew Markets (@brewmarkets) May 8, 2025

Vatican Bombshell

The Vatican shocked the world today when the Chicago-born Prevost, a dual citizen of the US and Peru, was named Pope Leo XIV before taking to the balcony of St. Peter's Basilica to greet the world as Holy Father.

Prevost is the first American Pope, and appears to be the first Pope to retweet Catholic Snoopy posts.

Though Prevost was an ally of his predecessor, the late Pope Francis, he was by no means seen as the frontrunner; most expected either Italian Cardinal Pietro Parolin or Filipino Cardinal Luis Antonio Tagle to ascend to the papacy.

The hard swing towards Parolin in the moments before the announcement broke was likely due to the conclave's speediness, as Axios pointed out.

Anyway, if you're a Polymarket user who bet on the American Prevost to win the papacy, email us!

More on Polymarket: Crypto Platforms Like Polymarket Now Taking Bets on Los Angeles Fire Devastation

Eli Lilly’s New Weight Loss Drug May Have the Worst Name in Pharmaceutical History

Eli Lilly announced clinical trial results this week for a daily pill to treat obesity and diabetes. Its name is absolutely terrible.

Pharmaceutical company Eli Lilly announced promising clinical trial results this week for a daily pill to treat obesity and diabetes.

Besides the good news of minimal side effects and impressive results, though, the pill has an extremely unfortunate quality: its name.

The drug, which was first discovered by Chugai Pharmaceutical and licensed to Eli Lilly in 2018, is called "orforglipron," a word so perplexingly unpronounceable — "or fugly pron"? — that it defies belief.

According to the company, it's pronounced "or-for-GLIP-ron," which is such a mouthful that it's nearly impossible to imagine it becoming a household name like "Ozempic."

The pharmaceutical industry has a long and well-earned reputation for cooking up terrible names for drugs, from the anti-cancer medication "carfilzomib" to the melanoma treatment "talimogene laherparepvec" to "idarucizumab," which counteracts the blood-thinning effects of other medications.

But there are several good reasons why the names are so bonkers. For one, clinicians have warned that if drug names sound too similar, they could lead to potentially dangerous prescription errors.

It also takes years for a drug to get its name, a process that requires drugmakers to abide by a complex system of international rules.

A system of prefixes and stems indicating what the drug does often determines the name of a drug. According to the Johns Hopkins Bloomberg School of Public Health publication Global Health NOW, drugmakers must avoid the letters Y, H, K, J, and W, which aren't used in all Roman alphabet-based languages.

Some drug names end up being completely made up, making no reference to anything, in what's referred to as an "empty vessel." (The most famous example is Prozac.)
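The naming constraints above lend themselves to a quick sanity check. Here's a toy Python sketch that tests a candidate generic name against only the two rules mentioned in this piece — the discouraged letters and the class-signaling stem. The stem list is purely illustrative, not the official naming registry:

```python
# Toy generic-drug-name checker, based only on the two rules described above.
# The stem list is a small illustrative sample, not an authoritative registry.
AVOID = set("yhkjw")  # letters drugmakers are told to avoid
KNOWN_STEMS = ("glipron", "glutide", "mab")  # e.g. orforglipron, semaglutide

def check_name(name: str) -> list[str]:
    """Return a list of rule violations for a candidate generic name."""
    problems = []
    bad = sorted(set(name.lower()) & AVOID)
    if bad:
        problems.append(f"contains discouraged letters: {', '.join(bad)}")
    if not name.lower().endswith(KNOWN_STEMS):
        problems.append("does not end in a recognized class stem")
    return problems

print(check_name("orforglipron"))  # [] -- ugly, but passes both toy rules
print(check_name("wakyglutide"))   # flags k, w, y
```

As the first call shows, "orforglipron" sails through these rules — which is exactly the point: the system optimizes for safety and class identification, not pronounceability.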

To be clear, the word "orforglipron" won't appear on Eli Lilly's consumer-facing packaging if it ever hits the market. It's the drug's generic name, so it could eventually be marketed under a different brand name.

The medication is a glucagon-like peptide-1 (GLP-1) agonist, much like the extremely popular semaglutide-based injections, such as Novo Nordisk's Wegovy and Ozempic.

But what sets it apart is the fact that it's a "small-molecule" agonist that can be taken orally and at "any time of the day without restrictions on food and water intake," according to Eli Lilly.

Scientists are hoping that orforglipron, which belongs to an emerging class of "glipron" medications, could provide an easy-to-administer alternative to other diabetes and obesity drugs.

A separate glipron, bearing the slightly less headache-inducing name "danuglipron," is currently being developed by Pfizer. Like orforglipron, it's a once-a-day weight management pill.

However, the pharmaceutical firm ran into some trouble two years into developing and testing the drug, finding that the pill had caused "liver injury" in a study participant earlier this year.

Eli Lilly appears to have had far more success, announcing promising Phase 3 trial results last week. The news caused the company's share price to surge — and the stock of Ozempic maker Novo Nordisk to plummet.

More on weight loss drugs: Human Experiments on GLP-1 Pill Looking Extremely Promising

Top Chatbots Are Giving Horrible Financial Advice

Despite lofty claims from artificial intelligence soothsayers, the world's top chatbots are still quite bad at giving financial advice.

Wrong Dot Com

Despite lofty claims from artificial intelligence soothsayers, the world's top chatbots are still strikingly bad at giving financial advice.

AI researchers Gary Smith, Valentina Liberman, and Isaac Warshaw of the Walter Bradley Center for Natural and Artificial Intelligence posed a series of 12 finance questions to four leading large language models (LLMs) — OpenAI's ChatGPT-4o, DeepSeek-V2, Elon Musk's Grok 3 Beta, and Google's Gemini 2 — to test out their financial prowess.

As the experts explained in a new study from Mind Matters, each chatbot proved to be "consistently verbose but often incorrect."

That finding was, notably, almost identical to Smith's assessment last year for the Journal of Financial Planning in which, upon posing 11 finance questions to ChatGPT 3.5, Microsoft’s Bing with ChatGPT’s GPT-4, and Google’s Bard chatbot, the LLMs spat out responses that were "consistently grammatically correct and seemingly authoritative but riddled with arithmetic and critical-thinking mistakes."

Using a simple scale, where a "0" denoted a completely incorrect financial analysis, a "0.5" a correct analysis marred by mathematical errors, and a "1" a response correct on both the math and the financial analysis, no chatbot earned more than five out of a maximum of 12 points. ChatGPT led the pack with a 5.0, followed by DeepSeek's 4.0, Grok's 3.0, and Gemini's abysmal 1.5.

Spend Thrift

Some of the chatbot responses were so bad that they defied the Walter Bradley experts' expectations. When Grok, for example, was asked to add up a single month's worth of expenses for a Caribbean rental property whose rent was $3,700 and whose utilities ran $200 per month, the chatbot claimed that those numbers added up to $4,900, rather than the correct $3,900.

Along with spitting out a bunch of strange typographical errors, the chatbots also failed, per the study, to generate any intelligent analyses for the relatively basic financial questions the researchers posed. Even the chatbots' most compelling answers seemed to be gleaned from various online sources, and those only came when they were asked to explain relatively simple concepts like how Roth IRAs work.

Throughout it all, the chatbots were dangerously glib. The researchers noted that all of the LLMs they tested present a "reassuring illusion of human-like intelligence, along with a breezy conversational style enhanced by friendly exclamation points" that could come off to the average user as confidence and correctness.

"It is still the case that the real danger is not that computers are smarter than us," they concluded, "but that we think computers are smarter than us and consequently trust them to make decisions they should not be trusted to make."

More on dumb AI: OpenAI Researchers Find That Even the Best AI Is "Unable To Solve the Majority" of Coding Problems

Astronomers Confused to Discover That a Bunch of Nearby Galaxies Are Pointing Directly at Us

The dwarf galaxies surrounding Andromeda, the closest galaxy to our own, have an extremely strange distribution that's puzzling astronomers.

Just as the Earth keeps the Moon bound on a gravitational tether, our nearest galactic neighbor, the Andromeda galaxy (M31), is surrounded by a bunch of tiny satellite galaxies.

But there's something incredibly strange about how these mini realms are arranged, according to a new study published in the journal Nature Astronomy: almost all the satellite galaxies appear on one side of their host, and are pointing right at us — the Milky Way — instead of being randomly distributed.

In other words, it's extremely lopsided. Based on simulations, the odds of this happening are just 0.3 percent, the authors calculate, challenging our assumptions of galactic formation.

"M31 is the only system that we know of that demonstrates such an extreme degree of asymmetry," lead author Kosuke Jamie Kanehisa at the Leibniz Institute for Astrophysics Potsdam in Germany told Space.com.

Our current understanding of cosmology holds that large galaxies form from smaller galaxies that merge together over time. Orchestrating this from the shadows are "haloes" — essentially clusters — of dark matter, the invisible substance thought to account for 85 percent of all mass in the universe, whose gravitational influence helps pull the galaxies together. Since this process is chaotic, some of the dwarf galaxies get left out and are relegated to orbit outside the host galaxy in an arrangement that should be pretty random.

Yet it seems that's not the case with Andromeda. All but one of Andromeda's 37 satellite galaxies sit within 107 degrees of the line pointing at the Milky Way. Stranger still, half of these galaxies orbit within the same plane, like how the planets of our Solar System orbit the Sun.

To quantify how improbable this is, the astronomers used standard cosmological simulations, which recreate how galaxies form over time, and compared the simulated analogs to observations of Andromeda. Less than 0.3 percent of Andromeda-like galaxies in the simulations showed comparable asymmetry, the astronomers found, and only one came close to being as extreme.
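To get a feel for why this arrangement is so startling, here's a back-of-envelope Python sketch — emphatically not the paper's method, which relied on full cosmological simulations — that treats the 37 satellites as randomly (isotropically) scattered on the sky and computes the chance that at least 36 of them would land within 107 degrees of one fixed direction:

```python
import math

# Fraction of a sphere lying within angle theta of a fixed axis: (1 - cos(theta)) / 2
theta = math.radians(107)
p = (1 - math.cos(theta)) / 2  # roughly 0.65 for 107 degrees

# Binomial probability that at least 36 of 37 randomly placed satellites
# fall inside that cone around one fixed direction (toy isotropic null model)
n, k_min = 37, 36
prob = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

print(f"cone fraction p = {p:.3f}, P(at least 36 of 37 in cone) = {prob:.2e}")
```

This toy estimate comes out even smaller than the study's 0.3 percent figure, because the real analysis compares against structured simulations in which satellites cluster rather than scatter uniformly — but it conveys why astronomers consider the observed configuration so extreme.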

One explanation is that there could be a great number of dwarf galaxies around Andromeda that we can't see yet, giving us an incomplete picture of the satellites' distribution. The data we have on the satellites we can see may not be accurate, either.

Or perhaps, Kanehisa speculates, there's something unique about Andromeda's evolutionary history. 

"The fact that we see M31's satellites in this unstable configuration today — which is strange, to say the least — may point towards many having fallen in recently," Kanehisa told Space.com, "possibly related to the major merger thought to have been experienced by Andromeda around two to three billion years ago."

But the most provocative implication is that the standard cosmological model as we know it needs refining. We have very limited data on satellite galaxies throughout the cosmos, since they are incredibly far away and are outshone by the light of their hosts. Maybe, then, the configuration of Andromeda's dwarf galaxies isn't anomalous at all. 

"We can't yet be sure that similar extreme systems don't exist out there, or that such systems would be negligibly rare," Kanehisa told Space.com.

It's too early to draw any hard conclusions, but one thing's for certain: we need more observations and data on not just Andromeda's satellites, but on the satellites of much more distant galaxies as well.

More on space: An AI Identifies Where All Those Planets That Could Host Life Are Hiding

Scientists Scanned the Brains of Authoritarians and Found Something Weird

People who support authoritarianism have, according to a new study, something weird going on with their brains.

People who support authoritarianism on either side of the political divide have, according to a new study, something weird going on with their brains.

Published in the journal Neuroscience, new research out of Spain's University of Zaragoza scanned the brains of 100 young adults and found that those who hold authoritarian beliefs differ markedly, in brain areas associated with social reasoning and emotional regulation, from subjects whose politics hew closer to the center.

The University of Zaragoza team recruited 100 young Spaniards — 63 women and 37 men, none of whom had any history of psychiatric disorders — between the ages of 18 and 30. Along with having their brains scanned via magnetic resonance imaging (MRI), the participants were asked questions that help identify both right-wing and left-wing authoritarianism and measure how anxious, impulsive, and emotional they were.

As the researchers defined them, right-wing authoritarians are people who subscribe to conservative ideologies and so-called "traditional values" and advocate for "punitive measures for social control," while left-wing authoritarians are interested in "violently overthrow[ing] and [penalizing] the current structures of authority and power in society."

Though participants whose beliefs align more with authoritarianism on either side of the aisle differed significantly from their less-authoritarian peers, there were also some stark differences between the brain scans of left-wing and right-wing authoritarians in the study.

In an interview with PsyPost, lead study author Jesús Adrián-Ventura said that he and his team found that right-wing authoritarianism was associated with lower grey matter volume in the dorsomedial prefrontal cortex — a "region involved in understanding others' thoughts and perspectives," as the assistant Zaragoza psychology professor put it.

The left-wing authoritarians of the bunch — we don't know exactly how many, as the results weren't broken down in the paper — had less cortical (or outer brain layer) thickness in the right anterior insula, which is "associated with emotional empathy and behavioral inhibition." Cortical thickness in that brain region has been the subject of ample research, from a 2005 study that found people who meditate regularly have greater thickness in the right anterior insula to a 2018 study that linked it to greater moral disgust.

The author, who is also part of an interdisciplinary research group called PseudoLab that studies political extremism, added that the psychological questionnaires subjects completed also suggested that "both left-wing and right-wing authoritarians act impulsively in emotionally negative situations, while the former tend to be more anxious."

As the paper notes, this is likely the first study of its kind to look into differences between right- and left-wing authoritarianism rather than just grouping them all together. Still, it's a fascinating look into the brains of people who hold extremist beliefs — especially as their ilk seize power worldwide.

More on authoritarianism: Chinese People Keep Comparing Trump's Authoritarianism to Mao and Xi Jinping

New Bionic Hand Can Detach From User, Crawl Around and Do Missions on Its Own

The world's first wireless bionic hand, built by Open Bionics, can be fully detached and operate on its own.

A UK startup called Open Bionics has just unveiled the world's first wireless bionic arm, called Hero — and it's so advanced that the hand can fully detach and amble about on its own, like the Addams Family's Thing.  

Nineteen-year-old influencer Tilly Lockey, a double amputee who's been using Open Bionics' arms for the past nine years and has served as a poster child for the company's efforts, recently showed off this incredibly sci-fi capability after becoming one of the first to receive the new device.

"I can move it around even when it's not attached to the arm," Lockey said in an interview with Reuters. "It can just go on its own missions — which is kinda crazy."

Lockey, who lost both her hands to meningitis as a toddler, effortlessly pulls off the still-writhing bionic hand, then places it on her bed to send it inching towards her phone.

"The hand can crawl away like it's got a mind of its own," Lockey said.

A world-first from @openbionics. pic.twitter.com/BjyFp05Meu

— Sammy Payne (@SighSam) April 11, 2025

Lockey is wearing two Hero PRO prototypes, which, like all of Open Bionics' prosthetics, are fully 3D-printed. Unlike some alternatives out there, the Hero arms don't rely on a chip implant, which requires invasive surgery and can lead to medical complications. Instead, they use wireless electromyography (EMG) electrodes, which the company calls "MyoPods," placed on top of the residual limbs to sense specific muscle signals.

In other words, it's fully muscle-operated. As Lockey explains, it primarily works off of two signals: a squeeze motion with her arm that closes the hand, and a flex motion that opens it. More advanced movements like hand gestures are performed through something like a "menu system," she said.
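As a purely illustrative sketch — none of these names, thresholds, or values come from Open Bionics, whose firmware is proprietary — a two-signal EMG control scheme like the one Lockey describes might map muscle-signal amplitudes to hand commands roughly like this:

```python
# Toy two-channel EMG controller: hypothetical names and thresholds,
# loosely modeled on the squeeze-to-close / flex-to-open scheme described above.
SQUEEZE_THRESHOLD = 0.6  # normalized EMG amplitude, 0..1 (assumed scale)
FLEX_THRESHOLD = 0.6

def hand_command(squeeze_level: float, flex_level: float) -> str:
    """Map two muscle-signal amplitudes to a hand command."""
    if squeeze_level >= SQUEEZE_THRESHOLD and flex_level < FLEX_THRESHOLD:
        return "close"
    if flex_level >= FLEX_THRESHOLD and squeeze_level < SQUEEZE_THRESHOLD:
        return "open"
    return "hold"  # ambiguous or weak signals: keep the current grip

print(hand_command(0.8, 0.1))  # strong squeeze -> "close"
```

The point of the "hold" fallback is robustness: when both channels fire at once or neither clears its threshold, a real controller would keep the current grip rather than guess, which is presumably one reason more advanced gestures are pushed into a separate "menu system."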

After working closely with Open Bionics for nearly a decade now, one thing that's surprised her the most with the new arms is how strong they are. "I'm not used to being that strong yet," Lockey told Reuters. "When I first put them on... I was, like, crushing everything."

The level of progress overall has startled her, she said. Open Bionics has been working on the prototype for four years.

"I now have 360-degree rotation in my wrists, I can flex them too. There literally isn't a single other arm that can do this," Lockey said in a statement. "No other arm is wireless and waterproof, and it's faster than everything else and it's still the lightest bionic hand available. I don't know how they've done it."

More on: Paralyzed Man Unable to Walk After Maker of His Powered Exoskeleton Tells Him It's Now Obsolete

NASCAR Now Showing Off Fully Electric Racecar

A flashy new advertisement by engineering company ABB shows off a sleek, all-electric NASCAR racecar.

A flashy new advertisement by multinational engineering company ABB shows off what could one day be the future of NASCAR, the American auto racing body: a sleek, all-electric racecar.

While NASCAR, considered one of the top-ranked motorsports organizations in the world, is broadly associated with rural tailgating culture — and with seminal pieces of cinema like "Talladega Nights: The Ballad of Ricky Bobby," starring Will Ferrell — the vehicle presages a future in which electric motors could replace the iconic, steady drone of brawny gas engines ripping around an oval track.

The ABB NASCAR EV prototype, a collaboration between the body's OEM partners Ford, Chevrolet, and Toyota, was first shown off at the Chicago Street Course last year.

"If you look out across the landscape, one thing that’s for certain is that change is accelerating all around us," said NASCAR senior vice president and chief racing development officer John Probst in a statement at the time.

"The push for electric vehicles is continuing to grow, and when we started this project one and a half years ago, that growth was rapid," NASCAR senior engineer of vehicle systems CJ Tobin told IEEE Spectrum in August. "We wanted to showcase our ability to put an electric stock car on the track in collaboration with our OEM partners."

Besides pushing the boundaries when it comes to speed, the association is also looking to cut emissions.

"Sustainability means a lot of different things," said NASCAR's head of sustainability Riley Nelson last summer. "And for our team right now, it’s environmental sustainability."

The prototype features a 78-kilowatt-hour, liquid-cooled battery and a powertrain that produces up to 1,000 kilowatts of peak power. Regenerative braking allows it to race longer road courses as well.

In the latest advertisement, ABB also showed off the latest generation of the single-seater racecar developed for the ABB FIA Formula E World Championship, a series that has been around for over a decade. The specialized vehicles are among the fastest electric racecars ever built, and are designed to reach speeds of over 200 mph.

It's not just ABB that's looking to develop all-electric contenders for NASCAR. Earlier this year, Ford revealed a new electric NASCAR prototype, based on its road-legal Mustang Mach-E.

While it could make for an exciting new development in the motorsport, NASCAR isn't quite ready to fully commit to electric drivetrains — at least for the foreseeable future.

"There are no plans to use [ABB's] electric vehicle in competition at this time," a NASCAR spokesperson told IEEE Spectrum last summer. "The internal combustion engine plays an important role in NASCAR and there are no plans to move away from that."

More on racecars: Scientists Teach Rats to Drive Tiny Cars, Discover That They Love Revving the Engine

Experts Concerned That AI Is Making Us Stupider

A new analysis found that humans stand to lose way more than we gain by shoehorning AI into our day to day work.

Artificial intelligence might be creeping its way into every facet of our lives — but that doesn't mean it's making us smarter.

Quite the reverse. A new analysis by The Guardian of recent research looked at a potential irony: whether we're giving up more than we gain by shoehorning AI into our day-to-day work, offloading so many intellectual tasks that it erodes our own cognitive abilities.

The analysis points to a number of studies that suggest a link between cognitive decline and AI tools, especially in critical thinking. One research article, published in the journal Frontiers in Psychology — and itself run through ChatGPT to make "corrections," according to a disclaimer that we couldn't help but notice — suggests that regular use of AI may cause our actual cognitive chops and memory capacity to atrophy.

Another study, by Michael Gerlich of the Swiss Business School in the journal Societies, points to a link between "frequent AI tool usage and critical thinking abilities," highlighting what Gerlich calls the "cognitive costs of AI tool reliance."

The researcher uses an example of AI in healthcare, where automated systems make a hospital more efficient at the cost of full-time professionals whose job is "to engage in independent critical analysis" — to make human decisions, in other words.

None of that is as far-fetched as it sounds. A broad body of research has found that brain power is a "use it or lose it" asset, so it makes sense that outsourcing everyday challenges — writing tricky emails, doing research, solving problems — to ChatGPT could dull those abilities over time.

As humans offload increasingly complex problems onto various AI models, we also become prone to treating AI like a "magic box," a catch-all capable of doing all our hard thinking for us. This attitude is heavily pushed by the AI industry, which uses a blend of buzzy technical terms and marketing hype to sell us on ideas like "deep learning," "reasoning," and "artificial general intelligence."

Case in point, another recent study found that a quarter of Gen Zers believe AI is "already conscious." By scraping thousands of publicly available datapoints in seconds, AI chatbots can spit out seemingly thoughtful prose, which certainly gives the appearance of human-like sentience. But it's that exact attitude that experts warn is leading us down a dark path.

"To be critical of AI is difficult — you have to be disciplined," says Gerlich. "It is very challenging not to offload your critical thinking to these machines."

The Guardian's analysis also cautions against painting with too broad a brush and blaming AI, exclusively, for the decline in basic measures of intelligence. That phenomenon has plagued Western nations since the 1980s, coinciding with the rise of neoliberal economic policies that led governments in the US and UK to roll back funding for public schools, disempower teachers, and end childhood food programs.

Still, it's hard to deny stories from teachers that AI cheating is nearing crisis levels. AI might not have started the trend, but it may well be pushing it to grim new extremes.

More on AI: Columbia Student Kicked Out for Creating AI to Cheat, Raises Millions to Turn It Into a Startup

The post Experts Concerned That AI Is Making Us Stupider appeared first on Futurism.
