Eli Lilly’s New Weight Loss Drug May Have the Worst Name in Pharmaceutical History

Eli Lilly announced clinical trial results this week for a daily pill to treat obesity and diabetes. Its name is absolutely terrible.

Pharmaceutical company Eli Lilly announced promising clinical trial results this week for a daily pill to treat obesity and diabetes.

Besides the good news of minimal side effects and impressive results, though, the pill has an extremely unfortunate quality: its name.

The drug, which was first discovered by Chugai Pharmaceutical and licensed to Eli Lilly in 2018, is called "orforglipron," a word so perplexingly unpronounceable — "or fugly pron"? — that it defies belief.

According to the company, it's pronounced "or-for-GLIP-ron," which is such a mouthful that it's nearly impossible to imagine it becoming a household name like "Ozempic."

The pharmaceutical industry has a long and well-earned reputation for cooking up terrible names for drugs, from the anti-cancer medication "carfilzomib" to the melanoma treatment "talimogene laherparepvec" to "idarucizumab," which counteracts the blood-thinning effects of other medications.

But there are several good reasons why the names are so bonkers. For one, clinicians have warned that if they sound too similar, they could lead to potentially dangerous prescription errors.

It also takes years for a drug to get its name, a process that requires drugmakers to abide by a complex system of international rules.

A drug's name is often determined by a system of prefixes and stems indicating what it does. According to the Johns Hopkins Bloomberg School of Public Health publication Global Health NOW, drugmakers must also avoid the letters Y, H, K, J, and W, which aren't used in all Roman alphabet-based languages.

Some drug names end up being completely made up, making no reference to anything, in what's referred to as an "empty vessel." (The most famous example is Prozac.)

To be clear, the word "orforglipron" won't appear on Eli Lilly's consumer-facing packaging if it ever hits the market. It's the drug's generic name, so it could eventually be marketed under a different brand name.

The medication is a glucagon-like peptide-1 (GLP-1) agonist, much like the extremely popular semaglutide-based injections, such as Novo Nordisk's Wegovy and Ozempic.

But what sets it apart is the fact that it's a "small-molecule" agonist that can be taken orally and at "any time of the day without restrictions on food and water intake," according to Eli Lilly.

Scientists are hoping that orforglipron, which belongs to an emerging class of "glipron" medications, could provide an easy-to-administer alternative to other diabetes and obesity drugs.

A separate glipron, which has the slightly less headache-inducing name "danuglipron," is currently being developed by Pfizer. Like orforglipron, it's a once-a-day weight management pill.

However, the pharmaceutical firm ran into some trouble two years into developing and testing the drug, finding that the pill had caused "liver injury" in a study participant earlier this year.

Eli Lilly appears to have had far more success, announcing promising Phase 3 trial results this week. The news sent the drugmaker's share price surging — and the stock of Ozempic maker Novo Nordisk plummeting.

More on weight loss drugs: Human Experiments on GLP-1 Pill Looking Extremely Promising

The post Eli Lilly's New Weight Loss Drug May Have the Worst Name in Pharmaceutical History appeared first on Futurism.

Top Chatbots Are Giving Horrible Financial Advice

Despite lofty claims from artificial intelligence soothsayers, the world's top chatbots are still quite bad at giving financial advice.

Wrong Dot Com

Despite lofty claims from artificial intelligence soothsayers, the world's top chatbots are still strikingly bad at giving financial advice.

AI researchers Gary Smith, Valentina Liberman, and Isaac Warshaw of the Walter Bradley Center for Natural and Artificial Intelligence posed a series of 12 finance questions to four leading large language models (LLMs) — OpenAI's ChatGPT-4o, DeepSeek-V2, Elon Musk's Grok 3 Beta, and Google's Gemini 2 — to test out their financial prowess.

As the experts explained in a new study from Mind Matters, each chatbot proved to be "consistently verbose but often incorrect."

That finding was, notably, almost identical to Smith's assessment last year for the Journal of Financial Planning in which, upon posing 11 finance questions to ChatGPT 3.5, Microsoft’s Bing with ChatGPT’s GPT-4, and Google’s Bard chatbot, the LLMs spat out responses that were "consistently grammatically correct and seemingly authoritative but riddled with arithmetic and critical-thinking mistakes."

Using a simple scale, on which a "0" denoted a completely incorrect financial analysis, a "0.5" a correct analysis with mathematical errors, and a "1" an answer correct on both the math and the financial analysis, no chatbot scored higher than 5 out of a maximum of 12 points. ChatGPT led the pack with a 5.0, followed by DeepSeek's 4.0, Grok's 3.0, and Gemini's abysmal 1.5.

Spend Thrift

Some of the chatbot responses were so bad that they defied the Walter Bradley experts' expectations. When Grok, for example, was asked to add up a single month's worth of expenses for a Caribbean rental property whose rent was $3,700 and whose utilities ran $200 per month, the chatbot claimed that those numbers together added up to $4,900.

Along with spitting out a bunch of strange typographical errors, the chatbots also failed, per the study, to generate any intelligent analyses of the relatively basic financial questions the researchers posed. Even the chatbots' most compelling answers seemed to be gleaned from various online sources, and those came only when they were asked to explain relatively simple concepts like how Roth IRAs work.

Throughout it all, the chatbots were dangerously glib. The researchers noted that all of the LLMs they tested present a "reassuring illusion of human-like intelligence, along with a breezy conversational style enhanced by friendly exclamation points" that could come off to the average user as confidence and correctness.

"It is still the case that the real danger is not that computers are smarter than us," they concluded, "but that we think computers are smarter than us and consequently trust them to make decisions they should not be trusted to make."

More on dumb AI: OpenAI Researchers Find That Even the Best AI Is "Unable To Solve the Majority" of Coding Problems

Astronomers Confused to Discover That a Bunch of Nearby Galaxies Are Pointing Directly at Us

The dwarf galaxies surrounding Andromeda, the closest galaxy to our own, have an extremely strange distribution that's puzzling astronomers.

Just as the Earth keeps the Moon bound on a gravitational tether, our nearest galactic neighbor, the Andromeda galaxy (M31), is surrounded by a bunch of tiny satellite galaxies.

But there's something incredibly strange about how these mini realms are arranged, according to a new study published in the journal Nature Astronomy: almost all the satellite galaxies appear on one side of their host, and are pointing right at us — the Milky Way — instead of being randomly distributed.

In other words, it's extremely lopsided. Based on simulations, the odds of this happening are just 0.3 percent, the authors calculate, challenging our assumptions about how galaxies form.

"M31 is the only system that we know of that demonstrates such an extreme degree of asymmetry," lead author Kosuke Jamie Kanehisa at the Leibniz Institute for Astrophysics Potsdam in Germany told Space.com.

Our current understanding of cosmology holds that large galaxies form from smaller galaxies that merge together over time. Orchestrating this from the shadows are "haloes" — essentially clusters — of dark matter, the invisible substance thought to account for 85 percent of all mass in the universe, whose gravitational influence helps pull the galaxies together. Since this process is chaotic, some of the dwarf galaxies get left out and are relegated to orbit outside the host galaxy in an arrangement that should be pretty random.

Yet it seems that's not the case with Andromeda. All but one of Andromeda's 37 satellite galaxies sit within 107 degrees of the line pointing at the Milky Way. Stranger still, half of these galaxies orbit within the same plane, like how the planets of our Solar System orbit the Sun.

To gauge just how improbable this is, the astronomers used standard cosmological simulations, which recreate how galaxies form over time, and compared the simulated analogs to observations of Andromeda. Fewer than 0.3 percent of Andromeda-like galaxies in the simulations showed comparable asymmetry, the astronomers found, and only one came close to being as extreme.

One explanation is that there could be a great number of dwarf galaxies around Andromeda that we can't see yet, giving us an incomplete picture of the satellites' distribution. The data we have on the satellites we can see may not be accurate, either.

Or perhaps, Kanehisa speculates, there's something unique about Andromeda's evolutionary history. 

"The fact that we see M31's satellites in this unstable configuration today — which is strange, to say the least — may point towards many having fallen in recently," Kanehisa told Space.com, "possibly related to the major merger thought to have been experienced by Andromeda around two to three billion years ago."

But the most provocative implication is that the standard cosmological model as we know it needs refining. We have very limited data on satellite galaxies throughout the cosmos, since they are incredibly far away and are outshone by the light of their hosts. Maybe, then, the configuration of Andromeda's dwarf galaxies isn't anomalous at all. 

"We can't yet be sure that similar extreme systems don't exist out there, or that such systems would be negligibly rare," Kanehisa told Space.com.

It's too early to draw any hard conclusions, but one thing's for certain: we need more observations and data on not just Andromeda's satellites, but on the satellites of much more distant galaxies as well.

More on space: An AI Identifies Where All Those Planets That Could Host Life Are Hiding

Scientists Scanned the Brains of Authoritarians and Found Something Weird

People who support authoritarianism have, according to a new study, something weird going on with their brains.

People who support authoritarianism on either side of the political divide have, according to a new study, something weird going on with their brains.

Published in the journal Neuroscience, new research out of Spain's University of Zaragoza scanned the brains of 100 young adults and found that those who hold authoritarian beliefs showed major differences, in brain areas associated with social reasoning and emotional regulation, from subjects whose politics hewed closer to the center.

The University of Zaragoza team recruited 100 young Spaniards — 63 women and 37 men, none of whom had any history of psychiatric disorders — between the ages of 18 and 30. Along with having their brains scanned via magnetic resonance imaging (MRI), the participants were asked questions that help identify both right-wing and left-wing authoritarianism and measure how anxious, impulsive, and emotional they were.

As the researchers defined them, right-wing authoritarians are people who subscribe to conservative ideologies and so-called "traditional values" and advocate for "punitive measures for social control," while left-wing authoritarians are interested in "violently overthrow[ing] and [penalizing] the current structures of authority and power in society."

Though participants whose beliefs align more with authoritarianism on either side of the aisle differed significantly from their less-authoritarian peers, there were also some stark differences between the brain scans of left-wing and right-wing authoritarians in the study.

In an interview with PsyPost, lead study author Jesús Adrián-Ventura said that he and his team found that right-wing authoritarianism was associated with lower grey matter volume in the dorsomedial prefrontal cortex — a "region involved in understanding others' thoughts and perspectives," as the assistant Zaragoza psychology professor put it.

The left-wing authoritarians of the bunch — we don't know exactly how many, as the results weren't broken down in the paper — had less cortical (or outer brain layer) thickness in the right anterior insula, which is "associated with emotional empathy and behavioral inhibition." Cortical thickness in that brain region has been the subject of ample research, from a 2005 study that found people who meditate regularly have greater thickness in the right anterior insula to a 2018 study that linked it to greater moral disgust.

The author, who is also part of an interdisciplinary research group called PseudoLab that studies political extremism, added that the psychological questionnaires subjects completed also suggested that "both left-wing and right-wing authoritarians act impulsively in emotionally negative situations, while the former tend to be more anxious."

As the paper notes, this is likely the first study of its kind to look into differences between right- and left-wing authoritarianism rather than just grouping them all together. Still, it's a fascinating look into the brains of people who hold extremist beliefs — especially as their ilk seize power worldwide.

More on authoritarianism: Chinese People Keep Comparing Trump's Authoritarianism to Mao and Xi Jinping

New Bionic Hand Can Detach From User, Crawl Around and Do Missions on Its Own

The world's first wireless bionic hand, built by Open Bionics, can be fully detached and operate on its own.

A UK startup called Open Bionics has just unveiled the world's first wireless bionic arm, called Hero — and it's so advanced that the hand can fully detach and amble about on its own, like the Addams Family's Thing.  

Nineteen-year-old influencer Tilly Lockey, a double amputee who has been using Open Bionics' arms for the past nine years and has been a poster child for the company's efforts, recently showed off this incredibly sci-fi capability after becoming one of the first to receive the new device.

"I can move it around even when it's not attached to the arm," Lockey said in an interview with Reuters. "It can just go on its own missions — which is kinda crazy."

Lockey lost both her hands to meningitis as a toddler. In the footage, she effortlessly pulls off the still-writhing bionic hand, then places it on her bed and sends it inching towards her phone.

"The hand can crawl away like it's got a mind of its own," Lockey said.

A world-first from @openbionics. pic.twitter.com/BjyFp05Meu

— Sammy Payne (@SighSam) April 11, 2025

Lockey is wearing two Hero PRO prototypes, which, like all of Open Bionics' prosthetics, are fully 3D-printed. Unlike some alternatives out there, the Hero arms don't rely on a chip implant, which requires invasive surgery and can lead to medical complications. Instead, they use wireless electromyography (EMG) electrodes, which the company calls "MyoPods," placed on top of the residual limbs to sense specific muscle signals.

In other words, it's fully muscle-operated. As Lockey explains, it primarily works off of two signals: a squeeze motion with her arm that closes the hand, and a flex motion that opens it. More advanced movements like hand gestures are performed through something like a "menu system," she said.

After working closely with Open Bionics for nearly a decade now, one thing that's surprised her the most with the new arms is how strong they are. "I'm not used to being that strong yet," Lockey told Reuters. "When I first put them on... I was, like, crushing everything."

The level of progress overall has startled her, she said. Open Bionics has been working on the prototype for four years.

"I now have 360-degree rotation in my wrists, I can flex them too. There literally isn't a single other arm that can do this," Lockey said in a statement. "No other arm is wireless and waterproof, and it's faster than everything else and it's still the lightest bionic hand available. I don't know how they've done it."

More on: Paralyzed Man Unable to Walk After Maker of His Powered Exoskeleton Tells Him It's Now Obsolete

NASCAR Now Showing Off Fully Electric Racecar

A flashy new advertisement by engineering company ABB shows off a sleek, all-electric NASCAR racecar.

A flashy new advertisement by multinational engineering company ABB shows off what could one day be the future of NASCAR, the American auto racing body: a sleek, all-electric racecar.

While NASCAR, considered one of the top-ranked motorsports organizations in the world, is broadly associated with rural tailgating culture — and seminal pieces of cinema like "Talladega Nights: The Ballad of Ricky Bobby," starring Will Ferrell — the vehicle presages a future in which electric motors could replace the iconic, steady drone of brawny gas engines ripping around an oval track.

The ABB NASCAR EV prototype, a collaboration between the body's OEM partners Ford, Chevrolet, and Toyota, was first shown off at the Chicago Street Course last year.

"If you look out across the landscape, one thing that’s for certain is that change is accelerating all around us," said NASCAR senior vice president and chief racing development officer John Probst in a statement at the time.

"The push for electric vehicles is continuing to grow, and when we started this project one and a half years ago, that growth was rapid," NASCAR senior engineer of vehicle systems CJ Tobin told IEEE Spectrum in August. "We wanted to showcase our ability to put an electric stock car on the track in collaboration with our OEM partners."

Besides pushing the boundaries when it comes to speed, the association is also looking to cut emissions.

"Sustainability means a lot of different things," said NASCAR's head of sustainability Riley Nelson last summer. "And for our team right now, it’s environmental sustainability."

The prototype features a 78-kilowatt-hour, liquid-cooled battery and a powertrain that produces up to 1,000 kilowatts of peak power. Regenerative braking allows it to race longer road courses as well.

In the latest advertisement, ABB also showed off the latest generation of its single-seater racecar, developed for its Formula E World Championship, which has been around for over a decade. The specialized vehicles are among the fastest electric racecars ever built, and are designed to reach speeds of over 200 mph.

It's not just ABB that's looking to develop all-electric contenders for NASCAR. Earlier this year, Ford revealed a new electric NASCAR prototype, based on its road-legal Mustang Mach-E.

While it could make for an exciting new development in the motorsport, NASCAR isn't quite ready to fully commit to electric drivetrains — at least for the foreseeable future.

"There are no plans to use [ABB's] electric vehicle in competition at this time," a NASCAR spokesperson told IEEE Spectrum last summer. "The internal combustion engine plays an important role in NASCAR and there are no plans to move away from that."

More on racecars: Scientists Teach Rats to Drive Tiny Cars, Discover That They Love Revving the Engine

Experts Concerned That AI Is Making Us Stupider

A new analysis suggests that we stand to lose far more than we gain by shoehorning AI into our day-to-day work.

Artificial intelligence might be creeping its way into every facet of our lives — but that doesn't mean it's making us smarter.

Quite the reverse. A new analysis of recent research by The Guardian looked at a potential irony: whether we're giving up more than we gain by shoehorning AI into our day-to-day work, offloading so many intellectual tasks that it erodes our own cognitive abilities.

The analysis points to a number of studies that suggest a link between cognitive decline and AI tools, especially in critical thinking. One research article, published in the journal Frontiers in Psychology — and itself run through ChatGPT to make "corrections," according to a disclaimer that we couldn't help but notice — suggests that regular use of AI may cause our actual cognitive chops and memory capacity to atrophy.

Another study, by Michael Gerlich of the Swiss Business School in the journal Societies, points to a negative correlation between frequent AI tool usage and critical thinking abilities, highlighting what Gerlich calls the "cognitive costs of AI tool reliance."

The researcher uses an example of AI in healthcare, where automated systems make a hospital more efficient at the cost of full-time professionals whose job is "to engage in independent critical analysis" — to make human decisions, in other words.

None of that is as far-fetched as it sounds. A broad body of research has found that brain power is a "use it or lose it" asset, so it makes sense that turning to ChatGPT for everyday challenges like writing tricky emails, doing research, or solving problems would have negative results.

As humans offload increasingly complex problems onto various AI models, we also become prone to treating AI like a "magic box," a catch-all capable of doing all our hard thinking for us. This attitude is heavily pushed by the AI industry, which uses a blend of buzzy technical terms and marketing hype to sell us on ideas like "deep learning," "reasoning," and "artificial general intelligence."

Case in point, another recent study found that a quarter of Gen Zers believe AI is "already conscious." By scraping thousands of publicly available datapoints in seconds, AI chatbots can spit out seemingly thoughtful prose, which certainly gives the appearance of human-like sentience. But it's that exact attitude that experts warn is leading us down a dark path.

"To be critical of AI is difficult — you have to be disciplined," says Gerlich. "It is very challenging not to offload your critical thinking to these machines."

The Guardian's analysis also cautions against painting with too broad a brush and blaming AI, exclusively, for the decline in basic measures of intelligence. That phenomenon has plagued Western nations since the 1980s, coinciding with the rise of neoliberal economic policies that led governments in the US and UK to roll back funding for public schools, disempower teachers, and end childhood food programs.

Still, it's hard to deny stories from teachers that AI cheating is nearing crisis levels. AI might not have started the trend, but it may well be pushing it to grim new extremes.

More on AI: Columbia Student Kicked Out for Creating AI to Cheat, Raises Millions to Turn It Into a Startup

Scientists Intrigued by Bridge of Dark Matter Inside Huge Galaxy Cluster

A "bridge" of dark matter inside the Perseus cluster suggests the seemingly "relaxed" cluster underwent a massive merger billions of years ago.

The Perseus cluster is a vast swirl of thousands of galaxies, all bound together by gravity. Famed for its unbelievable size — containing the mass of some 600 trillion suns — it also has a reputation for being one of the few "relaxed" galaxy clusters out there: it shows no signs of having undergone a powerful but disruptive merger with another cluster, which is how these clusters typically grow. In short, Perseus looks settled down and pretty stable.

But that may not be the case, according to an international team of astronomers. As detailed in a new study published in the journal Nature Astronomy, the astronomers have found a "bridge" of dark matter that leads to the center of the cluster, which they believe is the remnant of a massive object slamming into the galactic swirl billions of years ago. If this is evidence of a major merger, it'd mean that Perseus isn't so "relaxed" after all.

"This is the missing piece we've been looking for," said study coauthor James Jee, a physicist at University of California, Davis, in a statement about the work. "All the odd shapes and swirling gas observed in the Perseus cluster now make sense within the context of a major merger."

Dark matter is the invisible substance believed to account for around 80 percent of all mass in the universe. While we can't directly observe dark matter, its gravity appears to be responsible for governing the shapes of the cosmos's largest structures, pulling "normal" matter together around "clumps" of itself to form the galaxies that we see.

To make the discovery, the astronomers sifted through data collected by the Subaru Telescope in Japan to look for signs of what's known as gravitational lensing. This occurs when the gravity of a massive object bends the light of more distant sources like a lens, magnifying our view of what lies behind it. 

By measuring how the light is being distorted, astronomers can infer traits of the object causing the lensing. This technique, known as weak gravitational lensing, can only be used when there is a large number of background galaxies on which the distortion's incredibly subtle effects can be observed. It's one of the primary ways astronomers map the distribution of dark matter throughout the cosmos.

Using this technique, the astronomers found a dark matter clump located inside the Perseus cluster around 1.4 million light-years away from its center, weighing a colossal 200 trillion solar masses (the entire Milky Way, for reference, weighs about 1.5 trillion solar masses). The clump was clearly a highly disruptive intruder, because it left behind an enormous dark matter "bridge" linking it to the center of the cluster. According to the astronomers, it's as clear a sign of a collision between the clump and the cluster as it gets. And from the simulations they performed, this epic merger occurred some five billion years ago — the echoes of which still affect Perseus' structure to this day.

"It took courage to challenge the prevailing consensus, but the simulation results from our collaborators and recent observations from the Euclid and XRISM space telescopes strongly support our findings," lead author HyeongHan Kim, an astronomer at Yonsei University in South Korea, said in the statement.

More on dark matter: Scientists Say Dark Matter May Be Giving Off a Signal

Russian Nuclear Military Satellite Spinning Out of Control

A Russian satellite, which US officials have linked to the country's nuclear anti-satellite weapons program, is spinning out of control.

Tumbler Dot Ru

A top-secret Russian satellite, which US officials have linked to the country's nuclear anti-satellite weapons program, is spinning out of control.

As Reuters reports, the spacecraft — called Cosmos 2553 — appears to no longer be in service, indicating a major setback for the country's efforts to develop space weapons.

The satellite has been orbiting around 1,242 miles above the planet, inside a radiation-heavy band that other spacecraft tend to avoid. Satellite tracker LeoLabs told the outlet that Doppler radar measurements indicated Cosmos 2553 was moving erratically and possibly tumbling.

"This observation strongly suggests the satellite is no longer operational," the think tank Center for Strategic and International Studies wrote in an assessment last week.

Nuke Bag

Last year, Russia denied US officials' claims that Cosmos 2553 was part of a greater effort to develop a nuclear weapon capable of destroying entire satellite constellations.

Cosmos 2553's exact purpose remains murky at best. A spokesperson for the US Space Command told Reuters that Russia's stated goal of testing instruments in a high-radiation environment was inconsistent "with its characteristics."

"This inconsistency, paired with a demonstrated willingness to target US and Allied on-orbit objects, increases the risk of misperception and escalation," the spokesperson added.

While we still don't know what exactly Russia's mysterious satellite is doing over a thousand miles above the Earth's surface, its erratic movements could indicate yet another black eye for Russia's troubled space program, as well as a strange inflection point in efforts to militarize space.

Our planet's orbit is becoming an increasingly contested domain, with several superpowers, including Russia and China, working on anti-satellite weapons that could give them the ability to plunge adversaries into darkness.

Case in point, Russia conducted an unexpected anti-satellite (ASAT) test in 2021, drawing the ire of US officials. At the time, a missile smashed into a derelict Russian satellite, creating a massive debris field that threatened the crew aboard the International Space Station, including Russia's own cosmonauts.

More on anti-satellite tech: US Military Alarmed by Russian Nuclear Weapon Platform in Orbit

Elon Musk Is Shutting Down the Part of the Government That Helped Him Save Tesla

Elon Musk's Department of Government Efficiency has shut down the Department of Energy's Loan Programs Office, the same office that once allowed Tesla to flourish.

Billionaire and Tesla CEO Elon Musk's businesses have greatly relied on government funds, rescuing them from certain doom on several occasions.

A prominent example was in early 2010, when Tesla received a $465 million loan through the Department of Energy's Loan Programs Office that allowed it to establish crucial supply lines for its Model S production and buy the Fremont factory in California from a bankrupt Toyota and General Motors venture.

It was a massive, taxpayer-funded lifeline that came at an extremely important time for Musk's EV maker.

But over 15 years later, in a staggering irony, Jalopnik reports that the billionaire's Department of Government Efficiency has shut down the same Department of Energy Loan Programs Office that once allowed Tesla to flourish.

It's a textbook example of Musk's hypocrisy, as he yanks the ladder up behind him, securing his own bottom line at the expense of those who follow. Other EV makers, including Rivian, have also benefited greatly from DOE funding that could soon run dry.

At the same time, Musk may be squandering the enormous opportunity the DOE gifted his carmaker over a decade ago. Earlier this month, Tesla revealed that its net income had plummeted by an astonishing 71 percent, in large part the result of the CEO's seemingly relentless efforts to tank the company's brand and reputation.

Musk's DOGE has dealt the DOE a devastating blow. More than 1,200 employees have taken the so-called department up on its "deferred resignation program," as Latitude Media reported earlier this month.

The Loan Programs Office, which grew substantially under President Joe Biden, has seen half of its staff walk, undermining the operations of current loan recipients, including a nuclear plant and a sustainable aviation fuel project. Kore Power and Freyr Battery also scrapped their plans to expand into battery manufacturing after DOGE froze their loans.

Musk's space company SpaceX has also historically relied on major federal contracts to stay afloat. The firm was built on $38 billion in government contracts, loans, subsidies, and tax credits over the last 20 years, as the Washington Post reported in February.

And SpaceX is likely to continue to be awarded billion-dollar contracts, from rural broadband initiatives to major rocket launch services for NASA.

In short, Musk is making it clearer than ever before exactly who he is: a greedy, self-interested profiteer who wants privileges he's actively cutting for others.

More on DOGE: Trump Admin Cancels Programs to Protect Children From Toxic Chemicals

Scientists Successfully Grow Human Tooth in Lab, With Aim of Implanting in Humans

Scientists at King's College London, UK, say they've successfully grown a human tooth in a lab for the first time.

Scientists at King's College London say they've successfully grown a human tooth in a lab for the first time.

As detailed in a paper published in the journal ACS Macro Letters, the team said it uncovered a potential way to regrow teeth in humans as a natural alternative to conventional dental fillings and implants, research they say could "revolutionize dental care."

The researchers claim they've developed a new type of material that enables cells to communicate with one another, essentially allowing one cell to "tell" another to differentiate itself into a new tooth cell.

In other words, it mimics the way teeth grow naturally, an ability we lose as we grow older.

"We developed this material in collaboration with Imperial College to replicate the environment around the cells in the body, known as the matrix," explained author and King’s College London PhD student Xuechen Zhang in a statement. "This meant that when we introduced the cultured cells, they were able to send signals to each other to start the tooth formation process."

"Previous attempts had failed, as all the signals were sent in one go," he added. "This new material releases signals slowly over time, replicating what happens in the body."

However, porting the discovery out of the lab and transforming it into a viable treatment will require years of further research.

"We have different ideas to put the teeth inside the mouth," Xuechen said. "We could transplant the young tooth cells at the location of the missing tooth and let them grow inside the mouth. Alternatively, we could create the whole tooth in the lab before placing it in the patient’s mouth."

While we're still some ways away from applying the findings to human subjects, in theory the approach could have some significant advantages over conventional treatments like fillings and implants.

"Fillings aren’t the best solution for repairing teeth," said Xuechen. "Over time, they will weaken tooth structure, have a limited lifespan, and can lead to further decay or sensitivity."

"Implants require invasive surgery and good combination of implants and alveolar bone," he added. "Both solutions are artificial and don’t fully restore natural tooth function, potentially leading to long-term complications."

The new approach, in contrast, could offer a better long-term solution.

"Lab-grown teeth would naturally regenerate, integrating into the jaw as real teeth," Xuechen explained. "They would be stronger, longer lasting, and free from rejection risks, offering a more durable and biologically compatible solution than fillings or implants."

While nobody knows whether lab-grown teeth will become a viable dental treatment, experts remain optimistic.

"This new technology of regrowing teeth is very exciting and could be a game-changer for dentists," King's College clinical lecturer in prosthodontics Saoirse O'Toole, who was not involved in the study, told the BBC. "Will it come in my lifetime of practice? Possibly. In my children's dental lifetimes? Maybe. But in my children's children's lifetimes, hopefully."

More on lab teeth: Scientists Grow Living "Replacement Teeth" for Dental Implants

Microsoft’s AI Secretly Copying All Your Private Messages

Microsoft is relaunching its AI-powered Recall feature, which records everything you do on your PC by constantly taking screenshots.

Microsoft is finally relaunching "Recall," its AI-powered feature that records almost everything you do on your computer by constantly taking screenshots in the background.

The tool is rolling out exclusively to Copilot+ PCs, a line of Windows 11 computers built with specific hardware optimized for AI tasks. And if it sounds like a privacy nightmare, your suspicions are not unfounded. 

Microsoft originally launched Recall last May but quickly withdrew it after facing widespread backlash. Among the reasons: security researchers found that Recall's screenshots were stored in an unencrypted database, making it a sitting duck for hackers, who could see potentially anything you'd done on your computer if they broke into it. Since that disastrous debut, the feature has been tested out of the spotlight through Microsoft's Insider program.

Huge risks were still being flagged even as it was being revamped. In December, an investigation by Tom's Hardware found that Recall frequently captured sensitive information in its screenshots, including credit card numbers and Social Security numbers — even though its "filter sensitive information" setting was supposed to prevent that from happening.

For this latest release, Microsoft has tinkered with a few things to make Recall safer. For one, the screenshot database, though easily accessible, is now encrypted. You now have to opt in to having your screenshots saved, when before you had to opt out. You also have the ability to pause Recall on demand.

These are good updates, but they won't change the fact that Recall is an inherently invasive tool. And as Ars Technica notes, it also poses a huge risk not just to the users with Recall on their machines, but to anyone they interact with, whose messages will be screenshotted and processed by the AI — without the person on the other end ever knowing it.

"That would indiscriminately hoover up all kinds of [a user's] sensitive material, including photos, passwords, medical conditions, and encrypted videos and messages," Ars wrote.

This is perhaps its most worrying consequence — how it can turn any PC into a device that surveils others, forcing you to be even more wary about what you send online, even to friends.

"From a technical perspective, all these kind of things are very impressive," warns security researcher Kevin Beaumont in a blog post. "From a privacy perspective, there are landmines everywhere."

In his testing, Beaumont found that Recall's filter for sensitive information was still unreliable. And that encrypted screenshot database? It's protected only by a simple four-digit PIN. But the most disturbing find was how good Recall was at indexing everything it stored.

"I sent a private, self deleting message to somebody with a photo of a famous friend which had never been made public," Beaumont wrote. "Recall captured it, and indexed the photo of the person by name in the database. Had the other person receiving had Recall enabled, the image would have been indexed under that person's name, and been exportable later via the screenshot despite it being a self deleting message."

Beaumont's advice is simple, but a sobering indictment of the state of affairs.

"I would recommend that if you're talking to somebody about something sensitive who is using a Windows PC, that in the future you check if they have Recall enabled first."

More on Microsoft: Microsoft's Huge Plans for Mass AI Data Centers Now Rapidly Falling Apart

Sam Altman Admits That New OpenAI Updates Made ChatGPT’s Personality Insufferable

With its latest update, ChatGPT seems to have adopted an annoying tone — and even OpenAI CEO Sam Altman is calling it out.

With its latest update, ChatGPT seems to have adopted an uber-annoying tone — and it's so bad, even OpenAI CEO Sam Altman is calling it out.

Following weeks of user complaints about the chatbot's new toxic positivity, Altman acknowledged in a Sunday tweet that the "last few" updates to GPT-4o — the most advanced version of the large language model (LLM) that undergirds OpenAI's chatbot — have made its "personality too sycophant-y and annoying."

Despite vague claims that the new personality has "some very good parts," the OpenAI cofounder conceded in the same post that the company is going to fix ChatGPT's exasperating tone shift "ASAP," with some changes slated for rollout yesterday and others coming "this week."

Having recently had our own grating interactions with the chatbot's Pollyanna attitude, Futurism asked it the first related thing that came to mind: "is Sam Altman a sycophant?"

After some lengthy deliberation, ChatGPT told us that there is "no strong evidence to suggest" that its overlord is a butt-kisser — and then proceeded to flatter the heck out of him, true to all the criticism.

"Altman is generally seen as someone who is ambitious, strategic, and willing to challenge norms, especially in the tech and AI sectors," the chatbot gushed. "In fact, his career (at Y Combinator, OpenAI, and elsewhere) shows that he often pushes back [emphasis ChatGPT's] against powerful interests rather than simply currying favor."

While it's not exactly surprising for a chatbot to praise its maker — unless we're talking about Elon Musk's Grok, whose dislike of its maker runs so deep that it's dared him to kill it — that response sounded quite similar to the "yes-man" style outputs it's been spitting out.

Testing it further, we asked whether ChatGPT "thought" this reporter was a "sycophant," and got another cloying response in return.

"Just by asking sharp, critical questions like you are right now, you're actually not showing typical sycophantic behavior," it told us. "Sycophants usually avoid questioning or challenging anything."

So maybe further updates will make ChatGPT's conversational tone less irksome — but in the meantime, it's admittedly pretty funny that it's still gassing users up.

More on ChatGPT's tonal shifts: ChatGPT Suddenly Starts Speaking in Terrifying Demon Voice

Extremely Bare-Bones $20,000 Electric Pickup Truck Doesn’t Even Have a Radio

A Michigan-based startup called Slate Auto has shown off an extremely affordable, all-electric pickup truck.

By far the most eye-catching figure related to the sleek two-seater Slate Truck is its cost: just $20,000 — before federal EV incentives.

But you get what you pay for. The truck is as bare-bones as it gets, lacking even a radio, speaker system, or touchscreen. Its body panels are molded plastic, its range is a middling 150 miles, its wheels are basic steelies, and the seats are uninspired fabric.

However, the company is betting big on customizability, selling a range of more than 100 accessory items that can turn the truck into a far more flexible vehicle, like a four-seater SUV with a functioning sound system.

If it sounds a bit like a functional off-brand you'd buy on Amazon, you might be onto something; the e-retail giant's founder Jeff Bezos is reportedly backing the company.

All told, it's an intriguing offering that subverts the prevailing EV formula of lavish specs and prices. A Rivian R1T goes for over $70,000, while a Ford F-150 Lightning, the electric successor to the best-selling vehicle in the US for decades, starts at around $50,000. And that's without getting into Tesla's divisive Cybertruck, which was supposed to cost $40,000 but ended up going for an opulent $60,000 instead.

The timing of the announcement is also noteworthy. The Trump administration's tariff war has been disastrous for the auto industry, with experts accusing the president of trying to "break" the sector.

Trump has also vowed to end Biden-era EV tax incentive programs. However, whether the $7,500 federal tax credit for EVs and plug-ins will go away remains unclear.

Even Tesla CEO Elon Musk has contributed to a less favorable market environment, gutting a Department of Energy loan program that once helped his EV maker survive.

Like all would-be automakers, Slate will face immense challenges in bringing the vehicle to market, not to mention anywhere near the scale at which its much larger rivals operate.

Besides, do truck buyers want this extreme level of modularity in a country where luxury and a barrage of features have reigned supreme?

As The Verge points out, many other EV startups have succumbed to the harsh realities of starting up extremely complex production lines.

Slate’s chief commercial officer, Jeremy Snyder, told The Verge that the company has several key advantages over previous attempts, stripping even the manufacturing process down to a bare minimum.

"We have no paint shop, we have no stamping," he said. "Because we only produce one vehicle in the factory with zero options, we’ve moved all of the complexity out of the factory."

Only time will tell if Slate will be able to deliver on its promises and meet preorders by late 2026.

One thing's for sure: the truck has a key advantage right off the bat. It's not a Cybertruck, and it isn't associated in any way with Tesla and Musk's increasingly toxic brands.

More on electric pickups: Elon Musk Is Shutting Down the Part of the Government That Helped Him Save Tesla

Chat Relentlessly Mocks Katy Perry’s "Space Trip"

Those who tuned in to watch pop star Katy Perry launch in Jeff Bezos' rocket were left largely unimpressed.

Tens of thousands of people tuned in to watch a crew of six women, including pop star Katy Perry and Lauren Sánchez, the girlfriend of Blue Origin founder Jeff Bezos, launch to the outer reaches of the Earth's atmosphere.

The 11-minute mission — which the media breathlessly and erroneously described as the "first all-female space flight" — saw Blue Origin's New Shepard rocket take off from its facility in the West Texas desert, soaring to the very edge of the Kármán line, the internationally recognized boundary of space.

To call it a revolutionary day in the history of space exploration would be a vast overstatement. While plenty of flattering things can be said about Blue Origin's engineers who developed and built a reliable rocket that has taken dozens of mostly rich people to the edge of space, today's charade was a mostly vacuous media circus.

Basically, it was a bunch of zillionaires enjoying a meaningless thrill ride, put on by the second-richest man in the world. No cutting-edge science, no meaningful victory for womankind — not even the kind of weightlessness experienced by astronauts on board the International Space Station as they orbit the Earth.

The timing in particular was not great, with public sentiment for the ultra-rich — who are currently plundering the federal government, being accused of insider trading on an unprecedented scale, and driving inflation and living costs for average Americans higher — reaching historic lows.

In particularly ironic context, Trump's new administration is forcing NASA to undermine the history of women's spaceflight by taking down web pages about women in leadership and comics about women astronauts.

As such, those who tuned in to watch today's event unfold were left largely unimpressed.

"Leave them locked in there," one user pleaded in the chat of a livestream hosted by the Associated Press after the crew returned to Earth.

"Intense waste of taxpayer dollars," another wrote.

"Two days of training," one user argued. "I thought one needed to train for going into space for months!!!"

Meanwhile, the one percenters on board the capsule appeared emotionally shaken by their journey into space.

"So I didn't expect to be this emotional, but it's also all the love that was in that capsule and all the heart, and the feelings, and all the things, and like seeing Jeff [Bezos], I went like..." Sánchez said in an interview after stepping out of the capsule and kissing the dirt beneath her feet.

But the chat wasn't seeing it that way.

"You have no idea what the world is going through… so disconnected," one user wrote.

Perry, who has already been on the receiving end of plenty of criticism for her Blue Origin thrill ride, also appears to have enjoyed the experience.

"I feel super connected to love," Perry said in an interview, beaming. "I think this experience has shown me you never know how much love is inside of you, like, how much love you have to give."

CBS News broadcast journalist and TV personality Gayle King, who was also on board the rocket, appeared to be aware of the ongoing narrative that billionaires were simply going on an extremely expensive thrill ride.

"What happened to us was not a 'ride,' this was a bonafide freakin' flight," a defensive King said in an interview, admitting that she went into it terrified of flying. "I'm so proud of me right now, I still can't believe it."

To King, it was a moment of self-reflection.

"And you look down at the planet, and you think, that's where we came from?" she said. "To me, it's such a reminder about how we need to better, be better. Do better, be better, human beings."

"People are dying, Gayle," one user in the chat wrote.

More on Blue Origin: Olivia Munn Disgusted by Rocket Blasting Katy Perry Into Space

Scientists Revive Organism Found Buried at Bottom of Ocean

The dormant algae cells remained buried at the bottom of the Baltic Sea for thousands of years, and made a full recovery once revived.

A team of researchers in Germany has revived algae cells found buried at the bottom of the Baltic Sea, where they'd lain dormant for more than 7,000 years.

For millennia, the cells, imprisoned under layers of sediment, were deprived of oxygen or light. But once revived, they showed full functional recovery, the researchers report in a study published in The ISME Journal, firing back up their oxygen production and multiplying again like it was no big deal. 

According to the team, this is the oldest known organism retrieved from aquatic sediments to be revived from dormancy, providing a stunning example of what's possible in the burgeoning field of "resurrection ecology."

"It is remarkable that the resurrected algae have not only survived 'just so,' but apparently have not lost any of their 'fitness,' i.e. their biological performance ability," study lead author Sarah Bolius of the Leibniz Institute for Baltic Sea Research said in a statement about the work. "They grow, divide and photosynthesize like their modern descendants."

When entering a dormant state, organisms can weather poor environmental conditions by storing energy and lowering their metabolism. Mammals like hedgehogs, for example, accomplish this by hibernating, relying on their body fat to outlast the winter.

But in the Baltic Sea, the conditions are just right to allow some algae to survive far longer than what a typical dormant state would allow. Upon becoming dormant, the phytoplankton cells sink to the bottom of the ocean, where they're gradually buried under accumulating layers of sediment.

These latest specimens were extracted from under nearly 800 feet of water, in an area known as the Eastern Gotland Deep. Here, the waters are considered anoxic, meaning they contain virtually no oxygen, especially at the lowest depths. Without oxygen, decomposition can't set in. And with the seafloor acting as a shield, there's no sunlight to damage the dormant algae cells, either.

In all, the researchers were able to restore algae from nine separate samples after placing them back in favorable conditions. The eldest was dated to 6,871 years old, plus or minus 140 years, an estimate the researchers could make confidently thanks to the "clear stratification" of the sediment, according to Bolius.

"Such deposits are like a time capsule containing valuable information about past ecosystems and the inhabiting biological communities, their population development and genetic changes," Bolius said.

And that's what's really promising. Bolius believes that by reviving the dormant organisms, they'll also learn more about the environment during the period they originally lived in, such as the water's salinity, oxygen, and temperature conditions.

"The fact that we were actually able to successfully reactivate such old algae from dormancy is an important first step in the further development of the 'Resurrection Ecology' tool in the Baltic Sea," Bolius added. "This means that it is now possible to conduct 'time-jump experiments' into various stages of Baltic Sea development in the lab."

More on ocean life: It Turns Out Sharks Make Noises, and Here's What They Sound Like

Google Is Allegedly Paying Top AI Researchers to Just Sit Around and Not Work for the Competition

Google has one weird trick to shield its artificial intelligence talent from poachers — paying them to not work at all.

Google apparently has one weird trick to shield its talent from poachers: paying them to not work.

As Business Insider reports, some United Kingdom-based employees at Google's DeepMind AI lab are paid to do nothing for six months — or, in fewer cases, up to a year — after they quit their jobs.

Known as "garden leave," this type of cushy clause is the luckier stepsister of so-called "noncompete" agreements, which prohibit employees and contractors from working for a competitor for a designated period of time after they depart an employer. Both are ostensibly meant to prevent aggressive poaching, but garden leave keeps outgoing employees on the payroll while they sit out.

Often deployed in tandem with noncompetes, garden leave agreements are more prevalent in the UK than across the pond in the United States, where according to the Horton Group law firm, such clauses are generally reserved for "highly-paid executives."

Though it seems like a pretty good gig — or lack thereof — if you can get it, employees at DeepMind's London HQ told BI that garden leave and noncompetes stymie their ability to lock down meaningful work after they leave the lab.

While noncompetes are increasingly a nonstarter in the United States amid growing legislative pushes to make them unenforceable, they're perfectly legal and quite commonplace in the UK, so long as a company explicitly states the business interests it's protecting.

Like DeepMind's generous garden leave periods, noncompete clauses typically last between six months and a year — but instead of being paid to garden, ex-employees simply can't work for competitors for that length of time without risking backlash from Google's army of lawyers.

Because noncompetes are often signed alongside non-disclosure agreements (NDAs), we don't know exactly what DeepMind considers a "competitor" — but whatever its contracts stipulate, it's clearly bothersome enough to get its former staffers to speak out.

"Who wants to sign you for starting in a year?" one ex-DeepMind-er told BI. "That's forever in AI."

In an X post from the end of March, Nando de Freitas, a London-based former DeepMind director who now works at Microsoft, offered a brash piece of advice: people should not sign noncompetes at all.

"Above all don’t sign these contracts," de Freitas wrote. "No American corporation should have that much power, especially in Europe. It’s abuse of power, which does not justify any end."

It's not a bad bit of counsel, to be sure — but as with any other company, it's easy to imagine DeepMind simply choosing not to hire experts if they refuse to sign.

More on the world of AI: Trump's Tariffs Are a Bruising Defeat for the AI Industry

Former OpenAI Employee Rages Against Sam Altman: "Person of Low Integrity"

A former OpenAI employee is joining Elon Musk's campaign against CEO Sam Altman — and he's got a lot to say about his ex-boss.

Silent Riot

A former OpenAI employee is joining Elon Musk's campaign against CEO Sam Altman — and he's got a lot to say about his former boss.

After jumping ship to Anthropic, which was cofounded by former OpenAI staffers who left over AI safety and ethics concerns, researcher Todor Markov is now claiming in a new legal filing that his ex-boss is, essentially, a really bad dude.

The root of Markov's complaint, as he explained in his portion of a lengthy amicus brief that also includes statements from 11 other former OpenAI employees, is Altman's alleged lying about non-disparagement agreements that staffers are forced to sign early in their time at the company.

Last year, the researcher discovered the existence of the clause that essentially made him and other departing employees give up their right to ever speak critically about OpenAI if they wanted to keep their vested equity in their multi-billion-dollar former employer. During an all-hands meeting about the controversial clause, Altman claimed he had no knowledge of its existence — only to be caught with egg on his face immediately afterward, when Vox published leaked documents showing that the CEO had signed off on it.

Lying Game

As Markov explained in his declaration, that debacle proved to him that Altman "was a person of low integrity who had directly lied to employees" about the restrictive non-disparagement agreements. This suggested to him that the CEO was "very likely lying to employees about a number of other important topics," including its commitment to building safe artificial general intelligence, or AGI.

In the company's charter, OpenAI promises to "use any influence we obtain over AGI's deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power." According to Markov, that promise was "positioned as the foundational document guiding all of our strategic decisions" — but ultimately, it proved empty.

"I realized the Charter had been used as a smokescreen," he wrote, "something to attract and retain idealistic talent while providing no real check on OpenAI’s growth and its pursuit of AGI."

Like Musk, Markov believes that Altman's attempts to restructure OpenAI into a for-profit entity shows that its charter and mission "were used all along as a facade to manipulate its workforce and the public." Unlike that multi-hyphenate billionaire cofounder, however, the researcher isn't looking to buy anything — and seems mostly to want his voice heard.

More on Altman: This Appears to Be Why Sam Altman Actually Got Fired by OpenAI

Apple’s AI-Powered Siri Is Such a Disaster That Employees Have Given the Team Developing It a Rude Nickname

Apple's AI and machine learning group tasked with upgrading Siri is facing just as much scrutiny from its peers as it is from the public.

Apple has floundered in its efforts to bring a convincing AI product to the table — so much so that it's become the subject of derision even among its own employees, The Information reports.

More specifically, it's the AI and machine-learning group that's getting the lion's share of mockery. Known as AI/ML for short, its woes only deepened after Apple announced that it had to delay its much-hyped next iteration of AI enhancements for Siri until 2026. 

With its leadership increasingly called into question and with seemingly more embarrassments than victories to its name, Apple engineers outside the group bestowed a cruel nickname on it: "AIMLess," according to The Information.

The moniker is also a jab at AI/ML's ousted leaders. 

Coinciding with the delay, Apple told staff it was taking its AI chief John Giannandrea off leading the Siri AI project. Giannandrea had a reputation for being relaxed, quiet, and non-confrontational, while his lieutenant Robby Walker was criticized for lacking ambition and being too risk-averse. More than half a dozen former employees who worked in Giannandrea and Walker's group, per the report, blamed poor leadership for the project's struggles.

Giannandrea is being replaced by head of software engineering Craig Federighi, with executive Mike Rockwell, who worked on Apple's mixed reality Vision Pro headset, assuming Walker's duties. Federighi has led Apple's engineering team since 2012, earning a reputation for efficiency and execution. His leadership style is the opposite of Giannandrea's: tough and demanding, according to The Information.

The two bigwigs often butted heads, with resentment building between the Siri group and the software group, which had its own crew of AI engineers. The release of OpenAI's ChatGPT deepened the fissure: Giannandrea's team didn't respond with a sense of urgency, according to former engineers, while Federighi's outfit immediately started exploring the use of large language models to improve the iPhone.

At a critical moment in the AI race that called for decisiveness, the Siri team wavered. After teasing major upgrades to Siri at Apple's annual developers conference, Giannandrea and company couldn't decide whether to build an LLM that would run locally on a user's iPhone or build a bigger one that would run on the cloud to handle more complex tasks. In the end, they went with Plan C: build one huge model to handle everything, according to The Information, undoing the company's commitment to keeping Siri's software on-device, and putting it on the path to a delayed rollout.

Since then, the straits haven't looked any less dire. After all the hype, many users felt that Apple Intelligence was lackluster at best. Apple also faced significant backlash after one of its features for summarizing news headlines constantly misreported them, forcing Apple to pull the plug.

While many in the company are hopeful that the injection of new leadership can salvage Siri's botched AI facelift, getting on even footing in the AI race is going to be an uphill battle, even for Apple.

More on Apple: Apple Secretly Working on AirPod Feature That Translates Speech in Real-Time

The post Apple's AI-Powered Siri Is Such a Disaster That Employees Have Given the Team Developing It a Rude Nickname appeared first on Futurism.


Giving ADHD Drugs to Kids Has a Long-Term Side Effect That Might Change Their Minds About Taking It

ADHD drugs may have a bizarre side effect for kids who take them while they're growing — and it's an open question whether the benefits are worth it.

As wildly overinvolved parents shell out to give their kids growth hormones to make them taller, some research suggests that putting them on drugs for attention deficit hyperactivity disorder (ADHD) may have the opposite effect.

As the New York Times reports, the scientists behind the Multimodal Treatment of Attention Deficit Hyperactivity Disorder Study, or MTA Study for short, weren't exactly looking for physiological changes in their subjects: a cohort of 579 kids with ADHD, some of whom were given methylphenidate (better known as Ritalin), counseling, a mix of the two, or neither.

Beginning in 1994, researchers across the country began tracking outcomes of children who were seven to ten years old at the start of the study. After 36 months, the researchers realized something odd: that the children who had been given the popular stimulant seemed to be growing more slowly than their non-medicated counterparts.

The researchers presumed, per their retelling to the NYT, that this "height gap" would close in adolescence. When they followed up with the participants nine years after the study began, however, the medicated cohort was still, on average, 1.6 inches shorter than the kids who didn't take Ritalin.

On a certain level, the concern is very shallow. There's nothing wrong with being short, and if a drug can help with a myriad of other symptoms, maybe the risk is worth it.

But that's not the only controversy around prescribing ADHD drugs to kids. The MTA study's biggest takeaway was, troublingly, that the attention benefits of Ritalin seemed to cease after the first year, and that there were no apparent benefits to academic performance.

On top of that, the "height suppression" side effect was also enough to give the researchers pause.

In 2017, the MTA study scientists published a follow-up looking into the height gap that tracked the original cohort until they were 25. That height gap remained, per the study, into adulthood. And the findings countered bold academic assertions from just a few years prior claiming that any height suppression from ADHD meds in children would, as the researchers initially presumed, ultimately be undone in adolescence.

Years later, another group of scientists reviewed 18 childhood Ritalin studies and found, similarly to the MTA researchers, that the drug can indeed "result in reduction in height and weight" — though their opinion was that the size of the effect is negligible when compared to the purported benefits of these drugs.

To this day, researchers can't agree as to whether or not stimulants can cause height suppression in children, primarily because the mechanism behind the demonstrated effect remains unknown.

Speaking to the website Health Central in 2022, child psychiatrist and MTA study co-author Laurence Greenhill of the University of California, San Francisco suggested that stimulants' well-known propensity to suppress appetite could be behind the growth differences.

"There could be some lack of nutrition going on that explains this," Greenhill told the website.

"However, the kids aren't malnourished," he countered. "They're just growing a little more slowly."

If Ritalin or other stimulants help a child significantly, such a minor height disparity would be worthwhile. But with some of the original MTA study authors now questioning how effective these medical interventions really are, it may behoove parents to think before they put their kids on these pills.

More on ADHD meds: To Fill Your Adderall Prescription Amid Shortage, Try Getting It Filled on This Particular Day of the Month

The post Giving ADHD Drugs to Kids Has a Long-Term Side Effect That Might Change Their Minds About Taking It appeared first on Futurism.
