Check Out This Giant AI-Powered "Spice Dispenser" for Dorks Too Timid to Properly Season Their Food

If for some reason adding a pinch of anything to your meal is too intimidating, there's now an AI gizmo to address that.

Out of Season

Back in 2017, Juicero — a buzzy, multimillion-dollar startup selling a high-tech, WiFi-enabled fruit and veggie juicer that had taken health circles by storm — imploded spectacularly when Bloomberg discovered that you could squeeze its juice packs by hand, no $700 over-engineered machine required. CNET derided the demise as history's "greatest example of Silicon Valley stupidity."

We may now have a new pretender to the throne. Enter the Spicerr, a supposedly "AI-powered" "smart" spice dispenser that will automatically decide how much seasoning you should add to your barren foodstuffs. 

"Spicerr takes the guesswork out of seasoning," reads its marketing copy, with "curated spice blends" and "precise measurements," making it the perfect kitchen gizmo for dorks who are too unadventurous to even dabble in the art of adding a pinch of salt or ancho chile.

The company also has an extremely obnoxious ad featuring anthropomorphized kitchenware, which is for some reason aware of what an AI model is. Suspension of disbelief shattered.

Lock and Preload

The Spicerr is designed like a minimalist, tech-inflected pepper grinder with a revolver's cylinder stuck on the bottom. It holds six pre-packaged spice capsules at a time, which you have to buy from the manufacturer, like so many hated inkjet printers. Spicerr sells an "Essential Collection" that comes with black pepper, turmeric, crushed pepper, ginger, cinnamon, and cumin, as well as three other collections for "family cooking," "baking with kids," and plain ol' "BBQ."

Using a small touch screen at the top — did we mention this thing uses a touch screen? — you choose the blend or recipe you want (which will require you to navigate more than a few tiny menus, if this demonstration is anything to go by), load the necessary capsules, press the button, and let the Spicerr go to town.

And voilà: you now have a seasoned meal, human. Is the 3:1 ratio of salt to pepper with an uncertainty of 4 percent to your liking?

Data Driven

We know we said this thing has the "AI" label slapped on it, but it's unclear what exactly the "AI-powered platform" is, other than something that apparently collects your data via its accompanying app.

"By analyzing your preferences and interactions, Spicerr quickly learns your tastes and suggests dishes and spice blends perfectly suited to your palate," the website reads.

We'd posit it's not just variety, but also spontaneity, that's the spice of life. So for the love of all that is holy, don't let an algorithm decide how much cinnamon or paprika you're adding to your food. Take the risk and toss some of that stuff in there yourself — and then taste it, engage your brain, and decide whether it needs another pinch.

More on AI: Trump Signs Executive Order Banning Woke AI

New Law Would Allow AI to Replace Your Doctor, Prescribe Drugs

A bold new bill to allow AI chatbots to prescribe controlled drugs has been introduced into the House for review.

If you weren't convinced we're spiraling toward an actual cyberpunk future, a new bill seeking to let AI prescribe controlled drugs just might.

The proposed law was introduced in the House of Representatives by Arizona's David Schweikert this month, where it was referred to the House Committee on Energy and Commerce for review. Its purpose: to "amend the Federal Food, Drug, and Cosmetic Act to clarify that artificial intelligence and machine learning technologies can qualify as a practitioner eligible to prescribe drugs."

In theory, it sounds good. Engaging with the American healthcare system often feels like hitting yourself with a slow-motion brick, so the prospect of a perfect AI-powered medical practitioner that could empathically advise on symptoms, promote a healthy lifestyle, and dispense crucial medication sounds like a promising alternative.

But in practice, today's AI isn't anywhere near where it'd need to be to provide any of that, never mind prescribing potentially dangerous drugs, and it's not clear that it'll ever get there.

Schweikert's bill doesn't quite declare a free-for-all — it caveats that these robodoctors could only be deployed "if authorized by the State involved and approved, cleared, or authorized by the Food and Drug Administration" — but downrange, AI medicine is clearly the goal. Our lawmakers evidently feel the time — and money — is right to remove the brakes and start letting AI into the health care system.

The Congressman's optimism aside, AI has already fumbled in healthcare repeatedly — like the time an OpenAI-powered medical record tool was caught fabricating patients' medical histories, or when a Microsoft diagnostic tool confidently asserted that the average hospital was haunted by numerous ghosts, or when an eating disorder helpline's AI chatbot went off the rails and started encouraging users to engage in disordered eating.

Researchers agree. "Existing evaluations are insufficient to understand clinical utility and risks because LLMs [large language models] might unexpectedly alter clinical decision making," reads a critical study from medical journal The Lancet, adding that "physicians might use LLMs’ assessments instead of using LLM responses to facilitate the communication of their own assessments."

There's also a social concern: today's AI is notoriously easy to exploit, meaning patients would inevitably try — and likely succeed — to trick AI doctors into prescribing addictive drugs without any accountability or oversight.

For what it's worth, Schweikert used to agree. In a blurb from July of last year, the Congressman is quoted saying that the "next step is understanding how this type of technology fits 'into everything from building medical records, tracking you, helping you manage any pharmaceuticals you use for your heart issues, even down to producing datasets for your cardiologist to remotely look at your data.'"

He seems to have moved on from that cautious optimism, instead adopting the move-fast-break-things grindset that spits untested self-driving cars onto our roads and AI Hitlerbots into our feeds — all without our consent, of course.

As the race to profitability in AI heats up, the demand for real-world use cases is growing. And as it does, tech companies face immense pressure to pump out their latest iteration, the next big boom.

But the consequences of corner-cutting in the medical world are steep, and big tech has shown time and again that it would rather rush its products to market and shunt social responsibility onto us — filling our schools with ahistorical Anne Frank bots and AI buddies that drive teens toward suicide and self-harm.

Deregulation like the kind Schweikert proposes is exactly how big tech gets away with these offenses, such as training GenAI models on patient records without consent. It does nothing to ensure that subject matter experts are involved at any step in the process, or that we thoroughly consider the common good before the corporate good.

And as our lawmakers hand these tech firms the keys to the kingdom, it's often the most vulnerable who are harmed first — recall the bombshell revelation that the biggest and flashiest AI models are built on the backs of sweatshop workers.

When it comes to AI outpatient care, you don't need to be Cory Doctorow to imagine a world of stratified healthcare — well, any more than we already have — where the wealthiest among us have access to real, human doctors, and the rest of us are left with the unpredictable AI equivalent.

And in the era of Donald Trump's full embrace of AI, it's not hard to imagine another executive order or federal partnership making AI pharmacists a reality without that pesky oversight.

More on tech and drugs: Congress Furious With Mark Zuckerberg for Making Money From Illegal Drug Ads

OpenAI’s Agent Has a Problem: Before It Does Anything Important, You Have to Double-Check It Hasn’t Screwed Up

Operator, OpenAI's brand new AI agent, doesn't quite deliver the hands-off experience some might hope it would.

Behold Operator, OpenAI's long-awaited agentic AI model that can use your computer and browse the web for you. 

It's supposed to work on your behalf, following the instructions it's given like your very own little employee. Or "your own secretary" might be more apt: OpenAI's marketing materials have focused on Operator performing tasks like booking tickets, making restaurant reservations, and creating shopping lists (though the company admits it still struggles with managing calendars, a major productivity task).

But if you think you can just walk away from the computer and let the AI do everything, think again: Operator will need to ask for confirmation before pulling the trigger on important tasks. That throws a wrench into the premise of an AI agent acting on your behalf, since the clear implication is that you need to make sure it's not screwing up before allowing it any real power.

"Before finalizing any significant action, such as submitting an order or sending an email, Operator should ask for approval," reads the safety section in OpenAI's announcement.

This measure highlights the tension between keeping stringent guardrails on AI models and allowing them to freely exercise their purportedly powerful capabilities. How do you put out an AI that can do anything — without it doing anything stupid?

Right now, a limited preview of Operator is only available to subscribers of the ChatGPT Pro plan, which costs an eye-watering $200 per month. 

The agentic tool uses its own AI model, called Computer-Using Agent, to interact with its virtual environment — that is, to perform mouse and keyboard actions — by constantly taking screenshots of your desktop.

The screenshots are interpreted by GPT-4o's image-processing capabilities, theoretically allowing Operator to use any software it's looking at, and not just ones designed to integrate with AI.
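For a rough sense of what that screenshot-driven loop looks like, here's a minimal sketch of the general pattern. Every name in it (capture_screen, ask_vision_model, and so on) is a hypothetical stand-in, and the approval check simply illustrates the "ask before significant actions" rule quoted above — none of this is OpenAI's actual Operator or Computer-Using Agent code.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # "click", "type", "submit", or "done"
    detail: str = ""     # e.g. a target description or text to type

SIGNIFICANT = {"submit"}  # action kinds that pause for user approval

def capture_screen() -> bytes:
    # Stand-in for grabbing the desktop pixels the model "looks at."
    return b"<raw pixels>"

def ask_vision_model(task: str, screenshot: bytes, step: int) -> Action:
    # Stand-in for the vision model that reads the screenshot and proposes
    # the next GUI action; this toy version clicks once and then finishes.
    return Action("click", "search box") if step == 0 else Action("done")

def apply_action(action: Action) -> None:
    # Stand-in for synthesizing the real mouse/keyboard event.
    print(f"performing {action.kind}: {action.detail}")

def run_agent(task: str, max_steps: int = 25) -> None:
    for step in range(max_steps):
        action = ask_vision_model(task, capture_screen(), step)
        if action.kind == "done":
            break
        # "Significant" steps (orders, emails, etc.) wait for explicit approval.
        if action.kind in SIGNIFICANT and input(f"Approve '{action.kind}'? [y/N] ").lower() != "y":
            continue  # user vetoed; loop back and ask the model for another plan
        apply_action(action)

if __name__ == "__main__":
    run_agent("book a table for two on Friday")
```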

But in practice, it doesn't sound like the seamless experience you'd hope it to be (though to be fair, it's still in its early stages). When the AI gets stuck, as it still often does, it hands control back to the user to remedy the issue. It will also stop working to ask you for your usernames and passwords, entering a "takeover mode."

It's "simply too slow," wrote one user on the ChatGPTPro subreddit in a lengthy writeup, who said they were "shocked" by its sluggish pace. "It also bugged me when Operator didn't ask for help when it clearly needed to," the user added. In reality, you may have to sit there and watch the AI painstakingly try to navigate your computer, like supervising a grandparent trying their hand at Facebook and email.

Obviously, safety measures are good. But it's worth asking just how useful this tech is going to be if it can't be trusted to work reliably without neutering it.

And if safety and privacy are important to you, then you should already be uneasy with the idea of letting an AI model run rampant on your machine, especially one that relies on constantly screenshotting your desktop.

While you can opt out of having your data used to train the AI model, OpenAI will store your chats and screenshots on its servers for up to 90 days, TechCrunch reported, even if you delete them.

Because Operator can browse the web, it will potentially be exposed to all kinds of danger, including attacks called prompt injections that could trick the model into defying its original instructions.

More on AI: Rumors Swirl That OpenAI Is About to Reveal a "PhD-Level" Human-Tier Intelligence

Manhattan Shows Huge Reduction in Car Crashes After Instituting Congestion Pricing

New Yorkers are seeing huge quality of life wins from congestion pricing so far, a great sign for people-first transit policy.

Wheels Up

The wins keep coming after Manhattan initiated its congestion toll on cars early this month. Now, the latest data is showing a massive decrease in crash-related injuries.

New data from the first 12 days of congestion pricing shows that total injuries below 60th Street — the zone where congestion pricing takes effect, charging drivers up to $9 to enter — dropped 51 percent compared to the same period in 2024. Total crashes, meanwhile, dropped 55 percent.

The analysis comes courtesy of outspoken transit advocate Gersh Kuntzman and his team at Streetsblog NYC. He cautions that it's too early to take a victory lap, given that the figures do not account for variations in weather between 2024 and 2025, but they are promising. This is the latest indicator that congestion pricing is working as intended — kids are getting to school faster, the city is quieter, bridges and tunnels are seeing significantly less traffic, and the air is becoming a bit cleaner. And these are just the knock-on effects of traffic reduction.

The real winners are New York City's public transit riders, whom the congestion toll is meant to directly benefit via station improvements, critical infrastructure repairs, extended bus routes, and a resumption of the much-needed Second Avenue subway extension project, which had been stalled for years.

In other words, if the good news keeps coming, the initiative could become a compelling proof-of-concept for other areas of New York and more crowded cities around the country.

Cutting Edge

That's in a perfect world, of course. While adding friction for cars is looking like a major win for New York so far, it comes at a time when common-sense transit projects across the country are flailing. Some are way over budget, outsourced to pie-in-the-sky tech startups, or falling victim to a busted legislative process.

And that's even if a city ponies up the will to make affordable, car-free transit a priority at all.

Going forward, there's very little evidence that investments in crucial infrastructure will be coming from the federal government — Trump has already cut Biden's Infrastructure Investment and Jobs Act in favor of his Stargate gamble — making state and local policy like the Big Apple's congestion pricing all the more crucial.

While other cities spend taxpayer money to beautify parking lots, New York City is leading by example and showing the rest of the country what people-first transit policy can do for their communities. In a country dominated by cars, this rare win for public transit is worth imitating.

More on transit: Leftists Plead With Trump Not to Build High Speed Rail System Connecting America's Major Cities

Trump, Who Called for Death Penalty for Drug Dealers, Pardons Most Influential Drug Dealer in Human History

Trump appears to have reversed on his position that drug dealers should get the death penalty by pardoning Ross Ulbricht this week.

During his announcement in late 2022 that he would be once again running for president of the United States, now-reelected president Donald Trump issued an outrageous threat: that he'd seek to punish drug dealers with the death penalty.

"We're going to be asking everyone who sells drugs, gets caught selling drugs, to receive the death penalty for their heinous acts," Trump told an audience during his speech at his Mar-a-Lago estate at the time. "Because it's the only way."

Just over two years later, Trump appears to have completely reversed that position by unconditionally pardoning Ross "Dread Pirate Roberts" Ulbricht this week. Ulbricht ran Silk Road, the now infamous site that made history as the first dark web marketplace, offering a cornucopia of banned drugs in exchange for the then-nascent cryptocurrency Bitcoin.

Ulbricht was sentenced to life in federal prison in 2015 after he was busted by the FBI — and after Silk Road had facilitated the sale of over $200 million in illegal drugs and other illicit goods.

"Make no mistake: Ulbricht was a drug dealer and criminal profiteer who exploited people’s addictions and contributed to the deaths of at least six young people," Preet Bharara, then US Attorney for the Southern District of New York, said in a statement at the time.

During his 2015 sentencing, district judge Katherine Forrest described Ulbricht as "no better a person than any other drug dealer."

Forrest had a point. Ulbricht didn't just sell drugs; he disrupted the entire industry. He wasn't your neighborhood dealer; he was the Uber of plugs, singlehandedly creating an entirely new model that directly connected drug buyers with drug sellers — and taking a handsome cut in the process. In terms of long-term impact, Ulbricht probably did more to reshape the global drug trade than Pablo Escobar.

It's not that drug crimes actually deserve draconian punishments, but Trump's pardon is a painfully obvious attempt to curry favor with the crypto industry — which he also once reviled, before gaining its financial support — and it flatly contradicts his earlier promise to literally punish drug dealers with death.

The pardon highlights Trump's fragile moral compass and well-documented willingness to abandon his stance on a given matter when presented with an opportunity to cash in.

Ulbricht's life sentence has long been the subject of libertarian outrage, as Al Jazeera reports, with the crypto community arguing he was unfairly prosecuted as an example to others.

"I was doing life without parole, and I was locked up for more than 11 years but he let me out," he said in a video message posted on X-formerly-Twitter. "I’m a free man now. So let it be known that Donald Trump is a man of his word."

Trump's decision to pardon him, however, had plenty of critics.

"Pardoning drug trafficking kingpins is a slap in the face to the families who’ve lost loved ones to his crimes," Democratic senator Catherine Cortez Masto wrote in a tweet. "Donald Trump should have to explain to them how any of this makes America safer. It's an outrage."

Meanwhile, Trump signed an executive order earlier this week looking to reinstate the death penalty. Whether the Supreme Court will green-light the decision remains to be seen.

During a campaign speech in September, Trump reiterated his desire to expand the death penalty to those who were convicted of drug trafficking.

"These are terrible, terrible, horrible people who are responsible for death, carnage and crime all over the country,"  he said at the time. "We’re going to be asking everyone who sells drugs, gets caught, to receive the death penalty for their heinous acts."

Yet somehow Ulbricht, who created an entire marketplace and new business model for the sale of drugs, has been set free.

More on Silk Road: Silk Road Mastermind Ross Ulbricht Seen Leaving Prison Holding a Small Plant

UnitedHealthcare’s New CEO Announcement Draws a Frenzy of Dark Humor

UnitedHealthcare has just promoted one of its C-Suiters to CEO — and the dark humor is already pouring in.

Six weeks after Brian Thompson was gunned down in the streets of Manhattan, UHC announced that it was naming Tim Noel, who formerly ran the insurance company's Medicare and retirement department, to the chief executive role. Almost as soon as that news dropped, social media lit up with gallows humor about the position's deadly history.

"Imagine getting that promotion?" wrote one X-formerly-Twitter user wrote. "I'm thinking Tim Noel may just be the bravest CEO in all of America."

Others were far more pointed in their welcomes to the longtime UHC executive.

"I don't think it's THAT interesting that Tim Noel, new CEO for UnitedHealth, lives in Minneapolis," a Bluesky user quipped. "Minneapolis is where the UnitedHealth headquarters are located, so it just makes sense that he, and other CEOs, would live there, in Minneapolis, which as everyone on earth knows, is in Minnesota!"

On the Eat the Rich subreddit, meanwhile, the vibe was nothing short of feral.

"Looks scrawny," wrote a user whose display image features suspected Thompson assassin Luigi Mangione dressed like a saint. "Needs fattening up like the last one. Then it’ll be good eating!"

Obviously, nobody is actually calling for any harm to Noel, who has worked at UHC since 2007. As with Thompson's murder, these jokes instead highlight the rampant frustration Americans have with the gatekeepers of their healthcare — a bitter resentment that burst to the forefront of the national conversation after Thompson's assassination last month.

That anger was on full display in other posts and comments accusing Noel of being a "serial killer." Given that he likely ran UHC's Medicare department during at least part of the period when the company nearly tripled its post-acute services denial rate from 8.7 to 22.7 percent for Medicare patients, they might not be that far off the mark, at least technically speaking.

More on Mangione: Americans Flood Chinese App RedNote, Discover Its Users Are Obsessed With Luigi Mangione

Trump Admin Announces Plans to Build Database of Migrant DNA

[Image: a DNA helix trapped behind a barbed wire fence.]

Trump is ringing in his second term with a barrage of executive orders — and many are laying the groundwork for a massive genetic surveillance campaign targeting migrants.

That's according to analysis by award-winning National Security journalist Spencer Ackerman, who writes that "along with the attorney general, the secretary of homeland security will 'fulfill the requirements of the DNA Fingerprint Act of 2005,' according to the 'Securing Our Borders' executive order," referencing one of the numerous presidential actions targeting migrants signed by Trump on his first day back.

"In other words," Ackerman continues, "[the] DHS and the Justice Department will create and manage a migrant DNA database."

Many crucial questions remain: how that database will look, who will have access to it, what data will be collected, and from whom. After all, many actual American citizens lack documentation of their legal status, like the poor and homeless — will their DNA be swept up in wanton collection efforts that trample the privacy rights of citizens and non-citizens alike?

With tech moguls lining up to pitch Trump on dystopian border tech, we can be sure the surveillance effort won't come cheap for American taxpayers.

It'll also almost certainly come with new cruelty. In addition to inevitable family separations, a rise in lost children, and heightened processing times due to missed court hearings, documented and undocumented residents alike are going to be contending with aggressive new efforts at domestic surveillance.

"[The] DHS is empowered to use 'any available technologies and procedures' to adjudicate migrants' 'claimed familiar relationships' with people in the United States," Ackerman's analysis warns. "So this is designed to be not only vastly intrusive beyond the border, but a windfall opportunity for, say, artificial intelligence and biometrics firms."

Ackerman — who was among the Guardian team to win the 2014 Pulitzer for public service journalism for reporting on the NSA spying debacle — has noticed the rhetoric used in Trump's orders mirrors vague national security directives from the days of the War on Terror.

For example, the "Protecting the American People Against Invasion" order claims that "many of these aliens unlawfully within the United States present significant threats to national security and public safety, committing vile and heinous acts against innocent Americans."

"Others are engaged in hostile activities," the mandate continues, "including espionage, economic espionage, and preparations for terror-related activities."

To Ackerman, that last bit is striking, because in this context, "terror-related activities" have not been defined. Vaguely worded presidential decrees like this are crucial in that they allow agencies like the NSA or the DHS to operate with impunity — building the American surveillance state between the ink.

Though their power is increasing under Trump, these surveillance mechanisms are nothing new. Ackerman notes that the measure to harvest migrant DNA seems "reminiscent of the biometrics database created under the Bush administration for Muslim travelers known as NSEERS," a similarly troubling moment in American history which some of Trump's executive orders are predicated on.

More recently, Biden's approach to the immigration crisis was also a decidedly invasive one, thanks in part to Customs and Border Protection's CBP One app, which rolled out in October of 2020. In 2023, that app got a controversial update: a visa-lottery system for hopeful migrants to schedule meetings for processing into the United States.

That app came with a host of privacy concerns, not least of which was the harvesting of applicant biometric and geolocation data for case processing.

Rather than delete that data after an individual has been processed, as the TSA claims it does, the DHS collects it into two federal databases — the Traveler Verification System and the Automated Targeting System. CBP One has since been shut down by Trump, canceling thousands of applicants' appointments and stranding them at the border, but the personal data it collected is likely still being held by the federal government.

It's likewise been reported that, as of 2020, the DHS has already captured data from over 1.5 million immigrants crossing the border in its Combined DNA Index System. That DNA harvesting program is laundered as a law enforcement index — though the collection includes hundreds of thousands of migrants who have only ever been administratively detained, and have never been charged with a crime.

Many immigrants report not being informed of the DNA collection, believing DNA swabs to be medical procedures, despite the DHS' internal guidelines mandating disclosure.

While Trump isn't the only elected official pushing to harvest the DNA of every incoming immigrant, his influence will certainly have the most impact as his nominees shape their agencies to his dystopian image.

More on mass surveillance: Billionaire Drools That "Citizens Will Be on Their Best Behavior" Under Constant AI Surveillance

Trump Gives Elon Musk Access to All Unclassified Data in the US Government

A new executive order appears to grant DOGE leader Elon Musk sweeping access to unclassified data held by US government agencies.

Bait and Switch

The fine print of a sweeping executive order seemingly grants Elon Musk — the wealthiest and arguably most powerful unelected figure in the world — and his associates at the somehow-still-real Department of Government Efficiency (DOGE) access to all unclassified data held by US government agencies, according to Wired.

Since the evening of his inauguration, president Donald Trump has been busy signing a still-growing wave of sweeping executive actions. Among them was the establishment of the unfortunately-named DOGE, which per the order will be tasked with "modernizing Federal technology and software to maximize governmental efficiency and productivity."

When DOGE was announced late last year, it was widely believed that the "department" would operate as a federal advisory committee, a type of consultative group subject to fairly strict transparency rules.

But as flagged by Wired, under the executive order, the Trump Administration didn't create a new federal advisory committee. It instead repurposed the United States Digital Service (USDS), an existing government organization with sweeping access to vast caches of data across government agencies, including the sensitive information of US citizens, as the "United States DOGE Service" — a move that seemingly opens the door for Musk and his operatives to access a massive amount of data without much transparency oversight.

"It's quite a clever way of integrating DOGE into the federal government that I think will work," George Washington University law professor Richard Pierce told Wired, "in the sense of giving it a platform for surveillance and recommendations."

Inside Out

A former USDS employee told Wired that the rebranding of the organization was an "A+ bureaucratic jiu-jitsu move" — and warned of dystopian, surveillance-driven outcomes that access to USDS-held data could foster.

"Is this technical talent going to be pointed toward using data from the federal government to track down opponents?" they told Wired. "To track down particular populations of interest to this administration for the purposes of either targeting them or singling them out or whatever it might end up being?" (That in mind: reporting from NextGov this week revealed that USDS workers are already being re-interviewed for their jobs, in part to gauge their perceived loyalty to the new president.)

As Wired notes, DOGE could still face some headaches regarding the complexities of inter-agency information sharing and the accessing of certain sensitive data, particularly in cases where department members lack certain clearances. Even so, according to experts, our federal government is wading into muddy, unknown waters.

"It could be a bipartisan effort to make government technology work better. It could be an oligarch extracting resources from the government," University of Michigan public policy Don Moynihan told Wired. "We just really don't know."

More on DOGE: DOGE.gov Website Launches With Mangled, AI-Generated American Flag

Scientist Testing Spider-Man-Style Web Shooters He Accidentally Made in Lab

Tufts University biotech researcher Marco Lo Presti accidentally discovered a web-like silk fiber that can be shot at distant objects and stick to them.

With Great Power

Tufts University biotech researcher Marco Lo Presti made an astonishing discovery while investigating how silk and dopamine allow mussels to stick to rocky surfaces.

"While using acetone to clean the glassware of this silk and dopamine substance," he told Wired, "I noticed it was undergoing a transition into a solid format, into a web-looking material, into something that looked like a fiber."

Lo Presti and his colleagues immediately got to work, investigating whether the sticky fibers could be turned into a "remote adhesive."

The result is an astonishingly "Spider-Man"-like silk that can be shot not unlike the superhero's wrist-mounted web shooters, as detailed in a paper published in the journal Advanced Functional Materials last year.

While it won't allow an adult person to swing from skyscraper to skyscraper any time soon, the results speak for themselves. Footage of the team's experiments shows strands of the material being dripped onto a number of objects from several inches above, forming a solid connection in a matter of seconds and allowing each object to be carried away.

The researcher's collaborator, Tufts engineering professor Fiorenzo Omenetto, recalled being caught off guard by the accidental discovery.

"You explore and you play and you sort of connect the dots," he told Wired. "Part of the play that is very underestimated is where you say 'Hey, wait a second, is this like a Spider-Man thing?' And you brush it off at first, but a material that mimics superpowers is always a very, very good thing."

Comes Great Responsibility

Intriguingly, Lo Presti explained that no spider has the ability to "shoot a stream of solution, which turns into a fiber and does the remote capturing of a distant object."

In other words, the discovery appears to be entirely new, despite initially being inspired by nature.

The fibers also have an impressive tensile strength.

"We can now catch an object up to 30 or 35 centimeters away, and lift an object of around 15 to 20 grams," Lo Presti told Wired.

But scaling it up could prove difficult.

"Everybody wants to know if we're going to be able to swing from buildings," Omenetto added, stopping short of hazarding a guess as to when or if that's possible.

"I mean you could probably lift a very heavy object, but that’s one of the big questions — what can you lift? Can you remotely drag something?" he added. "Silk is very, very strong, it’s very tough, it can lift incredible weights but this is silk in its natural form whether it’s from the spider or the silkworm."

More on the silk shooters: Researchers Create Real-Life "Spider-Man" Web-Slinging Tech

There’s Apparently a Huge Financial Problem With Trump’s Massive AI Project

President Donald Trump's behemoth $500 billion AI infrastructure project, dubbed Stargate, may be doomed from the start.

Trump made the sweeping announcement earlier this week, revealing that ChatGPT maker OpenAI, investment company SoftBank, tech giant Oracle, and Abu Dhabi state-run AI fund MGX would initially spend a total of $100 billion on the project, with the eventual goal of reaching half a trillion dollars in just a few years.

But in reality, according to the Financial Times' sources, Stargate may be facing insurmountable financial challenges as it attempts to get off the ground.

"They haven’t figured out the structure, they haven’t figured out the financing, they don’t have the money committed," an unnamed source told the newspaper.

Did Trump put the cart before the horse by making a splashy announcement before the pieces were in place? Critics of the project think it's entirely possible.

The FT's reporting is especially interesting considering this is exactly what multi-hyphenate Elon Musk, a personal enemy of OpenAI CEO Sam Altman, accused OpenAI of earlier this week.

"They don’t actually have the money," the mercurial CEO  tweeted just hours after the project was announced.

"SoftBank has well under $10B secured," Musk wrote in a followup an hour later. "I have that on good authority."

It's difficult to gauge the legitimacy of either Musk's or the FT's claims. Could Stargate actually collapse under its own weight, stumbling at the starting line without the necessary funds to build out data centers in the United States?

It's true that SoftBank has had a troubled history with past investments, posting a record $32 billion loss for its Vision Fund in 2023. Many companies the lender has backed have shuttered or filed for bankruptcy, with WeWork being a particularly notable example.

Musk certainly has plenty to gain from voicing his doubts, having founded his own AI company that was passed over by the Stargate program. He's had an extremely strained relationship with Altman for years.

OpenAI and SoftBank are each expected to commit $19 billion to fund Stargate, as The Information reported on Wednesday. Effectively, each company will hold a 40 percent interest in the project.

The companies behind Stargate claim that work has already begun. Construction began for an Oracle-funded data center in Abilene, Texas, in June 2023, well over a year before Stargate was announced.

But other than that, details about Stargate are notably thin.

"There’s a real intent to do this, but the details haven’t been fleshed out," an unnamed source told the FT. "People want to do splashy things in the first week of Trump being in office."

More on Stargate: Trump's $500 Billion AI Deal Includes Funding by UAE Royal Family Linked to Astonishing Number of Scandals, Including Human Torture

A Mother Says an AI Startup’s Chatbot Drove Her Son to Suicide. Its Response: the First Amendment Protects "Speech Allegedly Resulting in Suicide"

Character.AI says it's protected against liability for "allegedly harmful speech, including speech allegedly resulting in suicide," thanks to the First Amendment.

Content warning: this story discusses suicide, self-harm, sexual abuse, eating disorders and other disturbing topics.

In October of last year, a Google-backed startup called Character.AI was hit by a lawsuit making an eyebrow-raising claim: that one of its chatbots had driven a 14-year-old high school student to suicide.

As Futurism's reporting found afterward, the behavior of Character.AI's chatbots can indeed be deeply alarming — and clearly inappropriate for underage users — in ways that both corroborate and augment the suit's concerns. Among others, we found chatbots on the service designed to roleplay scenarios of suicidal ideation, self-harm, school shootings, and child sexual abuse, as well as to encourage eating disorders. (The company has responded to our reporting piecemeal, by taking down individual bots we flagged, but it's still trivially easy to find nauseating content on its platform.)

Now, Character.AI — which received a $2.7 billion cash injection from tech giant Google last year — has responded to the suit, brought by the boy's mother, in a motion to dismiss. Its defense? Basically, that the First Amendment protects it against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."

In TechCrunch's analysis, the motion to dismiss may not be successful, but it likely provides a glimpse of Character.AI's planned defense (it's now facing an additional suit, brought by more parents who say their children were harmed by interactions with the site's bots).

Essentially, Character.AI's legal team is saying that holding it accountable for the actions of its chatbots would restrict its users' right to free speech — a claim that it connects to prior attempts to crack down on other controversial media like violent video games and music.

"Like earlier dismissed suits about music, movies, television, and video games," reads the motion, the case "squarely alleges that a user was harmed by speech and seeks sweeping relief that would restrict the public’s right to receive protected speech."

Of course, there are key differences that the court will have to contend with. The output of Character.AI's bots isn't a finite work created by human artists, like Grand Theft Auto or an album by Judas Priest, both of which have been targets of legal action in the past. Instead, it's an AI system that users engage to produce a limitless variety of conversations.

A Grand Theft Auto game might contain reprehensible material, in other words, but it was created by human artists and developers to express an artistic vision; a service like Character.AI is a statistical model that can output more or less anything based on its training data, far outside the control of its human creators.

In a bigger sense, the motion illustrates a tension for AI outfits like Character.AI: unless the AI industry can find a way to reliably control its tech — a quest that's so far eluded even its most powerful players — some of the interactions users have with its products are going to be abhorrent, either by the users' design or when the chatbots inevitably go off the rails.

After all, Character.AI has made changes in response to the lawsuits and our reporting, by pulling down offensive chatbots and tweaking its tech in an effort to serve less objectionable material to underage users.

So while it's actively taking steps to get its sometimes-unconscionable AI under control, it's also saying that any legal attempts to curtail its tech fall afoul of the First Amendment.

It's worth asking where the line actually falls. A pedophile convicted of sex crimes against children can't use the excuse that they were simply exercising their right to free speech; Character.AI is actively hosting chatbots designed to prey on users who say they're underage. At some point, the law presumably has to step in.

Add it all up, and the company is walking a delicate line: actively catering to underage users — and publicly expressing concern for their wellbeing — while vociferously fighting any legal attempt to regulate its AI's behavior toward them.

"C.AI cares deeply about the wellbeing of its users and extends its sincerest sympathies to Plaintiff for the tragic death of her son," reads the motion. "But the relief Plaintiff seeks would impose liability for expressive content and violate the rights of millions of C.AI users to engage in and receive protected speech."

More on Character.AI: Embattled Character.AI Hiring Trust and Safety Staff

Strange Signal Coming From Dead Galaxy, Scientists Say

Astronomers say they've detected a mysterious type of signal known as a fast radio burst coming from an ancient, dead galaxy.

Radio Star

Astronomers say they've detected a mysterious type of signal known as a fast radio burst coming from an ancient, dead galaxy billions of light years away. Figuratively speaking, it makes for one hell of a sign of life. 

The findings, documented in two studies published in The Astrophysical Journal Letters, upend the long-held belief that FRBs — extremely powerful pulses of energy — originate exclusively from star-forming regions of space, as dead galaxies no longer support the birth of new stars.

Adding to the seeming improbability of the FRB's origin, the researchers believe that the signal's source came from the furthermost outskirts of the galaxy, about 130,000 light years from its center, with only moribund stars at the end of their stellar evolution for company.

"This is both surprising and exciting, as FRBs are expected to originate inside galaxies, often in star-forming regions," said Vishwangi Shah, lead author of one of the studies and an astronomer at McGill University, said in a statement about the work"The location of this FRB so far outside its host galaxy raises questions as to how such energetic events can occur in regions where no new stars are forming."

Quick and the Dead

Though they're often only milliseconds in duration, FRBs are so powerful at their source that a single pulse emits more energy than our Sun does in an entire year. 

What could cause such staggering outbursts? Astronomers have speculated that they originate from magnetars, a type of collapsed, extremely dense stellar object called a neutron star that maintains an unfathomably potent magnetic field, perhaps trillions of times stronger than Earth's.

But that theory is now being challenged by this latest FRB, designated FRB 20240209A, because there are no young stars in the 11.3-billion-year-old galaxy that could form magnetars. Only extremely massive stars, which have short lifespans as a consequence of their size and thus would need to have been recently formed, possess enough mass to collapse into neutron stars in the first place.

Outcasts Together

FRB 20240209A isn't the first to be found in such a remote location. In 2022, astronomers detected another signal originating from the outskirts of its galaxy, Messier 81, where no active star formation was taking place.

"That event single-handedly halted the conventional train of thought and made us explore other progenitor scenarios for FRBs," said Wen-fai Fong, a coauthor of both studies and an astrophysicist at Northwestern University, in the statement. "Since then, no FRB had been seen like it, leading us to believe it was a one-off discovery — until now."

Crucially, the M81 FRB was found in a dense conglomeration of stars called a globular cluster. Given their similar circumstances, the astronomers believe that FRB 20240209A could be residing in a globular cluster, too. To confirm this hunch, they hope to use the James Webb Space Telescope to image the region of space around the FRB's origin.

More on space: Scientists Intrigued by Planet With Long Tail

Huge Study Finds Constellation of Health Benefits for Ozempic Beyond Weight Loss

In a ginormous new study, researchers have begun mapping the manifold health benefits of drugs like Ozempic and Wegovy beyond weight loss. 

Published in the journal Nature Medicine, this new study led by Ziyad Al-Aly of the Veterans Affairs health system in St. Louis tracked the outcomes of millions of diabetes patients over a period of 3.5 years.

Of those, over 215,000 had been prescribed a glucagon-like peptide-1 (GLP-1) receptor agonist — the class of drugs that includes Ozempic, Wegovy, Mounjaro, Zepbound, and others — and 1.7 million were on another form of blood sugar-lowering medicine.

Looking at other disorders in the data ranging from Parkinson's disease and Alzheimer's to kidney disease and opiate addiction, Al-Aly and his team found that those who were on GLP-1 medications saw significant improvement across a staggering range of health concerns — and far beyond anything clearly linked to weight or blood sugar.

Though many studies have found that these blockbuster drugs seem to be beneficial for specific disorders, "no one had comprehensively investigated the effectiveness and risks of GLP-1 receptor agonists across all possible health outcomes," the physician-scientist told Nature.

In particular, Al-Aly said that the drugs' impact on addiction disorders "stood out" to him, with 13 percent of the GLP-1 cohort who had issues with addiction seeing improvement — a finding that dovetails with other studies about these drugs and their effect on addiction.

Other apparent benefits were even harder to make sense of. Al-Aly and his team also discovered that psychotic disorder risk was lowered by 18 percent for the GLP-1 cohort, and the Alzheimer's risk was cut by 12 percent.

"Interestingly, GLP-1RA drugs act on receptors that are expressed in brain areas involved in impulse control, reward and addiction — potentially explaining their effectiveness in curbing appetite and addiction disorders," Al-Aly said in a statement published by the University of Washington, which was also involved in the study. "These drugs also reduce inflammation in the brain and result in weight loss; both these factors may improve brain health and explain the reduced risk of conditions like Alzheimer’s disease and dementia."

While those findings are indeed incredible, the researchers also found that other issues seemed to be exacerbated by taking GLP-1s. Along with an 11 percent increase in arthritis risk, the team found a whopping 146 percent increase in cases of pancreatitis — another discovery that complements prior research into the drugs' dark side.

Though that figure is pretty jarring, Al-Aly seemed to take it in stride.

"Given the drugs’ newness and skyrocketing popularity, it is important to systematically examine their effects on all body systems — leaving no stone unturned — to understand what they do and what they don’t do," he said in the UWash press release.

By looking so deeply into the drugs, these scientists are, as Al-Aly puts it, drawing a "comprehensive atlas mapping the associations" of GLP-1 drugs that looks into all of their effects on the body — an important quest as they continue to rise in popularity and usage.

More on GLP-1s: Woman Annoyed When She Gets on Wegovy and It Does Nothing

China Is Hosting The World’s First Foot Race Between Humans and Robots

In the race to build the best humanoid robots, China is literally ahead of the pack as it prepares for the world's first human-robot race.

Track Stars

In the race to build the best humanoid robots, China is quite literally ahead of the pack.

As the South China Morning Post reports, the Beijing Economic-Technological Development Area — or E-Town — is hosting 12,000 humans and humanoid robots from more than 20 companies in a half-marathon race this April.

The race will be roughly 13 miles, and robotic competitors cannot have wheels and must stand between 1.5 and 6.5 feet tall. In a statement, E-Town added that "competing robots must have a humanoid appearance and mechanical structure capable of bipedal walking or running movements."

Though this seems to be the world's first race explicitly pitting bipedal humans against robots, it won't be the first time a humanoid robot has taken part in a Chinese athletic competition.

Last fall, a bipedal robot called Tiangong — not to be confused with China's space station of the same name, which translates to "heavenly palace" — jumped into Beijing's Yizhuang half-marathon towards its end. Though it only ran about 100 meters and wasn't particularly fast, the robot got a medal because it crossed the finish line (a participation trophy if we've ever seen one).

Dog Gone It

Just a few weeks after Tiangong's surprise marathon debut, the RAIBO2 robodog competed in a full marathon in South Korea. Though the adorable quadruped was significantly faster than Tiangong, it still took nearly four hours and 20 minutes to run the 26.2-mile race — nearly double the time of the human winner, who clocked in at around two hours and 36 minutes.

Because it's neither Chinese nor bipedal, RAIBO2 will unfortunately not be involved in the E-Town half-marathon. According to the state-run Xinhua news agency, however, Tiangong will be one of the participants and will purportedly be capable of running 10 kilometers (6.2 miles) per hour by the time the race rolls around.

That same agency also reported that later this year, in August, Beijing will be hosting an all-robot sporting event that not only features track and field races, but also football — unclear whether the outlet means the American or European definition — and "comprehensive skills and other application scenarios."

Though we can't know how fast these running robots will be until the race actually happens, we can't wait to watch.

More on unique robots: Inventor Builds Six Robot Copies of Himself, Uses One to Give Speeches and Take Questions From Audience

Scientists Find Signs of Life Deep Inside the Earth

A groundbreaking new study of microbes underground is challenging everything we thought we knew about extreme environments.

Little Friends

We've heard of underground parties, but this is ridiculous. A new study by an international team of researchers has uncovered troves of microbes thriving in the hostile subsurface of the earth, far from the life-giving energy of the sun.

The findings, published in the journal Science Advances, are the culmination of eight years of first-of-its-kind research comparing over 1,400 datasets from microbiomes across the world.

Chief among the findings is that the dank cracks of the planet's crust could be home to over half of microbial cells on Earth, challenging our previous — and logical — understanding that life gets less diverse and abundant the farther it gets from the sun.

"It’s commonly assumed that the deeper you go below the Earth’s surface, the less energy is available, and the lower is the number of cells that can survive," said lead author Emil Ruff, a microbial ecologist at the famed Woods Hole Marine Biological Laboratory, in a news release about the research. "Whereas the more energy present, the more diversity can be generated and maintained — as in tropical forests or coral reefs, where there’s lots of sun and warmth."

"But we show that in some subsurface environments," he added, "the diversity can easily rival, if not exceed, diversity at the surface."

Breakthrough

That comparable diversity is the key to the group's breakthrough — the researchers wrote in their paper that "species richness and evenness in many subsurface environments rival those in surface environments," in what the team is calling a previously unknown "universal ecological principle."

The study is notable not only for its findings, but also for its methodology.

Prior to the team's work, which began in 2016, there was little concerted effort to standardize microbial datasets from around the globe, due to differences in collection and analysis standards. That changed thanks to a survey led by Bay Paul Center molecular biologist Mitchell Sogin — also a coauthor of the new paper — who organized a drive to standardize microbial DNA datasets from researchers around the world.

The team's comparative work is built on these standardized datasets, allowing them to compare, say, a sample sourced by a team at the University of Utah with one from the Universidad de Valladolid in Spain.

It's a captivating tale of international collaboration and deep-diving research — paving the way for a fascinating and previously overlooked avenue of research.

More on microorganisms: Researchers Say "Conan the Bacterium" Could Be Hidden Beneath Mars’ Surface

You’ll Never Guess What That Millionaire Biohacker Is Measuring on His Teenage Son

Amid his expensive efforts to live forever, biohacker Bryan Johnson is now comparing something really weird with his 19-year-old son.

Amid his bizarre and expensive efforts to reverse aging or gain eternal life, tech founder-turned-biohacker Bryan Johnson is now — we wish we were making this up — comparing his erections with those of his 19-year-old son.

In a post on X-formerly-Twitter, the 47-year-old longevity enthusiast presented what he refers to as "nighttime erection data" for himself and his son, whose name is Talmage.

As the Braintree founder explained, the younger Johnson's erectile "duration" was two minutes longer than his own. If the confusingly-marked dashboard shared in the post is to be believed, each man had roughly three hours' worth of erections per night, and the son had exactly one more "erection episode" than the five his father experienced.

"Raise children to stand tall, be firm, and be upright," Johnson added, in case readers weren't yet feeling quite enough secondhand embarrassment. He also added in another post that his son is his "best friend," which would be sweet in almost any different context but seems awfully weird in this one.

Unfortunately, Johnson's five nightly boners seem to suggest that his single-minded quest to return his penis to its youth — a quest that has also involved having his long-suffering genitals electrocuted and shot up with Botox — is working. As studies have shown, the average 20-to-26-year-old man also has five erections per night, and similar research suggests that nocturnal erections decrease progressively with age, depending on various health factors and quality of sleep.

Per the dick dashboard data, both father and son have an "AndroAge" of 22. The elder Johnson may even have the edge over his son on "average erection quality," whatever that means, with his being scored at a 94 while the younger's was a mere 90. The data also indicates that the 47-year-old is getting more "efficient" slumber than his 19-year-old son, likely due to the elder's extremely strict sleeping habits that see him in bed by 8:30 PM with little "arousal" beforehand.

As you may recall, Talmage Johnson last made waves nearly two years ago when, at age 17, he gave his teen blood to his father in hopes of passing along its regenerative powers, while the elder Johnson donated some of his own blood to his own dad. Jarringly similar to the "blood boy" plot line on HBO's "Silicon Valley," that gruesome scenario brought Johnson — an early investor in Futurism who hasn't been involved with the site for years — into the public eye.

Back in 2023, biochemist Charles Brenner of the City of Hope National Medical Center in Los Angeles told Bloomberg that the practice of young blood transfusions is, to his mind, "gross, evidence-free, and relatively dangerous."

"The people going into these [longevity] clinics who want anti-aging infusions basically have an anxiety problem," Brenner elaborated. "They have an anxiety problem about their mortality."

With all we've seen from Johnson over the past few years, we can't say we disagree — though "going meat for meat" with his own son, as one X user put it, really takes the cake.

More on Johnson: Tech Guy Doing Bizarre Things to Live Forever Says He Now Suffers From Endless Hunger

The post You’ll Never Guess What That Millionaire Biohacker Is Measuring on His Teenage Son appeared first on Futurism.

Paralyzed Man Can Now Fly Drone Using Brain Implant

A groundbreaking brain implant has allowed a paralyzed man to control a virtual drone and fly it through an obstacle course.

A paralyzed man has flown a virtual drone through an obstacle course using nothing more than a groundbreaking brain implant.

The feat, as detailed in a study published in the journal Nature Medicine, was achieved by mapping virtual inputs to signals from a region of the brain that controls the fingers, the left precentral gyrus, which is where the brain-computer interface (BCI) was implanted.

All the paralyzed patient had to do to exert control was think about moving the digits of his hand — bringing a whole new meaning, we must report, to the expression "not lifting a finger."

"This is a greater degree of functionality than anything previously based on finger movements," said study lead author Matthew Willsey, an assistant professor of neurosurgery and biomedical engineering at the University of Michigan, in a statement about the work.

Key to the BCI's success, the researchers argue, was the fact that it was a brain implant, and not a noninvasive alternative like a brain cap. The researchers believe that placing electrodes as close as possible to neurons is essential to achieve highly functional motor control.

In this case, a total of 192 electrodes were surgically placed in the patient's brain, connecting to a computer. 

From there, a type of AI called a feed-forward neural network interprets the signals, assigning them to different finger movements. The AI system learned to distinguish the signals during a training stage in which the patient tried to perform motions with his fingers — in his mind, to clarify — in sync with a moving virtual hand.

In total, the system provides four degrees of freedom: forwards and backwards, left and right, up and down, and horizontal rotation. Plenty to fly a drone or take control of any virtual environment.
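
The article doesn't spell out the decoder's architecture beyond "feed-forward neural network," so here is a minimal sketch of the general idea, assuming one feature per electrode and a single hidden layer. The layer sizes, activation function, and random weights are placeholders for illustration, not the parameters used in the Nature Medicine study.

```python
import numpy as np

# Sketch of a feed-forward decoder: neural features from 192 electrodes in,
# four control dimensions out (forward/back, left/right, up/down, rotation).
# All sizes and weights below are illustrative assumptions.

rng = np.random.default_rng(0)

N_ELECTRODES = 192   # one simplified feature per implanted electrode
N_HIDDEN = 64        # hypothetical hidden-layer width
N_OUTPUTS = 4        # forward/back, left/right, up/down, yaw rotation

# Randomly initialized weights; in a real system these would be fit
# during a calibration/training stage rather than left random.
W1 = rng.normal(0, 0.1, (N_ELECTRODES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_OUTPUTS))
b2 = np.zeros(N_OUTPUTS)

def decode(features: np.ndarray) -> np.ndarray:
    """Map one time-bin of neural features to four continuous control commands."""
    hidden = np.tanh(features @ W1 + b1)
    return hidden @ W2 + b2  # velocity-style signals for the virtual drone

# One simulated time-bin of neural activity
velocity_command = decode(rng.normal(size=N_ELECTRODES))
print(velocity_command)      # e.g., [forward, lateral, vertical, rotation]
```

Presumably, the weights in the real decoder are fit during the training stage described above, where the patient's attempted finger movements are paired with the motions of the virtual hand.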

The researchers hope that their technique will open up vast recreational opportunities for people with paralysis and other severe disabilities — like being able to play multiplayer video games, a feat already achieved by a Neuralink patient.

"People tend to focus on restoration of the sorts of functions that are basic necessities — eating, dressing, mobility — and those are all important," co-author Jamie Henderson, a Stanford professor of neurosurgery, said in the statement. "But oftentimes, other equally important aspects of life get short shrift, like recreation or connection with peers. People want to play games and interact with their friends."

Willsey's patient, a 69-year-old man who became quadriplegic after sustaining a devastating spine injury, has a passion for flying. With any luck, he may be able to play a full-blown flight simulator — or maybe even control a real drone — in the near future.

More on brain implants: First Neuralink Patient Using It to Learn New Languages

The post Paralyzed Man Can Now Fly Drone Using Brain Implant appeared first on Futurism.

OpenAI’s Sora Is Generating Videos of Real People, Including This Unintentionally Demonic Version of Pokimane

A creepy Sora output of the streamer Pokimane shows that despite guardrails, the video generator is good at depicting real-life people.

OpenAI has long refused to say whether its Sora video generator was trained on YouTube content — but its propensity for generating videos that look a whole lot like real gaming streamers suggests it did.

When TechCrunch put Sora to the test, its reporters found not only that it could generate videos strikingly similar to real-life gameplay of "Super Mario Bros" and "Call of Duty," but also that it spat out what looked very much like the streamers Raúl Álvarez "Auronplay" Genes and Imane "Pokimane" Anys.

Though OpenAI claims it has guardrails on the way it depicts real people, it doesn't seem that reporters had any trouble getting it to spit out a video of Anys — though she did end up looking pretty monstrous, with the uncannily exaggerated features distinctive to AI depictions.

Using the prompt "pokimane twitch playthrough watch game live stream subscribe," TechCrunch got Sora to output a video that strongly resembles the Twitch streamer. Viewed in profile, the woman in the screenshot looks at a screen in front of her while wearing light-up over-ear headphones and a giant, creepy grin that would be at home in the "Smile" horror franchise.

Unfortunately, we are currently unable to replicate these outputs for ourselves because OpenAI has suspended new Sora signups due to the influx of traffic following its release earlier in the week.

All the same, this demonic rendition of a popular streamer not only seems to offer further evidence that OpenAI is training its models on creators' content without consent, but also that Sora's guardrails don't sufficiently prevent it from depicting real people.

Along with contacting OpenAI about this apparent overriding of the company's guardrails, we've reached out to Anys' representation to ask if she was aware that Sora is depicting her.

In January 2023, shortly after OpenAI released ChatGPT, Pokimane had a terrifying "eureka" moment mid-stream about the future of AI in her line of work.

"What if someday we have streamers that evolve from ChatGPT?" she pondered. "It’s kind of freaky, it’s kind of scary, to be honest, but it had me think, you can basically have a conversation with this thing."

Pointing to the world of VTubers, or streamers who use computer-generated avatars that they voice and control behind the scenes, Anys predicted that someday, fully-generative streamers may well take over the industry — though at that point, she didn't think it would be that sophisticated.

"I do feel like if they make one right now it’s probably not that advanced," she said, "but someday it’ll be very advanced and very scary."

While AI streamers haven't yet arrived, it appears very much like real streamers' content has made its way into other generative AI models — so that future isn't far off.

More on Sora: OpenAI’s Super-Hyped Sora Goes Absolutely Freakshow If You Ask It to Generate Gymnastics Videos

The post OpenAI's Sora Is Generating Videos of Real People, Including This Unintentionally Demonic Version of Pokimane appeared first on Futurism.

Trump’s New NASA Head Announces Plans to Send Troops to Space

President-elect Donald Trump's pick for NASA administrator, billionaire SpaceX tourist Jared Isaacman, wants to send troops into space.

Space Soldiers

President-elect Donald Trump's pick for NASA administrator, billionaire SpaceX tourist Jared Isaacman, wants to send soldiers into space.

During the Space Force Association’s Spacepower 2024 conference in Orlando, Florida, Isaacman argued that troops in space are "absolutely inevitable."

"If Americans are in low Earth orbit, there’s going to need to be people watching out for them," he said, as quoted by the Independent.

"This is the trajectory that humankind is going to follow," he added. "America is going to lead it and we’re going to need guardians there on the high ground looking out for us."

Star Wars Kid

Isaacman's comments are eyebrow-raising for a number of reasons. Do US astronauts really need armed bodyguards in space? What exactly will these space troops do once they reach space? Will these troops be Space Force "Guardians" — who aren't trained to be astronauts — or will the Pentagon send troops from a different military branch?

Besides, where would they stay? With the International Space Station set to be retired in 2030, the Pentagon will have a hard time finding accommodations for armed forces in orbit.

Perhaps unsurprisingly, Isaacman had few details to share regarding his plans to send troops into space, let alone how much such an initiative would cost. He did hint at the possibility of sending soldiers into space around the time NASA hopes to settle on the surface of the Moon, according to the Independent.

Isaacman also said he's hoping to turn outer space into an economic opportunity.

"Space holds unparalleled potential for breakthroughs in manufacturing, biotechnology, mining, and perhaps even pathways to new sources of energy," he told audiences during the conference. "There will inevitably be a thriving space economy — one that will create opportunities for countless people to live and work in space."

The tech entrepreneur has been to space twice over the last three years, both times on board SpaceX's Crew Dragon spacecraft.

But given his new desk job in Washington, DC, Isaacman may have to give up on future opportunities to visit space as part of the Polaris program he organized.

"The future of the Polaris program is a little bit of a question mark at the moment," Isaacman admitted at the event, as quoted by Reuters. "It may wind up on hold for a little bit."

More on Isaacman: The New Head of NASA Had an Interesting Disagreement with the Space Agency

The post Trump's New NASA Head Announces Plans to Send Troops to Space appeared first on Futurism.

UnitedHealth Is Asking Journalists to Remove Names and Photos of Its CEO From Published Work

In the wake of Brian Thompson's murder, UnitedHealth Group is asking journalists to remove or obscure its executives' names and faces from published work.

In the wake of UnitedHealthcare CEO Brian Thompson's murder last week, the insurer's parent company is now asking journalists to remove the names and faces of its remaining executives from their coverage.

After Futurism published a blog about "wanted" posters appearing in New York City that featured the names and faces of the CEOs of UHC's owner UnitedHealth Group and its prescription middleman Optum Rx, a spokesperson for the parent company reached out to ask if we would adjust our coverage to "leave out any names and images of our executives' identities," citing "safety concerns."

That original piece didn't include either CEO's name in its text, but the header image accompanying the article did show screenshots of a TikTok video showing the posters that had been spotted around Manhattan, which featured the execs' faces and names.

During these exchanges, the spokesperson repeatedly refused to say whether any specific and credible threats had been made to the people on the posters.

Out of an abundance of caution, we did decide to edit out the names and faces from the image.

But the request highlights the telling dynamics of the murder that have seized the attention of the American public for over a week now. While everyday people struggle to get the healthcare they need with no support — and frequently die during the process — the executives overseeing the system have operatives working behind the scenes to control the dissemination of information that makes them uncomfortable.

After all, these are business leaders who are paid immense sums to be public figures, and whose identities are listed on Wikipedia and business publications — not to mention these insurers' own websites, until they abruptly pulled them down in the wake of the slaying.

There's also something unsettling about the rush to decry the murder and censor information around other healthcare executives when children are killed by gun violence every week, with little reaction from lawmakers and elites beyond a collective shrug.

Per the Gun Violence Archive, a nonprofit that tracks firearm violence, there have been at least five mass shootings since Thompson was killed on December 4. There have also been two ongoing stories about children shooting and killing family members — one in which a seven-year-old accidentally killed his two-year-old brother, and another involving a toddler who shot his 22-year-old mother with her boyfriend's gun after discovering it lying around.

When anybody is killed with a firearm in the United States, whether they're a CEO or a young mother, it's a tragedy. But only one of those horrors activates a behind-the-scenes effort to protect future victims.

More on the UHC shooting: Americans Point Out That UnitedHealthcare Tried to Kill Them First

The post UnitedHealth Is Asking Journalists to Remove Names and Photos of Its CEO From Published Work appeared first on Futurism.
