OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity

A former OpenAI researcher became so convinced that the technology would usher in doom for humanity that he left the company and called it out.

Getting Warner

After former and current OpenAI employees released an open letter claiming they're being silenced from raising safety concerns, one of the letter's signatories made an even more terrifying prediction: that the odds AI will either destroy or catastrophically harm humankind are greater than a coin flip.

In an interview with The New York Times, former OpenAI governance researcher Daniel Kokotajlo accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are so enthralled with its possibilities.

"OpenAI is really excited about building AGI," Kokotajlo said, "and they are recklessly racing to be the first there."

Kokotajlo's spiciest claim to the newspaper, though, was that the chance AI will wreck humanity is around 70 percent — odds you wouldn't accept for any major life event, but that OpenAI and its ilk are barreling ahead with anyway.

MF Doom

The term "p(doom)," which is AI-speak for the probability that AI will usher in doom for humankind, is the subject of constant controversy in the machine learning world.

The 31-year-old Kokotajlo told the NYT that after he joined OpenAI in 2022 and was asked to forecast the technology's progress, he became convinced not only that the industry would achieve AGI by the year 2027, but that there was a great probability that it would catastrophically harm or even destroy humanity.

As noted in the open letter, Kokotajlo and his comrades — whose ranks include former and current employees at Google DeepMind and Anthropic, as well as Geoffrey Hinton, the so-called "Godfather of AI" who left Google last year over similar concerns — are asserting their "right to warn" the public about the risks posed by AI.

Kokotajlo became so convinced that AI posed massive risks to humanity that eventually, he personally urged OpenAI CEO Sam Altman that the company needed to "pivot to safety" and spend more time implementing guardrails to rein in the technology rather than continuing to make it smarter.

Altman, per the former employee's recounting, seemed to agree with him at the time, but the commitment eventually came to feel like lip service.

Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had "lost confidence that OpenAI will behave responsibly" as it continues trying to build near-human-level AI.

"The world isn’t ready, and we aren’t ready," he wrote in his email, which was shared with the NYT. "And I’m concerned we are rushing forward regardless and rationalizing our actions."

Between the big-name exits and these sorts of terrifying predictions, the latest news out of OpenAI has been grim — and it's hard to see it getting any sunnier moving forward.

"We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," the company said in a statement after the publication of this piece. "We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world."

"This is also why we have avenues for employees to express their concerns including an anonymous integrity hotline and a Safety and Security Committee led by members of our board and safety leaders from the company," the statement continued.

More on OpenAI: Sam Altman Replaces OpenAI's Fired Safety Team With Himself and His Cronies

The post OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity appeared first on Futurism.


Scientists Find That a Tiny Proportion of People Spread Almost All the Fake News, and They Turn Out to Be Exactly Who You’d Expect

A tiny cohort of "supersharers" is responsible for spreading the overwhelming majority of fake news on social media, a new study has found.

Naomi Wolf Pipeline

A new study shows that a minuscule subset of "supersharers" spread the overwhelming majority of fake news on social media during the 2020 election cycle. The average supersharer profile, according to the research? An older, white, conservative, and incredibly online woman in a red state. Cue the gasp!

The study, published this week in the journal Science, analyzed data from the accounts of 660,000 verifiably real, US-based voters on the platform X, formerly known as Twitter.

Of these hundreds of thousands of American netizens, the researchers — a team comprising American and Israeli scientists — were able to determine that only about 2,000 users were responsible for sharing a whopping 80 percent of misinformation that spread during the 2020 election.

When the researchers examined the voter registration information attributed to the supersharers, a clear pattern emerged: they were disproportionately likely to be middle-aged-to-older white women with an average age of 58; they were also primarily Republican and lived in conservative states like Florida, Texas, and Arizona.

These users aren't just active, either. Per the journal's writeup, more than one in every 20 X users examined for the study followed these accounts, meaning that these supersharers are punching way above their weight in terms of reach. (The study builds on an earlier 2019 study from many of the same researchers, which found a similar supersharer pattern when analyzing the 2016 election cycle.)

In a way, they could be likened to fake news influencers. Popular conspiracy websites like Infowars and Gateway Pundit publish fake news, which then makes its way to supersharers, who distribute it to the social media masses.

Final Boss

Though the researchers expected to find that the supersharers' many tweets were somehow automated, there was no clear timing pattern or other indicator suggesting that was the case. Instead, they found the opposite: that these folks are fully plugged into the misinformation IV, mainlining fake news and manually clicking retweet over, and over, and over again.

"That was a big surprise," study coauthor Briony Swire-Thompson, a psychologist at Northeastern University, told Science. "They are literally sitting at their computer pressing retweet."

To that end, it's also unlikely that the supersharing cohort in question was part of a coordinated disinformation effort. On the contrary, according to the researchers, these users represent a caustic breakdown in the way online fake news is created, shared, and consumed by a large faction of American voters. And though this study focused on the 2020 election, as we all go kicking and screaming into November 2024, it's important to remember that not everyone exists in the same digital reality.

"It does not seem like supersharing is a one-off attempt to influence elections by tech-savvy individuals," Nir Grinberg, study co-author and computational social scientist at Israel's Ben-Gurion University of the Negev, told Science, "but rather a longer-term corrosive socio-technical process that contaminates the information ecosystem for some part of society."

More on fake news: Police Say AI-Generated Article about Local Murder Is "Entirely" Made Up


Scientists Find Plastic-Eating Fungus Feasting on Great Pacific Garbage Patch

Marine scientists discovered an ocean-borne fungus chomping through plastic trash in the Great Pacific Garbage Patch.

Chomp Chomp

Does nature have to do everything itself?

An international cohort of marine scientists discovered an ocean-borne fungus chomping through plastic trash suspended in the Great Pacific Garbage Patch, as detailed in a new study published in the journal Science of the Total Environment.

Dubbed Parengyodontium album, the fungus was discovered among the thin layers of other microbes that live in and around the floating plastic pile in the North Pacific.

According to the study, it's the fourth known marine fungus capable of consuming and breaking down plastic waste. Researchers found that P. album was specifically able to break down UV-exposed carbon-based polyethylene, which is the type of plastic most commonly used to make consumer products, like water bottles and grocery bags — and the most pervasive form of plastic waste that pollutes Earth's oceans.

"It was already known that UV light breaks down plastic by itself mechanically," said study lead author Annika Vaksmaa, a marine biologist and biogeochemist at the Royal Netherlands Institute for Sea Research (NIOZ), in a statement, "but our results show that it also facilitates the biological plastic breakdown by marine fungi."

Don't Get Carried Away

Before you get ahead of yourself: no, this discovery doesn't mean that you should start consuming single-use plastics with abandon. Our oceans are overrun with destructive plastic pollutants, and refraining from plastic use as much as possible is still our best bet at keeping plastic from plugging up the Earth's vital — though fragile — oceans with animal- and environment-harming garbage.

Mitigating and removing the plastic that's already clogging Earth's waterways is still an important goal. But doing so unfortunately isn't quite as simple as scooping it out of the ocean en masse. Trawling for plastic with large nets can disturb marine life, and efforts to do so are costly and often wasteful themselves.

So in the fight to find a way to reduce ocean plastic, finding a new fungus capable of speeding up the plastic degradation process is an exciting turn. But it's not a cure-all. According to the research, lab-grown P. album broke down UV-treated plastic at a rate of roughly 0.05 percent per day over a nine-day observation period. That isn't nothing, but at that pace it would take the fungus a very long time to get through the entirety of the Great Pacific Garbage Patch, let alone the millions of metric tons of plastic that enter the ocean every year.
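
That 0.05-percent-per-day figure can be put in perspective with a quick back-of-envelope calculation. Here is a minimal sketch, assuming the lab-observed rate holds constant as a fraction of the remaining mass (i.e., exponential decay), which is almost certainly optimistic outside lab conditions:

```python
import math

# Lab-observed breakdown rate for UV-treated polyethylene: ~0.05% per day.
DAILY_RATE = 0.0005

def remaining_fraction(days: float) -> float:
    """Fraction of plastic mass left after `days` of degradation."""
    return (1 - DAILY_RATE) ** days

def days_to_degrade(fraction_gone: float) -> float:
    """Days needed for `fraction_gone` of the original mass to disappear."""
    return math.log(1 - fraction_gone) / math.log(1 - DAILY_RATE)

print(f"50% gone after {days_to_degrade(0.5):.0f} days")        # ~1386 days
print(f"99% gone after {days_to_degrade(0.99) / 365:.1f} years")  # ~25 years
```

Even under that generous assumption, degrading just half of a given piece takes nearly four years, and 99 percent takes roughly a quarter century, which is why the researchers frame the fungus as one contributor among many rather than a solution.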

Regardless, the P. album finding is heartening — and according to the researchers, this latest discovery suggests that more plastic-eating organisms might be out there.

"Marine fungi can break down complex materials made of carbon," said Vaksmaa, adding that it's "likely that in addition to the four species identified so far, other species also contribute to plastic degradation."

More on plastic-hungry microbes: Scientists Gene-Hack Bacteria to Turn Waste Plastic Into Kevlar-Like Spider Silk


Forensic Analysis Finds Overwhelming Similarities Between OpenAI’s Voice and Scarlett Johansson

The analysis used several AI models to compare the OpenAI voice to the voices of around 600 actresses, including Scarlett Johansson.

A+ Copycat

OpenAI's controversial "Sky" voice for ChatGPT sounds remarkably similar to the voice of Scarlett Johansson, a forensic analysis has found, adding weight to what many already suspected and what Johansson herself has charged: that OpenAI deliberately mimicked the actress's voice without her permission.

The analysis, conducted by researchers at Arizona State University and commissioned by NPR, used several AI models to evaluate similarities between Sky's voice and those of about 600 actresses, including Johansson.

Lo and behold, it found that Johansson's voice was more similar to Sky than the voices of 98 percent of the other candidates.

There are a few caveats, however. Johansson wasn't always the top match, with the voices of Anne Hathaway and Keri Russell "often" rated as more similar, according to NPR. Sky's voice is also slightly higher-pitched and more expressive, while Johansson's is breathier.

But other parts of the analysis are damning, such as one that simulated the speakers' vocal tracts based on the characteristics of their voices and found that Sky and Johansson would have identical vocal tract lengths.
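
For readers curious how a vocal tract length can be inferred from audio at all: a standard first approximation in acoustic phonetics models the tract as a uniform tube closed at one end, whose resonances (the formants) fall at odd multiples of a quarter wavelength. The sketch below uses that textbook model; it illustrates the general idea only, and is not the ASU team's actual method.

```python
# Quarter-wavelength (uniform tube) model of the vocal tract.
# Resonance n of a tube closed at one end and open at the other:
#   F_n = (2n - 1) * c / (4 * L)   =>   L = (2n - 1) * c / (4 * F_n)
SPEED_OF_SOUND = 35000.0  # cm/s, roughly the speed of sound in warm air

def vocal_tract_length_cm(formant_hz: float, n: int = 1) -> float:
    """Estimate tract length (cm) from the n-th formant frequency (Hz)."""
    return (2 * n - 1) * SPEED_OF_SOUND / (4 * formant_hz)

# A neutral-vowel first formant near 500 Hz implies a ~17.5 cm tract,
# typical for an adult speaker; a shorter tract pushes formants higher.
print(vocal_tract_length_cm(500.0))  # 17.5
```

In practice, forensic tools fit many formants across many frames and invert far more detailed articulatory models, but the underlying physics is the same: two voices with matching formant structure imply near-identical tract lengths.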

Visar Berisha, a computer scientist at ASU who led the analysis, summed it up neatly. "Our analysis shows that the two voices are similar but likely not identical," he told NPR.

Sky-High Lies

The controversy stems from a big update to ChatGPT released last month, which debuted a new voice assistant capable of real-time conversation.

Sky was one of those voices, and soon enough, people took note of its resemblance to Johansson's voice in the sci-fi movie "Her," in which she plays a chirpy AI chatbot that the film's melancholic protagonist falls in love with.

If those parallels weren't already suspicious, they were all but confirmed by OpenAI CEO Sam Altman — a professed fan of the movie — who cheekily tweeted the single word "her" on the day of the voice assistant's release.

Then in a blundering series of backpedals, the AI company suddenly pulled the Sky voice days later, but said it had not copied ScarJo's voice. Instead, it claimed, a different actress was behind the chatbot (which was later corroborated by reporting from The Washington Post).

Johansson fired back, revealing that OpenAI had in fact twice approached her to license her voice. She turned the offers down, only to discover that OpenAI had released a chatbot with a voice she thought was "eerily similar" to hers.

In the face of mounting negative PR, OpenAI has maintained that this whole fiasco was simply the fault of its poor communication with the actress. Johansson hasn't filed a lawsuit yet, but she has hired lawyers. Many legal experts already believed that she would have a strong case. And now, with these latest forensic findings, it could be even stronger.

More on AI: OpenAI Insiders Say They're Being Silenced About Danger


OpenAI Negotiating to Buy "Vast Quantities" of Fusion Power, Which Doesn’t Exist Yet

Fusion startup Helion is in talks for a deal with OpenAI to supply "vast quantities" of electricity to power the AI company's data centers.

For all his public visibility, Sam Altman gets only a measly $65,000 a year in salary from OpenAI, and no ownership stake.

But as The Wall Street Journal reports, the CEO has a far more lucrative venture fund to pay the bills: he's invested in hundreds of companies, many of which are directly benefiting from his AI company's soaring success.

And some of those companies directly do business with OpenAI, raising questions over possible conflicts of interest.

Near the top of that list is Helion, a nuclear fusion power company that's been around for about 11 years.

And as it turns out, the company didn't just sign a massive partnership with OpenAI partner Microsoft last year, but it's even in talks for a deal with OpenAI itself to "buy vast quantities of electricity to provide power for data centers," according to the WSJ.

But there's one big problem: the tech has yet to materialize, making any promise of "vast" amounts of power little more than a commitment to a distant future. Try as they might, researchers have yet to make fusion a viable way to generate energy, rendering it a moonshot for now.

At the same time, the revelation throws Altman's already dubious personal dealings into an even murkier light.

Despite fusion remaining a glint in the eye of nuclear engineers, Altman professes to believe in its promise as a source of clean electricity, having invested $375 million in Helion back in 2021, the biggest investment he's ever made in a startup.

"Helion is, like, more than an investment to me," he said at a StrictlyVC event last year. "It’s the other thing beside OpenAI that I spend a lot of time on. I’m just super excited about what’s going to happen there."

In many ways, it would be an elegant — albeit entirely unproven — solution to OpenAI's insatiable energy demands. Training AI is an infamously power-hungry process that consumes a staggering amount of water as well. Apart from fusion energy, Microsoft is also investigating building small nuclear fission reactors to keep its data centers running.

But despite scientists repeatedly claiming various "breakthroughs" in the field of nuclear fusion, we have yet to build a reactor that can produce net energy at a useful scale.

Still, Altman is doubling down, claiming during this year's World Economic Forum in Davos that the future of AI will depend on a "breakthrough" in power generation. "It motivates us to go invest more in fusion," he said at the time.

The recently minted billionaire has even implied that artificial general intelligence, a form of AI that would supersede the capabilities of humans, could "make fusion" happen.

In other words, like many of his peers in the venture capital world, Altman is in the business of selling dreams, not technological actualities. Besides, as he recently admitted, his AI company doesn't even know how its current crop of AI models works in the first place.

"We can build AGI," he tweeted back in 2022. "We can colonize space. We can get fusion to work and solar to mass scale. We can cure all human disease. We can build new realities."

More on Sam Altman: Sam Altman Admits That OpenAI Doesn't Actually Understand How Its AI Works


The AI Industry Is Swarming DC With Lobbyists

A report by Public Citizen found that the number of AI lobbyists more than doubled in 2023 from the year before.

Swarm of the Suits

As the AI industry continues to balloon, so does its army of lackeys ready to buttonhole lawmakers for favorable regulation. According to a new report by the consumer rights advocacy group Public Citizen, thousands of AI lobbyists have descended upon the Capitol, in a dramatic surge of influence that's already coincided with major policy decisions.

Between 2019 and 2022, the number of lobbyists deployed by corporations and other groups on AI-related issues held relatively steady year to year, hovering around 1,500. Then in 2023, things went off the charts, with over 3,400 lobbyists flooding Washington, DC — an increase of more than 120 percent.

"We're reaching a point where the policies that are going to shape AI policy in the next 10 years are really being decided now," Mike Tanglis, research director at Public Citizen's Congress Watch division, told The Hill. "From our perspective, having the leading voices on an issue being those that stand to make billions of dollars is generally not a good idea for the public."

Power Up

Those numbers show that the AI lobby has had a sizeable presence for years. It's only now, with the mainstream popularity of chatbots like ChatGPT and image generators like Midjourney, that people are beginning to take notice — and that the number of AI lobbyists has begun to significantly climb.

One of the more brazen displays of the industry's sway over the federal government took place last fall, when dozens of tech leaders, from Elon Musk to Sam Altman, gathered for a historic closed-door session with over 60 US senators, lecturing them about the future of AI.

Unsurprisingly, a plurality of the lobbyists today comes from the tech industry — 700 of them, or 20 percent of the total. But a mix of 17 other industries comprised the remaining 80 percent, illustrating the wide scope of intersecting interests in AI.

Advocacy groups accounted for the next-largest chunk with 425 lobbyists, who could be either pro- or anti-AI. Others came from defense, health care, financial services, and education, all with clear stakes.

Presidential Prize

What's interesting isn't just where these lobbyists are coming from, but where they're going. Excluding both houses of Congress — the most obvious target — the White House was the most-lobbied body of the federal government last year, drawing over 1,100 lobbyists.

Their reasons for hounding the Oval Office are obvious. In October, President Joe Biden issued an executive order on AI laying down ground rules for its development, rules that were noticeably vague. If the AI industry hadn't already influenced that order, it will undoubtedly keep sending lobbyists to shape how it's enforced in the future. Case in point: the report found that the number of lobbyists jumped immediately after the executive order was issued.

Of course, what really talks is the money, and those figures are less clear. A recent report from OpenSecrets found that groups lobbying the government on AI spent more than $957 million last year — but that figure covers a range of issues, not just the emerging technology.

But, as Public Citizen's report concludes, expect all those figures to climb — dollars, suits, you name it.

More on AI: OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity


News Site Says It’s Using AI to Crank Out Articles Bylined by Fake Racially Diverse Writers in a Very Responsible Way

A news network is attributing AI-spun articles to fake authors with racially diverse names. Its publisher claims the names were unintentional.

A national network of local news sites called Hoodline is using fake authors with fictional and pointedly racially diverse names to byline AI-generated articles.

The outlet's publisher claims it's doing so in an extremely normal, human-mediated way. But unsurprisingly, a Nieman Lab analysis of the content and its authors suggests otherwise.

Per Nieman, Hoodline websites were once a refuge for hyperlocal, human-boots-on-the-ground reporting. These days, though, when you log onto a Hoodline site, you'll find articles written by a slew of entirely fabricated writers.

Hoodline is owned by a company called Impress3, which in turn is helmed by a CEO named Zack Chen. In April, Chen published an article on Hoodline's San Francisco site explaining that the news network was using "pen names" to publish AI-generated content — a euphemism that others have deployed when caught slipping fake writers into reputable outlets.

In that hard-to-parse post, Chen declared that these pen names "are not associated with any individual live journalist or editor." Instead, "the independent variants of the AI model that we're using are tied to specific pen names, but are still being edited by humans." (We must note: that's not the definition of a pen name, but whatever.)

Unlike the fake authors that Futurism investigations discovered at Sports Illustrated, The Miami Herald, The LA Times, and many other publications, Hoodline's made-up authors do have little "AI" badges next to their names. But in a way, as Nieman notes, that disclosure makes its writers even stranger — not to mention more ethically fraught. After all, if you're going to be up-front about AI use, why not just publish under a generalized byline, like "Hoodline Bot"?

The only reason to include a byline is to add some kind of identity, even if a fabricated one, to the content — and as Chen recently told Nieman, that's exactly the goal.

"These inherently lend themselves to having a persona," Chen told the Harvard journalism lab, so "it would not make sense for an AI news anchor to be named 'Hoodline San Francisco.'"

Which brings us to the details of the bylines themselves. Each city's Hoodline site has a bespoke lineup of fake writers, each with their own fake name. In May, Chen told Bloomberg that the writers' fake names were chosen at random. But as Nieman found, the fake author lineups at various Hoodline websites appear to reflect a given region's demographics, a reality that feels hardly coincidental. Hoodline's San Francisco-focused site, for example, published content under fake bylines like "Nina Singh-Hudson," "Leticia Ruiz," and "Eric Tanaka." But as Nieman's Neel Dhanesha writes, the "Hoodline site for Boston, where 13.9 percent of residents reported being of Irish ancestry in the 2022 census, 'Leticia Ruiz' and 'Eric Tanaka' give way to 'Will O'Brien' and 'Sam Cavanaugh.'"

In other words, it strongly seems as though Hoodline's bylines were designed to appeal to the people who might live in certain cities — and in doing so, Hoodline's sites didn't just manufacture the appearance of a human writing staff, but a racially varied one to boot. (In reality, the journalism industry in the United States is starkly lacking in racial diversity.)

And as it turns out? Hoodline's authors weren't quite as randomized as Chen had previously suggested.

"We instructed [the tool generating names] to be randomized, though we did add details to the AI tool generating the personas of the purpose of the generation," Chen admitted to Nieman, adding that his AI was "prompted to randomly select a name and persona for an individual who would be reporting on — in this case — Boston."

"If there is a bias," he added, "it is the opposite of what we intended."

Chen further claimed that Hoodline has a "team of dozens of (human) journalist researchers who are involved with information gathering, fact checking, source identification, and background research, among other things," though Nieman's research unsurprisingly found a number of publishing oddities and errors suggesting there might be less human involvement than Chen was letting on. Hoodline also doesn't have any kind of masthead, so it's unclear whether its alleged team of "dozens" reflects the same kind of diversity it's awarded its fake authors.

It's worth noting that a similar problem existed in the content we discovered at Sports Illustrated and other publishers. Like at Hoodline, many of those fake writers were given racially diverse names; many of the made-up writer profiles even went a step further and were outfitted with AI-generated headshots depicting fake, diverse faces.

Attributing AI-generated articles to fake writers at all, regardless of whether they have an "AI" badge next to their names, raises red flags from the jump. But fabricating desperately needed diversity in journalism by whipping up a fake writing staff — as opposed to, you know, hiring real humans from different minority groups — is a racing-to-the-bottom kind of low.

It seems that Hoodline's definition of good journalism, however, would differ.

"Our articles are crafted with a blend of technology and editorial expertise," reads the publisher's AI policy, "that respects and upholds the values of journalism."

More on fake writers: Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry


Boeing Spacecraft Finally Manages to Limp Off the Earth

Boeing has finally launched its much-maligned Starliner astronaut shuttle into space with two NASA astronauts on board.

Participation Trophy

The third time's the charm — NASA and Boeing have finally done it.

After many years of delays, technical issues, an unsuccessful test flight and plenty of bad luck, Boeing has finally launched its much-maligned Starliner astronaut shuttle into space.

The United Launch Alliance's Atlas V rocket took off around 10:52 am, just as planned, from Space Launch Complex-41 at Cape Canaveral Space Force Station in Florida, carrying astronauts Butch Wilmore and Suni Williams into orbit.

While it's far from the first time we've seen a privately developed spacecraft carry humans to orbit, the mission did mark a milestone: Williams became the first woman to fly on the maiden crewed test flight of an orbital spacecraft.

It's a triumphant moment for the spacecraft, which has been in development for roughly a decade now. The project, which was meant to compete with SpaceX's Crew Dragon capsule under NASA's commercial crew program, encountered plenty of setbacks, from the discovery of flammable materials inside the spacecraft to a strange "buzzing" sound that forced officials to scrub the first of three launch attempts last month.

Despite those hurdles, Boeing has now prevailed, limping over the finish line with today's crewed test launch.

Better Late

However, the elephant in the room is SpaceX, which has already made ten successful trips to the ISS over the last five years (and with a reusable rocket, to boot).

Nonetheless, it's still a momentous occasion in the world of space exploration, a big step in the United States' efforts to establish independent ways to get astronauts into space without relying on Russia.

"Congratulations to NASA, Boeing, and ULA on this morning’s launch to the space station, and Godspeed to Butch, Suni and Starliner on your flight!" SpaceX president and COO Gwynne Shotwell tweeted.

But Boeing isn't out of the woods just yet. The capsule is expected to take around 25 hours to reach the orbital space station, at which point it'll have to adjust its trajectory perfectly for docking (something the spacecraft has already done without any crew on board).

Needless to say, we wish Butch and Suni the best of luck.

More on the launch: Boeing Keeps Making Excuses to Push Back First Astronaut Launch


The Cybertruck’s Steering Has a Significant Lag

Tesla's Cybertruck's steer-by-wire system has a considerable delay. But that may be a feature, not a bug, as many netizens have argued.

Tesla's Cybertruck is a major departure from conventional automotive design in many ways, from its peculiar shape to its use of stainless steel.

High on that list is the pickup's steer-by-wire system, which translates the movement of the steering yoke to all four wheels using electronic actuators, forgoing any physical connection.

Tesla claims that the system means that "steering Cybertruck feels more responsive and requires less effort from the driver."

But if a recent video spotted by Jalopnik is anything to go by, the system suffers from a considerable delay between the movement of the steering yoke and the front wheels, raising questions over whether the truck is truly safe to drive.

The cybertruck has a fly-by-wire steering wheel.... and it LAGS ??? pic.twitter.com/nUbrCXjU0r

— Heart (@heartereum) June 3, 2024

The video quickly drew plenty of derision.

"Imagine crashing because of your steering ping," one user joked.

But as many other netizens have since pointed out, there may be a good reason for the delay.

Other users on Tesla CEO Elon Musk's social media platform X quickly appended a Community Note to the viral video, arguing that a conventional setup "without steer by wire would take far longer to make that turn."

"It isn't lag," the note reads. "This is a safety feature."

They may be onto something. Going from one extreme of the steering range to the other takes a considerable amount of steering wheel movement in a conventional car, as one Reddit user demonstrated with his Ford F-150. Besides, completing the maneuver seen in the video while traveling at speed could result in very erratic and potentially dangerous movement, and in extreme cases could even flip the vehicle (although you'd have to try very hard to flip a 6,600-pound EV).

The Cybertruck's steering wheel is also designed to translate far more movement to the wheels with relatively little turning of the steering yoke at slower speeds. At highway speeds, that ratio becomes much lower to ensure stability on the road.

"There’s absolutely no real-life scenario in which you need to turn the wheels that quickly while stationary," one Reddit user pointed out.

Car journalists have generally spoken highly of the steer-by-wire system, noting the truck's surprising agility. However, most have also noted that the unusual setup takes time to get used to.

But what about the responsiveness of the steering at higher speeds? What would happen if a Cybertruck driver had to swerve out of the way of an oncoming obstacle, a situation where every fraction of a second counts? As users on Hacker News pointed out, even a minimal amount of lag could lead to a driver overreacting, making the situation worse.

Plenty of questions remain. For one, we don't know whether the delay is present while the Cybertruck is in motion, or how such a delay would compare to the stationary one seen in the video, especially when taking the variable steering ratio into account.

Nonetheless, there's a good case to be made that this particular video may have primarily served as a way to take a potshot at Tesla and draw a crowd.

To be clear, there are plenty of other valid criticisms of the unusual pickup, including terrible range, shoddy workmanship, besmirched body panels, lack of manual controls, a finicky and unreliable truck bed cover — and lots of lemons being delivered to customers.

"There's many, many, many, many reasons to hate on the Cybertruck but this isn't one of them," one Reddit user argued.

More on the Cybertruck: Elon Musk Is Gonna Blow a Gasket When He Sees This Pride-Themed Cybertruck

The post The Cybertruck's Steering Has a Significant Lag appeared first on Futurism.

NASA Slaps Down Billionaire’s Plan to Fly Up and Fix Hubble Space Telescope

Billionaire space tourist Jared Isaacman offered to fix NASA's Hubble telescope. Officials are still unsure the benefits outweigh the risks.

Offer Declined

NASA's groundbreaking Hubble Space Telescope is on its last legs.

Ongoing issues with the aging spacecraft's remaining gyroscopes, which help point it in the right direction, have forced scientists to limit its scientific operations, according to a Tuesday update, with teams preparing for "one-gyro operations."

And while billionaire space tourist Jared Isaacman, who already circled the Earth inside a SpaceX Crew Dragon, has offered to foot the bill for a Hubble maintenance mission — the last one took place in 2009, before the end of the Space Shuttle program — NASA has now turned him down.

Basically, the agency is worried Isaacman and his collaborators may end up doing more harm than good.

"After exploring the current commercial capabilities, we are not going to pursue a reboost right now," said NASA astrophysics director Mark Clampin, as quoted by CBS News. While NASA "greatly appreciates" their efforts, "our assessment also raised a number of considerations, including potential risks such as premature loss of science and some technology challenges."

However, the door isn't entirely shut just yet.

"So while the reboost is an option for the future, we believe we need to do some additional work to determine whether the long-term science return will outweigh the short-term science risk," Clampin concluded.

Thanks, But No Thanks

It's yet another intriguing development in the ever-changing relationship between NASA and the burgeoning private industry it's increasingly relying on for access to space.

As NPR reported last month, NASA spent years hemming and hawing over Isaacman's offer to visit the Hubble.

The entrepreneur and trained fighter jet pilot commanded the first all-civilian mission into space, which saw a crew of four circle the Earth inside a SpaceX Crew Dragon spacecraft in September 2021. He has been calling for a maintenance mission, arguing that "the 'clock' is being run out on this game."

Isaacman will also attempt to perform the first-ever private spacewalk later this year.

But plenty of concerns remain, with NASA pointing out that SpaceX's Crew Dragon isn't exactly designed for such a mission, and lacks several core features of NASA's Space Shuttle, which was used to service the Hubble five times between 1993 and 2009.

For one, it doesn't have an airlock or a robotic arm, which could make repairing the Hubble difficult.

Besides, even during NASA's servicing missions, astronauts came nail-bitingly close to permanently damaging the space telescope.

Instead, NASA is looking for ways to eke just over another decade of life out of the Hubble, without a SpaceX-enabled visit.

"We updated reliability assessments for the gyros... and we still come to the conclusion that (we have a) greater than 70 percent probability of operating at least one gyro through 2035," Hubble project manager Patrick Crouse told reporters on Tuesday.

More on the Hubble: NASA Experts Concerned Billionaire Space Tourist Will Accidentally Break Hubble Space Telescope While Trying to Fix It

You Might Cry When You Read This Study About What’s Happening to the Oceans

Beware the three horsemen of the ocean apocalypse, according to a new study: extreme heat, acidification, and deoxygenation.

Aquatic Omens

Beware the three horsemen of the ocean apocalypse: extreme heat, acidification, and deoxygenation. New research, published in the journal AGU Advances, has shown how this "triple threat" has drastically intensified over the past several decades, pushing our oceans ever closer to the brink in what is yet another clear consequence of climate change.

Though nothing's set in stone, the findings exhibit eerie parallels to the precursors of previous mass extinctions.

"If you look at the fossil record you can see there was this same pattern at the end of the Permian, where two-thirds of marine genera became extinct," Andrea Dutton, a climate scientist at the University of Wisconisin-Madison who was not involved in the study, told The Guardian. "We don't have identical conditions to that now, but it's worth pointing out that the environmental changes going on are similar."

New Extremes

Extreme heat, acidification, and deoxygenation are all fearsome forces on their own. Combine two or more of them, and they can be catastrophic: together they cause what are known as column-compound extreme events (CCX), which render affected areas of the ocean virtually uninhabitable.

The research, which focused on the effects in the upper one thousand feet of the ocean, found that these compound events are growing, and now threaten up to 20 percent of global ocean volume. The waters of the North Pacific and the tropics are the hardest hit, as the only areas faced with full-blown triple CCX — at least so far.

To make matters worse, the events are only getting more extreme, lasting three times longer — up to 30 days — and growing six times more intense compared to the 1960s, per the Guardian. And wherever they occur, they can cut down the amount of habitable space by up to 75 percent.

"The impacts of this have already been seen and felt," study lead author Joel Wong, a researcher at ETH Zurich, told the newspaper.  "Intense extreme events like these are likely to happen again in the future and will disrupt marine ecosystems and fisheries around the world."

Sinking Feeling

Oceans are the world's largest carbon sinks, absorbing the greenhouse gas and keeping it out of the atmosphere — and this immense burden, worsened due to climate change driven by human emissions, is taking its toll.

As the oceans absorb more carbon, their seawater becomes more acidic, damaging marine life. It also has the effect of crowding out oxygen molecules, straining aquatic populations.

Marine biomes are also enormous heat sinks. As expected, soaring global temperatures are putting them under incredible stress. But last year, the oceans also experienced a spike in warming that outpaced even the most pessimistic predictions, bewildering scientists. Who knows, then, just how extreme these compounding catastrophes can get.

More on oceans: Scientists Find Plastic-Eating Fungus Feasting on Great Pacific Garbage Patch

Widow Astonished by Options After Husband Dies: "Space?! I Can Shoot Him Into Space?"

After her husband died, one woman found the perfect way to send off his cremated remains after discovering space burial company Celestis.

Rest in Space

Jeremiah Corner was a lifelong fan of "Star Trek."

In 2022, he succumbed to an aggressive autoimmune disease, leaving his wife Uli to decide what to do with his cremated remains, as KOMO News reports.

After doing some research, Corner found the perfect solution: space burial company Celestis.

"Space?! I can shoot him into space?" Corner recalled in an interview with KOMO News.

Fitting End

Having your loved one's cremated remains launched into near-space doesn't come cheap, costing anywhere from $3,500 to $13,000 in the case of a deep space mission.

"I thought to myself, 'If he was alive, I'd be like, honey, do you want to keep your car or do you want to go to space?'" Corner told KOMO. "Space! So, I sold his car and sent him to space."

Since 1997, Celestis has been rocketing the ashes of the deceased into space. Over the decades, it's delivered the remains of "Star Trek" creator Gene Roddenberry, legendary physicist Dr. Gerard O’Neill, and Apollo-era Moon astronaut Philip Chapman.

In total, the company has completed 17 memorial spaceflights, including one that impacted the Moon.

But not everybody agrees with the practice. In January, Navajo Nation president Buu Nygren filed a formal objection with NASA and the US Department of Transportation, decrying plans to deliver ashes to the lunar surface as part of US-based space startup Astrobotic's Peregrine Mission 1 as an "act of desecration."

Fortunately for Nygren — and unfortunately for Roddenberry's family — Peregrine never made it to the Moon and crash-landed in the Pacific Ocean after spending six days in orbit.

Corner's husband Jeremiah, however, fared much better. His remains were part of Celestis' Enterprise mission into deep space — named in honor of "Star Trek," of course — which launched on the same United Launch Alliance Vulcan rocket as Celestis' moonbound Tranquility mission on January 8.

Corner was in good company, to say the least. Joining his ashes were the DNA of American presidents George Washington, Dwight Eisenhower, and John Kennedy, as well as some of the remains of several cast and crew members from the original "Star Trek" series.

"It felt very spiritual in a way because you're watching someone ascend, literally ascent into the heavens," his surviving wife told KOMO News. "One of the things I wrote on his memorial was I give you the universe. I loved him that much, and so I wanted to do that for him."

More on Celestis: Native Americans Say New Mission Will Desecrate the Moon

The Diamond Industry Is Withering as Beautiful Lab-Grown Diamonds Drive Down Prices

Lab-grown diamonds are taking the jewelry industry by storm — and those invested in the natural-grown kind are none too pleased.

Not Forever

Lab-grown diamonds are taking the jewelry industry by storm — and those invested in the natural-grown kind are none too pleased.

As CNBC reports, consumers have developed a taste for less-expensive lab-grown diamonds, which are identical to those forged within the Earth's pressurized mantle but only take a few hours to make, rather than a few billion years.

Said to be up to 85 percent cheaper than mined diamonds, lab-grown diamonds have seen a huge jump in demand as frugal consumers seek to save money and — let's be honest — patronize sellers who don't have blood on their hands.

According to data provided to CNBC by diamond industry analyst Paul Zimnisky, lab-grown diamonds represented more than 18 percent of the diamonds sold in 2023, up from just two percent in 2017. Overall diamond prices, meanwhile, have fallen 5.7 percent this year alone, the analyst said.

This sea change has made major waves in the diamond industry, as evidenced by the current debacle at De Beers, the company that in 1948 coined the slogan "diamonds are forever." After seeing major revenue losses in the hundreds of millions of dollars, De Beers is now in a tense breakup with its majority shareholder, the mining company Anglo American — and is recommitting itself to mined diamonds in the midst of it all.

Crazy Diamond

While the drama at the diamond industry's most prestigious institutions rages on, smaller companies are left dealing with the fallout.

Take it from Ankur Daga, the CEO and founder of the e-commerce jeweler Angara, who pointed to analysis suggesting that half of engagement rings bought this year will feature lab-grown diamonds. That figure, as a survey commissioned by The Knot wedding magazine found, is almost quadruple the 12 percent of buyers who said they'd be going lab-grown in 2019.

"The diamond industry is in trouble," said Daga, with the "core issue" at hand being the "rapid growth of lab-grown diamonds."

Anish Aggarwal, the cofounder of the diamond advisory firm Gemdex, told CNBC that he believes the industry is up to the challenge — and that its own short-sightedness is likely the cause for its current crisis, anyway.

"The industry has not done large-scale... marketing for almost 20 years," Aggarwal noted. "And we’re seeing the aftermath of that."

To recapture the public's infatuation with diamonds, the industry clearly needs to get on board with the times — and, perhaps, take the L when it comes to consumers wanting to save on luxury goods during an ongoing global recession.

More on luxury: Orcas Strike Again, Sinking Yacht as Oil Tanker Called for Rescue

Facebook Page Uses AI-Generated Image of Disabled Veteran to Farm Engagement

An AI-generated image of a young disabled veteran went viral on Facebook. A lot of folks — particularly older men — think she's the real deal.

An image, posted this week to a Facebook page called "Summer Vibes," shows a smiling young woman with brunette hair. She's dressed in Army fatigues — although, puzzlingly, she's not wearing pants, and the mangled American flag patch on the arm of her jacket has only six stripes and zero stars. She's white. She's conventionally attractive. And crucially, this grinning young woman is seated in a wheelchair, implying that she's an injured or disabled veteran.

"Please don't swip [sic] up without giving some love," reads the image's garbled caption. "Without heroes,we [sic] are all plain people,and [sic] don't know how far we can go." The caption is then followed by a string of hashtags listing the names of famous actresses like Anne Hathaway, Megan Fox, and Jennifer Lopez (as well as Christian Bale and Chris Evans, for some reason.)

Needless to say, the woman isn't real. She's AI-generated, and to many, that's obvious. In addition to the woefully incorrect American flag tacked onto the uniform, the last name that would normally appear on a soldier's pocket is an illegible clump of blobs that, when zoomed out, gives off only the semblance of lettering. Her teeth, eyes, and ears are also blurry and uncanny, as are her poorly defined hands.

And yet, despite these obvious flaws, the image has gone viral: at present, it has more than 62,000 reactions, nearly 5,000 comments, and 2,500 shares. And judging by the comments section? A lot of folks — particularly older men — absolutely think she's the real deal.

"Thank you for your sacrificial service to America and its citizens to maintain, [sic] our Republic, our Constitution and our God given [sic] rights and freedoms!" wrote one commenter, noting that he served in the military during the Vietnam war. "Thank you Summer, you are a beautiful, brave young lady!" he added, rounding the post out with heart, American flag, Statue of Liberty, and bald eagle emojis.

"Thank you my sister in arms," wrote another older man, "bless you for your service and dedication."

"Beautiful," added yet another. "Thank you for your service and prayers for healing and mercies and comfort from our Lord Jesus Christ Amen."

"Miss Beautiful USA!" yet another older guy chimed in. "THANK YOU FOR YOUR SERVICE."

Though the title of the Facebook page — "Summer Vibes" — would suggest a feed of poolside shots and cocktails with tiny umbrellas, its posts are neither summery nor even vibey. It's a spam page, dedicated to posting what's likely an automated stream of images and graphics featuring battle-wounded war veterans; each post is outfitted with that same error-packed caption imploring users not to scroll through without "giving some love." Despite the page's continued pleas to support the folks in the many photos and graphics, it doesn't give any information about them, or link to any charities or donations. The page instead keeps posting image after image, begging for likes, comments, and shares.

The vast majority of Summer Vibes' images appear to be of real veterans. However, most of these posts don't get a ton of reach — some gain a sparse handful of likes and comments, others might rake in a few hundred on a good day. But like Facebook's now-infamous Shrimp Jesus AI images, not to mention countless other AI outputs that have recently gone viral on the platform, it seems that the pantsless AI vet was scooped up by Facebook's recommendation algorithm and took off from there.

It's concerning for a few reasons. For one, we have plenty of real disabled veterans who deserve care, respect, and medical and financial help. Distracting from these actual humans — who have been historically neglected upon their return from service — with a viral image of a fake one immediately raises ethical red flags. And broadly speaking, images like this clogging up the internet, where real people are still trying to share information and communicate, isn't great. (We reached out to Summer Vibes, but haven't received any response.)

Then, of course, there's the reaction to the image. As far as AI images go, this one isn't even particularly good or convincing. But some Facebook users — again, mostly older people, and especially older men — fell hook, line, and sinker for the fake photo. Some were even persuaded enough to push back against the few commenters who pointed out that the image was AI-made.

"What makes u so sure of that??" read one such retort.

This kind of reaction also has implications beyond synthetic clickbait. In March, a BBC report revealed that MAGA influencers and pundits were using AI to generate fabricated images of former president Donald Trump posing with groups of Black voters, a demographic group with whom the presidential candidate is hoping to shore up more support in his ongoing 2024 campaign.

When we looked at the comments on one of these fake photos, which was posted to Facebook by a far-right media personality as part of their effort to sell a Rush Limbaugh-inspired Christmas book — yes, seriously — we noticed lauding, clueless comments. Some users praised Trump; some users issued prayers; others simply remarked on the "beautiful photo." Sound familiar?

And that's just one example of AI's convincing use in political content. AI is creeping further into political campaigns and election cycles worldwide, the United States' 2024 race included, and experts have repeatedly warned of the associated risks. Spamming the web with photos of attractive fake veterans, though an objectively lousy thing to do, is one thing. But after spending some time in the cursed land that is Facebook comments, it's hard not to come away with the uneasy sense that enough fake images could make a genuine dent in what a large group of individuals believes to be true.

In a consequential year, it might just matter that old dudes on the internet are looking straight past this extremely fake brunette's mangled fingers and messed-up uniform and thanking her for her service.

More on AI and misinformation: Researchers Say Russia Is Using AI to Predict Terrorism at Paris Olympics

Remote Amazon Tribe Finally Gets Internet, Gets Hooked on Porn and Social Media

Starlink allows the Marubo people, an Amazon tribe, to have internet even in the heart of the rainforest — but it comes at a cost.

Five Bars

A remote tribe in the Amazon rainforest is getting to experience the wonders of the internet for the first time, thanks to Elon Musk's satellite network Starlink. But, by connecting to the rest of the world, it sounds like the Marubo people are beginning to pick up some of our modern bad habits.

The New York Times reports on what may sound a bit familiar: young people poring over social media feeds, streaming soccer games, and of course, gossiping over WhatsApp. Evenings are spent lounging around on their phones and playing first-person shooters and other video games.

"When it arrived, everyone was happy," said Tsainama Marubo, 73. "But now, things have gotten worse."

Some of the young men are especially getting a kick out of it. Alfredo Marubo, a leader of an association of the tribe's villages, lamented that the boys, now with their own group chats, were sharing porn and other explicit videos — unprecedented in a culture that considers even kissing in public taboo.

"We're worried young people are going to want to try it," Alfredo told the NYT, referring to what they see in porn.

Culture Rot

The Marubo have been using Starlink since September, after an American woman bought them some antennas to connect to the satellite network.

Now, some in the tribe fear that the internet poses an existential threat to their culture. Young people kill time by fiddling with their smartphones instead of socializing the old-fashioned way, isolating them from their elders. By being exposed to the outside world, some of the teenagers now dream of exploring it. Alfredo fears that this could mean the tribe's culture and history, which has been passed down orally, could be lost.

"Everyone is so connected that sometimes they don't even talk to their own family," he told the NYT.

Tsainama echoed those fears, but was more conflicted. "Young people have gotten lazy because of the internet," she said. "They're learning the ways of the white people. But please don't take our internet away."

A Tangled Web

The internet comes with its vices, and to combat them, leaders have imposed strict windows for using it, outside of which the connection's shut off. But they also realize its undeniable benefits. In an area so remote that it takes several days of arduous hiking to reach, effortless and instant communication is life-changing.

New job opportunities have opened up. Villages can now easily coordinate over group chats, and also reach out to local authorities.

"It's already saved lives," Enoque Marubo, who was one of the first in the tribe to push for an internet connection, told the NYT, such as in the case of venomous snakebites, which need immediate medical treatment.

"The leaders have been clear," he added. "We can't live without the internet."

More on: Something Fascinating Happens When You Take Smartphones Away From Narcissists

Doctors Administer Oxytocin Nasal Spray to Lonely People

Doctors administered the "love hormone" oxytocin to lonely people as a nasal spray, with intriguing results.

We might like to think of ourselves as rational creatures, but the fact is that the whole experience of being human is basically the result of a bunch of swirling chemicals in the brain.

Case in point? A team of European and Israeli doctors just released an intriguing study, published in the journal Psychotherapy and Psychosomatics, in which they administered oxytocin — that's the much-hyped feel-good hormone that's released by physical intimacy, among other activities — to lonely people as a nasal spray.

Take a beat to get over the premise of giving people in social distress direct doses of what's known to many researchers as the "love hormone," because the results were pretty interesting.

While the subjects didn't report a reduction in perceived loneliness, perceived stress, or quality of life, they did report a reduction in acute feelings of loneliness — a narrow distinction, but one that was clearly tantalizing to the researchers, especially because the effect seemed to linger for months after treatment.

"The psychological intervention was associated with a reduced perception of stress and an improvement in general loneliness in all treatment groups, which was still visible at the follow-up examination after three months," said the paper's senior author Jana Lieberz, a faculty member at Germany's University of Bonn, in a press release about the research.

Perhaps more intuitively — oxytocin is strongly associated with bonding — the researchers also found that subjects dosed with the hormone had an easier time connecting with others during group therapy sessions in which they were enrolled.

"This is a very important observation that we made — oxytocin was able to strengthen the positive relationship with the other group members and reduce acute feelings of loneliness right from the start," Leiberz said. "It could therefore be helpful to support patients with this at the start of psychotherapy. This is because we know that patients can initially feel worse than before starting therapy as soon as problems are named. The observed effects of administering oxytocin can in turn help those affected to stay on the ball and continue."

Further research is clearly needed; the trial size was limited, at just 78 participants, and it's difficult to parse the exact difference between "perceived" and "acute" loneliness they reported.

But the doctors behind the study are clearly intrigued, writing in the press release that the work "could help to alleviate loneliness," which is "associated with many mental and physical illnesses."

While Lieberz "emphasizes that oxytocin should not be seen as a panacea," the release continues, the "results of the study suggest that oxytocin can be used to achieve positive effects during interventions."

With the rush of academic and commercial interest we've seen in the potential pharmaceutical benefits of everything from ketamine to MDMA, don't be surprised if we see a rush of interest in oxytocin over the next few years.

More on oxytocin: Scientists Discover That Dogs Cry Tears of Joy When Reunited With Owners

OpenAI Employees Forced to Sign NDA Preventing Them From Ever Criticizing Company

OpenAI employees are forced to sign a restrictive nondisclosure agreement (NDA) forbidding them from ever criticizing the company.

Cone of Silence

ChatGPT creator OpenAI might have "open" in the name, but its business practices seem diametrically opposed to the idea of open dialogue.

Take this fascinating scoop from Vox, which pulls back the curtain on the restrictive nondisclosure agreement (NDA) that employees at the Sam Altman-helmed company are forced to sign to retain equity. Here's what Vox's Kelsey Piper wrote of the legal documents:

It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI "due to losing confidence that it would behave responsibly around the time of AGI," has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

Signature Flourish

How egregious the NDA is depends on your industry and view of employees' rights. But what's certain is that it flies directly in the face of the "open" in OpenAI's name, as well as much of its rhetoric around what it frames as the responsible and transparent development of advanced AI.

For its part, OpenAI issued a cryptic denial after Vox published its story that seems to contradict what Kokotajlo has said about having to give up equity when he left.

"We have never canceled any current or former employee’s vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit," it said. When Vox asked if that was a policy change, OpenAI replied only that the statement "reflects reality."

It's possible to imagine a world in which the development of AI was guided by universities and publicly funded instead of being pushed forward by impulsive and profit-seeking corporations. But that's not the timeline we've ended up in — and how that reality influences the outcome of AI research is anyone's guess.

More on OpenAI: OpenAI Researcher Quits, Flames Company for Axing Team Working to Prevent Superintelligent AI From Turning Against Humankind

OpenAI Researcher Quits, Flames Company for Axing Team Working to Prevent Superintelligent AI From Turning Against Humankind

It might sound like a joke, but OpenAI has dissolved the team responsible for making sure advanced AI doesn't turn against humankind.

OpenAI Shut

It might sound like a joke, but OpenAI has dissolved the team responsible for making sure advanced AI doesn't turn against humankind.

Yes, you read that right. The objective of the team, formed just last summer, was to "steer and control AI systems much smarter than us."

"To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20 percent of the compute we’ve secured to date to this effort," the company wrote at the time. "We’re looking for excellent ML researchers and engineers to join us."

If those two names sound familiar, it's because Sutskever departed the company under a dark cloud this week, prompting Leike to quit in disgust.

And now the other shoe has dropped: as first reported by Wired, the entire team has now been dissolved.

Terminator Prequel

Sutskever, who was intimately involved with last year's plot to oust OpenAI CEO Sam Altman, has remained largely mum this week. But Leike has been publicly sounding off.

"I joined because I thought OpenAI would be the best place in the world to do this research," he wrote on X-formerly-Twitter. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Among his gripes: that the company wasn't living up to its promises to dedicate technical resources to the effort.

"Over the past few months my team has been sailing against the wind," he continued. "Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."

But his objections also sound more existential than just company politics.

"Building smarter-than-human machines is an inherently dangerous endeavor," Leike wrote. "OpenAI is shouldering an enormous responsibility on behalf of all of humanity."

"But over the past years, safety culture and processes have taken a backseat to shiny products," he alleged.

OpenAI, for its part, has been busy shipping exactly those shiny products: this week, it showed off a new version of ChatGPT that can respond to live video through a user's smartphone camera in an emotionally inflected voice that Altman compared to the 2013 romantic tragedy "Her."

"It’s a process of trust collapsing bit by bit, like dominoes falling one by one," one OpenAI employee told Vox.

More on OpenAI: The Person Who Was in Charge of OpenAI's $175 Million Fund Appears to Be Fake

Strange Photos Show NASA Astronauts Testing Spacesuits With No Arms or Visors

Arms Race

New photos from NASA show the space agency's astronauts testing spacesuits in the Arizona desert — but we're not sure these things are quite spaceworthy yet.

Why? Because they're missing, among other things, arms, legs and visors. The result is an entertaining photoshoot in which astronauts Kate Rubins and Andre Douglas trudge around completing Moonish tasks while garbed half in space gear and half in fairly regular-looking hiking clothes, including sunglasses that look comically out of place with the off-world getups.

NASA's writeup doesn't quite explain the eccentric spacesuit design, but it does specify that the outfits are mockups. Reading between the lines, it sounds like the agency is rehearsing parts of the Artemis 3 mission — slated to return astronauts to the lunar surface in a few years — even though the suits aren't fully cooked yet, so it can pin down any shortcomings in the design from the safety of Earth.

"Field tests play a critical role in helping us test all of the systems, hardware, and technology we’ll need to conduct successful lunar operations during Artemis missions," said Barbara Janoiko, director for the field test at Johnson. "Our engineering and science teams have worked together seamlessly to ensure we are prepared every step of the way for when astronauts step foot on the Moon again."

Moon Walkers

It also sounds like the astronauts are being prepared for the geological research they'll need to conduct on the Moon. That's well-precedented; the Apollo astronauts were so highly trained that, by the time they landed, each was estimated to have the equivalent of a master's degree in geology.

"During Artemis III, the astronauts will be our science operators on the lunar surface with an entire science team supporting them from here on Earth," said NASA Goddard Space Flight Center science officer Cherie Achilles in the writeup. "This simulation gives us an opportunity to practice conducting geology from afar in real time."

And big picture, it's just another fascinating glimpse into the exhaustive preparation that NASA and its Artemis astronauts are now undertaking to prepare for humankind's first crewed lunar landing since 1972.

"The test will evaluate gaps and challenges associated with lunar South Pole operations, including data collection and communications between the flight control team and science team in Houston for rapid decision-making protocols," reads the blurb. "At the conclusion of each simulated moonwalk, the science team, flight control team, crewmembers, and field experts will come together to discuss and record lessons learned."

More on NASA: NASA Admits Space Station Junk Crashed Through Man's Roof

Tesla Factory Accused of Spewing Illegal Amounts of Air Pollution

An environmental group has slapped Tesla with a lawsuit this week for spewing pollution and violating the Clean Air Act.

Smogging Gun

In an awkward turn, an environmental group has slapped Tesla with a lawsuit this week, CNBC reports, for spewing pollution from its factory in Fremont, California, and violating the Clean Air Act.

Despite Tesla touting that its factories are conscious about limiting waste, the California nonprofit group Environmental Democracy Project alleges in its lawsuit that the electric vehicle company has disregarded the Clean Air Act "hundreds of times since January 2021, emitting harmful pollution into the neighborhoods surrounding the Factory," as reported by CNBC.

The litigants say the pollution has continued to this day, with the factory spewing "excess amounts of air pollution, including nitrogen oxides, arsenic, cadmium, and other harmful chemicals."

This comes on the heels of the local air pollution control agency, the Bay Area Air Quality Management District, announcing that it's seeking to stop Tesla from releasing more pollutants into the community — and dinging it for some 112 notices of violation since 2019.

"Each of these violations can emit as much as 750 pounds of illegal air pollution, according to some estimates," the agency wrote in a statement earlier this month. "The violations are frequent, recurring, and can negatively affect public health and the environment."

Factory Hazard

The Tesla factory in California isn't the only one facing criticism. In Germany, hardline environmental activists recently breached the barriers around the company's plant just outside Berlin and tried to storm it, upset that Tesla felled more than 200 acres of trees.

"Companies like Tesla are there to save the car industry, they’re not there to save the climate," one anti-Tesla activist in Germany told a Wired reporter.

This is a persistent criticism of Tesla and other electric vehicle makers, which aim to save the environment while turning a profit selling a product: are their cars really that green?

After all, manufacturing an electric vehicle battery consumes an enormous amount of fossil fuels and requires the mining of lithium, cobalt and other minerals.

However, life cycle analyses comparing electric vehicles to ones that run on fossil fuels show that EVs win the race on lifetime emissions.

But that doesn't excuse the pollution allegations at Tesla factories, which have also earned scorn in California for hazardous waste violations.

More on Tesla: Investor Predicts Tesla Could "Go Bust"
