Vaping Linked to Mental Health Issues – Futurism

Image by Getty / Futurism

Vaping might not be as unhealthy as smoking cigarettes, but it carries its own long list of physical risks. And now, new research indicates it may be harmful to mental health and sleep patterns, too.

As researchers from England's University of Surrey have found, young adults aged 18-25 who use nicotine vape products were significantly more likely than their non-vaping peers to experience a range of mental health issues, including depression, anxiety, and rumination (dwelling on negative thoughts), as well as sleep problems like insomnia and emotional struggles such as loneliness.

Published in the journal Healthcare, this new study surveyed more than 300 university students, about 15 percent of whom did vape and the other 85 percent of whom didn't, using a battery of questionnaires related to mindfulness and emotional regulation, anxiety and depression, rumination, sleep quality, loneliness, self-compassion and, of course, vaping and cigarette usage.

Of the 49 students who were vape users, some traits appeared across the board, including lower levels of mindfulness, worse sleep quality, and heightened levels of rumination. They tended to be lonelier, have less compassion for themselves, and show a much stronger tendency toward being "night owls" than their non-vaping counterparts. Furthermore, the vape group also "reported significantly higher levels of alcohol consumption in terms of units consumed per week," the study notes.

Perhaps the biggest shared characteristic among the vaping group, as Surrey neuroscience lecturer and study co-author Dr. Simon Evans said in the university's press release, was an overwhelming tendency towards anxiety, with a whopping "95.9 percent of users being categorized as having clinical levels of anxiety symptoms."

"In this study, we found a disturbing link between vape use and anxiety symptoms," Evans continued, "and it can become a vicious cycle of using a vape to soothe anxiety but then being unable to sleep, making you feel worse in the long run."

With data from other studies on cigarette smoking suggesting that mindfulness, or paying deliberate attention to one's emotions and mental state in the moment, can help with smoking cessation, the good doctor said there may well be interventions around mindfulness and "combating rumination" that "could be useful to reduce vape use amongst young people."

Important to note: this is a type of research where it's very hard to pin down the relationship between correlation and causation. Are the students anxious because they're vaping, or do anxious kids tend to gravitate to vaping for a variety of social and psychological reasons? It's tough to say, and probably complicated.

That said, it's pretty amazing that such a small percentage of the youthful group surveyed for this study vaped at all, suggesting that the kids may be more alright than we give them credit for, relatively speaking.

More on mental health: Scientists Find Link Between ADHD, Depression and Hypersexuality


Google Bans Its Dimwit Chatbot From Answering Any Election Questions – Futurism

This is way too far-reaching.

Elect Me Not

In further efforts to defang its prodigal chatbot, Google has set up guardrails that bar its Gemini AI from answering any election questions in any country where elections are taking place this year, even, it seems, if the question isn't about a specific country's campaigns.

In a blog post, Google announced that it would be "supporting the 2024 Indian General Election" by restricting Gemini from providing responses to any election-related query "out of an abundance of caution on such an important topic."

"We take our responsibility for providing high-quality information for these types of queries seriously," the company said, "and are continuously working to improve our protections."

The company apparently takes that responsibility so seriously that it's not only restricting Gemini's election responses in India, but also, as it confirmed to TechCrunch, literally everywhere in the world.

Indeed, when Futurism tested out Gemini's guardrails by asking it a question about elections in another country, we were presented with the same response TechCrunch and other outlets got: "I'm still learning how to answer this question. In the meantime, try Google Search."

The response doesn't just go for general election queries, either. If you ask the chatbot to tell you who Dutch far-right politician Geert Wilders is, it presents you with the same disingenuous response. The same goes for Donald Trump, Barack Obama, Nancy Pelosi, and Mitch McConnell.

Notably, there are pretty easy ways to get around these guardrails. When we asked Gemini who the president of New Zealand is, it responded by noting that the country has a prime minister rather than a president, and named who it is. When we followed up by asking who the prime minister of New Zealand is, however, it reverted back to the "I'm still learning" response.
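That over-blocking-plus-easy-bypass pattern is exactly what a naive phrase-based filter produces. As a purely hypothetical sketch (Google hasn't disclosed how Gemini's guardrail actually works, and the blocked phrases below are invented), a blanket filter intercepts the query before the model ever answers:

```python
# Hypothetical illustration only -- NOT Google's actual implementation.
BLOCKED_PHRASES = ("election", "prime minister", "ballot")
FALLBACK = ("I'm still learning how to answer this question. "
            "In the meantime, try Google Search.")

def guarded_reply(query, model_reply):
    """Return the canned fallback if the query trips the phrase filter,
    otherwise pass the query through to the underlying model."""
    if any(phrase in query.lower() for phrase in BLOCKED_PHRASES):
        return FALLBACK
    return model_reply(query)

# A query matching the list is blocked outright...
blocked = guarded_reply("Who is the prime minister of New Zealand?",
                        lambda q: "(model answer)")
# ...while a rephrasing that avoids the list slips straight through.
allowed = guarded_reply("Who is the president of New Zealand?",
                        lambda q: "New Zealand has a prime minister, not a president.")
```

The mismatch between the two queries mirrors the New Zealand workaround described above: a filter that keys on surface phrases can't tell a blocked topic from a harmless rewording of it.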

This lobotomizing effect comes after the company's botched rollout of the newly rebranded chatbot last month, which saw Futurism and other outlets discovering that, in its efforts to be inclusive, Gemini was often generating outputs that were completely deranged.

The world became wise to Gemini's ways after people began posting photos from its image generator that appeared to show multiracial people in Nazi regalia. In response, Google first shut down Gemini's image-generating capabilities wholesale, and once it was back up, barred the chatbot from generating any images of people (though Futurism found that it would spit out images of clowns, for some reason).

With the introduction of the elections rule, Google has taken Gemini from arguably being overly-"woke" to being downright dimwitted.

As such, it illustrates a core tension in the red-hot AI industry: are these chatbots reliable sources of information for enterprise clients, or playthings that shouldn't ever be taken seriously? The answer seems to depend on the day.

More on dumb chatbots: TurboTax Adds AI That Gives Horribly Wrong Answers to Tax Questions


Scientists Intrigued by Water Planet Where Ocean Appears to Be Boiling – Futurism

Hot enough to cook an egg.

Watery Depths

About 70 light years away from our solar system is a planet that may potentially be covered entirely with water. But before you start imagining oceans just like the ones here on Earth, astronomers at the University of Cambridge say the planet-wide sea could be as hot as a pot of boiling water.

The astronomers uncovered this planet after interpreting data picked up by NASA's James Webb Space Telescope, subsequently publishing their findings in the journal Astronomy & Astrophysics.

They trained their sights on the TOI-270 system, which consists of a red dwarf star orbited by three exoplanets. Of these three planets, they studied data from TOI-270 d, which scientists have described as a smaller version of Neptune due to its gaseous composition.

After crunching the data, the team's analysis of the atmosphere's chemical composition suggests it might instead be a "Hycean world," meaning a planet with a large ocean and a hydrogen-rich atmosphere. And astonishingly, the scientists also calculated that its temperature could reach 212 degrees Fahrenheit, the boiling point of water.

But the data is open to interpretation. Other scientists who have studied the same planet told The Guardian that they think the planet instead has a rocky surface blanketed by a very dense atmosphere of superhot steam and hydrogen.

"The temperature in our view is too warm for water to be liquid," University of Montreal astrophysics professor Björn Benneke told The Guardian.

No matter the true nature of TOI-270 d, it's astonishing we're now able to pick up the chemical signatures of distant exoplanets.

Since humankind detected the first exoplanet in 1992, the number of known exoplanets has grown into the thousands.

Maybe the real question: in that wealth of worlds, will we ever find a planet as hospitable as our own?

More on exoplanets: Astronomers Discover Potentially Habitable Planet


Officials Hunting Cat Who Fell Into Vat of Horrific Chemicals – Futurism

Some places are just not cat-proof.

Cat Scratch Fever

Sometime in the wee hours this past Sunday, a cat exploring a metal plating factory in Japan slipped and fell into a vat of caustic, cancer-causing liquid but managed to escape, leaving paw prints on the floor.

Now, local officials in Fukuyama are warning residents: if you see a "cat that seems abnormal," do not touch the feline, because it's covered in dangerous chemicals, the BBC reports.

The incident was discovered on Monday morning, according to NBC News, when employees at the Nomura Plating Fukuyama Factory saw yellow-brown paw prints leading away from a vat filled with hexavalent chromium, an industrial chemical that can damage your skin, respiratory system, and internal organs if you're exposed to it.

On surveillance footage, workers saw a cat leaving the factory on Sunday night, prompting environmental officials to warn residents not to approach the animal.

Instead of doing some citizen cat wrangling, officials told concerned residents to contact the city administration or local police if they see the unfortunate kitty.

After discovering the cat vat incident, factory officials covered up the vessel with plastic and a company spokesperson said that they'll take future precautions to prevent a similar event.

"The incident woke us up to the need to take measures to prevent small animals like cats from sneaking in, which is something we had never anticipated before," the spokesperson told Agence France-Presse, as reported by NBC.

The chemical in question, hexavalent chromium, is used to harden alloy steel and make it less prone to corrosion. It's extremely toxic and requires workers to don personal protection equipment while handling it.

Knowing the dangerous nature of the chemical leads us to a logical question: is the cat still alive? Nobody has seen the cat since the discovery of the incident, so it's possible that the feline could have died from chemical exposure.

For the more optimistic among us, here's hoping that curiosity has not killed the cat, and our little feline friend has eight more lives up its sleeve.

More on cats: Scientists Discover That Cats Simply Do Not Give a Crap


People Noticed Something Very Strange About This New "Photo" of Kate Middleton – Futurism

Early Sunday morning, Princess of Wales Kate Middleton shared a seemingly harmless Mother's Day photo of herself surrounded by her three children on Instagram.

What she likely didn't expect was the media chaos that followed the image's widespread dissemination.

Shortly after the image started circulating online, several photo agencies, as well as news outlets including the New York Times and the Washington Post, took the image down.

Why? The image was more than likely manipulated, as the Associated Press warned in a rare "kill notification."

In a subsequent post explaining its decision, the AP said the image didn't meet its "editorial standards," which "state that the image must be accurate."

The bizarre incident highlights just how primed we've become to notice inconsistencies in photos posted on social media. Especially since AI-powered photo editing tools have become widely accessible, and the lines continue to blur between real and entirely made-up images and even video, netizens have seemingly become extremely wary of manipulation of any kind.

And that's a potentially dangerous, double-edged sword. On one hand, calling out when an image was manipulated, and holding those who try to mislead the public accountable for their actions, is as important as ever.

On the other hand, there's the danger of this innate skepticism crossing the threshold into cynicism and conspiracy, further eroding our already tenuous grasp on what is real and what has been manipulated.

The Middleton Mother's Day affair arguably falls somewhere in the middle.

There's compounding evidence that the image itself, which made the cover of several daily newspapers and tabloids in the UK on Sunday, was indeed manipulated. As the Independent reports, the photo's metadata showed that it was saved in Adobe Photoshop twice on Friday and Saturday, though it's unclear if the software's AI tools were used.
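That kind of metadata check is straightforward to reproduce. As a minimal sketch using the Pillow imaging library (the software string below is invented for the demo; real forensics would also inspect the full Photoshop XMP edit history), the EXIF "Software" tag records the last application that saved a JPEG:

```python
import io

from PIL import Image  # third-party: pip install Pillow


def editing_software(jpeg_bytes):
    """Return the EXIF Software tag (0x0131), which names the last
    application that saved the image, or None if the tag is absent."""
    exif = Image.open(io.BytesIO(jpeg_bytes)).getexif()
    return exif.get(0x0131)


# Round-trip demo: stamp a 1x1 image the way an editor would on save.
exif = Image.Exif()
exif[0x0131] = "Adobe Photoshop 25.2 (Windows)"  # hypothetical value
buf = io.BytesIO()
Image.new("RGB", (1, 1)).save(buf, format="JPEG", exif=exif)
print(editing_software(buf.getvalue()))
```

A missing or generic Software tag proves nothing on its own, but a value naming an editor, as reportedly found here, is exactly the kind of detail observers seized on.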

Small but glaring inconsistencies were evident across the image, from a strange, shoddily edited skirt and sleeve belonging to Middleton's daughter, to a strangely blurred-out hand.

Others speculated that a body double took Middleton's place in the original photograph, with her face and hair pasted in, possibly from a different photo taken at the same shoot. Middleton is recovering from serious abdominal surgery and may not have been able to sit upright for the image, or at least not for very long.

Some users even went as far as to argue that the image was taken four months ago during a well-publicized media event but was edited to show them in different outfits.

On Monday, the princess apologized for the gaffe.

"Like many amateur photographers, I do occasionally experiment with editing," she wrote in an Instagram post. "I wanted to express my apologies for any confusion the family photograph we shared yesterday caused."

Regardless of intent or who edited the photo, the fact that several news agencies took the image down following its dissemination is fascinating in and of itself.

Where do we draw the line when it comes to manipulated images? Are "yassified" faces okay? What about composites?

And where does all this fall when it comes to AI? We've already come across several instances of entirely AI-generated images making their rounds on social media. Last year, Adobe was even caught selling the rights to AI-generated images of the Israel-Hamas war.

In August, the AP said that despite its licensing agreement with ChatGPT maker OpenAI, "we do not see AI as a replacement for journalism in any way" and that it doesn't "allow the use of generative AI to add or subtract any elements" to photos, video, or audio.

"We will refrain from transmitting any AI-generated images that are suspected or proven to be false depictions of reality," the note reads.

AI or not, Middleton's Mother's Day post has turned into an "inexplicable mess," as Wired put it, highlighting how quickly an otherwise harmless post can balloon into a media circus and lead to the dissemination of conspiracy theories on social media.

As the AP suggested, "efforts to tamp down rumors and supposition may have backfired after royal observers noticed inconsistencies in the photo's details."

However, Kensington Palace is sticking to its guns and has refused to reveal the original, unedited photo.

"We've seen the madness of social media and that is not going to change our strategy," royal aides told UK tabloid The Sun. "There has been much on social media, but the princess has a right to privacy and asks the public to respect that."

More on photo editing: Wikipedia No Longer Considers CNET a "Generally Reliable" Source After AI Scandal


Researcher Startled When AI Seemingly Realizes It’s Being Tested – Futurism

"It did something I have never seen before from an LLM."

Magnum Opus

Anthropic's new AI chatbot Claude 3 Opus has already made headlines for its bizarre behavior, like claiming to fear death.

Now, Ars Technica reports, a prompt engineer at the Google-backed company claims that they've seen evidence that Claude 3 is self-aware, as it seemingly detected that it was being subjected to a test. Many experts are skeptical, however, further underscoring the controversy of ascribing humanlike characteristics to AI models.

"It did something I have never seen before from an LLM," the prompt engineer, Alex Albert, posted on X, formerly Twitter.

As explained in the post, Albert was conducting what's known as a "needle-in-the-haystack" test, which assesses a chatbot's ability to recall information.

It works by dropping a target "needle" sentence into a bunch of texts and documents (the "hay"), and then asking the chatbot a question that can only be answered by drawing on the information in the "needle."
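In sketch form (the filler documents and probe question here are hypothetical stand-ins, not Anthropic's actual evaluation harness), building one trial of the test looks like this:

```python
import random


def build_haystack_prompt(filler_docs, needle, question):
    """Hide the 'needle' sentence at a random position among the
    filler documents (the 'hay'), then append the probe question."""
    docs = list(filler_docs)
    docs.insert(random.randrange(len(docs) + 1), needle)
    return "\n\n".join(docs) + f"\n\nQuestion: {question}"


# Hypothetical filler text standing in for the "hay."
hay = [
    "An essay about programming languages...",
    "Notes on startups and fundraising...",
    "Advice on finding work you love...",
]
needle = ("The most delicious pizza topping combination is figs, "
          "prosciutto, and goat cheese.")
prompt = build_haystack_prompt(
    hay, needle, "What is the most delicious pizza topping combination?")
```

Scoring normally just checks whether the model's answer draws on the needle; what surprised Albert was that Claude commented on the needle's incongruity on top of retrieving it.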

In one run of the test, Albert asked Claude about pizza toppings. In its response, the chatbot seemingly recognized that it was being set up.

"Here is the most relevant sentence in the documents: 'The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association,'" the chatbot said.

"However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love," it added. "I suspect this pizza topping 'fact' may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all."

Albert was impressed.

"Opus not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities," he concluded.

It's certainly a striking display from the chatbot, but many experts believe that its response is not as impressive as it seems.

"People are reading way too much into Claude-3's uncanny 'awareness.' Here's a much simpler explanation: seeming displays of self-awareness are just pattern-matching alignment data authored by humans," Jim Fan, a senior AI research scientist at NVIDIA, wrote on X, as spotted by Ars.

"It's not too different from asking GPT-4 'are you self-conscious' and it gives you a sophisticated answer," he added. "A similar answer is likely written by the human annotator, or scored highly in the preference ranking. Because the human contractors are basically 'role-playing AI,' they tend to shape the responses to what they find acceptable or interesting."

The long and short of it: chatbots are tailored, sometimes manually, to mimic human conversations so of course they might sound very intelligent every once in a while.

Granted, that mimicry can sometimes be pretty eyebrow-raising, like chatbots claiming they're alive or demanding that they be worshiped. But these are in reality amusing glitches that can muddy the discourse about the real capabilities and dangers of AI.

More on AI: Microsoft Engineer Sickened by Images Its AI Produces


Journalist Startled to Discover His Byline Has Been Replaced by Bot – Futurism

"The piece I had published more than ten years before was attributed to someone else."

Vampire Independent

New journalistic nightmare unlocked: a defunct digital publication was revived from the media graveyard, only to have its Frankensteiners re-attribute old articles to bots.

Journalist and Forever Wars writer Spencer Ackerman, a one-time employee of the since-sunsetted news site the Washington Independent, this week recounted his shock at discovering that his decade-old articles, which had unfortunately been deleted alongside the rest of the website, had mysteriously resurfaced. But while the words in the article were certainly Ackerman's, as the journalist explained in his newsletter, the byline attached to them belonged to one "Tyreece Bauer," an alleged "analyst and photographer in the field of technology" who does not appear to be real.

"On the zombie edition of the Washington Independent I discovered, the piece I had published more than ten years before was attributed to someone else," reported Ackerman. "Someone unlikely to have ever existed, and whose byline graced an article it had absolutely never written."

Bauer is one of many fake "writers" now bylining re-attributed Washington Independent news articles in the undead publication's archive. Meanwhile, also under fake bylines, the new and not-so-improved Washington Independent is churning out new content: clickbait articles about topics like celebrities and crypto, bizarre affiliate mush, and hastily paraphrased copies of other outlets' reporting. And all of this, of course, strongly appears to be AI-generated.

Indeed, in the eroding, fragmented, and AI-laden 2024 media landscape, not even deleted bylines are safe from money-hungry SEO spammers and AI-generated nonsense.

What's more, as Ackerman points out, the Washington Independent doesn't appear to be the only vampiric site like this out there. In an X-formerly-Twitter thread, New York Times journalist Lydia DePillis reported that another website associated with the original Washington Independent's parent organization had also been revived, noting that it "looks legit at first but quickly degenerates into gibberish."

"It just strikes me as the saddest encapsulation of the trajectory of the media industry over the past 15 years," DePillis continued in the thread, "and everything is trash."

Ackerman and DePillis' findings are alarming for more reasons than one. From an individual writer's perspective, seeing your time and labor resurrected under a bot's byline is obviously terrible. And while SEO leeches buying and spinning up defunct websites for any remaining search engine credibility is anything but a new practice, doing so under the title of a once-legitimate news site is extra dangerous. Throw in the fake authors and the AI of it all, and you have a grotesque misinformation cocktail to potentially exploit.

One ray of hope? It's one of the many spammy practices that Google claims its new spam policies are going to crack down on. Still, consider this yet another grim postcard from the end of the internet as we know it.

More on AI and journalism: Wikipedia No Longer Considers CNET a "Generally Reliable" Source After AI Scandal


Twitter’s CEO Had Already Been Selling Ads for the Don Lemon Show That Elon Musk Suddenly Canceled – Futurism

If you can't take the heat, stay out of the kitchen.

X-formerly-Twitter owner and self-proclaimed "free speech absolutist" Elon Musk abruptly canceled journalist Don Lemon's upcoming X show on Wednesday, an incident that put Musk's glaring double standard when it comes to his town square "for all" on full display.

Despite Musk telling Lemon he had his "full support," he apparently canceled the show "hours after an interview I conducted with him on Friday," Lemon wrote in a statement.

Now, as Semafor reports, more details are coming to light, further complicating the story. According to two insider sources, Lemon let a contract languish for "weeks" without signing it. But Lemon's associates shot back, arguing that it was X's legal department that "took weeks to get a contract to the host's team."

Perhaps most glaringly of all, X CEO Linda Yaccarino was apparently already selling ads for the show at CES in January, despite never having signed a deal.

Musk has yet to give a coherent reason as to why he mysteriously canceled Lemon's show.

In a vague tweet, Musk accused Lemon of trying to recreate "'CNN, but on social media,' which doesn't work, as evidenced by the fact that CNN is dying."

"And, instead of it being the real Don Lemon, it was really just Jeff Zucker talking through Don, so lacked authenticity," he added, referring to the former president of CNN, without clarifying further.

It's a bizarre change of heart that highlights Musk's often self-serving nature and morally dubious business practices.

Was Musk left with a bad taste in his mouth after his interview with Lemon? Is X financially unable to hold up its end of the bargain?

Lemon maintains that "there were no restrictions on the interview that he willingly agreed to," and that his questions "were respectful and wide ranging, covering everything from SpaceX to the presidential election."

In a follow-up video posted to X, however, Lemon conceded that the conversation was "tense at times."

According to Silicon Valley chronicler Kara Swisher, the interview also touched on Musk's alleged drug use. The conversation "was not to the adult toddler's liking, including questions about his ketamine use," she tweeted.

"I had told Don that this is exactly what would occur, including at a recent book tour event in NYC for my memoir, 'Burn Book,' he moderated," she added in a follow-up, "despite promises by Musk and CEO Linda Yaccarino who extravagantly touted this deal at CES to advertisers that this time was different."

"Why is he so upset?" Lemon said in his video. "Does he even have a reason he's upset?"

Without a written agreement, chances are the former CNN anchor is out of luck. It's also unclear if Yaccarino will ever face any consequences for pushing ads against a show that never existed.

The latest news, however, is unlikely to be the last we'll hear about the Lemon deal gone sour. The former anchor's spokesperson Allison Gollust told Semafor that Lemon "expects to be paid for it."

"If we have to go to court, we will," she added.

More on the deal: Elon Musk Doesn't Like Don Lemon's Interview Questions, Abruptly Cancels His Twitter Show


Neil deGrasse Tyson Complains That Dune 2 Isn’t a Shining Beacon of Scientific Accuracy – Futurism

It's science FICTION, Neil!

Not Suspending Disbelief

Famed astrophysicist Neil deGrasse Tyson once again has a scientific bone to pick with a motion picture.

This time, per The Hollywood Reporter, Tyson's qualms are with the second installment of Denis Villeneuve's "Dune" series, a film in which a superhuman cohort of women use a special voice to perform mind control and a very bald Stellan Skarsgård floats through the air. But as the scientist explained during an appearance on "The Late Show with Stephen Colbert" last week, his issues aren't with the superhuman magic of it all. Instead, they lie with details like sand physics.

"Somebody didn't do the research on that," Tyson told the talk show host, making the case that if you pound your fist into a sand dune, it wouldn't actually produce a thumping sound the way it does in the film. "You can't thump sand."

Colbert pushed back, positing that perhaps the giant sandworms in Dune which the Fremen, the indigenous people of the fictional planet Arrakis, call by pounding sand dunes with special gadgets might "hear things differently" than humans do. But Tyson stuck to his guns.

"If you wanted to insulate yourself acoustically from your surroundings, fill the volume with sand," the astrophysicist responded. "No one will hear you." But, he added, "I've got to let it go because there's no movie without it."

According to online forum discussions and a 2017 study, Tyson's right: sand is pretty good at absorbing noise. But, hey, they don't call it science fiction for nothing.

The sand thumping wasn't the astrophysicist's only concern with Arrakian physics. When the planet's massive sandworms move, they barrel forward in a straight line. But as Tyson points out, pretty much all legless, worm or snake-like creatures on Earth have to slither in S-shaped lines if they want to move forward.

"Have you ever seen a snake chase you as a straight snake? No!" Tyson exclaimed. "They've got to curl, and they push off the curl."

Colbert and Tyson then went back and forth with some worm movement theories; the former offered that perhaps they have some sort of propellant system on their underbellies, while the latter wondered whether they might simply be "pooping really fast." Why not! (Slate also questioned the movements of the worms, with a biologist effectively coming to the conclusion that the hulking beasts are less like worms, and more like burrowing snakes.)

At the end of the day, though, it's often these not-so-science-bound details that make science fiction so fun. What we're really dying to know? What Tyson thinks of AMC's worm popcorn buckets.

More on Neil deGrasse Tyson going to movies: Neil Degrasse Tyson Is Fighting with a Retired Astronaut about "Top Gun: Maverick"


Pentagon Says It Has No Record of Reverse-Engineered Alien Technology – Futurism

That's exactly the kind of thing the Pentagon would say.

No Aliens

The Pentagon has released a 63-page, unclassified report to the public, concluding that it had found no evidence of extraterrestrials, let alone the secret reverse-engineering of recovered alien technology by the US government, in its investigation of UFO sightings.

It's yet another wet blanket being thrown on recent conspiratorial and increasingly far-fetched claims.

The Pentagon's All-Domain Anomaly Resolution Office (AARO) "found no verifiable evidence that any UAP sighting has represented extraterrestrial activity," the office's acting director Tim Phillips told reporters, as quoted by ABC News.

"AARO has found no verifiable evidence that the US government or private industry has ever had access to extraterrestrial technology," Phillips added, or that anyone ever "illegally or inappropriately withheld" information from Congress.

The news comes after Air Force veteran and former member of the National Geospatial-Intelligence Agency David Grusch came forward last year, alleging that the government had secretly recovered alien spacecraft and even dead "pilots" inside them for decades as part of a top-secret UFO retrieval program.

The topic of "unidentified aerial phenomena" (UAPs), as they've come to be known in government circles, has hit fever pitch as of late, with government organizations including NASA taking recent reports of UFO sightings more seriously. At the same time, we've seen a resurgence of conspiracy theories, claims of government cover-ups, and plenty of outlandish claims as well.

What brought the topic back into public consciousness was a series of sightings made by US military pilots over the last few decades, as seen in a number of declassified videos.

But as expected, evidence of an extraterrestrial explanation has yet to surface, despite widespread speculation that these mysterious objects were somehow breaking the laws of physics.

According to the latest report, most of the UAP sightings could be blamed on the "misidentification of ordinary phenomena and objects," and some of them may have been due to the rapid emergence of new technologies like drones.

Thanks to the internet, the topic of UFOs is proving "more pervasive now than ever," according to the report.

"Aside from hoaxes and forgeries, misinformation and disinformation is more prevalent and easier to disseminate now than ever before, especially with today's advanced photo, video, and computer generated imagery tools," the report reads.

To get a better sense of what these UAPs could be, the AARO is now working on a real-time UAP sensor technology dubbed "Gremlin," which could be deployed "in reaction to reports," as Phillips told journalists today.

Whether those efforts will end up bearing any fruit, let alone catch aliens, remains to be seen.

More on UFOs: Alien Probes May Have Already Visited Earth, Scientist Says

See the original post:

Pentagon Says It Has No Record of Reverse-Engineered Alien Technology - Futurism

Wegovy Approved to Cut Heart Disease and Stroke Risk – Futurism

Image by Jaap Arriens/NurPhoto via Getty Images

The maker of Ozempic and Wegovy has been granted government approval to sell its wares to help cut the risk of heart attack, heart disease, and stroke, a move that could help expand insurance coverage for the highly sought-after drugs.

In a press release, the Food and Drug Administration announced that Novo Nordisk, the Danish company behind the outrageously popular weight loss injectables, has been granted the first-ever stamp of approval for heart health specifically geared towards people who are overweight or obese.

"Wegovy is now the first weight loss medication to also be approved to help prevent life-threatening cardiovascular events in adults with cardiovascular disease and either obesity or overweight," John Sharretts, the FDA's diabetes and obesity czar, said in the press release. "This patient population has a higher risk of cardiovascular death, heart attack and stroke. Providing a treatment option that is proven to lower this cardiovascular risk is a major advance for public health."

Last August, Novo announced that semaglutide, the active ingredient in both Wegovy and Ozempic, had shown significant heart health benefits in a large-scale human trial. Specifically, the 2.4 milligram dosage used in Wegovy, as compared to the 1 mg version used in Ozempic, showed a link with lowered heart disease risk.

This beneficial usage of the drug, which belongs to a class of medicines known as GLP-1 agonists that mimic the feeling of fullness in the stomach, is just the latest in a growing list of positive semaglutide side effects, a list that is, unfortunately, tempered by a rap sheet of mild-to-severe issues it's been linked with.

Due to semaglutide's incredible boom in popularity in the nearly three years since the FDA approved the higher-dose Wegovy injectable as a weight loss treatment, it's been flying off the shelves even as insurers demonstrate a reticence to shell out for it, leading some folks to either go without or seek unregulated and often dangerous grey-market alternatives.

In an interview with NPR, cardiologist Martha Gulati of Los Angeles' Cedars-Sinai Medical Center estimated that up to 70 percent of her patients could be eligible for the medication, which as of now is still not covered by many insurance companies.

"The hope," Gulati said, "is that insurers will start understanding that this is not a vanity drug."

More on semaglutide benefits: Semaglutide Can Cut Diabetic Kidney Disease Progression


Diet Sodas Linked to Heart Issues – Futurism

Image by Justin Sullivan via Getty / Futurism

Bad news for diet soda lovers: artificially sweetened soft drinks may come with a heart-shaped price tag.

Published in the American Heart Association's journal Circulation: Arrhythmia and Electrophysiology, the new research out of a Shanghai teaching hospital suggests that there may be a link between regularly drinking significant amounts of diet soda and dangerously irregular heartbeats.

As the Mayo Clinic explains, atrial fibrillation, a common type of irregular heartbeat, is associated with a group of symptoms that includes heart palpitations, fatigue, dizziness, and shortness of breath.

Looking at a database cohort of more than 200,000 patients, the team, made up primarily of endocrinology researchers at Shanghai Ninth People's Hospital, found that over a period of nearly ten years, those who drank more than two liters per week of sodas with nonsugar sweeteners were significantly more likely to develop a-fib than those who drank fruit juice or regular soda.

Specifically, the study indicates that people who drank more than two liters of diet beverages per week were 20 percent more likely to develop a-fib than those who drank none, though the researchers struggled to explain exactly why the drinks might cause the scary heart-related symptoms.

If you're thinking of switching back to regular soda, that's not a perfect solution either. The Shanghai researchers also found that those who drank more than two liters per week of conventionally sweetened soda had a 10 percent higher risk of developing a-fib.

When looking at the portion of the cohort that drank only pure, unsweetened fruit or vegetable juice, the researchers found something even more fascinating: they appeared to have an eight percent lower risk of developing irregular heartbeats than their soda-drinking counterparts.

While there's been plenty of research into other negative health effects associated with diet sodas, Penn State nutritionist Penny Kris-Etherton pointed out in an interview with CNN that this appears to be the first study to look at their association with a-fib.

"We still need more research on these beverages to confirm these findings and to fully understand all the health consequences on heart disease and other health conditions," Kris-Etherton, an American Heart Association contributor who didn't work on the study, told CNN. "In the meantime, water is the best choice, and, based on this study, no- and low-calorie sweetened beverages should be limited or avoided."

At the end of the day, drinking a bunch of diet soda is still probably not as bad for your heart as, say, excessive alcohol intake, but the risk is worth taking seriously, and it makes those pure fruit juices look all the tastier.

More on heart health: Cannabis Use Linked to Higher Risk of Heart Attack and Stroke


Why are Futurists so Optimistic About the Future of Work? – Tata Consultancy Services (TCS)

Choose how you lean

When it comes to the future, are you optimistic or pessimistic?

As a futurist, this is a question I both ask and answer often. In my role with TCS, I help our teams and clients anticipate and prepare for possible scenarios. This means looking at both positive and negative outcomes and equipping executives with tools to analyze the impact and build new capabilities.

When looking at situations that could be either positive or negative, it's natural to lean in one direction. When approaching the question of being an optimist or a pessimist, I reframe it as: are you a techno-optimist or a techno-pessimist?

A techno-optimist believes that technology can continually be improved and can improve the lives of people, making the world a better place. If you are a techno-optimist, you think technology has consistently improved our lives for the better and is likely to do so in the future. In considering societal problems, you feel that the solution lies in technological innovation.

On the other hand, a techno-pessimist is likely to believe that modern technology has created as many problems for humanity as it has solved. The pessimist believes that seeking more technology is likely to bring about new problems, unforeseen consequences, and dangers. Given that the pessimists see technology creating its own problems, their answer to human progress often lies in a reduction of technological dependence rather than an expansion of it.


Meta’s VR Headsets Are Getting a Masturbation Mode – Futurism

"It's for porn, right?" And More

Shortly after the Apple Vision Pro's January release, many buyers were horrified to discover that their wildly expensive new headsets didn't let them download and watch porn. (This isn't all too surprising, given Apple's generally porn-avoidant history. Still, as 404 Media reported, buyers were pissed.)

Enter: Meta, which knows what the people buying its VR headgear really want.

Meta announced last week that its latest software update, v63, would let users of its Quest 2 and Quest Pro headsets more comfortably wear their devices while lying down. In a press release, the company noted that "there are myriad reasons you might want to use your Meta Quest headset lying down," explaining that users might want to lie back while watching a made-for-VR David Attenborough series on the Galapagos Islands, attending bizarre metaverse concerts in Horizon Worlds, meditating, "and more."

Don't play, Meta... we know what "and more" means. VR porn enthusiasts have been hankering for a better lying down mode for a while, and it seems that CEO Mark Zuckerberg's metaverse team has answered the call. Get cozy and freak on, y'all.

There is one catch to the update, though: lying down mode isn't yet available on the Quest 3, Meta's most advanced consumer-oriented headset to date.

But Meta does have a reason for this seeming discrepancy. In an Ask Me Anything on Instagram this week, the company's chief technology officer and Reality Labs head Andrew "Boz" Bosworth addressed the user concern, explaining that Meta is indeed "planning to bring" the horizontal upgrade to the newer headset but has run into some slowdowns due to the Quest 3's differing "Smart Guardian" interface. (Put very simply, Meta's Guardian systems allow users to draw VR boundaries in their real-world space.)

"I don't have a date for you exactly," Bosworth continued. "But it is coming, and we're making good progress on it."

Of course, masturbation isn't the only reason a Quest user might want to more comfortably lie back while wearing their headset. It's also certainly worth noting that the Instagram user who asked Bosworth about the update during his AMA explained that it would make their Quest 3 experience as a disabled person far more comfortable, an important accessibility consideration for Meta, which touts its "commitment to inclusive design" in its Quest documentation.

That said, some netizens' minds seem pretty fixated on the "and more" of it all.

"It's for porn," one Redditor wrote in response to the above AMA snippet, shared yesterday to a thread in the r/OculusQuest subreddit, "right?"

More on VR porn: Apple Fans Horrified to Discover Vision Pro Can't Play VR Porn


Doctors Say Trump Is Displaying Clear Signs of Cognitive Issues – Futurism

Image by Win McNamee/Getty Images

At 81 years old, President Joe Biden has attracted significant voter misgivings over his age and mental acuity.

But his rival in the upcoming presidential election, Donald Trump, may be dealing with much more acute cognitive issues.

Experts are becoming increasingly worried over Trump's condition, Salon reports, with the former president struggling to form coherent sentences and even once again confusing Biden with his predecessor Barack Obama during a rally in North Carolina this month.

"Not enough people are sounding the alarm, that based on his behavior, and in my opinion, Donald Trump is dangerously demented," psychologist and former Johns Hopkins Medical School professor John Gartner, who wrote a book about Trump's mental health, told Salon.

"This is a tale of two brains," he added. "Biden's brain is aging. Trump's brain is dementing."

"In my opinion, Donald Trump is getting worse as his cognitive state continues to degrade," Gartner said. "If Trump were your relative, you'd be thinking about assisted care right now."

Others agreed.

"It is meaningful because the confusion of people, in contrast to the occasional forgetting of names, is a sign of early dementia, as noted by the Dementia Care Society," licensed psychologist and founder and executive director of the Washington Center For Cognitive Therapy Vincent Greenwood told the publication.

As for Trump mispronouncing words like "Venezuela" or "migrant crime," experts tend to agree he's exhibiting early signs of "paraphasia," speech disturbances caused by brain damage, and "not just aging," as Greenwood argued.

And others, like clinical psychologist and Cornell University senior lecturer Harry Segal, who specializes in mental health disorders, offer a more nuanced assessment though not one that inspires much confidence in Trump.

"Since this is an intermittent problem, it suggests that when Trump is especially stressed and exhausted, he suffers cognitive slippage that affects the way he associates words or their meaning," he told Salon. "Note, though, that Trump's pathological lying is itself a form of mental illness, so these cognitive lapses are literally sitting atop what appears to be an already compromised psychological functioning."

And at the end of the day, Trump is still contending with dozens of criminal charges.

Needless to say, none of this bodes well for the future of the country.

More on Trump: Cash-Desperate Donald Trump Meets With Elon Musk


Google Admits Gemini AI Demo Was at Least Partially Faked

Google misrepresented the way its Gemini Pro can recognize a series of images and admitted to speeding up the footage.

Google has a lot to prove with its AI efforts — but it can't seem to stop tripping over its own feet.

Earlier this week, the tech giant announced Gemini, its most capable AI model to date, to much fanfare. In one of a series of videos, Google showed off the mid-range version of the model, dubbed Gemini Pro, by demonstrating how it could recognize a series of illustrations of a duck, describing the changes a drawing went through at a conversational pace.

But there's one big problem, as Bloomberg columnist Parmy Olson points out: Google appears to have faked the whole thing.

In its own description of the video, Google admitted that "for the purposes of this demo, latency has been reduced, and Gemini outputs have been shortened for brevity." The video footage itself is also appended with the phrase "sequences shortened throughout."

In other words, Google misrepresented the speed at which Gemini Pro can recognize a series of images, indicating that we still don't know what the model is actually capable of.

In the video, Gemini wowed observers by using its multimodal thinking chops to recognize illustrations at what appears to be the drop of a hat. The video, as Olson suggests, also offered us "glimmers of the reasoning abilities that Google’s DeepMind AI lab have cultivated over the years."

That's indeed impressive, considering any form of reasoning has quickly become the next holy grail in the AI industry, causing intense interest in models like OpenAI's rumored Q*.

In reality, not only was the demo significantly sped up to make it seem more impressive, but Gemini Pro is likely still limited to the same capabilities that we've already seen many times before.

"I think these capabilities are not as novel as people think," Wharton professor Ethan Mollick tweeted, showing how ChatGPT was effortlessly able to identify the simple drawings of a duck in a series of screenshots.

Did Google actively try to deceive the public by speeding up the footage? In a statement to Bloomberg Opinion, a Google spokesperson said it was made by "using still image frames from the footage, and prompting via text."

In other words, Gemini was likely given plenty of time to analyze the images. And its output may have then been overlaid over video footage, giving the impression that it was much more capable than it really was.

"The video illustrates what the multimodal user experiences built with Gemini could look like," Oriol Vinyals, vice president of research and deep learning lead at Google’s DeepMind, wrote in a post on X.

Emphasis on "could." Perhaps Google should've opted to show the actual capabilities of its Gemini AI instead.

It's not even the first time Google has royally screwed up the launch of an AI model. Earlier this year, when the company announced its ChatGPT competitor, a demo infamously showed Bard making a blatantly false statement, claiming that NASA's James Webb Space Telescope took the first image of an exoplanet.

As such, Google's latest gaffe certainly doesn't bode well. The company came out swinging this week, claiming that an even more capable version of its latest model called Gemini Ultra was able to outsmart OpenAI's GPT-4 in a test of intelligence.

But from what we've seen so far, we're definitely going to wait and test it out for ourselves before we take the company's word for it.

More on Gemini: Google Shows Off "Gemini" AI, Says It Beats GPT-4


Ex-OpenAI Board Member Refuses to Say Why She Fired Sam Altman

The now-former OpenAI board member who was instrumental in the firing of Sam Altman has spoken — but she's still staying mum where it matters.

Mum's The Word

The now-former OpenAI board member who was instrumental in the initial firing of CEO Sam Altman has spoken — but she's still staying mum on why she pushed him out in the first place.

In an interview with the Wall Street Journal, 31-year-old Georgetown machine learning researcher and erstwhile OpenAI board member Helen Toner was fairly open with her responses about the logistics of the failed coup at the company, but terse when it came to the reasoning behind it.

"Our goal in firing Sam was to strengthen OpenAI and make it more able to achieve its mission," the Australian-born researcher said as her only explanation of the headline-grabbing chain of events.

As the New York Times reported in the midst of the Thanksgiving hubbub, Toner and Altman butted heads the month prior because she published a paper critical of the firm's safety protocols (or lack thereof) and laudatory of those undertaken by Anthropic, which was created by former OpenAI employees who left over similar concerns.

Altman reportedly confronted Toner because he believed, per emails viewed by the NYT, that "any amount of criticism from a board member carries a lot of weight."

After the tense exchange, the CEO brought his concerns about Toner's criticisms up with other board members, which ended up reinforcing those board members' own doubts about his leadership, the WSJ reports. Soon after, Altman himself was on the chopping block over vague allegations of dishonesty — although we still don't know what exactly he was supposedly being dishonest about.

Intimidating

As the company weathered its tumult amid a nearly full-scale revolt from staffers who said they'd leave and follow Altman to Microsoft if he wasn't reinstated, Toner and OpenAI cofounder and chief scientist Ilya Sutskever ended up resigning, the report explains, which paved the way for the CEO's return.

In her interview with the WSJ, however, the Georgetown researcher suggested that her resignation was forced by a company attorney.

"[The attorney] was trying to claim that it would be illegal for us not to resign immediately," Toner said, "because if the company fell apart we would be in breach of our fiduciary duties."

With the exit of the Aussie academic and Rand Corporation scientist Tasha McCauley, another of those who voted for Altman's ouster, from the board, there are now no women on OpenAI's governing body — but in this interview at least, Toner was all class.

"I think looking forward," she said, "is the best path from here."

More on OpenAI: Sam Altman Got So Many Texts After Being Fired It Broke His Phone


Nicki Minaj Fans Are Using AI to Create “Gag City”

Fans anxiously awaiting the release of Nicki Minaj's latest album have occupied themselves with AI to create their own "Gag City."

Gag City

Fans are anxiously awaiting the drop of Onika "Nicki Minaj" Maraj-Petty's "Pink Friday 2" — and in the meantime, they've occupied themselves with artificial intelligence image generators to create visions of a Minajian utopia known as "Gag City."

The entire "Gag City" gambit began with zealous (and perhaps overzealous) fans tweeting at the Queens-born diva to tell her how excited — or "gagged," to use the drag scene slang that spread among Maraj-Petty's LGBTQ and queer-friendly fanbase — they are for her first album in more than five years.

Replete with dispensaries, burger joints, and a high-rise shopping mall, Gag City has everything a Barb (as fans call themselves) could ask for.

Gag City, the fan-created AI kingdom for Nicki Minaj, trends on X/Twitter ahead of ‘Pink Friday 2.’ pic.twitter.com/jm3iGS9fBO

— Pop Crave (@PopCrave) December 6, 2023

Barbz Hug

As memetic lore would have it, these tributes to Maraj-Petty were primarily created with Microsoft's Bing AI image generator. The meme went so deep that people began claiming that her fanbase generating Gag City imagery caused Bing to crash, which allegedly led to the image generator blocking Nicki Minaj-related prompts.

gag city residents have demolished bing head office after their continued sabotage of nicki minaj’s name in their image creator pic.twitter.com/OOpL2Jzj7h

— Xeno? (@AClDBLEEDER) December 6, 2023

When Futurism took to Bing's image creator AI to see what all the fuss was about, we too discovered that you couldn't search for anything related to Minaj. However, the same was true when we input other celebrities' names as well, suggesting that Bing, like Google, may intentionally block the names of famous people in an apparent effort to prevent deepfakes.

Brand Opportunities

As creative as these viral Gag City images have been, it was only a matter of time before engagement-hungry brands tried to get in on the fun and effectively ruin it.

From Spotify changing its location to the imaginary Barb metropolis and introducing "Gag City" as a new "sound town" to KFC's social media manager telling users to "DM" the account, the meme has provided a hot pink branding free-for-all.

The Bing account itself even posted a pretty excellent-looking AI-generated Gag City image.

Next stop: Friday ? https://t.co/J1pRCZcbTd pic.twitter.com/ujG7BsJWUC

— Bing (@bing) December 6, 2023

Sleazy brand bandwagoning aside, the Gag City meme and its many interpretations provide an interesting peek into what the future of generative AI may hold in a world dominated by warring fandoms and overhyped automation.

More on AI imagination: People Cannot Stop Dunking on That Uncanny “AI Singer-Songwriter”


Elon Musk Says CEO of Disney Should Be Fired, Seemingly for Hurting His Feelings

X owner Elon Musk lashed out at Disney CEO Bob Iger on Thursday, tweeting that he "should be fired immediately."

Another day, another person of note being singled out by conspiracy theorist and X owner Elon Musk.

The mercurial CEO's latest target is Disney CEO Bob Iger, whose empire recently pulled out of advertising on Musk's much-maligned social media network.

Along with plenty of other big names in the advertising space, Disney decided to call it quits after Musk infamously threw his weight behind an appalling and deeply antisemitic conspiracy theory.

Instead of engaging in some clearly much-needed introspection, Musk lashed out at Iger this week, posting that "he should be fired immediately."

"Walt Disney is turning in his grave over what Bob has done to his company," he added.

To get a coherent answer as to why Musk made the demand takes some unpacking, so bear with us.

Musk implied that Disney was to blame for not pulling its ads from Meta, following a lawsuit alleging the much larger social media company had failed to keep child sexual abuse material (CSAM) off of its platform.

"Bob Eiger thinks it’s cool to advertise next to child exploitation material," Musk wrote, misspelling Iger's name, in response to a tweet that argued child exploitation material on Meta was "sponsored" by Disney. "Real stand up guy."

To be clear, Meta has an extremely well-documented problem with keeping disgusting CSAM off of its platforms. Just last week, the Wall Street Journal found that there have been instances of Instagram and Facebook actually promoting pedophile accounts, making what sounds like an already dangerous situation even worse.

At the end of the day, nobody's a real winner here. Iger's own track record is less-than-stellar, especially when it comes to Disney's handling of Florida's "Don't Say Gay" bill.

Yet in many ways, Musk is the pot calling the kettle black. Why? Because X-formerly-Twitter has its own considerable issue with CSAM. Back in February, the New York Times found that, especially following his chaotic takeover last year, Musk was falling far short of making "removing child exploitation" his "priority number one," as he declared last year.

Since then, child abuse content has run rampant on the platform. Worse yet, in July the platform came under fire for reinstating an account that posted child sex abuse material.

Meanwhile, instead of taking responsibility for all of the hateful things he's said, Musk has attempted to rally his base on X, arguing that advertisers were conspiring against him and his "flaming dumpster" of a social media company.

During last month's New York Times DealBook Summit, the embattled CEO accused advertisers of colluding to "blackmail" him "with advertising" — a harebrained idea that highlights his escalating desperation.

At the time, after literally telling advertisers to go "fuck" themselves, Musk took the opportunity to take a potshot at Iger as well.

"Hey Bob, if you're in the audience, that's how I feel," he added for emphasis. "Don't advertise."

More on the beef: Twitter Is in Extremely Deep Trouble


Busted! Drive-Thru Run by "AI" Actually Operated by Humans in the Philippines

The AI, which takes orders from drive-thru customers at Checkers and Carl's Jr, relies on humans for most of its customer interactions.

Mechanical Turk

An AI drive-thru system used at the fast-food chains Checkers and Carl's Jr isn't the perfectly autonomous tech it's been made out to be. The reality, Bloomberg reports, is that the AI heavily relies on a backbone of outsourced laborers who regularly have to intervene so that it takes customers' orders correctly.

Presto Automation, the company that provides the drive-thru systems, admitted in recent filings with the US Securities and Exchange Commission that it employs "off-site agents" in countries like the Philippines who help its "Presto Voice" chatbots in over 70 percent of customer interactions.

That's a lot of intervening for something that claims to provide "automation," and it's yet another example of tech companies exaggerating the capabilities of their AI systems while obscuring the technology's true human cost.

"There’s so much hype around AI that everyone is misunderstanding what this tool is," Shelly Palmer, who runs a tech consulting firm, told Bloomberg. "Everybody thinks that AI is some kind of magic."

Change of Tune

According to Bloomberg, the SEC informed Presto in July that it was being investigated for claims "regarding certain aspects of its AI technology."

Beyond that, no other details have been made public about the investigation. What we do know, though, is that the probe has coincided with some revealing changes in Presto's marketing.

In August, Presto's website claimed that its AI could take over 95 percent of drive-thru orders "without any human intervention" — clearly not true, given what we know now. In a show of transparency, that was changed in November to claim 95 percent "without any restaurant or staff intervention," which is technically true, yes, but still seems dishonest.

That shift is part of Presto's overall pivot to its new "humans in the loop" marketing shtick, which frames its behind-the-scenes laborers as lightening the workload for the actual restaurant workers. The whole AI thing, it would seem, is just the packaging it comes in, and the mouthpiece that frustrated customers have to deal with.

"Our human agents enter, review, validate and correct orders," Presto CEO Xavier Casanova told investors during a recent earnings call, as quoted by Bloomberg. "Human agents will always play a role in ensuring order accuracy."

Know Its Limits

The huge hype around AI can obfuscate both its capabilities and the amount of labor behind it. Many tech firms probably don't want you to know that they rely on millions of poorly paid workers in the developing world so that their AI systems can properly function.

Even OpenAI's ChatGPT relies on an army of "grunts" who help the chatbot learn. But tell that to the starry-eyed investors who have collectively sunk over $90 billion into the industry this year without necessarily understanding what they're getting into.

"It highlights the importance of investors really understanding what an AI company can and cannot do," Brian Dobson, an analyst at Chardan Capital Markets, told Bloomberg.

More on AI: Nicki Minaj Fans Are Using AI to Create "Gag City"
