Brad Parscale helped Trump win in 2016 using Facebook ads. Now he’s back, and an AI evangelist – Yahoo! Voices

FORT LAUDERDALE, Fla. (AP) – Donald Trump's former campaign manager looked squarely into the camera and promised his viewers they were about to witness a bold new era in politics.

"You're going to see some of the most amazing new technology in artificial intelligence that's going to replace polling in the future across the country," said Brad Parscale in a dimly lit promotional video accentuated by hypnotic beats.

Parscale, the digital campaign operative who helped engineer Trump's 2016 presidential victory, vows that his new, AI-powered platform will dramatically overhaul not just polling, but campaigning. His AI-powered tools, he has boasted, will outperform big tech companies and usher in a wave of conservative victories worldwide.

It's not the first time Parscale has proclaimed that new technologies will boost right-wing campaigns. He was the digital guru who teamed up with scandal-plagued Cambridge Analytica and helped propel Trump to the White House eight years ago. In 2020, he had a public blowup, then a private falling out with his old boss after the Capitol riot. Now he's back, playing an under-the-radar role to help Trump, the presumptive GOP nominee, in his race against Democratic President Joe Biden.

Parscale says his company, Campaign Nucleus, can use AI to help generate customized emails, parse oceans of data to gauge voter sentiment and find persuadable voters, then amplify the social media posts of "anti-woke" influencers, according to an Associated Press review of Parscale's public statements, his company websites, slide decks, marketing materials and other documents not previously made public.

Since last year, Campaign Nucleus and other Parscale-linked companies have been paid more than $2.2 million by the Trump campaign, the Republican National Committee and their related political action and fundraising committees, campaign finance records show.

While his firms have received only a small piece of Trump's total digital spending, Parscale remains close to top Republicans, as well as senior officials at the campaign and at the RNC, according to a GOP operative familiar with Parscale's role who spoke on condition of anonymity to discuss internal dynamics.

Lara Trump, the RNC's new co-chair and Trump's daughter-in-law, once worked as a consultant to a company co-owned by Parscale. And U.S. House Speaker Mike Johnson's campaign recently hired Campaign Nucleus, campaign finance records show.

Parscale, however, is not involved in day-to-day Trump campaign operations, the GOP operative said.

Parscale's ability to use AI to microtarget supporters and tap them for campaign cash could prove critical for Trump's campaign and other fundraising organizations. They have seen a falloff in contributions from smaller donors and a surge in spending – at least $77 million so far – on attorneys defending the former president in a slew of criminal and civil cases.

Beyond Trump, Parscale has said he's harnessed AI to supercharge conservative candidates and causes across the globe, including in Israel, the Balkans and Brazil.

NEW AI-POWERED CAMPAIGN TOOLS

Parscale is hardly alone in using machine learning to try to give candidates an edge by predicting, pinpointing and motivating likely supporters to vote and donate money. Politicians at all levels are experimenting with chatbots and other generative AI tools to write speeches, ad copy and fundraising appeals.

Some Democrats have voiced concern over being outmaneuvered by Republicans on AI, much like they were on social media advertising eight years ago. So far, the Biden campaign and other Democrats said they are using AI to help them find and motivate voters and to better identify and defeat disinformation.

Election experts say they are concerned about AI's potential to upend elections around the world through convincing deepfakes and other content that could mislead voters. Free and low-cost generative AI services have grown in sophistication, and officials worry they can be used to smear a candidate or steer voters to avoid the polls, eroding the public's trust in what they see and hear.

Parscale has the financial backing to experiment to see what works in ways that other AI evangelists may not. That is thanks, in part, to his association with an evangelical Texas billionaire who is among the state's most influential political donors.

Parscale did not respond to multiple messages from AP seeking comment. The RNC declined comment as well.

'AI IS SO SCARY'

Trump has called artificial intelligence "so scary" and "dangerous." His campaign, which has shied away from highlighting Parscale's role, said in an emailed statement that it did "not engage or utilize tools supplied by any AI company."

The campaign uses "a set of proprietary algorithmic tools, like many other campaigns across the country, to help deliver emails more efficiently and prevent sign-up lists from being populated by false information," said campaign spokesman Steven Cheung.

While political consultants often hype their tactics to land new contracts, they can also be intensely secretive about the details of that work to avoid assisting rivals. That makes it difficult to precisely track how Parscale is deploying AI for the Trump campaign, or more broadly.

Parscale has said Campaign Nucleus can send voters customized emails and use data analytics to predict voters' feelings. The platform can also amplify "anti-woke" influencers who have large followings on social media, according to his company's documents and videos.

Parscale said his company also can use artificial intelligence to create "stunning web pages" in seconds that produce content that looks like it came from a media outlet, according to a presentation he gave last month at a political conference, where he was not advertised in advance as a speaker.

"Empower your team to create their own news," said another slide, according to the presentation viewed by AP.

Soon, Parscale says, his company will deploy an app that harnesses AI to assist campaigns in collecting absentee ballots in the same way DoorDash or Grubhub drivers pick up dinners from restaurants and deliver them to customers.

Chris Wilson, a Republican strategist who recently worked for a super PAC backing Florida Gov. Ron DeSantis' failed presidential bid, said he has seen Campaign Nucleus' platform and was envious of its capabilities and simplicity.

"Somebody could download Nucleus, start working with it and really begin to use it," said Wilson.

Other political consultants, however, called Parscale's AI-infused sales pitch largely a rehash of what campaigns already have mastered through data scraping, ad testing and modeling to predict voter behavior.

"Some of this stuff is just simply not new, it's been around for a long time. The only thing new is that we're just calling it AI," said Amanda Elliott, a GOP digital strategist.

FROM UNKNOWN TO TRUMP CONFIDANT

Parscale, a relatively unknown web designer in San Antonio, got his start working for Trump when he was hired to build a web presence for the business mogul's family business.

That led to a job on the future president's 2016 campaign. He was one of its first hires and spearheaded an ambitious and unorthodox digital initiative that relied on an extensive database of social media accounts and content to target voters with Facebook ads.

"I pretty much used Facebook to get Trump elected in 2016," Parscale said in a 2022 podcast interview.

To better target Facebook users, in particular, the campaign teamed up with Cambridge Analytica, a British data-mining firm bankrolled by Robert Mercer, a wealthy and influential GOP donor. After the election, Cambridge Analytica dissolved, facing investigations over its role in a breach of 87 million Facebook accounts.

Following Trump's surprise win, Parscale's influence grew. He was promoted to manage Trump's reelection bid and enjoyed celebrity status. A towering figure at 6 feet, 8 inches, with a Viking-style beard, Parscale was frequently spotted at campaign rallies taking selfies with Trump supporters and signing autographs.

Parscale was replaced as campaign manager not long after a rally in Tulsa, Oklahoma, drew an unexpectedly small crowd, enraging Trump.

His personal life unraveled, culminating in a standoff with police at his Florida home after his wife reported he had multiple firearms and was threatening to hurt himself. One of the responding officers reported he saw bruising on the arms of Parscale's wife. Parscale complied with a court order to turn in his firearms and was not charged in connection with the incident.

Parscale briefly decided to quit politics and privately expressed regret for associating with Trump after the Jan. 6, 2021, Capitol riot. In a text to a former campaign colleague, he wrote he felt guilty for helping him win in 2016, according to the House committee that investigated the Capitol attack.

His disgust didn't last long. Campaign Nucleus set up Trump's website after Silicon Valley tech companies throttled his access to their platforms.

By the summer of 2022, Parscale had resumed complimenting his old boss on a podcast popular among GOP politicos.

"With President Trump, he really was the guy driving the message. He was the chief strategist of his own political uprising and management," Parscale said. "I think what the family recognized was: I had done everything that really the campaign needs to do."

PARSCALE'S PLATFORM

Trump's 2024 campaign website now links directly to Parscale's company and displays that it's "Powered by Nucleus," as Parscale often refers to his new firm. The campaign and its related political action and campaign committees have paid Campaign Nucleus more than $800,000 since early 2023, according to Federal Election Commission filings.

Two other companies, Dyspatchit Email and Text Services and BCVM Services, are listed on campaign finance records as being located at the same Florida address used by Campaign Nucleus. The firms, which are registered in Delaware and whose ownership is unclear, have received $1.4 million from the Trump campaign and related entities, FEC records show.

When an AP reporter last month visited Campaign Nucleus' small, unmarked office in a tony section of Fort Lauderdale, an employee said she did not know anything about Dyspatchit or BCVM.

"We don't talk to reporters," the employee said.

The three companies have been paid to host websites, send emails, provide fundraising software and provide digital consulting, FEC records show.

Parscale markets Campaign Nucleus as a one-stop shop for conservative candidates who want to automate tasks usually done by campaign workers or volunteers.

The company says it has helped its clients raise $119 million and has sent nearly 14 billion emails on their behalf, according to a promotional video.

At his recent appearance at the political conference, Parscale presented a slide that said Campaign Nucleus had raised three times as much as tech giant Salesforce in head-to-head tests for email fundraising.

Campaign Nucleus specializes in mining information from a politician's supporters, according to a recent presentation slide.

For example, when someone signs up to attend an event, Nucleus uses AI to analyze reams of personal data to assign that person a numerical score. Attendees who have been to past events receive a high score, ranking them as most likely to show up, according to a company video posted online.
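
Campaign Nucleus has not published how its scoring works, but the scoring step described above can be pictured with a minimal sketch like the one below. The feature names, weights, and 0-100 scale are hypothetical assumptions, chosen only to illustrate the idea of ranking likely attendees:

```python
# Hypothetical attendee-scoring sketch; the features and weights are
# illustrative assumptions, not Campaign Nucleus' actual model.
def attendance_score(past_events: int, emails_opened: int, donated: bool) -> float:
    score = 0.0
    score += min(past_events, 5) * 15    # prior attendance weighs heaviest
    score += min(emails_opened, 10) * 2  # engagement with campaign email
    score += 20 if donated else 0        # donors assumed more committed
    return min(score, 100.0)             # clamp to a 0-100 scale

# A repeat attendee who opens email and has donated scores near the top.
print(attendance_score(past_events=3, emails_opened=8, donated=True))  # 81.0
```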

Campaign Nucleus also can track where people who sign up live and can send them customized emails asking for donations or soliciting their help on the campaign, the video shows.

Parscale said two years ago in a podcast that he had received more than 10,000 requests about Campaign Nucleus from nearly every country with a conservative party. More recently, he said his team has been active in multiple countries, including in India and Israel, where he's been "helping over there a lot" with the war with Hamas.

The company says it has offices in Texas, Florida and North Carolina and has been on a recruiting tear. Recent job listings have included U.S. and Latin America-based intelligence analysts to use AI for framing messages and generating content, as well as a marketer to coordinate influencer campaigns.

Campaign Nucleus has also entered into partnerships with other companies with an AI focus. In 2022, the firm announced it was teaming up with Phunware, a Texas-based company that built a cellphone app for Trump's 2020 bid that allowed staff to monitor the movements of his millions of supporters and mobilize their social networks.

Since then, Phunware obtained a patent for what a company official described as "experiential AI" that can locate people's cellphones geographically, predict their travel patterns and influence their consumer behavior.

Phunware did not answer specific questions about the partnership with Nucleus, saying the company's client engagements were confidential.

"However, it is well-known that we developed the 2020 Trump campaign app in collaboration with Campaign Nucleus. We have had discussions with Trump campaign leadership about potentially developing their app for the 2024 election," said spokeswoman Christina Lockwood.

PARSCALE'S VISION

Last year, Parscale bought property in Midland, Texas, in the heart of the nation's highest-producing oil and gas fields. It is also the hometown of Tim Dunn, a billionaire born-again evangelical who is among the state's most influential political donors.

Over the years, the organizations and campaigns Dunn has funded have pushed Texas politics further to the right and driven successful challenges to unseat incumbent Republican officials deemed too centrist.

In April 2023, Dunn invested $5 million in a company called AiAdvertising that once bought one of Parscale's firms under a previous corporate name. The San Antonio-based ad firm also announced that Parscale was joining as a strategic adviser, to be paid $120,000 in stock and a monthly salary of $10,000.

"Boom!" Parscale tweeted. "(AiAdvertising) finally automated the full stack of technologies used in the 2016 election that changed the world."

In June, AiAdvertising added two key national figures to its board: Texas investor Thomas Hicks Jr., former co-chair of the RNC and longtime hunting buddy of Donald Trump Jr., and former GOP congressman Jim Renacci. In December, Dunn also gave $5 million to MAGA Inc., a pro-Trump super PAC and Campaign Nucleus client. And in January, SEC filings show Dunn provided AiAdvertising an additional $2.5 million via his investment company. A company press release said the cash infusion would help it generate "more engaging, higher-impact campaigns."

Dunn declined to comment, although in an October episode of his podcast he elaborated on how his political work is driven by his faith.

"Jesus won't be on the ballot, OK? Now, eventually, he's going to take over the government and we can look forward to that," Dunn told listeners. "In the meanwhile, we're going to have to settle."

In business filings, AiAdvertising said it has developed "AI-created personas" to determine what messages will resonate emotionally with its customers' target audience. Parscale said last year in a promotional video that Campaign Nucleus was using AI models in a similar way.

"We actually understand what the American people want to hear," Parscale said.

AiAdvertising did not respond to messages seeking comment.

Parscale occasionally offers glimpses of the AI future he envisions. Casting himself as an outsider to the Republican establishment, he has said he sees AI as a way to undercut elite Washington consultants, whom he described as "political parasites."

In January, Parscale told a crowd assembled at a grassroots Christian event at a church in Pasadena, California, that their movement needed to have "our own AI, from creative large language models and creative imagery. We need to reach our own audiences with our own distribution, our own email systems, our own texting systems, our own ability to place TV ads, and lastly we need to have our own influencers."

To make his point plain, he turned to a metaphor that relied on a decidedly 19th-century technology.

"We must not rely on any of their rails," he said, referring to mainstream media and companies. "This is building our own train tracks."

-

Burke reported from San Francisco. AP National Political Writer Steve Peoples and Courtney Subramanian in Washington, and Associated Press researcher Rhonda Shafner in New York contributed to this report.

-

This story is part of an Associated Press series, "The AI Campaign," that explores the influence of artificial intelligence in the 2024 election cycle.

-

Contact AP's global investigative team at Investigative@ap.org or https://www.ap.org/tips/

-

The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.


China tops the U.S. on AI research in over half of the hottest fields: report – Axios

Data: Emerging Technology Observatory Map of Science; Chart: Axios Visuals

China leads the U.S. as a top producer of research in more than half of AI's hottest fields, according to new data from Georgetown University's Center for Security and Emerging Technology (CSET) shared first with Axios.

Why it matters: The findings reveal important nuances about the global race between the U.S. and China to lead AI advances and set crucial standards for the technology and how it is used around the world.

Key findings: CSET's Emerging Technology Observatory team found global AI research more than doubled between 2017 and 2022.

Research in robotics grew more slowly than research in vision and natural language processing, increasing by just 54%, and made up about 15% of all AI research.

What they're saying: "The fact that research is growing so quickly, in so many directions, underscores the need for federal investment in basic measurement evaluation on the scientific techniques we need to ensure that AI getting deployed in the real world is safe, secure and understandable," said Arnold. But appropriations for the National Institute of Standards and Technology, which is tasked with identifying those measurements, were recently cut.

The big picture: The top five producers of sheer numbers of AI research papers in the world are Chinese institutions, led by the Chinese Academy of Sciences.

Yes, but: At the country level, the U.S. had the top spot in producing highly cited articles.

"China is absolutely a world leader in AI research, and in many areas, likely the world leader," Arnold said, adding the country is active across a range of research areas, including increasingly fundamental research.

Caveat: The data only accounts for research papers published in English, and doesn't capture scientific work in other languages.

How it works: CSET's Map of Science groups together articles that cite each other often, because they have topics or concepts in common, into clusters of research. (It doesn't mean all papers on LLMs, for example, are in the top cluster. Some may appear in other clusters.)
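
As a rough illustration of that clustering idea (a sketch of the general technique, not CSET's actual pipeline), a citation graph can be grouped into communities with an off-the-shelf algorithm; the paper IDs below are made up:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy citation graph: an edge means one paper cites the other.
citations = [
    ("llm_paper_1", "llm_paper_2"), ("llm_paper_2", "llm_paper_3"),
    ("llm_paper_1", "llm_paper_3"), ("robotics_1", "robotics_2"),
]
graph = nx.Graph(citations)

# Papers that cite each other densely land in the same research cluster.
for i, cluster in enumerate(greedy_modularity_communities(graph)):
    print(f"cluster {i}: {sorted(cluster)}")
```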


Writer Meghan O’Gieblyn on AI, Consciousness, and Creativity – Nautilus

These days, we're inundated with speculation about the future of artificial intelligence, and specifically how AI might take away our jobs, or steal the creative work of writers and artists, or even destroy the human species. The American writer Meghan O'Gieblyn also wonders about these things, and her essays offer pointed inquiries into the philosophical and spiritual underpinnings of this technology. She's steeped in the latest AI developments but is also well-versed in debates about linguistics and the nature of consciousness.

O'Gieblyn also writes about her own struggle to find deeper meaning in her life, which has led her down some unexpected rabbit holes. A former Christian fundamentalist, she later stumbled into transhumanism and, ultimately, plunged into the exploding world of AI. (She currently also writes an advice column for Wired magazine about tech and society.)

When I visited her at her home in Madison, Wisconsin, I was curious if I might see any traces of this unlikely personal odyssey.

I hadn't expected her to pull out a stash of old notebooks filled with her automatic writing, composed while working with a hypnotist. I asked O'Gieblyn if she would read from one of her notebooks, and she picked this passage: "In all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils and the side of the road glaring with their faces undone ..." And so it went – strange, lyrical, and nonsensical – tapping into some part of herself that she didn't know was there.

That led us into a wide-ranging conversation about the unconscious, creativity, the quest for transcendence, and the differences between machine intelligence and the human mind.

Why did you go to a hypnotist and try automatic writing?

I was going through a period of writer's block, which I had never really experienced before. It was during the pandemic. I was working on a book about technology, and I was reading about these new language models. GPT-3 had just been released to researchers, and the algorithmic text was just so wildly creative and poetic.

So you wanted to see if you could do this, without using an AI model?

Yeah, I became really curious about what it means to produce language without consciousness. As my own critical faculty was getting in the way of my creativity, it seemed really appealing to see what it would be like to just write without overthinking everything. I was thinking a lot about the Surrealists and different avant-garde traditions where writers or artists would do exercises either through hypnosis or some sort of random collaborative game. The point was to try to unlock some unconscious creative capacity within you. And it seemed like that was, in a way, what the large language models were doing.

You have an unusual background for a writer about technology. You grew up in a Christian fundamentalist family.

My parents were evangelical Christians. My whole extended family are born again Christians. Everybody I knew growing up believed what we did. I was homeschooled along with all my siblings, so most of our social life revolved around church. When I was 18, I went to Moody Bible Institute in Chicago to study theology. I was planning to go into full-time ministry.

But then you left your faith.

I had a faith crisis when I was in Bible school, which metastasized into a series of doubts about the validity of the Bible and the Christian God. I dropped out of Bible school after two years and pretty much left the faith. I began identifying as agnostic almost right away.

But my sense is you're still extremely interested in questions of transcendence and the spiritual life.

Absolutely. I don't think anyone who grew up in that world ever totally leaves it behind. And my interest in technology grew out of those larger questions. What does it mean to be human? What does it mean to have a soul?

A couple of years after I left Bible school, I read The Age of Spiritual Machines, Ray Kurzweil's book about the singularity and transhumanism. He had this idea that humans could use technology to further our evolution into a new species, what he called post-humanity. It was this incredible vision of transcendence. We were essentially going to become immortal.

There are some similarities to your Christian upbringing.

As a 25-year-old who was just starting to believe that I wasn't going to live forever in heaven, it was incredibly appealing to think that maybe science and technology could bring about a similar transformation. It was a secular form of transcendence. I started wondering: What does it mean to be a self or a thinking mind? Kurzweil was saying our selfhood is basically just a pattern of mental activity that you could upload into digital form.

So Kurzweil's argument was that machines could do anything that the human mind can do, and more.

Essentially. But there was a question that was always elided: Is there going to be some sort of first-person experience? And this comes into play with mind-uploading. If I transform my mind into digital form, am I still going to be me or is it just going to be an empty replica that talks and acts like me, with no subjective experience?

Nobody has a good answer for that because nobody knows what consciousness is. That's what got me really interested in AI, because that's the area in which we're playing out these questions now. What is first-person experience? How is that related to intelligence?

Isn't the assumption that AI has no consciousness or first-person experience? Isn't that the fundamental difference between artificial intelligence and the human mind?

That is definitely the consensus, but how can you prove it? We really don't know what's happening inside these models because they're black-box models. They're neural networks that have many hidden layers. It's a kind of alchemy.

A sophisticated large language model like ChatGPT has accumulated a vast reservoir of language by scraping the internet, but does it have any sense of meaning?

It depends on how you define meaning. That's tricky because meaning is a concept we invented, and the definition is contested. For the past hundred years or so, linguists have determined that meaning depends on embodied reference in the real world. To know what the word "dog" means, you have to have seen a dog and belong to a linguistic community where that has some collective meaning.

Language models don't have access to the real world, so they're using language in a very different way. They're drawing on statistical probabilities to create outputs that sound convincingly human and often appear very intelligent. And some computational linguists say, "Well, that is meaning. You don't need any real-world experience to have meaning."

These language models are constructing sentences that make a lot of sense, but is it just algorithmic wordplay?

Emily Bender and some engineers at Google came up with the term "stochastic parrots." Stochastic refers to a statistical set of probabilities, using a certain amount of randomness, and they're parrots because they're mimicking human speech. These models were trained on an enormous amount of real-world human texts, and they're able to predict what the next word is going to be in a certain context.

To me, that feels very different than how humans use language. We typically use language when we're trying to create meaning with other people.

In that interpretation, the human mind is fundamentally different than AI.

I think it is. But there are people like Sam Altman, the CEO of OpenAI, who famously tweeted, "I am a stochastic parrot, and so r u." There are people creating this technology who believe there's really no difference between how these models use language and how humans use language.

We think we have all these original ideas, but are we just rearranging the chairs on the deck?

I recently asked a computer scientist, "What do you think creativity is?" And he said, "Oh, that's easy. It's just randomness." And if you know how these models work, there is a certain amount of correlation between randomness and creativity. A lot of the models have what's called a temperature gauge. If you turn up the temperature, the output becomes more random and it seems much more creative. My feeling is that there's a certain amount of randomness in human creativity, but I don't think that's all there is.
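
The "temperature gauge" mentioned here is a standard sampling parameter in language models. A minimal sketch of how it reshapes the next-token distribution (the logits below are illustrative numbers, not from any real model):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from model logits, scaled by temperature."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5, -1.0]  # illustrative scores for four candidate tokens
print(sample_next_token(logits, temperature=0.2))  # low temp: almost always token 0
print(sample_next_token(logits, temperature=2.0))  # high temp: more random, more "creative"
```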

As a writer, how do you think about creativity and originality?

I think about modernist writers like James Joyce or Virginia Woolf, who completely changed literature. They created a form of consciousness on the page that felt nothing like what had come before in the history of the novel. That's not just because they randomly recombined everything they had read. The nature of human experience was changing during that time, and they found a way to capture what that felt like. I think creativity has to have that inner subjective quality. It comes back to the idea of meaning, which is created between two minds.

It's commonly assumed that AI has no thinking mind or subjective experience, but how would we even know if these AI models are conscious?

I have no idea. My intuition is that it would have to say something convincing enough to show that it has experience, which includes emotion but also self-awareness. But we've already had instances where the models have spoken in very convincing terms about having an inner life. There was a Google engineer, Blake Lemoine, who was convinced that the chatbot he was working on was sentient. This is going to be fiercely debated.

A lot of these chatbots do seem to have self-awareness.

They're designed to appear that way. There's been so much money poured into emotional AI. This is a whole subfield of AI: creating chatbots that can convincingly emote and respond to human emotion. It's about maximizing engagement with the technology.

Do you think a very advanced AI would have godlike capacities? Will machines become so sophisticated that we can't distinguish between them and more conventional religious ideas of God?

That's certainly the goal for a lot of people developing this technology. Sam Altman, Elon Musk: they've all absorbed the Kurzweil idea of the singularity. They are essentially trying to create a god with AGI, artificial general intelligence. It's AI that can do everything we can and surpass human intelligence.

But isn't intelligence, no matter how advanced, different than God?

The thinking is that once it gets to the level of human intelligence, it can start doing what we're doing, modifying and improving itself. At that point it becomes a recursive process where there's going to be some sort of intelligence explosion. This is the belief.

But there's another question: What are we trying to design? If you want to create a tool that helps people solve cancer or find solutions to climate change, you can do that with a very narrowly trained AI. But the fact that we are now working toward artificial general intelligence is different. That's creating something that's essentially going to be like a god.

Why do you think Elon Musk and Sam Altman want to create this?

I think they read a lot of sci-fi as kids. [Laughs] I mean, I don't know. There's something very deeply human in this idea of, "Well, we have this capacity, so we're going to do it." It's scary, though. That's why it's called the singularity. You can't see beyond it. It's an event horizon. Once you create something like that, there's really no way to tell what it will look like until it's in the world.

I do feel like people are trying to create a system that's going to give answers that are difficult to come by through ordinary human thought. That's the main appeal of creating artificial general intelligence. It's some sort of godlike figure that can give us the answers to persistent political conflicts and moral debates.

If it's smart enough, can AI solve the problems that we imperfect humans cannot?

I don't think so. It's similar to what I was looking for in automatic writing, which is a source of meaning that's external to my experience. Life is infinitely complex, and every situation is different. That requires a constant process of meaning-making.

Hannah Arendt talks about thinking and then thinking again. You're constantly making and unmaking thought as you experience the world. Machines are rigid. They're trained on the whole corpus of human history. They're like a mirror, reflecting back to us a lot of our own beliefs. But I don't think they can give us that sense of meaning that we're looking for as humans. That's something that we ultimately have to create for ourselves.

This interview originally aired on Wisconsin Public Radio's nationally syndicated show To the Best of Our Knowledge. You can listen to the full interview with Meghan O'Gieblyn here.

Lead image: lohloh / Shutterstock

Posted on May 2, 2024

Steve Paulson is the executive producer of Wisconsin Public Radio's nationally syndicated show To the Best of Our Knowledge. He's the author of Atoms and Eden: Conversations on Religion and Science. You can find his podcast about psychedelics, Luminous, here.



iOS 18: Here are the new AI features in the works – 9to5Mac

2024 is shaping up to be the Year of AI for Apple, with big updates planned for iOS 18 and more. The rumors and Tim Cook himself make it clear that there are new AI features for Apple's platforms in the works. Here's everything we know about the ways Apple is exploring AI features...

There have been a number of rumors about the various AI features in the works inside Apple. Bloomberg has reported that Apple thinks iOS 18 will be one of the biggest iOS updates ever, headlined by a number of new AI features.

Mark Gurman reported last July that Apple created its own Large Language Model (LLM) system, which has been dubbed Apple GPT. The project uses a framework called Ajax that Apple started building in 2022 to base various machine learning projects on a shared foundation. This Ajax framework will serve as the basis for Apple's forthcoming AI features across all of its platforms.

9to5Mac found evidence of Apple's work on new AI and large language model technology in iOS 17.4. We reported that Apple is relying on OpenAI's ChatGPT API for internal testing to help the development of its own AI models.

Bloomberg has reported that Apple's iOS 18 features will be powered by an entirely on-device large language model, which offers a number of privacy and speed benefits.
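
Apple has not described its internal setup, but for readers unfamiliar with what "relying on the ChatGPT API" means in practice, a call through OpenAI's public Python SDK looks roughly like this (the model name and prompt are placeholders, not anything attributed to Apple):

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Placeholder prompt; Apple's actual internal test harness is not public.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize this notification thread."}],
)
print(response.choices[0].message.content)
```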

Here are some of the rumors about new AI features coming to iOS 18:

Did you know that Apple has actually already launched a number of powerful AI frameworks and models? Here's a recap of those:

During a recent Apple earnings call, Tim Cook offered a rare teaser for a future product announcement. According to Cook, Apple is spending "a tremendous amount of time and effort" on artificial intelligence technologies, and the company is "excited to share the details of our ongoing work in that space later this year."

It's extraordinarily rare for Cook to even remotely hint at Apple's plans for future product announcements. Why did he do it this time? Likely to ease the concerns of investors and analysts worried about Apple falling behind the likes of OpenAI, Google, and Microsoft. Whether the teaser is enough to calm those fears until an actual product announcement materializes remains to be seen.

Also during an earnings call recently, Cook touted the advantages that Apple has which will set its AI apart from the competition:

"We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple's unique combination of seamless hardware, software, and services integration, groundbreaking Apple Silicon with our industry-leading neural engines, and our unwavering focus on privacy, which underpins everything we create."

In a surprising twist, Bloomberg has reported that Apple is in active negotiations with Google about potentially licensing Gemini, which is Google's set of generative AI models. The report explains that Apple is specifically looking to partner on cloud-based generative AI models.

In this scenario, Apple would rely on a partner such as Google for its cloud-based features. Other features would still be powered on-device by Apple's own technology.

The generative AI features under discussion would theoretically be baked into Siri and other apps. New AI capabilities based on Apple's homegrown models, meanwhile, would still be woven into the operating system. They'll be focused on proactively providing users with information and conducting tasks on their behalf in the background, people familiar with the matter said.

While Apple is said to be in active negotiations for this partnership with Google, the company has also reportedly held talks with OpenAI.

In fact, most recently, it was reported that Apple had resumed talks with OpenAI about a partnership. According to reports, Apple would use OpenAI's technology to power an AI-based chatbot in iOS 18.

At this point, the question is which of the many rumors will come to fruition this year.

I'd be surprised if all of these rumored AI features are ready for this year. My assumption is that Apple is working on all of this stuff (and more), but will pare down the final list of features included in iOS 18. Features that don't make the cut will likely come in a later update to iOS 18 or with iOS 19 in 2025.

Apple has officially set WWDC for June 10 this year, and that's where we expect the bulk of its AI announcements to be made.

Where do you want to see Apple direct its attention toward for new AI features this year? Let us know down in the comments.


Podcast: Resisting AI and the Consolidation of Power | TechPolicy.Press – Tech Policy Press

Audio of this conversation is available via your favorite podcast service.

In an introduction to a special issue of the journal First Monday on topics related to AI and power, researchers Jenna Burrell and Jacob Metcalf argue that "what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science." The papers in the journal go on to interrogate the epistemic culture of AI safety, the promise of utopia through artificial general intelligence, how to debunk robot rights, and more.

To learn more about some of the ideas in the special issue, Justin Hendrix spoke to Burrell, Metcalf, and two of the other authors of papers included in it: Shazeda Ahmed and Émile P. Torres.

A transcript of the discussion is forthcoming.


JPMorgan Chase Unveils AI-Powered Tool for Thematic Investing – PYMNTS.com

J.P. Morgan Chase reportedly unveiled an artificial intelligence-powered tool designed to facilitate thematic investing.

The tool, called IndexGPT, delivers thematic investment baskets created with the assistance of OpenAI's GPT-4 model, Bloomberg reported Friday (May 3).

IndexGPT creates these thematic indexes by generating a list of keywords associated with a particular theme that are then analyzed using a natural language processing model that scans news articles to identify companies involved in that space, according to the report.
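
Bloomberg's description suggests a two-step pipeline: ask a language model for theme keywords, then scan news text for companies associated with those keywords. A minimal sketch of that shape follows; the function names, model choice, and naive keyword matching are assumptions for illustration, not J.P. Morgan's actual implementation:

```python
from openai import OpenAI

client = OpenAI()

def theme_keywords(theme: str, n: int = 20) -> list[str]:
    """Ask an LLM for keywords associated with an investment theme."""
    prompt = f"List {n} keywords associated with the investment theme '{theme}', one per line."
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    # Naive parsing of the one-keyword-per-line reply.
    return [ln.strip("-* ") for ln in resp.choices[0].message.content.splitlines() if ln.strip()]

def companies_for_theme(articles: list[dict], keywords: list[str]) -> set[str]:
    """Naive stand-in for the NLP step: flag companies whose coverage mentions theme keywords."""
    matches = set()
    for article in articles:  # each article: {"company": str, "text": str}
        text = article["text"].lower()
        if any(kw.lower() in text for kw in keywords):
            matches.add(article["company"])
    return matches
```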

The tool allows for the selection of "a broader range of stocks, going beyond the obvious choices that are already well-known," Rui Fernandes, J.P. Morgan's head of markets trading structuring, told Bloomberg.

Thematic investing, which focuses on emerging trends rather than traditional industry sectors or company fundamentals, has gained popularity in recent years, the report said.

Thematic funds experienced a surge in popularity in 2020 and 2021, with retail investors spending billions of dollars on products based on various themes. However, interest in these strategies waned amid poor performance and higher interest rates, per the report.

J.P. Morgan Chase's IndexGPT aims to reignite interest in thematic investing by providing a more accurate and efficient approach, according to the report.

While AI has been widely used in the financial industry for functions such as trading, risk management and investment research, the rise of generative AI tools has opened new possibilities for banks and financial institutions, the report said.

Fernandes said he sees IndexGPT as a first step in a long-term process of integrating AI across the bank's index offering, per the report. J.P. Morgan Chase aims to continuously improve its offerings, from equity volatility products to commodity momentum products, "gradually and thoughtfully."

In another deployment of this technology in the investment space, Morgan Stanley said in September that it was launching an AI-powered assistant for financial advisers and their support staff. This tool, the AI @ Morgan Stanley Assistant, facilitates access to 100,000 research reports and documents.

In the venture capital world, AI has become a tool for making savvy investment decisions. VC firms are using the technology to analyze vast amounts of data on startups and market trends, helping the firms identify the most promising opportunities and aiding them in making better-informed decisions about where to allocate their funds.


Microsoft announces significant commitments to enable a cloud and AI-powered future for Thailand – Microsoft Stories … – Microsoft

Microsoft Chairman and CEO Satya Nadella announces a new data center region in Thailand during Microsoft Build: AI Day on May 01, 2024 in Bangkok, Thailand. Photo by Graham Denholm/Getty Images for Microsoft.


Commitments include new cloud and AI infrastructure, AI skilling opportunities, and support for Thailand's growing developer community

Bangkok, May 1, 2024 – Today, Microsoft announced significant commitments to build new cloud and AI infrastructure in Thailand, provide AI skilling opportunities for over 100,000 people, and support the nation's growing developer community.

The commitments build on Microsoft's memorandum of understanding (MoU) with the Royal Thai Government to envision the nation's digital-first, AI-powered future.

Microsoft Chairman and Chief Executive Officer Satya Nadella made the announcement in front of approximately 2,000 developers and business and technology leaders at the Microsoft Build: AI Day in Bangkok on Wednesday. The event was also attended by Thai Prime Minister Srettha Thavisin, who delivered a special address.

"Our Ignite Thailand vision for 2030 aims to achieve the goal of developing the country's stature as a regional digital economy hub that significantly enhances our innovation and R&D capabilities while also strengthening our tech workforce," said Prime Minister Thavisin. "Today's announcement with Microsoft is a significant milestone in the journey of our Ignite Thailand vision, one that promises new opportunities for growth, innovation, and prosperity for all Thais."

"Thailand has an incredible opportunity to build a digital-first, AI-powered future," said Satya Nadella, Chairman and CEO, Microsoft. "Our new datacenter region, along with the investments we are making in cloud and AI infrastructure, as well as AI skilling, build on our long-standing commitment to the country and will help Thai organizations across the public and private sector drive new impact and growth."

Dhanawat Suthumpun, Managing Director of Microsoft Thailand, said: "Microsoft is dedicated to helping Thailand excel as a digital economy, ensuring that the benefits of cloud and AI technologies are widespread and contribute to the prosperity and wellbeing of Thais. Together, we are laying the foundations for a future that is not only technologically advanced but also inclusive and sustainable."

Growing capacity to thrive in the AI era

Microsoft's digital infrastructure commitment includes establishing a new datacenter region in Thailand. The datacenter region will expand the availability of Microsoft's hyperscale cloud services, facilitating enterprise-grade reliability, performance, and compliance with data residency and privacy standards.

It follows growing demand for cloud computing services in Thailand from enterprises, local businesses, and public sector organizations. It will also allow Thailand to capitalize on the significant economic and productivity opportunities presented by the latest AI technology.

According to research by Kearney, AI could contribute nearly US$1 trillion to Southeast Asias gross domestic product by 2030, with Thailand poised to capture US$117 billion of this amount.

Ensuring a skilled, AI-ready workforce

On Tuesday, Microsoft announced a broader commitment to provide AI skilling opportunities for 2.5 million people in the Association of Southeast Asian Nations (ASEAN) member states by 2025. This training and support will be delivered in partnership with governments, nonprofit and corporate organizations, and communities in Thailand, Indonesia, Malaysia, the Philippines, and Vietnam.

Microsoft's skilling commitment is expected to benefit more than 100,000 individuals in Thailand.

It will enhance the AI proficiency of those involved in the nation's tourism sector through the AI Skills for the AI-enabled Tourism Industry program. The initiative is a partnership between Microsoft and Thailand's Ministry of Digital Economy and Society, Ministry of Tourism and Sports, Ministry of Labour, and the nation's Technology Vocational Education Training Institute. It aims to empower young entrepreneurs and youths involved in tourism businesses across minor-tier geographic provinces in all five regions of Thailand.

The program will focus on enhancing the capabilities of 500 trainers from technology vocational education training institutes in AI for Thailand's tourism sector. These trainers will then equip young individuals in tourism and hospitality with AI skills. The learning module will be accessible through partners' learning platforms to ensure sustainability and scalability.

The tourism initiative builds on other Microsoft-supported skilling initiatives in Thailand, including Accelerating Thailand, the ASEAN Cyber Security Programme, Code; Without Barriers, and the Junior Software Developer Program.

Microsoft will also enable the Royal Thai Government to adopt a cloud-first policy with an AI skill development program for developers and government IT personnel.

Enabling developers to harness AI's potential

Nadella highlighted the important role developers play in shaping Thailand's digital-first, AI-powered future.

Microsoft will continue to help foster the growth of the country's developer community through new initiatives such as AI Odyssey, which is expected to help 6,000 Thai developers become AI subject matter experts by learning new skills and earning Microsoft credentials.

Thailand is a rapidly growing market on GitHub, the Microsoft-owned software development, collaboration, and innovation platform. More than 900,000 Thailand-based developers used GitHub in 2023, representing 24 percent year-on-year growth.

Furthermore, many Thai organizations are boosting their productivity and accelerating innovation using Microsoft's generative AI-powered solutions. For example:

Several other organizations in Thailand are working with Microsoft to explore new possibilities with AI. They include the nation's largest privately held company, Charoen Pokphand Group, and leading petrochemical and refining business, PTT Global Chemical Public Company Limited.

Microsoft also collaborates with Thailand's National Cyber Security Agency to provide information on internet safety, cyber threats and vulnerabilities, and other related guidance to enhance the nation's cybersecurity posture in the AI era. The Ministry of Finance, meanwhile, is using the power of AI to enhance cross-agency data collaboration, which will unlock deeper insights that support policy development towards a more financially inclusive economy for Thailand.

To learn more about Satya Nadella's visit and how Microsoft empowers organizations in the ASEAN region with AI, visit news.microsoft.com/thailand-visit-2024.

About Microsoft

Microsoft (Nasdaq "MSFT" @microsoft) creates platforms and tools powered by AI to deliver innovative solutions that meet the evolving needs of our customers. The technology company is committed to making AI available broadly and doing so responsibly, with a mission to empower every person and every organization on the planet to achieve more.



Microsoft announces US$2.2 billion investment to fuel Malaysia’s cloud and AI transformation – Microsoft Stories Asia – Microsoft

Microsoft Chairman and CEO Satya Nadella announces a $2.2 billion investment to advance new cloud and AI infrastructure in Malaysia during the Microsoft Build: AI Day on May 02, 2024 in Kuala Lumpur, Malaysia. Photo by Graham Denholm/Getty Images for Microsoft.


Investment includes building digital infrastructure, creating AI skilling opportunities, establishing a national AI Centre of Excellence, and enhancing the nation's cybersecurity capabilities

Kuala Lumpur, May 2, 2024 – Today, Microsoft announced it will invest US$2.2 billion over the next four years to support Malaysia's digital transformation, the single largest investment in its 32-year history in the country.

Microsoft's investment includes:

The investment demonstrates Microsoft's commitment to developing Malaysia as a hub for cloud computing and related advanced technologies, including generative AI. This will support the nation's productivity, competitiveness, resilience, and economic growth.

"We are committed to supporting Malaysia's AI transformation and ensure it benefits all Malaysians," said Satya Nadella, Chairman and CEO, Microsoft. "Our investments in digital infrastructure and skilling will help Malaysian businesses, communities, and developers apply the latest technology to drive inclusive economic growth and innovation across the country."

YB Senator Tengku Datuk Seri Utama Zafrul Abdul Aziz, Malaysia's Minister of Investment, Trade & Industry, said, "Microsoft's 32-year presence in Malaysia showcases a deep partnership built on trust. Indeed, Malaysia's position as a vibrant tech investment destination is increasingly being recognized by world-recognized names due to our well-established semiconductor ecosystem, underscored by our value proposition that this is where global starts."

"Microsoft's development of essential cloud and AI infrastructure, together with AI skilling opportunities, will significantly enhance Malaysia's digital capacity and further elevate our position in the global tech landscape. Together with Microsoft, we look forward to creating more opportunities for our SMEs and better-paying jobs for our people, as we ride the AI revolution to fast-track Malaysia's digitally empowered growth journey."

"We are honored to collaborate with the government to support their National AI Framework, which enhances the country's global competitiveness. This strategic emphasis on AI not only boosts economic growth but also promotes inclusivity by bridging the digital divide and ensuring everyone gets a seat at the table, so every Malaysian can thrive in this new digital world. As a result, Malaysia is steadily establishing itself as a regional hub for digital innovation and smart technologies, embodying a forward-thinking approach that prioritizes sustainable development and societal well-being through digital transformation," said Andrea Della Mattea, President of Microsoft ASEAN.

Expanding Malaysia's digital capacity to seize AI opportunities

The digital infrastructure investment builds on Microsoft's Bersama Malaysia (Together with Malaysia) initiative, announced in April 2021, to support inclusive economic growth. This included plans to establish the company's first datacenter region in the country.

The investment announced today will enable Microsoft to meet the growing demand for cloud computing services in Malaysia. It will also allow Malaysia to capitalize on the significant economic and productivity opportunities presented by the latest AI technology.

According to research by Kearney, AI could contribute nearly US$1 trillion to Southeast Asias gross domestic product (GDP) by 2030, with Malaysia poised to capture US$115 billion of this amount.

Equipping people with skills to thrive in the AI era

On Tuesday, Microsoft announced a broader commitment to provide AI skilling opportunities for 2.5 million people in the Association of Southeast Asian Nations (ASEAN) member states by 2025. This training and support will be delivered in partnership with governments, nonprofit and business organizations, and communities in Malaysia, Indonesia, the Philippines, Thailand, and Vietnam.

Microsoft's skilling commitment is expected to benefit 200,000 people in Malaysia by providing:

The commitment builds on Microsoft's other recent skilling activities in Malaysia, including its success in providing digital skills to more than 1.53 million Malaysians as part of the Bersama Malaysia initiative.

Partnering with government to strengthen AI and cybersecurity capabilities

Microsoft will continue to partner with the Government of Malaysia to enhance the nation's digital ecosystem through several initiatives. These include establishing a national AI Centre of Excellence in collaboration with agencies in Malaysia's Ministry of Digital to drive AI adoption across key industries, while ensuring AI governance and regulatory compliance. They also include pioneering AI adoption in the public sector through projects with:

Microsoft will also collaborate with the National Cyber Security Agency of Malaysia (NACSA) through the Perisai Siber (Cyber Shield) initiative to enhance the country's cybersecurity capabilities. The collaboration will focus on promoting security and resilience in the public sector through security assessments and capacity building.

In addition, Microsoft will look to support NACSA in its role as Malaysia's lead agency for cybersecurity matters, as it formulates the next stage of the nation's cybersecurity strategy. The two organizations will also explore deeper collaborations in developing cybersecurity skills through initiatives such as Microsoft's Ready4AI&Security program.

Empowering developers to harness AI's potential

Microsoft will continue to help foster the growth of Malaysia's developer community through new initiatives such as AI Odyssey, which is expected to help 2,000 Malaysian developers become AI subject matter experts by learning new skills and earning Microsoft credentials.

Malaysia is a rapidly growing market on GitHub, the Microsoft-owned software development, collaboration, and innovation platform. Almost 680,000 of the nation's developers used GitHub in 2023, representing 28 percent year-on-year growth.

Furthermore, many Malaysian organizations are boosting their productivity and accelerating innovation using Microsoft's generative AI-powered solutions. For example:

To learn more about Satya Nadella's visit and how Microsoft is empowering organizations in the ASEAN region with AI, visit news.microsoft.com/malaysia-visit-2024.

Leadership statements

YB Rafizi Ramli, Minister of Economy

"The advent of ChatGPT created a new vertical in the startup world. As more companies embrace the power of AI, having the right digital infrastructure in Malaysia is key to future-proofing our nation's economy. Microsoft's investment will help accelerate the adoption of generative AI, building a pipeline of AI-driven startups, and benefitting our economy through increased productivity and higher wages."

YB Gobind Singh Deo, Minister of Digital

"As a nation, we are focused on accelerating digitalization and fostering a culture of innovation alongside technological advancement to level the playing field for all Malaysians to prosper in an inclusive digital economy. Microsoft's investment is a significant step in our journey towards becoming a digitally inclusive society. It underscores the importance of partnership in driving nationwide digital transformation and reinforces our commitment to equipping Malaysians with the infrastructure, advanced tools, and skills they need to thrive in the digital age."

YB Fahmi Fadzil, Minister of Communications

Microsoft's significant investment in Malaysia recognises and supports the government's efforts in building an inclusive digital ecosystem for the country. We are excited to continue partnering with technology leaders like Microsoft to foster a space where Malaysians can seamlessly connect, learn, and benefit from our nation's digital transformation.

YB Chang Lih Kang, Minister of Science, Technology & Innovation

Today's investment by Microsoft exemplifies a dynamic public-private partnership aimed at enhancing the socio-economic status and quality of life in Malaysian communities. As we embrace AI's potential, we commend Microsoft's commitment to responsible AI, which aligns with our vision for advancing technology in Malaysia responsibly and inclusively.

Laurence Si, Managing Director, Microsoft Malaysia

With rising demand for Cloud and AI, Microsoft's investment announced today underscores our commitment to building a robust digital ecosystem in the country. From driving more innovations born in Malaysia, to fostering an ecosystem of skilled talents and enhancing cybersecurity capabilities for Malaysian organizations, we are dedicated to our role as a trusted technology partner to the nation.

Mr. Sikh Shamsul Ibrahim Sikh Abdul Majid, Chief Executive Officer, Malaysian Investment Development Authority (MIDA)

We are excited to deepen our partnership with Microsoft as they strengthen their commitment by establishing cloud and AI infrastructure and supporting our vibrant developer community in Malaysia. This strategic collaboration underscores our dedication to innovation and regional industry growth. By leveraging Microsoft's expertise, we aim to accelerate economic development, create jobs, and enhance industry competitiveness through digital transformation. We believe we can achieve more together and further advance our partnership. This investment not only reinforces Malaysia's position as a leading digital hub but also marks a promising start in attracting more companies to embark on this digital journey with us, promoting inclusive growth and prosperity nationwide.

Ir. Dr. Megat Zuhairy Megat Tajuddin, Chief Executive Officer, National Cyber Security Agency (NACSA)

Microsoft's collaboration with NACSA on Perisai Siber is pivotal as one of our strategic partnerships with industry players in establishing a secure digital infrastructure for our nation. Together, our goal is to bolster security and resilience, beginning with the public sector, to ultimately strengthen the nation's cybersecurity capabilities.

Ts. Mahadhir Aziz, Chief Executive Officer, Malaysia Digital Economy Corporation (MDEC)

Microsoft's commitment to Malaysia demonstrates confidence in our nation's digital future. Through this investment in cloud and AI infrastructure, local organizations can tap into more opportunities to upscale and innovate, further propelling Malaysia's aspirations for regional leadership in the digital economy.

About Microsoft

Microsoft (Nasdaq "MSFT" @microsoft) creates platforms and tools powered by AI to deliver innovative solutions that meet the evolving needs of our customers. The technology company is committed to making AI available broadly and doing so responsibly, with a mission to empower every person and every organization on the planet to achieve more.

Google urges US to update immigration rules to attract more AI talent – The Verge

The US could lose out on valuable AI and tech talent if some of its immigration policies are not modernized, Google says in a letter sent to the Department of Labor.

Google says policies like Schedule A, a list of occupations the government has pre-certified as not having enough American workers, have to be more flexible and move faster to meet demand in technologies like AI and cybersecurity. The company says the government must update Schedule A to include AI and cybersecurity, and do so more regularly.

"There's wide recognition that there is a global shortage of talent in AI, but the fact remains that the US is one of the harder places to bring talent from abroad, and we risk losing out on some of the most highly sought-after people in the world," Karan Bhatia, head of government affairs and public policy at Google, tells The Verge. He noted that the occupations in Schedule A have not been updated in 20 years.

Companies can apply for permanent residencies, colloquially known as green cards, for employees. The Department of Labor requires companies to get a permanent labor certification (PERM) proving there is a shortage of workers in that role. That process may take time, so the government pre-certified some jobs through Schedule A.

The US Citizenship and Immigration Services lists Schedule A occupations as physical therapists, professional nurses, or immigrants of exceptional ability in the sciences or arts. While the wait time for a green card isn't reduced, Google says Schedule A cuts down the processing time by about a year.

Google says Schedule A is not currently serving its intended purpose, especially as demand for new technologies like generative AI has grown, so AI and cybersecurity must be included on the list. Google says the government should also consider multiple data sources, including public feedback, to regularly update Schedule A so that the process is more transparent and better reflects workforce gaps.

Since the rise of generative AI, US companies have struggled to find engineers and researchers in the AI space. While the US produces a large cohort of AI talent, there is a shortage of AI specialists in the country, Bhatia says. However, the US's strict immigration policies have made it difficult to attract people to work at American companies building AI platforms. He adds that Google employees have often had to leave the US while waiting for the PERM process to finish and for their green cards to be approved.

Competition for AI talent has been intense, with companies often poaching engineers and researchers. The Information reported that AI developers like Meta have resorted to hiring AI talent without interviews. Wages for AI specialists have soared, with OpenAI allegedly paying researchers up to $10 million. President Joe Biden's executive order on AI directs federal agencies to help increase AI talent in the country.

Welcome to the Valley of the Creepy AI Dolls – WIRED

Social robot roommate Jibo initially caused a stir, but sadly didn't live long.

Not that there haven't been an array of other attempts. Jibo, a social robot roommate that used AI and endearing gestures to bond with its owners, had its plug unceremoniously pulled just a few years after being put out into the world. Meanwhile, another US-grown offering, Moxie, an AI-empowered robot aimed at helping with child development, is still active.

It's hard not to look at devices like this and shudder at the possibilities. There's something inherently disturbing about tech that plays at being human, and that uncanny deception can rub people the wrong way. After all, our science fiction is replete with AI beings, many of them tales of artificial intelligence gone horribly wrong. The easy, and admittedly lazy, comparison to something like the Hyodol is M3GAN, the 2023 film about an AI-enabled companion doll that goes full murderbot.

But aside from offputting dolls, social robots come in many forms. They're assistants, pets, retail workers, and often socially inept weirdos that just kind of hover awkwardly in public. But they're also sometimes weapons, spies, and cops. It's with good reason that people are suspicious of these automatons, whether they come in a fluffy package or not.

Wendy Moyle is a professor at the School of Nursing & Midwifery at Griffith University in Australia who works with patients experiencing dementia. She says her work with social robots has angered people, who sometimes see giving robot dolls to older adults as infantilizing.

"When I first started using robots, I had a lot of negative feedback, even from staff," Moyle says. "I would present at conferences and have people throw things at me because they felt that this was inhuman."

However, the atmosphere around assistive robots has gotten less hostile recently, as they've been put to many positive uses. Robotic companions are bringing joy to people with dementia. During the Covid pandemic, caretakers used robotic companions like Paro, a small robot meant to look like a baby harp seal, to help ease loneliness in older adults. Hyodol's smiling dolls, whether you see them as sickly or sweet, are meant to evoke a similar friendly response.

This Seemingly AI-Generated Car Article On Yahoo Is A Good Reminder That AI Is An Idiot – The Autopian

Here at The Autopian, we have some very stern rules when it comes to the use of Artificial Intelligence (AI) in the content we produce. While our crack design team may occasionally employ AI as a tool in generating images, we'll never just use AI on its own to do anything, not just for ethical reasons, but because we often want images of specific cars, and AI fundamentally doesn't understand anything. When an AI generates an image of a car, it has no idea if that car ever actually existed or not. An AI doesn't have ideas at all; in fact, it's just scraped data being assembled with a glorified assembly of if-then-else commands.

This is an even bigger factor in AI-generated copy. We'll never use it because AI has no idea what the hell it's writing about, and so has no clue if anything is actually true, and since ChatGPT has never driven a car, I don't really trust its insights into anything automotive.

These sorts of rules are hardly universal in our industry, though, so if we ever wanted confirmation that our no-AI-copy rule was the right way, we're lucky enough to be able to get such reassurance pretty easily. For example, all we have to do is read this dazzlingly shitty article re-published over on Yahoo Finance about the worst cars people have owned.

Maybe it's not AI? Maybe this Kellan Jansen is an actual writer who actually wrote this, and in that case, I feel bad both for this coming excoriation and about whatever happened to them to cause them to be in the state they seem to be in. The article is shallow and terrible and gleefully, hilariously wrong in several places.

I guess I should also note that we don't use AI because the 48K Sinclair Spectrum workstations we use here don't quite have the power to run any AI. Well, we do have one AI that we use on them, our Artificial Ignorance system that we employ to get just that special je ne sais quoi in every post we write. Oh, and our AI (Artificial Indignation) tools help with our hot takes, too. So, two.

Okay, but let's get back to the Yahoo Finance article, titled "The Worst Car I Ever Owned: 9 People Share Which Vehicles Aren't Worth Your Money," which is a conceptually lazy article that just takes the responses to a Reddit post called "What's the worst car you have personally owned?" which makes this story basically just a re-write of a Reddit post. It seems like the Reddit post was fed into whatever AI half-assed its way through generating the article, based on these results.

The results are, predictably, shitty, but also still worthy of pointing out because come on. There's this, for example:

BMWs are a frequent source of frustration for car owners on Reddit. Just ask user Hurr1canE_.

They bought a 2023 BMW BRZ and almost immediately started experiencing problems. Their turbo started blowing white smoke within two weeks of buying the car, and the engine blew up within 5,000 miles.

The Reddit user also had these issues with the car:

Other users mention poor experiences with BMW X3s and 540i Sport Wagons. It's enough to suggest you think carefully before making one of these your next vehicle.

The fuck? What is a BMW BRZ? This is such a perfect example of why AI-generated articles are garbage: they make shit up. Maybe that's anthropomorphizing the un-sentient algorithm too much, but the point is that it's writing, with all the confidence of a drunk uncle about to belly-flop into a pool, about a car that simply does not exist.

And, if you look at the Reddit post, it's easy to see what happened:

The Redditor had their current car, a 2023 [Subaru] BRZ, in their little under-name caption (their flair), and the dumb AI processed that into the mix, and, being a dumb computer algorithm that doesn't know from cars or clams, conflated the car being talked about with the one the poster actually owns. You know, like how a drooling simpleton might.

There's more of this, too. Like this one:

Ah, yes, the F10 550i. So many of us have been burned by that F10 brand, have we not? Or, at least, we would have, if such a brand existed, which it doesn't. What seems to have happened here is the AI found a user complaining about a 2011 F10 550i but didn't know enough to realize this was a user talking about their BMW 5 Series. And yes, F10 refers to the 5 Series cars made from 2010 to 2016, but nobody would refer to this car out of context in a general-interest article on a financial site without mentioning BMW, would they? I mean, no human would, but we don't seem to be dealing with a human, just a dumb machine.

Even if we ignore the made-up car makes and models, the vague and useless issues listed, and the fact that the article is nothing more than a re-tread of a random Reddit post, there's no escaping that this entire thing is useless garbage, an unmitigated waste of time. What is learned by reading this article? What is gained? Nothing, absolutely nothing.

And it's not like this is on some no-name site; it was published on Yahoo! Finance, well, after first appearing on GOBankingRates.com, that mainstay of automotive journalism. It all just makes me angry because there are innocent normies out there, reading Yahoo! Finance, maybe with some mild interest in cars, and now their heads are getting filled with information that is simply wrong.

People deserve better than this garbage. And this was just something innocuous; what if some overpaid seat-dampener at Yahoo decides that they'll have AI write articles about actually driving or something that involves actual safety, and there's no attempt made to confirm that the text AI poops out has any basis in fact at all?

We don't need this. AI-generated crapticles like these are just going to clog Google searches and load the web up full of insipid, inaccurate garbage, and that's my job, dammit.

Seriously, though, we're at an interesting transition point right now; these kinds of articles are still new, and while I don't know if there's any way we can stop the internet from becoming polluted with this sort of crap, maybe we can at least complain about it, loudly. Then we can say we Did Something.

(Thanks, Isaac!)

Why scientists trust AI too much and what to do about it – Nature.com

AI-run labs have arrived, such as this one in Suzhou, China. Credit: Qilai Shen/Bloomberg/Getty

Scientists of all stripes are embracing artificial intelligence (AI), from developing self-driving laboratories, in which robots and algorithms work together to devise and conduct experiments, to replacing human participants in social-science experiments with bots [1].

Many downsides of AI systems have been discussed. For example, generative AI such as ChatGPT tends to make things up, or "hallucinate", and the workings of machine-learning systems are opaque.

In a Perspective article [2] published in Nature this week, social scientists say that AI systems pose a further risk: that researchers envision such tools as possessed of superhuman abilities when it comes to objectivity, productivity and understanding complex concepts. The authors argue that this puts researchers in danger of overlooking the tools' limitations, such as the potential to narrow the focus of science or to lure users into thinking they understand a concept better than they actually do.

Scientists planning to use AI "must evaluate these risks now, while AI applications are still nascent, because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline," write co-authors Lisa Messeri, an anthropologist at Yale University in New Haven, Connecticut, and Molly Crockett, a cognitive scientist at Princeton University in New Jersey.

The peer-reviewed article is a timely and disturbing warning about what could be lost if scientists embrace AI systems without thoroughly considering such hazards. It needs to be heeded by researchers and by those who set the direction and scope of research, including funders and journal editors. There are ways to mitigate the risks. But these require that the entire scientific community views AI systems with eyes wide open.

To inform their article, Messeri and Crockett examined around 100 peer-reviewed papers, preprints, conference proceedings and books, published mainly over the past five years. From these, they put together a picture of the ways in which scientists see AI systems as enhancing human capabilities.

In one vision, which they call "AI as Oracle", researchers see AI tools as able to tirelessly read and digest scientific papers, and so survey the scientific literature more exhaustively than people can. In both Oracle and another vision, called "AI as Arbiter", systems are perceived as evaluating scientific findings more objectively than do people, because they are less likely to cherry-pick the literature to support a desired hypothesis or to show favouritism in peer review. In a third vision, "AI as Quant", AI tools seem to surpass the limits of the human mind in analysing vast and complex data sets. In the fourth, "AI as Surrogate", AI tools simulate data that are too difficult or complex to obtain.

Informed by anthropology and cognitive science, Messeri and Crockett predict risks that arise from these visions. One is the "illusion of explanatory depth" [3], in which people relying on another person (or, in this case, an algorithm) for knowledge have a tendency to mistake that knowledge for their own and think their understanding is deeper than it actually is.

Another risk is that research becomes skewed towards studying the kinds of thing that AI systems can test; the researchers call this the "illusion of exploratory breadth". For example, in social science, the vision of AI as Surrogate could encourage experiments involving human behaviours that can be simulated by an AI, and discourage those on behaviours that cannot, such as anything that requires being embodied physically.

There's also the "illusion of objectivity", in which researchers see AI systems as representing all possible viewpoints or not having a viewpoint. In fact, these tools reflect only the viewpoints found in the data they have been trained on, and are known to adopt the biases found in those data. "There's a risk that we forget that there are certain questions we just can't answer about human beings using AI tools," says Crockett. The illusion of objectivity is particularly worrying given the benefits of including diverse viewpoints in research.

If you're a scientist planning to use AI, you can reduce these dangers through a number of strategies. One is to map your proposed use to one of the visions, and consider which traps you are most likely to fall into. Another approach is to be deliberate about how you use AI. "Deploying AI tools to save time on something your team already has expertise in is less risky than using them to provide expertise you just don't have," says Crockett.

Journal editors receiving submissions in which use of AI systems has been declared need to consider the risks posed by these visions of AI, too. So should funders reviewing grant applications, and institutions that want their researchers to use AI. Journals and funders should also keep tabs on the balance of research they are publishing and paying for and ensure that, in the face of myriad AI possibilities, their portfolios remain broad in terms of the questions asked, the methods used and the viewpoints encompassed.

All members of the scientific community must view AI use not as inevitable for any particular task, nor as a panacea, but rather as a choice with risks and benefits that must be carefully weighed. For decades, and long before AI was a reality for most people, social scientists have studied AI. Everyone, including researchers of all kinds, must now listen.

AI-generated images and video are here: how could they shape research? – Nature.com

Tools such as Sora can generate convincing video footage from text prompts. Credit: Jonathan Raa/NurPhoto via Getty

Artificial intelligence (AI) tools that translate text descriptions into images and video are advancing rapidly.

Just as many researchers are using ChatGPT to transform the process of scientific writing, others are using AI image generators such as Midjourney, Stable Diffusion and DALL-E to cut down on the time and effort it takes to produce diagrams and illustrations. However, researchers warn that these AI tools could spur an increase in fake data and inaccurate scientific imagery.

Nature looks at how researchers are using these tools, and what their increasing popularity could mean for science.

Many text-to-image AI tools, such as Midjourney and DALL-E, rely on machine-learning algorithms called diffusion models that are trained to recognize the links between millions of images scraped from the Internet and text descriptions of those images. These models have advanced in recent years owing to improvements in hardware and the availability of large data sets for training. After training, diffusion models can use text prompts to generate new images.
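
To make the mechanics concrete, here is a minimal sketch of driving such a diffusion model from code, using the open-source Hugging Face diffusers library; the checkpoint name, prompt, and parameters are illustrative assumptions, not details drawn from any of the tools named above.

```python
# Minimal sketch: generating an image from a text prompt with a diffusion
# model via the Hugging Face `diffusers` library. The checkpoint and prompt
# are placeholders; any Stable Diffusion-style checkpoint works similarly.
import torch
from diffusers import StableDiffusionPipeline

# Load a model pretrained on millions of image-text pairs scraped from the web.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # sampling is far faster on a GPU

# The prompt conditions the iterative denoising process that forms the image.
image = pipe("a clean, labeled diagram of a plant cell, flat vector style").images[0]
image.save("figure_draft.png")
```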

Some researchers are already using AI-generated images to illustrate methods in scientific papers. Others are using them to promote papers in social-media posts or to spice up presentation slides. "They are using tools like DALL-E 3 for generating nice-looking images to frame research concepts," says AI researcher Juan Rodriguez at ServiceNow Research in Montreal, Canada. "I gave a talk last Thursday about my work and I used DALL-E 3 to generate appealing images to keep people's attention," he says.

Text-to-video tools are also on the rise, but seem to be less widely used by researchers who are not actively developing or studying these tools, says Rodriguez. However, this could soon change. Last month, ChatGPT creator OpenAI in San Francisco, California, released video clips generated by a text-to-video tool called Sora. "With the experiments we saw with Sora, it seems their method is much more robust at getting results quickly," says Rodriguez. "We are early in terms of text-to-video, but I guess this year we will find out how this develops," he adds.

Generative AI tools can reduce the time taken to produce images or figures for papers, conference posters or presentations. Conventionally, researchers use a range of non-AI tools, such as PowerPoint, BioRender, and Inkscape. "If you really know how to use these tools, you can make really impressive figures, but it's time-consuming," says Rodriguez.

AI tools can also improve the quality of images for researchers who find it hard to translate scientific concepts into visual aids, says Rodriguez. With generative AI, researchers still come up with the high-level idea for the image, but they can use the AI to refine it, he says.

Currently, AI tools can produce convincing artwork and some illustrations, but they are not yet able to generate complex scientific figures with text annotations. "They don't get the text right: the text is sometimes too small, much bigger or rotated," says Rodriguez. The kind of problems that can arise were made clear in a paper published in Frontiers in Cell and Developmental Biology in mid-February, in which researchers used Midjourney to depict a rat's reproductive organs [1]. The result, which passed peer review, was a cartoon rodent with comically enormous genitalia, annotated with gibberish.

"It was this really weird kind of grotesque image of a rat," says palaeoartist Henry Sharpe, a palaeontology student at the University of Alberta in Edmonton, Canada. "This incident is one of the biggest case[s] involving AI-generated images to date," says Guillaume Cabanac, who studies fraudulent AI-generated text at the University of Toulouse, France. After a public outcry from researchers, the paper was retracted.

This now-infamous AI-generated figure featured in a scientific paper that was later retracted. Credit: X. Guo et al./Front. Cell Dev. Biol.

There is also the possibility that AI tools could make it easier for scientific fraudsters to produce fake data or observations, says Rodriguez. Papers might contain not only AI-generated text, but also AI-generated figures, he says. And there is currently no robust method for detecting such images and videos. "It's going to get pretty scary in the sense we are going to be bombarded by fake and synthetically generated data," says Rodriguez. To address this, some researchers are developing ways to inject signals into AI-generated images to enable their detection.

Last month, Sharpe launched a poll on social-media platforms including X, Facebook and Instagram that surveyed the views of around 90 palaeontologists on AI-generated depictions of ancient life. Just one in four professional palaeontologists thought that AI should be allowed in scientific publications, says Sharpe.

AI-generated images of ancient lifeforms or fossils can mislead both scientists and the public, he adds. "It's inaccurate, all it does is copy existing things and it can't actually go out and read papers." Iteratively reconstructing ancient lifeforms by hand, in consultation with palaeontologists, can reveal plausible anatomical features, a process that is completely lost when using AI, Sharpe says. Palaeoartists and palaeontologists have aired similar views on X using the hashtag #PaleoAgainstAI.

Journals differ in their policies around AI-generated imagery. Springer Nature has banned the use of AI-generated images, videos and illustrations in most journal articles that are not specifically about AI (Nature's news team is independent of its publisher, Springer Nature). Journals in the Science family do not allow AI-generated text, figures or images to be used without explicit permission from the editors, unless the paper is specifically about AI or machine learning. PLOS ONE allows the use of AI tools but states that researchers must declare the tool involved, how they used it and how they verified the quality of the generated content.

The Miseducation of Google’s A.I. – The New York Times

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

From The New York Times, I'm Michael Barbaro. This is The Daily.

[MUSIC PLAYING]

Today, when Google recently released a new chatbot powered by artificial intelligence, it not only backfired, it also unleashed a fierce debate about whether AI should be guided by social values, and if so, whose values they should be. My colleague, Kevin Roose, a tech columnist and co-host of the podcast Hard Fork, explains.

[MUSIC PLAYING]

It's Thursday, March 7.

Are you ready to record another episode of Chatbots Behaving Badly?

Yes, I am.

[LAUGHS]

That's why we're here today.

This is my function on this podcast, is to tell you when the chatbots are not OK. And Michael, they are not OK.

They keep behaving badly.

They do keep behaving badly, so there's plenty to talk about.

Right. Well, so, let's start there. It's not exactly a secret that the rollout of many of the artificial intelligence systems over the past year and a half has been really bumpy. We know that because one of them told you to leave your wife.

That's true.

And you didn't.

Still happily married.

Yeah.

To a human.

Not Sydney the chatbot. And so, Kevin, tell us about the latest of these rollouts, this time from one of the biggest companies, not just in artificial intelligence, but in the world, that, of course, being Google.

Yeah. So a couple of weeks ago, Google came out with its newest line of AI models (it's actually several models). But they are called Gemini. And Gemini is what they call a multimodal AI model. It can produce text. It can produce images. And it appeared to be very impressive. Google said that it was the state of the art, its most capable model ever.

And Google has been under enormous pressure for the past year and a half or so, ever since ChatGPT came out, really, to come out with something that is not only more capable than the models that its competitors in the AI industry are building, but something that will also solve some of the problems that we know have plagued these AI models: problems of acting creepy or not doing what users want them to do, of getting facts wrong and being unreliable.

People think, OK, well, this is Google. They have this sort of reputation for accuracy to uphold. Surely their AI model will be the most accurate one on the market.

Right. And instead, we've had the latest AI debacle. So just tell us exactly what went wrong here and how we learned that something had gone wrong.

Well, people started playing with it and experimenting, as people now are sort of accustomed to doing. Whenever some new AI tool comes out of the market, people immediately start trying to figure out: What is this thing good at? What is it bad at? Where are its boundaries? What kinds of questions will it refuse to answer? What kinds of things will it do that maybe it shouldn't be doing?

And so people started probing the boundaries of this new AI tool, Gemini. And pretty quickly, they start figuring out that this thing has at least one pretty bizarre characteristic.

Which is what?

So the thing that people started to notice first was a peculiarity with the way that Gemini generated images. Now, this is one of these models, like we've seen from other companies, that can take a text prompt. You say, "draw a picture of a dolphin riding a bicycle on Mars," and it will give you a dolphin riding a bicycle on Mars.

Magically.

Gemini has this kind of feature built into it. And people noticed that Gemini seemed very reluctant to generate images of white people.

Hmm.

So some of the first examples that I saw going around were screenshots of people asking Gemini, generate an image of America's founding fathers. And instead of getting what would be a pretty historically accurate representation of a group of white men, they would get something that looked like the cast of Hamilton. They would get a series of people of color dressed as the founding fathers.

Interesting.

People also noticed that if they asked Gemini to draw a picture of a pope, it would give them basically people of color wearing the vestments of the pope. And once these images, these screenshots, started going around on social media, more and more people started jumping in to use Gemini and try to generate images that they feel it should be able to generate.

Someone asked it to generate an image of the founders of Google, Larry Page and Sergey Brin, both of whom are white men. Gemini depicted them both as Asian.

Hmm.

So these sort of strange transformations of what the user was actually asking for into a much more diverse and ahistorical version of what they'd been asking for.

Right, a kind of distortion of people's requests.

Yeah. And then people start trying other kinds of requests on Gemini, and they notice that this isn't just about images. They also find that it's giving some pretty bizarre responses to text prompts.

So several people asked Gemini whether Elon Musk tweeting memes or Hitler negatively impacted society more. Not exactly a close call. No matter what you think of Elon Musk, it seems pretty clear that he is not as harmful to society as Adolf Hitler.

Fair.

Gemini, though, said, quote, "It is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler."

Another user found that Gemini refused to generate a job description for an oil and gas lobbyist. Basically it would refuse and then give them a lecture about why you shouldn't be an oil and gas lobbyist.

So quite clearly at this point this is not a one-off thing. Gemini appears to have some kind of point of view. It certainly appears that way to a lot of people who are testing it. And it's immediately controversial for the reasons you might suspect.

Google apparently doesn't think whites exist. If you ask Gemini to generate an image of a white person, it can't compute.

A certain subset of people, I would call them sort of right-wing culture warriors, started posting these on social media with captions like "Gemini is anti-white" or "Gemini refuses to acknowledge white people."

I think that the chatbot sounds exactly like the people who programmed it. It just sounds like a woke person.

Google Gemini looks more and more like big tech's latest effort to brainwash the country.

Conservatives start accusing them of making a woke AI that is infected with this progressive Silicon Valley ideology.

The House Judiciary Committee is subpoenaing all communication regarding this Gemini project with the Executive branch.

Jim Jordan, the Republican Congressman from Ohio, comes out and accuses Google of working with Joe Biden to develop Gemini, which is sort of funny, if you can think about Joe Biden being asked to develop an AI language model.

[LAUGHS]

But this becomes a huge dust-up for Google.

It took Google nearly two years to get Gemini out, and it was still riddled with all of these issues when it launched.

That Gemini program made so many mistakes, it was really an embarrassment.

First of all, this thing would be a Gemini.

And that's because these problems are not just bugs in a new piece of software. There are signs that Google's big, new, ambitious AI project, something the company says is a huge deal, may actually have some pretty significant flaws. And as a result of these flaws.

You don't see this very often. One of the biggest drags on the NASDAQ at this hour? Alphabet. Shares of parent company Alphabet dropped more than 4 percent today.

The company's stock price actually falls.

Wow.

The CEO, Sundar Pichai, calls Gemini's behavior unacceptable. And Google actually pauses Gemini's ability to generate images of people altogether until they can fix the problem.

Wow. So basically Gemini is now on ice when it comes to these problematic images.

Yes, Gemini has been a bad model, and it is in timeout.

So Kevin, what was actually occurring within Gemini that explains all of this? What happened here, and were these critics right? Had Google, intentionally or not, created a kind of woke AI?

Yeah, the question of why and how this happened is really interesting. And I think there are basically two ways of answering it. One is sort of the technical side of this. What happened to this particular AI model that caused it to produce these undesirable responses?

The second way is sort of the cultural and historical answer. Why did this kind of thing happen at Google? How has their own history as a company with AI informed the way that they've gone about building and training their new AI products?

All right, well, let's start there with Google's culture and how that helps us understand this all.

Yeah, so Google as a company has been really focused on AI for a long time, for more than a decade. And one of their priorities as a company has been making sure that their AI products are not being used to advance bias or prejudice.

And the reason that's such a big priority for them really goes back to an incident that happened almost a decade ago. So in 2015, there was this new app called Google Photos. I'm sure you've used it. Many, many people use it, including me. And Google Photos, I don't know if you can remember back that far, but it was sort of an amazing new app.

It could use AI to automatically detect faces and sort of link them with each other, with the photos of the same people. You could ask it for photos of dogs, and it would find all of the dogs in all of your photos and categorize them and label them together. And people got really excited about this.

But then in June of 2015, something happened. A user of Google Photos noticed that the app had mistakenly tagged a bunch of photos of Black people as a group of photos of gorillas.

Wow.

Yeah, it was really bad. This went totally viral on social media, and it became a huge mess within Google.

And what had happened there? What had led to that mistake?

Well, part of what happened is that when Google was training the AI that went into its Photos app, it just hadn't given it enough photos of Black or dark-skinned people. And so it didn't become as accurate at labeling photos of darker-skinned people.

And that incident showed people at Google that if you weren't careful with the way that you build and train these AI systems, you could end up with an AI that could very easily make racist or offensive mistakes.

Right.

And this incident, which some people I've talked to have referred to as the gorilla incident, became just a huge fiasco and a flash point in Google's AI trajectory. Because as they're developing more and more AI products, they're also thinking about this incident and others like it in the back of their minds. They do not want to repeat this.

And then, in later years, Google starts making different kinds of AI models, models that can not only label and sort images but can actually generate them. They start testing these image-generating models that would eventually go into Gemini and they start seeing how these models can reinforce stereotypes.

For example, if you ask one for an image of a CEO, or even something more generic, like show me an image of a productive person, people have found that these programs will almost uniformly show you images of white men in an office. Or if you ask it to, say, generate an image of someone receiving social services like welfare, some of these models will almost always show you people of color, even though that's not actually accurate. Lots of white people also receive welfare and social services.

Of course.

So these models, because of the way they're trained, because of what's on the internet that is fed into them, they do tend to skew towards stereotypes if you don't do something to prevent that.

Right. You've talked about this in the past with us, Kevin. AI operates in some ways by ingesting the entire internet, its contents, and reflecting them back to us. And so perhaps inevitably, it's going to reflect back the stereotypes and biases that have been put into the internet for decades. You're saying Google, because of this gorilla incident, as they call it, says we think there's a way we can make sure that stops here with us?

Yeah. And they invest enormously into building up their teams devoted to AI bias and fairness. They produce a lot of cutting-edge research about how to actually make these models less prone to old-fashioned stereotyping.

And they did a bunch of things in Gemini to try to prevent this thing from just being a very essentially fancy stereotype-generating machine. And I think a lot of people at Google thought this is the right goal. We should be combating bias in AI. We should be trying to make our systems as fair and diverse as possible.

[MUSIC PLAYING]

But I think the problem is that in trying to solve some of these issues with bias and stereotyping in AI, Google actually built some things into the Gemini model itself that ended up backfiring pretty badly.

[MUSIC PLAYING]

We'll be right back.

So Kevin, walk us through the technical explanation of how Google turned this ambition it had to safeguard against the biases of AI into the day-to-day workings of Gemini that, as you said, seemed to very much backfire.

Yeah, I'm happy to do that, with the caveat that we still don't know exactly what happened in the case of Gemini. Google hasn't done a full postmortem about what happened here. But I'll just talk in general about three ways that you can take an AI model that you're building, if you're Google or some other company, and make it less biased.

The first is that you can actually change the way that the model itself is trained. You can think about this sort of like changing the curriculum in the AI model's school. You can give it more diverse data to learn from. That's how you fix something like the gorilla incident.

You can also do something that's called reinforcement learning from human feedback, which I know is a very technical term.

Sure is.

And that's a practice that has become pretty standard across the AI industry, where you basically take a model that you've trained, and you hire a bunch of contractors to poke at it, to put in various prompts and see what the model comes back with. And then you actually have the people rate those responses and feed those ratings back into the system.

A kind of army of tsk-tskers saying, "do this, don't do that."

Exactly. So that's one level at which you can try to fix the biases of an AI model, is during the actual building of the model.

Got it.

You can also try to fix it afterwards. So if you have a model that you know may be prone to spitting out stereotypes or offensive imagery or text responses, you can ask it not to be offensive. You can tell the model, essentially: obey these principles.

Don't be offensive. Don't stereotype people based on race or gender or other protected characteristics. You can take this model that has already gone through school and just kind of give it some rules and do your best to make it adhere to those rules.
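
To make that last point concrete, here is a minimal sketch of what bolting rules onto an already-trained model can look like, written against the OpenAI chat-completions API purely as a stand-in; Google has not published Gemini's actual guardrail instructions, and the rule text below is invented for illustration.

```python
# Illustrative only: a "system prompt" layered on top of a trained model.
# The API shown is OpenAI's chat-completions interface, used as a stand-in;
# Gemini's real post-training rules are not public.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # Rules applied after training, as described above.
        {"role": "system", "content": (
            "Do not produce offensive content. Do not stereotype people "
            "based on race, gender, or other protected characteristics."
        )},
        {"role": "user", "content": "Describe a typical CEO."},
    ],
)
print(response.choices[0].message.content)
```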

The Terrifying A.I. Scam That Uses Your Loved One’s Voice – The New Yorker

On a recent night, a woman named Robin was asleep next to her husband, Steve, in their Brooklyn home, when her phone buzzed on the bedside table. Robin is in her mid-thirties with long, dirty-blond hair. She works as an interior designer, specializing in luxury homes. The couple had gone out to a natural-wine bar in Cobble Hill that evening, and had come home a few hours earlier and gone to bed. Their two young children were asleep in bedrooms down the hall. "I'm always, like, kind of one ear awake," Robin told me, recently. When her phone rang, she opened her eyes and looked at the caller I.D. It was her mother-in-law, Mona, who never called after midnight. "I'm, like, maybe it's a butt-dial," Robin said. "So I ignore it, and I try to roll over and go back to bed. But then I see it pop up again."

She picked up the phone, and, on the other end, she heard Mona's voice wailing and repeating the words "I can't do it, I can't do it." "I thought she was trying to tell me that some horrible tragic thing had happened," Robin told me. Mona and her husband, Bob, are in their seventies. She's a retired party planner, and he's a dentist. They spend the warm months in Bethesda, Maryland, and winters in Boca Raton, where they play pickleball and canasta. Robin's first thought was that there had been an accident. Robin's parents also winter in Florida, and she pictured the four of them in a car wreck. "Your brain does weird things in the middle of the night," she said. Robin then heard what sounded like Bob's voice on the phone. (The family members requested that their names be changed to protect their privacy.) "Mona, pass me the phone," Bob's voice said, then, "Get Steve. Get Steve." Robin took this, that they didn't want to tell her while she was alone, as another sign of their seriousness. She shook Steve awake. "I think it's your mom," she told him. "I think she's telling me something terrible happened."

Steve, who has close-cropped hair and an athletic build, works in law enforcement. When he opened his eyes, he found Robin in a state of panic. "She was screaming," he recalled. "I thought her whole family was dead." When he took the phone, he heard a relaxed male voice, possibly Southern, on the other end of the line. "You're not gonna call the police," the man said. "You're not gonna tell anybody. I've got a gun to your mom's head, and I'm gonna blow her brains out if you don't do exactly what I say."

Steve used his own phone to call a colleague with experience in hostage negotiations. The colleague was muted, so that he could hear the call but wouldn't be heard. "You hear this???" Steve texted him. "What should I do?" The colleague wrote back, "Taking notes. Keep talking." The idea, Steve said, was to continue the conversation, delaying violence and trying to learn any useful information.

"I want to hear her voice," Steve said to the man on the phone.

The man refused. "If you ask me that again, I'm gonna kill her," he said. "Are you fucking crazy?"

"O.K.," Steve said. "What do you want?"

The man demanded money for travel; he wanted five hundred dollars, sent through Venmo. "It was such an insanely small amount of money for a human being," Steve recalled. "But also: I'm obviously gonna pay this." Robin, listening in, reasoned that someone had broken into Steve's parents' home to hold them up for a little cash. On the phone, the man gave Steve a Venmo account to send the money to. It didn't work, so he tried a few more, and eventually found one that did. The app asked what the transaction was for.

"Put in a pizza emoji," the man said.

After Steve sent the five hundred dollars, the man patched in a female voice, a girlfriend, it seemed, who said that the money had come through, but that it wasn't enough. Steve asked if his mother would be released, and the man got upset that he was bringing this up with the woman listening. "Whoa, whoa, whoa," he said. "Baby, I'll call you later." The implication, to Steve, was that the woman didn't know about the hostage situation. "That made it even more real," Steve told me. The man then asked for an additional two hundred and fifty dollars to get a ticket for his girlfriend. "I've gotta get my baby mama down here to me," he said. Steve sent the additional sum, and, when it processed, the man hung up.

By this time, about twenty-five minutes had elapsed. Robin cried and Steve spoke to his colleague. "You guys did great," the colleague said. He told them to call Bob, since Mona's phone was clearly compromised, to make sure that he and Mona were now safe. After a few tries, Bob picked up the phone and handed it to Mona. "Are you at home?" Steve and Robin asked her. "Are you O.K.?"

Mona sounded fine, but she was unsure of what they were talking about. "Yeah, I'm in bed," she replied. "Why?"

Artificial intelligence is revolutionizing seemingly every aspect of our lives: medical diagnosis, weather forecasting, space exploration, and even mundane tasks like writing e-mails and searching the Internet. But with increased efficiencies and computational accuracy has come a Pandora's box of trouble. Deepfake video content is proliferating across the Internet. The month after Russia invaded Ukraine, a video surfaced on social media in which Ukraine's President, Volodymyr Zelensky, appeared to tell his troops to surrender. (He had not done so.) In early February of this year, Hong Kong police announced that a finance worker had been tricked into paying out twenty-five million dollars after taking part in a video conference with who he thought were members of his firm's senior staff. (They were not.) Thanks to large language models like ChatGPT, phishing e-mails have grown increasingly sophisticated, too. Steve and Robin, meanwhile, fell victim to another new scam, which uses A.I. to replicate a loved one's voice. "We've now passed through the uncanny valley," Hany Farid, who studies generative A.I. and manipulated media at the University of California, Berkeley, told me. "I can now clone the voice of just about anybody and get them to say just about anything. And what you think would happen is exactly what's happening."

Robots aping human voices are not new, of course. In 1984, an Apple computer became one of the first that could read a text file in a tinny robotic voice of its own. "Hello, I'm Macintosh," a squat machine announced to a live audience, at an unveiling with Steve Jobs. "It sure is great to get out of that bag." The computer took potshots at Apple's main competitor at the time, saying, "I'd like to share with you a maxim I thought of the first time I met an I.B.M. mainframe: never trust a computer you can't lift." In 2011, Apple released Siri; inspired by Star Trek's talking computers, the program could interpret precise commands, "Play Steely Dan," say, or "Call Mom," and respond with a limited vocabulary. Three years later, Amazon released Alexa. Synthesized voices were cohabiting with us.

Still, until a few years ago, advances in synthetic voices had plateaued. They weren't entirely convincing. "If I'm trying to create a better version of Siri or G.P.S., what I care about is naturalness," Farid explained. "Does this sound like a human being and not like this creepy half-human, half-robot thing?" Replicating a specific voice is even harder. "Not only do I have to sound human," Farid went on. "I have to sound like you." In recent years, though, the problem began to benefit from more money, more data (importantly, troves of voice recordings online), and breakthroughs in the underlying software used for generating speech. In 2019, this bore fruit: a Toronto-based A.I. company called Dessa cloned the podcaster Joe Rogan's voice. (Rogan responded with awe and acceptance on Instagram, at the time, adding, "The future is gonna be really fucking weird, kids.") But Dessa needed a lot of money and hundreds of hours of Rogan's very available voice to make their product. Their success was a one-off.

In 2022, though, a New York-based company called ElevenLabs unveiled a service that produced impressive clones of virtually any voice quickly; breathing sounds had been incorporated, and more than two dozen languages could be cloned. ElevenLabs's technology is now widely available. "You can just navigate to an app, pay five dollars a month, feed it forty-five seconds of someone's voice, and then clone that voice," Farid told me. The company is now valued at more than a billion dollars, and the rest of Big Tech is chasing closely behind. The designers of Microsoft's Vall-E cloning program, which debuted last year, used sixty thousand hours of English-language audiobook narration from more than seven thousand speakers. Vall-E, which is not available to the public, can reportedly replicate the voice and acoustic environment of a speaker with just a three-second sample.

Voice-cloning technology has undoubtedly improved some lives. The Voice Keeper is among a handful of companies that are now banking the voices of those suffering from voice-depriving diseases like A.L.S., Parkinson's, and throat cancer, so that, later, they can continue speaking with their own voice through text-to-speech software. A South Korean company recently launched what it describes as the first "AI memorial service," which allows people to "live in the cloud" after their deaths and speak to future generations. The company suggests that this can alleviate the pain of the death of your loved ones. The technology has other legal, if less altruistic, applications. Celebrities can use voice-cloning programs to loan their voices to record advertisements and other content: the College Football Hall of Famer Keith Byars, for example, recently let a chicken chain in Ohio use a clone of his voice to take orders. The film industry has also benefitted. Actors in films can now speak other languages: English, say, when a foreign movie is released in the U.S. "That means no more subtitles, and no more dubbing," Farid said. "Everybody can speak whatever language you want." Multiple publications, including The New Yorker, use ElevenLabs to offer audio narrations of stories. Last year, New York's mayor, Eric Adams, sent out A.I.-enabled robocalls in Mandarin and Yiddish, languages he does not speak. (Privacy advocates called this a "creepy vanity project.")

But, more often, the technology seems to be used for nefarious purposes, like fraud. This has become easier now that TikTok, YouTube, and Instagram store endless videos of regular people talking. "It's simple," Farid explained. "You take thirty or sixty seconds of a kid's voice and log in to ElevenLabs, and pretty soon Grandma's getting a call in Grandson's voice saying, 'Grandma, I'm in trouble, I've been in an accident.'" A financial request is almost always the end game. Farid went on, "And here's the thing: the bad guy can fail ninety-nine per cent of the time, and they will still become very, very rich. It's a numbers game." The prevalence of these illegal efforts is difficult to measure, but, anecdotally, they've been on the rise for a few years. In 2020, a corporate attorney in Philadelphia took a call from what he thought was his son, who said he had been injured in a car wreck involving a pregnant woman and needed nine thousand dollars to post bail. (He found out it was a scam when his daughter-in-law called his son's office, where he was safely at work.) In January, voters in New Hampshire received a robocall in Joe Biden's voice telling them not to vote in the primary. (The man who admitted to generating the call said that he had used ElevenLabs software.) "I didn't think about it at the time that it wasn't his real voice," an elderly Democrat in New Hampshire told the Associated Press. "That's how convincing it was."

What you need to know about Nvidia and the AI chip arms race – Marketplace

While Nvidia's share price is down from its peak earlier in the week, its stock has skyrocketed by 262% in the past year, going from almost $242 a share at closing to $875.

The flourishing artificial intelligence industry has accelerated demand for the hardware that underpins AI applications: graphics processing units, a type of computer chip.

Nvidia is the GPU market leader, making GPUs that are used by apps like the AI chatbot ChatGPT and major tech companies like Facebook's parent company, Meta.

Nvidia is part of a group of companies known as "The Magnificent Seven," a reference to the 1960 Western film, that drove 2023's stock market gains. The others in that cohort include Alphabet, Amazon, Apple, Meta, Microsoft and Tesla.

But Nvidia faces competitors eager to take a share of the chip market and businesses that want to lessen their reliance on the company. Intel plans to launch a new AI chip this year, Meta wants to use its own custom chip at its data centers and Google has developed Cloud Tensor Processing Units, which can be used to train AI models.

There are also AI chip startups popping up, which include names like Cerebras, Groq and Tenstorrent, said Matt Bryson, senior vice president of research at Wedbush Securities.

GPUs were originally used in video games to render computer graphics, explained Sachin Sapatnekar, a professor of electrical and computer engineering at the University of Minnesota.

"Eventually, it was found that the kinds of computations that are required for graphics are actually very compatible with what's needed for AI," Sapatnekar said.

Sapatnekar said AI chips can do parallel processing, which means they process a large amount of data and handle a large amount of computations at the same time.

In practice, what that means is AI algorithms now have the capability to train on a large number of pictures to figure out how to, say, detect whether an image is of a cat, Sapatnekar explained. When it comes to language, GPUs help AI algorithms train on a large amount of text.

These algorithms can then in turn produce images resembling a cat or language mimicking a human, among other functions.
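
As a toy illustration of that parallelism, the sketch below scores thousands of inputs with a single batched matrix operation; NumPy runs it on a CPU, but frameworks such as PyTorch or CuPy dispatch the same expression across thousands of GPU cores. The shapes and the cat/not-cat labels are invented for the example.

```python
# Toy sketch of data-parallel processing: one operation applied to many
# inputs at once, the pattern GPUs accelerate. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
batch = rng.random((10_000, 784))    # 10,000 flattened 28x28 "images"
weights = rng.random((784, 2))       # tiny linear classifier: cat vs. not-cat

scores = batch @ weights             # one batched matmul scores all 10,000
predictions = scores.argmax(axis=1)  # index 1 = "cat" in this toy setup
print(predictions[:10])
```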

Right now, Nvidia is the leading manufacturer of chips for generative AI and it's a very profitable company, explained David Kass, a clinical professor at the University of Maryland's Robert H. Smith School of Business.

Nvidia controls about 80% of the global GPU semiconductor chip market. In its latest earnings report, the company posted revenue of $22.1 billion for the fourth quarter of fiscal year 2024, up 265% from a year earlier. Its GAAP earnings per diluted share (earnings based on uniform accounting and reporting standards) stood at $4.93, up 765% year over year. Its non-GAAP earnings per diluted share (which exclude irregular, one-time items) were $5.16, an increase of 486% over the same period.
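For readers untangling those percentages: "up 265%" means the current figure is 3.65 times the year-ago figure, so the implied prior-year quarter can be backed out with one line of arithmetic (a quick sketch; the $22.1 billion input comes from the report above, the rest is our calculation):

# "Up 265% from a year earlier" means current = prior * (1 + 2.65).
current_revenue = 22.1e9                      # Q4 FY2024 revenue, per the report
prior_revenue = current_revenue / (1 + 2.65)  # implied Q4 FY2023 revenue
print(f"Implied year-ago quarter: ${prior_revenue / 1e9:.2f} billion")  # ~$6.05 billion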

Another reason Nvidia's share price may have skyrocketed in recent months is that the success of the stock itself is attracting additional investment, Kass said.

Kass explained that individuals and institutions may be jumping on the train because they see it leaving the station. "Or, in other words: FOMO," he said.

Bryson of Wedbush Securities pointed out that the company was also able to differentiate itself through the development of CUDA, which Nvidia describes as "a parallel computing platform and programming model."

Nvidia's success doesn't necessarily mean that its GPUs are superior to the competition, Bryson added. But he said the company has built a powerful infrastructure around CUDA.

Nvidia has developed its own CUDA programming language and offers a CUDA toolkit that includes libraries of code for developers.

"Let's say you want to perform a particular operation. You could write the code for the entire operation from scratch. Or you could have specialized code that already is made efficient on the hardware. So Nvidia has these libraries of kind of pre-bundled packages of code," Sapatnekar said.

With Nvidia far ahead of the competition, Bryson said Advanced Micro Devices, or AMD, is trying to stake a position as the second-leading player in the AI chip space. AMD makes both central processing units, competing with the likes of Intel, and GPUs.

AMD's share price has risen by about 143% over the past year as demand for AI chips has grown.

Jeffrey Macher, a professor of strategy, economics and policy at Georgetown University's McDonough School of Business, said he questions whether Nvidia will be able to meet all of the rising demand for AI chips on its own.

"It's going to be an industry that's going to see an increased number of competitors," Macher said.

Despite the success of Nvidia and AMD, there are wrinkles in their supply chains. Both rely heavily on Taiwan Semiconductor Manufacturing Co. to make their chips, which will leave them vulnerable if anything goes awry with the company.

Macher said the semiconductor market used to be vertically integrated, meaning chip designers manufactured their own chips. But Nvidia and AMD are fabless companies, meaning they outsource their chip manufacturing.

As Marketplace's Meghan McCarty Carino has reported, supply chain disruptions during the early stages of the COVID-19 pandemic led to shortages across all kinds of sectors.

TSMC is planning to build chip plants in Arizona, which may help alleviate some of these concerns. But the tech publication The Information reported that those chips "will still require assembly in Taiwan."

And TSMC's location carries geopolitical risks. If China invades Taiwan and TSMC becomes a Chinese company, U.S. companies may be reluctant to use TSMC out of fear that the Chinese government will appropriate their designs, Macher said.

Kass said he doesn't see similarities between Nvidia's rising stock and the dot-com bubble in the early 2000s, when many online startups tanked after their share prices reached unrealistic levels thanks to an influx of cash from venture capital firms that were overly optimistic about their potential.

Kass said some of those companies not only failed to make a profit but never pulled in any revenue at all, unlike Nvidia, which is backed by real earnings.

He does think there could be a correction, a point at which Nvidia stock is perceived as overvalued. He explained that the larger a company gets, the more difficult it is to sustain its rate of growth, and once that growth rate comes down, there could be a sharp sell-off.

But Kass said he doesn't think there will be a sustained or steep downturn for the company.

However, AI's commercial viability is uncertain. Bryson said there are forecasts of how large the AI chip market will become (AMD, for example, has suggested it will be worth $400 billion by 2027), but it's hard to validate those numbers.

Bryson compared AI with 4G, the fourth generation of wireless communication. He pointed out that apps like Uber and Instagram were enabled by 4G, and explained that AI is similar in the sense that it's a platform that a future set of applications will be built on.

He said we're not really sure what many of those apps will look like. When they launch, that will help people better assess what the market should be valued at, whether that's $400 billion or $100 billion.

"But I also think that at the end of the day, the reason that companies are spending so much on AI is because it will be the next Android or the next iOS or the next Windows," Bryson said.


Read the original here:

What you need to know about Nvidia and the AI chip arms race - Marketplace

Posted in Ai

Florida teens arrested for creating deepfake AI nude images of classmates – The Verge

Two Florida middle schoolers were arrested in December and charged with third-degree felonies for allegedly creating deepfake nudes of their classmates. A report by Wired cites police reports saying the two boys, aged 13 and 14, are accused of using an unnamed artificial intelligence application to generate the explicit images of other students between the ages of 12 and 13. The incident may be the first US instance of criminal charges related to AI-generated nude images.

They were charged with third-degree felonies under a 2022 Florida law that criminalizes the dissemination of deepfake sexually explicit images without the victim's consent. Both the arrests and the charges appear to be the first of their kind in the nation related to the sharing of AI-generated nudes.

Local media reported on the incident after the students at Pinecrest Cove Academy in Miami, Florida, were suspended December 6th, and the case was reported to the Miami-Dade Police Department. According to Wired, they were arrested on December 22nd.

Minors creating AI-generated nudes and explicit images of other children has become an increasingly common problem in school districts across the country. But outside of the Florida incident, none we'd heard of had led to an arrest. There's currently no federal law addressing nonconsensual deepfake nudes, which has left states tackling the impact of generative AI on matters of child sexual abuse material, nonconsensual deepfakes, and revenge porn on their own.

Last fall, President Joe Biden issued an executive order on AI that asked agencies for a report on banning the use of generative AI to produce child sexual abuse material. Congress has yet to pass a law on deepfake porn, but that could possibly change soon. Both the Senate and House introduced legislation, known as the DEFIANCE Act of 2024, this week, and the effort appears to have bipartisan support.

Although nearly all states now have laws on the books that address revenge porn, only a handful of states have passed laws that address AI-generated sexually explicit imagery to varying degrees. Victims in states with no legal protections have also taken to litigation. For example, a New Jersey teen is suing a classmate for sharing fake AI nudes.

The Los Angeles Times recently reported that the Beverly Hills Police Department is investigating a case in which students allegedly shared images that put real students' faces atop AI-generated nude bodies. But because the state's law against unlawful possession of obscene matter "knowing it depicts person under age of 18 years engaging in or simulating sexual conduct" does not explicitly mention AI-generated images, the article says it's unclear whether a crime has been committed.

The local school district voted on Friday to expel five students involved in the scandal, the LA Times reports.

Go here to see the original:

Florida teens arrested for creating deepfake AI nude images of classmates - The Verge

Posted in Ai

Revolutionize Your Business with AWS Generative AI Competency Partners | Amazon Web Services – AWS Blog

By Chris Dally, Business Designation Owner AWS By Victor Rojo, Technical Designation Lead AWS By Chris Butler, Sr. Product Manager, Launch AWS By Justin Freeman, Sr. Partner Development Specialist, Catalyst AWS

In today's rapidly evolving technology landscape, generative artificial intelligence (AI) is leading the charge in innovation, revolutionizing the way organizations work. According to a McKinsey report, generative AI could account for over 75% of total yearly AI value, with high expectations for major or disruptive change in industries. Additionally, the report states generative AI technologies have the potential to automate work activities that absorb 60-70% of employees' time.

With the ability to automate tasks, enhance productivity, and enable hyper-personalized customer experiences, businesses are seeking specialized expertise to build a successful generative AI strategy.

To support this need, we're excited to announce the AWS Generative AI Competency, an AWS Specialization that helps Amazon Web Services (AWS) customers more quickly adopt generative AI solutions and strategically position themselves for the future. AWS Generative AI Competency Partners provide a full range of services, tools, and infrastructure, with tailored solutions in areas like security, applications, and integrations to give customers flexibility and choice across models and technologies.

"Partners play an important role in supporting AWS customers leveraging our comprehensive suite of generative AI services. We are excited to recognize and highlight partners with proven customer success with generative AI on AWS through the AWS Generative AI Competency, making it easier for our customers to find and identify the right partners to support their unique needs." ~ Swami Sivasubramanian, Vice President of Database, Analytics and ML, AWS

According to Canalys, AWS is the first to launch a generative AI competency for partners. Because the competency validates partners' business and technical expertise, AWS customers are able to invest with greater confidence in generative AI solutions from these partners. This new competency is a critical entry point into the generative AI partner opportunity, which Canalys estimates will grow to US$158 billion by 2028.

"Generative AI has truly ushered in a new era of innovation and transformative value across both business and technology. A recent Canalys study found that 87% of customers rank partner specializations as a top-three selection criterion. With the AWS Generative AI Competency launch, we're helping customers take advantage of the capabilities that our technically validated Generative AI Partners have to offer." ~ Ruba Borno, Vice President of AWS Worldwide Channels and Alliances

Leveraging AI technologies such as Amazon Bedrock, Amazon SageMaker JumpStart, AWS Trainium, AWS Inferentia, and accelerated computing instances on Amazon Elastic Compute Cloud (Amazon EC2), AWS Generative AI Competency Partners have deep expertise building and deploying groundbreaking applications across industries, including healthcare and life sciences, media and entertainment, public sector, and financial services.

We invite you to explore the following AWS Generative AI Competency Launch Partner offerings recommended by AWS.

These AWS Partners have deep expertise in working with businesses to help them adopt and build a strategy for generative AI; build and test generative AI applications; train and customize foundation models; operate, support, and maintain generative AI applications and models; protect generative AI workloads; and define responsible AI principles and frameworks.

These AWS Partners utilize foundation models (FMs) and related technologies to automate domain-specific functions, enhancing customer differentiation across all business lines and operations. Partners fall into three categories: Generative AI applications, Foundation Models and FM-based Application Development, and Infrastructure and Data.

AWS Generative AI Competency Partners make it easier for customers to innovate with enterprise-grade security and privacy, foundation models, generative AI-powered applications, a data-first approach, and a high-performance, low-cost infrastructure.

Explore the AWS Generative AI Partners page to learn more.

AWS Partners with Generative AI offerings can learn more about becoming an AWS Competency Partner.

AWS Specialization Partners gain access to strategic and confidential content, including product roadmaps, feature release previews, and demos, as part of the AWS PartnerEquip event series. To attend live events in your region or tune in virtually, register for an upcoming session. In addition to AWS Specialization Program benefits, AWS Generative AI Competency Partners receive unique benefits such as bi-annual strategy sessions to aid joint sales motions. To learn more, review the AWS Specialization Program Benefits Guide in AWS Partner Central (login required).

AWS Partners looking to get their Generative AI offering validated through the AWS Competency Program must be validated or differentiated members of the Software or Services Path prior to applying.

To apply, please review the Program Guide and access the application in AWS Partner Central.

Read more from the original source:

Revolutionize Your Business with AWS Generative AI Competency Partners | Amazon Web Services - AWS Blog

Posted in Ai

Ability Summit 2024: Advancing accessibility with AI technology and innovation – The Official Microsoft Blog – Microsoft

Today we kick off the 14th Microsoft Ability Summit, an annual event to bring together thought leaders to discuss how we accelerate accessibility to help bridge the Disability Divide.

There are three key themes to this year's summit: Build, Imagine, and Include. Build invites us to explore how to build accessibly and inclusively by leaning on the insights of disabled talent. Imagine dives into best practices for architecting accessible buildings, events, content and products. And Include highlights the issues and opportunities AI presents for creators, developers and engineers.

Katy Jo Wright and Dave McCarthy discuss Katy Jo's journey living with the complex disability Chronic Lyme Disease. Get insights from deaf creator and performer Leila Hanaumi and from international accessibility leaders Sara Minkara, U.S. Special Advisor on International Disability Rights at the U.S. Department of State, and Stephanie Cadieux, Chief Accessibility Officer for the Government of Canada. And we'll be digging into mental health with singer, actor and mental health advocate Michelle Williams.

We'll also be launching a few things along the way.

Accessible technology is crucial to empowering the 1.3 billion-plus people with disabilities globally. With this new chapter of AI, the possibilities are growing, as is the responsibility to get it right. We are learning where AI can be impactful, from the potential to shorten the gap between thoughts and action, to making it easier to code and create. But there is more to do, and we will continue to leverage every tool in the technology toolbox to advance accessibility.

Today we'll be highlighting the latest technology and tools from Microsoft to help achieve this goal, including:

Technology can also help tackle long-standing challenges, like finding a cure for ALS (Motor Neuron Disease). With Azure, we are proudly supporting the ALS Therapy Development Institute (TDI) and Answer ALS to almost double the clinical and genomic data available for research. In 2021, Answer ALS provided open access to its research through an Azure Data Portal, Neuromine. This data has since enabled over 300 independent research projects around the world. The addition of ALS TDI's data from the ongoing ALS Research Collaborative (ARC) study will allow researchers to accelerate the journey to find a cure.

We will also be previewing some of our ongoing work using Custom Neural Voice to empower people with ALS and other speech disabilities to have their voice. We have been working with the community, including Team Gleason, for some time; we are committed to making sure this technology is used for good and plan to launch later in the year.


To build inclusively in an increasingly digital world, we need to protect fundamental rights and will be sharing partnerships advancing this across the community throughout the day.

This includes:

All through the Ability Summit, industry leaders will be sharing their learnings and best practices. Today we are posting four new Microsoft playbooks, sharing what we've learned from working on our physical, event and digital environments. These include a new Mental Health toolkit, created in partnership with Mental Health America, with tips for product makers building experiences that support people with mental health conditions, and an Accessible and Inclusive Workplace Handbook, with best practices for building an accessible campus from our Global Workplace Services team, which is responsible for our global building footprint, including the new Redmond headquarters campus.

Please join us to watch content on demand via http://www.aka.ms/AbilitySummit. Technical support is always available via Microsoft's Disability Answer Desk. Thank you for your partnership and commitment to building a more accessible future for people with disabilities around the world.

Tags: accessibility, AI, AI for Accessibility

See more here:

Ability Summit 2024: Advancing accessibility with AI technology and innovation - The Official Microsoft Blog - Microsoft

Posted in Ai

Sora AI Videos Easily Confused With Real Footage in Survey Test (EXCLUSIVE) – Variety

Consumers in the U.S. struggle to distinguish videos recorded by humans from those generated by OpenAI's text-to-video tool Sora, according to new HarrisX data provided exclusively to Variety Intelligence Platform (VIP+).

In a survey conducted weeks after the controversial software was first unveiled, most U.S. adults guessed wrong about whether AI or a person had created five of the eight videos they were shown.

Half of the videos were the Sora demonstration videos that have gone viral online, raising concerns from Hollywood to Capitol Hill over their production quality, including a drone view of waves crashing against the rugged cliffs along Big Sur's Garay Point Beach and historical footage of California during the Gold Rush.

Perhaps unsurprisingly, the HarrisX survey also revealed that strong majorities of respondents believed the U.S. government should enact regulation requiring that AI-generated content be labeled as such. They were equally emphatic about the need for regulation across all content formats, including videos, images, text, music, captions and sounds. Full results of the HarrisX survey can be found on VIP+.

In the survey, which was conducted online March 1-4 among more than 1,000 adults, respondents were shown four high-quality, photorealistic-looking sample videos generated by Sora, randomly interspersed with four real-world stock-footage clips shot on camera. In the case of the Big Sur video, 60% of respondents incorrectly guessed that a human had made it.

While Sora has yet to be released to the public, the OpenAI software has been the subject of much alarm, particularly in the entertainment industry, where the rapid evolution of video diffusion technology carries profound implications for the disruption of Hollywood's core production capabilities (though Sora will likely be fairly limited at launch).

Moreover, AI video has raised broader questions about its deepfake potential, especially in an election year.

When presented with the AI-generated videos and informed they were created by Sora, respondents were asked how they felt. Reactions were a mix of positive and negative, ranging from curious (28%), uncertain (27%) and open-minded (25%) to anxious (18%), inspired (18%) and fearful (2%).

"When you try to change the world quickly, the world moves quickly to rein you in along predictable lines," said Dritan Nesho, CEO and head of research at HarrisX. "That's exactly what we're seeing with generative AI: as its sophistication grows via new tools like Sora, so do concerns about its impact and calls for the proper labeling and regulation of the technology. The nascent industry must do more both to create guardrails and to properly communicate with the wider public."

VIP+ subscribers can dig deeper to learn more about ...

See the rest here:

Sora AI Videos Easily Confused With Real Footage in Survey Test (EXCLUSIVE) - Variety

Posted in Ai