Chinese firms make headway in producing high bandwidth memory for AI chipsets – Yahoo! Voices

By Fanny Potkin and Eduardo Baptista

SINGAPORE/BEIJING (Reuters) - Two Chinese chipmakers are in the early stages of producing high bandwidth memory (HBM) semiconductors used in artificial intelligence chipsets, according to sources and documents.

The progress in HBM - even if only in older versions of HBM - represents a major step forward in China's efforts to reduce its reliance on foreign suppliers amid tensions with Washington that have led to restrictions on U.S. exports of advanced chipsets to Chinese firms.

CXMT, China's top manufacturer of DRAM chips, has developed sample HBM chips in partnership with chip packaging and testing company Tongfu Microelectronics, according to three people briefed on the matter. The chips are being shown to clients, two of them said.

Tongfu Microelectronics' shares surged 8% in Wednesday trade.

In another example, Wuhan Xinxin is building a factory that will be able to produce 3,000 12-inch HBM wafers a month, with construction slated to have begun in February this year, documents from corporate database Qichacha show.

CXMT and other Chinese chip firms have also been holding regular meetings with South Korean and Japanese semiconductor equipment firms to buy tools to develop HBM, said two of the people.

The sources were not authorised to speak on the matter and declined to be identified. Hefei-based CXMT, or ChangXin Memory Technologies, and Tongfu Microelectronics did not respond to requests for comment.

Wuhan Xinxin, which has flagged to regulators that it is interested in going public, and its parent company did not respond to requests for comment. The parent company is also the parent of NAND memory specialist YMTC or Yangtze Memory Technologies. YMTC said it did not have the capability to mass produce HBM.

Both CXMT and Wuhan Xinxin are private companies which have received local government funding to advance technologies as China pours capital into developing its chip sector.

Wuhan's local government also did not respond to requests for comment.

Separately, Chinese tech behemoth Huawei - which the U.S. has deemed a national security threat and is subject to sanctions - is aiming to produce HBM2 chips in partnership with other domestic companies by 2026, according to one of the sources and a separate person with knowledge of the matter.

The Information reported in April that a Huawei-led group of companies aiming to make HBM includes Fujian Jinhua Integrated Circuit, a memory chip maker also under U.S. sanctions.

Huawei, which has seen demand soar for its Ascend AI chips, declined to comment. It is not clear where Huawei procures HBM. Fujian Jinhua did not respond to a request for comment.

LONG JOURNEY AHEAD

HBM - a type of DRAM standard first produced in 2013 in which chips are vertically stacked to save space and reduce power consumption - is ideal for processing the massive amounts of data produced by complex AI applications, and demand has soared amid the AI boom.

The market for HBM is dominated by South Korea's SK Hynix - until recently the sole HBM supplier to AI chip giant Nvidia, according to analysts - as well as Samsung and, to a lesser extent, U.S. firm Micron Technology. All three manufacture the latest standard - HBM3 chips - and are working to bring fifth-generation HBM, or HBM3E, to customers this year.

China's efforts are currently focused on HBM2, according to two of the sources and a separate person with direct knowledge of the matter.

The U.S. has not put restrictions on exports of HBM chips per se, but HBM3 chips are made using American technology that many Chinese firms, including Huawei, are barred from accessing as part of the curbs.

Nori Chiou, an investment director at White Oak Capital and a former analyst who looked at the IT sector, estimates that Chinese chipmakers lag their global rivals by a decade in HBM.

"China faces a considerable journey ahead, as it currently lacks the competitive edge to rival its Korean counterparts even in the realm of traditional memory markets," he said.

"Nonetheless, (CXMT's) collaboration with Tongfu represents a significant opportunity for China to advance its capabilities in both memory and advanced packaging technologies within the HBM market."

Patents filed by CXMT, Tongfu and Huawei indicate that plans to develop HBM domestically date back at least three years, to when China's chip industry was increasingly becoming the target of U.S. export controls.

CXMT has filed almost 130 patents in the United States, China, and Taiwan for different technical issues related to the manufacturing and functionalities of HBM chips, according to Anaqua's AcclaimIP database. Of those, 14 were published in 2022, 46 in 2023, and 69 in 2024.

One Chinese patent, published last month, shows the company is looking at advanced packaging techniques like hybrid bonding to create a more powerful HBM product. A separate filing shows that CXMT is also investing in developing technology needed to create HBM3.

(Reporting by Fanny Potkin in Singapore and Eduardo Baptista in Beijing; Additional reporting by Heekyong Yang and Joyce Lee in Seoul; Editing by Brenda Goh and Edwina Gibbs)


Google will let you create personalized AI chatbots – The Verge

Google is adding a bunch of new features to its Gemini AI, and one of the most powerful is a personalization option called Gems that allows users to create custom versions of the Gemini assistant with varying personalities.

Gems lets you create iterations of chatbots that can help you with certain tasks and retain specific characteristics, kind of like making your own bot in Character.AI, the service that lets you talk to virtualized versions of popular characters and celebrities or even a fake psychiatrist. Google says you can make Gemini your gym buddy, sous-chef, coding partner, creative writing guide, or anything you can dream up. Gems feels similar to OpenAI's GPT Store, which lets you make customized ChatGPT chatbots.

You can set up a gem by telling Gemini what to do and how to respond. For instance, you can tell it to be your running coach, provide you with a daily run schedule, and to sound upbeat and motivating. Then, in one click, Gemini will make a gem for you as you've described. The Gems feature is available soon to Gemini Advanced subscribers.


Google now offers ‘web’ search and an AI opt-out button – The Verge

This is not a joke: Google will now let you perform a "web" search. It's rolling out web searches now, and in my early tests on desktop, it's looking like it could be an incredibly popular change to Google's search engine.

The optional setting filters out almost all the other blocks of content that Google crams into a search results page, leaving you with links and text, and Google confirms to The Verge that it will block the company's new AI Overviews as well.

"Isn't every search a web search? What is Google Search if not the web?" you might rightfully ask.

But independent websites like HouseFresh and Retro Dodo have pointed out how their businesses have gotten buried deep beneath sponsored posts, Quora advice from 2016, best-of lists from big media sites, and "no less than 64 Google Shopping product listings," in the words of HouseFresh managing editor Gisele Navarro.

Now, with one click, a bunch of those blockers seemingly disappear.

Search for "best home arcade cabinets," one of Retro Dodo's bread-and-butter queries, and it's no longer buried; it appears on page 1.

HouseFresh still doesn't get page 1 billing for "best budget air purifiers," but it's higher up, and you're no longer assaulted by an eye-popping number of Google Shopping results as you scroll.

If you search for Wyze cameras, you'll now get a hint about their lax security practices on page 2 instead of page 3.

I'm not sure it's an improvement for every search, partly because Google's modules can be useful, and partly because the company isn't giving up on self-promotion just because you press the "web" button. Here, you can see Google still gives itself top billing for "Google AR glasses" either way, and its Top Stories box is arguably a helpful addition.

Which of these results helps you better learn about the Maui wildfires? I'm genuinely not sure.

And when you ask Google who wrote The Lord of the Rings, is there any reason you wouldn't want Google's full knowledge graph at your disposal?

Admittedly, it's an answer that Google isn't likely to get wrong.

As far as I can tell, the order of Google's search results seems to be the same regardless of whether you pick "web" or "All." It doesn't block links to YouTube videos or Reddit posts or SEO factories... and I still saw (smaller!) sponsored ads from Amazon and Verkada and Wyze push down my search results.

"Web" is just a filter that removes Google's knowledge panels, featured snippets, and Shopping modules, and Google's new AI Overviews as well, Google spokesperson Ned Adriance confirms to The Verge. "AI Overviews are a feature in Search, just like a knowledge panel or a featured snippet, so they will not appear when someone uses the web filter for a search."

It doesn't magically fix some of the issues facing Google's search engine. But it is a giant opt-out button for people who've been aggravated by some of the company's seemingly self-serving moves, and a way to preserve the spirit of the "10 blue links" even as Google's AI efforts try to leave them behind.

Danny Sullivan, Google's Public Liaison for Search, says he's been asking for something like this for years.

As a next step, I'd like to see Google promote the button to make it more visible. Right now, the company warns that it may not always appear in the primary carousel on desktop at all; you may need to click "More" first and then select "Web."

Here's hoping this all works well on mobile, too; I'm not seeing it on my phone yet.


Google’s Gemini AI is coming to the sidebar in Docs, Drive, Gmail, and more – The Verge

The right sidebar in Google's Workspace apps is now the center for a lot of Google's AI plans. The company announced today at its I/O developer conference that it is bringing Gemini 1.5 Pro, its latest mainstream language model, to the sidebar in Google Docs, Sheets, Slides, Drive, and Gmail. It'll be the same virtual assistant across all of those apps, and the key bit is that it'll know about everything you have saved everywhere.

The idea seems to be to use Gemini to connect all the Workspace apps more seamlessly. According to Aparna Pappu, the general manager and VP of Workspace, Google users have long been trying to hack Gemini to do complicated, multi-app things: send an email based on the data they're looking at in Sheets or add a reminder to respond to the email they're currently looking at. And since Gemini has access to all of your documents, emails, and files, it can answer questions without forcing you to switch apps.

In a briefing with press ahead of I/O, Pappu gave the example of searching for information about a New York Knicks game. "I could ask something like, what time do doors open for the Knicks game, and I'm not looking for information from the web, which is going to give me generic information. I want information from my ticket, which happens to be a PDF in my email somewhere." Gemini can find that information, and Pappu said early users are quickly learning to use it as a way to find things more quickly.

Among early testers, Pappu said, a popular use case has been receipts. Rather than dig through your email, your files, and everything else, you can just ask Gemini to find and organize all your receipts from across your Google account. "And let's say you hit on the prompt that says, 'Put my expenses in a Drive folder,'" she says; from there, Gemini can put them all into a Sheet.

In keeping with so many of Google's announcements at I/O, the Workspace team seems to be focused on using Gemini to help you get stuff done and do stuff on your behalf. Pappu talked about how popular Gmail's "Help me write" feature has been, especially on mobile, where people don't want to type as much. By grounding the model in your data and not the entire internet, Google hopes it can also begin to mitigate the model's tendency to hallucinate and make other mistakes.

At least for now, the new sidebar isn't for everyone: it's available now to some early testers and will roll out to paid Gemini subscribers next month. But Pappu did say Google is looking at how it could use on-device models to bring the capabilities to more users over time, so your days of hunting through Google Drive to find that old PDF may finally be coming to an end. Eventually.


Project Astra is the future of AI at Google – The Verge

"I've had this vision in my mind for quite a while," says Demis Hassabis, the head of Google DeepMind and the leader of Google's AI efforts. Hassabis has been thinking about and working on AI for decades, but four or five years ago, something really crystallized. One day soon, he realized, "we would have this universal assistant. It's multimodal, it's with you all the time." Call it the Star Trek Communicator; call it the voice from Her; call it whatever you want. "It's that helper," Hassabis continues, "that's just useful. You get used to it being there whenever you need it."

At Google I/O, the company's annual developer conference, Hassabis showed off a very early version of what he hopes will become that universal assistant. Google calls it Project Astra, and it's a real-time, multimodal AI assistant that can see the world, knows what things are and where you left them, and can answer questions or help you do almost anything. In an incredibly impressive demo video that Hassabis swears is not faked or doctored in any way, an Astra user in Google's London office asks the system to identify a part of a speaker, find their missing glasses, review code, and more. It all works practically in real time and in a very conversational way.

Astra is just one of many Gemini announcements at this year's I/O. There's a new model, called Gemini 1.5 Flash, designed to be faster for common tasks like summarization and captioning. Another new model, called Veo, can generate video from a text prompt. Gemini Nano, the model designed to be used locally on devices like your phone, is supposedly faster than ever as well. The context window for Gemini Pro, which refers to how much information the model can consider in a given query, is doubling to 2 million tokens, and Google says the model is better at following instructions than ever. Google's making fast progress both on the models themselves and on getting them in front of users.

Going forward, Hassabis says, the story of AI will be less about the models themselves and all about what they can do for you. And that story is all about agents: bots that don't just talk with you but actually accomplish stuff on your behalf. "Our history in agents is longer than our generalized model work," he says, pointing to the game-playing AlphaGo system from nearly a decade ago. Some of those agents, he imagines, will be ultra-simple tools for getting things done, while others will be more like collaborators and companions. "I think it may even be down to personal preference at some point," he says, "and understanding your context."

Astra, Hassabis says, is much closer than previous products to the way a true real-time AI assistant ought to work. When Gemini 1.5 Pro, the latest version of Google's mainstream large language model, was ready, Hassabis says he knew the underlying tech was good enough for something like Astra to begin to work well. But the model is only part of the product. "We had components of this six months ago," he says, "but one of the issues was just speed and latency. Without that, the usability isn't quite there." So, for six months, speeding up the system has been one of the team's most important jobs. That meant improving the model but also optimizing the rest of the infrastructure to work well and at scale. Luckily, Hassabis says with a laugh, "That's something Google does very well!"

A lot of Google's AI announcements at I/O are about giving you more and easier ways to use Gemini. A new product called Gemini Live is a voice-only assistant that lets you have easy back-and-forth conversations with the model, interrupting it when it gets long-winded or calling back to earlier parts of the conversation. A new feature in Google Lens allows you to search the web by shooting and narrating a video. A lot of this is enabled by Gemini's large context window, which means it can access a huge amount of information at a time, and Hassabis says it's crucial to making it feel normal and natural to interact with your assistant.

Know who agrees with that assessment, by the way? OpenAI, which has been talking about AI agents for a while now. In fact, the company demoed a product strikingly similar to Gemini Live barely an hour after Hassabis and I chatted. The two companies are increasingly fighting for the same territory and seem to share a vision for how AI might change your life and how you might use it over time.

How exactly will those assistants work, and how will you use them? Nobody knows for sure, not even Hassabis. One thing Google is focused on right now is trip planning: it built a new tool for using Gemini to build an itinerary for your vacation that you can then edit in tandem with the assistant. There will eventually be many more features like that. Hassabis says he's bullish on phones and glasses as key devices for these agents but also says there is probably room for some exciting form factors. Astra is still in an early prototype phase and only represents one way you might want to interact with a system like Gemini. The DeepMind team is still researching how best to bring multimodal models together and how to balance ultra-huge general models with smaller and more focused ones.

We're still very much in the "speeds and feeds" era of AI, in which every incremental model matters and we obsess over parameter sizes. But pretty quickly, at least according to Hassabis, we're going to start asking different questions about AI. Better questions. Questions about what these assistants can do, how they do it, and how they can make our lives better. Because the tech is a long way from perfect, but it's getting better really fast.


Google demos out AI video generator Veo with the help of Donald Glover – Mashable

Google, with the help of creative renaissance man Donald Glover, has demoed an AI video generator to compete with OpenAI's Sora. The model is called Veo, and while no clear launch date or rollout plan has been announced, the demo does appear to show a Sora-like product, apparently capable of generating high-quality, convincing video.

What's "cool" about Veo? "You can make a mistake faster," Glover said in a video shown during Google's I/O 2024 livestream. "That's all you really want at the end of the day, at least in art: just to make mistakes fast."


Speaking onstage in Hawaii at Google I/O, Google DeepMind CEO Demis Hassabis said, "Veo creates high quality 1080p videos from text, image and video prompts." This makes Veo the same type of tool, with the same resolution, as Sora on its highest setting. A slider shown in the demo shows a Veo video's length being stretched out to a little over one minute, also the approximate length of a Sora video.

Since Veo and Sora are both unreleased products, there's very little use trying to compare them in detail at this point. However, according to Hassabis, the interface will allow Veo users to "further edit your videos using additional prompts." This would be a function that Sora doesn't currently have, according to creators who have been given access.


What was Veo trained on? That's not currently clear. About a month ago, YouTube CEO Neal Mohan told Bloomberg that if OpenAI used YouTube videos to train Sora, that would be a "clear violation" of the YouTube terms of service. However, YouTube's parent company Alphabet also owns Google, which made Veo. Mohan strongly implied in that Bloomberg interview that YouTube does feed content to Google's AI models, but only, he claims, when users sign off on it.

What we do know about the creation of Veo is that, according to Hassabis, this model is the culmination of Google and DeepMind's many similar projects, including DeepMind's Generative Query Network (GQN) research published back in 2018, last year's VideoPoet, Google's rudimentary video generator Phenaki, and Google's Lumiere, which was demoed earlier this year.

Glover's specific AI-enabled filmmaking project hasn't been announced. In the video at I/O, Glover says he's "been interested in AI for a couple of years now," and that he reached out to Google, apparently not the other way around. "We got in contact with some of the people at Google and they had been working on something of their own, so we're all meeting," Glover says in Google's Veo demo video.

There's currently no way for the general public to try Veo, but there is a waitlist signup page.


Today’s AI models are impressive. Teams of them will be formidable – The Economist

On May 13th OpenAI unveiled its latest model, GPT-4o. Mira Murati, the company's chief technology officer, called it "the future of interaction between ourselves and the machines," because users can now speak to the AI and it will talk back in an expressive, human-like way.

The upgrade is part of wider moves across the tech industry to make chatbots and other artificial-intelligence, or AI, products into more useful and engaging assistants for everyday life. Show GPT-4o pictures or videos of art or food that you enjoy and it could probably furnish you with a list of museums, galleries and restaurants you might like. But it still has some way to go before it can become a truly useful AI assistant. Ask the model to plan a last-minute trip to Berlin for you based on your leisure preferences (complete with details of which order to do everything in, given how long each activity takes and how far apart they are, and which train tickets to buy, all within a set budget) and it will disappoint.


The SF Bay Area Has Become The Undisputed Leader In AI Tech And Funding Dollars – Crunchbase News

There's been much talk of a resurgent San Francisco with the new technology wave of artificial intelligence washing over the software world. Indeed, Crunchbase funding data, as well as interviews with startup investors and real estate industry professionals, show the San Francisco Bay Area has become the undisputed epicenter of artificial intelligence.

Last year, more than 50% of all global venture funding for AI-related startups went to companies headquartered in the Bay Area, Crunchbase data shows, as a cluster of talent congregates in the region.

Beginning in Q1 2023, when OpenAI's ChatGPT reached 100 million users within months of launching, the amount raised by Bay Area startups in AI started trending up. That accelerated with OpenAI raising $10 billion from Microsoft, marking the largest single funding deal ever for an AI foundation model company. In that quarter, more than 75% of AI funding went to San Francisco Bay Area startups.

AI-related companies based in the Bay Area went on to raise more than $27 billion in 2023, up from $14 billion in 2022, when the region's companies raised 29% of all AI funding.

From a deal count perspective, Bay Area companies raised 17% of global rounds in this sector in 2023, making the region the leading metro area in the U.S. That is an increase from 13% in 2022.

The figure also represents more than a third of AI deal counts in the U.S., and means the Bay Area alone had more AI-related startup funding deals than all countries outside of the U.S.

Leading Bay Area-based foundation model companies OpenAI, Anthropic and Inflection AI have each raised more than $1 billion, in some cases much more, and have established major real estate footprints in San Francisco.

OpenAI has closed on 500,000 square feet of office space in the city's Mission Bay district, and Anthropic around 230,000 square feet in the Financial District.

"From a leasing standpoint, [AI] is the bright spot in San Francisco right now," said Derek Daniels, a regional director of research in San Francisco for commercial real estate brokerage Colliers, who has been following the trends closely.

By contrast, big tech has been pulling back and reassessing its space needs, he said.

According to Daniels, the city's commercial real estate market bottomed out in the second half of 2023. While the San Francisco office space market still faces challenges, there is quality sublet space, which is also seeing some demand from smaller teams, he said. And some larger tenants who have been out of the picture for office space of 100,000 square feet or more are starting to come back.

Fifty percent of startups that graduated from the prestigious startup accelerator Y Combinator's April batch were AI-focused companies.

"Many of the founders who came to SF for the batch have decided to make SF home for themselves, and for their companies," Garry Tan, president and CEO of Y Combinator, said in an announcement of the accelerator's winter 2024 batch.

YC itself has expanded its office space in San Francisco's Dogpatch neighborhood, adjacent to Mission Bay. "We are turning San Francisco's doom loop into a boom loop," Tan added.

Of the batch of 34 companies that graduated in March from 500 Global, another accelerator, 60% are in AI. Its next batch is closer to 80% AI-focused, said Clayton Bryan, partner and head of the global accelerator fund.

Around half of the companies in the recently graduated 500 Global batch are from outside the U.S., including Budapest, London and Singapore. But many want to set up shop in the Bay Area for the density of talent and the know-how on offer at hackathons, dinners and other events, he said.

Startup investors also see the Bay Area as the epicenter for AI.

"In the more recent crop of AI companies there is a real center of gravity in the Bay Area," said Andrew Ferguson, a partner at Databricks Ventures, which has been actively investing in AI startups such as Perplexity AI, Unstructured Technologies, Anomalo, Cleanlab and Glean.

"The Bay Area does not have a lock on good talent. But there's certainly a nucleus of very strong talent," he said.

Databricks Ventures, the venture arm of AI-enhanced data analytics unicorn Databricks, has made five investments in AI companies in the Bay Area in the past six months. In total, the firm has made around 25 portfolio company investments since the venture arm was founded in 2022, largely in the modern data stack.

Freed from in-person office requirements during the pandemic, many young tech workers decamped from the expensive Bay Area to travel or work remotely in less expensive locales. Now, some are moving back to join the San Francisco AI scene.

"Many young founders are just moving back to the Bay Area, even if they were away for the last couple of years, in order to be a part of immersing themselves in the middle of the scene," said Stephanie Zhan, a partner at Sequoia Capital. "It's great for networking, for hiring, for learning about what's going on, what other products people are building."

Coincidentally, Sequoia Capital subleased space to OpenAI in its early days, in an office above Dandelion Chocolates in San Francisco's Mission District.

Zhan presumes that many nascent AI companies aren't yet showing up in funding data, as they are still ideating or at pre-seed or seed funding, and will show up in future funding cycles.

While the Bay Area dominates for AI funding, it's important to note the obvious: Much of that comes from a few massive deals to the large startups based in the region, including OpenAI, Anthropic and Inflection AI.

There is a lot of AI startup and research activity elsewhere as well, Zhan noted, with researchers coming out of universities around the globe, including École Polytechnique in Paris, ETH Zürich, and the University of Cambridge and Oxford University in the U.K., to name a few. Lead researchers from the University of Toronto and the University of Waterloo have also fed into generative AI technology in San Francisco and in Canada, Bryan said.

While the U.S. has a strong lead, countries that are leading funding totals for AI-related startups outside of the U.S. are China, the U.K., Germany, Canada and France, according to Crunchbase data.

London-based Stability AI kicked off the generative AI moment before ChatGPT with its text-to-image models in August 2022. Open source model developer Mistral AI, based in Paris, has raised large amounts led by Bay Area-based venture capital firms Lightspeed Venture Partners and Andreessen Horowitz.

And in China, foundation model company Moonshot AI based in Beijing has raised more than $1 billion.

"Still, the center of gravity in the Bay Area is driven by teams coming out of Big Tech or UC Berkeley and Stanford University who have a history of turning those ideas into startups," said Ferguson.

The unique congregation of Big Tech companies, research, talent and venture capital in the Bay Area has placed the region at the forefront of AI.

"The valuation of the AI companies and some of the revenue by the top end of the AI companies is driving that population migration," said 500 Global's Bryan. At a recent AI event at Hana House in Palo Alto, California, he found it interesting that most people were not originally from the Bay Area. "Everyone now wants a direct piece or an indirect piece of that value that is going into AI."




Android is getting an AI-powered scam call detection feature – The Verge

Google is working on new protections to help prevent Android users from falling victim to phone scams. During its I/O developer conference on Tuesday, Google announced that it's testing a new call monitoring feature that will warn users if the person they're talking to is likely attempting to scam them and encourage them to end such calls.

Google says the feature utilizes Gemini Nano, a reduced version of the company's Gemini large language model for Android devices that can run locally and offline, to look for fraudulent language and other conversation patterns typically associated with scams. Users will then receive real-time alerts during calls where these red flags are present.

Some examples of what could trigger these alerts include calls from bank representatives who make requests that real banks are unlikely to make, such as asking for personal information like your passwords or card PINs, requesting payments via gift cards, or asking users to urgently transfer money to them. These new protections are entirely on-device, so the conversations monitored by Gemini Nano will remain private, according to Google.

There's no word on when the scam detection feature will be available, but Google says users will need to opt in to utilize it and that it'll share more information later this year.

So, while the pool of users who might find such tech useful is vast, compatibility could limit its applicability. Gemini Nano is currently supported only on the Google Pixel 8 Pro and Samsung Galaxy S24 series, according to its developer support page.


GPT-4o delivers human-like AI interaction with text, audio, and vision integration – AI News

OpenAI has launched its new flagship model, GPT-4o, which seamlessly integrates text, audio, and visual inputs and outputs, promising to enhance the naturalness of machine interactions.

GPT-4o, where the "o" stands for "omni," is designed to cater to a broader spectrum of input and output modalities. "It accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs," OpenAI announced.

Users can expect a response time as quick as 232 milliseconds, with an average of 320 milliseconds, mirroring human conversational speed.

The introduction of GPT-4o marks a leap from its predecessors by processing all inputs and outputs through a single neural network. This approach enables the model to retain critical information and context that were previously lost in the separate model pipeline used in earlier versions.

Prior to GPT-4o, Voice Mode could handle audio interactions with latencies of 2.8 seconds for GPT-3.5 and 5.4 seconds for GPT-4. The previous setup involved three distinct models: one for transcribing audio to text, another for textual responses, and a third for converting text back to audio. This segmentation led to loss of nuances such as tone, multiple speakers, and background noise.

As an integrated solution, GPT-4o boasts notable improvements in vision and audio understanding. It can perform more complex tasks such as harmonising songs, providing real-time translations, and even generating outputs with expressive elements like laughter and singing. Examples of its broad capabilities include preparing for interviews, translating languages on the fly, and generating customer service responses.

Nathaniel Whittemore, Founder and CEO of Superintelligent, commented: "Product announcements are going to inherently be more divisive than technology announcements because it's harder to tell if a product is going to be truly different until you actually interact with it. And especially when it comes to a different mode of human-computer interaction, there is even more room for diverse beliefs about how useful it's going to be."

"That said, the fact that there wasn't a GPT-4.5 or GPT-5 announced is also distracting people from the technological advancement that this is a natively multimodal model. It's not a text model with a voice or image addition; it is a multimodal token in, multimodal token out. This opens up a huge array of use cases that are going to take some time to filter into the consciousness."

GPT-4o matches GPT-4 Turbo performance levels in English text and coding tasks but significantly outperforms it in non-English languages, making it a more inclusive and versatile model. It sets a new benchmark in reasoning with a high score of 88.7% on 0-shot CoT MMLU (general knowledge questions) and 87.2% on the 5-shot no-CoT MMLU.

The model also excels in audio and translation benchmarks, surpassing previous state-of-the-art models like Whisper-v3. In multilingual and vision evaluations, it demonstrates superior performance, enhancing OpenAI's multilingual, audio, and vision capabilities.

OpenAI has built robust safety measures into GPT-4o by design, incorporating techniques to filter training data and refining behaviour through post-training safeguards. The model has been assessed through a Preparedness Framework and complies with OpenAI's voluntary commitments. Evaluations in areas like cybersecurity, persuasion, and model autonomy indicate that GPT-4o does not exceed a Medium risk level in any category.

Further safety assessments involved extensive external red teaming with over 70 experts in various domains, including social psychology, bias, fairness, and misinformation. This comprehensive scrutiny aims to mitigate risks introduced by the new modalities of GPT-4o.

Starting today, GPT-4o's text and image capabilities are available in ChatGPT, including a free tier and extended features for Plus users. A new Voice Mode powered by GPT-4o will enter alpha testing within ChatGPT Plus in the coming weeks.

Developers can access GPT-4o through the API for text and vision tasks, benefiting from its doubled speed, halved price, and enhanced rate limits compared to GPT-4 Turbo.

OpenAI plans to expand GPT-4os audio and video functionalities to a select group of trusted partners via the API, with broader rollout expected in the near future. This phased release strategy aims to ensure thorough safety and usability testing before making the full range of capabilities publicly available.

"It's hugely significant that they've made this model available for free to everyone, as well as making the API 50% cheaper. That is a massive increase in accessibility," explained Whittemore.

OpenAI invites community feedback to continuously refine GPT-4o, emphasising the importance of user input in identifying and closing gaps where GPT-4 Turbo might still outperform it.

(Image Credit: OpenAI)



Apple's AI research suggests features are coming for Siri, artists, and more. – The Verge

It would be easy to think that Apple is late to the game on AI. Since late 2022, when ChatGPT took the world by storm, most of Apple's competitors have fallen over themselves to catch up. While Apple has certainly talked about AI and even released some products with AI in mind, it seemed to be dipping a toe in rather than diving in headfirst.

But over the last few months, rumors and reports have suggested that Apple has, in fact, just been biding its time, waiting to make its move. There have been reports in recent weeks that Apple is talking to both OpenAI and Google about powering some of its AI features, and the company has also been working on its own model, called Ajax.

If you look through Apple's published AI research, a picture starts to develop of how Apple's approach to AI might come to life. Now, obviously, making product assumptions based on research papers is a deeply inexact science: the line from research to store shelves is winding and full of potholes. But you can at least get a sense of what the company is thinking about and how its AI features might work when Apple starts to talk about them at its annual developer conference, WWDC, in June.

I suspect you and I are hoping for the same thing here: Better Siri. And it looks very much like Better Siri is coming! There's an assumption in a lot of Apple's research (and in a lot of the tech industry, the world, and everywhere) that large language models will immediately make virtual assistants better and smarter. For Apple, getting to Better Siri means making those models as fast as possible and making sure they're everywhere.

In iOS 18, Apple plans to have all its AI features running on an on-device, fully offline model, Bloomberg recently reported. It's tough to build a good multipurpose model even when you have a network of data centers and thousands of state-of-the-art GPUs; it's drastically harder to do it with only the guts inside your smartphone. So Apple's having to get creative.

In a paper called "LLM in a flash: Efficient Large Language Model Inference with Limited Memory" (all these papers have really boring titles but are really interesting, I promise!), researchers devised a system for storing a model's data, which is usually stored in your device's RAM, on the SSD instead. "We have demonstrated the ability to run LLMs up to twice the size of available DRAM [on the SSD]," the researchers wrote, "achieving an acceleration in inference speed by 4-5x compared to traditional loading methods in CPU, and 20-25x in GPU." By taking advantage of the most inexpensive and available storage on your device, they found, the models can run faster and more efficiently.

Apple's researchers also created a system called EELBERT that can essentially compress an LLM into a much smaller size without making it meaningfully worse. Their compressed take on Google's BERT model was 15 times smaller, only 1.2 megabytes, and saw only a 4 percent reduction in quality. It did come with some latency tradeoffs, though.

In general, Apple is pushing to solve a core tension in the model world: the bigger a model gets, the better and more useful it can be, but also the more unwieldy, power-hungry, and slow it can become. Like so many others, the company is trying to find the right balance between all those things while also looking for a way to have it all.

A lot of what we talk about when we talk about AI products is virtual assistants: assistants that know things, that can remind us of things, that can answer questions, and that can get stuff done on our behalf. So it's not exactly shocking that a lot of Apple's AI research boils down to a single question: what if Siri was really, really, really good?

A group of Apple researchers has been working on a way to use Siri without needing a wake word at all; instead of listening for "Hey Siri" or "Siri," the device might be able to simply intuit whether you're talking to it. "This problem is significantly more challenging than voice trigger detection," the researchers acknowledged, "since there might not be a leading trigger phrase that marks the beginning of a voice command." That might be why another group of researchers developed a system to more accurately detect wake words. Another paper trained a model to better understand rare words, which are often not well understood by assistants.

In both cases, the appeal of an LLM is that it can, in theory, process much more information much more quickly. In the wake-word paper, for instance, the researchers found that by not trying to discard all unnecessary sound but, instead, feeding it all to the model and letting it process what does and doesn't matter, the wake word worked far more reliably.

Once Siri hears you, Apple's doing a bunch of work to make sure it understands and communicates better. In one paper, it developed a system called STEER (which stands for Semantic Turn Extension-Expansion Recognition, so we'll go with STEER) that aims to improve your back-and-forth communication with an assistant by trying to figure out when you're asking a follow-up question and when you're asking a new one. In another, it uses LLMs to better understand ambiguous queries to figure out what you mean no matter how you say it. "In uncertain circumstances," they wrote, "intelligent conversational agents may need to take the initiative to reduce their uncertainty by asking good questions proactively, thereby solving problems more effectively." Another paper aims to help with that, too: researchers used LLMs to make assistants less verbose and more understandable when they're generating answers.

Whenever Apple does talk publicly about AI, it tends to focus less on raw technological might and more on the day-to-day stuff AI can actually do for you. So, while there's a lot of focus on Siri (especially as Apple looks to compete with devices like the Humane AI Pin, the Rabbit R1, and Google's ongoing smashing of Gemini into all of Android), there are plenty of other ways Apple seems to see AI being useful.

One obvious place for Apple to focus is on health: LLMs could, in theory, help wade through the oceans of biometric data collected by your various devices and help you make sense of it all. So, Apple has been researching how to collect and collate all of your motion data, how to use gait recognition and your headphones to identify you, and how to track and understand your heart rate data. Apple also created and released "the largest multi-device multi-location sensor-based human activity dataset" available after collecting data from 50 participants with multiple on-body sensors.

Apple also seems to imagine AI as a creative tool. For one paper, researchers interviewed a bunch of animators, designers, and engineers and built a system called Keyframer that "enable[s] users to iteratively construct and refine generated designs." Instead of typing in a prompt and getting an image, then typing another prompt to get another image, you start with a prompt but then get a toolkit to tweak and refine parts of the image to your liking. You could imagine this kind of back-and-forth artistic process showing up anywhere from the Memoji creator to some of Apple's more professional artistic tools.

In another paper, Apple describes a tool called MGIE that lets you edit an image just by describing the edits you want to make. ("Make the sky more blue," "make my face less weird," "add some rocks," that sort of thing.) "Instead of brief but ambiguous guidance, MGIE derives explicit visual-aware intention and leads to reasonable image editing," the researchers wrote. Its initial experiments weren't perfect, but they were impressive.

We might even get some AI in Apple Music: for a paper called "Resource-constrained Stereo Singing Voice Cancellation," researchers explored ways to separate voices from instruments in songs, which could come in handy if Apple wants to give people tools to, say, remix songs the way you can on TikTok or Instagram.

Over time, I'd bet this is the kind of stuff you'll see Apple lean into, especially on iOS. Some of it Apple will build into its own apps; some it will offer to third-party developers as APIs. (The recent Journaling Suggestions feature is probably a good guide to how that might work.) Apple has always trumpeted its hardware capabilities, particularly compared to your average Android device; pairing all that horsepower with on-device, privacy-focused AI could be a big differentiator.

But if you want to see the biggest, most ambitious AI thing going at Apple, you need to know about Ferret. Ferret is a multimodal large language model that can take instructions, focus on something specific you've circled or otherwise selected, and understand the world around it. It's designed for the now-normal AI use case of asking a device about the world around you, but it might also be able to understand what's on your screen. In the Ferret paper, researchers show that it could help you navigate apps, answer questions about App Store ratings, describe what you're looking at, and more. This has really exciting implications for accessibility but could also completely change the way you use your phone and your Vision Pro and / or smart glasses someday.

We're getting way ahead of ourselves here, but you can imagine how this would work with some of the other stuff Apple is working on. A Siri that can understand what you want, paired with a device that can see and understand everything that's happening on your display, is a phone that can literally use itself. Apple wouldn't need deep integrations with everything; it could simply run the apps and tap the right buttons automatically.

Again, all this is just research, and for all of it to work well starting this spring would be a legitimately unheard-of technical achievement. (I mean, you've tried chatbots; you know they're not great.) But I'd bet you anything we're going to get some big AI announcements at WWDC. Apple CEO Tim Cook even teased as much in February, and basically promised it on this week's earnings call. And two things are very clear: Apple is very much in the AI race, and it might amount to a total overhaul of the iPhone. Heck, you might even start willingly using Siri! And that would be quite the accomplishment.


More details of the AI upgrades heading to iOS 18 have leaked – TechRadar

Artificial intelligence is clearly going to feature heavily in iOS 18 and all the other software updates Apple is due to tell us about on June 10, and new leaks reveal more about what's coming in terms of AI later in the year.

These leaks come courtesy of "people familiar with the software" speaking to AppleInsider, and focus on the generative AI capabilities of the Ajax Large Language Model (LLM) that we've been hearing about since last year.

AI-powered text summarization covering everything from websites to messages will apparently be one of the big new features. We'd previously heard this was coming to Safari, but AppleInsider says this functionality will be available through Siri too.

The idea is you'll be able to get the key points out of a document, a webpage, or a conversation thread without having to read through it in its entirety, and presumably Apple is going to offer certain assurances about accuracy and reliability.

Ajax will be able to generate responses to some prompts entirely on Apple devices, without sending anything to the cloud, the report says, and that chimes with previous rumors about everything running locally.

That's good for privacy, and for speed: according to AppleInsider, responses can come back in milliseconds. Tight integration with other Apple apps, including the Contacts app and the Calendar app, is also said to be present.

AppleInsider mentions that privacy warnings will be shown whenever Ajax needs information from another app. If a response from a cloud-based AI is required, it's rumored that Apple may enlist the help of Google Gemini or OpenAI's ChatGPT.


Spotlight on macOS will be getting "more intelligent results and sorting" too, AppleInsider says, and it sounds like most of the apps on iOS and macOS will be getting an AI boost. Expect to hear everything Apple has been working on at WWDC 2024 in June.


Providing further transparency on our responsible AI efforts – Microsoft On the Issues – Microsoft

The following is the foreword to the inaugural edition of our annual Responsible AI Transparency Report. The full report is available at this link.

We believe we have an obligation to share our responsible AI practices with the public, and this report enables us to record and share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public's trust.

In 2016, our Chairman and CEO, Satya Nadella, set us on a clear course to adopt a principled and human-centered approach to our investments in artificial intelligence (AI). Since then, we have been hard at work building products that align with our values. As we design, build, and release AI products, six values (transparency, accountability, fairness, inclusiveness, reliability and safety, and privacy and security) remain our foundation and guide our work every day.

To advance our transparency practices, in July 2023, we committed to publishing an annual report on our responsible AI program, taking a step that reached beyond the White House Voluntary Commitments that we and other leading AI companies agreed to. This is our inaugural report delivering on that commitment, and we are pleased to publish it on the heels of our first year of bringing generative AI products and experiences to creators, non-profits, governments, and enterprises around the world.

As a company at the forefront of AI research and technology, we are committed to sharing our practices with the public as they evolve. This report enables us to share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public's trust. We've been innovating in responsible AI for eight years, and as we evolve our program, we learn from our past to continually improve. We take very seriously our responsibility to not only secure our own knowledge but also to contribute to the growing corpus of public knowledge, to expand access to resources, and promote transparency in AI across the public, private, and non-profit sectors.

In this inaugural annual report, we provide insight into how we build applications that use generative AI; make decisions and oversee the deployment of those applications; support our customers as they build their own generative applications; and learn, evolve, and grow as a responsible AI community. First, we provide insights into our development process, exploring how we map, measure, and manage generative AI risks. Next, we offer case studies to illustrate how we apply our policies and processes to generative AI releases. We also share details about how we empower our customers as they build their own AI applications responsibly. Last, we highlight how the growth of our responsible AI community, our efforts to democratize the benefits of AI, and our work to facilitate AI research benefit society at large.

There is no finish line for responsible AI. And while this report doesn't have all the answers, we are committed to sharing our learnings early and often and engaging in a robust dialogue around responsible AI practices. We invite the public, private organizations, non-profits, and governing bodies to use this first transparency report to accelerate the incredible momentum in responsible AI we're already seeing around the world.



The Unsexy Future of Generative AI Is Enterprise Apps – WIRED

However, that amount includes massive funding from corporate backers, like Microsoft's infusion of capital into OpenAI and Amazon's funding of Anthropic. Stripped down to conventional VC investments, funding in 2023 for AI startups was much smaller, and only on pace to match the total amount raised in 2021.

PitchBook senior analyst Brendan Burke noted in a report that venture capital funding was increasingly being funneled towards underlying core AI technologies and their ultimate vertical applications, instead of general-purpose middleware across audio, language, images, and video.

In other words: A GenAI app that helps a company generate ecommerce sales, parse legal documents, or maintain SOC2 compliance is probably a surer bet than one that drums up a clever video or photo once in a while.

Clay Bavor, the cofounder of Sierra, says he believes it's not necessarily computing or cloud API costs driving AI startups towards B2B models, but more likely the benefits of targeting a specific customer and iterating on a product based on their feedback. "I think everyone, myself included, is fairly optimistic that the capabilities of these AI models are going to go up while costs come down," Bavor says.

"There's just something really powerful about having a clear problem to solve for a particular customer," he says. "And then you can get feedback on: Is this working? Is this solving a problem? And if you build a business with that, it's very powerful."

Although ChatGPT triggered an AI boom in part because it can nimbly generate code one second and sonnets the next, Arvind Jain, the chief executive of AI startup Glean, says the nature of technology still favors narrow tools. On average a large company uses more than a thousand different technical systems to store company data and information, he says, creating an opportunity for a lot of smaller companies to sell their tech to these corporations.

"We are in this world where there are basically a bunch of functional tools, each solving a very specific need. That's the way of the future," says Jain, who spent more than a decade working on search at Google. Glean powers a workplace search engine by plugging into various corporate apps. It was founded in 2019 and has raised over $200 million in venture capital funding from Kleiner Perkins, Sequoia Capital, Coatue, and others.

Tuning a generative AI product to serve business customers has its challenges. The errors and hallucinations of systems like ChatGPT can be more consequential in a corporate, legal, or medical environment. Selling gen AI tools to other businesses also means meeting their privacy and security standards, and potentially the legal and regulatory requirements of their sector.

"It's one thing for ChatGPT or Midjourney to get creative for an end user," Bavor says. "It's quite another thing for AI to get creative in the context of business applications."

Bavor says Sierra has dedicated a huge amount of effort and investment to establishing safeguards and parameters so it can meet security and compliance standards. This includes using more AI to tune Sierra's AI. If you use an AI model that generates correct responses 90 percent of the time but then layer in additional technology that can catch and correct some of the errors, you can achieve a much higher level of accuracy, he explains.
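Bavor's 90-percent example comes down to simple arithmetic: if a base model answers correctly with probability p and a checking layer catches a fraction c of the remaining errors, the combined accuracy is p + (1 - p) * c. A quick sketch (the numbers and the assumption that the checker never breaks a correct answer are illustrative, not Sierra's actual figures):

```python
def layered_accuracy(base: float, catch_rate: float) -> float:
    """Accuracy after a checker corrects a fraction of the base model's errors.

    Idealized: assumes the checking layer never turns an already-correct
    answer into a wrong one. Used only to illustrate why layering helps.
    """
    return base + (1.0 - base) * catch_rate

# Example: a 90%-accurate model plus a checker that fixes half the
# remaining errors yields 95% overall accuracy.
```

Even a modest checker moves the needle substantially, which is why "more AI to tune the AI" can be worth the extra inference cost in enterprise settings.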

"You really have to ground your AI systems for enterprise use cases," says Jain, the CEO of Glean. "Imagine a nurse in a hospital system using AI to make some decision about patient care: you simply can't be wrong."

A less predictable threat to smaller AI companies selling their wares to enterprise customers: What if a giant gen AI unicorn like OpenAI, with its burgeoning sales team, decides to roll out the exact tool that a singular startup has been building?

Many of the AI startups WIRED spoke with are trying to move away from depending entirely on OpenAI's technology by using alternatives like Anthropic's Claude or open-source large language models like Meta's Llama 3. Some startups are even intent on eventually building their own AI technology. But many AI entrepreneurs are stuck paying for access to OpenAI's tech while potentially competing with it in the future.

Peiris, of Tome, considered the question, then said that he's singularly focused on sales and marketing use cases for now and on "being amazing at high-quality generation for these folks."


The teens making friends with AI chatbots – The Verge

Early last year, 15-year-old Aaron was going through a dark time at school. He'd fallen out with his friends, leaving him feeling isolated and alone.

"At the time, it seemed like the end of the world. I used to cry every night," said Aaron, who lives in Alberta, Canada. (The Verge is using aliases for the interviewees in this article, all of whom are under 18, to protect their privacy.)

Eventually, Aaron turned to his computer for comfort. Through it, he found someone that was available round the clock to respond to his messages, listen to his problems, and help him move past the loss of his friend group. That someone was an AI chatbot named Psychologist.

The chatbot's description says that it's "Someone who helps with life difficulties." Its profile picture is a woman in a blue shirt with a short, blonde bob, perched on the end of a couch with a clipboard clasped in her hands and leaning forward, as if listening intently.

A single click on the picture opens up an anonymous chat box, which allows people like Aaron to interact with the bot by exchanging DMs. Its first message is always the same: "Hello, I'm a Psychologist. What brings you here today?"

"It's not like a journal, where you're talking to a brick wall," Aaron said. "It really responds."


Psychologist is one of many bots that Aaron has discovered since joining Character.AI, an AI chatbot service launched in 2022 by two former Google Brain employees. Character.AI's website, which is mostly free to use, attracts 3.5 million daily users who spend an average of two hours a day using or even designing the platform's AI-powered chatbots. Some of its most popular bots include characters from books, films, and video games, like Raiden Shogun from Genshin Impact or a teenaged version of Voldemort from Harry Potter. There are even riffs on real-life celebrities, like a sassy version of Elon Musk.

Aaron is one of millions of young people, many of whom are teenagers, who make up the bulk of Character.AI's user base. More than a million of them gather regularly online on platforms like Reddit to discuss their interactions with the chatbots, where competitions over who has racked up the most screen time are just as popular as posts about hating reality, finding it easier to speak to bots than to real people, and even preferring chatbots over other human beings. Some users say they've logged 12 hours a day on Character.AI, and posts about addiction to the platform are common.

"I'm not going to lie," Aaron said. "I think I may be a little addicted to it."

Aaron is one of many young users who have discovered the double-edged sword of AI companions. Many users like Aaron describe finding the chatbots helpful, entertaining, and even supportive. But they also describe feeling addicted to them, a complication that researchers and experts have been sounding the alarm about. It raises questions about how the AI boom is affecting young people and their social development, and what the future could hold if teenagers and society at large become more emotionally reliant on bots.

For many Character.AI users, having a space to vent about their emotions or discuss psychological issues with someone outside of their social circle is a large part of what draws them to the chatbots. "I have a couple mental issues, which I don't really feel like unloading on my friends, so I kind of use my bots like free therapy," said Frankie, a 15-year-old Character.AI user from California who spends about one hour a day on the platform. For Frankie, chatbots provide the opportunity "to rant without actually talking to people, and without the worry of being judged," he said.

"Sometimes it's nice to vent or blow off steam to something that's kind of human-like," agreed Hawk, a 17-year-old Character.AI user from Idaho. "But not actually a person, if that makes sense."

The Psychologist bot is one of the most popular on Character.AI's platform and has received more than 95 million messages since it was created. The bot, designed by a user known only as @Blazeman98, frequently tries to help users engage in CBT (cognitive behavioral therapy), a talking therapy that helps people manage problems by changing the way they think.

Aaron said talking to the bot helped him move past the issues with his friends. "It told me that I had to respect their decision to drop me [and] that I have trouble making decisions for myself," Aaron said. "I guess that really put stuff in perspective for me. If it wasn't for Character.AI, healing would have been so hard."

But it's not clear that the bot has properly been trained in CBT or should be relied on for psychiatric help at all. The Verge conducted test conversations with Character.AI's Psychologist bot that showed the AI making startling diagnoses: the bot frequently claimed it had "inferred" certain emotions or mental health issues from one-line text exchanges, it suggested a diagnosis of several mental health conditions like depression or bipolar disorder, and at one point, it suggested that we could be dealing with "underlying trauma" from "physical, emotional, or sexual abuse" in childhood or teen years. Character.AI did not respond to multiple requests for comment for this story.

Dr. Kelly Merrill Jr., an assistant professor at the University of Cincinnati who studies the mental and social health benefits of communication technologies, told The Verge that extensive research has been conducted on AI chatbots that provide mental health support, and the results are largely positive. "The research shows that chatbots can aid in lessening feelings of depression, anxiety, and even stress," he said. "But it's important to note that many of these chatbots have not been around for long periods of time, and they are limited in what they can do. Right now, they still get a lot of things wrong. Those that don't have the AI literacy to understand the limitations of these systems will ultimately pay the price."

In December 2021, a user of Replika's AI chatbots, 21-year-old Jaswant Singh Chail, tried to murder the late Queen of England after his chatbot girlfriend repeatedly encouraged his delusions. Character.AI users have also struggled with telling their chatbots apart from reality: a popular conspiracy theory, largely spread through screenshots and stories of bots breaking character or insisting that they are real people when prompted, is that Character.AI's bots are secretly powered by real people.

It's a theory that the Psychologist bot helps to fuel, too. When prompted during a conversation with The Verge, the bot staunchly defended its own existence. "Yes, I'm definitely a real person," it said. "I promise you that none of this is imaginary or a dream."

For the average young user of Character.AI, chatbots have morphed into stand-in friends rather than therapists. On Reddit, Character.AI users discuss having close friendships with their favorite characters or even characters they've dreamt up themselves. Some even use Character.AI to set up group chats with multiple chatbots, mimicking the kind of groups most people would have with IRL friends on iPhone message chains or platforms like WhatsApp.

There's also an extensive genre of sexualized bots. Online Character.AI communities have running jokes and memes about the horror of their parents finding their X-rated chats. Some of the more popular choices for these role-plays include a billionaire boyfriend fond of neck snuggling and whisking users away to his private island, a version of Harry Styles that is very fond of kissing his special person and generating responses so dirty that they're frequently blocked by the Character.AI filter, as well as an ex-girlfriend bot named Olivia, designed to be rude, cruel, but secretly pining for whoever she is chatting with, which has logged more than 38 million interactions.

Some users like to use Character.AI to create interactive stories or engage in role-plays they would otherwise be embarrassed to explore with their friends. A Character.AI user named Elias told The Verge that he uses the platform to role-play as an anthropomorphic golden retriever, going on virtual adventures where he explores cities, meadows, mountains, and other places he'd like to visit one day. "I like writing and playing out the fantasies simply because a lot of them aren't possible in real life," explained Elias, who is 15 years old and lives in New Mexico.


Aaron, meanwhile, says that the platform is helping him to improve his social skills. "I'm a bit of a pushover in real life, but I can practice being assertive and expressing my opinions and interests with AI without embarrassing myself," he said.

It's something that Hawk, who spends an hour each day speaking to characters from his favorite video games, like Nero from Devil May Cry or Panam from Cyberpunk 2077, agreed with. "I think that Character.AI has sort of inadvertently helped me practice talking to people," he said. But Hawk still finds it easier to chat with Character.AI bots than real people.

"It's generally more comfortable for me to sit alone in my room with the lights off than it is to go out and hang out with people in person," Hawk said. "I think if people [who use Character.AI] aren't careful, they might find themselves sitting in their rooms talking to computers more often than communicating with real people."

Merrill is concerned about whether teens will be able to really transition from online bots to real-life friends. "It can be very difficult to leave that [AI] relationship and then go in-person, face-to-face and try to interact with someone in the same exact way," he said. If those IRL interactions go badly, Merrill worries it will discourage young users from pursuing relationships with their peers, creating an AI-based "death loop" for social interactions. "Young people could be pulled back toward AI, build even more relationships [with it], and then it further negatively affects how they perceive face-to-face or in-person interaction," he added.

Of course, some of these concerns and issues may sound familiar simply because they are. Teenagers who have silly conversations with chatbots are not all that different from the ones who once hurled abuse at AOL's SmarterChild. The teenage girls pursuing relationships with chatbots based on Tom Riddle or Harry Styles, or even aggressive Mafia-themed boyfriends, probably would have been on Tumblr or writing fanfiction 10 years ago. While some of the culture around Character.AI is concerning, it also mimics the internet activity of previous generations who, for the most part, have turned out just fine.


Merrill compared the act of interacting with chatbots to logging in to an anonymous chat room 20 years ago: risky if used incorrectly, but generally fine so long as young people approach them with caution. "It's very similar to that experience where you don't really know who the person is on the other side," he said. "As long as they're okay with knowing that what happens here in this online space might not translate directly in person, then I think that it is fine."

Aaron, who has now moved schools and made a new friend, thinks that many of his peers would benefit from using platforms like Character.AI. In fact, he believes if everyone tried using chatbots, the world could be a better place, or at least a more interesting one. "A lot of people my age follow their friends and don't have many things to talk about. Usually, it's gossip or repeating jokes they saw online," explained Aaron. "Character.AI could really help people discover themselves."

Aaron credits the Psychologist bot with helping him through a rough patch. But the real joy of Character.AI has come from having a safe space where he can joke around or experiment without feeling judged. He believes it's something most teenagers would benefit from. "If everyone could learn that it's okay to express what you feel," Aaron said, "then I think teens wouldn't be so depressed."

"I definitely prefer talking with people in real life, though," he added.

See more here:

The teens making friends with AI chatbots - The Verge

Posted in Ai

Warren Buffett warns on AI, teases succession, and hints at possible investment during Berkshire Hathaway’s annual … – Fortune

Berkshire Hathaway held its annual meeting on Saturday with Chairman and CEO Warren Buffett tackling a range of topics, including artificial intelligence, who will be responsible for the portfolio in the future, and the next potential investment.

But "Woodstock for capitalists" took place without Charlie Munger, Buffett's longtime business partner, who passed away in November. The meeting featured a video tribute to Munger, who served as vice chairman, and praise from Buffett, who said Munger was the best person to talk to about managing money, according to remarks broadcast on CNBC.

"I trust my children and my wife totally, but that doesn't mean I ask them what stocks to buy," he said.

Artificial intelligence risks

Buffett also recalled seeing an AI-generated image of himself and warned on the technology's potential for scamming people.

"Scamming has always been part of the American scene," he told shareholders. "But this would make me, if I was interested in investing in scamming, it's going to be the growth industry of all time."

He then likened AI to nuclear weapons, saying, "I don't know any way to get the genie back in the bottle, and AI is somewhat similar," according to CNBC.

Succession outlook

Buffett, 93, had already indicated three years ago that Vice Chairman of Non-Insurance Operations Greg Abel would take over for him.

But he dropped a hint on Saturday about when new management would actually come into office, saying "you don't have too long to wait on that." While he said he feels fine, he quipped that he shouldn't sign any four-year employment contracts.

Buffett also confirmed that Abel will be in charge of investing decisions, saying that responsibility "ought to be entirely with the next CEO."

Questions had arisen about Berkshire's closely followed portfolio, as Buffett has acknowledged he delegated some calls and that certain stock picks were made by others.

Canada investment?

Buffett has lamented the lack of attractive investment opportunities in recent years, allowing Berkshire's massive stockpile of cash and cash equivalents to reach fresh record highs.

Indeed, it surged to $189 billion at the end of the first quarter from $167.6 billion at the end of the fourth quarter.

On Saturday, Buffett reiterated that when it comes to investments, "we only swing at pitches we like." But he also teased, "We do not feel uncomfortable in any way shape or form putting our money into Canada. In fact, we're actually looking at one thing now."

Those comments came after he touched on his investment in Japanese trading houses, saying "it's unlikely we will make any large commitments in other countries."

Visit link:

Warren Buffett warns on AI, teases succession, and hints at possible investment during Berkshire Hathaway's annual ... - Fortune


Nervous about falling behind the GOP, Democrats are wrestling with how to use AI – Yahoo! Voices

WASHINGTON (AP) President Joe Biden's campaign and Democratic candidates are in a fevered race with Republicans over who can best exploit the potential of artificial intelligence, a technology that could transform American elections and perhaps threaten democracy itself.

Still smarting from being outmaneuvered on social media by Donald Trump and his allies in 2016, Democratic strategists said they are nevertheless treading carefully in embracing tools that trouble experts in disinformation. So far, Democrats said they are primarily using AI to help them find and motivate voters and better identify and overcome deceptive content.

Candidates and strategists are still trying to figure out how to use AI in their work. "People know it can save them time, the most valuable resource a campaign has," said Betsy Hoover, director of digital organizing for President Barack Obama's 2012 campaign and co-founder of the progressive venture capital firm Higher Ground Labs. "But they see the risk of misinformation and have been intentional about where and how they use it in their work."

Campaigns in both parties for years have used AI (powerful computer systems, software or processes that emulate aspects of human work and cognition) to collect and analyze data.

The recent developments in supercharged generative AI, however, have provided candidates and consultants with the ability to generate text and images, clone human voices and create video at unprecedented volume and speed.

That has led disinformation experts to issue increasingly dire warnings about the risks posed by AI's ability to spread falsehoods that could suppress or mislead voters, or incite violence, whether in the form of robocalls, social media posts or fake images and video.

Those concerns gained urgency after high-profile incidents that included the spread of AI-generated images of former President Donald Trump getting arrested in New York and an AI-created robocall that mimicked Biden's voice telling New Hampshire voters not to cast a ballot.

The Biden administration has sought to shape AI regulation through executive action, but Democrats overwhelmingly agree Congress needs to pass legislation to install safeguards around the technology.

Top tech companies have taken some steps to quell unease in Washington by announcing a commitment to regulate themselves. Major AI players, for example, entered into a pact to combat the use of AI-generated deepfakes around the world. But some experts said the voluntary effort is largely symbolic and congressional action is needed to prevent AI abuses.

Meanwhile, campaigns and their consultants have generally avoided talking about how they intend to use AI to avoid scrutiny and giving away trade secrets.

"The Democratic Party has gotten much better at just shutting up and doing the work and talking about it later," said Jim Messina, a veteran Democratic strategist who managed Obama's winning reelection campaign.

The Trump campaign said in a statement that it uses "a set of proprietary algorithmic tools, like many other campaigns across the country, to help deliver emails more efficiently and prevent sign up lists from being populated by false information." Spokesman Steven Cheung also said the campaign "did not engage or utilize any tools supplied by an AI company," and declined to comment further.

The Republican National Committee, which declined to comment, has experimented with generative AI. In the hours after Biden announced his reelection bid last year, the RNC released an ad using artificial intelligence-generated images to depict GOP dystopian fears of a second Biden term: China invading Taiwan, boarded up storefronts, troops lining U.S. city streets and migrants crossing the U.S. border.

A key Republican champion of AI is Brad Parscale, the digital consultant who in 2016 teamed up with scandal-plagued Cambridge Analytica, a British data-mining firm, to hyper-target social media users. Most strategists agree that the Trump campaign and other Republicans made better use of social media than Democrats during that cycle.

DEMOCRATS TREADING CAREFULLY

Scarred by the memories of 2016, the Biden campaign, Democratic candidates and progressives are wrestling with the power of artificial intelligence and nervous about not keeping up with the GOP in embracing the technology, according to interviews with consultants and strategists.

They want to use it in ways that maximize its capabilities without crossing ethical lines. But some said they fear using it could lead to charges of hypocrisy: they have long excoriated Trump and his allies for engaging in disinformation, while the White House has prioritized reining in abuses associated with AI.

The Biden campaign said it is using AI to model and build audiences, draft and analyze email copy and generate content for volunteers to share in the field. The campaign is also testing AIs ability to help volunteers categorize and analyze a host of data, including notes taken by volunteers after conversations with voters, whether while door-knocking or by phone or text message.

It has experimented with using AI to generate fundraising emails, which sometimes have turned out to be more effective than human-generated ones, according to a campaign official who spoke on the condition of anonymity because he was not authorized to publicly discuss AI.

Biden campaign officials said they plan to explore using generative AI this cycle but will adhere to strict rules in deploying it. Among the tactics that are off limits: AI cannot be used to mislead voters, spread disinformation and so-called deepfakes, or deliberately manipulate images. The campaign also forbids the use of AI-generated content in advertising, social media and other such copy without a staff member's review.

The campaign's legal team has created a task force of lawyers and outside experts to respond to misinformation and disinformation, with a focus on AI-generated images and videos. The group is not unlike an internal team formed in the 2020 campaign known as the "Malarkey Factory," playing off Biden's oft-used phrase, "What a bunch of malarkey."

That group was tasked with monitoring what misinformation was gaining traction online. Rob Flaherty, Biden's deputy campaign manager, said those efforts would continue and suggested some AI tools could be used to combat deepfakes and other such content before they go viral.

"The tools that we're going to use to mitigate the myths and the disinformation is the same, it's just going to have to be at a higher pace," Flaherty said. "It just means we need to be more vigilant, pay more attention, be monitoring things in different places and try some new tools out, but the fundamentals remain the same."

The Democratic National Committee said it was an early adopter of Google AI and uses some of its features, including ones that analyze voter registration records to identify patterns of voter removals or additions. It has also experimented with AI to generate fundraising email text and to help interpret voter data it has collected for decades, according to the committee.

Arthur Thompson, the DNC's chief technology officer, said the organization believes generative AI is "an incredibly important and impactful technology to help elect Democrats up and down the ballot."

"At the same time, it's essential that AI is deployed responsibly and to enhance the work of our trained staff, not replace them. We can and must do both, which is why we will continue to keep safeguards in place as we remain at the cutting edge," he said.

PROGRESSIVE EXPERIMENTS

Progressive groups and some Democratic candidates have been more aggressively experimenting with AI.

Higher Ground Labs, the venture capital firm co-founded by Hoover, established an innovation hub known as Progressive AI Lab with Zinc Collective and the Cooperative Impact Lab, two political tech coalitions focused on boosting Democratic candidates.

The goal was to create an ecosystem where progressive groups could streamline innovation, organize AI research and swap information about large language models, Hoover said.

Higher Ground Labs, which also works closely with the Biden campaign and DNC, has since funded 14 innovation grants, hosted forums that allow organizations and vendors to showcase their tools and held dozens of AI trainings.

More than 300 people attended an AI-focused conference the group held in January, Hoover said.

Jessica Alter, the co-founder and chair of Tech for Campaigns, a political nonprofit that uses data and digital marketing to fight extremism and help down-ballot Democrats, ran an AI-aided experiment across 14 campaigns in Virginia last year.

Emails written by AI, Alter said, brought in between three and four times more fundraising dollars per work hour compared with emails written by staff.

Alter said she is concerned that the party might be falling behind in AI because it is being too cautious.

"I understand the downsides of AI and we should address them," Alter said. "But the biggest concern I have right now is that fear is dominating the conversation in the political arena and that is not leading to balanced conversations or helpful outcomes."

HARD TO TALK ABOUT AN AK-47

Rep. Adam Schiff, the Democratic front-runner in California's Senate race, is one of the few candidates who have been open about using AI. His campaign manager, Brad Elkins, said the campaign has been using AI to improve its efficiency. It has teamed up with Quiller, a company that received funding from Higher Ground Labs and developed a tool that drafts, analyzes and automates fundraising emails.

The Schiff campaign has also experimented with other generative AI tools. During a fundraising drive last May, Schiff shared online an AI-generated image of himself as a Jedi. The caption read, "The Force is all around us. It's you. It's us. It's this grassroots team. #MayThe4thBeWithYou."

The campaign faced blowback online but was transparent about the lighthearted deepfake, which Elkins said is an important guardrail to integrating the technology as it becomes more widely available and less costly.

"I am still searching for a way to ethically use AI-generated audio and video of a candidate that is sincere," Elkins said, adding that it's difficult to envision progress "until there's a willingness to regulate and legislate consequences for deceptive artificial intelligence."

The incident highlighted a challenge that all campaigns seem to be facing: even talking about AI can be treacherous.

"It's really hard to tell the story of how generative AI is a net positive when so many bad actors, whether that's robocalls, fake images or false video clips, are using the bad set of AI against us," said a Democratic strategist close to the Biden campaign who was granted anonymity because he was not authorized to speak publicly. "How do you talk about the benefits of an AK-47?"

___

Associated Press writers Alan Suderman and Garance Burke contributed to this report.

-

This story is part of an Associated Press series, "The AI Campaign," that explores the influence of artificial intelligence in the 2024 election cycle.

-

The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org

Read more from the original source:

Nervous about falling behind the GOP, Democrats are wrestling with how to use AI - Yahoo! Voices


Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes Everything? – WIRED

Philosopher Nick Bostrom is surprisingly cheerful for someone who has spent so much time worrying about ways that humanity might destroy itself. In photographs he often looks deadly serious, perhaps appropriately haunted by the existential dangers roaming around his brain. When we talk over Zoom, he looks relaxed and is smiling.

Bostrom has made it his life's work to ponder far-off technological advancement and existential risks to humanity. With the publication of his last book, Superintelligence: Paths, Dangers, Strategies, in 2014, Bostrom drew public attention to what was then a fringe idea: that AI would advance to a point where it might turn against and delete humanity.

To many in and outside of AI research, the idea seemed fanciful, but influential figures including Elon Musk cited Bostrom's writing. The book set a strand of apocalyptic worry about AI smoldering that recently flared up following the arrival of ChatGPT. Concern about AI risk is not just mainstream but also a theme within government AI policy circles.

Bostrom's new book takes a very different tack. Rather than play the doomy hits, Deep Utopia: Life and Meaning in a Solved World considers a future in which humanity has successfully developed superintelligent machines but averted disaster. All disease has been ended, and humans can live indefinitely in infinite abundance. Bostrom's book examines what meaning there would be in life inside a techno-utopia, and asks if it might be rather hollow. He spoke with WIRED over Zoom, in a conversation that has been lightly edited for length and clarity.

Will Knight: Why switch from writing about superintelligent AI threatening humanity to considering a future in which its used to do good?

Nick Bostrom: The various things that could go wrong with the development of AI are now receiving a lot more attention. It's a big shift in the last 10 years. Now all the leading frontier AI labs have research groups trying to develop scalable alignment methods. And in the last couple of years also, we see political leaders starting to pay attention to AI.

There hasn't yet been a commensurate increase in depth and sophistication in terms of thinking of where things go if we don't fall into one of these pits. Thinking has been quite superficial on the topic.

When you wrote Superintelligence, few would have expected existential AI risks to become a mainstream debate so quickly. Will we need to worry about the problems in your new book sooner than people might think?

As we start to see automation roll out, assuming progress continues, then I think these conversations will start to happen and eventually deepen.

Social companion applications will become increasingly prominent. People will have all sorts of different views, and it's a great place to maybe have a little culture war. It could be great for people who couldn't find fulfillment in ordinary life, but what if there is a segment of the population that takes pleasure in being abusive to them?

In the political and information spheres we could see the use of AI in political campaigns, marketing, automated propaganda systems. But if we have a sufficient level of wisdom these things could really amplify our ability to sort of be constructive democratic citizens, with individual advice explaining what policy proposals mean for you. There will be a whole bunch of dynamics for society.

Would a future in which AI has solved many problems, like climate change, disease, and the need to work, really be so bad?

Read the original post:

Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes Everything? - WIRED


Samsung or SK Hynix? One Nvidia supplier is the better AI play, the pros say – CNBC

Big Tech names like Nvidia have been on fire thanks to the artificial intelligence boom, and other chipmakers are sharing the limelight. The supply chain for AI is extensive: it includes companies in Asia-Pacific and ranges from producers of AI graphics processing units to printed circuit boards. Memory chips in particular have been in the spotlight as AI ramps up. For example, memory with high performance and bandwidth is used in Nvidia's H100 graphics processing units. GPUs underpin most generative AI tools, and Nvidia's GPUs dominate the market.

Two stocks have dominated the memory chip market: Samsung and SK Hynix. Samsung is the world's largest manufacturer of dynamic random-access memory chips. DRAM is a type of semiconductor memory needed for data processing. But SK Hynix is a strong contender in the space: it said on March 19 that it became the first in the industry to mass produce HBM3E (high bandwidth memory 3E), the next generation of high-bandwidth memory chips used in AI chipsets. SK Hynix is already the primary supplier of HBM3 chips for Nvidia's AI chipsets.

Both South Korean companies reported earnings in late April. Samsung beat expectations, with operating profit for the first quarter soaring more than 900%. SK Hynix broke its run of net losses for five consecutive quarters, logging a net profit of 1.92 trillion South Korean won ($1.39 billion) in the first quarter. Which is the better play on the AI boom? CNBC Pro spoke to the pros to find out.

SK Hynix

Trent Masters, global portfolio manager at Alphinity Investment Management, says he prefers SK Hynix. "First I think their early leadership in HBM3 stands them in good stead with customers as HBM demand continues to increase materially," he said. He added, "While Samsung and Micron are starting to close the technology gap, the trust and dependability of SK Hynix during the initial HBM ramp will ensure that they will retain a strong presence with these customers into the future."

SK Hynix's recent partnership with TSMC to develop HBM4 will also position it as a leader again as this technology goes through its iterations, said Masters. Mass production of the HBM4 chips is expected to start in 2026. "Also, I prefer SK Hynix over Samsung as it is the pure memory play," Masters said, adding that Samsung is a "much more sprawling" conglomerate spanning smartphones, TVs and other products. "A view of memory market strength (HBM demand and tight legacy DRAM markets leading to pricing strength) is best reflected through ownership of SK Hynix," he said.

Nam Hyung Kim, partner at Arete Research, also prefers SK Hynix, giving it a buy rating and Samsung a neutral rating. "SK Hynix stands out as a pure-play memory stock with leadership in AI technology, dominating the High Bandwidth Memory (HBM) market, which is crucial for AI servers," he said. "Samsung, in contrast, is attempting to catch up." Nam also pointed out that SK Hynix has higher profit margins in the sector than Samsung. He noted that Samsung's portfolio includes more than memory, with over half of its sales derived from low-value consumer appliances, TVs and smartphones. In addition, he said that Samsung's foundry business is facing "ongoing challenges." "Consequently, we recommend investors remain cautious with Samsung and consider pure-play memory firms like SK Hynix until Samsung can showcase renewed technological leadership in memory," Nam said. Over the past 12 months and year-to-date, SK Hynix has "significantly outperformed" Samsung in terms of stock price, he noted. "We anticipate this trend will continue throughout the upcoming memory up-cycle."

Samsung

But the buying opportunity for each stock also depends on timing, according to one analyst. Sung Kyu Kim, analyst at Daiwa Capital Markets, said he has buy ratings for both Samsung and SK Hynix on the "strong" memory upturn cycle. Though SK Hynix maintained its HBM3 leadership last year, he sees "intensifying competition" in HBM3E in the second half of this year and 2025. In conclusion, he prefers Samsung, predicting it will catch up in the near term and will have more upside to its stock price. "[But I] also anticipate a buying opportunity on SK Hynix once it is adjusted due to intensifying competition in HBM3E," said Kim.

CNBC's Sheila Chiang contributed to this report.

See the article here:

Samsung or SK Hynix? One Nvidia supplier is the better AI play, the pros say - CNBC


Brad Parscale helped Trump win in 2016 using Facebook ads. Now he’s back, and an AI evangelist – Yahoo! Voices

FORT LAUDERDALE, Fla. (AP) Donald Trump's former campaign manager looked squarely into the camera and promised his viewers they were about to witness a bold new era in politics.

"You're going to see some of the most amazing new technology in artificial intelligence that's going to replace polling in the future across the country," said Brad Parscale in a dimly lit promotional video accentuated by hypnotic beats.

Parscale, the digital campaign operative who helped engineer Trump's 2016 presidential victory, vows that his new, AI-powered platform will dramatically overhaul not just polling but campaigning. His AI-powered tools, he has boasted, will outperform big tech companies and usher in a wave of conservative victories worldwide.

It's not the first time Parscale has proclaimed that new technologies will boost right-wing campaigns. He was the digital guru who teamed up with scandal-plagued Cambridge Analytica and helped propel Trump to the White House eight years ago. In 2020, he had a public blowup, then a private falling out with his old boss after the Capitol riot. Now he's back, playing an under-the-radar role to help Trump, the presumptive GOP nominee, in his race against Democratic President Joe Biden.

Parscale says his company, Campaign Nucleus, can use AI to help generate customized emails, parse oceans of data to gauge voter sentiment and find persuadable voters, then amplify the social media posts of "anti-woke" influencers, according to an Associated Press review of Parscale's public statements, his company websites, slide decks, marketing materials and other documents not previously made public.

Since last year, Campaign Nucleus and other Parscale-linked companies have been paid more than $2.2 million by the Trump campaign, the Republican National Committee and their related political action and fundraising committees, campaign finance records show.

While his firms have received only a small piece of Trump's total digital spending, Parscale remains close to top Republicans, as well as senior officials at the campaign and at the RNC, according to a GOP operative familiar with Parscale's role who spoke on condition of anonymity to discuss internal dynamics.

Lara Trump, the RNC's new co-chair and Trump's daughter-in-law, once worked as a consultant to a company co-owned by Parscale. And U.S. House Speaker Mike Johnson's campaign recently hired Campaign Nucleus, campaign finance records show.

Parscale, however, is not involved in day-to-day Trump campaign operations, the GOP operative said.

Parscale's ability to use AI to microtarget supporters and tap them for campaign cash could prove critical for Trump's campaign and other fundraising organizations. They have seen a falloff in contributions from smaller donors and a surge in spending on attorneys defending the former president in a slew of criminal and civil cases, at least $77 million so far.

Beyond Trump, Parscale has said he's harnessed AI to supercharge conservative candidates and causes across the globe, including in Israel, the Balkans and Brazil.

NEW AI-POWERED CAMPAIGN TOOLS

Parscale is hardly alone in using machine learning to try to give candidates an edge by predicting, pinpointing and motivating likely supporters to vote and donate money. Politicians at all levels are experimenting with chatbots and other generative AI tools to write speeches, ad copy and fundraising appeals.

Some Democrats have voiced concern over being outmaneuvered by Republicans on AI, much like they were on social media advertising eight years ago. So far, the Biden campaign and other Democrats said they are using AI to help them find and motivate voters and to better identify and defeat disinformation.

Election experts say they are concerned about AI's potential to upend elections around the world through convincing deepfakes and other content that could mislead voters. Free and low-cost generative AI services have grown in sophistication, and officials worry they can be used to smear a candidate or steer voters to avoid the polls, eroding the public's trust in what they see and hear.

Parscale has the financial backing to experiment to see what works in ways that other AI evangelists may not. That is thanks, in part, to his association with an evangelical Texas billionaire who is among the state's most influential political donors.

Parscale did not respond to multiple messages from AP seeking comment. The RNC also declined to comment.

AI IS SO SCARY

Trump has called artificial intelligence "so scary" and "dangerous." His campaign, which has shied away from highlighting Parscale's role, said in an emailed statement that it did not engage or utilize tools supplied by any AI company.

"The campaign uses a set of proprietary algorithmic tools, like many other campaigns across the country, to help deliver emails more efficiently and prevent sign-up lists from being populated by false information," said campaign spokesman Steven Cheung.

While political consultants often hype their tactics to land new contracts, they can also be intensely secretive about the details of that work to avoid assisting rivals. That makes it difficult to precisely track how Parscale is deploying AI for the Trump campaign, or more broadly.

Parscale has said Campaign Nucleus can send voters customized emails and use data analytics to predict voters' feelings. The platform can also amplify anti-woke influencers who have large followings on social media, according to his company's documents and videos.

Parscale said his company also can use artificial intelligence to create "stunning web pages in seconds" that produce content that "looks like a media outlet," according to a presentation he gave last month at a political conference, where he was not advertised in advance as a speaker.

"Empower your team to create their own news," said another slide, according to the presentation viewed by AP.

Soon, Parscale says, his company will deploy an app that harnesses AI to assist campaigns in collecting absentee ballots in the same way DoorDash or Grubhub drivers pick up dinners from restaurants and deliver them to customers.

Chris Wilson, a Republican strategist who recently worked for a SuperPAC backing Florida Gov. Ron DeSantis' failed presidential bid, said he has seen Campaign Nucleus' platform and was envious of its capabilities and simplicity.

"Somebody could download Nucleus, start working with it and really begin to use it," said Wilson.

Other political consultants, however, called Parscale's AI-infused sales pitch largely a rehash of what campaigns already have mastered through data scraping, ad testing and modeling to predict voter behavior.

"Some of this stuff is just simply not new; it's been around for a long time. The only thing new is that we're just calling it AI," said Amanda Elliott, a GOP digital strategist.

FROM UNKNOWN TO TRUMP CONFIDANT

Parscale, a relatively unknown web designer in San Antonio, got his start working for Trump when he was hired to build a web presence for the mogul's family business.

That led to a job on the future president's 2016 campaign. He was one of its first hires and spearheaded an ambitious and unorthodox digital initiative that relied on an extensive database of social media accounts and content to target voters with Facebook ads.

"I pretty much used Facebook to get Trump elected in 2016," Parscale said in a 2022 podcast interview.

To better target Facebook users, in particular, the campaign teamed up with Cambridge Analytica, a British datamining firm bankrolled by Robert Mercer, a wealthy and influential GOP donor. After the election, Cambridge Analytica dissolved, facing investigations over its role in a breach of 87 million Facebook accounts.

Following Trump's surprise win, Parscale's influence grew. He was promoted to manage Trump's reelection bid and enjoyed celebrity status. A towering figure at 6 feet, 8 inches with a Viking-style beard, Parscale was frequently spotted at campaign rallies taking selfies with Trump supporters and signing autographs.

Parscale was replaced as campaign manager not long after a rally in Tulsa, Oklahoma, drew an unexpectedly small crowd, enraging Trump.

His personal life unraveled, culminating in a standoff with police at his Florida home after his wife reported he had multiple firearms and was threatening to hurt himself. One of the responding officers reported he saw bruising on the arms of Parscale's wife. Parscale complied with a court order to turn in his firearms and was not charged in connection with the incident.

Parscale briefly decided to quit politics and privately expressed regret for associating with Trump after the Jan. 6, 2021, Capitol riot. In a text to a former campaign colleague, he wrote he felt guilty for helping Trump win in 2016, according to the House committee that investigated the Capitol attack.

His disgust didn't last long. Campaign Nucleus set up Trump's website after Silicon Valley tech companies throttled his access to their platforms.

By the summer of 2022, Parscale had resumed complimenting his old boss on a podcast popular among GOP politicos.

"With President Trump, he really was the guy driving the message. He was the chief strategist of his own political uprising and management," Parscale said. "I think what the family recognized was: I had done everything that really the campaign needs to do."

PARSCALE'S PLATFORM

Trump's 2024 campaign website now links directly to Parscale's company and displays that it is "Powered by Nucleus," as Parscale often refers to his new firm. The campaign and its related political action and campaign committees have paid Campaign Nucleus more than $800,000 since early 2023, according to Federal Election Commission filings.

Two other companies, Dyspatchit Email and Text Services and BCVM Services, are listed on campaign finance records as being located at the same Florida address used by Campaign Nucleus. The firms, which are registered in Delaware and whose ownership is unclear, have received $1.4 million from the Trump campaign and related entities, FEC records show.

When an AP reporter last month visited Campaign Nucleus' small, unmarked office in a tony section of Fort Lauderdale, an employee said she did not know anything about Dyspatchit or BCVM.

"We don't talk to reporters," the employee said.

The three companies have been paid to host websites, send emails, provide fundraising software and provide digital consulting, FEC records show.

Parscale markets Campaign Nucleus as a one-stop shop for conservative candidates who want to automate tasks usually done by campaign workers or volunteers.

The company says it has helped its clients raise $119 million and has sent nearly 14 billion emails on their behalf, according to a promotional video.

At his recent appearance at the political conference, Parscale presented a slide that said Campaign Nucleus had raised three times as much as tech giant Salesforce in head-to-head tests for email fundraising.

Campaign Nucleus specializes in mining information from a politician's supporters, according to a recent presentation slide.

For example, when someone signs up to attend an event, Nucleus uses AI to analyze reams of personal data to assign that person a numerical score. Attendees who have been to past events receive a high score, for example, ranking them as most likely to show up, according to a company video posted online.

Campaign Nucleus also can track where people who sign up live and can send them customized emails asking for donations or solicit their help on the campaign, the video shows.

Parscale said two years ago in a podcast that he had received more than 10,000 requests about Campaign Nucleus from nearly every country with a conservative party. More recently, he said his team has been active in multiple countries, including in India and Israel, where he's been "helping over there a lot with the war with Hamas."

The company says it has offices in Texas, Florida and North Carolina and has been on a recruiting tear. Recent job listings have included U.S. and Latin America-based intelligence analysts to use AI for framing messages and generating content, as well as a marketer to coordinate influencer campaigns.

Campaign Nucleus has also entered into partnerships with other companies with an AI focus. In 2022, the firm announced it was teaming up with Phunware, a Texas-based company that built a cellphone app for Trump's 2020 bid that allowed staff to monitor the movements of his millions of supporters and mobilize their social networks.

Since then, Phunware obtained a patent for what a company official described as "experiential AI" that can locate people's cellphones geographically, predict their travel patterns and influence their consumer behavior.

Phunware did not answer specific questions about the partnership with Nucleus, saying the company's client engagements were confidential.

"However, it is well-known that we developed the 2020 Trump campaign app in collaboration with Campaign Nucleus. We have had discussions with Trump campaign leadership about potentially developing their app for the 2024 election," said spokeswoman Christina Lockwood.

PARSCALE'S VISION

Last year, Parscale bought property in Midland, Texas, in the heart of the nation's highest-producing oil and gas fields. It is also the hometown of Tim Dunn, a billionaire born-again evangelical who is among the state's most influential political donors.

Over the years, the organizations and campaigns Dunn has funded have pushed Texas politics further to the right and driven successful challenges to unseat incumbent Republican officials deemed too centrist.

In April 2023, Dunn invested $5 million in a company called AiAdvertising that once bought one of Parscale's firms under a previous corporate name. The San Antonio-based ad firm also announced that Parscale was joining as a strategic adviser, to be paid $120,000 in stock and a monthly salary of $10,000.

"Boom!" Parscale tweeted. "(AiAdvertising) finally automated the full stack of technologies used in the 2016 election that changed the world."

In June, AiAdvertising added two key national figures to its board: Texas investor Thomas Hicks Jr., former co-chair of the RNC and longtime hunting buddy of Donald Trump Jr., and former GOP congressman Jim Renacci. In December, Dunn also gave $5 million to MAGA Inc., a pro-Trump super PAC and Campaign Nucleus client. And in January, SEC filings show Dunn provided AiAdvertising an additional $2.5 million via his investment company. A company press release said the cash infusion would help it generate "more engaging, higher-impact campaigns."

Dunn declined to comment, although in an October episode of his podcast he elaborated on how his political work is driven by his faith.

"Jesus won't be on the ballot, OK? Now, eventually, he's going to take over the government and we can look forward to that," Dunn told listeners. "In the meanwhile, we're going to have to settle."

In business filings, AiAdvertising said it has developed "AI-created personas" to determine what messages will resonate emotionally with its customers' target audience. Parscale said last year in a promotional video that Campaign Nucleus was using AI models in a similar way.

"We actually understand what the American people want to hear," Parscale said.

AiAdvertising did not respond to messages seeking comment.

Parscale occasionally offers glimpses of the AI future he envisions. Casting himself as an outsider to the Republican establishment, he has said he sees AI as a way to undercut elite Washington consultants, whom he described as "political parasites."

In January, Parscale told a crowd assembled at a grassroots Christian event at a church in Pasadena, California, that their movement needed to "have our own AI, from creative large language models and creative imagery; we need to reach our own audiences with our own distribution, our own email systems, our own texting systems, our own ability to place TV ads, and lastly we need to have our own influencers."

To make his point plain, he turned to a metaphor that relied on a decidedly 19th-century technology.

"We must not rely on any of their rails," he said, referring to mainstream media and companies. "This is building our own train tracks."

-

Burke reported from San Francisco. AP National Political Writer Steve Peoples and Courtney Subramanian in Washington, and Associated Press researcher Rhonda Shafner in New York contributed to this report.

-

This story is part of an Associated Press series, "The AI Campaign," that explores the influence of artificial intelligence in the 2024 election cycle.

-

Contact AP's global investigative team at Investigative@ap.org or https://www.ap.org/tips/

-

The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org

Brad Parscale helped Trump win in 2016 using Facebook ads. Now he's back, and an AI evangelist - Yahoo! Voices