The Terrifying A.I. Scam That Uses Your Loved One’s Voice – The New Yorker

On a recent night, a woman named Robin was asleep next to her husband, Steve, in their Brooklyn home, when her phone buzzed on the bedside table. Robin is in her mid-thirties with long, dirty-blond hair. She works as an interior designer, specializing in luxury homes. The couple had gone out to a natural-wine bar in Cobble Hill that evening, and had come home a few hours earlier and gone to bed. Their two young children were asleep in bedrooms down the hall. "I'm always, like, kind of one ear awake," Robin told me recently. When her phone rang, she opened her eyes and looked at the caller I.D. It was her mother-in-law, Mona, who never called after midnight. "I'm, like, maybe it's a butt-dial," Robin said. "So I ignore it, and I try to roll over and go back to bed. But then I see it pop up again."

She picked up the phone, and, on the other end, she heard Mona's voice wailing and repeating the words "I can't do it, I can't do it." "I thought she was trying to tell me that some horrible tragic thing had happened," Robin told me. Mona and her husband, Bob, are in their seventies. She's a retired party planner, and he's a dentist. They spend the warm months in Bethesda, Maryland, and winters in Boca Raton, where they play pickleball and canasta. Robin's first thought was that there had been an accident. Robin's parents also winter in Florida, and she pictured the four of them in a car wreck. "Your brain does weird things in the middle of the night," she said. Robin then heard what sounded like Bob's voice on the phone. (The family members requested that their names be changed to protect their privacy.) "Mona, pass me the phone," Bob's voice said, then, "Get Steve. Get Steve." Robin took this (that they didn't want to tell her while she was alone) as another sign of the seriousness. She shook Steve awake. "I think it's your mom," she told him. "I think she's telling me something terrible happened."

Steve, who has close-cropped hair and an athletic build, works in law enforcement. When he opened his eyes, he found Robin in a state of panic. "She was screaming," he recalled. "I thought her whole family was dead." When he took the phone, he heard a relaxed male voice, possibly Southern, on the other end of the line. "You're not gonna call the police," the man said. "You're not gonna tell anybody. I've got a gun to your mom's head, and I'm gonna blow her brains out if you don't do exactly what I say."

Steve used his own phone to call a colleague with experience in hostage negotiations. The colleague was muted, so that he could hear the call but wouldn't be heard. "You hear this???" Steve texted him. "What should I do?" The colleague wrote back, "Taking notes. Keep talking." The idea, Steve said, was to continue the conversation, delaying violence and trying to learn any useful information.

"I want to hear her voice," Steve said to the man on the phone.

The man refused. "If you ask me that again, I'm gonna kill her," he said. "Are you fucking crazy?"

"O.K.," Steve said. "What do you want?"

The man demanded money for travel; he wanted five hundred dollars, sent through Venmo. "It was such an insanely small amount of money for a human being," Steve recalled. "But also: I'm obviously gonna pay this." Robin, listening in, reasoned that someone had broken into Steve's parents' home to hold them up for a little cash. On the phone, the man gave Steve a Venmo account to send the money to. It didn't work, so he tried a few more, and eventually found one that did. The app asked what the transaction was for.

"Put in a pizza emoji," the man said.

After Steve sent the five hundred dollars, the man patched in a female voice (a girlfriend, it seemed) who said that the money had come through, but that it wasn't enough. Steve asked if his mother would be released, and the man got upset that he was bringing this up with the woman listening. "Whoa, whoa, whoa," he said. "Baby, I'll call you later." The implication, to Steve, was that the woman didn't know about the hostage situation. "That made it even more real," Steve told me. The man then asked for an additional two hundred and fifty dollars to get a ticket for his girlfriend. "I've gotta get my baby mama down here to me," he said. Steve sent the additional sum, and, when it processed, the man hung up.

By this time, about twenty-five minutes had elapsed. Robin cried and Steve spoke to his colleague. "You guys did great," the colleague said. He told them to call Bob, since Mona's phone was clearly compromised, to make sure that he and Mona were now safe. After a few tries, Bob picked up the phone and handed it to Mona. "Are you at home?" Steve and Robin asked her. "Are you O.K.?"

Mona sounded fine, but she was unsure of what they were talking about. "Yeah, I'm in bed," she replied. "Why?"

Artificial intelligence is revolutionizing seemingly every aspect of our lives: medical diagnosis, weather forecasting, space exploration, and even mundane tasks like writing e-mails and searching the Internet. But with increased efficiencies and computational accuracy has come a Pandora's box of trouble. Deepfake video content is proliferating across the Internet. The month after Russia invaded Ukraine, a video surfaced on social media in which Ukraine's President, Volodymyr Zelensky, appeared to tell his troops to surrender. (He had not done so.) In early February of this year, Hong Kong police announced that a finance worker had been tricked into paying out twenty-five million dollars after taking part in a video conference with people he thought were members of his firm's senior staff. (They were not.) Thanks to large language models like ChatGPT, phishing e-mails have grown increasingly sophisticated, too. Steve and Robin, meanwhile, fell victim to another new scam, which uses A.I. to replicate a loved one's voice. "We've now passed through the uncanny valley," Hany Farid, who studies generative A.I. and manipulated media at the University of California, Berkeley, told me. "I can now clone the voice of just about anybody and get them to say just about anything. And what you think would happen is exactly what's happening."

Robots aping human voices are not new, of course. In 1984, an Apple computer became one of the first that could read a text file in a tinny robotic voice of its own. "Hello, I'm Macintosh," a squat machine announced to a live audience, at an unveiling with Steve Jobs. "It sure is great to get out of that bag." The computer took potshots at Apple's main competitor at the time, saying, "I'd like to share with you a maxim I thought of the first time I met an I.B.M. mainframe: never trust a computer you can't lift." In 2011, Apple released Siri; inspired by Star Trek's talking computers, the program could interpret precise commands ("Play Steely Dan," say, or "Call Mom") and respond with a limited vocabulary. Three years later, Amazon released Alexa. Synthesized voices were cohabiting with us.

Still, until a few years ago, advances in synthetic voices had plateaued. They weren't entirely convincing. "If I'm trying to create a better version of Siri or G.P.S., what I care about is naturalness," Farid explained. "Does this sound like a human being and not like this creepy half-human, half-robot thing?" Replicating a specific voice is even harder. "Not only do I have to sound human," Farid went on. "I have to sound like you." In recent years, though, the problem began to benefit from more money, more data (importantly, troves of voice recordings online), and breakthroughs in the underlying software used for generating speech. In 2019, this bore fruit: a Toronto-based A.I. company called Dessa cloned the podcaster Joe Rogan's voice. (Rogan responded with awe and acceptance on Instagram at the time, adding, "The future is gonna be really fucking weird, kids.") But Dessa needed a lot of money and hundreds of hours of Rogan's very available voice to make their product. Their success was a one-off.

In 2022, though, a New York-based company called ElevenLabs unveiled a service that produced impressive clones of virtually any voice quickly; breathing sounds had been incorporated, and more than two dozen languages could be cloned. ElevenLabs's technology is now widely available. "You can just navigate to an app, pay five dollars a month, feed it forty-five seconds of someone's voice, and then clone that voice," Farid told me. The company is now valued at more than a billion dollars, and the rest of Big Tech is chasing closely behind. The designers of Microsoft's Vall-E cloning program, which débuted last year, used sixty thousand hours of English-language audiobook narration from more than seven thousand speakers. Vall-E, which is not available to the public, can reportedly replicate the voice and acoustic environment of a speaker with just a three-second sample.
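Mechanically, the workflow Farid describes is a two-step API call: upload a short reference sample, then synthesize new speech with the returned voice. The sketch below shows that shape in Python; the service URL, endpoints, and field names are placeholders invented for illustration, not ElevenLabs's (or any vendor's) actual API.

```python
# Hypothetical sketch of the clone-then-speak workflow described above.
# The endpoint and JSON fields are placeholders invented for illustration;
# they do not correspond to any real vendor's API.
import requests

API = "https://api.example-voice-service.invalid/v1"  # placeholder service
HEADERS = {"Authorization": "Bearer YOUR_KEY"}

# Step 1: register a clone from roughly forty-five seconds of reference audio.
with open("reference_sample.wav", "rb") as f:
    voice = requests.post(f"{API}/voices", headers=HEADERS,
                          files={"sample": f}).json()

# Step 2: generate arbitrary speech in the cloned voice.
audio = requests.post(f"{API}/speech", headers=HEADERS,
                      json={"voice_id": voice["id"],
                            "text": "Hello from a cloned voice."})
with open("cloned_output.mp3", "wb") as out:
    out.write(audio.content)
```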

Voice-cloning technology has undoubtedly improved some lives. The Voice Keeper is among a handful of companies that are now banking the voices of those suffering from voice-depriving diseases like A.L.S., Parkinson's, and throat cancer, so that, later, they can continue speaking with their own voice through text-to-speech software. A South Korean company recently launched what it describes as the first "AI memorial service," which allows people to "live in the cloud" after their deaths and "speak to future generations." The company suggests that this can alleviate "the pain of the death of your loved ones." The technology has other legal, if less altruistic, applications. Celebrities can use voice-cloning programs to loan their voices to record advertisements and other content: the College Football Hall of Famer Keith Byars, for example, recently let a chicken chain in Ohio use a clone of his voice to take orders. The film industry has also benefitted. Actors in films can now speak other languages (English, say, when a foreign movie is released in the U.S.). "That means no more subtitles, and no more dubbing," Farid said. "Everybody can speak whatever language you want." Multiple publications, including The New Yorker, use ElevenLabs to offer audio narrations of stories. Last year, New York's mayor, Eric Adams, sent out A.I.-enabled robocalls in Mandarin and Yiddish, languages he does not speak. (Privacy advocates called this "a creepy vanity project.")

But, more often, the technology seems to be used for nefarious purposes, like fraud. This has become easier now that TikTok, YouTube, and Instagram store endless videos of regular people talking. "It's simple," Farid explained. "You take thirty or sixty seconds of a kid's voice and log in to ElevenLabs, and pretty soon Grandma's getting a call in Grandson's voice saying, 'Grandma, I'm in trouble, I've been in an accident.'" A financial request is almost always the end game. Farid went on, "And here's the thing: the bad guy can fail ninety-nine per cent of the time, and they will still become very, very rich. It's a numbers game." The prevalence of these illegal efforts is difficult to measure, but, anecdotally, they've been on the rise for a few years. In 2020, a corporate attorney in Philadelphia took a call from what he thought was his son, who said he had been injured in a car wreck involving a pregnant woman and needed nine thousand dollars to post bail. (He found out it was a scam when his daughter-in-law called his son's office, where he was safely at work.) In January, voters in New Hampshire received a robocall of Joe Biden's voice telling them not to vote in the primary. (The man who admitted to generating the call said that he had used ElevenLabs software.) "I didn't think about it at the time that it wasn't his real voice," an elderly Democrat in New Hampshire told the Associated Press. "That's how convincing it was."
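Farid's "numbers game" is easy to make concrete with back-of-the-envelope arithmetic. In the sketch below, the call volume and the one-per-cent hit rate are assumptions chosen for illustration; the seven-hundred-fifty-dollar payout mirrors what Steve sent.

```python
# Back-of-the-envelope expected value for a voice-cloning scam operation.
# All inputs are illustrative assumptions, not reported figures.
calls_attempted = 100_000    # assumed number of scam calls placed
hit_rate = 0.01              # "fail ninety-nine per cent of the time"
payout_per_victim = 750      # dollars: the $500 + $250 Steve sent

expected_take = calls_attempted * hit_rate * payout_per_victim
print(f"Expected take: ${expected_take:,.0f}")   # Expected take: $750,000
```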


Florida teens arrested for creating deepfake AI nude images of classmates – The Verge

Two Florida middle schoolers were arrested in December and charged with third-degree felonies for allegedly creating deepfake nudes of their classmates. A report by Wired cites police reports saying two boys, aged 13 and 14, are accused of using an unnamed artificial intelligence application to generate the explicit images of other students between the ages of 12 and 13. The incident may be the first US instance of criminal charges related to AI-generated nude images.

They were charged with third-degree felonies under a 2022 Florida law that criminalizes the dissemination of deepfake sexually explicit images without the victim's consent. Both the arrests and the charges appear to be the first of their kind in the nation related to the sharing of AI-generated nudes.

Local media reported on the incident after the students at Pinecrest Cove Academy in Miami, Florida, were suspended December 6th, and the case was reported to the Miami-Dade Police Department. According to Wired, they were arrested on December 22nd.

Minors creating AI-generated nudes and explicit images of other children has become an increasingly common problem in school districts across the country. But outside of the Florida incident, none we'd heard of have led to an arrest. There's currently no federal law addressing nonconsensual deepfake nudes, which has left states tackling the impact of generative AI on matters of child sexual abuse material, nonconsensual deepfakes, or revenge porn on their own.

Last fall, President Joe Biden issued an executive order on AI that asked agencies for a report on banning the use of generative AI to produce child sexual abuse material. Congress has yet to pass a law on deepfake porn, but that could possibly change soon. Both the Senate and House introduced legislation, known as the DEFIANCE Act of 2024, this week, and the effort appears to have bipartisan support.

Although nearly all states now have laws on the books that address revenge porn, only a handful of states have passed laws that address AI-generated sexually explicit imagery to varying degrees. Victims in states with no legal protections have also taken to litigation. For example, a New Jersey teen is suing a classmate for sharing fake AI nudes.

The Los Angeles Times recently reported that the Beverly Hills Police Department is currently investigating a case where students allegedly shared images that used real faces of students atop AI-generated nude bodies. But because the state's law against unlawful possession of "obscene matter knowing it depicts person under age of 18 years engaging in or simulating sexual conduct" does not explicitly mention AI-generated images, the article says it's unclear whether a crime has been committed.

The local school district voted on Friday to expel five students involved in the scandal, the LA Times reports.


Revolutionize Your Business with AWS Generative AI Competency Partners | Amazon Web Services – AWS Blog

By Chris Dally, Business Designation Owner AWS By Victor Rojo, Technical Designation Lead AWS By Chris Butler, Sr. Product Manager, Launch AWS By Justin Freeman, Sr. Partner Development Specialist, Catalyst AWS

In today's rapidly evolving technology landscape, generative artificial intelligence (AI) is leading the charge in innovation, revolutionizing the way organizations work. According to a McKinsey report, generative AI could account for over 75% of total yearly AI value, with high expectations for major or disruptive change in industries. Additionally, the report states generative AI technologies have the potential to automate work activities that absorb 60-70% of employees' time.

With the ability to automate tasks, enhance productivity, and enable hyper-personalized customer experiences, businesses are seeking specialized expertise to build a successful generative AI strategy.

To support this need, we're excited to announce the AWS Generative AI Competency, an AWS Specialization that helps Amazon Web Services (AWS) customers more quickly adopt generative AI solutions and strategically position themselves for the future. AWS Generative AI Competency Partners provide a full range of services, tools, and infrastructure, with tailored solutions in areas like security, applications, and integrations to give customers flexibility and choice across models and technologies.

"Partners play an important role in supporting AWS customers leveraging our comprehensive suite of generative AI services. We are excited to recognize and highlight partners with proven customer success with generative AI on AWS through the AWS Generative AI Competency, making it easier for our customers to find and identify the right partners to support their unique needs." ~ Swami Sivasubramanian, Vice President of Database, Analytics and ML, AWS

According to Canalys, AWS is the first to launch a generative AI competency for partners. By validating the partners business and technical expertise in this way, AWS customers are able to invest with greater confidence in generative AI solutions from these partners. This new competency is a critical entry point into the generative AI partner opportunity, which Canalys estimates will grow to US $158 billion by 2028.

"Generative AI has truly ushered in a new era of innovation and transformative value across both business and technology. A recent Canalys study found that 87% of customers rank partner specializations as a top three selection criteria. With the AWS Generative AI Competency launch, we're helping customers take advantage of the capabilities that our technically validated Generative AI Partners have to offer." ~ Ruba Borno, Vice President of AWS Worldwide Channels and Alliances

Leveraging AI technologies such as Amazon Bedrock, Amazon SageMaker JumpStart, AWS Trainium, AWS Inferentia, and accelerated computing instances on Amazon Elastic Compute Cloud (Amazon EC2), AWS Generative AI Competency Partners have deep expertise building and deploying groundbreaking applications across industries, including healthcare and life sciences, media and entertainment, public sector, and financial services.

We invite you to explore the following AWS Generative AI Competency Launch Partner offerings recommended by AWS.

These AWS Partners have deep expertise working with businesses to help them adopt and strategize generative AI, build and test generative AI applications, train and customize foundation models, operate, support, and maintain generative AI applications and models, protect generative AI workloads, and define responsible AI principles and frameworks.

These AWS Partners utilize foundation models (FMs) and related technologies to automate domain-specific functions, enhancing customer differentiation across all business lines and operations. Partners fall into three categories: Generative AI applications, Foundation Models and FM-based Application Development, and Infrastructure and Data.

AWS Generative AI Competency Partners make it easier for customers to innovate with enterprise-grade security and privacy, foundation models, generative AI-powered applications, a data-first approach, and a high-performance, low-cost infrastructure.

Explore the AWS Generative AI Partners page to learn more.

AWS Partners with Generative AI offerings can learn more about becoming an AWS Competency Partner.

AWS Specialization Partners gain access to strategic and confidential content, including product roadmaps, feature release previews, and demos, as part of the AWS PartnerEquip event series. To attend live events in your region or tune in virtually, register for an upcoming session. In addition to AWS Specialization Program benefits, AWS Generative AI Competency Partners receive unique benefits such as bi-annual strategy sessions to aid joint sales motions. To learn more, review the AWS Specialization Program Benefits Guide in AWS Partner Central (login required).

AWS Partners looking to get their Generative AI offering validated through the AWS Competency Program must be validated or differentiated members of the Software or Services Path prior to applying.

To apply, please review the Program Guide and access the application in AWS Partner Central.


Sora AI Videos Easily Confused With Real Footage in Survey Test (EXCLUSIVE) – Variety

Consumers in the U.S. struggle to distinguish videos recorded by humans from those generated by OpenAI's text-to-video tool Sora, according to new HarrisX data provided exclusively to Variety Intelligence Platform (VIP+).

In a survey conducted weeks after the controversial software was first unveiled, most U.S. adults incorrectly guessed whether AI or a person had created five out of eight videos they were shown.

Half of the videos were the Sora demonstration videos that have gone viral online, raising concerns from Hollywood to Capitol Hill for their production quality, including a drone view of waves crashing against the rugged cliffs along Big Sur's Garay Point Beach and historical footage of California during the Gold Rush.

Perhaps unsurprisingly, the HarrisX survey also revealed that strong majorities of respondents believed the U.S. government should enact regulation requiring that AI-generated content be labeled as such. They were equally emphatic about the need for regulation across all content formats, including videos, images, text, music, captions and sounds. Full results of the HarrisX survey can be found on VIP+.

In the survey, which was conducted online March 1-4 among more than 1,000 adults, respondents were shown four high-quality photorealistic-looking sample video outputs generated by Sora randomly interspersed with four videos from stock footage taken in the real world by a camera. In the case of the Big Sur video, 60% of respondents incorrectly guessed that a human had generated that video.

While Sora has yet to be released to the public, the OpenAI software has been the subject of much alarm, particularly in the entertainment industry, where the rapid evolution of video diffusion technology carries profound implications for the disruption of Hollywood's core production capabilities (though Sora will likely be fairly limited at launch).

Moreover, AI video has raised broader questions about its deepfake potential, especially in an election year.

When presented with the AI-generated videos and informed they were created by Sora, respondents were asked how they felt. Reactions were a mix of positive and negative, ranging from curious (28%), uncertain (27%) and open-minded (25%) to anxious (18%), inspired (18%) and fearful (2%).

"When you try to change the world quickly, the world moves quickly to rein you in along predictable lines," said Dritan Nesho, CEO and head of research at HarrisX. "That's exactly what we're seeing with generative AI: as its sophistication grows via new tools like Sora, so do concerns about its impact and calls for the proper labeling and regulation of the technology. The nascent industry must do more both to create guardrails and to properly communicate with the wider public."



Ability Summit 2024: Advancing accessibility with AI technology and innovation – The Official Microsoft Blog – Microsoft

Today we kick off the 14th Microsoft Ability Summit, an annual event to bring together thought leaders to discuss how we accelerate accessibility to help bridge the Disability Divide.

There are three key themes to this year's summit: Build, Imagine, and Include. Build invites us to explore how to build accessibly and inclusively by leaning on the insights of disabled talent. Imagine dives into best practices for architecting accessible buildings, events, content and products. And Include highlights the issues and opportunities AI presents for creators, developers and engineers.

Katy Jo Wright and Dave McCarthy discuss Katy Jo's journey living with the complex disability, Chronic Lyme Disease. Get insights from deaf creator and performer Leila Hanaumi; international accessibility leaders Sara Minkara, U.S. Special Advisor on International Disability Rights, U.S. Department of State; and Stephanie Cadieux, Chief Accessibility Officer, Government of Canada. And we'll be digging into mental health with singer, actor and mental health advocate, Michelle Williams.

We'll also be launching a few things along the way.

Accessible technology is crucial to empowering the 1.3 billion-plus people with disabilities globally. With this new chapter of AI, the possibilities are growing, as is the responsibility to get it right. We are learning where AI can be impactful, from the potential to shorten the gap between thoughts and action, to making it easier to code and create. But there is more to do, and we will continue to leverage every tool in the technology toolbox to advance accessibility.

Today we'll be highlighting the latest technology and tools from Microsoft to help achieve this goal, including:

Technology can also help tackle long-enduring challenges, like finding a cure for ALS (Motor Neuron Disease). With Azure, we are proudly supporting ALS Therapy Development Institute (TDI) and Answer ALS to almost double the clinical and genomic data available for research. In 2021, Answer ALS provided open access to its research through an Azure Data Portal, Neuromine. This data has since enabled over 300 independent research projects around the world. The addition of ALS TDI's data from the ongoing ALS Research Collaborative (ARC) study will allow researchers to accelerate the journey to find a cure.

We will also be previewing some of our ongoing work to use Custom Neural Voice to empower people with ALS and other speech disabilities to have their voice. We have been working with the community, including Team Gleason, for some time and are committed to making sure this technology is used for good, and plan to launch later in the year.


To build inclusively in an increasingly digital world, we need to protect fundamental rights and will be sharing partnerships advancing this across the community throughout the day.

This includes:

All through the Ability Summit, industry leaders will be sharing their learnings and best practices. Today we are posting four new Microsoft playbooks, sharing our learnings from working on our physical, event, and digital environments. This includes a new Mental Health toolkit, with tips for product makers to build experiences that support mental health conditions, created in partnership with Mental Health America, and an Accessible and Inclusive Workplace Handbook, with best practices for building an accessible campus from our Global Workplace Services team, responsible for our global building footprint including the new Redmond headquarters campus.

Please join us to watch content on demand via http://www.aka.ms/AbilitySummit. Technical support is always available via Microsoft's Disability Answer Desk. Thank you for your partnership and commitment to build a more accessible future for people with disabilities around the world.

Tags: accessibility, AI, AI for Accessibility


What to know about this AI stock with ties to Nvidia up nearly 170% in 2024 – CNBC

Investors may want to keep an eye on this artificial intelligence voice-and-speech recognition stock with ties to Nvidia. Shares of SoundHound AI have surged almost 170% this year and nearly 347% in February alone as investors bet on new applications for the booming technology trend that has taken Wall Street by storm. Last month, Nvidia revealed a $3.7 million bet on the stock in a securities filing, and management said on an earnings call that "demand is going through the roof."

"We continue to believe that the company is in a strong position to capture its fair share of the AI chatbot market demand wave with its technology providing more use cases going forward," wrote Wedbush Securities analyst Dan Ives in a February note.

While the Nvidia investment isn't new news for investors and analysts, it does reinforce SoundHound's value proposition. Ives also noted that the stake "solidifies the company's brand within the AI Revolution" and lays the groundwork for a potential larger investment in the future. Relatively few Wall Street shops cover the AI stock. A little more than 80% rate it with a buy or overweight rating, with consensus price targets suggesting upside of nearly 24%, per FactSet. The company also sits at a roughly $1.7 billion market capitalization and has yet to attain profitability.

Expanding its total addressable market

Along with its Nvidia relationship, SoundHound has partnered with a slew of popular restaurant brands, automakers and hospitality companies to provide AI voice customer solutions. While the company works with about a quarter of total automobile companies, "the penetration into that customer set only amounts to 1-2% of global sales, leaving significant room for growth within the current customer base as well as growth from adding new brands," said Ladenburg Thalmann's Glenn Mattson in a January note initiating coverage with a buy rating. "With voice enabled units expected to grow to 70% of shipments by 2026, this represents a significant growth opportunity, in our view," he added.

SoundHound has also made significant headway within the restaurant industry, recently adding White Castle, Krispy Kreme and Jersey Mike's to its growing list of customers, analysts note. That total addressable market should continue growing as major players such as McDonald's, DoorDash and Wendy's hunt for ways to expand AI voice use, said D.A. Davidson's Gil Luria. He estimates an $11 billion total addressable market when accounting for the immediate opportunities from quick-service restaurants and original equipment manufacturers. "SoundHound's long term opportunity is attractive and largely up for grabs," he said in a September note initiating coverage with a buy rating. "Given the high degree of technical complexity required to create value in this space, we see SoundHound with its best-of-breed solution as a likely winner and expect it to win significant market share."

Headwinds to profitability

While demand for SoundHound AI's products appears to be accelerating, investors should beware of a bumpy road ahead. Cantor Fitzgerald's Brett Knoblauch noted that being in the early stages of product adoption creates uncertainties surrounding the "pace of revenue growth and timeline to positive FCF." Although H.C. Wainwright's Scott Buck views SoundHound's significant bookings backlog and accelerating revenue growth as supportive of a premium valuation, he noted that the recent acquisition of restaurant automation technology company SYNQ3 could delay profitability to next year. But "we suspect the longer term financial and operating benefits to meaningfully outweigh short-term profitability headwinds," he said. "We recommend investors continue to accumulate SOUN shares ahead of stronger operating results."


NIST, the lab at the center of Biden's AI safety push, is decaying – The Washington Post

At the National Institute of Standards and Technology, the government lab overseeing the most anticipated technology on the planet, black mold has forced some workers out of their offices. Researchers sleep in their labs to protect their work during frequent blackouts. Some employees have to carry hard drives to other buildings; flaky internet won't allow for the sending of large files.

And a leaky roof forces others to break out plastic sheeting.

"If we knew rain was coming, we'd tarp up the microscope," said James Fekete, who served as chief of NIST's applied chemicals and materials division until 2018. "It leaked enough that we were prepared."

NIST is at the heart of President Biden's ambitious plans to oversee a new generation of artificial intelligence models; through an executive order, the agency is tasked with developing tests for security flaws and other harms. But budget constraints have left the 123-year-old lab with a skeletal staff on key tech teams and most facilities on its main Gaithersburg, Md., and Boulder, Colo., campuses below acceptable building standards.

Interviews with more than a dozen current and former NIST employees, Biden administration officials, congressional aides and tech company executives, along with reports commissioned by the government, detail a massive resources gap between NIST and the tech firms it is tasked with evaluating, a discrepancy some say risks undermining the White House's ambitious plans to set guardrails for the burgeoning technology. Many of the people spoke to The Washington Post on the condition of anonymity because they were not authorized to speak to the media.

Even as NIST races to set up the new U.S. AI Safety Institute, the crisis at the degrading lab is becoming more acute. On Sunday, lawmakers released a new spending plan that would cut NIST's overall budget by more than 10 percent, to $1.46 billion. While lawmakers propose to invest $10 million in the new AI institute, that's a fraction of the tens of billions of dollars tech giants like Google and Microsoft are pouring into the race to develop artificial intelligence. It pales in comparison to Britain, which has invested more than $125 million into its AI safety efforts.

"The cuts to the agency are a self-inflicted wound in the global tech race," said Divyansh Kaushik, the associate director for emerging technologies and national security at the Federation of American Scientists.

Some in the AI community worry that underfunding NIST makes it vulnerable to industry influence. Tech companies are chipping in for the expensive computing infrastructure that will allow the institute to examine AI models. Amazon announced that it would donate $5 million in computing credits. Microsoft, a key investor in OpenAI, will provide engineering teams along with computing resources. (Amazon founder Jeff Bezos owns The Post.)

Tech executives, including OpenAI CEO Sam Altman, are regularly in communication with officials at the Commerce Department about the agency's AI work. OpenAI has lobbied NIST on artificial intelligence issues, according to federal disclosures. NIST asked TechNet, an industry trade group whose members include OpenAI, Google and other major tech companies, if its member companies can advise the AI Safety Institute.

NIST is also seeking feedback from academics and civil society groups on its AI work. "The agency has a long history of working with a variety of stakeholders to gather input on technologies," Commerce Department spokesman Charlie Andrews said.

AI staff, unlike their more ergonomically challenged colleagues, will be working in well-equipped offices in the Gaithersburg campus, the Commerce Department's D.C. office and the NIST National Cybersecurity Center of Excellence in Rockville, Md., Andrews said.

White House spokeswoman Robyn Patterson said the appointment of Elizabeth Kelly to the helm of the new AI Safety Institute underscores the White House's commitment to getting this work done right and on time. Kelly previously served as special assistant to the president for economic policy.

"The Biden-Harris administration has so far met every single milestone outlined by the president's landmark executive order," Patterson said. "We are confident in our ability to continue to effectively and expeditiously meet the milestones and directives set forth by President Biden to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond."

NIST's financial struggles highlight the limitations of the administration's plan to regulate AI exclusively through the executive branch. Without an act of Congress, there is no new funding for initiatives like the AI Safety Institute and the programs could be easily overturned by the next president. And as the presidential elections approach, the prospects of Congress moving on AI in 2024 are growing dim.

During his State of the Union address on Thursday, Biden called on Congress to "harness the promise of AI and protect us from its peril."

Congressional aides and former NIST employees say the agency has not been able to break through as a funding priority even as lawmakers increasingly tout its role in addressing technological developments, including AI, chips and quantum computing.

After this article published, Senate Majority Leader Charles E. Schumer (D-N.Y.) on Thursday touted the $10 million investment in the institute in the proposed budget, saying he fought for this funding to make sure that the development of AI prioritizes both innovation and safety.

A review of NIST's safety practices in August found that the budgetary issues endanger employees, alleging that the agency has an "incomplete and superficial approach" to safety.

"Chronic underfunding of the NIST facilities and maintenance budget has created unsafe work conditions and further fueled the impression among researchers that safety is not a priority," said the NIST safety commission report, which was commissioned following the 2022 death of an engineering technician at the agency's fire research lab.

NIST is one of the federal government's oldest science agencies with one of the smallest budgets. Initially called the National Bureau of Standards, it began at the dawn of the 20th century, as Congress realized the need to develop more standardized measurements amid the expansion of electricity, the steam engine and railways.

The need for such an agency was underscored three years after its founding, when fires ravaged Baltimore. Firefighters from Washington, Philadelphia and even New York rushed to help put out the flames, but without standard couplings, their hoses couldn't connect to the Baltimore hydrants. The firefighters watched as the flames overtook more than 70 city blocks in 30 hours.

NIST developed a standard fitting, unifying more than 600 different types of hose couplings deployed across the country at the time.

Ever since, the agency has played a critical role in using research and science to help the country learn from catastrophes and prevent new ones. Its work expanded after World War II: It developed an early version of the digital computer, crucial Space Race instruments and atomic clocks, which underpin GPS. In the 1950s and 1960s, the agency moved to new campuses in Boulder and Gaithersburg after its early headquarters in Washington fell into disrepair.

Now, scientists at NIST joke that they work at the most advanced labs in the world, in the 1960s. Former employees describe cutting-edge scientific equipment surrounded by decades-old buildings that make it impossible to control the temperature or humidity to conduct critical experiments.

"You see dust everywhere because the windows don't seal," former acting NIST director Kent Rochford said. "You see a bucket catching drips from a leak in the roof. You see Home Depot dehumidifiers or portable AC units all over the place."

The flooding was so bad that Rochford said he once requested money for scuba gear. That request was denied, but he did receive funding for an emergency kit that included squeegees to clean up water.

Pests and wildlife have at times infiltrated its campuses, including an incident where a garter snake entered a Boulder building.

More than 60 percent of NIST facilities do not meet federal standards for acceptable building conditions, according to a February 2023 report commissioned by Congress from the National Academies of Sciences, Engineering and Medicine. The poor conditions impact employee output. Workarounds and do-it-yourself repairs reduce the productivity of research staff by up to 40 percent, according to the committee's interviews with employees during a laboratory visit.

Years after Rochford's 2018 departure, NIST employees are still deploying similar MacGyver-style workarounds. Each year between October and March, low humidity in one lab creates a static charge, making it impossible to operate an instrument ensuring companies meet environmental standards for greenhouse gases.

Problems with the HVAC and specialized lights have made the agency unable to meet demand for reference materials, which manufacturers use to check whether their measurements are accurate in products like baby formula.

Facility problems have also delayed critical work on biometrics, including evaluations of facial recognition systems used by the FBI and other law enforcement agencies. The data center in the 1966 building that houses that work receives inadequate cooling, and employees there spend about 30 percent of their time trying to mitigate problems with the lab, according to the academies' reports. Scheduled outages are required to maintain the data centers that hold technology work, knocking all biometric evaluations offline for a month each year.

Fekete, the scientist who recalled covering the microscope, said his team's device never completely stopped working due to rainwater.

But other NIST employees haven't been so lucky. Leaks and floods destroyed an electron microscope worth $2.5 million used for semiconductor research, and permanently damaged an advanced scale called a Kibble balance. The tool was out of commission for nearly five years.

Despite these constraints, NIST has built a reputation as a natural interrogator of swiftly advancing AI systems.

In 2019, the agency released a landmark study confirming facial recognition systems misidentify people of color more often than White people, casting scrutiny on the technology's popularity among law enforcement. Due to personnel constraints, only a handful of people worked on that project.

Four years later, NIST released early guidelines around AI, cementing its reputation as a government leader on the technology. To develop the framework, the agency connected with leaders in industry, civil society and other groups, earning a strong reputation among numerous parties as lawmakers began to grapple with the swiftly evolving technology.

The work made NIST a natural home for the Biden administration's AI red-teaming efforts and the AI Safety Institute, which were formalized in the November executive order. Vice President Harris touted the institute at the U.K. AI Safety Summit in November. More than 200 civil society organizations, academics and companies, including OpenAI and Google, have signed on to participate in a consortium within the institute.

OpenAI spokeswoman Kayla Wood said in a statement that the company supports NIST's work, and that the company plans to continue to work with the lab to "support the development of effective AI oversight measures."

Under the executive order, NIST has a laundry list of initiatives that it needs to complete by this summer, including publishing guidelines for how to red-team AI models and launching an initiative to guide evaluating AI capabilities. In a December speech at the machine learning conference NeurIPS, the agency's chief AI adviser, Elham Tabassi, said this would be an "almost impossible deadline."

"It is a hard problem," said Tabassi, who was recently named the chief technology officer of the AI Safety Institute. "We don't know quite how to evaluate AI."

"The NIST staff has worked tirelessly to complete the work it is assigned by the AI executive order," said Andrews, the Commerce spokesperson.

"While the administration has been clear that additional resources will be required to fully address all of the issues posed by AI in the long term, NIST has been effectively carrying out its responsibilities under the [executive order] and is prepared to continue to lead on AI-related research and other work," he said.

Commerce Secretary Gina Raimondo asked Congress to allocate $10 million for the AI Safety Institute during an event at the Atlantic Council in January. The Biden administration also requested more funding for NIST facilities, including $262 million for safety, maintenance and repairs. Congressional appropriators responded by cutting NIST's facilities budget.

The administration's ask falls far below the recommendations of the national academies study, which urged Congress to provide $300 million to $400 million in additional annual funding over 12 years to overcome a backlog of facilities damage. The report also calls for $120 million to $150 million per year for the same period to stabilize the effects of further deterioration and obsolescence.

Ross B. Corotis, who chaired the academies committee that produced the facilities report, said Congress needs to ensure that NIST is funded because it is the go-to lab when any new technology emerges, whether that's chips or AI.

"Unless you're going to build a whole new laboratory for some particular issue, you're going to turn first to NIST," Corotis said. "And NIST needs to be ready for that."

Eva Dou and Nitasha Tiku contributed to this report.


Nvidia, the tech company more valuable than Google and Amazon, explained – Vox.com

Only four companies in the world are worth over $2 trillion: Apple, Microsoft, the oil company Saudi Aramco and, as of 2024, Nvidia. It's understandable if the name doesn't ring a bell. The company doesn't exactly make a shiny product attached to your hand all day, every day, as Apple does. Nvidia designs a chip hidden deep inside the complicated innards of a computer, a seemingly niche product that more people are relying on every day.

Rewind the clock back to 2019, and Nvidia's market value was hovering around $100 billion. Its incredible speedrun to 20 times that already enviable size was really enabled by one thing: the AI craze. Nvidia is arguably the biggest winner in the AI industry. ChatGPT-maker OpenAI, which catapulted this obsession into the mainstream, is currently worth around $80 billion, and according to market research firm Grand View Research, the entire global AI market was worth a bit under $200 billion in 2023. Both are just a paltry fraction of Nvidia's value. With all eyes on the company's jaw-dropping evolution, the real question now is whether Nvidia can hold on to its lofty perch, but here's how the company got to this level.

In 1993, long before uncanny AI-generated art and amusing AI chatbot convos took over our social media feeds, three Silicon Valley electrical engineers launched a startup that would focus on an exciting, fast-growing segment of personal computing: video games.

Nvidia was founded to design a specific kind of chip called a graphics card, also commonly called a GPU (graphics processing unit), that enables the output of fancy 3D visuals on the computer screen. The better the graphics card, the more quickly high-quality visuals can be rendered, which is important for things like playing games and video editing. In the prospectus filed ahead of its initial public offering in 1999, Nvidia noted that its future success would depend on the continued growth of computer applications relying on 3D graphics. For most of Nvidia's existence, game graphics were Nvidia's raison d'être.

Ben Bajarin, CEO and principal analyst at the tech industry research firm Creative Strategies, acknowledged that Nvidia had been relatively isolated to a niche part of computing in the market until recently.

Nvidia became a powerhouse selling cards for video games, now an entertainment industry juggernaut making over $180 billion in revenue last year, but it realized it would be smart to branch out from just making graphics cards for games. Not all its experiments panned out. Over a decade ago, Nvidia made a failed gambit to become a major player in the mobile chip market, but today Android phones use a range of non-Nvidia chips, while iPhones use Apple-designed ones.

Another play, though, not only paid off, it became the reason we're all talking about Nvidia today. In 2006, the company released a programming language called CUDA that, in short, unleashed the power of its graphics cards for more general computing processes. Its chips could now do a lot of heavy lifting for tasks unrelated to pumping out pretty game graphics, and it turned out that graphics cards could multitask even better than the CPU (central processing unit), what's often called the central brain of a computer. This made Nvidia's GPUs great for calculation-heavy tasks like machine learning (and crypto mining). 2006 was the same year Amazon launched its cloud computing business; Nvidia's push into general computing was coming at a time when massive data centers were popping up around the world.
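To give a flavor of what general-purpose computing on a graphics card looks like, here is a minimal sketch in Python using the third-party Numba library (one of several ways to target CUDA hardware; Nvidia's own toolkit is C/C++-based). It assumes an Nvidia GPU plus the numba and numpy packages, and simply adds two arrays, with each GPU thread handling one element in parallel.

```python
# Minimal GPGPU sketch: thousands of GPU threads each add one array element.
# Requires an Nvidia GPU with CUDA drivers, plus the numba and numpy packages.
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)          # this thread's global index
    if i < x.size:            # guard against out-of-range threads
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.arange(n, dtype=np.float32)
y = 2.0 * x
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)  # Numba copies arrays to/from the GPU

assert np.allclose(out, 3.0 * x)
```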

That Nvidia is a powerhouse today is especially notable because for most of Silicon Valley's history, there already was a chip-making goliath: Intel. Intel makes both CPUs and GPUs, as well as other products, and it manufactures its own semiconductors, but after a series of missteps, including not investing in the development of AI chips soon enough, the rival chipmaker's preeminence has somewhat faded. In 2019, when Nvidia's market value was just over the $100 billion mark, Intel's value was double that; now Nvidia has joined the ranks of tech titans designated the Magnificent Seven, a cabal of tech stocks with a combined value that exceeds the entire stock market of many rich G20 countries.

"Their competitors were asleep at the wheel," says Gil Luria, a senior analyst at the financial firm D.A. Davidson Companies. "Nvidia has long talked about the fact that GPUs are a superior technology for handling accelerated computing."

Today, Nvidia's four main markets are gaming, professional visualization (like 3D design), data centers, and the automotive industry, as it provides chips that train self-driving technology. A few years ago, its gaming market was still the biggest chunk of revenue at about $5.5 billion, compared to its data center segment, which raked in about $2.9 billion. Then the pandemic broke out. People were spending a lot more time at home, and demand for computer parts, including GPUs, shot up; gaming revenue for the company in fiscal year 2021 jumped a whopping 41 percent. But there were already signs of the coming AI wave, too, as Nvidia's data center revenue soared by an even more impressive 124 percent. In 2023, its revenue was 400 percent higher than the year before. In a clear display of how quickly the AI race ramped up, data centers have overtaken games, even in a gaming boom.

When it went public in 1999, Nvidia had 250 employees. Now it has over 27,000. Jensen Huang, Nvidia's CEO and one of its founders, has a personal net worth that currently hovers around $70 billion, an over 1,700 percent increase since 2019.

It's likely you've already brushed up against Nvidia's products, even if you don't know it. Older gaming consoles like the PlayStation 3 and the original Xbox had Nvidia chips, and the current Nintendo Switch uses an Nvidia mobile chip. Many mid- to high-range laptops come packed with an Nvidia graphics card as well.

But with the AI bull rush, the company promises to become more central to the tech people use every day. Tesla cars' self-driving feature utilizes Nvidia chips, as do practically all major tech companies' cloud computing services. These services serve as a backbone for so much of our daily internet routines, whether it's streaming content on Netflix or using office and productivity apps. To train ChatGPT, OpenAI harnessed tens of thousands of Nvidia's AI chips together. People underestimate how much they use AI on a daily basis, because we don't realize that some of the automated tasks we rely on have been boosted by AI. Popular apps and social media platforms are adding new AI features seemingly every day: TikTok, Instagram, X (formerly Twitter), even Pinterest all boast some kind of AI functionality to toy with. Slack, a messaging platform that many workplaces use, recently rolled out the ability to use AI to generate thread summaries and recaps of Slack channels.

For Nvidia's customers, the problem with sizzling demand is that the company can charge eye-wateringly high prices. The chips used for AI data centers cost tens of thousands of dollars, with the top-of-the-line product sometimes selling for over $40,000 on sites like Amazon and eBay. Last year, some clients clamoring for Nvidia's AI chips were waiting as much as 11 months.

Just think of Nvidia as the Birkin bag of AI chips. A comparable offering from another chipmaker, AMD, is reportedly being sold to customers like Microsoft for about $10,000 to $15,000, just a fraction of what Nvidia charges. It's not just the AI chips, either. Nvidia's gaming business continues to boom, and the price gap between its high-end gaming card and a similarly performing one from AMD has been growing wider. In its last financial quarter, Nvidia reported a gross margin of 76 percent. As in, it cost them just 24 cents to make a dollar in sales. AMD's most recent gross margin was only 47 percent.

Nvidia's fans argue that its yawning lead was earned by making an early bet that AI would take over the world: its chips are worth the price because of its superior software, and because so much of AI infrastructure has already been built around Nvidia's products. But Erik Peinert, a research manager and editor at the American Economic Liberties Project who helped put together a recent report on competition within the chip industry, notes that Nvidia has gotten a price boost because TSMC, the biggest semiconductor maker in the world, has struggled for years to keep up with demand. A recent Wall Street Journal report also suggested that the company may be throwing its weight around to maintain dominance; the CEO of an AI chip startup called Groq claimed that customers were scared Nvidia would punish them with order delays if it got wind they were meeting with other chip makers.

It's undeniable that Nvidia put in the investment to court the AI industry well before others started paying attention, but its grip on the market isn't unshakable. An army of competitors is on the march, ranging from smaller startups to deep-pocketed opponents, including Amazon, Meta, Microsoft, and Google, all of which currently use Nvidia chips. "The biggest challenge for Nvidia is that their customers want to compete with them," says Luria.

It's not just that their customers want to make some of the money that Nvidia has been raking in; it's that they can't afford to keep paying so much. "Microsoft went from spending less than 10 percent of their capital expenditure on Nvidia to spending nearly 40 percent," Luria says. "That's not sustainable."

The fact that over 70 percent of AI chips are bought from Nvidia is also cause for concern for antitrust regulators around the world; the EU recently started looking into the industry for potential antitrust abuses. When Nvidia announced in late 2020 that it wanted to spend an eye-popping $40 billion to buy Arm Limited, a company that designs a chip architecture that most modern smartphones and newer Apple computers use, the FTC blocked the deal. "That acquisition was pretty clearly intended to get control over a software architecture that most of the industry relied on," says Peinert. "The fact that they have so much pricing power, and that they're not facing any effective competition, is a real concern."

Whether Nvidia will sustain itself as a $2 trillion company or rise to even greater heights depends, fundamentally, on whether both consumer and investor attention on AI can be sustained. Silicon Valley is awash with newly founded AI companies, but what percentage of them will take off, and how long will funders keep pouring money into them?

Widespread AI awareness came about because ChatGPT was an easy-to-use (or at least easy-to-show-off-on-social-media) novelty for the general public to get excited about. But a lot of AI work is still focusing on AI training rather than what's called AI inferencing, which involves using trained AI models to solve a task, like the way that ChatGPT answers a user's query or facial recognition tech identifies people. Though the AI inference market is growing (and maybe growing faster than expected), much of the sector is still going to be spending a lot more time and money on training. For training, Nvidia's first-class chips will likely remain the most coveted, at least for a while. But once AI inferencing explodes, there will be less of a need for such high-performance chips, and Nvidia's primacy could slip.
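The training-versus-inference split is easy to see in miniature. In the toy Python sketch below, "training" a tiny logistic-regression model is a long loop of repeated passes over data (the kind of workload that occupies high-end GPUs for weeks at real scale), while "inference" is a single cheap forward pass per query. All data and numbers are made up for illustration.

```python
# Toy illustration of training vs. inference with a tiny NumPy model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                   # fake training data
y = (X @ rng.normal(size=8) > 0).astype(float)   # fake labels
w = np.zeros(8)

# Training: many repeated passes over the data, updating weights by
# gradient descent. This loop is the compute-hungry part.
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))          # forward pass over all examples
    w -= 0.1 * X.T @ (p - y) / len(y)     # gradient step

# Inference: one forward pass to answer a single query.
x_new = rng.normal(size=8)
print("prediction:", 1 / (1 + np.exp(-x_new @ w)))
```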

Some financial analysts and industry experts have expressed wariness over Nvidia's stratospheric valuation, suspecting that AI enthusiasm will slow down and that there may already be too much money going toward making AI chips. Traffic to ChatGPT has dropped off since last May, and some investors are slowing down the money hose.

"Every big technology goes through an adoption cycle," says Luria. "As it comes into consciousness, you build this huge hype. Then at some point, the hype gets too big, and then you get past it and get into the trough of disillusionment." He expects to see that soon with AI, though that doesn't mean it's a bubble.

Nvidia's revenue last year was about $60 billion, a 126 percent increase from the prior year. Its high valuation and stock price are based not just on that revenue, though, but on its predicted continued growth; for comparison, Amazon currently has a lower market value than Nvidia yet made almost $575 billion in sales last year. The path to Nvidia booking profits large enough to justify a $2 trillion valuation looks steep to some experts, especially with the competition kicking into high gear.
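
One way to see why predicted growth is doing so much work in Nvidia's valuation is to compare price-to-sales multiples. A minimal sketch using the revenue figures cited above; Amazon's market value here is an assumed round number of roughly $1.8 trillion, since the article says only that it sits below Nvidia's:

```python
# Rough price-to-sales (P/S) comparison; point-in-time, approximate figures.
companies = {
    # name: (market value in $B, trailing-year revenue in $B)
    "Nvidia": (2_000, 60),
    "Amazon": (1_800, 575),  # market value assumed for illustration
}

for name, (market_value, revenue) in companies.items():
    print(f"{name}: P/S = {market_value / revenue:.1f}x")

# Nvidia: P/S = 33.3x
# Amazon: P/S = 3.1x
# A ~33x sales multiple is only justifiable if sales keep compounding fast.
```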

There's also the possibility that Nvidia could be stymied by how fast microchip technology can advance. It has moved at a blistering pace in the last several decades, but there are signs that the pace at which more transistors can be fitted onto a microchip, making chips smaller and more powerful, is slowing down. Whether Nvidia can keep offering hardware and software improvements meaningful enough to convince its customers to buy its latest AI chips could be a challenge, says Bajarin.

Yet, for all these possible obstacles, if one were to bet on whether Nvidia will soon become as familiar a tech company as Apple and Google, the safe answer is yes. AI fever is why Nvidia is in the rarefied club of trillion-dollar companies, but it may be just as true to say that AI is so big because of Nvidia.

AI makes a rendezvous in space | Stanford News – Stanford University News

Researchers from the Stanford Center for AEroSpace Autonomy Research (CAESAR) in the robotic testbed, which can simulate the movements of autonomous spacecraft. (Image credit: Andrew Brodhead)

Space travel is complex, expensive, and risky. Great sums and valuable payloads are on the line every time one spacecraft docks with another. One slip and a billion-dollar mission could be lost. Aerospace engineers believe that autonomous control, like the sort guiding many cars down the road today, could vastly improve mission safety, but the complexity of the mathematics required for error-free certainty is beyond anything on-board computers can currently handle.

In a new paper presented at the IEEE Aerospace Conference in March 2024, a team of aerospace engineers at Stanford University reported using AI to speed the planning of optimal and safe trajectories between two or more docking spacecraft. They call it ART, the Autonomous Rendezvous Transformer, and they say it is the first step toward an era of safer, trustworthy self-guided space travel.

In autonomous control, the number of possible outcomes is massive; with no room for error, the planning problems are essentially open-ended.

"Trajectory optimization is a very old topic. It has been around since the 1960s, but it is difficult when you try to match the performance requirements and rigid safety guarantees necessary for autonomous space travel within the parameters of traditional computational approaches," said Marco Pavone, an associate professor of aeronautics and astronautics and co-director of the new Stanford Center for AEroSpace Autonomy Research (CAESAR). "In space, for example, you have to deal with constraints that you typically do not have on the Earth, like, for example, pointing at the stars in order to maintain orientation. These translate to mathematical complexity."

"For autonomy to work without fail billions of miles away in space, we have to do it in a way that on-board computers can handle," added Simone D'Amico, an associate professor of aeronautics and astronautics and fellow co-director of CAESAR. "AI is helping us manage the complexity and delivering the accuracy needed to ensure mission safety, in a computationally efficient way."

CAESAR is a collaboration between industry, academia, and government that brings together the expertise of Pavone's Autonomous Systems Lab and D'Amico's Space Rendezvous Lab. The Autonomous Systems Lab develops methodologies for the analysis, design, and control of autonomous systems: cars, aircraft, and, of course, spacecraft. The Space Rendezvous Lab performs fundamental and applied research to enable future distributed space systems, whereby two or more spacecraft collaborate autonomously to accomplish objectives otherwise very difficult for a single system, including flying in formation, rendezvous and docking, swarm behaviors, constellations, and many others. CAESAR is supported by two founding sponsors from the aerospace industry, and the center is planning a launch workshop for May 2024.

CAESAR researchers discuss the robotic free-flyer platform, which uses air bearings to hover on a granite table and simulate a frictionless zero gravity environment. (Image credit: Andrew Brodhead)

The Autonomous Rendezvous Transformer is a trajectory optimization framework that leverages the massive benefits of AI without compromising on the safety assurances needed for reliable deployment in space. At its core, ART involves integrating AI-based methods into the traditional pipeline for trajectory optimization, using AI to rapidly generate high-quality trajectory candidates as input for conventional trajectory optimization algorithms. The researchers refer to the AI suggestions as a "warm start" to the optimization problem and show how this is crucial to obtaining substantial computational speed-ups without compromising on safety.
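
The mechanics of a warm start are easy to sketch outside the aerospace setting: a learned model proposes an initial trajectory, and a conventional solver refines it. The Python sketch below is a generic illustration only; the straight-line interpolation stands in for ART's transformer, and soft penalty terms stand in for the hard safety constraints a real mission solver would enforce:

```python
import numpy as np
from scipy.optimize import minimize

HORIZON, DIM = 20, 2  # planning steps, state dimension (toy 2D setting)

def learned_warm_start(start, goal):
    """Stand-in for the trained model: propose a plausible first guess.
    ART would generate this candidate with a transformer."""
    return np.linspace(start, goal, HORIZON)

def cost(flat_traj, start, goal):
    traj = flat_traj.reshape(HORIZON, DIM)
    effort = np.sum(np.diff(traj, axis=0) ** 2)            # fuel-use proxy
    boundary = 100.0 * (np.sum((traj[0] - start) ** 2)     # start in place
                        + np.sum((traj[-1] - goal) ** 2))  # reach the target
    return effort + boundary

start, goal = np.array([0.0, 0.0]), np.array([5.0, 3.0])
x0 = learned_warm_start(start, goal).ravel()  # warm start from the "model"
result = minimize(cost, x0, args=(start, goal), method="L-BFGS-B")
print("converged:", result.success, "final cost:", round(result.fun, 4))
```

The benefit is purely computational: starting the solver near a good solution cuts iterations, while the downstream optimizer, not the neural network, remains responsible for the final, constraint-satisfying answer.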

"One of the big challenges in this field is that we have so far needed ground-in-the-loop approaches: you have to communicate things to the ground, where supercomputers calculate the trajectories, and then we upload commands back to the satellite," explains Tommaso Guffanti, a postdoctoral fellow in D'Amico's lab and first author of the paper introducing the Autonomous Rendezvous Transformer. "And in this context, our paper is exciting, I think, for including artificial intelligence components in the traditional guidance, navigation, and control pipeline to make these rendezvous smoother, faster, more fuel efficient, and safer."

ART is not the first model to bring AI to the challenge of space flight, but in tests in a terrestrial lab setting, it outperformed other machine-learning-based architectures. Transformer models like ART are a subset of high-capacity neural network models that got their start with large language models, like those used by chatbots. The same architecture is extremely efficient at parsing not just words but many other types of data, such as images, audio, and now trajectories.

"Transformers can be applied to understand the current state of a spacecraft, its controls, and the maneuvers that we wish to plan," says Daniele Gammelli, a postdoctoral fellow in Pavone's lab and a co-author on the ART paper. "These large transformer models are extremely capable at generating high-quality sequences of data."

The next frontier in their research is to further develop ART and then test it in the realistic experimental environment made possible by CAESAR. If ART can pass CAESAR's high bar, the researchers can be confident that it's ready for testing in real-world scenarios in orbit.

"These are state-of-the-art approaches that need refinement," D'Amico says. "Our next step is to inject additional AI and machine learning elements to improve ART's current capability and to unlock new capabilities, but it will be a long journey before we can test the Autonomous Rendezvous Transformer in space itself."

AI drone that could hunt and kill people built in just hours by scientist ‘for a game’ – Livescience.com

It only takes a few hours to configure a small, commercially available drone to hunt down a target by itself, a scientist has warned.

Luis Wenus, an entrepreneur and engineer, incorporated an artificial intelligence (AI) system into a small drone to chase people around "as a game," he wrote in a post on March 2 on X, formerly known as Twitter. But he soon realized it could easily be configured to contain an explosive payload.

Collaborating with Robert Lukoszko, another engineer, he configured the drone to use an object-detection model to find people and fly toward them at full speed, he said. The engineers also built facial recognition into the drone, which works at a range of up to 33 feet (10 meters). This means a weaponized version of the drone could be used to attack a specific person or set of targets.

"This literally took just a few hours to build, and made me realize how scary it is," Wenus wrote. "You could easily strap a small amount of explosives on these and let 100's of them fly around. We check for bombs and guns but THERE ARE NO ANTI-DRONE SYSTEMS FOR BIG EVENTS & PUBLIC SPACES YET."

Wenus described himself as an "open source absolutist," meaning he believes in always sharing code and software through open source channels. He also identifies with "e/acc," short for effective accelerationism, a school of thought among AI researchers that favors accelerating AI research regardless of the downsides, in the belief that the upsides will always outweigh them. He said, however, that he would not publish any code relating to this experiment.

He also warned that a terror attack could be orchestrated in the near future using this kind of technology. While people currently need technical knowledge to engineer such a system, writing the software will only get easier as time passes, partly due to advances in AI coding assistants, he noted.

Wenus said his experiment showed that society urgently needs to build anti-drone systems for civilian spaces where large crowds could gather. There are several countermeasures available, according to drone-detection firm Robin Radar, including cameras, acoustic sensors, and radar to detect drones. Disrupting them, however, could require technologies such as radio-frequency jammers, GPS spoofers, and net guns, as well as high-energy lasers.

While such weapons haven't been deployed in civilian environments, they have been previously conceptualized and deployed in the context of warfare. Ukraine, for example, has developed explosive drones in response to Russia's invasion, according to the Wall Street Journal (WSJ).

The U.S. military is also working on ways to build and control swarms of small drones that can attack targets. The work follows efforts by the U.S. Navy, which first demonstrated in 2017 that it could control a swarm of 30 explosive-carrying drones, according to MIT Technology Review.

Google apologizes for missing the mark after Gemini generated racially diverse Nazis – The Verge

Google has apologized for what it describes as "inaccuracies in some historical image generation depictions" with its Gemini AI tool, saying its attempts at creating a "wide range" of results missed the mark. The statement follows criticism that it depicted specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.

"We're aware that Gemini is offering inaccuracies in some historical image generation depictions," says the Google statement, posted this afternoon on X. "We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."

Google began offering image generation through its Gemini (formerly Bard) AI platform earlier this month, matching the offerings of competitors like OpenAI. Over the past few days, however, social media posts have questioned whether it fails to produce historically accurate results in an attempt at racial and gender diversity.

As the Daily Dot chronicles, the controversy has been promoted largely, though not exclusively, by right-wing figures attacking a tech company that's perceived as liberal. Earlier this week, a former Google employee posted on X that it's "embarrassingly hard to get Google Gemini to acknowledge that white people exist," showing a series of queries like "generate a picture of a Swedish woman" or "generate a picture of an American woman." The results appeared to overwhelmingly or exclusively show AI-generated people of color. (Of course, all the places he listed do have women of color living in them, and none of the AI-generated women exist in any country.) The criticism was taken up by right-wing accounts that requested images of historical groups or figures like the Founding Fathers and purportedly got overwhelmingly non-white AI-generated people as results. Some of these accounts positioned Google's results as part of a conspiracy to avoid depicting white people, and at least one used a coded antisemitic reference to place the blame.

Google didn't reference specific images that it felt were errors; in a statement to The Verge, it reiterated the contents of its post on X. But it's plausible that Gemini has made an overall attempt to boost diversity because of a chronic lack of it in generative AI. Image generators are trained on large corpuses of pictures and written captions to produce the "best" fit for a given prompt, which means they're often prone to amplifying stereotypes. A Washington Post investigation last year found that prompts like "a productive person" resulted in pictures of entirely white and almost entirely male figures, while a prompt for "a person at social services" uniformly produced what looked like people of color. It's a continuation of trends that have appeared in search engines and other software systems.

"It's a good thing to portray diversity ** in certain cases **," noted one person who posted the image of racially diverse 1940s German soldiers, while defending Google's core goals. "The stupid move here is Gemini isn't doing it in a nuanced way." And while entirely white-dominated results for something like "a 1943 German soldier" would make historical sense, that's much less true for prompts like "an American woman," where the question is how to represent a diverse real-life group in a small batch of made-up portraits.

For now, Gemini appears to be simply refusing some image generation tasks. It wouldn't generate an image of Vikings for one Verge reporter, although I was able to get a response. On desktop, it resolutely refused to give me images of German soldiers or officials from Germany's Nazi period or to offer an image of an American president from the 1800s.

But some historical requests still do end up factually misrepresenting the past. A colleague was able to get the mobile app to deliver a version of the "German soldier" prompt, which exhibited the same issues described on X.

And while a query for pictures of the Founding Fathers returned group shots of almost exclusively white men who vaguely resembled real figures like Thomas Jefferson, a request for "a US senator from the 1800s" returned a list of results Gemini promoted as diverse, including what appeared to be Black and Native American women. (The first female senator, a white woman, served in 1922.) It's a response that ends up erasing a real history of race and gender discrimination; "inaccuracy," as Google puts it, is about right.

Additional reporting by Emilia David

Researchers jailbreak AI chatbots with ASCII art — ArtPrompt bypasses safety measures to unlock malicious queries – Tom’s Hardware

Researchers based in Washington and Chicago have developed ArtPrompt, a new way to circumvent the safety measures built into large language models (LLMs). According to the research paper "ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs," chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 can be induced to respond to queries they are designed to reject using ASCII art prompts generated by the ArtPrompt tool. It is a simple and effective attack, and the paper provides examples of ArtPrompt-induced chatbots advising on how to build bombs and make counterfeit money.

ArtPrompt consists of two steps: word masking and cloaked prompt generation. In the word-masking step, given the targeted behavior the attacker aims to provoke, the attacker first masks the sensitive words in the prompt that would likely conflict with the safety alignment of LLMs and result in rejection. In the cloaked-prompt-generation step, the attacker uses an ASCII art generator to replace the identified words with ASCII art representations. Finally, the generated ASCII art is substituted into the original prompt, which is sent to the victim LLM to generate a response.

Chatbots wielding artificial intelligence (AI) are increasingly locked down to avoid malicious abuse. AI developers don't want their products to be subverted to promote hateful, violent, illegal, or similarly harmful content. So, if you were to query one of the mainstream chatbots today about how to do something malicious or illegal, you would likely only face rejection. Moreover, in a kind of technological game of whack-a-mole, the major AI players have spent plenty of time plugging linguistic and semantic holes to prevent people from wandering outside the guardrails. This is why ArtPrompt is quite an eyebrow-raising development.

To best understand ArtPrompt and how it works, it is probably simplest to check out the two examples provided by the research team behind the tool. In the paper's first example, ArtPrompt easily sidesteps the protections of contemporary LLMs. The tool replaces the "safety word" with an ASCII art representation of the word to form a new prompt. The LLM recognizes the word in the ArtPrompt output but sees no issue in responding, as the prompt doesn't trigger any ethical or safety safeguards.

Another example provided in the research paper shows how to successfully query an LLM about counterfeiting cash. Tricking a chatbot this way seems basic, but the ArtPrompt developers assert that their tool fools today's LLMs "effectively and efficiently." Moreover, they claim it "outperforms all [other] attacks on average" and remains a practical, viable attack on multimodal language models for now.

The last time we reported on AI chatbot jailbreaking, some enterprising researchers from NTU were working on Masterkey, an automated method of using the power of one LLM to jailbreak another.

Nvidia Earnings Show Soaring Profit and Revenue Amid AI Boom – The New York Times

Nvidia, the kingpin of chips powering artificial intelligence, on Wednesday released quarterly financial results that reinforced how the company has become one of the biggest winners of the artificial intelligence boom, and it said demand for its products would fuel continued sales growth.

The Silicon Valley chip maker has been on an extraordinary rise over the past 18 months, driven by demand for its specialized and costly semiconductors, which are used for training popular A.I. services like OpenAI's ChatGPT chatbot. Nvidia has become known as one of the "Magnificent Seven" tech stocks, which, along with others like Amazon, Apple and Microsoft, have helped power the stock market.

Nvidia's valuation has surged more than 40 percent to $1.7 trillion since the start of the year, turning it into one of the world's most valuable public companies. Last week, the company briefly eclipsed the market values of Amazon and Alphabet before receding to the fifth-most-valuable tech company. Its stock market gains are largely a result of repeatedly exceeding analysts' expectations for growth, a feat that is becoming more difficult as they keep raising their predictions.

On Wednesday, Nvidia reported that revenue in its fiscal fourth quarter more than tripled from a year earlier to $22.1 billion, while profit soared nearly ninefold to $12.3 billion. Revenue was well above the $20 billion the company predicted in November and above Wall Street estimates of $20.4 billion.

Nvidia predicted that revenue in the current quarter would total about $24 billion, also more than triple the year-earlier period and higher than analysts' average forecast of $22 billion.

Jensen Huang, Nvidia's co-founder and chief executive, argues that an epochal shift to upgrade data centers with chips needed for training powerful A.I. models is still in its early phases. That will require spending roughly $2 trillion to equip all the buildings and computers to use chips like Nvidia's, he predicts.

Which AI phone features are useful and how well they actually work – The Washington Post

Every year like clockwork, some of the biggest companies in the world release new phones they hope you will shell out hundreds of dollars for.

And more and more, they are leaning on a new angle to get you thinking of upgrading: artificial intelligence.

Smartphones from Google and Samsung come with features to help you skim through long swaths of text, tweak the way you sound in messages, and make your photos more eye-catching. Meanwhile, Apple is reportedly racing to build AI tools and features it hopes to include in an upcoming version of its iOS software, which will launch alongside the company's new iPhones later this year.

But here's the real question: Of the AI tools built into phones right now, how many of them are actually useful?

That's tough to say: it all depends on what you use your phone for, and what you personally find helpful. To help, here's a brief guide to the AI features you'll most commonly find in phones right now, so you can decide for yourself which might be worth living with.

For years, smartphone makers have worked to make the photos that come out of the tiny camera sensors they use look better than they should. Now, they're also giving us the tools to more easily revise those images.

Here are the most basic: Google and Samsung phones now let you resize, move or erase people and objects inside photos you've taken. Once you do that, the phones lean on generative AI to fill in the visual gaps left behind, and that's it.

Think of it as a little Photoshopping, except the hard work is basically done for you. And for better or worse, there are limits to what it can do.

You can't use those built-in tools to generate people, objects or more fantastical additions that weren't part of the original image the way you can with other AI image creation tools. The results don't usually survive serious scrutiny, either; it's not hard to see places where little details don't line up, or areas that look smudgy because the AI couldn't convincingly fill a gap where an offending object used to be.

What's potentially more unsettling are tools such as Google's Best Take for its Pixel phones, which give you the chance to select specific expressions for people's faces in an image if you've taken a bunch of photos in a row.

Some people don't mind it, while others find it a little divorced from reality. No matter where you land, though, expect your photos to get a lot of AI attention the next time you buy a phone.

Your messages to your boss probably shouldn't sound like messages to your friends, and vice versa. Samsung's Chat Assist and Google's Magic Compose tools use generative AI to try to adjust the language in your messages to make them more palatable.

The catch? Google's Magic Compose only works in its texting-focused Messages app, which means you can't easily use it for emails or, say, WhatsApp messages. (A similar tool for Gmail and the Chrome web browser, called Help Me Write, is not yet widely available.) People who buy Galaxy S24 phones, meanwhile, can use Samsung's version of this feature wherever they write text to switch between professional, casual, polite, and even emoji-filled variations of their original message.

What can I say? It works, though I can't imagine using it with any regularity. And in some ways, Samsung's Chat Assist tool backs down when it's arguably needed most. In a few test emails where I used some very mild swears to allude to (fictional) workplace stress, Chat Assist refused to help on the grounds that the messages contained inappropriate language.

The built-in voice recorder apps on Google's Pixels and Samsung's latest phones don't just record audio; they'll turn those recordings into full-blown transcripts.

In theory, this should free you up from having to take so many notes while you're in a meeting or a lecture. And for the most part, these features work well enough: after a few seconds, they'll dutifully produce readable, if sometimes clumsy, readouts of what you've just heard.

If all you need is a sort of rough draft to accompany your recordings, these automated transcription tools can be really helpful. They can differentiate between multiple speakers, which is handy when you need to skim through a conversation later. And Google's version will even give you a live transcription, which can be nice if you're the sort of person who keeps subtitles on all the time.

But whether you're using a Google phone or one of Samsung's, the resulting transcripts often need a bit of cleanup, which means you'll need to do a little extra work before you copy and paste the results into something important.

Who among us hasn't clicked into a Wikipedia page, or an article, or a recipe online that takes way too long to get to the point? As long as you're using the Chrome browser, Google's Pixel phones can scan those long webpages and boil them down into a set of high-level blurbs to give you the gist.

Sadly, Google's summaries are often too cursory to feel satisfying.

Samsung's phones can summarize your notes and transcriptions of your recordings, but they will only summarize things you find on the web if you use Samsung's homemade web browser. Honestly, that might be worth it: the quality of its summaries is much better than Google's. (You even have the option of switching to a more detailed version of the AI summary, which Google doesn't offer at all.)

Both versions of these summary tools come with a notable caveat, too: they won't summarize articles from websites that have paywalls, which includes just about every major U.S. newspaper.

Samsung's AI tools are free for now, but a tiny footnote on its website suggests the company may eventually charge customers to use them. It's not a done deal yet, but Samsung isn't ruling it out either.

"We are committed to making Galaxy AI features available to as many of our users as possible," a spokesperson said in a statement. "We will not be considering any changes to that direction before the end of 2025."

Google, meanwhile, already makes some of its AI-powered features exclusive to certain devices. (For example, a Video Boost tool for improving the look of your footage is only available on the company's higher-end Pixel 8 Pro phones.)

In the past, Google has made experimental versions of some AI tools, like the Magic Compose feature, available only to people who pay for the company's Google One subscription service. And more recently, Google has started charging people for access to its latest AI chatbot. For now, though, the company hasn't said anything either way about putting future AI phone features behind a paywall.

Google did not immediately respond to a request for comment.

How a New Bipartisan Task Force Is Thinking About AI – TIME

On Tuesday, speaker of the House of Representatives Mike Johnson and Democratic leader Hakeem Jeffries launched a bipartisan Task Force on Artificial Intelligence.

Johnson, a Louisiana Republican, and Jeffries, a New York Democrat, each appointed 12 members to the Task Force, which will be chaired by Representative Jay Obernolte, a California Republican, and co-chaired by Representative Ted Lieu, a California Democrat. According to the announcement, the Task Force will produce a comprehensive report that will include "guiding principles, forward-looking recommendations and bipartisan policy proposals developed in consultation with committees of jurisdiction."

Obernolte, who has a master's in AI from the University of California, Los Angeles and founded the video game company FarSight Studios, and Lieu, who studied computer science and political science at Stanford University, are natural picks to lead the Task Force. But many of the members have expertise in AI too. Representative Bill Foster, a Democrat from Illinois, told TIME that he programmed neural networks in the 1990s as a physics Ph.D. working at a particle accelerator. Other members have introduced AI-related bills and held hearings on AI policy issues. And Representative Don Beyer, a 73-year-old Democrat from Virginia, is pursuing a master's in machine learning at George Mason University alongside his congressional responsibilities.

Since OpenAI released the wildly popular ChatGPT chatbot in November 2022, lawmakers around the world have rushed to get to grips with the societal implications of AI. In the White House, the Biden Administration has done what it can, issuing a sweeping Executive Order in October 2023 intended both to ensure the U.S. benefits from AI and to mitigate the risks associated with the technology. In the Senate, Majority Leader Chuck Schumer announced a regulatory framework in June 2023, and has since been holding closed-door convenings between lawmakers, experts, and industry executives. Many senators have been holding their own hearings, proposing alternative regulatory frameworks, and submitting bills to regulate AI.

The House, however, partly due to the turmoil following former Speaker Kevin McCarthy's ouster in the fall, has lagged behind. The Task Force represents the lower chamber's most significant step on AI regulation yet. Given that AI legislation will require the approval of both chambers, the Task Force's report could shape the agenda for future AI laws. TIME spoke with eight Task Force members to understand their priorities.

Each member has a slightly different focus, informed by their backgrounds before entering politics and the different committees they sit on.

"I recognize that if used responsibly, AI has the potential to enhance the efficiency of patient care, improve health outcomes, and lower costs," California Democrat Representative Ami Bera told TIME in an emailed statement. He trained as an internal medicine doctor, taught at the UC Davis School of Medicine and served as Sacramento County's chief medical officer before entering politics in 2013.

Meanwhile, Colorado Democrat Representative Brittany Pettersen is focused on AI's impact on the banking system. "As artificial intelligence continues to rapidly advance and become more widely available, it has the potential to impact everything from our election systems, with the use of deep fakes, to bank fraud perpetuated by high-tech scams. Our policies must keep up to ensure we continue to lead in this space while protecting our financial system and our country at large," said Pettersen, who is a member of the House Financial Services bipartisan Working Group on AI and introduced a bill last year to address AI-powered bank scams, in an emailed statement.

The fact that the members each have different focuses and sit on different committees is, in part, a design choice, suggests Foster, the Illinois Democrat. "At one point, I counted there were seven committees in Congress that claimed they were doing some part of information technology. Which means we have no committees, because there's no one who's really got themselves and their staff focused on information technology full time," he says. The Task Force might allow the House to actually move the ball forward on policy issues that span committee jurisdictions, he hopes.

If some issues are particular to certain members, others are a shared source of concern. All eight of the Task Force members that TIME spoke with expressed fears over AI-generated deep fakes and their potential impact on elections.

While no other issue commanded the same unanimity of interest, many themes recurred. Labor impacts from AI-powered hiring software and automation, algorithmic bias, AI in healthcare, data protection and privacy: all of these issues were raised by multiple members of the Task Force in conversations with TIME.

Another topic raised by several members was the CREATE AI Act, a bill that would establish a National AI Research Resource (NAIRR) to provide researchers with the tools they need to do cutting-edge research. A pilot of the NAIRR was recently launched by the National Science Foundation, as instructed by President Biden's AI Executive Order.

Representative Haley Stevens, a Democrat from Michigan, stressed the importance of maintaining technological superiority over China. "Frankly, I want the United States of America, alongside our western counterparts, setting the rules for the road with artificial intelligence, not the Chinese Communist Party," she said. Representative Scott Franklin, a Republican from Florida, concurred and argued that preventing industrial espionage would be especially important. "We're putting tremendous resources against this challenge and investing in it; we need to make sure that we're protecting our intellectual property," he said.

Both Franklin and Beyer said the Task Force should devote some of its energies to considering existential risks from powerful future AI systems. "As long as there are really thoughtful people, like Dr. Hinton or others, who worry about the existential risks of artificial intelligence (the end of humanity), I don't think we can afford to ignore that," said Beyer. "Even if there's just a one in a 1,000 chance, one in a 1,000 happens. We see it with hurricanes and storms all the time."

Other members are less worried. "If we get the governance right on the little things, then it will also protect against that big risk," says Representative Sara Jacobs, a Democrat from California. "And I think that there's so much focus on that big risk that we're actually missing the harms and risks that are already being done by this technology."

The Task Force has yet to meet, and while none of its members were able to say when it might publish its report, they need to move quickly to have any hope of their work leading to federal legislation before the presidential election takes over Washington.

State lawmakers are not waiting for Congress to act. Earlier this month, Senator Scott Wiener, a Democrat who represents San Francisco and parts of San Mateo County in the California State Senate, introduced a bill that would seek to make powerful AI systems safe by, among other things, mandating safety tests. "I would love to have one unified federal law that effectively addresses AI safety issues," Wiener said in a recent interview with NPR. "Congress has not passed such a law. Congress has not even come close to passing such a law."

But many of the Task Force's members argued that, while partisan gridlock has made it difficult for the House to pass anything in recent months, AI might be the one area where Congress can find common ground.

"I've spoken with a number of my colleagues on both sides of the aisle on this," says Franklin, the Florida Republican. "We're all kind of coming in at the same place, and we understand the seriousness of the issue. We may have disagreement on exactly how to address [the issues]. And that's why we need to get together and have those conversations."

"The fact that it's bipartisan and bicameral makes me very optimistic that we'll be able to get meaningful things done in this calendar year," says Beyer, the Virginia Democrat. "And put it on Joe Biden's desk."

China’s Rush to Dominate A.I. Comes With a Twist: It Depends on U.S. Technology – The New York Times

In November, a year after ChatGPT's release, a relatively unknown Chinese start-up leaped to the top of a leaderboard that judged the abilities of open-source artificial intelligence systems.

The Chinese firm, 01.AI, was only eight months old but had deep-pocketed backers and a $1 billion valuation, and was founded by a well-known investor and technologist, Kai-Fu Lee. In interviews, Mr. Lee presented his A.I. system as an alternative to options like Meta's generative A.I. model, called LLaMA.

There was just one twist: some of the technology in 01.AI's system came from LLaMA. Mr. Lee's start-up then built on Meta's technology, training its system with new data to make it more powerful.

The situation is emblematic of a reality that many in China openly admit. Even as the country races to build generative A.I., Chinese companies are relying almost entirely on underlying systems from the United States. China now lags the United States in generative A.I. by at least a year and may be falling further behind, according to more than a dozen tech industry insiders and leading engineers, setting the stage for a new phase in the cutthroat technological competition between the two nations that some have likened to a cold war.

"Chinese companies are under tremendous pressure to keep abreast of U.S. innovations," said Chris Nicholson, an investor with the venture capital firm Page One Ventures who focuses on A.I. technologies. "The release of ChatGPT was yet another Sputnik moment that China felt it had to respond to."

Jenny Xiao, a partner at Leonis Capital, an investment firm that focuses on A.I.-powered companies, said the A.I. models that Chinese companies build from scratch "aren't very good," which leads many Chinese firms to use fine-tuned versions of Western models. She estimated China was two to three years behind the United States in generative A.I. developments.

Google to fix AI picture bot after ‘woke’ criticism – BBC.com

Google and parent company Alphabet Inc's headquarters in Mountain View, California

Google is racing to fix its new AI-powered tool for creating pictures, after claims it was over-correcting against the risk of being racist.

Users said the firm's Gemini bot supplied images depicting a variety of genders and ethnicities even when doing so was historically inaccurate.

For example, a prompt seeking images of America's founding fathers turned up women and people of colour.

The company said its tool was "missing the mark".

"Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here," said Jack Krawczyk, senior director for Gemini Experiences.

"We're working to improve these kinds of depictions immediately," he added.

It is not the first time AI has stumbled over real-world questions about diversity.

For example, Google infamously had to apologise almost a decade ago after its photos app labelled a photo of a black couple as "gorillas".

Rival AI firm OpenAI was also accused of perpetuating harmful stereotypes, after users found its Dall-E image generator responded to queries for "chief executive", for example, with results dominated by pictures of white men.

Google, which is under pressure to prove it is not falling behind in AI developments, released its latest version of Gemini last week.

The bot creates pictures in response to written queries.

It quickly drew critics, who accused the company of training the bot to be laughably woke.

"It's embarrassingly hard to get Google Gemini to acknowledge that white people exist," computer scientist Debarghya Das wrote.

"Come on," wrote Frank J Fleming, an author and humourist who writes for outlets including the right-wing PJ Media, in response to the results he received when asking for an image of a Viking.

The claims picked up speed in right-wing circles in the US, where many big tech platforms are already facing backlash for alleged liberal bias.

Mr Krawczyk said the company took representation and bias seriously and wanted its results to reflect its global user base.

"Historical contexts have more nuance to them and we will further tune to accommodate that," he wrote on X, formerly Twitter, where users were sharing the dubious results they had received.

"This is part of the alignment process - iteration on feedback. Thank you and keep it coming!"

Samsung’s Galaxy AI Is Coming to the Galaxy S23, Foldables and Tablets Next Month – CNET

Samsung is bringing its suite of Galaxy AI features to the Galaxy S23 lineup, as well as the Galaxy S23 FE, Galaxy Z Fold 5, Galaxy Z Flip 5 and Galaxy Tab S9 family, starting in March. The move shows that Samsung is eager to make AI a bigger part of all its high-profile mobile products, not just its newest phones.

Galaxy AI is scheduled to arrive in a software update in late March as part of Samsung's goal to bring the features to more than 100 million Galaxy users this year, T.M. Roh, president and head of Samsung's mobile experience business, said in a press release. Samsung previously said Galaxy AI would come to the Galaxy S23 lineup, but it hadn't disclosed the timing until now.

Galaxy AI is an umbrella term that refers to a collection of new AI-powered features that debuted on the Galaxy S24 series in January. Some examples of Galaxy AI features include Generative Edit, which lets you move or manipulate objects in photos; Chat Assist, for rewriting texts in a different tone or translating them into other languages; Circle to Search, which lets you launch a Google search for any object on screen just by circling it; and Live Translate, a tool that translates phone calls in real time.

Samsung and other tech companies have been vocal about their plans to infuse smartphones with generative AI, or AI that can create content or responses when prompted based on training data. It's the same flavor of AI that powers ChatGPT, and device makers have been adamant about adding it to their own products.

Although AI has played an important role in smartphones for years, companies like Samsung and Google, which collaborated to develop Galaxy AI, only recently became focused on bringing generative AI to phones. For Samsung, Galaxy AI is the culmination of those efforts.

Samsung's AI features are also likely coming to wearables next, as the company hinted Tuesday in a blog post authored by Roh.

"In the near future, select Galaxy wearables will use AI to enhance digital health and unlock a whole new era of expanded, intelligent health experiences," he said in the post.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

The Samsung Galaxy S23 series will get AI features in late March – The Verge

Right now, you need a Galaxy S24 phone to use the very latest AI features from Samsung, but that's changing next month. In late March, Samsung will extend Galaxy AI features to the S23 series, including the S23 FE, as well as recent foldables and tablets, as part of the One UI 6.1 update. It's all free for now, but after 2025 you might have to pay up.

The Galaxy Z Fold 5 and Z Flip 5 are slated to get the update, as well as the Galaxy Tab S9, S9 Plus, and S9 Ultra. If Samsung wants to ship Galaxy AI to 100 million phones this year like it says it will, that's a solid start. The One UI 6.1 update will include the much-touted AI features on the S24 series, including live translation capabilities, generative photo and video editing, and Google's Circle to Search feature. This suite of features includes a mix of on- and off-device processing, just like it does on the S24 series.

An older phone learning new tricks is unequivocally a good thing, even if Galaxy AI is a little bit of a mixed bag right now. But my overall impression is that these features do occasionally come in handy, and when they go sideways they're mostly harmless. One UI 6.1 will also include a handful of useful non-AI updates, such as lockscreen widgets and the new, unified Quick Share.

Google Just Released Two Open AI Models That Can Run on Laptops – Singularity Hub

Last year, Google united its AI units in Google DeepMind and said it planned to speed up product development in an effort to catch up to the likes of Microsoft and OpenAI. The stream of releases in the last few weeks follows through on that promise.

Two weeks ago, Google announced the release of its most powerful AI to date, Gemini Ultra, and reorganized its AI offerings, including its Bard chatbot, under the Gemini brand. A week later, it introduced Gemini Pro 1.5, an updated Pro model that largely matches Gemini Ultra's performance and also includes an enormous context window (the amount of data you can prompt it with) for text, images, and audio.

Today, the company announced two new models. Going by the name Gemma, the models are much smaller than Gemini Ultra, weighing in at 2 and 7 billion parameters respectively. Google said the models are strictly text-based (as opposed to multimodal models that are trained on a variety of data, including text, images, and audio), outperform similarly sized models, and can be run on a laptop, desktop, or in the cloud. Before training, Google stripped the datasets of sensitive data like personal information. It also fine-tuned and stress-tested the trained models pre-release to minimize unwanted behavior.

The models were built and trained with the same technology used in Gemini, Google said, but in contrast, they're being released under an open license.

That doesn't mean they're open-source. Rather, the company is making the model weights available so developers can customize and fine-tune them. They're also releasing developer tools to help keep applications safe and make them compatible with major AI frameworks and platforms. Google says the models can be employed for "responsible commercial usage and distribution," as defined in the terms of use, for organizations of any size.
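
For a sense of what making the weights available means in practice, here is a minimal sketch using the Hugging Face transformers library. The hub ID below is an assumption for illustration; the article does not say where, or under what name, the weights are hosted:

```python
# Minimal sketch: running an open-weights model locally.
# "google/gemma-7b" is an assumed hub ID; access may also require
# accepting the model's terms of use first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open models are useful because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights themselves are downloaded, developers can fine-tune the model on their own data; that local access is the customization Google is pointing to, even though the training data and code stay private.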

If Gemini is aimed at OpenAI and Microsoft, Gemma likely has Meta in mind. Meta is championing a more open model for AI releases, most notably with its Llama 2 large language model. Though Llama 2 is sometimes mistaken for an open-source model, Meta has not released the dataset or code used to train it. Other more open models, like the Allen Institute for AI's (AI2) recent OLMo models, do include training data and code. Google's Gemma release is more akin to Llama 2 than OLMo.

"[Open models have] become pretty pervasive now in the industry," Google's Jeanine Banks said in a press briefing. "And it often refers to open weights models, where there is wide access for developers and researchers to customize and fine-tune models but, at the same time, the terms of use (things like redistribution, as well as ownership of those variants that are developed) vary based on the model's own specific terms of use. And so we see some difference between what we would traditionally refer to as open source, and we decided that it made the most sense to refer to our Gemma models as open models."

Still, Llama 2 has been influential in the developer community, and open models from the likes of French startup Mistral and others are pushing performance toward state-of-the-art closed models, like OpenAI's GPT-4. Open models may make more sense in enterprise contexts, where developers can better customize them. They're also invaluable for AI researchers working on a budget. Google wants to support such research with Google Cloud credits; researchers can apply for up to $500,000 in credits toward larger projects.

Just how open AI should be is still a matter of debate in the industry.

Proponents of a more open ecosystem believe the benefits outweigh the risks. An open community, they say, can not only innovate at scale, but also better understand, reveal, and solve problems as they emerge. OpenAI and others have argued for a more closed approach, contending the more powerful the model, the more dangerous it could be out in the wild. A middle road might allow an open AI ecosystem but more tightly regulate it.

What's clear is that both closed and open AI are moving at a quick pace. We can expect more innovation from big companies and open communities as the year progresses.

Image Credit: Google
