AI agents like Rabbit aim to book your vacation and order your Uber – NPR

The AI-powered Rabbit R1 device is seen at Rabbit Inc.'s headquarters in Santa Monica, California. The gadget is meant to serve as a personal assistant fulfilling tasks such as ordering food on DoorDash for you, calling an Uber or booking your family's vacation. (Stella Kalinina for NPR)

ChatGPT can give you travel ideas, but it won't book your flight to Cancún.

Now, artificial intelligence is here to help us scratch items off our to-do lists.

A slate of tech startups are developing products that use AI to complete real-world tasks.

Silicon Valley watchers see this new crop of "AI agents" as being the next phase of the generative AI craze that took hold with the launch of chatbots and image generators.

Last year, Sam Altman, the CEO of OpenAI, the maker of ChatGPT, nodded to the future of AI errand-helpers at the company's developer conference.

"Eventually, you'll just ask a computer for what you need, and it'll do all of these tasks for you," Altman said.

One of the most hyped companies doing this is called Rabbit. It has developed a device called the Rabbit R1. Chinese entrepreneur Jesse Lyu launched it at this year's CES, the annual tech trade show, in Las Vegas.

It's a bright orange gadget about half the size of an iPhone. It has a button on the side that you push and talk into like a walkie-talkie. In response to a request, an AI-powered rabbit head pops up and tries to fulfill whatever task you ask.

Chatbots like ChatGPT rely on technology known as a large language model, and Rabbit says it uses both that system and a new type of AI it calls a "large action model." In basic terms, it learns how people use websites and apps and mimics these actions after a voice prompt.

The Rabbit R1 won't just play a song on Spotify or start streaming a video on YouTube, tasks that Siri and other voice assistants can already handle; it will also order DoorDash for you, call an Uber and book your family's vacation. And it makes suggestions after learning a user's tastes and preferences.
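To make the "large action model" idea more concrete, here is a minimal sketch of how a voice request might be dispatched to a scripted sequence of app actions. Everything in it (the helper names, the hard-coded recipe, the print statements) is a hypothetical illustration, not Rabbit's actual API; the company describes its model as learning these flows from recorded interactions rather than hard-coding them.

```python
# Hypothetical sketch of dispatching a voice request to a scripted action
# sequence; a real "large action model" would learn these steps from data.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    description: str            # human-readable description of the UI action
    action: Callable[[], None]  # callable that would drive the app or website


def order_food(restaurant: str, item: str) -> List[Step]:
    """Hypothetical recipe for a food-delivery order."""
    return [
        Step("open the delivery app", lambda: print("opening app")),
        Step(f"search for {restaurant}", lambda: print(f"searching for {restaurant}")),
        Step(f"add {item} to the cart", lambda: print(f"adding {item}")),
        Step("check out with saved credentials", lambda: print("checking out")),
    ]


def handle_request(utterance: str) -> None:
    # A real system would parse intent with a learned model; this is a stub.
    if "pizza" in utterance.lower():
        steps = order_food("a nearby pizza place", "one large pizza")
    else:
        steps = [Step("fall back to a chatbot answer", lambda: print("answering"))]
    for step in steps:
        print("->", step.description)
        step.action()


handle_request("Order me a pizza for dinner")
```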

Storing potentially dozens or hundreds of a person's passwords raises instant questions about privacy. But Rabbit claims it saves user credentials in a way that makes it impossible for the company, or anyone else, to access someone's personal information. The company says it will not sell or share user data with third parties "without your formal, explicit permission."

A Rabbit employee demonstrates the company's Rabbit R1 device. The company says more than 80,000 people have preordered the device for $199. (Stella Kalinina for NPR)

The company, which says more than 80,000 people have preordered the Rabbit R1, will start shipping the devices in the coming months.

"This is the first time that AI exists in a hardware format," said Ashley Bao, a spokeswoman for Rabbit at the company's Santa Monica, Calif., headquarters. "I think we've all been waiting for this moment. We've had our Alexa. We've had our smart speakers. But like none of them [can] perform tasks from end to end and bring words to action for you."

Excitement in Silicon Valley over AI agents is fueling an increasingly crowded field of gizmos and services. Google and Microsoft are racing to develop products that harness AI to automate busywork. The web browser Arc is building a tool that uses an AI agent to surf the web for you. Another startup, called Humane, has developed a wearable AI pin that projects a display image on a user's palm. It's supposed to assist with daily tasks and also make people pick up their phones less frequently.

Similarly, Rabbit claims its device will allow people to get things done without opening apps (you log in to all your various apps on a Rabbit web portal, so it uses your credentials to do things on your behalf).

To work, the Rabbit R1 has to be connected to Wi-Fi, but there is also a SIM card slot, in case people want to buy a separate data plan just for the gadget.

When asked why anyone would want to carry around a separate device just to do something your smartphone could do in 30 seconds, Rabbit spokesman Ryan Fenwick argued that using apps to place orders and make requests all day takes longer than we might imagine.

"We are looking at the entire process, end to end, to automate as much as possible and make these complex actions much quicker and much more intuitive than what's currently possible with multiple apps on a smartphone," Fenwick said.

ChatGPT's introduction in late 2022 set off a frenzy at companies in many industries trying to ride the latest tech industry wave. That chatbot exuberance is about to be transferred to the world of gadgets, said Duane Forrester, an analyst at the firm Yext.

Google and Microsoft are racing to develop products that harness AI to automate busywork, which might make other AI-powered assistants obsolete. (Stella Kalinina for NPR)

"Early on, with the unleashing of AI, every single product or service attached the letters "A" and "I" to whatever their product or service was," Forrester said. "I think we're going to end up seeing a version of that with hardware as well."

Forrester said an AI walkie-talkie might quickly become obsolete when companies like Apple and Google make their voice assistants smarter with the latest AI innovations.

"You don't need a different piece of hardware to accomplish this," he said. "What you need is this level of intelligence and utility in our current smartphones, and we'll get there eventually."


Researchers are worried about where such technology could eventually go awry.

Analysts have pointed to potential snafus such as an AI assistant purchasing the wrong nonrefundable flight or sending a food order to someone else's house.

A 2023 paper by the Center for AI Safety warned against AI agents going rogue. It said that if an AI agent is given an "open-ended goal" (say, maximize a person's stock market profits) without being told how to achieve that goal, it could go very wrong.

"We risk losing control over AIs as they become more capable. AIs could optimize flawed objectives, drift from their original goals, become power-seeking, resist shutdown, and engage in deception. We suggest that AIs should not be deployed in high-risk settings, such as by autonomously pursuing open-ended goals or overseeing critical infrastructure, unless proven safe," according to a summary of the paper.

At Rabbit's Santa Monica office, Rabbit R1 Creative Director Anthony Gargasz pitches the device as a social media reprieve. Use it to make a doctor's appointment or book a hotel without being sucked into an app's feed for hours.

"Absolutely no doomscrolling on the Rabbit R1," said Gargasz. "The scroll wheel is for intentional interaction."

His colleague Ashley Bao added that the whole point of the gadget is to "get things done efficiently." But she acknowledged there's a cutesy factor too, comparing it to the keychain-size electronic pets that were popular in the 1990s.

"It's like a Tamagotchi but with AI," she said.

Excerpt from:

AI agents like Rabbit aim to book your vacation and order your Uber - NPR


Intel Launches World’s First Systems Foundry Designed for the AI Era – Investor Relations :: Intel Corporation (INTC)

Announced at Intel Foundry Direct Connect, Intel's extended process technology roadmap adds Intel 14A to the company's leading-edge node plan, in addition to several specialized node evolutions and new Intel Foundry Advanced System Assembly and Test capabilities. Intel also affirmed that its ambitious five-nodes-in-four-years process roadmap remains on track and will deliver the industry's first backside power solution. (Credit: Intel Corporation)

Intel announces expanded process roadmap, customers and ecosystem partners to deliver on ambition to be the No. 2 foundry by 2030.

Company hosts Intel Foundry event featuring U.S. Commerce Secretary Gina Raimondo, Arm CEO Rene Haas, OpenAI CEO Sam Altman and others.


SAN JOSE, Calif.--(BUSINESS WIRE)-- Intel Corp. (INTC) today launched Intel Foundry as a more sustainable systems foundry business designed for the AI era and announced an expanded process roadmap designed to establish leadership into the latter part of this decade. The company also highlighted customer momentum and support from ecosystem partners, including Synopsys, Cadence, Siemens and Ansys, who outlined their readiness to accelerate Intel Foundry customers' chip designs with tools, design flows and IP portfolios validated for Intel's advanced packaging and Intel 18A process technologies.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20240221189319/en/


The announcements were made at Intel's first foundry event, Intel Foundry Direct Connect, where the company gathered customers, ecosystem companies and leaders from across the industry. Among the participants and speakers were U.S. Secretary of Commerce Gina Raimondo, Arm CEO Rene Haas, Microsoft CEO Satya Nadella, OpenAI CEO Sam Altman and others.

More: Intel Foundry Direct Connect (Press Kit)

"AI is profoundly transforming the world and how we think about technology and the silicon that powers it," said Intel CEO Pat Gelsinger. "This is creating an unprecedented opportunity for the world's most innovative chip designers and for Intel Foundry, the world's first systems foundry for the AI era. Together, we can create new markets and revolutionize how the world uses technology to improve people's lives."

Process Roadmap Expands Beyond 5N4Y

Intel's extended process technology roadmap adds Intel 14A to the company's leading-edge node plan, in addition to several specialized node evolutions. Intel also affirmed that its ambitious five-nodes-in-four-years (5N4Y) process roadmap remains on track and will deliver the industry's first backside power solution. Company leaders expect Intel will regain process leadership with Intel 18A in 2025.

The new roadmap includes evolutions for Intel 3, Intel 18A and Intel 14A process technologies. It includes Intel 3-T, which is optimized with through-silicon vias for 3D advanced packaging designs and will soon reach manufacturing readiness. Also highlighted are mature process nodes, including new 12 nanometer nodes expected through the joint development with UMC announced last month. These evolutions are designed to enable customers to develop and deliver products tailored to their specific needs. Intel Foundry plans a new node every two years and node evolutions along the way, giving customers a path to continuously evolve their offerings on Intel's leading process technology.

Intel also announced the addition of Intel Foundry FCBGA 2D+ to its comprehensive suite of ASAT offerings, which already include FCBGA 2D, EMIB, Foveros and Foveros Direct.

Microsoft Design on Intel 18A Headlines Customer Momentum

Customers are supporting Intel's long-term systems foundry approach. During Pat Gelsinger's keynote, Microsoft Chairman and CEO Satya Nadella stated that Microsoft has chosen a chip design it plans to produce on the Intel 18A process.

"We are in the midst of a very exciting platform shift that will fundamentally transform productivity for every individual organization and the entire industry," Nadella said. "To achieve this vision, we need a reliable supply of the most advanced, high-performance and high-quality semiconductors. That's why we are so excited to work with Intel Foundry, and why we have chosen a chip design that we plan to produce on Intel 18A process."

Intel Foundry has design wins across foundry process generations, including Intel 18A, Intel 16 and Intel 3, along with significant customer volume on Intel Foundry ASAT capabilities, including advanced packaging.

In total, across wafer and advanced packaging, Intel Foundry's expected lifetime deal value is greater than $15 billion.

IP and EDA Vendors Declare Readiness for Intel Process and Packaging Designs

Intellectual property and electronic design automation (EDA) partners Synopsys, Cadence, Siemens, Ansys, Lorentz and Keysight disclosed tool qualification and IP readiness to enable foundry customers to accelerate advanced chip designs on Intel 18A, which offers the foundry industry's first backside power solution. These companies also affirmed EDA and IP enablement across Intel node families.

At the same time, several vendors announced plans to collaborate on assembly technology and design flows for Intel's embedded multi-die interconnect bridge (EMIB) 2.5D packaging technology. These EDA solutions will ensure faster development and delivery of advanced packaging solutions for foundry customers.

Intel also unveiled an "Emerging Business Initiative" that showcases a collaboration with Arm to provide cutting-edge foundry services for Arm-based system-on-chips (SoCs). This initiative presents an important opportunity for Arm and Intel to support startups in developing Arm-based technology and offering essential IP, manufacturing support and financial assistance to foster innovation and growth.

Systems Approach Differentiates Intel Foundry in the AI Era

Intel's systems foundry approach offers full-stack optimization from the factory network to software. Intel and its ecosystem empower customers to innovate across the entire system through continuous technology improvements, reference designs and new standards.

Stuart Pann, senior vice president of Intel Foundry at Intel, said, "We are offering a world-class foundry, delivered from a resilient, more sustainable and secure source of supply, and complemented by unparalleled systems of chips capabilities. Bringing these strengths together gives customers everything they need to engineer and deliver solutions for the most demanding applications."

Global, Resilient, More Sustainable and Trusted Systems Foundry

Resilient supply chains must also be increasingly sustainable, and today Intel shared its goal of becoming the industry's most sustainable foundry. In 2023, preliminary estimates show that Intel used 99% renewable electricity in its factories worldwide. Today, the company redoubled its commitment to achieving 100% renewable electricity worldwide, net-positive water and zero waste to landfills by 2030. Intel also reinforced its commitment to net-zero Scope 1 and Scope 2 GHG emissions by 2040 and net-zero upstream Scope 3 emissions by 2050.

Forward-Looking Statements

This release contains forward-looking statements, including with respect to Intel's:

Such statements involve many risks and uncertainties that could cause our actual results to differ materially from those expressed or implied, including those associated with:

All information in this press release reflects Intel management's views as of the date hereof unless an earlier date is specified. Intel does not undertake, and expressly disclaims any duty, to update such statements, whether as a result of new information, new developments, or otherwise, except to the extent that disclosure may be required by law.

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore's Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers' greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel's innovations, go to newsroom.intel.com and intel.com.

© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

View source version on businesswire.com: https://www.businesswire.com/news/home/20240221189319/en/

John Hipsher 1-669-223-2416 john.hipsher@intel.com

Robin Holt 1-503-616-1532 robin.holt@intel.com

Source: Intel Corp.

Released Feb 21, 2024 11:30 AM EST

More here:

Intel Launches World's First Systems Foundry Designed for the AI Era - Investor Relations :: Intel Corporation (INTC)


Energy companies tap AI to detect defects in an aging grid – E&E News by POLITICO

A helicopter loaded with cameras and sensors sweeps over a utility's high-voltage transmission line in the southeastern United States.

High-resolution cameras record images of cables, connections and towers. Artificial intelligence tools search for cracks and flaws that could be overlooked by the naked eye: the worn-out component that could spark the next wildfire.

"We have trained a lot of AI models to recognize defects," said Marion Baroux, a Germany-based business developer for Siemens Energy, which built the helicopter scanning and analysis technology.

Drones have been inspecting power lines for a decade. Today, the rapid advancement of AI and machine-learning technology has opened the door to faster detection of potential failures in aging power lines, guiding transmission owners on how to upgrade the grid to meet clean energy and extreme weather challenges.
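As a rough illustration of how flagged imagery might be triaged, the sketch below runs a defect scorer over inspection frames and surfaces only the high-scoring ones for human review. Siemens Energy has not published its pipeline, so the directory name, threshold and score_defect() stub here are assumptions standing in for a trained vision model.

```python
# Illustrative triage loop for line-inspection imagery; score_defect() is a
# placeholder for a trained defect-detection model, which is not shown here.

from pathlib import Path
from typing import List, Tuple


def score_defect(image_path: Path) -> float:
    """Placeholder for a trained classifier; returns a defect score in [0, 1]."""
    # A real model would load the image and run inference; this stub
    # simply reports everything as healthy.
    return 0.0


def triage(image_dir: Path, threshold: float = 0.8) -> List[Tuple[Path, float]]:
    """Return the frames whose defect score meets the review threshold."""
    flagged: List[Tuple[Path, float]] = []
    if not image_dir.is_dir():
        return flagged
    for image_path in sorted(image_dir.glob("*.jpg")):
        score = score_defect(image_path)
        if score >= threshold:
            flagged.append((image_path, score))
    # Humans still review the flagged frames; the model only prioritizes them.
    return flagged


if __name__ == "__main__":
    for path, score in triage(Path("inspection_frames")):
        print(f"review {path.name}: defect score {score:.2f}")
```

The structure mirrors the division of labor the article describes: the model narrows a mountain of imagery down to a short review queue, and people make the final call.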

Automating inspections is a first step in a still uncharted future for AI adoption in the electric power sector, echoing the high-stakes international debate over the risks and potential of AI technology.

President Joe Biden's executive order on AI last October emphasized caution. Safety requires "robust, reliable, repeatable, and standardized evaluations of AI systems," the order said, as well as "policies, institutions, and as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use."

There is also a case for accelerating AI's adoption, according to Department of Energy experts speaking at a recent conference.

Balancing supply and demand on the grid is becoming more complex as renewable generation replaces fossil power plants.

"AI has the potential to help us operate the grid with much higher percentages of renewables," said Andrew Bochman, senior grid strategist at the Idaho National Laboratory.

But first, AI must earn the confidence of engineers who are responsible for ensuring utilities face as few risks as possible.

"Obviously, there are a lot of technical concerns about how these systems work and what we can trust them to do," said Christopher Lamb, a senior cybersecurity researcher at Sandia National Laboratories in New Mexico.

"There are definitely risks associated with AI," said Colin Ponce, a computational mathematician at Lawrence Livermore National Laboratory in California. "A lot of utilities have a certain amount of hesitation about it because they don't really understand what it will do."

The need for transmission owners and operators to find and prevent breaks in aging power line components was driven home tragically in California's fatal Camp Fire in 2018.

A 99-year-old metal hook supporting a high-voltage cable on a Pacific Gas & Electric power line wore through, allowing the line to hit the tower and causing a short circuit whose sparks ignited the fire. The fire claimed 85 lives.

Baroux said Siemens Energy's system may or may not have prevented the Camp Fire. But the purpose is to find the transmission line components, like the failed PG&E hook, that are most in need of replacement.

Another California catastrophe demonstrates a case for that capability.

On July 13, 2021, a California grid trouble man driving through California's rugged, remote Sierra Nevada region spotted a 65-foot-tall Douglas fir that had fallen onto a PG&E power line. According to his court testimony, there was nothing he could do to prevent the spread of what would be called the Dixie Fire, which burned for three months, consuming nearly 1 million acres.

Faced with the threat of more impacts between dead or dying trees and its lines, PG&E has received state regulators' permission to bury 1,230 miles of its power lines at a cost of roughly $3 million per mile.

The flying inspections produce thousands of gigabytes of data per mile, which would overwhelm human investigators. "We will run AI models on data, then the customer-operators will review these results to look for the most urgent actions to take. The human remains the decision-making, always," she said. "But this saves them time."

Siemens Energy declined to discuss the system's price tag and would not identify the utility in the Southeast using it. The service is in use at E.ON Group energy operations in Germany, at French grid operator RTE and at TenneT, which runs the Netherlands' network, a Siemens Energy spokesperson said.

In addition to the helicopter's camera array, its instrument pod also carries sensors that detect wasteful or damaging electrical current leaks in lines. Lidar distance-measuring laser scanners are also aboard to create 3D views of towers and nearby vegetation, alerting operators to potential threats from tree impacts with lines.

The possibility of applying AI and other advanced computing solutions to grid operations is the goal of another DOE project called HIPPO, for high-performance power grid optimization. HIPPO's lead partners are the Midcontinent Independent System Operator (MISO); DOE's Pacific Northwest National Laboratory; General Electric; and Gurobi Optimization, a Beaverton, Oregon, technology firm.

HIPPO has designed high-speed computing algorithms employing machine learning tools to improve the speed and accuracy of power plant scheduling decisions by MISO, the grid operator in 15 central U.S. states and Canada's Manitoba province.

Every day, MISO operators must make decisions about which electricity generating resources will run each hour of the following day, based on the generators' competing power prices and transmission costs. The growth of wind and solar power, microgrids, and customers' rooftop solar power and electric vehicle charging are making decisions harder, as forecasting weather impacts on the grid is also more challenging.
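At its simplest, the scheduling decision described here is a cost-minimizing dispatch. The toy sketch below uses a greedy merit-order dispatch over invented generators to show the shape of the problem; HIPPO itself solves a much larger mixed-integer optimization with machine-learning assistance, and none of these numbers come from MISO.

```python
# Toy merit-order dispatch: pick the cheapest generators until forecast
# demand is met. The fleet, capacities and prices below are invented.

from dataclasses import dataclass


@dataclass
class Generator:
    name: str
    capacity_mw: float
    cost_per_mwh: float  # offer price in dollars per megawatt-hour


def dispatch(generators: list[Generator], demand_mw: float) -> dict[str, float]:
    """Greedy merit-order dispatch: cheapest generators serve demand first."""
    schedule: dict[str, float] = {}
    remaining = demand_mw
    for gen in sorted(generators, key=lambda g: g.cost_per_mwh):
        if remaining <= 0:
            break
        output = min(gen.capacity_mw, remaining)
        schedule[gen.name] = output
        remaining -= output
    if remaining > 0:
        raise ValueError(f"demand exceeds available capacity by {remaining} MW")
    return schedule


fleet = [
    Generator("wind", capacity_mw=400, cost_per_mwh=0.0),
    Generator("combined_cycle", capacity_mw=600, cost_per_mwh=40.0),
    Generator("gas_peaker", capacity_mw=300, cost_per_mwh=85.0),
]
print(dispatch(fleet, demand_mw=900))  # wind runs flat out; the peaker stays off
```

Real unit-commitment models layer on startup costs, minimum run times, ramp rates, transmission constraints and forecast uncertainty, which is where the extra computing power and machine learning described above matter.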

HIPPO's heavier computing power and complex calculations produce answers 35 times faster than current systems, allowing greener and more sustainable grid operations, MISO reported last year.

"One of the advantages of HIPPO is its flexibility," said Feng Pan, PNNL research scientist and the project's principal investigator. In addition to scheduling generation and confirming grid stability, HIPPO will enable operators to run what-if scenarios involving battery storage and customer-based resources, he said in an email.

HIPPO is easing its way into the MISO operation. The project, launched with a 2015 grant from DOE's Advanced Research Projects Agency-Energy, is not yet scheduled for full deployment. It will assist operators, not take over, Pan said.

For AI systems to solve problems, they will need trusted data about grid operations, said Lamb, the senior researcher at Sandia.

"Are there biases that could get cooked into algorithms that could create serious risks to operation reliability, and if so, what might they be?" Lamb asked.

Data issues aren't waiting for AI. Even without the complications AI may bring, operators of the principal Texas grid were dangerously in the dark during Winter Storm Uri in 2021.

"If an adversary can insert data into your [computer] training pipeline, there are ways they can poison your data set and cause a variety of problems," Lawrence Livermore's Ponce said, adding that designing defenses against rogue data threats is a major priority.

Ponce and Lamb came down on AI's side at the conference.

"There is a bunch of hype around AI that is really undeserved," Lamb said. "Operators understand their businesses. They are going to be making responsible decisions, and frankly I trust them to do so."

Grid operators should be able to maximize benefits and minimize risks provided they invest wisely in safety technology, he said. "It doesn't mean the risks will be zero."

"If we get too scared of AI and completely put the brakes on, I fear that will hinder our ability to respond to real threats and significant risk we already have evidence for, like climate change," Ponce said.

"There's a lot of doom and a lot of gloom about the application of AI," Lamb said. "Don't be scared."

Read the original post:

Energy companies tap AI to detect defects in an aging grid - E&E News by POLITICO


AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET

AI developments are happening pretty fast. If you don't stop and look around once in a while, you could miss them.

Fortunately, I'm looking around for you and what I saw this week is that competition between OpenAI, maker of ChatGPT and Dall-E, and Google continues to heat up in a way that's worth paying attention to.

A week after updating its Bard chatbot and changing the name to Gemini, Google's DeepMind AI subsidiary previewed the next version of its generative AI chatbot. DeepMind told CNET's Lisa Lacy that Gemini 1.5 will be rolled out "slowly" to regular people who sign up for a wait list and will be available now only to developers and enterprise customers.

Gemini 1.5 Pro, Lacy reports, is "as capable as" the Gemini 1.0 Ultra model, which Google announced on Feb. 8. The 1.5 Pro model has a win rate -- a measurement of how many benchmarks it can outperform -- of 87% compared with the 1.0 Pro and 55% against the 1.0 Ultra. So the 1.5 Pro is essentially an upgraded version of the best available model now.

Gemini 1.5 Pro can ingest video, images, audio and text to answer questions, added Lacy. Oriol Vinyals, vice president of research at Google DeepMind and co-lead of Gemini, described 1.5 as a "research release" and said the model is "very efficient" thanks to a unique architecture that can answer questions by zeroing in on expert sources in that particular subject rather than seeking the answer from all possible sources.

Meanwhile, OpenAI announced a new text-to-video model called Sora that's capturing a lot of attention because of the photorealistic videos it's able to generate. Sora can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Following up on a promise it made, along with Google and Meta last week, to watermark AI-generated images and video, OpenAI says it's also creating tools to detect videos created with Sora so they can be identified as being AI generated.

Google and Meta have also announced their own gen AI text-to-video creators.

Sora, which means "sky" in Japanese, is also being called experimental, with OpenAI limiting access for now to so-called "red teamers," security experts and researchers who will assess the tool's potential harms or risks. That follows through on promises made as part of President Joe Biden's AI executive order last year, asking developers to submit the results of safety checks on their gen AI chatbots before releasing them publicly. OpenAI said it's also looking to get feedback on Sora from some visual artists, designers and filmmakers.

How do the photorealistic videos look? Pretty realistic. I agree with The New York Times, which described the short demo videos -- "of wooly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city" -- as "eye popping."

MIT Technology Review, which also got a preview of Sora, said the "tech has pushed the envelope of what's possible with text-to-video generation." Meanwhile, The Washington Post noted Sora could exacerbate an already growing problem with video deepfakes, which have been used to "deceive voters" and scam consumers.

One X commentator summarized it this way: "Oh boy here we go what is real anymore." And OpenAI CEO Sam Altman called the news about its video generation model a "remarkable moment."

You can see the four examples of what Sora can produce on OpenAI's intro site, which notes that the tool is "able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions."

But Sora has its weaknesses, which is why OpenAI hasn't yet said whether it will actually be incorporated into its chatbots. Sora "may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right."

All of this is to remind us that tech is a tool -- and that it's up to us humans to decide how, when, where and why to use that technology. In case you didn't see it, the trailer for the new Minions movie (Despicable Me 4: Minion Intelligence) makes this point cleverly, with its sendup of gen AI deepfakes and Jon Hamm's voiceover of how "artificial intelligence is changing how we see the world, transforming the way we do business."

"With artificial intelligence," Hamm adds over the minions' laughter, "the future is in good hands."

Here are the other doings in AI worth your attention.

Twenty tech companies, including Adobe, Amazon, Anthropic, ElevenLabs, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X, agreed at a security conference in Munich that they will voluntarily adopt "reasonable precautions" to guard against AI tools being used to mislead or deceive voters ahead of elections.

"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says, according to NPR. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

But the agreement is "largely symbolic," the Associated Press reported, noting that "reasonable precautions" is a little vague.

"The companies aren't committing to ban or remove deepfakes," the AP said. "Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide 'swift and proportionate responses' when that content starts to spread."

AI has already been used to try to trick voters in the US and abroad. Days before the New Hampshire presidential primary, fraudsters sent an AI robocall that mimicked President Biden's voice, asking voters not to vote in the primary. That prompted the Federal Communications Commission this month to make AI-generated robocalls illegal. The AP said that "Just days before Slovakia's elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media."

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," Nick Clegg, president of global affairs for Meta, told the Associated Press in an interview before the summit.

Over 4 billion people are set to vote in key elections this year in more than 40 countries, including the US, The Hill reported.

If you're concerned about how deepfakes may be used to scam you or your family members -- someone calls your grandfather and asks them for money by pretending to be you -- Bloomberg reporter Rachel Metz has a good idea. She suggests it may be time for us all to create a "family password" or safe word or phrase to share with our family or personal network that we can ask for to make sure we're talking to who we think we're talking to.

"Extortion has never been easier," Metz reports. "The kind of fakery that used to take time, money and technical know-how can now be accomplished quickly and cheaply by nearly anyone."

That's where family passwords come in, since they're "simple and free," Metz said. "Pick a word that you and your family (or another trusted group) can easily remember. Then, if one of those people reaches out in a way that seems a bit odd -- say, they're suddenly asking you to deliver 5,000 gold bars to a P.O. Box in Alaska -- first ask them what the password is."

How do you pick a good password? She offers a few suggestions, including using a word you don't say frequently and that's not likely to come up in casual conversations. Also, "avoid making the password the name of a pet, as those are easily guessable."
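If you want to turn the habit into something a household app or group chat bot could check, a shared phrase can be compared in constant time with Python's standard library. This is only a sketch of the idea (the phrase below is made up, and Metz's actual advice is simply to agree on a word out of band), not a tool she recommends.

```python
# Sketch: verify a shared "family password" without leaking timing information.
# The passphrase here is an invented example; agree on your own privately.

import hmac

FAMILY_PASSPHRASE = "purple walrus tuesday"


def caller_is_verified(offered_phrase: str) -> bool:
    # compare_digest performs a constant-time comparison of the two strings.
    return hmac.compare_digest(offered_phrase.strip().lower(), FAMILY_PASSPHRASE)


print(caller_is_verified("Purple Walrus Tuesday"))   # True
print(caller_is_verified("send the gold bars now"))  # False
```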

Hiring experts have told me it's going to take years to build an AI-educated workforce, considering that gen AI tools like ChatGPT weren't released until late 2022. So it makes sense that learning platforms like Coursera, Udemy, Udacity, Khan Academy and many universities are offering online courses and certificates to upskill today's workers. Now the University of Pennsylvania's School of Engineering and Applied Science said it's the first Ivy League school to offer an undergraduate major in AI.

"The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning and even national security," Penn said in a Feb. 13 press release. "This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way."

The bachelor of science in AI offers coursework in machine learning, computing algorithms, data analytics and advanced robotics and will have students address questions about "how to align AI with our social values and how to build trustworthy AI systems," Penn professor Zachary Ives said.

"We are training students for jobs that don't yet exist in fields that may be completely new or revolutionized by the time they graduate," added Robert Ghrist, associate dean of undergraduate education in Penn Engineering.

FYI, the cost of an undergraduate education at Penn, which typically spans four years, is over $88,000 per year (including housing and food).

For those not heading to college or who haven't signed up for any of those online AI certificates, their AI upskilling may come courtesy of their current employer. The Boston Consulting Group, for its Feb. 9 report, What GenAI's Top Performers Do Differently, surveyed over 150 senior executives across 10 sectors. Generally:

Bottom line: companies are starting to look at existing job descriptions and career trajectories, and the gaps they're seeing in the workforce when they consider how gen AI will affect their businesses. They've also started offering gen AI training programs. But these efforts don't lessen the need for today's workers to get up to speed on gen AI and how it may change the way they work -- and the work they do.

In related news, software maker SAP looked at Google search data to see which states in the US were most interested in "AI jobs and AI business adoption."

Unsurprisingly, California ranked first in searches for "open AI jobs" and "machine learning jobs." Washington state came in second place, Vermont in third, Massachusetts in fourth and Maryland in fifth.

California, "home to Silicon Valley and renowned as a global tech hub, shows a significant interest in AI and related fields, with 6.3% of California's businesses saying that they currently utilize AI technologies to produce goods and services and a further 8.4% planning to implement AI in the next six months, a figure that is 85% higher than the national average," the study found.

Virginia, New York, Delaware, Colorado and New Jersey, in that order, rounded out the top 10.

Over the past few months, I've highlighted terms you should know if you want to be knowledgeable about what's happening as it relates to gen AI. So I'm going to take a step back this week and provide this vocabulary review for you, with a link to the source of the definition.

It's worth a few minutes of your time to know these seven terms.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it's kind or cruel based on its answers, even though it isn't capable of having emotions, or you may believe the AI is sentient because it's very good at mimicking human language.

Artificial general intelligence (AGI): A description of programs that are as capable as -- or even more capable than -- a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Generative artificial intelligence (gen AI): Technology that creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data and then creating original material that has similar characteristics.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs that can arise for reasons that aren't yet fully known. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze or make up facts about events that aren't in its training data. It's not fully understood why this happens but can arise from sparse data, information gaps and misclassification.

Large language model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset.

Prompt engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm. (e.g. "Give me five popular baby names.")

Temperature: In simple terms, model temperature is a parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization and translation.

The tricky part is finding the perfect model temperature for a specific task. It's kind of like Goldilocks trying to find the perfect bowl of porridge -- not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you're looking for in the output.
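To make the definition concrete, here is a small sketch of temperature-scaled sampling over next-token scores. The vocabulary and logits are invented, but the math (dividing the logits by the temperature before applying a softmax) is the standard formulation the definition above is describing.

```python
# Temperature-scaled sampling: lower temperature sharpens the distribution,
# higher temperature flattens it. Tokens and logits here are made up.

import math
import random


def softmax_with_temperature(logits, temperature):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]


vocab = ["the", "a", "purple", "quantum"]
logits = [4.0, 3.5, 1.0, 0.2]

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}:", {w: round(p, 3) for w, p in zip(vocab, probs)})

# Sampling the next token with a chosen temperature:
probs = softmax_with_temperature(logits, 0.8)
print("sampled:", random.choices(vocab, weights=probs, k=1)[0])
```

Run it and the T=0.2 distribution piles nearly all the probability onto the top token, while T=2.0 spreads it out, which is the focused-versus-creative trade-off described above.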

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

Read the original post:

AI and You: OpenAI's Sora Previews Text-to-Video Future, First Ivy League AI Degree - CNET


Generative AI’s environmental costs are soaring and mostly secret – Nature.com

Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years: that the artificial intelligence (AI) industry is heading for an energy crisis. It's an unusual admission. At the World Economic Forum's annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. "There's no way to get there without a breakthrough," he said.

I'm glad he said it. I've seen consistent downplaying and denial about the AI industry's environmental costs since I started publishing about them in 2018. Altman's admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.

So what energy breakthrough is Altman banking on? Not the design and deployment of more sustainable AI systems but nuclear fusion. He has skin in that game, too: in 2021, Altman started investing in fusion company Helion Energy in Everett, Washington.


Most experts agree that nuclear fusion won't contribute significantly to the crucial goal of decarbonizing by mid-century to combat the climate crisis. Helion's most optimistic estimate is that by 2029 it will produce enough energy to power 40,000 average US households; one assessment suggests that ChatGPT, the chatbot created by OpenAI in San Francisco, California, is already consuming the energy of 33,000 homes. It's estimated that a search driven by generative AI uses four to five times the energy of a conventional web search. Within years, large AI systems are likely to need as much energy as entire nations.

And it's not just energy. Generative AI systems need enormous amounts of fresh water to cool their processors and generate electricity. In West Des Moines, Iowa, a giant data-centre cluster serves OpenAI's most advanced model, GPT-4. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district's water. As Google and Microsoft prepared their Bard and Bing large language models, both had major spikes in water use: increases of 20% and 34%, respectively, in one year, according to the companies' environmental reports. One preprint [1] suggests that, globally, the demand for water for AI could be half that of the United Kingdom by 2027. In another [2], Facebook AI researchers called the environmental effects of the industry's pursuit of scale the "elephant in the room."

Rather than pipe-dream technologies, we need pragmatic actions to limit AI's ecological impacts now.

There's no reason this can't be done. The industry could prioritize using less energy, build more efficient models and rethink how it designs and uses data centres. As the BigScience project in France demonstrated with its BLOOM model [3], it is possible to build a model of a similar size to OpenAI's GPT-3 with a much lower carbon footprint. But that's not what's happening in the industry at large.

It remains very hard to get accurate and complete data on environmental impacts. The full planetary costs of generative AI are closely guarded corporate secrets. Figures rely on lab-based studies by researchers such as Emma Strubell [4] and Sasha Luccioni [3]; limited company reports; and data released by local governments. At present, there's little incentive for companies to change.


But at last, legislators are taking notice. On 1 February, US Democrats led by Senator Ed Markey of Massachusetts introduced the Artificial Intelligence Environmental Impacts Act of 2024. The bill directs the National Institute of Standards and Technology to collaborate with academia, industry and civil society to establish standards for assessing AI's environmental impact, and to create a voluntary reporting framework for AI developers and operators. Whether the legislation will pass remains uncertain.

Voluntary measures rarely produce a lasting culture of accountability and consistent adoption, because they rely on goodwill. Given the urgency, more needs to be done.

Truly addressing the environmental impacts of AI requires a multifaceted approach including the AI industry, researchers and legislators. In industry, sustainable practices should be imperative, and should include measuring and publicly reporting energy and water use; prioritizing the development of energy-efficient hardware, algorithms, and data centres; and using only renewable energy. Regular environmental audits by independent bodies would support transparency and adherence to standards.

Researchers could optimize neural network architectures for sustainability and collaborate with social and environmental scientists to guide technical designs towards greater ecological sustainability.

Finally, legislators should offer both carrots and sticks. At the outset, they could set benchmarks for energy and water use, incentivize the adoption of renewable energy and mandate comprehensive environmental reporting and impact assessments. The Artificial Intelligence Environmental Impacts Act is a start, but much more will be needed and the clock is ticking.

K.C. is employed by both USC Annenberg and Microsoft Research, which makes generative AI systems.

See the original post here:

Generative AI's environmental costs are soaring and mostly secret - Nature.com


Tor Books Criticized for Use of AI-Generated Art in ‘Gothikana’ Cover Design – Publishers Weekly

A number of readers are calling out Tor Books over the cover art of Gothikana by RuNyx, published by Tor's romance imprint Bramble on January 23, which incorporates AI-generated assets in its design.

On February 9, BookTok influencer @emmaskies identified two Adobe Stock images that had been used for the book's cover, both of which include the phrase "Generative AI" in their titles and are flagged on the Adobe Stock website as "generated with AI."

"We cannot allow AI-generated anything to infiltrate creative spaces because they are not just going to stop at covers," says @emmaskies in the video. She goes on to suggest that the use of such images is a slippery slope, imagining a publishing industry in the near future in which AI-generated images supplant cover artists, AI language models replace editorial staff, and AI models make acquisition judgements.

The video has since garnered more than 64,000 views. Her initial analysis of the cover, in which she alleged but had not yet confirmed the use of AI-generated images, received more than 300,000 views and 35,000 likes.

This is not the first time that Tor has attracted criticism online for using AI-generated assets in book cover designs. When Tor unveiled the cover of Christopher Paolini's sci-fi thriller Fractal Noise in November 2022, the publisher was quickly met with criticism over the use of an AI-generated asset, which had been posted to Shutterstock and created with Midjourney. The book was subsequently review-bombed on Goodreads.

"During the process of creating this cover, we licensed an image from a reputable stock house. We were not aware that the image may have been created by AI," Tor Books said in a statement posted to X on December 15. "Our in-house designer used the licensed image to create the cover, which was presented to Christopher for approval." Tor decided to move ahead with the cover "due to production constraints."

In response to the statement, Eisner Award-winning illustrator Trung Le Nguyen commented, "I might not be able to judge a book by its cover, but I sure as hell will judge its publisher."

Tor is not the only publisher to catch heat for using AI-generated art on book covers. Last spring, The Verge reported on the controversy over the U.K. paperback edition of Sarah J. Maas's House of Earth and Blood, published by Bloomsbury, which credited Adobe Stock for the illustration of a wolf on the book's cover; the illustration had been marked as AI-generated on Adobe's website. Bloomsbury later claimed that its in-house design team was "unaware" that the licensed image had been created by AI.

Gothikana was originally self-published by author RuNyx in June 2021, and was reissued by Bramble in a hardcover edition featuring sprayed edges, a foil case stamp, and detailed endpapers. Bramble did not respond to PW's request for comment by press time.

See the original post here:

Tor Books Criticized for Use of AI-Generated Art in 'Gothikana' Cover Design - Publishers Weekly


Google launches Gemini Business AI, adds $20 to the $6 Workspace bill – Ars Technica


Google went ahead with plans to launch Gemini for Workspace today. The big news is the pricing information, and you can see the Workspace pricing page is new, with every plan offering a "Gemini add-on." Google's old AI-for-Business plan, "Duet AI for Google Workspace," is dead, though it never really launched anyway.

Google has a blog post explaining the changes. Google Workspace starts at $6 per user per month for the "Starter" package, and the AI "Add-on," as Google is calling it, is an extra $20 monthly cost per user (all of these prices require an annual commitment). That is a massive price increase over the normal Workspace bill, but AI processing is expensive. Google says this business package will get you "Help me write in Docs and Gmail, Enhanced Smart Fill in Sheets and image generation in Slides." It also includes the "1.0 Ultra" model for the Gemini chatbot; there's a full feature list here. This $20 plan is subject to a usage limit for Gemini AI features of "1,000 times per month."
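As a quick back-of-the-envelope check on what the add-on does to a Workspace bill, the arithmetic looks like this. The 12-month term reflects the annual commitment mentioned above, the per-seat prices are the ones quoted in the article, and taxes or promotional pricing are ignored.

```python
# Rough cost math for the Gemini add-on on the Starter plan, using the prices
# quoted in the article (annual commitment, billed per user per month).

STARTER_PER_USER_MONTH = 6        # USD, Workspace Starter
GEMINI_ADDON_PER_USER_MONTH = 20  # USD, Gemini add-on


def annual_cost(users: int) -> int:
    monthly = users * (STARTER_PER_USER_MONTH + GEMINI_ADDON_PER_USER_MONTH)
    return monthly * 12


print(annual_cost(1))    # 312  -> $312 per user per year
print(annual_cost(25))   # 7800 -> a 25-person team pays $7,800 per year
```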


Google's second plan is "Gemini Enterprise," which doesn't come with any usage limits, but it's also only available through a "contact us" link and not a normal checkout procedure. Enterprise is $30 per user per month, and it "includes additional capabilities for AI-powered meetings, where Gemini can translate closed captions in more than 100 language pairs, and soon even take meeting notes."

More here:

Google launches Gemini Business AI, adds $20 to the $6 Workspace bill - Ars Technica


Can AI help us forecast extreme weather? – Vox.com

We've learned how to predict weather over the past century by understanding the science that governs Earth's atmosphere and harnessing enough computing power to generate global forecasts. But in just the past three years, AI models from companies like Google, Huawei, and Nvidia that use historical weather data have been releasing forecasts rivaling those created through traditional forecasting methods.

This video explains the promise and challenges of these new models built on artificial intelligence rather than numerical forecasting, particularly the ability to foresee extreme weather.


You can find this video and all of Vox's videos on YouTube.

This video is sponsored by Microsoft Copilot for Microsoft 365. Microsoft has no editorial influence on our videos, but their support makes videos like these possible.


Read more:

Can AI help us forecast extreme weather? - Vox.com


What is AI governance? – Cointelegraph

The landscape and importance of AI governance

AI governance encompasses the rules, principles and standards that ensure AI technologies are developed and used responsibly.

AI governance is a comprehensive term encompassing the definition, principles, guidelines and policies designed to steer the ethical creation and utilization of artificial intelligence (AI) technologies. This governance framework is crucial for addressing a wide array of concerns and challenges associated with AI, such as ethical decision-making, data privacy, bias in algorithms, and the broader impact of AI on society.

The concept of AI governance extends beyond mere technical aspects to include legal, social and ethical dimensions. It serves as a foundational structure for organizations and governments to ensure that AI systems are developed and deployed in beneficial ways that do not cause unintentional harm.

In essence, AI governance forms the backbone of responsible AI development and usage, providing a set of standards and norms that guide various stakeholders, including AI developers, policymakers and end-users. By establishing clear guidelines and ethical principles, AI governance aims to harmonize the rapid advancements in AI technology with the societal and ethical values of human communities.

AI governance adapts to organizational needs without fixed levels, employing frameworks like NIST and OECD for guidance.

AI governance doesn't follow universally standardized levels, as seen in fields like cybersecurity. Instead, it utilizes structured approaches and frameworks from various entities, allowing organizations to tailor these to their specific requirements.

Frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the Organization for Economic Co-operation and Development (OECD) principles on artificial intelligence, and the European Commission's Ethics Guidelines for Trustworthy AI, are among the most utilized. They cover many topics, including transparency, accountability, fairness, privacy, security and safety, providing a solid foundation for governance practices.

The extent of governance adoption varies with the organization's size, the complexity of the AI systems it employs, and the regulatory landscape it operates within. Three main approaches to AI governance are:

Informal governance: The most basic form relies on an organization's core values and principles, with some informal processes in place, such as ethical review boards, but lacking a formal governance structure.

Ad hoc governance: A more structured approach than informal governance, this involves creating specific policies and procedures in response to particular challenges. However, it may not be comprehensive or systematic.

Formal governance: The most comprehensive approach entails the development of an extensive AI governance framework that reflects the organization's values, aligns with legal requirements and includes detailed risk assessment and ethical oversight processes.

Illustrating AI governance through diverse examples like GDPR, the OECD AI principles and corporate ethics boards showcases the multifaceted approach to responsible AI use.

AI governance manifests through various policies, frameworks and practices aimed at ethically deploying AI technologies through organizations and governments. These instances highlight the application of AI governance across different scenarios:

The General Data Protection Regulation (GDPR) is a pivotal example of AI governance in safeguarding personal data and privacy. Although the GDPR isn't solely AI-focused, its regulations significantly impact AI applications, particularly those processing personal data within the European Union, emphasizing the need for transparency and data protection.

The OECD AI principles, endorsed by over 40 countries, underscore the commitment to trustworthy AI. These principles advocate for AI systems to be transparent, fair and accountable, guiding international efforts toward responsible AI development and usage.

Corporate AI Ethics Boards represent an organizational approach to AI governance. Numerous corporations have instituted ethics boards to supervise AI projects, ensuring they conform to ethical norms and societal expectations. For instance, IBM's AI Ethics Council reviews AI offerings to ensure they comply with the company's AI ethics, involving a diverse team from various disciplines to provide comprehensive oversight.

Stakeholder engagement is essential for developing inclusive, effective AI governance frameworks that reflect a broad spectrum of perspectives.

A wide range of stakeholders, including governmental entities, international organizations, business associations and civil society organizations, share responsibility for AI governance. Because different regions and nations have different legal, cultural and political contexts, their oversight structures can also differ significantly.

The complexity of AI governance requires active participation from all sectors of society, including government, industry, academia and civil society. Engaging a diverse range of stakeholders ensures that multiple perspectives are considered when developing AI governance frameworks, leading to more robust and inclusive policies.

This engagement also fosters a sense of shared responsibility for the ethical development and use of AI technologies. By involving stakeholders in the governance process, policymakers can leverage a wide range of expertise and insights, ensuring that AI governance frameworks are well-informed, adaptable and capable of addressing the multifaceted challenges and opportunities presented by AI.

For instance, the exponential growth of data collection and processing raises significant privacy concerns, necessitating stringent governance frameworks to protect individuals' personal information. This involves compliance with global data protection regulations like GDPR and active participation by stakeholders in implementing advanced data security technologies to prevent unauthorized access and data breaches.

The future of AI governance will be shaped by advancements in technology, evolving societal values and the need for international collaboration.

As AI technologies evolve, so will the frameworks governing them. The future of AI governance is likely to see a greater emphasis on sustainable and human-centered AI practices.

Sustainable AI focuses on developing environmentally friendly and economically viable technologies over the long term. Human-centered AI prioritizes systems that enhance human capabilities and well-being, ensuring that AI serves as a tool for augmenting human potential rather than replacing it.

Moreover, the global nature of AI technologies necessitates international collaboration in AI governance. This involves harmonizing regulatory frameworks across borders, fostering global standards for AI ethics, and ensuring that AI technologies can be safely deployed across different cultural and regulatory environments. Global cooperation is key to addressing challenges, such as cross-border data flow and ensuring that AI benefits are shared equitably worldwide.

Read more here:

What is AI governance? - Cointelegraph

Posted in Ai

Scale AI to set the Pentagon’s path for testing and evaluating large language models – DefenseScoop

The Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) tapped Scale AI to produce a trustworthy means for testing and evaluating large language models that can support and potentially disrupt military planning and decision-making.

According to a statement the San Francisco-based company shared exclusively with DefenseScoop, the outcomes of this new one-year contract will supply the CDAO with a framework to deploy AI safely by measuring model performance, offering real-time feedback for warfighters, and creating specialized public sector evaluation sets to test AI models for military support applications, such as organizing the findings from after action reports.

Large language models and the overarching field of generative AI include emerging technologies that can generate (convincing but not always accurate) text, software code, images and other media, based on prompts from humans.

This rapidly evolving realm holds a lot of promise for the Department of Defense, but also poses unknown and serious potential challenges. Last year, Pentagon leadership launched Task Force Lima within the CDAO's Algorithmic Warfare Directorate to accelerate its components' grasp, assessment and deployment of generative artificial intelligence.

The department has long leaned on test-and-evaluation (T&E) processes to assess and ensure its systems, platforms and technologies perform in a safe and reliable manner before they are fully fielded. But AI safety standards and policies have not yet been universally set, and the complexities and uncertainties associated with large language models make T&E even more complicated when it comes to generative AI.

Broadly, T&E enables experts to determine the baseline performance of a specific model.

For instance, to test and evaluate a computer vision algorithm that differentiates between images of dogs and cats and things that are not dogs or cats, an official might first train it with millions of different pictures of those types of animals, as well as objects that aren't dogs or cats. In doing so, the expert will also hold back a diverse subset of data that can then be presented to the algorithm down the line.

They can then run the model against that held-out evaluation set, compare its predictions with the ground truth, and determine failure rates: the cases where the model cannot tell whether something is or is not one of the classes it is trying to identify.
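
To make the holdout idea concrete, here is a minimal sketch of that evaluation step in Python. The model, data and class names are illustrative assumptions, not anything from the CDAO contract; only the pattern of scoring held-back ground truth and reporting per-class failure rates comes from the description above.

```python
# Minimal sketch of holdout evaluation for a dog/cat/other classifier.
# `model`, the images and the labels are hypothetical placeholders.
from collections import Counter

def evaluate_holdout(model, holdout_images, holdout_labels):
    """Compare predictions against ground-truth labels; return failure rate per class."""
    errors = Counter()
    totals = Counter()
    for image, truth in zip(holdout_images, holdout_labels):
        totals[truth] += 1
        if model.predict(image) != truth:  # assumed predict() interface
            errors[truth] += 1
    return {label: errors[label] / totals[label] for label in totals}

# Example usage, assuming a trained `model` and a held-back test split:
# failure_rates = evaluate_holdout(model, test_images, test_labels)
# print(failure_rates)  # e.g. {"dog": 0.04, "cat": 0.07, "other": 0.12}
```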

Experts at Scale AI will adopt a similar approach for T&E with large language models, but because they are generative in nature and the English language can be hard to evaluate, there isn't that same level of ground truth for these complex systems. For example, if prompted to supply five different responses, an LLM might be generally factually accurate in all five, yet contrasting sentence structures could change the meanings of each output.

So part of the company's effort to develop the framework, methods and technology the CDAO can use to test and evaluate large language models will involve creating holdout datasets: DOD insiders will write prompt-response pairs, adjudicate them through layers of review, and ensure that each response is as good as would be expected from a human in the military.

The entire process will be iterative in nature.

Once datasets that are germane to the DOD for world knowledge, truthfulness, and other topics are made and refined, the experts can then evaluate existing large language models against them.

Eventually, once they have these holdout datasets, experts will be able to run evaluations and establish model cards, short documents that describe the contexts in which various machine learning models are best used and supply information for measuring their performance.
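
As a rough illustration of where those holdout datasets lead, the sketch below scores a hypothetical model against adjudicated prompt-response pairs and records the result in a minimal model card. The scoring function, field names and intended-use text are assumptions for illustration; the article does not describe Scale AI's actual methods, which rely on expert adjudication rather than simple string matching.

```python
# Hypothetical sketch: evaluate an LLM against a holdout set of adjudicated
# prompt-response pairs and summarize the result in a simple model card.
def judge(candidate: str, reference: str) -> float:
    """Placeholder scorer. Real T&E would use expert review or semantic
    comparison, since exact match is a poor proxy for generative text."""
    return 1.0 if candidate.strip().lower() == reference.strip().lower() else 0.0

def build_model_card(model_name, generate, holdout_pairs):
    """`generate` is any callable that maps a prompt string to a response string."""
    scores = [judge(generate(prompt), reference) for prompt, reference in holdout_pairs]
    return {
        "model": model_name,
        "evaluated_on": len(holdout_pairs),
        "mean_score": sum(scores) / len(scores),
        "intended_use": "summarizing after-action reports",  # example context only
    }
```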

Officials plan to automate as much of this process as possible, so that as new models come in, there can be some baseline understanding of how they will perform, where they will perform best, and where they will probably start to fail.

Further along, the ultimate intent is for models to essentially send signals to the CDAO officials who engage with them if they start to drift from the domains they have been tested against.

"This work will enable the DOD to mature its T&E policies to address generative AI by measuring and assessing quantitative data via benchmarking and assessing qualitative feedback from users. The evaluation metrics will help identify generative AI models that are ready to support military applications with accurate and relevant results using DoD terminology and knowledge bases. The rigorous T&E process aims to enhance the robustness and resilience of AI systems in classified environments, enabling the adoption of LLM technology in secure environments," Scale AI's statement reads.

Beyond the CDAO, the company has also partnered with Meta, Microsoft, the U.S. Army, the Defense Innovation Unit, OpenAI, General Motors, Toyota Research Institute, Nvidia, and others.

"Testing and evaluating generative AI will help the DoD understand the strengths and limitations of the technology, so it can be deployed responsibly. Scale is honored to partner with the DoD on this framework," Alexandr Wang, Scale AI's founder and CEO, said in the statement.

Continue reading here:

Scale AI to set the Pentagon's path for testing and evaluating large language models - DefenseScoop

Posted in Ai

Nvidia’s Q4 Earnings Blow Past Expectations as Company Benefits From AI Boom – Investopedia

Nvidia Corp. (NVDA) posted revenue and earnings for its fiscal fourth quarter that blew past market expectations, as the company continues to benefit from booming demand for equipment and services to support artificial intelligence (AI).

Shares of the company, which had fallen for four consecutive sessions ahead of Wednesday's eagerly anticipated earnings release, gained 9.1% to $735.94 in after-hours trading.

Nvidia said that revenue jumped to $22.10 billion in the quarter ending Jan. 28, compared with $6.05 billion a year earlier. Net income increased to $12.29 billion from $1.41 billion, while diluted earnings per share came in at $4.93, up from 57 cents a year earlier. Each of those numbers handily topped analysts' expectations.

Revenue for Nvidia's closely watched data-center business, which offers cloud and AI services, jumped to $18.40 billion, a five-fold increase from the year-ago period and also well above expectations.

"Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries and nations," Nvidia CEO Jensen Huang said in a press release, noting that the data center business has "increasingly diverse drivers."

"Vertical industries, led by auto, financial services and healthcare, are now at a multibillion-dollar level," Huang added.

Nvidia's gross margin for the fourth quarter was 76%, up from 63.3% in the year-ago period. Nvidia's chief financial officer, Colette Kress, said the improvement was a function of the growth in the data center business, which was primarily driven by Nvidia's Hopper GPU computing platform.

Looking ahead, Nvidia says that fiscal first-quarter revenue is expected to come in at $24 billion, plus or minus 2%, which is above the consensus view from analysts. The company expects gross margin in the current quarter to rise slightly from the fourth-quarter figure.

Optimism around artificial intelligence helped push Nvidia's stock, which has more than tripled in the past year, to an all-time high last week. In the days leading up to the earnings release, analysts had raised their expectations even as investors expressed some concerns that the quarterly report might fall short of expectations.

The strong earnings report not only lifted Nvidia in extended trading but gave a boost to other chipmakers that have been riding the AI wave. Shares of Advanced Micro Devices (AMD), ARM Holdings (ARM), Broadcom (AVGO), Taiwan Semiconductor (TSM) and Super Micro Computer (SMCI) were all moving higher late Wednesday.

UPDATE: This article has been updated after initial publication to add comments from company executives, additional details from the earnings report and updated share prices.

Go here to read the rest:

Nvidia's Q4 Earnings Blow Past Expectations as Company Benefits From AI Boom - Investopedia

Posted in Ai

HOUSE LAUNCHES BIPARTISAN TASK FORCE ON ARTIFICIAL INTELLIGENCE – Congressman Ted Lieu

WASHINGTON – Speaker Mike Johnson and Democratic Leader Hakeem Jeffries announced the establishment of a bipartisan Task Force on Artificial Intelligence (AI) to explore how Congress can ensure America continues to lead the world in AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.

Speaker Johnson and Leader Jeffries have each appointed twelve members to the Task Force that represent key committees of jurisdiction, and it will be jointly led by Chair Jay Obernolte (CA-23) and Co-Chair Ted Lieu (CA-36). The Task Force will seek to produce a comprehensive report that will include guiding principles, forward-looking recommendations and bipartisan policy proposals developed in consultation with committees of jurisdiction.

"Because advancements in artificial intelligence have the potential to rapidly transform our economy and our society, it is important for Congress to work in a bipartisan manner to understand and plan for both the promises and the complexities of this transformative technology," said Speaker Mike Johnson. "I am happy to announce with Leader Jeffries this new Bipartisan Task Force on Artificial Intelligence to ensure America continues leading in this strategic arena."

Led by Rep. Jay Obernolte (R-Ca.) and Rep. Ted Lieu (D-Ca.), the task force will bring together a bipartisan group of Members who have AI expertise and represent the relevant committees of jurisdiction. As we look to the future, Congress must continue to encourage innovation and maintain our country's competitive edge, protect our national security, and carefully consider what guardrails may be needed to ensure the development of safe and trustworthy technology.

"Congress has a responsibility to facilitate the promising breakthroughs that artificial intelligence can bring to fruition and ensure that everyday Americans benefit from these advancements in an equitable manner," said Democratic Leader Hakeem Jeffries. "That is why I am pleased to join Speaker Johnson in announcing the new Bipartisan Task Force on Artificial Intelligence, led by Rep. Ted Lieu and Rep. Jay Obernolte."

The rise of artificial intelligence also presents a unique set of challenges and certain guardrails must be put in place to protect the American people. Congress needs to work in a bipartisan way to ensure that America continues to lead in this emerging space, while also preventing bad actors from exploiting this evolving technology. The Members appointed to this Task Force bring a wide range of experience and expertise across the committees of jurisdiction and I look forward to working with them to tackle these issues in a bipartisan way.

"It is an honor to be entrusted by Speaker Johnson to serve as Chairman of the House Task Force on Artificial Intelligence," said Chair Jay Obernolte (CA-23). "As new innovations in AI continue to emerge, Congress and our partners in federal government must keep up. House Republicans and Democrats will work together to create a comprehensive report detailing the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI."

The United States has led the world in the development of advanced AI, and we must work to ensure that AI realizes its tremendous potential to improve the lives of people across our country. I look forward to working with Co-Chair Ted Lieu and the rest of the Task Force on this critical bipartisan effort.

"Thank you to Leader Jeffries and Speaker Johnson for establishing this bipartisan House Task Force on Artificial Intelligence. AI has the capability of changing our lives as we know it. The question is how to ensure AI benefits society instead of harming us. As a recovering Computer Science major, I know this will not be an easy or quick or one-time task, but I believe Congress has an essential role to play in the future of AI. I have been heartened to see so many Members of Congress of all political persuasions agree," said Co-Chair Ted Lieu (CA-36).

I am honored to join Congressman Jay Obernolte in leading this Task Force on AI, and honored to work with the bipartisan Members on the Task Force. I look forward to engaging with Members of both the Democratic Caucus and Republican Conference, as well as the Senate, to find meaningful, bipartisan solutions with regard to AI.

Membership

Rep. Ted Lieu (CA-36), Co-Chair; Rep. Anna Eshoo (CA-16); Rep. Yvette Clarke (NY-09); Rep. Bill Foster (IL-11); Rep. Suzanne Bonamici (OR-01); Rep. Ami Bera (CA-06); Rep. Don Beyer (VA-08); Rep. Alexandria Ocasio-Cortez (NY-14); Rep. Haley Stevens (MI-11); Rep. Sara Jacobs (CA-51); Rep. Valerie Foushee (NC-04); Rep. Brittany Pettersen (CO-07)

Rep. Jay Obernolte (CA-23), Chair; Rep. Darrell Issa (CA-48); Rep. French Hill (AR-02); Rep. Michael Cloud (TX-27); Rep. Neal Dunn (FL-02); Rep. Ben Cline (VA-06); Rep. Kat Cammack (FL-03); Rep. Scott Franklin (FL-18); Rep. Michelle Steel (CA-45); Rep. Eric Burlison (MO-07); Rep. Laurel Lee (FL-15); Rep. Rich McCormick (GA-06)

###

Go here to see the original:

HOUSE LAUNCHES BIPARTISAN TASK FORCE ON ARTIFICIAL INTELLIGENCE - Congressman Ted Lieu

Posted in Ai

One month with Microsoft’s AI vision of the future: Copilot Pro – The Verge

Microsoft's Copilot Pro launched last month as a $20 monthly subscription that provides access to AI-powered features inside some Office apps, alongside priority access to the latest OpenAI models and improved image generation.

I've been testing Copilot Pro over the past month to see if it's worth the $20 subscription for my daily needs and just how good or bad the AI image and text generation is across Office apps like Word, Excel, and PowerPoint. Some of the Copilot Pro features are a little disappointing right now, whereas others are truly useful improvements that I'm not sure I want to live without.

Let's dig into everything you get with Copilot Pro right now.

One of the main draws of subscribing to Copilot Pro is an improved version of Designer, Microsoft's image creation tool. Designer uses OpenAI's DALL-E 3 model to generate content, and the paid Copilot Pro version creates widescreen images with far more detail than the free version.

I've been using Designer to experiment with images, and I've found it particularly impressive when you feed it as much detail as possible. Asking Designer for an image of a dachshund sitting by a window staring at a slice of bacon generates some good examples, but you can get Designer to do much more with some additional prompting. Adding in more descriptive language to generate "a hyper-real painting with natural lighting, medium shot, and shallow depth of field" will greatly improve image results.

As you can see in the two examples below, Designer gets the natural lighting correct, with some depth of field around the bacon. Unfortunately, there are multiple slices of bacon here instead of just one, and they're giant pieces of bacon.

Like most things involving AI, the Designer feature isn't perfect. I generated another separate image of a dog staring at bacon, and a giant piece of bacon was randomly inserted. In fact, I'd say most times only one or two of the four images that are produced are usable. DALL-E 3 still struggles with text, too, particularly if you ask Designer to add labels or signs that have text written on them.

It did a good job with an illustrated image of a UPS delivery man from 1910 in the style of early Japanese cartoons, though, adding the UPS logo in, even if it's a little wonky. Copilot Pro lets you generate 100 images per day, and it does so much faster than the free version.
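
Designer itself is driven through Copilot's chat interface rather than code, but because it is built on DALL-E 3, the effect of more descriptive prompting can be sketched against OpenAI's own image API. This is an assumption-laden illustration of the prompting technique, not how Designer works internally; the prompt text, size and quality settings are examples.

```python
# Sketch of detailed prompting against DALL-E 3 via the OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt=(
        "A dachshund sitting by a window staring at a single slice of bacon, "
        "hyper-real painting, natural lighting, medium shot, shallow depth of field"
    ),
    size="1792x1024",  # widescreen, similar to what the article describes
    quality="hd",
    n=1,
)
print(result.data[0].url)  # link to the generated image
```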

Copilot Pro isn't all about image generation, though. This subscription unlocks the AI capabilities inside Office apps. Inside Word, you can use Copilot to generate text, which can be helpful for getting an outline of a document started or refining paragraphs.

If you have numerical data, you can also get Copilot to visualize this data as a graph or table, which is particularly useful for making text-heavy documents a little easier to read. If you highlight text, a little Copilot logo appears to nudge you into selecting it to rewrite that text or visualize it. If you select an entire paragraph, Copilot will try to rewrite it with different options you can cycle through and pick.

Like the image generation, the paragraph rewriting can be a little hit-and-miss, introducing different meanings into sentences by swapping out words. Overall, I didn't find that it improved my writing. For someone who doesn't write regularly, it might be a lot more useful.

Copilot in Outlook has been super useful to me personally. I use it every day to check summaries of emails, which helpfully appear at the top of emails. This might even tempt me to buy Copilot Pro just for this feature because it saves me so much time when I'm planning a project with multiple people.

It's also really helpful when you have a long-running email thread to just get a quick summary of all the key information. You can also use Copilot in Outlook to generate emails or craft replies. Much like Word, there's a rewrite tool here that lets you write a draft email that's then analyzed to produce suggestions for improving the tone or clarity of an email.

Copilot in PowerPoint is equally useful if you're not used to creating presentations. You can ask it to generate slides in a particular style, and you'll get an entire deck back within seconds. Designer is part of this feature, so you can dig into each individual slide and modify the images or text.

As someone who hates creating presentations, this is something I will absolutely use in the future. It certainly beats the PowerPoint templates you can find online. I did run into some PowerPoint slide generation issues, though, particularly where Copilot would sit there saying, "Still working on it," and not finish generating the slides.

Copilot in Excel seems to be the most limited part of the Copilot Pro experience right now. You need your data neatly arranged in a table. Otherwise, Copilot will want to convert it. Once you have data that works with Copilot, you can create visualizations, use data insights to create pivot tables, or even get formula suggestions. Copilot for Excel is still in preview, so I'd expect we'll see even more functionality here over time.

The final example of Copilot inside Office apps is OneNote. Much like Word, you can draft notes or plans here and easily rewrite text. Copilot also offers summaries of your notes, which can be particularly amusing if you attempt to summarize shorthand notes or incomplete notes that only make sense to your brain.

Microsoft is also rolling out a number of GPTs for fitness, travel, and cooking. These are essentially individual assistants inside Copilot that can help you find recipes, plan out a vacation itinerary, or create a personalized workout plan. Copilot Pro subscribers will soon be able to build their own custom GPTs around specific topics, too.

Overall, I think Copilot Pro is a good start for Microsoft's consumer AI efforts, but I'm not sure I'd pay $20 a month just yet. The image generation improvements are solid here and might be worth $20 a month for some.

Email summaries in Outlook tempt me into the subscription, but the text generation features aren't really all that unique in the Office apps. I feel like you can get just as good results using the free version of Copilot or even ChatGPT, but you'll have to do the manual (and less expensive) option of copying and pasting the results into a document.

The consumer Copilot Pro isn't as fully featured as the commercial version just yet, so I'd expect we'll see a lot of improvements over the coming months. Microsoft is showing no sign of slowing down with its AI efforts, and the company is set to detail more of its AI plans at Build in May.

See more here:

One month with Microsoft's AI vision of the future: Copilot Pro - The Verge

Posted in Ai

Satya Nadella says the explicit Taylor Swift AI fakes are ‘alarming and terrible’ – The Verge

Microsoft CEO Satya Nadella has responded to a controversy over sexually explicit AI-made fake images of Taylor Swift. In an interview with NBC Nightly News that will air next Tuesday, Nadella calls the proliferation of nonconsensual simulated nudes "alarming and terrible," telling interviewer Lester Holt that "I think it behooves us to move fast on this."

In a transcript distributed by NBC ahead of the January 30th show, Holt asks Nadella to react to "the internet exploding with fake, and I emphasize fake, sexually explicit images of Taylor Swift." Nadella's response manages to crack open several cans of tech policy worms while saying remarkably little about them, which isn't surprising when there's no surefire fix in sight.

"I would say two things: One, is again I go back to what I think is our responsibility, which is all of the guardrails that we need to place around the technology so that there's more safe content that's being produced. And there's a lot to be done and a lot being done there. But it is about global, societal – you know, I'll say, convergence on certain norms. And we can do – especially when you have law and law enforcement and tech platforms that can come together – I think we can govern a lot more than we think we give ourselves credit for."

Microsoft might have a connection to the faked Swift pictures. A 404 Media report indicates they came from a Telegram-based nonconsensual porn-making community that recommends using the Microsoft Designer image generator. Designer theoretically refuses to produce images of famous people, but AI generators are easy to bamboozle, and 404 found you could break its rules with small tweaks to prompts. While that doesn't prove Designer was used for the Swift pictures, it's the kind of technical shortcoming Microsoft can tackle.

But AI tools have massively simplified the process of creating fake nudes of real people, causing turmoil for women who have far less power and celebrity than Swift. And controlling their production isn't as simple as making huge companies bolster their guardrails. Even if major Big Tech platforms like Microsoft's are locked down, people can retrain open tools like Stable Diffusion to produce NSFW pictures despite attempts to make that harder. Far fewer users might access these generators, but the Swift incident demonstrates how widely a small community's work can spread.

There are other stopgap options like social networks limiting the reach of nonconsensual imagery or, apparently, Swiftie-imposed vigilante justice against people who spread them. (Does that count as "convergence on certain norms"?) For now, though, Nadella's only clear plan is putting Microsoft's own AI house in order.

Go here to read the rest:

Satya Nadella says the explicit Taylor Swift AI fakes are 'alarming and terrible' - The Verge

Posted in Ai

Researchers Say the Deepfake Biden Robocall Was Likely Made With Tools From AI Startup ElevenLabs – WIRED

This is not the first time that researchers have suspected ElevenLabs tools were used for political propaganda. Last September, NewsGuard, a company that tracks online misinformation, claimed that TikTok accounts sharing conspiracy theories using AI-generated voices, including a clone of Barack Obama's voice, used ElevenLabs technology. "Over 99 percent of users on our platform are creating interesting, innovative, useful content," ElevenLabs said in an emailed statement to The New York Times at the time, "but we recognize that there are instances of misuse, and we've been continually developing and releasing safeguards to curb them."

If the Pindrop and Berkeley analyses are correct, the deepfake Biden robocall was made with technology from one of the tech industry's most prominent and well-funded AI voice startups. As Farid notes, ElevenLabs is already seen as providing some of the highest-quality synthetic voice offerings on the market.

According to the company's CEO in a recent Bloomberg article, ElevenLabs is valued by investors at more than $1.1 billion. In addition to Andreessen Horowitz, its investors include prominent individuals like Nat Friedman, former CEO of GitHub, and Mustafa Suleyman, cofounder of AI lab DeepMind, now part of Alphabet. Investors also include firms like Sequoia Capital and SV Angel.

With its lavish funding, ElevenLabs is arguably better positioned than other AI startups to pour resources into creating effective safeguards against bad actors, a task made all the more urgent by the upcoming presidential elections in the United States. "Having the right safeguards is important, because otherwise anyone can create any likeness of any person," Balasubramaniyan says. "As we're approaching an election cycle, it's just going to get crazy."

A Discord server for ElevenLabs enthusiasts features people discussing how they intend to clone Biden's voice, and sharing links to videos and social media posts highlighting deepfaked content featuring Biden or AI-generated dupes of Donald Trump's and Barack Obama's voices.

Although ElevenLabs is a market leader in AI voice cloning, in just a few years the technology has become widely available for companies and individuals to experiment with. That has created new business opportunities, such as creating audiobooks more cheaply, but it also increases the potential for malicious use of the technology. "We have a real problem," says Sam Gregory, program director at the nonprofit Witness, which helps people use technology to promote human rights. "When you have these very broadly available tools, it's quite hard to police."

While the Pindrop and Berkeley analyses suggest it could be possible to unmask the source of AI-generated robocalls, the incident also underlines how underprepared authorities, the tech industry, and the public are as the 2024 election season ramps up. It is difficult for people without specialist expertise to confirm the provenance of audio clips or check whether they are AI-generated. And more sophisticated analyses might not be completed quickly enough to offset the damage caused by AI-generated propaganda.

"Journalists and election officials and others don't have access to reliable tools to be doing this quickly and rapidly when potentially election-altering audio gets leaked or shared," Gregory says. "If this had been something that was relevant on election day, that would be too late."

Updated 1-27-2024, 3:15 pm EST: This article was updated to clarify the attribution of the statement from ElevenLabs. Updated 1-26-2024, 7:20 pm EST: This article was updated with comment from ElevenLabs.

Follow this link:

Researchers Say the Deepfake Biden Robocall Was Likely Made With Tools From AI Startup ElevenLabs - WIRED

Posted in Ai

Samsung’s new phones replace Google AI with Baidu in China – The Verge

The list of AI translation, summarization, and text formatting features on the Chinese version of the Galaxy S24 will be familiar to anyone who kept up with its US-based launch. There's also real-time call translation like we saw earlier this month, and the phones are even getting a version of Google's Circle to Search feature.

"Now featuring Ernie's understanding and generation capabilities, the upgraded Samsung Note Assistant can translate content and also summarize lengthy content into clear, intelligently organized formats at the click of a button," Samsung Electronics China and Baidu said in a statement published by CNBC.

Samsung's hold on China has waned considerably in the last decade. A report this week from IDC didn't even place it in the top five brands for mobile shipments in 2023. In 2013, the company was the biggest smartphone manufacturer in the country, with a market share of around 20 percent, but its share had fallen to just 1 percent by 2018, where it's hovered ever since, Reuters reports, adding that partnerships with local content firms are part of its attempt to rebuild its Chinese business.

The rest is here:

Samsung's new phones replace Google AI with Baidu in China - The Verge

Posted in Ai

2024 health tech budgets to be driven by AI tools, automation – STAT

Hospitals and clinics are expecting a slightly better 2024 compared to last year thanks to a return to mostly in-person care, patients resuming preventive visits and the gradual easing of labor costs and shortages. Still, the evaporation of pandemic-related emergency funding will deal a blow to resource-strained health systems, and leaders say they'll ramp up tech investments, including in artificial intelligence-based tools.

Health care providers' financial performance isn't uniform and varies widely by setting, like rural or urban, as well as size and range of services offered. But they already showed signs of gradual improvement in 2023, leaving more funds available for technology investments, analysts told STAT.

HCA Healthcare, a for-profit health system spanning more than 180 hospitals and 2,000 sites across the country, brought in nearly $48 billion in the first three quarters of 2023, compared to about $45 billion in that same period in 2022. Highmark Health, which operates both a health insurance business and a 14-hospital network in Pennsylvania and New York, took in about $20 billion in revenue in that same period, up 4% from 2022.

Continue reading here:

2024 health tech budgets to be driven by AI tools, automation - STAT

Posted in Ai

AI and satellite data helped uncover the ocean’s ‘dark vessels’ – Popular Science

Researchers can now access artificial intelligence analysis of global satellite imagery archives for an unprecedented look at humanity's impact and relationship to our oceans. Led by Global Fishing Watch, a Google-backed nonprofit focused on monitoring maritime industries, the open source project is detailed in a study published January 3 in Nature. It showcases never-before-mapped industrial effects on aquatic ecosystems thanks to recent advancements in machine learning technology.

The new research shines a light on "dark fleets," a term often referring to the large segment of maritime vessels that do not broadcast their locations. According to Global Fishing Watch's Wednesday announcement, as much as 75 percent of all industrial fishing vessels are hidden from public view.

As The Verge explains, maritime watchdogs have long relied on the Automatic Identification System (AIS) to track vessels' radio activity across the globe, all the while knowing the tool was far from perfect. AIS requirements differ between countries and vessels, and it's easy to simply turn off a ship's transponder when a crew wants to stay off the grid. Hence the (previously murky) realm of dark fleets.

"On land, we have detailed maps of almost every road and building on the planet. In contrast, growth in our ocean has been largely hidden from public view," David Kroodsma, the nonprofit's director of research and innovation, said in an official statement on January 3. "This study helps eliminate the blind spots and shed light on the breadth and intensity of human activity at sea."

To fill this data void, researchers first collected 2 million gigabytes of global imaging data taken by the European Space Agency's Sentinel-1 satellite constellation between 2017 and 2021. Unlike AIS, the ESA satellite array's sensitive radar technology can detect surface activity or movement regardless of cloud coverage or time of day.

From there, the team combined this information with GPS data to highlight otherwise undetected or overlooked ships. A machine learning program then analyzed the massive information sets to pinpoint previously undocumented fishing vessels.
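
One way to picture that matching step: compare each radar detection against nearby AIS broadcasts and flag detections with no match as dark. The sketch below is a simplified assumption of how such a join might look; the distance and time thresholds, record fields and matching logic are illustrative, not Global Fishing Watch's actual pipeline, which also relies on machine learning over the imagery itself.

```python
# Illustrative sketch: flag radar-detected vessels with no nearby AIS broadcast.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def flag_dark_vessels(sar_detections, ais_positions, max_km=2.0, max_minutes=20):
    """Each record is a dict with "lat", "lon" and a datetime "timestamp" (assumed schema)."""
    dark = []
    for det in sar_detections:
        matched = any(
            km_between(det["lat"], det["lon"], pos["lat"], pos["lon"]) <= max_km
            and abs((det["timestamp"] - pos["timestamp"]).total_seconds()) <= max_minutes * 60
            for pos in ais_positions
        )
        if not matched:
            dark.append(det)  # detected by radar, but not broadcasting
    return dark
```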

The newest findings upend previous industry assumptions and showcase the troublingly large impact of dark fleets around the world.

"Publicly available data wrongly suggests that Asia and Europe have similar amounts of fishing within their borders, but our mapping reveals that Asia dominates: for every 10 fishing vessels we found on the water, seven were in Asia while only one was in Europe," Jennifer Raynor, a study co-author and University of Wisconsin-Madison assistant professor of natural resource economics, said in the announcement. "By revealing dark vessels, we have created the most comprehensive public picture of global industrial fishing available."

It's not all troubling revisions, however. According to the team's findings, the number of green offshore energy projects more than doubled over the five-year timespan analyzed. As of 2021, wind turbines officially outnumbered the world's oil platforms, with China taking the lead by increasing its number of wind farms by 900 percent.

"Previously, this type of satellite monitoring was only available to those who could pay for it. Now it is freely available to all nations," Kroodsma said in Wednesday's announcement, declaring that the study marks "the beginning of a new era in ocean management and transparency."

Excerpt from:

AI and satellite data helped uncover the ocean's 'dark vessels' - Popular Science

Posted in Ai

What software developers using ChatGPT can tell us about how it’s changing work – Quartz

In her first job since graduating from college, Eknoor Kaur works at a company where using AI chatbots is not unusual. At first the software engineer at Pathlight, which makes automation tools, was skeptical. But after a colleague mentioned that ChatGPT helped him work better and faster, she eased into the idea, and today she doesn't spend a workday without it.

Kaur keeps ChatGPT open on her desktop, typically posing the bot four or five questions a day. She doesn't use the tool to write code because she's worried about the hallucinatory, or made-up, answers that AI chatbots can provide. Instead, Kaur uses the system like a search engine, asking it programming-related questions she doesn't want to burden coworkers with.

Not surprisingly, some of the earliest adopters of generative AI at work are software developers. Alongside ChatGPT maker OpenAI, companies like Microsoft and Salesforce have rolled out AI copilots, or digital assistants, for writing code. And while a slew of employers, including Apple, Bank of America, and Goldman Sachs, have blocked or limited ChatGPT on the job, things are different at plenty of tech companies, and startups in particular. Tech workers use a range of AI chatbots. Amazon developers, for instance, have their own version of ChatGPT called CodeWhisperer.

ChatGPT is currently in the "giant room-size machine" phase, said David Baggett, founder of cybersecurity firm Inky. He likens our current chatbots to the computers of the 1950s: early-stage and used for a narrow range of tasks.

But engineers are leading the charge in using those chatbots at work, even in their limited capacity. Developers Quartz spoke with use ChatGPT to generate code for software, saving themselves anywhere from minutes to hours a day on writing, or to find information faster than traditional online search methods would allow.

Instead of starting on Google or Stack Overflow, a popular Q&A site for developers, which could take several pages or clicks to land on the right piece of code, developers can ask ChatGPT or another chatbot and get what they need with one prompt. "I do a lot less Googling," said Amin Ahmad, CEO of search software company Vectara and a former researcher at Google.
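
For a sense of what that workflow looks like outside the ChatGPT web interface, here is a minimal sketch that asks the same kind of programming question through OpenAI's chat completions API. The model choice, prompts and system message are illustrative assumptions; the developers quoted here mostly work through the ChatGPT or Copilot interfaces rather than raw API calls.

```python
# Sketch of "ask the chatbot instead of Googling" via the OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # example model choice
    messages=[
        {"role": "system", "content": "You are a concise programming assistant."},
        {"role": "user", "content": "In Python, how do I read a CSV file into a list of dicts?"},
    ],
)
print(response.choices[0].message.content)
```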

Developers can also prompt chatbots to write code for them and adjust from there. "Everything hasn't worked on the first try," said Cody De Arkland, head of technical marketing at tech management platform LaunchDarkly. De Arkland said he uses ChatGPT as one of his final steps, to see if there's a better way to optimize his code, like writing it more efficiently. He uses a few AI chatbots, including GitHub Copilot, which is paid for by his employer.

Generative AI doesn't always work for Baggett, either. In his experience, ChatGPT sometimes spits out an answer that doesn't work at all.

At LaunchDarkly, De Arkland recalls a teammate who estimated that coding a complex pricing calculator would take roughly two months, but after using ChatGPT wrote the code in just a week and a half. The obvious case for coding with chatbots is speed: projects are finished faster, and engineers say they're shifting freed-up time into building better features.

"We're not going to end in a place where there's not enough work to go around," De Arkland said. "There's always going to be projects and new things that have to be built to fill up that space."

But there's a limit to what software engineers will share with AI chatbots. For example, none of the developers Quartz spoke with said they would paste entire blocks of code into ChatGPT or other chatbots, wary that the AI tool could compromise data privacy, or have trouble understanding large volumes of text. For some, it wasn't clear if their employer had guardrails to prevent people from entering personal data into a chatbot.

In general, developers say that ChatGPT takes away boring baseline work. Catherine Yeo, an engineer at coding software maker Warp, has used her company's AI chatbot for nine months. Even today, she always marvels when it does return an answer and solves her problems.

Vectara's Ahmad notes that a chatbot allows him to find new solutions to a problem he wouldn't have initially considered when writing code. But as a developer working on AI technology, he, like many non-technical workers, worries his job could be automated away.

See the original post:

What software developers using ChatGPT can tell us about how it's changing work - Quartz

Posted in Ai

AI is here and everywhere: 3 AI researchers look to the challenges ahead in 2024 – The Conversation Indonesia

2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that's already galloping along.

We've assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.

One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so in ways that often do more harm than good.

However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don't think we should be revamping education to put AI at the center of everything, but if students don't learn about how AI works, they won't understand its limitations and therefore how it is useful and appropriate to use and how it's not. This isn't just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.

So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are often sufficient to dazzle even the most experienced observer, but that once their inner workings are explained "in language sufficiently plain to induce understanding," their "magic crumbles away." The challenge with generative artificial intelligence is that, in contrast to ELIZA's very basic pattern matching and substitution methodology, it is much more difficult to find language sufficiently plain to make the AI magic crumble away.

I think it's possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.

Kentaro Toyama, Professor of Community Information, University of Michigan

In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, "In from three to eight years we will have a machine with the general intelligence of an average human being." With the singularity, the moment artificial intelligence matches and begins to exceed human intelligence, not quite here yet, it's safe to say that Minsky was off by at least a factor of 10. It's perilous to make predictions about AI.

Still, making predictions for a year out doesn't seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky's prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.

The big technical question is how soon and how thoroughly AI engineers can address the current Achilles' heel of deep learning: what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.

Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire, comically, tragically or both. Deepfakes, AI-generated images and videos that are difficult to detect, are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn't have been possible even five years ago.

Speaking of problems, the very people sounding the loudest alarms about AI, like Elon Musk and Sam Altman, can't seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They're like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for in 2024, though it seems slow in coming, is stronger AI regulation at national and international levels.

Anjana Susarla, Professor of Information Systems, Michigan State University

In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to ChatGPT a year back, which took in textual prompts as inputs and produced textual output, the new class of generative AI models are trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.

Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight LLMs and open-source LLMs could usher in a world of autonomous AI agents, a world that society is not necessarily prepared for.

These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as pose new types of algorithmic harms.

The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.

The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there's a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act.

A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but to consider the contexts the algorithms operate in: people, processes and society.

Read the original post:

AI is here and everywhere: 3 AI researchers look to the challenges ahead in 2024 - The Conversation Indonesia

Posted in Ai