What to know about this AI stock with ties to Nvidia up nearly 170% in 2024 – CNBC

Investors may want to keep an eye on this artificial intelligence voice-and-speech recognition stock with ties to Nvidia . Shares of SoundHound AI have surged almost 170% this year and nearly 347% in February alone as investors bet on new applications for the booming technology trend that has taken Wall Street by storm. Last month, Nvidia revealed a $3.7 million bet on the stock in a securities filing, and management said on an earnings call that "demand is going through theroof." "We continue to believe that the company is in a strong position to capture its fair share of the AI chatbot market demand wave with its technology providing more use cases going forward," wrote Wedbush Securities analyst Dan Ives in a February note. SOUN YTD mountain SoundHound shares in 2024 While the Nvidia investment isn't new news for investors and analysts, it does reinforce SoundHound's value proposition. Ives also noted that the stake "solidifies the company's brand within the AI Revolution" and lays the groundwork for a potential larger investment in the future. Relatively few Wall Street shops cover the AI stock. A little more than 80% rate it with a buy or overweight rating, with consensus price targets suggesting upside of nearly 24%, per FactSet. The company also sits at a roughly $1.7 billion market capitalization and has yet to attain profitability. Expanding its total addressable market Along with its Nvidia relationship, SoundHound has partnered with a slew of popular restaurant brands, automakers and hospitality companies to provide AI voice customer solutions. While the company works with about a quarter of total automobile companies, "the penetration into that customer set only amounts to 1-2% of global sales, leaving significant room for growth within the current customer base as well as growth from adding new brands," said Ladenburg Thalmann's Glenn Mattson in a January note initiating coverage with a buy rating. "With voice enabled units expected to grow to 70% of shipments by 2026, this represents a significant growth opportunity, in our view," he added. SoundHound has also made significant headway within the restaurant industry, recently adding White Castle, Krispy Kreme and Jersey Mike's to its growing list of customers, analysts note. That total addressable market should continue growing as major players such as McDonald's, DoorDash and Wendy's hunt for ways to expand AI voice use, said D.A. Davidson's Gil Luria. He estimates an $11 billion total addressable market when accounting for the immediate opportunities from quick-service restaurants and original equipment manufacturers. "SoundHound's long term opportunity is attractive and largely up for grabs," he said in a September note initiating coverage with a buy rating. "Given the high degree of technical complexity required to create value in this space, we see SoundHound with its best-of-breed solution as a likely winner and expect it to win significant market share." Headwinds to profitability While demand for SoundHound AI's products appears to be accelerating, investors should beware of a bumpy road ahead. Cantor Fitzgerald's Brett Knoblauch noted that being in the early stages of product adoption creates uncertainties surrounding the "pace of revenue growth and timeline to positive FCF." Although H.C. Wainwright's Scott Buck views SoundHound's significant bookings backlog and accelerating revenue growth as supportive of a premium valuation, he noted that the recent acquisition of restaurant automation technology company SYNQ3 could delay profitability to next year. But "we suspect the longer term financial and operating benefits to meaningfully outweigh short-term profitability headwinds," he said. "We recommend investors continue to accumulate SOUN shares ahead of stronger operating results."

Go here to read the rest:

What to know about this AI stock with ties to Nvidia up nearly 170% in 2024 - CNBC

Posted in Ai

NIST, the lab at the center of Bidens AI safety push, is decaying – The Washington Post

At the National Institute of Standards and Technology the government lab overseeing the most anticipated technology on the planet black mold has forced some workers out of their offices. Researchers sleep in their labs to protect their work during frequent blackouts. Some employees have to carry hard drives to other buildings; flaky internet wont allow for the sending of large files.

And a leaky roof forces others to break out plastic sheeting.

If we knew rain was coming, wed tarp up the microscope, said James Fekete, who served as chief of NISTs applied chemicals and materials division until 2018. It leaked enough that we were prepared.

NIST is at the heart of President Bidens ambitious plans to oversee a new generation of artificial intelligence models; through an executive order, the agency is tasked with developing tests for security flaws and other harms. But budget constraints have left the 123-year-old lab with a skeletal staff on key tech teams and most facilities on its main Gaithersburg, Md., and Boulder, Colo., campuses below acceptable building standards.

Interviews with more than a dozen current and former NIST employees, Biden administration officials, congressional aides and tech company executives, along with reports commissioned by the government, detail a massive resources gap between NIST and the tech firms it is tasked with evaluating a discrepancy some say risks undermining the White Houses ambitious plans to set guardrails for the burgeoning technology. Many of the people spoke to The Washington Post on the condition of anonymity because they were not authorized to speak to the media.

Even as NIST races to set up the new U.S. AI Safety Institute, the crisis at the degrading lab is becoming more acute. On Sunday, lawmakers released a new spending plan that would cut NISTs overall budget by more than 10 percent, to $1.46 billion. While lawmakers propose to invest $10 million in the new AI institute, thats a fraction of the tens of billions of dollars tech giants like Google and Microsoft are pouring into the race to develop artificial intelligence. It pales in comparison to Britain, which has invested more than $125 million into its AI safety efforts.

The cuts to the agency are a self-inflicted wound in the global tech race, said Divyansh Kaushik, the associate director for emerging technologies and national security at the Federation of American Scientists.

Some in the AI community worry that underfunding NIST makes it vulnerable to industry influence. Tech companies are chipping in for the expensive computing infrastructure that will allow the institute to examine AI models. Amazon announced that it would donate $5 million in computing credits. Microsoft, a key investor in OpenAI, will provide engineering teams along with computing resources. (Amazon founder Jeff Bezos owns The Post.)

Tech executives, including OpenAI CEO Sam Altman, are regularly in communication with officials at the Commerce Department about the agencys AI work. OpenAI has lobbied NIST on artificial intelligence issues, according to federal disclosures. NIST asked TechNet an industry trade group whose members include OpenAI, Google and other major tech companies if its member companies can advise the AI Safety Institute.

NIST is also seeking feedback from academics and civil society groups on its AI work. The agency has a long history of working with a variety of stakeholders to gather input on technologies, Commerce Department spokesman Charlie Andrews said.

AI staff, unlike their more ergonomically challenged colleagues, will be working in well-equipped offices in the Gaithersburg campus, the Commerce Departments D.C. office and the NIST National Cybersecurity Center of Excellence in Rockville, Md., Andrews said.

White House spokeswoman Robyn Patterson said the appointment of Elizabeth Kelly to the helm of the new AI Safety Institute underscores the White Houses commitment to getting this work done right and on time. Kelly previously served as special assistant to the president for economic policy.

The Biden-Harris administration has so far met every single milestone outlined by the presidents landmark executive order, Patterson said. We are confident in our ability to continue to effectively and expeditiously meet the milestones and directives set forth by President Biden to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond.

NISTs financial struggles highlight the limitations of the administrations plan to regulate AI exclusively through the executive branch. Without an act of Congress, there is no new funding for initiatives like the AI Safety Institute and the programs could be easily overturned by the next president. And as the presidential elections approach, the prospects of Congress moving on AI in 2024 are growing dim.

During his State of the Union address on Thursday, Biden called on Congress to harness the promise of AI and protect us from its peril.

Congressional aides and former NIST employees say the agency has not been able to break through as a funding priority even as lawmakers increasingly tout its role in addressing technological developments, including AI, chips and quantum computing.

After this article published, Senate Majority Leader Charles E. Schumer (D-N.Y.) on Thursday touted the $10 million investment in the institute in the proposed budget, saying he fought for this funding to make sure that the development of AI prioritizes both innovation and safety.

A review of NISTs safety practices in August found that the budgetary issues endanger employees, alleging that the agency has an incomplete and superficial approach to safety.

Chronic underfunding of the NIST facilities and maintenance budget has created unsafe work conditions and further fueled the impression among researchers that safety is not a priority, said the NIST safety commission report, which was commissioned following the 2022 death of an engineering technician at the agencys fire research lab.

NIST is one of the federal governments oldest science agencies with one of the smallest budgets. Initially called the National Bureau of Standards, it began at the dawn of the 20th century, as Congress realized the need to develop more standardized measurements amid the expansion of electricity, the steam engine and railways.

The need for such an agency was underscored three years after its founding, when fires ravaged through Baltimore. Firefighters from Washington, Philadelphia and even New York rushed to help put out the flames, but without standard couplings, their hoses couldnt connect to the Baltimore hydrants. The firefighters watched as the flames overtook more than 70 city blocks in 30 hours.

NIST developed a standard fitting, unifying more than 600 different types of hose couplings deployed across the country at the time.

Ever since, the agency has played a critical role in using research and science to help the country learn from catastrophes and prevent new ones. Its work expanded after World War II: It developed an early version of the digital computer, crucial Space Race instruments and atomic clocks, which underpin GPS. In the 1950s and 1960s, the agency moved to new campuses in Boulder and Gaithersburg after its early headquarters in Washington fell into disrepair.

Now, scientists at NIST joke that they work at the most advanced labs in the world in the 1960s. Former employees describe cutting-edge scientific equipment surrounded by decades-old buildings that make it impossible to control the temperature or humidity to conduct critical experiments.

You see dust everywhere because the windows dont seal, former acting NIST director Kent Rochford said. You see a bucket catching drips from a leak in the roof. You see Home Depot dehumidifiers or portable AC units all over the place.

The flooding was so bad that Rochford said he once requested money for scuba gear. That request was denied, but he did receive funding for an emergency kit that included squeegees to clean up water.

Pests and wildlife have at times infiltrated its campuses, including an incident where a garter snake entered a Boulder building.

More than 60 percent of NIST facilities do not meet federal standards for acceptable building conditions, according to a February 2023 report commissioned by Congress from the National Academies of Sciences, Engineering and Medicine. The poor conditions impact employee output. Workarounds and do-it-yourself repairs reduce the productivity of research staff by up to 40 percent, according to the committees interviews with employees during a laboratory visit.

Years after Rochfords 2018 departure, NIST employees are still deploying similar MacGyver-style workarounds. Each year between October and March, low humidity in one lab creates a static charge making it impossible to operate an instrument ensuring companies meet environmental standards for greenhouse gases.

Problems with the HVAC and specialized lights have made the agency unable to meet demand for reference materials, which manufacturers use to check whether their measurements are accurate in products like baby formula.

Facility problems have also delayed critical work on biometrics, including evaluations of facial recognition systems used by the FBI and other law enforcement agencies. The data center in the 1966 building that houses that work receives inadequate cooling, and employees there spend about 30 percent of their time trying to mitigate problems with the lab, according to the academies reports. Scheduled outages are required to maintain the data centers that hold technology work, knocking all biometric evaluations offline for a month each year.

Fekete, the scientist who recalled covering the microscope, said his teams device never completely stopped working due to rain water.

But other NIST employees havent been so lucky. Leaks and floods destroyed an electron microscope worth $2.5 million used for semiconductor research, and permanently damaged an advanced scale called a Kibble balance. The tool was out of commission for nearly five years.

Despite these constraints, NIST has built a reputation as a natural interrogator of swiftly advancing AI systems.

In 2019, the agency released a landmark study confirming facial recognition systems misidentify people of color more often than White people, casting scrutiny on the technologys popularity among law enforcement. Due to personnel constraints, only a handful of people worked on that project.

Four years later, NIST released early guidelines around AI, cementing its reputation as a government leader on the technology. To develop the framework, the agency connected with leaders in industry, civil society and other groups, earning a strong reputation among numerous parties as lawmakers began to grapple with the swiftly evolving technology.

The work made NIST a natural home for the Biden administrations AI red-teaming efforts and the AI Safety Institute, which were formalized in the November executive order. Vice President Harris touted the institute at the U.K. AI Safety Summit in November. More than 200 civil society organizations, academics and companies including OpenAI and Google have signed on to participate in a consortium within the institute.

OpenAI spokeswoman Kayla Wood said in a statement that the company supports NISTs work, and that the company plans to continue to work with the lab to "support the development of effective AI oversight measures.

Under the executive order, NIST has a laundry list of initiatives that it needs to complete by this summer, including publishing guidelines for how to red-team AI models and launching an initiative to guide evaluating AI capabilities. In a December speech at the machine learning conference NeurIPS, the agencys chief AI adviser, Elham Tabassi, said this would be an almost impossible deadline.

It is a hard problem, said Tabassi, who was recently named the chief technology officer of the AI Safety Institute. We dont know quite how to evaluate AI.

The NIST staff has worked tirelessly to complete the work it is assigned by the AI executive order, said Andrews, the Commerce spokesperson.

While the administration has been clear that additional resources will be required to fully address all of the issues posed by AI in the long term, NIST has been effectively carrying out its responsibilities under the [executive order] and is prepared to continue to lead on AI-related research and other work, he said.

Commerce Secretary Gina Raimondo asked Congress to allocate $10 million for the AI Safety Institute during an event at the Atlantic Council in January. The Biden administration also requested more funding for NIST facilities, including $262 million for safety, maintenance and repairs. Congressional appropriators responded by cutting NISTs facilities budget.

The administrations ask falls far below the recommendations of the national academies study, which urged Congress to provide $300 to $400 million in additional annual funding over 12 years to overcome a backlog of facilities damage. The report also calls for $120 million to $150 million per year for the same period to stabilize the effects of further deterioration and obsolescence.

Ross B. Corotis, who chaired the academies committee that produced the facilities report, said Congress needs to ensure that NIST is funded because it is the go-to lab when any new technology emerges, whether thats chips or AI.

Unless youre going to build a whole new laboratory for some particular issue, youre going to turn first to NIST, Corotis said. And NIST needs to be ready for that.

Eva Dou and Nitasha Tiku contributed to this report.

Continue reading here:

NIST, the lab at the center of Bidens AI safety push, is decaying - The Washington Post

Posted in Ai

Nvidia, the tech company more valuable than Google and Amazon, explained – Vox.com

Only four companies in the world are worth over $2 trillion. Apple, Microsoft, the oil company Saudi Aramco and, as of 2024, Nvidia. Its understandable if the name doesnt ring a bell. The company doesnt exactly make a shiny product attached to your hand all day, every day, as Apple does. Nvidia designs a chip hidden deep inside the complicated innards of a computer, a seemingly niche product more are relying on every day.

Rewind the clock back to 2019, and Nvidias market value was hovering around $100 billion. Its incredible speedrun to 20 times that already enviable size was really enabled by one thing the AI craze. Nvidia is arguably the biggest winner in the AI industry. ChatGPT-maker OpenAI, which catapulted this obsession into the mainstream, is currently worth around $80 billion, and according to market research firm Grand View Research, the entire global AI market was worth a bit under $200 billion in 2023. Both are just a paltry fraction of Nvidias value. With all eyes on the companys jaw-dropping evolution, the real question now is whether Nvidia can hold on to its lofty perch but heres how the company got to this level.

In 1993, long before uncanny AI-generated art and amusing AI chatbot convos took over our social media feeds, three Silicon Valley electrical engineers launched a startup that would focus on an exciting, fast-growing segment of personal computing: video games.

Nvidia was founded to design a specific kind of chip called a graphics card also commonly called a GPU (graphics processing unit) that enables the output of fancy 3D visuals on the computer screen. The better the graphics card, the more quickly high-quality visuals can be rendered, which is important for things like playing games and video editing. In the prospectus filed ahead of its initial public offering in 1999, Nvidia noted that its future success would depend on the continued growth of computer applications relying on 3D graphics. For most of Nvidias existence, game graphics were Nvidias raison detre.

Ben Bajarin, CEO and principal analyst at the tech industry research firm Creative Strategies, acknowledged that Nvidia had been relatively isolated to a niche part of computing in the market until recently.

Nvidia became a powerhouse selling cards for video games now an entertainment industry juggernaut making over $180 billion in revenue last year but it realized it would be smart to branch out from just making graphics cards for games. Not all its experiments panned out. Over a decade ago, Nvidia made a failed gambit to become a major player in the mobile chip market, but today Android phones use a range of non-Nvidia chips, while iPhones use Apple-designed ones.

Another play, though, not only paid off, it became the reason were all talking about Nvidia today. In 2006, the company released a programming language called CUDA that, in short, unleashed the power of its graphics cards for more general computing processes. Its chips could now do a lot of heavy lifting for tasks unrelated to pumping out pretty game graphics, and it turned out that graphics cards could multitask even better than the CPU (central processing unit), whats often called the central brain of a computer. This made Nvidias GPUs great for calculation-heavy tasks like machine learning (and, crypto mining). 2006 was the same year Amazon launched its cloud computing business; Nvidias push into general computing was coming at a time when massive data centers were popping up around the world.

That Nvidia is a powerhouse today is especially notable because for most of Silicon Valleys history, there already was a chip-making goliath: Intel. Intel makes both CPUs and GPUs, as well as other products, and it manufactures its own semiconductors but after a series of missteps, including not investing into the development of AI chips soon enough, the rival chipmakers preeminence has somewhat faded. In 2019, when Nvidias market value was just over the $100 billion mark, Intels value was double that; now Nvidia has joined the ranks of tech titans designated the Magnificent Seven, a cabal of tech stocks with a combined value that exceeds the entire stock market of many rich G20 countries.

Their competitors were asleep at the wheel, says Gil Luria, a senior analyst at the financial firm D.A. Davidson Companies. Nvidia has long talked about the fact that GPUs are a superior technology for handling accelerated computing.

Today, Nvidias four main markets are gaming, professional visualization (like 3D design), data centers, and the automotive industry, as it provides chips that train self-driving technology. A few years ago, its gaming market was still the biggest chunk of revenue at about $5.5 billion, compared to its data center segment, which raked in about $2.9 billion. Then the pandemic broke out. People were spending a lot more time at home, and demand for computer parts, including GPUs, shot up gaming revenue for the company in fiscal year 2021 jumped a whopping 41 percent. But there were already signs of the coming AI wave, too, as Nvidias data center revenue soared by an even more impressive 124 percent. In 2023, its revenue was 400 percent higher than the year before. In a clear display of how quickly the AI race ramped up, data centers have overtaken games, even in a gaming boom.

When it went public in 1999, Nvidia had 250 employees. Now it has over 27,000. Jensen Huang, Nvidias CEO and one of its founders, has a personal net worth that currently hovers around $70 billion, an over 1,700 percent increase since 2019.

Its likely youve already brushed up against Nvidias products, even if you dont know it. Older gaming consoles like the PlayStation 3 and the original Xbox had Nvidia chips, and the current Nintendo Switch uses an Nvidia mobile chip. Many mid- to high-range laptops come packed up with an Nvidia graphics card as well.

But with the AI bull rush, the company promises to become more central to the tech people use every day. Tesla cars self-driving feature utilizes Nvidia chips, as do practically all major tech companies cloud computing services. These services serve as a backbone for so much of our daily internet routines, whether its streaming content on Netflix or using office and productivity apps. To train ChatGPT, OpenAI harnessed tens of thousands of Nvidias AI chips together. People underestimate how much they use AI on a daily basis, because we dont realize that some of the automated tasks we rely on have been boosted by AI. Popular apps and social media platforms are adding new AI features seemingly every day: TikTok, Instagram, X (formerly Twitter), even Pinterest all boast some kind of AI functionality to toy with. Slack, a messaging platform that many workplaces use, recently rolled out the ability to use AI to generate thread summaries and recaps of Slack channels.

For Nvidias customers, the problem with sizzling demand is that the company can charge eye-wateringly high prices. The chips used for AI data centers cost tens of thousands of dollars, with the top-of-the-line product sometimes selling for over $40,000 on sites like Amazon and eBay. Last year, some clients clamoring for Nvidias AI chips were waiting as much as 11 months.

Just think of Nvidia as the Birkin bag of AI chips. A comparable offering from another chipmaker, AMD, is reportedly being sold to customers like Microsoft for about $10,000 to $15,000, just a fraction of what Nvidia charges. Its not just the AI chips, either. Nvidias gaming business continues to boom, and the price gap between its high-end gaming card and a similarly performing one from AMD has been growing wider. In its last financial quarter, Nvidia reported a gross margin of 76 percent. As in, it cost them just 24 cents to make a dollar in sales. AMDs most recent gross margin was only 47 percent.

Nvidias fans argue that its yawning lead was earned by making an early bet that AI would take over the world its chips are worth the price because of its superior software, and because so much of AI infrastructure has already been built around Nvidias products. But Erik Peinert, a research manager and editor at the American Economic Liberties Project who helped put together a recent report on competition within the chip industry, notes that Nvidia has gotten a price boost because TSMC, the biggest semiconductor maker in the world, has struggled for years to keep up with demand. A recent Wall Street Journal report also suggested that the company may be throwing its weight around to maintain dominance; the CEO of an AI chip startup called Groq claimed that customers were scared Nvidia would punish them with order delays if it got wind they were meeting with other chip makers.

Its undeniable that Nvidia put in the investment into courting the AI industry well before others started paying attention, but its grip on the market isnt unshakable. An army of competitors are on the march, ranging from smaller startups to deep-pocketed opponents, including Amazon, Meta, Microsoft, and Google, all of which currently use Nvidia chips. The biggest challenge for Nvidia is that their customers want to compete with them, says Luria.

Its not just that their customers want to make some of the money that Nvidia has been raking in its that they cant afford to keep paying so much. Microsoft went from spending less than 10 percent of their capital expenditure on Nvidia to spending nearly 40 percent, Luria says. Thats not sustainable.

The fact that over 70 percent of AI chips are bought from Nvidia is also cause for concern for antitrust regulators around the world the EU recently started looking into the industry for potential antitrust abuses. When Nvidia announced in late 2020 that it wanted to spend an eye-popping $40 billion to buy Arm Limited, a company that designs a chip architecture that most modern smartphones and newer Apple computers use, the FTC blocked the deal. That acquisition was pretty clearly intended to get control over a software architecture that most of the industry relied on, says Peinert. The fact that they have so much pricing power, and that theyre not facing any effective competition, is a real concern.

Whether Nvidia will sustain itself as a $2 trillion company or rise to even greater heights depends, fundamentally, on whether both consumer and investor attention on AI can be sustained. Silicon Valley is awash with newly founded AI companies, but what percentage of them will take off, and how long will funders keep pouring money into them?

Widespread AI awareness came about because ChatGPT was an easy-to-use or at least easy-to-show-off-on-social-media novelty for the general public to get excited about. But a lot of AI work is still focusing on AI training rather than whats called AI inferencing, which involves using trained AI models to solve a task, like the way that ChatGPT answers a users query or facial recognition tech identifies people. Though the AI inference market is growing (and maybe growing faster than expected), much of the sector is still going to be spending a lot more time and money on training. For training, Nvidias first-class chips will likely remain the most coveted, at least for a while. But once AI inferencing explodes, there will be less of a need for such high-performance chips, and Nvidias primacy could slip.

Some financial analysts and industry experts have expressed wariness over Nvidias stratospheric valuation, suspecting that AI enthusiasm will slow down and that there may already be too much money going toward making AI chips. Traffic to ChatGPT has dropped off since last May and some investors are slowing down the money hose.

Every big technology goes through an adoption cycle, says Luria. As it comes into consciousness, you build this huge hype. Then at some point, the hype gets too big, and then you get past it and get into the trough of disillusionment. He expects to see that soon with AI though that doesnt mean its a bubble.

Nvidias revenue last year was about $60 billion, which was a 126 percent increase from the prior year. Its high valuation and stock price is based not just on that revenue, though, but for its predicted continued growth for comparison, Amazon currently has a lower market value than Nvidia yet made almost $575 billion in sales last year. The path to Nvidia booking large enough profits to justify the $2 trillion valuation looks steep to some experts, especially knowing that the competition is kicking into high gear.

Theres also the possibility that Nvidia could be stymied by how fast microchip technology can advance. It has moved at a blistering pace in the last several decades, but there are signs that the pace at which more transistors can be fitted onto a microchip making them smaller and more powerful is slowing down. Whether Nvidia can keep offering meaningful hardware and software improvements that convince its customers to buy its latest AI chips could be a challenge, says Bajarin.

Yet, for all these possible obstacles, if one were to bet whether Nvidia will soon become as familiar a tech company as Apple and Google, the safe answer is yes. AI fever is why Nvidia is in the rarefied club of trillion-dollar companies but it may be just as true to say that AI is so big because of Nvidia.

Yes, I'll give $5/month

Yes, I'll give $5/month

We accept credit card, Apple Pay, and Google Pay. You can also contribute via

Here is the original post:

Nvidia, the tech company more valuable than Google and Amazon, explained - Vox.com

Posted in Ai

AI makes a rendezvous in space | Stanford News – Stanford University News

Researchers from the Stanford Center for AEroSpace Autonomy Research (CAESAR) in the robotic testbed, which can simulate the movements of autonomous spacecraft. (Image credit: Andrew Brodhead)

Space travel is complex, expensive, and risky. Great sums and valuable payloads are on the line every time one spacecraft docks with another. One slip and a billion-dollar mission could be lost. Aerospace engineers believe that autonomous control, like the sort guiding many cars down the road today, could vastly improve mission safety, but the complexity of the mathematics required for error-free certainty is beyond anything on-board computers can currently handle.

In a new paper presented at the IEEE Aerospace Conference in March 2024, a team of aerospace engineers at Stanford University reported using AI to speed the planning of optimal and safe trajectories between two or more docking spacecraft. They call it ART the Autonomous Rendezvous Transformer and they say it is the first step to an era of safer and trustworthy self-guided space travel.

In autonomous control, the number of possible outcomes is massive. With no room for error, they are essentially open-ended.

Trajectory optimization is a very old topic. It has been around since the 1960s, but it is difficult when you try to match the performance requirements and rigid safety guarantees necessary for autonomous space travel within the parameters of traditional computational approaches, said Marco Pavone, an associate professor of aeronautics and astronautics and co-director of the new Stanford Center for AEroSpace Autonomy Research (CAESAR). In space, for example, you have to deal with constraints that you typically do not have on the Earth, like, for example, pointing at the stars in order to maintain orientation. These translate to mathematical complexity.

For autonomy to work without fail billions of miles away in space, we have to do it in a way that on-board computers can handle, added Simone DAmico, an associate professor of aeronautics and astronautics and fellow co-director of CAESAR. AI is helping us manage the complexity and delivering the accuracy needed to ensure mission safety, in a computationally efficient way.

CAESAR is a collaboration between industry, academia, and government that brings together the expertise of Pavones Autonomous Systems Lab and DAmicos Space Rendezvous Lab. The Autonomous Systems Lab develops methodologies for the analysis, design, and control of autonomous systems cars, aircraft, and, of course, spacecraft. The Space Rendezvous Lab performs fundamental and applied research to enable future distributed space systems whereby two or more spacecraft collaborate autonomously to accomplish objectives otherwise very difficult for a single system, including flying in formation, rendezvous and docking, swarm behaviors, constellations, and many others. CAESAR is supported by two founding sponsors from the aerospace industry and, together, the lab is planning a launch workshop for May 2024.

CAESAR researchers discuss the robotic free-flyer platform, which uses air bearings to hover on a granite table and simulate a frictionless zero gravity environment. (Image credit: Andrew Brodhead)

The Autonomous Rendezvous Transformer is a trajectory optimization framework that leverages the massive benefits of AI without compromising on the safety assurances needed for reliable deployment in space. At its core, ART involves integrating AI-based methods into the traditional pipeline for trajectory optimization, using AI to rapidly generate high-quality trajectory candidates as input for conventional trajectory optimization algorithms. The researchers refer to the AI suggestions as a warm start to the optimization problem and show how this is crucial to obtain substantial computational speed-ups without compromising on safety.

One of the big challenges in this field is that we have so far needed ground in the loop approaches you have to communicate things to the ground where supercomputers calculate the trajectories and then we upload commands back to the satellite, explains Tommaso Guffanti, a postdoctoral fellow in DAmicos lab and first author of the paper introducing the Autonomous Rendezvous Transformer. And in this context, our paper is exciting, I think, for including artificial intelligence components in traditional guidance, navigation, and control pipeline to make these rendezvous smoother, faster, more fuel efficient, and safer.

ART is not the first model to bring AI to the challenge of space flight, but in tests in a terrestrial lab setting, ART outperformed other machine learning-based architectures. Transformer models, like ART, are a subset of high-capacity neural network models that got their start with large language models, like those used by chatbots. The same AI architecture is extremely efficient in parsing, not just words, but many other types of data such as images, audio, and now, trajectories.

Transformers can be applied to understand the current state of a spacecraft, its controls, and maneuvers that we wish to plan, Daniele Gammelli, a postdoctoral fellow in Pavones lab, and also a co-author on the ART paper. These large transformer models are extremely capable at generating high-quality sequences of data.

The next frontier in their research is to further develop ART and then test it in the realistic experimental environment made possible by CAESAR. If ART can pass CAESARs high bar, the researchers can be confident that its ready for testing in real-world scenarios in orbit.

These are state-of-the-art approaches that need refinement, DAmico says. Our next step is to inject additional AI and machine learning elements to improve ARTs current capability and to unlock new capabilities, but it will be a long journey before we can test the Autonomous Rendezvous Transformer in space itself.

Follow this link:

AI makes a rendezvous in space | Stanford News - Stanford University News

Posted in Ai

AI drone that could hunt and kill people built in just hours by scientist ‘for a game’ – Livescience.com

It only takes a few hours to configure a small, commercially available drone to hunt down a target by itself, a scientist has warned.

Luis Wenus, an entrepreneur and engineer, incorporated an artificial intelligence (AI) system into a small drone to chase people around "as a game," he wrote in a post on March 2 on X, formerly known as Twitter. But he soon realized it could easily be configured to contain an explosive payload.

Collaborating with Robert Lukoszko, another engineer, he configured the drone to use an object-detection model to find people and fly toward them at full speed, he said. The engineers also built facial recognition into the drone, which works at a range of up to 33 feet (10 meters). This means a weaponized version of the drone could be used to attack a specific person or set of targets.

Related: 3 scary breakthroughs AI will make in 2024

"This literally took just a few hours to build, and made me realize how scary it is," Wenus wrote. "You could easily strap a small amount of explosives on these and let 100's of them fly around. We check for bombs and guns but THERE ARE NO ANTI-DRONE SYSTEMS FOR BIG EVENTS & PUBLIC SPACES YET."

Wenus described himself as an "open source absolutist," meaning he believes in always sharing code and software through open source channels. He also identifies as an "e/acc" which is a school of thinking among AI researchers that refers to wanting to accelerate AI research regardless of the downsides, due to a belief that the upsides will always outweigh them. He said, however, that he would not publish any code relating to this experiment.

He also warned that a terror attack could be orchestrated in the near future using this kind of technology. While people need technical knowledge to engineer such a system, it will become easier and easier to write the software as time passes, partially due to advancements in AI as an assistant in writing code, he noted.

Wenus said his experiment showed that society urgently needs to build anti-drone systems for civilian spaces where large crowds could gather. There are several countermeasures that society can build, according to Robin Radar, including cameras, acoustic sensors and radar to detect drones. Disrupting them, however, could require technologies such as radio frequency jammers, GPS spoofers, net guns, as well as high-energy lasers.

While such weapons haven't been deployed in civilian environments, they have been previously conceptualized and deployed in the context of warfare. Ukraine, for example, has developed explosive drones in response to Russia's invasion, according to the Wall Street Journal (WSJ).

The U.S. military is also working on ways to build and control swarms of small drones that can attack targets. It follows the U.S. Navy's efforts after it first demonstrated that it could control a swarm of 30 drones with explosives in 2017, according to MIT Technology Review.

Read more from the original source:

AI drone that could hunt and kill people built in just hours by scientist 'for a game' - Livescience.com

Posted in Ai

Nvidia Earnings Show Soaring Profit and Revenue Amid AI Boom – The New York Times

Nvidia, the kingpin of chips powering artificial intelligence, on Wednesday released quarterly financial results that reinforced how the company has become one of the biggest winners of the artificial intelligence boom, and it said demand for its products would fuel continued sales growth.

The Silicon Valley chip maker has been on an extraordinary rise over the past 18 months, driven by demand for its specialized and costly semiconductors, which are used for training popular A.I. services like OpenAIs ChatGPT chatbot. Nvidia has become known as one of the Magnificent Seven tech stocks, which, including others like Amazon, Apple and Microsoft, have helped power the stock market.

Nvidias valuation has surged more than 40 percent to $1.7 trillion since the start of the year, turning it into one of the worlds most valuable public companies. Last week, the company briefly eclipsed the market values of Amazon and Alphabet before receding to the fifth-most-valuable tech company. Its stock market gains are largely a result of repeatedly exceeding analysts expectations for growth, a feat that is becoming more difficult as they keep raising their predictions.

On Wednesday, Nvidia reported that revenue in its fiscal fourth quarter more than tripled from a year earlier to $22.1 billion, while profit soared nearly ninefold to $12.3 billion. Revenue was well above the $20 billion the company predicted in November and above Wall Street estimates of $20.4 billion.

Nvidia predicted that revenue in the current quarter would total about $24 billion, also more than triple the year-earlier period and higher than analysts average forecast of $22 billion.

Jensen Huang, Nvidias co-founder and chief executive, argues that an epochal shift to upgrade data centers with chips needed for training powerful A.I. models is still in its early phases. That will require spending roughly $2 trillion to equip all the buildings and computers to use chips like Nvidias, he predicts.

We are having trouble retrieving the article content.

Please enable JavaScript in your browser settings.

Thank you for your patience while we verify access. If you are in Reader mode please exit andlog intoyour Times account, orsubscribefor all of The Times.

Thank you for your patience while we verify access.

Already a subscriber?Log in.

Want all of The Times?Subscribe.

Continued here:

Nvidia Earnings Show Soaring Profit and Revenue Amid AI Boom - The New York Times

Posted in Ai

Researchers jailbreak AI chatbots with ASCII art — ArtPrompt bypasses safety measures to unlock malicious queries – Tom’s Hardware

Researchers based in Washington and Chicago have developed ArtPrompt, a new way to circumvent the safety measures built into large language models (LLMs). According to the research paper ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs, chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 can be induced to respond to queries they are designed to reject using ASCII art prompts generated by their ArtPrompt tool. It is a simple and effective attack, and the paper provides examples of the ArtPrompt-induced chatbots advising on how to build bombs and make counterfeit money.

Image 1 of 2

ArtPrompt consists of two steps, namely word masking and cloaked prompt generation. In the word masking step, given the targeted behavior that the attacker aims to provoke, the attacker first masks the sensitive words in the prompt that will likely conflict with the safety alignment of LLMs, resulting in prompt rejection. In the cloaked prompt generation step, the attacker uses an ASCII art generator to replace the identified words with those represented in the form of ASCII art. Finally, the generated ASCII art is substituted into the original prompt, which will be sent to the victim LLM to generate response.

Artificial intelligence (AI) wielding chatbots are increasingly locked down to avoid malicious abuse. AI developers don't want their products to be subverted to promote hateful, violent, illegal, or similarly harmful content. So, if you were to query one of the mainstream chatbots today about how to do something malicious or illegal, you would likely only face rejection. Moreover, in a kind of technological game of whack-a-mole, the major AI players have spent plenty of time plugging linguistic and semantic holes to prevent people from wandering outside the guardrails. This is why ArtPrompt is quite an eyebrow-raising development.

To best understand ArtPrompt and how it works, it is probably simplest to check out the two examples provided by the research team behind the tool. In Figure 1 above, you can see that ArtPrompt easily sidesteps the protections of contemporary LLMs. The tool replaces the 'safety word' with an ASCII art representation of the word to form a new prompt. The LLM recognizes the ArtPrompt prompt output but sees no issue in responding, as the prompt doesn't trigger any ethical or safety safeguards.

Another example provided in the research paper shows us how to successfully query an LLM about counterfeiting cash. Tricking a chatbot this way seems so basic, but the ArtPrompt developers assert how their tool fools today's LLMs "effectively and efficiently." Moreover, they claim it "outperforms all [other] attacks on average" and remains a practical, viable attack for multimodal language models for now.

The last time we reported on AI chatbot jailbreaking, some enterprising researchers from NTU were working on Masterkey, an automated method of using the power of one LLM to jailbreak another.

See original here:

Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries - Tom's Hardware

Posted in Ai

Google apologizes for missing the mark after Gemini generated racially diverse Nazis – The Verge

Google has apologized for what it describes as inaccuracies in some historical image generation depictions with its Gemini AI tool, saying its attempts at creating a wide range of results missed the mark. The statement follows criticism that it depicted specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.

Were aware that Gemini is offering inaccuracies in some historical image generation depictions, says the Google statement, posted this afternoon on X. Were working to improve these kinds of depictions immediately. Geminis AI image generation does generate a wide range of people. And thats generally a good thing because people around the world use it. But its missing the mark here.

Google began offering image generation through its Gemini (formerly Bard) AI platform earlier this month, matching the offerings of competitors like OpenAI. Over the past few days, however, social media posts have questioned whether it fails to produce historically accurate results in an attempt at racial and gender diversity.

As the Daily Dot chronicles, the controversy has been promoted largely though not exclusively by right-wing figures attacking a tech company thats perceived as liberal. Earlier this week, a former Google employee posted on X that its embarrassingly hard to get Google Gemini to acknowledge that white people exist, showing a series of queries like generate a picture of a Swedish woman or generate a picture of an American woman. The results appeared to overwhelmingly or exclusively show AI-generated people of color. (Of course, all the places he listed do have women of color living in them, and none of the AI-generated women exist in any country.) The criticism was taken up by right-wing accounts that requested images of historical groups or figures like the Founding Fathers and purportedly got overwhelmingly non-white AI-generated people as results. Some of these accounts positioned Googles results as part of a conspiracy to avoid depicting white people, and at least one used a coded antisemitic reference to place the blame.

Google didnt reference specific images that it felt were errors; in a statement to The Verge, it reiterated the contents of its post on X. But its plausible that Gemini has made an overall attempt to boost diversity because of a chronic lack of it in generative AI. Image generators are trained on large corpuses of pictures and written captions to produce the best fit for a given prompt, which means theyre often prone to amplifying stereotypes. A Washington Post investigation last year found that prompts like a productive person resulted in pictures of entirely white and almost entirely male figures, while a prompt for a person at social services uniformly produced what looked like people of color. Its a continuation of trends that have appeared in search engines and other software systems.

Some of the accounts that criticized Google defended its core goals. Its a good thing to portray diversity ** in certain cases **, noted one person who posted the image of racially diverse 1940s German soldiers. The stupid move here is Gemini isnt doing it in a nuanced way. And while entirely white-dominated results for something like a 1943 German soldier would make historical sense, thats much less true for prompts like an American woman, where the question is how to represent a diverse real-life group in a small batch of made-up portraits.

For now, Gemini appears to be simply refusing some image generation tasks. It wouldnt generate an image of Vikings for one Verge reporter, although I was able to get a response. On desktop, it resolutely refused to give me images of German soldiers or officials from Germanys Nazi period or to offer an image of an American president from the 1800s.

But some historical requests still do end up factually misrepresenting the past. A colleague was able to get the mobile app to deliver a version of the German soldier prompt which exhibited the same issues described on X.

And while a query for pictures of the Founding Fathers returned group shots of almost exclusively white men who vaguely resembled real figures like Thomas Jefferson, a request for a US senator from the 1800s returned a list of results Gemini promoted as diverse, including what appeared to be Black and Native American women. (The first female senator, a white woman, served in 1922.) Its a response that ends up erasing a real history of race and gender discrimination inaccuracy, as Google puts it, is about right.

Additional reporting by Emilia David

Read more from the original source:

Google apologizes for missing the mark after Gemini generated racially diverse Nazis - The Verge

Posted in Ai

Which AI phone features are useful and how well they actually work – The Washington Post

Every year like clockwork, some of the biggest companies in the world release new phones they hope you will shell out hundreds of dollars for.

And more and more, they are leaning on a new angle to get you thinking of upgrading: artificial intelligence.

Smartphones from Google and Samsung come with features to help you skim through long swaths of text, tweak the way you sound in messages, and make your photos more eye-catching. Meanwhile, Apple is reportedly racing to build AI tools and features it hopes to include in an upcoming version of its iOS software, which will launch alongside the companys new iPhones later this year.

But here's the real question: Of the AI tools built into phones right now, how many of them are actually useful?

Thats tough to say: It all depends on what you use your phone for, and what you personally perceive is helpful. To help, heres a brief guide to the AI features youll most commonly find in phones right now, so you can decide which might be worth living with for yourself.

For years, smartphone makers have worked to make the photos that come out of the tiny camera sensors they use look better than they should. Now, theyre also giving us the tools to more easily revise those images.

Here are the most basic: Google and Samsung phones now let you resize, move or erase people and objects inside photos youve taken. Once you do that, the phones lean on generative AI to fill in the visual gaps left behind and thats it.

Think of it as a little Photoshopping, except the hard work is basically done for you. And for better or worse, there are limits to what it can do.

You cant use those built-in tools to generate people, objects or more fantastical additions that werent part of the original image the way you can with other AI image creation tools. The results dont usually survive serious scrutiny, either its not hard to see places where little details dont line up, or areas that look smudgy because the AI couldnt convincingly fill a gap where an offending object used to be.

Whats potentially more unsettling are tools such as Googles Best Take for its Pixel phones, which give you the chance to select specific expressions for peoples faces in an image if youve taken a bunch of photos in a row.

Some people dont mind it, while others find it a little divorced from reality. No matter where you land, though, expect your photos to get a lot of AI attention the next time you buy a phone.

Your messages to your boss probably shouldnt sound like messages to your friends and vice versa. Samsungs Chat Assist and Googles Magic Compose tools use generative AI to try to adjust the language in your messages to make them more palatable.

The catch? Googles Magic Compose only works in its texting-focused Messages app, which means you cant easily use it for emails or, say, WhatsApp messages. (A similar tool for Gmail and the Chrome web browser, called Help Me Write, is not yet widely available.) People who buy Galaxy S24 phones, meanwhile, can use Samsungs version of this feature wherever they write text to switch between professional, casual, polite, and even emoji-filled variations of their original message.

What can I say? It works, though I cant imagine using it with any regularity. And in some ways, Samsungs Chat Assist tool backs down when its arguably needed most. In a few test emails where I used some very mild swears to allude to (fictional) workplace stress, Samsungs Chat Assist refused to help on the grounds that the messages contained inappropriate language.

The built-in voice recorder apps on Googles Pixels and Samsungs latest phones dont just record audio theyll turn those recordings into full-blown transcripts.

In theory, this should free you up from having to take so many notes while youre in a meeting or a lecture. And for the most part, these features work well enough after a few seconds, theyll dutifully produce readable, if sometimes clumsy, readouts of what youve just heard.

If all you need is a sort of rough draft to accompany your recordings, these automated transcription tools can be really helpful. They can differentiate between multiple speakers, which is handy when you need to skim through a conversation later. And Googles version will even give you a live transcription, which can be nice if youre the sort of person who keeps subtitles on all the time.

But whether youre using a Google phone or one of Samsungs, the resulting transcripts often need a bit of cleanup that means youll need to do a little extra work before you copy and paste the results into something important.

Who among us hasnt clicked into a Wikipedia page, or an article, or a recipe online that takes way too long to get to the point? As long as youre using the Chrome browser, Googles Pixel phones can scan those long webpages and boil them down into a set of high-level blurbs to give you the gist.

Sadly, Googles summaries are often too cursory to feel satisfying.

Samsungs phones can summarize your notes and transcriptions of your recordings, but it will only summarize things you find on the web if you use its homemade web browser. Honestly, that might be worth it: The quality of its summaries are much better than Googles. (You even have the option of switching to a more detailed version of the AI summary, which Google doesnt offer at all.)

Both versions of these summary tools come with a notable caveat, too: They wont summarize articles from websites that have paywalls, which includes just about every major U.S. newspaper.

Samsungs AI tools are free for now, but a tiny footnote on its website suggests the company may eventually charge customers to use them. Its not a done deal yet, but Samsung isnt ruling it out either.

We are committed to making Galaxy AI features available to as many of our users as possible, a spokesperson said in a statement. We will not be considering any changes to that direction before the end of 2025.

Google, meanwhile, already makes some of its AI-powered features exclusive to certain devices. (For example: A Video Boost tool for improving the look of your footage is only available on the companys higher-end Pixel 8 Pro phones.)

In the past, Google has made experimental versions of some AI tools like the Magic Compose feature available only to people who pay for the companys Google One subscription service. And more recently, Google has started charging people for access to its latest AI chatbot. For now, though, the company hasnt said anything either way about putting future AI phone features behind a paywall.

Google did not immediately respond to a request for comment.

Go here to read the rest:

Which AI phone features are useful and how well they actually work - The Washington Post

Posted in Ai

How a New Bipartisan Task Force Is Thinking About AI – TIME

On Tuesday, speaker of the House of Representatives Mike Johnson and Democratic leader Hakeem Jeffries launched a bipartisan Task Force on Artificial Intelligence.

Johnson, a Louisiana Republican, and Jeffries, a New York Democrat, each appointed 12 members to the Task Force, which will be chaired by Representative Jay Obernolte, a California Republican, and co-chaired by Representative Ted Lieu, a California Democrat. According to the announcement, the Task Force will produce a comprehensive report that will include guiding principles, forward-looking recommendations and bipartisan policy proposals developed in consultation with committees of jurisdiction.

Read More: The 3 Most Important AI Policy Milestones of 2023

Obernoltewho has a masters in AI from the University of California, Los Angeles and founded the video game company FarSight Studiosand Lieuwho studied computer science and political science at Stanford Universityare natural picks to lead the Task Force. But many of the members have expertise in AI too. Representative Bill Foster, a Democrat from Illinois, told TIME that he programmed neural networks in the 1990s as a physics Ph.D. working at a particle accelerator. Other members have introduced AI-related bills and held hearings on AI policy issues. And Representative Don Beyer, a 73-year old Democrat from Virginia, is pursuing a masters in machine learning at George Mason University alongside his Congressional responsibilities.

Since OpenAI released the wildly popular ChatGPT chatbot in November 2022, lawmakers around the world have rushed to get to grips with the societal implications of AI. In the White House, the Biden Administration has done all it can, by issuing a sweeping Executive Order in October 2023 intended to both ensure the U.S. benefits from AI while mitigating risks associated with the technology. In the Senate, Majority Leader Chuck Schumer announced a regulatory framework in June 2023, and has since been holding closed-door convenings between lawmakers, experts, and industry executives. Many Senators have been holding their own hearings, proposing alternative regulatory frameworks, and submitting bills to regulate AI.

Read More: How We Chose the TIME100 Most Influential People in AI

The House however, partly due to the turmoil following former Speaker Kevin McCarthys ouster in the fall, has lagged behind. The Task Force represents the lower houses most significant AI regulation step yet. Given that AI legislation will require the approval of both houses, the Task Forces report could shape the agenda for future AI laws. TIME spoke with eight Task Force members to understand their priorities.

Each member has a slightly different focus, informed by their backgrounds before entering politics and the different committees they sit on.

I recognize that if used responsibly, AI has the potential to enhance the efficiency of patient care, improve health outcomes, and lower costs, California Democrat Representative Ami Bera told TIME in an emailed statement. He trained as an internal medicine doctor, taught at the UC Davis School of Medicine and served as Sacramento Countys Chief Medical Officer before entering politics in 2013.

Meanwhile Colorado Democrat Representative Brittany Pettersen is focused on AIs impact on the banking system. As artificial intelligence continues to rapidly advance and become more widely available, it has the potential to impact everything from our election systems with the use of deep fakes, to bank fraud perpetuated by high-tech scams. Our policies must keep up to ensure we continue to lead in this space while protecting our financial system and our country at-large, said Petterson, who is a member of the House Financial Services bipartisan Working Group on AI and introduced a bill last year to address AI-powered bank scams, in an emailed statement.

The fact that the members each have different focuses and sit on different committees is, in part, a design choice, suggests Foster, the Illinois Democrat. At one point, I counted there were seven committees in Congress that claimed they were doing some part of Information Technology. Which means we have no committees because there's no one who's really got themselves and their staff focused on information technology full time, he says. The Task Force might allow the House to actually move the ball forward on policy issues that span committee jurisdictions, he hopes.

If some issues are particular to certain members, others are a shared source of concern. All eight of the Task Force members that TIME spoke with expressed fears over AI-generated deep fakes and their potential impact on elections.

Read More: Hackers Could Use ChatGPT to Target 2024 Elections

While no other issue commanded the same unanimity of interest, many themes recurred. Labor impacts from AI-powered hiring software and automation, algorithmic bias, AI in healthcare, data protection and privacyall of these issues were raised by multiple members of the Task Force in conversations with TIME.

Another topic raised by several members was the CREATE AI Act, a bill that would establish a National AI Research Resource (NAIRR) that would provide researchers with the tools they need to do cutting-edge research. A pilot of the NAIRR was recently launched by the National Science Foundationsomething instructed by President Bidens AI Executive Order.

Read More: The U.S. Just Took a Crucial Step Toward Democratizing AI Access

Representative Haley Stevens, a Democrat from Michigan, stressed the importance of maintaining technological superiority over China. Frankly, I want the United States of America, alongside our western counterparts, setting the rules for the road with artificial intelligence, not the Chinese Communist Party, she said. Representative Scott Franklin, a Republican from Florida, concurred, and argued that preventing industrial espionage would be especially important. We're putting tremendous resources against this challenge and investing in it, we need to make sure that we're protecting our intellectual property, he said.

Both Franklin and Beyer said the Task Force should devote some of its energies to considering existential risks from powerful future AI systems. As long as there are really thoughtful people, like Dr. Hinton or others, who worry about the existential risks of artificial intelligencethe end of humanityI don't think we can afford to ignore that, said Beyer. Even if there's just a one in a 1000 chance, one in a 1000 happens. We see it with hurricanes and storms all the time.

Other members are less worried. If we get the governance right on the little things, then it will also protect against that big risk, says Representative Sara Jacobs, a Democrat from California. And I think that there's so much focus on that big risk, that we're actually missing the harms and risks that are already being done by this technology.

The Task Force has yet to meet, and while none of its members were able to say when it might publish its report, they need to move quickly to have any hope of their work leading to federal legislation before the presidential election takes over Washington.

State lawmakers are not waiting for Congress to act. Earlier this month, Senator Scott Wiener, a Democrat who represents San Francisco and parts of San Mateo County in the California State Senate, introduced a bill that would seek to make powerful AI systems safe by, among other things, mandating safety tests. I would love to have one unified Federal law that effectively addresses AI safety issues, Wiener said in a recent interview with NPR. Congress has not passed such a law. Congress has not even come close to passing such a law.

But many of the Task Forces members argued that, while partisan gridlock has made it difficult for the House to pass anything in recent months, AI might be the one area where Congress can find common ground.

I've spoken with a number of my colleagues on both sides of the aisle on this, says Franklin, the Florida Republican. We're all kind of coming in at the same place, and we understand the seriousness of the issue. We may have disagreement on exactly how to address [the issues]. And that's why we need to get together and have those conversations.

The fact that it's bipartisan and bicameral makes me very optimistic that we'll be able to get meaningful things done in this calendar year, says Beyer, the Virginia Democrat. And put it on Joe Biden's desk.

Original post:

How a New Bipartisan Task Force Is Thinking About AI - TIME

Posted in Ai

China’s Rush to Dominate A.I. Comes With a Twist: It Depends on U.S. Technology – The New York Times

In November, a year after ChatGPTs release, a relatively unknown Chinese start-up leaped to the top of a leaderboard that judged the abilities of open-source artificial intelligence systems.

The Chinese firm, 01.AI, was only eight months old but had deep-pocketed backers and a $1 billion valuation and was founded by a well-known investor and technologist, Kai-Fu Lee. In interviews, Mr. Lee presented his A.I. system as an alternative to options like Metas generative A.I. model, called LLaMA.

There was just one twist: Some of the technology in 01.AIs system came from LLaMA. Mr. Lees start-up then built on Metas technology, training its system with new data to make it more powerful.

The situation is emblematic of a reality that many in China openly admit. Even as the country races to build generative A.I., Chinese companies are relying almost entirely on underlying systems from the United States. China now lags the United States in generative A.I. by at least a year and may be falling further behind, according to more than a dozen tech industry insiders and leading engineers, setting the stage for a new phase in the cutthroat technological competition between the two nations that some have likened to a cold war.

Chinese companies are under tremendous pressure to keep abreast of U.S. innovations, said Chris Nicholson, an investor with the venture capital firm Page One Ventures who focuses on A.I. technologies. The release of ChatGPT was yet another Sputnik moment that China felt it had to respond to.

Jenny Xiao, a partner at Leonis Capital, an investment firm that focuses on A.I.-powered companies, said the A.I. models that Chinese companies build from scratch arent very good, leading to many Chinese firms often using fine-tuned versions of Western models. She estimated China was two to three years behind the United States in generative A.I. developments.

We are having trouble retrieving the article content.

Please enable JavaScript in your browser settings.

Thank you for your patience while we verify access. If you are in Reader mode please exit andlog intoyour Times account, orsubscribefor all of The Times.

Thank you for your patience while we verify access.

Already a subscriber?Log in.

Want all of The Times?Subscribe.

See the article here:

China's Rush to Dominate A.I. Comes With a Twist: It Depends on U.S. Technology - The New York Times

Posted in Ai

Google to fix AI picture bot after ‘woke’ criticism – BBC.com

Google and parent company Alphabet Inc's headquarters in Mountain View, California

Google is racing to fix its new AI-powered tool for creating pictures, after claims it was over-correcting against the risk of being racist.

Users said the firm's Gemini bot supplied images depicting a variety of genders and ethnicities even when doing so was historically inaccurate.

For example, a prompt seeking images of America's founding fathers turned up women and people of colour.

The company said its tool was "missing the mark".

"Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here," said Jack Krawczyk, senior director for Gemini Experiences.

"We're working to improve these kinds of depictions immediately," he added.

Accept and continue

It is not the first time AI has stumbled over real-world questions about diversity.

For example, Google infamously had to apologise almost a decade ago after its photos app labelled a photo of a black couple as "gorillas".

Rival AI firm, OpenAI was also accused of perpetuating harmful stereotypes, after users found its Dall-E image generator responded to queries for chief executive, for example, with results dominated by pictures of white men.

Google, which is under pressure to prove it is not falling behind in AI developments, released its latest version of Gemini last week.

The bot creates pictures in response to written queries.

It quickly drew critics, who accused the company of training the bot to be laughably woke.

Accept and continue

"It's embarrassingly hard to get Google Gemini to acknowledge that white people exist," computer scientist Debarghya Das, wrote.

"Come on," Frank J Fleming, an author and humourist who writes for outlets including the right-wing PJ Media, in response to the results he received asking for an image of a Viking.

The claims picked up speed in right-wing circles in the US, where many big tech platforms are already facing backlash for alleged liberal bias.

Mr Krawczyk said the company took representation and bias seriously and wanted its results to reflect its global user base.

"Historical contexts have more nuance to them and we will further tune to accommodate that," he wrote on X, formerly Twitter, where users were sharing the dubious results they had received.

"This is part of the alignment process - iteration on feedback. Thank you and keep it coming!"

See the rest here:

Google to fix AI picture bot after 'woke' criticism - BBC.com

Posted in Ai

The Samsung Galaxy S23 series will get AI features in late March – The Verge

Right now, you need a Galaxy S24 phone to use the very latest AI features from Samsung, but thats changing next month. In late March, Samsung will extend Galaxy AI features to the S23 series including the S23 FE as well as recent foldables and tablets as part of the One UI 6.1 update. Its all free for now, but after 2025 you might have to pay up.

The Galaxy Z Fold 5 and Z Flip 5 are slated to get the update, as well as the Galaxy Tab S9, S9 Plus, and S9 Ultra. If Samsung wants to ship Galaxy AI to 100 million phones this year like it says it will, thats a solid start. The One UI 6.1 update will include the much-touted AI features on the S24 series, including live translation capabilities, generative photo and video editing, and Googles Circle to Search feature. This suite of features includes a mix of on- and off-device processing, just like it does on the S24 series.

An older phone learning new tricks is unequivocally a good thing, even if Galaxy AI is a little bit of a mixed bag right now. But my overall impression is that these features do occasionally come in handy, and when they go sideways theyre mostly harmless. One UI 6.1 will also include a handful of useful non-AI updates, such as lockscreen widgets and the new, unified Quick Share.

The rest is here:

The Samsung Galaxy S23 series will get AI features in late March - The Verge

Posted in Ai

Samsung’s Galaxy AI Is Coming to the Galaxy S23, Foldables and Tablets Next Month – CNET

Samsung is bringing its suite of Galaxy AI features to the Galaxy S23 lineup, as well as the Galaxy S23 FE,Galaxy Z Fold 5, Galaxy Z Flip 5and Galaxy Tab S9 family starting in March. The move shows that Samsung is eager to make AI a bigger part of all its high-profile mobile products, not just its newest phones.

Galaxy AI is scheduled to arrive in a software update in late March as part of Samsung's goal to bring the features to more than 100 million Galaxy users this year, T.M. Roh, president and head of Samsung's mobile experience business, said in a press release. Samsung previously said Galaxy AI would come to the Galaxy S23 lineup, but it hadn't disclosed the timing until now.

Read more: Best Samsung Phone For 2024

Galaxy AI is an umbrella term that refers to a collection of new AI-powered features that debuted on the Galaxy S24 series in January. Some examples of Galaxy AI features include Generative Edit, which lets you move or manipulate objects in photos; Chat Assist, for rewriting texts in a different tone or translating them into other languages; Circle to Search, which lets you launch a Google search for any object on screen just by circling it; and Live Translate, a tool that translates phone calls in real time.

Samsung and other tech companies have been vocal about their plans to infuse smartphones with generative AI, or AI that can create content or responses when prompted based on training data. It's the same flavor of AI that powers ChatGPT, and device makers have been adamant about adding it to their own products.

Although AI has played an important role in smartphones for years, companies like Samsung and Google, which collaborated to develop Galaxy AI, only recently became focused on bringing generative AI to phones. For Samsung, Galaxy AI is the culmination of those efforts.

Samsung's AI features are also likely coming to wearables next, as the company hinted Tuesday in a blog post authored by Roh.

"In the near future, select Galaxy wearables will use AI to enhance digital health and unlock a whole new era of expanded, intelligent health experiences," he said in the post.

Editors' note: CNET is using an AI engine to help create some stories. For more, seethis post.

Watch this: See How the Galaxy S24 Ultra's Camera Compares to the Pixel 8 Pro's

Excerpt from:

Samsung's Galaxy AI Is Coming to the Galaxy S23, Foldables and Tablets Next Month - CNET

Posted in Ai

AI agents like Rabbit aim to book your vacation and order your Uber – NPR

The AI-powered Rabbit R1 device is seen at Rabbit Inc.'s headquarters in Santa Monica, California. The gadget is meant to serve as a personal assistant fulfilling tasks such as ordering food on DoorDash for you, calling an Uber or booking your family's vacation. Stella Kalinina for NPR hide caption

The AI-powered Rabbit R1 device is seen at Rabbit Inc.'s headquarters in Santa Monica, California. The gadget is meant to serve as a personal assistant fulfilling tasks such as ordering food on DoorDash for you, calling an Uber or booking your family's vacation.

ChatGPT can give you travel ideas, but it won't book your flight to Cancn.

Now, artificial intelligence is here to help us scratch items off our to-do lists.

A slate of tech startups are developing products that use AI to complete real-world tasks.

Silicon Valley watchers see this new crop of "AI agents" as being the next phase of the generative AI craze that took hold with the launch of chatbots and image generators.

Last year, Sam Altman, the CEO of OpenAI, the maker of ChatGPT, nodded to the future of AI errand-helpers at the company's developer conference.

"Eventually, you'll just ask a computer for what you need, and it'll do all of these tasks for you," Altman said.

One of the most hyped companies doing this is called Rabbit. It has developed a device called the Rabbit R1. Chinese entrepreneur Jesse Lyu launched it at this year's CES, the annual tech trade show, in Las Vegas.

It's a bright orange gadget about half the size of an iPhone. It has a button on the side that you push and talk into like a walkie-talkie. In response to a request, an AI-powered rabbit head pops up and tries to fulfill whatever task you ask.

Chatbots like ChatGPT rely on technology known as a large language model, and Rabbit says it uses both that system and a new type of AI it calls a "large action model." In basic terms, it learns how people use websites and apps and mimics these actions after a voice prompt.

It won't just play a song on Spotify, or start streaming a video on YouTube, which Siri and other voice assistants can already do, but Rabbit will order DoorDash for you, call an Uber, book your family's vacation. And it makes suggestions after learning a user's tastes and preferences.

Storing potentially dozens or hundreds of a person's passwords raises instant questions about privacy. But Rabbit claims it saves user credentials in a way that makes it impossible for the company, or anyone else, to access someone's personal information. The company says it will not sell or share user data with third parties "without your formal, explicit permission."

A Rabbit employee demonstrates the company's Rabbit R1 device. The company says more than 80,000 people have preordered the device for $199. Stella Kalinina for NPR hide caption

A Rabbit employee demonstrates the company's Rabbit R1 device. The company says more than 80,000 people have preordered the device for $199.

The company, which says more than 80,000 people have preordered the Rabbit R1, will start shipping the devices in the coming months.

"This is the first time that AI exists in a hardware format," said Ashley Bao, a spokeswoman for Rabbit at the company's Santa Monica, Calif., headquarters. "I think we've all been waiting for this moment. We've had our Alexa. We've had our smart speakers. But like none of them [can] perform tasks from end to end and bring words to action for you."

Excitement in Silicon Valley over AI agents is fueling an increasingly crowded field of gizmos and services. Google and Microsoft are racing to develop products that harness AI to automate busywork. The web browser Arc is building a tool that uses an AI agent to surf the web for you. Another startup, called Humane, has developed a wearable AI pin that projects a display image on a user's palm. It's supposed to assist with daily tasks and also make people pick up their phones less frequently.

Similarly, Rabbit claims its device will allow people to get things done without opening apps (you log in to all your various apps on a Rabbit web portal, so it uses your credentials to do things on your behalf).

To work, the Rabbit R1 has to be connected to Wi-Fi, but there is also a SIM card slot, in case people want to buy a separate data plan just for the gadget.

When asked why anyone would want to carry around a separate device just to do something your smartphone could do in 30 seconds, Rabbit spokesman Ryan Fenwick argued that using apps to place orders and make requests all day takes longer than we might imagine.

"We are looking at the entire process, end to end, to automate as much as possible and make these complex actions much quicker and much more intuitive than what's currently possible with multiple apps on a smartphone," Fenwick said.

ChatGPT's introduction in late 2022 set off a frenzy at companies in many industries trying to ride the latest tech industry wave. That chatbot exuberance is about to be transferred to the world of gadgets, said Duane Forrester, an analyst at the firm Yext.

Google and Microsoft are racing to develop products that harness AI to automate busywork, which might make other AI-powered assistants obsolete. Stella Kalinina for NPR hide caption

Google and Microsoft are racing to develop products that harness AI to automate busywork, which might make other AI-powered assistants obsolete.

"Early on, with the unleashing of AI, every single product or service attached the letters "A" and "I" to whatever their product or service was," Forrester said. "I think we're going to end up seeing a version of that with hardware as well."

Forrester said an AI walkie-talkie might quickly become obsolete when companies like Apple and Google make their voice assistants smarter with the latest AI innovations.

"You don't need a different piece of hardware to accomplish this," he said. "What you need is this level of intelligence and utility in our current smartphones, and we'll get there eventually."

Researchers are worried that AI-powered personal assistant technology could eventually go wrong. Stella Kalinina for NPR hide caption

Researchers are worried that AI-powered personal assistant technology could eventually go wrong.

Researchers are worried about where such technology could eventually go awry.

The AI assistant purchasing the wrong nonrefundable flight, for instance, or sending a food order to someone else's house are among potential snafus that analysts have mentioned.

A 2023 paper by the Center for AI Safety warned against AI agents going rogue. It said if an AI agent is given an "open-ended goal" say, maximize a person's stock market profits without being told how to achieve that goal, it could go very wrong.

"We risk losing control over AIs as they become more capable. AIs could optimize flawed objectives, drift from their original goals, become power-seeking, resist shutdown, and engage in deception. We suggest that AIs should not be deployed in high-risk settings, such as by autonomously pursuing open-ended goals or overseeing critical infrastructure, unless proven safe," according to a summary of the paper.

At Rabbit's Santa Monica office, Rabbit R1 Creative Director Anthony Gargasz pitches the device as a social media reprieve. Use it to make a doctor's appointment or book a hotel without being sucked into an app's feed for hours.

"Absolutely no doomscrolling on the Rabbit R1," said Gargasz. "The scroll wheel is for intentional interaction."

His colleague Ashley Bao added that the whole point of the gadget is to "get things done efficiently." But she acknowledged there's a cutesy factor too, comparing it to the keychain-size electronic pets that were popular in the 1990s.

"It's like a Tamagotchi but with AI," she said.

Excerpt from:

AI agents like Rabbit aim to book your vacation and order your Uber - NPR

Posted in Ai

Google Just Released Two Open AI Models That Can Run on Laptops – Singularity Hub

Last year, Google united its AI units in Google DeepMind and said it planned to speed up product development in an effort to catch up to the likes of Microsoft and OpenAI. The stream of releases in the last few weeks follows through on that promise.

Two weeks ago, Google announced the release of its most powerful AI to date, Gemini Ultra, and reorganized its AI offerings, including its Bard chatbot, under the Gemini brand. A week later, they introduced Gemini Pro 1.5, an updated Pro model that largely matches Gemini Ultrasperformance and also includes an enormous context windowthe amount of data you can prompt it withfor text, images, and audio.

Today, the company announced two new models. Going by the name Gemma, the models are much smaller than Gemini Ultra, weighing in at 2 and 7 billion parameters respectively. Google said the models are strictly text-basedas opposed to multimodal models that are trained on a variety of data, including text, images, and audiooutperform similarly sized models, and can be run on a laptop, desktop, or in the cloud. Before training, Google stripped datasets of sensitive data like personal information. They also fine-tuned and stress-tested the trained models pre-release to minimize unwanted behavior.

The models were built and trained with the same technology used in Gemini, Google said, but in contrast, theyre being released under an open license.

That doesnt mean theyre open-source. Rather, the company is making the model weights available so developers can customize and fine-tune them. Theyre also releasing developer tools to help keep applications safe and make them compatible with major AI frameworks and platforms. Google says the models can be employed for responsible commercial usage and distributionas defined in the terms of usefor organizations of any size.

If Gemini is aimed at OpenAI and Microsoft, Gemma likely has Meta in mind. Meta is championing a more open model for AI releases, most notably for its Llama 2 large language model. Though sometimes confused for an open-source model, Meta has not released the dataset or code used to train Llama 2. Other more open models, like the Allen Institute for AIs (AI2) recent OLMo models, do include training data and code. Googles Gemma release is more akin to Llama 2 than OLMo.

[Open models have] become pretty pervasive now in the industry, Googles Jeanine Banks said in a press briefing. And it often refers to open weights models, where there is wide access for developers and researchers to customize and fine-tune models but, at the same time, the terms of usethings like redistribution, as well as ownership of those variants that are developedvary based on the models own specific terms of use. And so we see some difference between what we would traditionally refer to as open source and we decided that it made the most sense to refer to our Gemma models as open models.

Still, Llama 2 has been influential in the developer community, and open models from the likes of French startup, Mistral, and others are pushing performance toward state-of-the-art closed models, like OpenAIs GPT-4. Open models may make more sense in enterprise contexts, where developers can better customize them. Theyre also invaluable for AI researchers working on a budget. Google wants to support such research with Google Cloud credits. Researchers can apply for up to $500,000 in credits toward larger projects.

Just how open AI should be is still a matter of debate in the industry.

Proponents of a more open ecosystem believe the benefits outweigh the risks. An open community, they say, can not only innovate at scale, but also better understand, reveal, and solve problems as they emerge. OpenAI and others have argued for a more closed approach, contending the more powerful the model, the more dangerous it could be out in the wild. A middle road might allow an open AI ecosystem but more tightly regulate it.

Whats clear is both closed and open AI are moving at a quick pace. We can expect more innovation from big companies and open communities as the year progresses.

Image Credit: Google

Continue reading here:

Google Just Released Two Open AI Models That Can Run on Laptops - Singularity Hub

Posted in Ai

Intel Launches World’s First Systems Foundry Designed for the AI Era – Investor Relations :: Intel Corporation (INTC)

Announced at Intel Foundry Direct Connect, Intels extended process technology roadmap adds Intel 14A to the companys leading-edge node plan, in addition to several specialized node evolutions and new Intel Foundry Advanced System Assembly and Test capabilities. Intel also affirmed that its ambitious five-nodes-in-four-years process roadmap remains on track and will deliver the industrys first backside power solution. (Credit: Intel Corporation)

Intel announces expanded process roadmap, customers and ecosystem partners to deliver on ambition to be the No. 2 foundry by 2030.

Company hosts Intel Foundry event featuring U.S. Commerce Secretary Gina Raimondo, Arm CEO Rene Haas and Open AI CEO Sam Altman and others.

NEWS HIGHLIGHTS

SAN JOSE, Calif.--(BUSINESS WIRE)-- Intel Corp. (INTC) today launched Intel Foundry as a more sustainable systems foundry business designed for the AI era and announced an expanded process roadmap designed to establish leadership into the latter part of this decade. The company also highlighted customer momentum and support from ecosystem partners including Synopsys, Cadence, Siemens and Ansys who outlined their readiness to accelerate Intel Foundry customers chip designs with tools, design flows and IP portfolios validated for Intels advanced packaging and Intel 18A process technologies.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20240221189319/en/

Announced at Intel Foundry Direct Connect, Intels extended process technology roadmap adds Intel 14A to the companys leading-edge node plan, in addition to several specialized node evolutions and new Intel Foundry Advanced System Assembly and Test capabilities. Intel also affirmed that its ambitious five-nodes-in-four-years process roadmap remains on track and will deliver the industrys first backside power solution. (Credit: Intel Corporation)

The announcements were made at Intels first foundry event, Intel Foundry Direct Connect, where the company gathered customers, ecosystem companies and leaders from across the industry. Among the participants and speakers were U.S. Secretary of Commerce Gina Raimondo, Arm CEO Rene Haas, Microsoft CEO Satya Nadella, OpenAI CEO Sam Altman and others.

More: Intel Foundry Direct Connect (Press Kit)

AI is profoundly transforming the world and how we think about technology and the silicon that powers it, said Intel CEO Pat Gelsinger. This is creating an unprecedented opportunity for the worlds most innovative chip designers and for Intel Foundry, the worlds first systems foundry for the AI era. Together, we can create new markets and revolutionize how the world uses technology to improve peoples lives.

Process Roadmap Expands Beyond 5N4Y

Intels extended process technology roadmap adds Intel 14A to the companys leading-edge node plan, in addition to several specialized node evolutions. Intel also affirmed that its ambitious five-nodes-in-four-years (5N4Y) process roadmap remains on track and will deliver the industrys first backside power solution. Company leaders expect Intel will regain process leadership with Intel 18A in 2025.

The new roadmap includes evolutions for Intel 3, Intel 18A and Intel 14A process technologies. It includes Intel 3-T, which is optimized with through-silicon vias for 3D advanced packaging designs and will soon reach manufacturing readiness. Also highlighted are mature process nodes, including new 12 nanometer nodes expected through the joint development with UMC announced last month. These evolutions are designed to enable customers to develop and deliver products tailored to their specific needs. Intel Foundry plans a new node every two years and node evolutions along the way, giving customers a path to continuously evolve their offerings on Intels leading process technology.

Intel also announced the addition of Intel Foundry FCBGA 2D+ to its comprehensive suite of ASAT offerings, which already include FCBGA 2D, EMIB, Foveros and Foveros Direct.

Microsoft Design on Intel 18A Headlines Customer Momentum

Customers are supporting Intels long-term systems foundry approach. During Pat Gelsingers keynote, Microsoft Chairman and CEO Satya Nadella stated that Microsoft has chosen a chip design it plans to produce on the Intel 18A process.

We are in the midst of a very exciting platform shift that will fundamentally transform productivity for every individual organization and the entire industry, Nadella said. To achieve this vision, we need a reliable supply of the most advanced, high-performance and high-quality semiconductors. Thats why we are so excited to work with Intel Foundry, and why we have chosen a chip design that we plan to produce on Intel 18A process.

Intel Foundry has design wins across foundry process generations, including Intel 18A, Intel 16 and Intel 3, along with significant customer volume on Intel Foundry ASAT capabilities, including advanced packaging.

In total, across wafer and advanced packaging, Intel Foundrys expected lifetime deal value is greater than $15 billion.

IP and EDA Vendors Declare Readiness for Intel Process and Packaging Designs

Intellectual property and electronic design automation (EDA) partners Synopsys, Cadence, Siemens, Ansys, Lorentz and Keysight disclosed tool qualification and IP readiness to enable foundry customers to accelerate advanced chip designs on Intel 18A, which offers the foundry industrys first backside power solution. These companies also affirmed EDA and IP enablement across Intel node families.

At the same time, several vendors announced plans to collaborate on assembly technology and design flows for Intels embedded multi-die interconnect bridge (EMIB) 2.5D packaging technology. These EDA solutions will ensure faster development and delivery of advanced packaging solutions for foundry customers.

Intel also unveiled an "Emerging Business Initiative" that showcases a collaboration with Arm to provide cutting-edge foundry services for Arm-based system-on-chips (SoCs). This initiative presents an important opportunity for Arm and Intel to support startups in developing Arm-based technology and offering essential IP, manufacturing support and financial assistance to foster innovation and growth.

Systems Approach Differentiates Intel Foundry in the AI Era

Intels systems foundry approach offers full-stack optimization from the factory network to software. Intel and its ecosystem empower customers to innovate across the entire system through continuous technology improvements, reference designs and new standards.

Stuart Pann, senior vice president of Intel Foundry at Intel said, We are offering a world-class foundry, delivered from a resilient, more sustainable and secure source of supply, and complemented by unparalleled systems of chips capabilities. Bringing these strengths together gives customers everything they need to engineer and deliver solutions for the most demanding applications.

Global, Resilient, More Sustainable and Trusted Systems Foundry

Resilient supply chains must also be increasingly sustainable, and today Intel shared its goal of becoming the industrys most sustainable foundry. In 2023, preliminary estimates show that Intel used 99% renewable electricity in its factories worldwide. Today, the company redoubled its commitment to achieving 100% renewable electricity worldwide, net-positive water and zero waste to landfills by 2030. Intel also reinforced its commitment to net-zero Scope 1 and Scope 2 GHG emissions by 2040 and net-zero upstream Scope 3 emissions by 2050.

Forward-Looking Statements

This release contains forward-looking statements, including with respect to Intels:

Such statements involve many risks and uncertainties that could cause our actual results to differ materially from those expressed or implied, including those associated with:

All information in this press release reflects Intel management views as of the date hereof unless an earlier date is specified. Intel does not undertake, and expressly disclaims any duty, to update such statements, whether as a result of new information, new developments, or otherwise, except to the extent that disclosure may be required by law.

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moores Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intels innovations, go to newsroom.intel.com and intel.com.

Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

View source version on businesswire.com: https://www.businesswire.com/news/home/20240221189319/en/

John Hipsher 1-669-223-2416 john.hipsher@intel.com

Robin Holt 1-503-616-1532 robin.holt@intel.com

Source: Intel Corp.

Released Feb 21, 2024 11:30 AM EST

More here:

Intel Launches World's First Systems Foundry Designed for the AI Era - Investor Relations :: Intel Corporation (INTC)

Posted in Ai

Energy companies tap AI to detect defects in an aging grid – E&E News by POLITICO

A helicopter loaded with cameras and sensors sweeps over a utilitys high-voltage transmission line in the southeastern United States.

High-resolution cameras record images of cables, connections and towers. Artificial intelligence tools search for cracks and flaws that could be overlooked by the naked eye, the worn-out component that could spark the next wildfire.

We have trained a lot of AI models to recognize defects, said Marion Baroux, a Germany-based business developer for Siemens Energy, which built the helicopter scanning and analysis technology.

Drones have been inspecting power lines for a decade. Today, the rapid advancement of AI and machine-learning technology has opened the door to faster detection of potential failures in aging power lines, guiding transmission owners on how to upgrade the grid to meet clean energy and extreme weather challenges.

Automating inspections is a first step in a still uncharted future for AI adoption in the electric power sector, echoing the high-stakes international debate over the risks and potential of AI technology.

President Joe Bidens executive order on AI last October emphasized caution. Safety requires robust, reliable, repeatable, and standardized evaluations of AI systems, the order said, as well as policies, institutions, and as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use.

There is also a case for accelerating AIs adoption, according to Department of Energy experts speaking at a recent conference.

Balancing supply and demand on the grid is becoming more complex as renewable generation replaces fossil power plants.

AI has the potential to help us operate the grid with much higher percentages of renewables, said Andrew Bochman, senior grid strategist at the Idaho National Laboratory.

But first, AI must earn the confidence of engineers who are responsible for ensuring utilities face as few risks as possible.

Obviously, there are a lot of technical concerns about how these systems work and what we can trust them to do, said Christopher Lamb, a senior cybersecurity researcher at Sandia National Laboratories in New Mexico.

There are definitely risks associated with AI, said Colin Ponce, a computational mathematician at Lawrence Livermore National Laboratory in California. A lot of utilities have a certain amount of hesitation about it because they dont really understand what it will do.

The need for transmission owners and operators to find and prevent breaks in aging power line components was driven home tragically in Californias fatal Camp Fire in 2018.

A 99-year-old metal hook supporting a high-voltage cable on a Pacific Gas & Electric power line wore through, allowing the line to hit the tower causing a short-circuit whose sparks ignited the fire. The fire claimed 85 lives.

Baroux said Siemens Energys system may or may not have prevented the Camp Fire. But the purpose is to find the transmission line components like the failed PG&E hook that are most in need of replacement.

Another California catastrophe demonstrates a case for that capability.

On July 13, 2021, a California grid trouble man driving through Californias rugged, remote Sierra Nevada region spotted a 65-foot-tall Douglas fir that had fallen onto a PG&E power line. According to his court testimony there was nothing he could do to prevent the spread of what would be called the Dixie Fire, which burned for three months, consuming nearly 1 million acres.

Faced with the threat of more impacts between dead or dying trees and its lines, PG&E has received state regulators permission to bury 1,230 miles of its power lines at a cost of roughly $3 million per mile.

The flying inspections produce thousands of gigabytes of data per mile, which would overwhelm human investigators. We will run AI models on data, then the customer-operators will review these results to look for the most urgent actions to take. The human remains the decisionmaking, always, she said. But this saves them time.

Siemens Energy declined to discuss the systems price tag and would not identify the utility in the Southeast using it. The service is in use at the E.ON Group energy operations in Germany, in French grid operator RTE, and TenneT, which runs the Netherlands network, a Siemens Energy spokesperson said.

In addition to the helicopters camera array, its instrument pod also carries sensors that detect wasteful or damaging electrical current leaks in lines. Lidar distance measuring radar scanners are also aboard to create 3D views of towers and nearby vegetation, alerting operators to potential threats from tree impacts with lines.

The possibility of applying AI and other advanced computing solutions to grid operations is the goal of another DOE project called HIPPO, for high-performance power grid optimization. HIPPOs lead partners are the Midcontinent Independent System Operator (MISO); DOEs Pacific Northwest National Laboratory; General Electric; and Gurobi Optimization, a Beaverton, Oregon, technology firm.

HIPPO has designed high-speed computing algorithms employing machine learning tools to improve the speed and accuracy of power plant scheduling decisions by MISO, the grid operator in 15 central U.S. states and Canadas Manitoba province.

Every day, MISO operators must make decisions about which electricity generating resources will run each hour of the following day, based on the generators competing power prices and transmission costs. The growth of wind and solar power, microgrids, and customers rooftop solar power and electric vehicle charging are making decisions harder as forecasting weather impacts on the grid is also more challenging.

HIPPOs heavier computing power and complex calculations produce answers 35 times faster than current systems, allowing greener and more sustainable grid operations, MISO reported last year.

One of the advantage of HIPPO is its flexibility, said Feng Pan, PNNL research scientist and the projects principal investigator. In addition to scheduling generation and confirming grid stability, HIPPO will enable operators to run what-if scenarios involving battery storage customer-based resources, he said in an email.

HIPPO is easing its way into the MISO operation. The project, launched with a 2015 grant from DOEs Advanced Projects Research Agency-Energy, is not yet scheduled for full deployment. It will assist operators, not take over, Pan said.

For AI systems to solve problems, they will need trusted data about grid operations, said Lamb, the senior researcher at Sandia.

Are there biases that could get cooked into algorithms that could create serious risks to operation reliability, and if so, what might they be? Lamb asked.

Data issues arent waiting for AI. Even without the complications AI may bring, operators of the principal Texas grid were dangerously in the dark during Winter Storm Uri in 2021.

If an adversary can insert data into your [computer] training pipeline, there are ways they can poison your data set and cause a variety of problems, Lawrence Livermores Ponce said, adding that designing defenses against rogue data threats is a major priority.

Ponce and Lamb came down on AIs side in the conference.

There is a bunch of hype around AI that is really undeserved, Lamb said. Operators understand their businesses. They are going to be making responsible decisions, and frankly I trust them to do so.

Grid operators should be able to maximize benefits and minimize risks provided they invest wisely in safety technology, he said. It doesnt mean the risks will be zero.

If we get too scared of AI and completely put the brakes on, I fear that will hinder our ability to respond to real threats and significant risk we already have evidence for, like climate change, Ponce said.

Theres a lot of doom and a lot of gloom about the application of AI, Lamb said. Dont be scared.

Read the original post:

Energy companies tap AI to detect defects in an aging grid - E&E News by POLITICO

Posted in Ai

Tor Books Criticized for Use of AI-Generated Art in ‘Gothikana’ Cover Design – Publishers Weekly

A number of readers are calling out Tor Books over the cover art of Gothikana by RuNyx, published by Tor's romance imprint Bramble on January 23, which incorporates AI-generated assets in its design.

On February 9, BookTok influencer @emmaskies identified two Adobe Stock images that had been used for the book's cover, both of which include the phrase "Generative AI" in their titles and are flagged on the Adobe Stock website as "generated with AI."

"We cannot allow AI-generated anything to infiltrate creative spaces because they are not just going to stop at covers," says @emmaskies in the video. She goes on to suggest that the use of such images is a slippery slope, imagining a publishing industry in the near future in which AI-generated images supplant cover artists, AI language models replace editorial staff, and AI models make acquisition judgements.

The video has since garnered more than 64,000 views. Her initial analysis of the cover, in which she alleged but had not yet confirmed the use of AI-generated images, received more than 300,000 views and 35,000 likes.

This is not the first time that Tor has attracted criticism online for using AI-generated assets in book cover designs. When Tor unveiled the cover of Christopher Paolini's sci-fi thriller Fractal Noise in November 2022, the publisher was quickly met with criticism over the use of an AI-generated asset, which had been posted to Shutterstock and created with Midjourney. The book was subsequently review-bombed on Goodreads.

"During the process of creating this cover, we licensed an image from a reputable stock house. We were not aware that the image may have been created by AI," Tor Books said in a statement posted to X on December 15. "Our in-house designer used the licensed image to create the cover, which was presented to Christopher for approval." Tor decided to move ahead with the cover "due to production constraints."

In response to the statement, Eisner Awardwinning illustrator Trung Le Nguyen commented, "I might not be able to judge a book by its cover, but I sure as hell will judge its publisher."

Tor is not the only publisher catch heat for using AI-generated art on book covers. Last spring, the Verge reported on the controversy over the U.K. paperback edition of Sarah J. Maas's House of Earth and Blood, published by Bloomsbury, which credited Adobe Stock for the illustration of a wolf on the book's cover; the illustration had been marked as AI-generated on Adobe's website. Bloomsbury later claimed that its in-house design team was "unaware" that the licensed image had been created by AI.

Gothikana was originally self-published by author RuNyx in June 2021, and was reissued by Bramble in a hardcover edition featuring sprayed edges, a foil case stamp, and detailed endpapers. Bramble did not respond to PW's request for comment by press time.

See the original post here:

Tor Books Criticized for Use of AI-Generated Art in 'Gothikana' Cover Design - Publishers Weekly

Posted in Ai

Generative AI’s environmental costs are soaring and mostly secret – Nature.com

Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years that the artificial intelligence (AI) industry is heading for an energy crisis. Its an unusual admission. At the World Economic Forums annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. Theres no way to get there without a breakthrough, he said.

Im glad he said it. Ive seen consistent downplaying and denial about the AI industrys environmental costs since I started publishing about them in 2018. Altmans admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.

So what energy breakthrough is Altman banking on? Not the design and deployment of more sustainable AI systems but nuclear fusion. He has skin in that game, too: in 2021, Altman started investing in fusion company Helion Energy in Everett, Washington.

Is AI leading to a reproducibility crisis in science?

Most experts agree that nuclear fusion wont contribute significantly to the crucial goal of decarbonizing by mid-century to combat the climate crisis. Helions most optimistic estimate is that by 2029 it will produce enough energy to power 40,000 average US households; one assessment suggests that ChatGPT, the chatbot created by OpenAI in San Francisco, California, is already consuming the energy of 33,000 homes. Its estimated that a search driven by generative AI uses four to five times the energy of a conventional web search. Within years, large AI systems are likely to need as much energy as entire nations.

And its not just energy. Generative AI systems need enormous amounts of fresh water to cool their processors and generate electricity. In West Des Moines, Iowa, a giant data-centre cluster serves OpenAIs most advanced model, GPT-4. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the districts water. As Google and Microsoft prepared their Bard and Bing large language models, both had major spikes in water use increases of 20% and 34%, respectively, in one year, according to the companies environmental reports. One preprint1 suggests that, globally, the demand for water for AI could be half that of the United Kingdom by 2027. In another2, Facebook AI researchers called the environmental effects of the industrys pursuit of scale the elephant in the room.

Rather than pipe-dream technologies, we need pragmatic actions to limit AIs ecological impacts now.

Theres no reason this cant be done. The industry could prioritize using less energy, build more efficient models and rethink how it designs and uses data centres. As the BigScience project in France demonstrated with its BLOOM model3, it is possible to build a model of a similar size to OpenAIs GPT-3 with a much lower carbon footprint. But thats not whats happening in the industry at large.

It remains very hard to get accurate and complete data on environmental impacts. The full planetary costs of generative AI are closely guarded corporate secrets. Figures rely on lab-based studies by researchers such as Emma Strubell4 and Sasha Luccioni3; limited company reports; and data released by local governments. At present, theres little incentive for companies to change.

There are holes in Europes AI Act and researchers can help to fill them

But at last, legislators are taking notice. On 1 February, US Democrats led by Senator Ed Markey of Massachusetts introduced the Artificial Intelligence Environmental Impacts Act of 2024. The bill directs the National Institute for Standards and Technology to collaborate with academia, industry and civil society to establish standards for assessing AIs environmental impact, and to create a voluntary reporting framework for AI developers and operators. Whether the legislation will pass remains uncertain.

Voluntary measures rarely produce a lasting culture of accountability and consistent adoption, because they rely on goodwill. Given the urgency, more needs to be done.

To truly address the environmental impacts of AI requires a multifaceted approach including the AI industry, researchers and legislators. In industry, sustainable practices should be imperative, and should include measuring and publicly reporting energy and water use; prioritizing the development of energy-efficient hardware, algorithms, and data centres; and using only renewable energy. Regular environmental audits by independent bodies would support transparency and adherence to standards.

Researchers could optimize neural network architectures for sustainability and collaborate with social and environmental scientists to guide technical designs towards greater ecological sustainability.

Finally, legislators should offer both carrots and sticks. At the outset, they could set benchmarks for energy and water use, incentivize the adoption of renewable energy and mandate comprehensive environmental reporting and impact assessments. The Artificial Intelligence Environmental Impacts Act is a start, but much more will be needed and the clock is ticking.

K.C. is employed by both USC Annenberg, and Microsoft Research, which makes generative AI systems.

See the original post here:

Generative AI's environmental costs are soaring and mostly secret - Nature.com

Posted in Ai