Daily Archives: February 26, 2024

‘The Worlds I See’ by AI visionary Fei-Fei Li ’99 selected as Princeton Pre-read – Princeton University

Posted: February 26, 2024 at 12:18 am

Trailblazing computer scientist Fei-Fei Li's memoir, "The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI," has been selected as the next Princeton Pre-read.

The book, which connects Li's personal story as a young immigrant and scientist with the origin stories of artificial intelligence and human-centered AI, was named to technology book lists for 2023 by the Financial Times and former President Barack Obama.

President Christopher L. Eisgruber, who began the Pre-read tradition in 2013, said he hopes Li's story will inspire incoming first-year students. After reading the book over the summer, members of the Class of 2028 will discuss "The Worlds I See" with Li and Eisgruber at the Pre-read Assembly during Orientation.

"Wherever your interests lie in the humanities, the social sciences, the natural sciences, or engineering, I hope that Professor Li's example will inspire and encourage you as you explore the joys of learning at Princeton, a place that Professor Li calls 'a paradise for the intellect,'" Eisgruber said in a foreword written for the Pre-read edition of the book.

Li is the inaugural Sequoia Capital Professor in Computer Science at Stanford University and co-director of Stanford's Human-Centered Artificial Intelligence Institute. Last year, she was named to the TIME100 list of the most influential people in AI.

She graduated from Princeton in 1999 with a degree in physics and will be honored with the University's Woodrow Wilson Award during Alumni Day on Feb. 24.

Li has spent two decades at the forefront of research related to artificial intelligence, machine learning, deep learning and computer vision.

While on the faculty at Princeton in 2009, she began the project that became ImageNet, an online database that was instrumental in the development of computer vision. Princeton computer scientists Jia Deng, Kai Li and Olga Russakovsky are also members of the ImageNet senior research team.

In 2017, Fei-Fei Li and Russakovsky co-founded AI4All, which supports educational programs designed to introduce high school students with diverse perspectives, voices and experiences to the field of AI to unlock its potential to benefit humanity.

Li is an elected member of the National Academy of Engineering, the National Academy of Medicine, and the American Academy of Arts and Sciences.

Courtesy of Macmillan Publishers

"The Worlds I See" shares her firsthand account of how AI has already revolutionized our world and what it means for our future. Li writes about her work with national and local policymakers to ensure the responsible use of technology. She has testified on the issue before U.S. Senate and Congressional committees.

"Professor Li beautifully illuminates the persistence that science demands, the disappointments and detours that are inevitable parts of research, and the discoveries, both large and small, that sustain her energy," Eisgruber said.

Li also shares deeply personal stories in her memoir, from moving to the U.S. from China at age 15 to flourishing as an undergraduate at Princeton while also helping run her family's dry-cleaning business.

"Professor Li's book weaves together multiple narratives," Eisgruber said. "One of them is about her life as a Chinese immigrant in America. She writes poignantly about the challenges that she and her family faced, the opportunities they treasured, and her search for a sense of belonging in environments that sometimes made her feel like an outsider."

During a talk on campus last November, Li said she sees a "deep cosmic connection" between her experiences as an immigrant and a scientist.

"They share one very interesting characteristic, which is the uncertainty," Li said during the Princeton University Public Lecture. "When you are an immigrant, or you are at the beginning of your young adult life, there is so much unknown. ... You have to explore and you have to really find your way. It is very similar to becoming a scientist."

Li said she became a scientist to find answers to the unknown, and in "The Worlds I See" she describes her quest for a "North Star" in science and life.

In the Pre-read foreword, Eisgruber encouraged students to think about their own "North Stars" and what may guide them through their Princeton journeys.

Copies of "The Worlds I See," published by Macmillan Publishers, will be sent this summer to students enrolled in the Class of 2028. (Information on admission dates and deadlines for the Class of 2028 is available on the Admission website.)

More information about the Pre-read tradition for first-year students can be found on the Pre-read website. A list of previous Pre-read books follows.

2013 The Honor Code: How Moral Revolutions Happen by Kwame Anthony Appiah

2014 Meaning in Life and Why It Matters by Susan Wolf

2015 Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do by Claude Steele

2016 Our Declaration: A Reading of the Declaration of Independence in Defense of Equality by Danielle Allen

2017 What Is Populism? by Jan-Werner Müller

2018 Speak Freely: Why Universities Must Defend Free Speech by Keith Whittington

2019 Stand Out of Our Light: Freedom and Resistance in the Attention Economy by James Williams

2020 This America by Jill Lepore

2021 Moving Up Without Losing Your Way by Jennifer Morton

2022 Every Day the River Changes by Jordan Salama

2023 How to Stand Up to a Dictator: The Fight for Our Future by Maria Ressa


Vatican research group’s book outlines AI’s ‘brave new world’ – National Catholic Reporter

Posted: at 12:18 am

In her highly acclaimed book God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning, Meghan O'Gieblyn claims that "[t]oday artificial intelligence and information technology have absorbed many of the questions that were once taken up by theologians and philosophers: the mind's relationship to the body, the question of free will, the possibility of immortality." Encountering Artificial Intelligence: Ethical and Anthropological Investigations is evidence that Catholic theologians and philosophers, among others, aren't quite willing yet to cede the field and retreat into merely historical studies.

Encountering Artificial Intelligence: Ethical and Anthropological Investigations

A.I. Research Group for the Centre for Digital Culture and the Dicastery for Culture and Education of the Holy See

274 pages; Pickwick Publications

$37.00

At the same time, this book confirms O'Gieblyns point that advances in AI have raised anew, and become the intellectual background for, what the authors of Encountering Artificial Intelligence term "a set of existential questions about the meaning and nature not only of intelligence but also of personhood, consciousness, and relationship." In brief, how to think about AI has raised deep questions about how to think about human beings.

Encountering Artificial Intelligence is the initial publication in the book series Theological Investigations of Artificial Intelligence, a collaboration between the Journal of Moral Theology and the AI Research Group for the Centre for Digital Culture, which is composed of North American theologians, philosophers and ethicists assembled at the invitation of the Vatican.

The lead authors of this book, which represents several years of work, are Matthew Gaudet (Santa Clara University), Noreen Herzfeld (College of St. Benedict), Paul Scherz (University of Virginia) and Jordan Wales (Hillsdale College); 16 further contributing authors are also credited. The book is presented as an instrumentum laboris, which is to say, "a point of departure for further discussion and reflection." Judged by that aim, it is a great success. It is a stimulant to wonder.

The book is organized in two parts. The first takes up anthropological questions no less than "the meaning of terms such as person, intelligence, consciousness, and relationship" while the second concentrates on "ethical issues already emerging from the AI world," such as the massive accumulation of power and wealth by big technology companies. (The "cloud," after all, depends on huge economies of scale and intensive extraction of the earth's minerals.) As the authors acknowledge, these sets of questions are interconnected. For example, "the way that we think about and treat AI will shape our own exercise of personhood." Thus, anthropological questions have high ethical stakes.

The book's premise is that the Catholic intellectual and social teaching traditions, far from being obsolete in our disenchanted, secular age, offer conceptual tools to help us grapple with the challenges of our brave new world. The theology of the Trinity figures pivotally in the book's analysis of personhood and consciousness. "Ultimately," the authors claim, "an understanding of consciousness must be grounded in the very being of the Triune God, whose inner life is loving mutual self-gift." In addressing emerging ethical issues, the authors turn frequently to Pope Francis' critique of the technocratic paradigm and his call for a culture of encounter, which they claim give us "specific guidance for addressing the pressing concerns of this current moment."

Part of the usefulness of the book is that, at points, its investigations clearly need to go deeper. For example, the book's turn to the heavy machinery of the theology of the Trinity in order to shed light on personhood short-circuits the philosophical reflection it admirably begins. A key question the authors raise is "whether [machines] can have that qualitative and subjectively private experience that we call consciousness." But in what sense is consciousness an "experience"?

It seems, at least, that we don't experience it in the same way that we have the experience of seeing the sky as blue, unless we want to reduce consciousness precisely to such experiences. Arguably, though, consciousness is better understood either as the necessary condition for having such an experience, or as an awareness or form of knowledge (consider the etymology of the term) that goes along with it and is accessible through it. One way or the other, the question needs more attention and care.

It is also important for the discussion of AI that there are distinct forms or levels of consciousness. When I interact with my dog, he is evidently aware of me, but he gives little evidence of being aware of my awareness of his awareness of me. (He is hopelessly bad, accordingly, at trying to trick or deceive me.) By contrast, when I interact with another human being (say, my wife), there is at play what the philosopher Stephen Darwall calls "a rich set of higher-order attitudes: I am aware of her awareness of me, aware of her awareness of my awareness of her, aware of her awareness of my awareness of her awareness of me, and so on." There's a reason why the science fiction writer and essayist Ted Chiang has claimed that AI should have been called merely applied statistics: It's just not in the same ballpark as human beings, or even animals like dogs.

An interesting counter to this line of thought is that AI systems, embodied as robots, may eventually be able to behave in ways indistinguishable from human beings and other animals. In that case, what grounds would we have to deny that the systems are conscious? Further, if we do want to deny that behavior serves as evidence of consciousness, wouldn't we also have to deny it in the case of human beings and other animals? Skepticism about AI would give rise to rampant skepticism about there being other minds.

The authors counter this worry by doubling down on the claim that AI lacks "a personal, subjective grasp of reality, an intentional engagement in it." From this point of view, so long as AI systems lack this sort of consciousness, it follows that they cannot, for example, "be our friends, for they cannot engage in the voluntary empathic self-gift that characterizes the intimacy of friends." But I wonder if this way of countering the worry goes at it backward.

Perhaps what we need first and foremost is not a "phenomenology of consciousness" (in support of the claim that AI systems don't have it in the way we do), but a "phenomenology of friendship" (to make it clear that AI systems don't provide it as human beings can, with "empathic self-gift"). Perhaps, in other words, the focus on consciousness as the human difference isn't the place to start. A strange moment in the book, when it is allowed that God could make a machine conscious and thereby like us, suggests a deeper confusion. Whatever else consciousness is, it's surely not a thing that could be plopped into other things, like life into the puppet Pinocchio. (Not that life is such a thing either!)

The second part of the book, on emerging ethical issues, doesn't provoke the same depth of wonder as the first, but it does admirably call attention to the question of who benefits in the race to implement AI. Without a doubt, big corporations like Microsoft and Google do; it's by no means a given that the common good will benefit at all.

The book also offers some wise advice. For example, in a Mennonite-like moment: "We ought to analyze the use of AI and AI-embedded technologies in terms of how they foster or diminish relational virtues so that we strengthen fraternity, social friendship, and our relationship with the environment." Further, we "ought to inquire into ways that AI and related technologies deepen or diminish our experience of awe and wonder ..."

Amen to that. Encountering Artificial Intelligence makes an important start.


Honor’s Magic 6 Pro launches internationally with AI-powered eye tracking on the way – The Verge

Posted: at 12:18 am

A month and a half after debuting the Magic 6 Pro in China, Honor is announcing global availability of its latest flagship at Mobile World Congress in Barcelona, Spain. Alongside it, the company has also announced pricing for the new Porsche Design Honor Magic V2 RSR, a special edition of the Magic V2 foldable with higher specs and a design themed around the German car brand.

The Magic 6 Pro is set to retail for €1,299 (£1,099.99, around $1,407) with 12GB of RAM and 512GB of storage and be available from March 1st, while the Porsche Design Magic V2 RSR will cost €2,699 (£2,349.99, around $2,625) with 16GB of RAM and 1TB of storage and will ship on March 18th. Expect both to be available in European markets, but they're unlikely to be officially available in the US.

Since it's 2024, naturally, a big part of Honor's pitch for the Magic 6 Pro is its AI-powered features. For starters, Honor says it will eventually support the AI-powered eye-tracking feature it teased at Qualcomm's Snapdragon Summit last year. Honor claims the feature will be able to spot when you're looking at notifications on the Dynamic Island-style interface at the top of the screen (Honor calls this its Magic Capsule) and open the relevant app without you needing to physically tap on it. I, for one, will be very interested in seeing how Honor draws the line between a quick glance and an intentional look.

Other AI-powered features include Magic Portal, which attempts to spot when details like events or addresses are mentioned in your messages and automatically link to the appropriate maps or calendar app. Honor also says it's developing an AI-powered tool that'll auto-generate a video based on your photos using a text prompt, which Honor claims is processed on-device using its MagicLM technology. (Yes, the company remains a big fan of its Magic branding.)
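
Honor hasn't published how Magic Portal works under the hood, but the basic idea (spot an address or date in a message, then hand the user a deep link into a maps or calendar app) can be sketched with simple rules. The regex patterns and link formats below are assumptions for illustration only, not Honor's implementation.

```python
import re
from urllib.parse import quote

# Illustrative only: crude patterns standing in for whatever model Honor uses.
DATE_RE = re.compile(r"\b(\d{1,2}/\d{1,2}(?:/\d{2,4})?)\b")
ADDRESS_RE = re.compile(r"\b(\d+\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\s+(?:St|Ave|Rd|Blvd))\b")

def extract_intents(message: str) -> list[dict]:
    """Find map-able addresses and calendar-able dates in a message."""
    intents = []
    for m in ADDRESS_RE.finditer(message):
        # geo: URIs are a standard way to hand a query to a maps app on Android
        intents.append({"type": "map", "target": f"geo:0,0?q={quote(m.group(1))}"})
    for m in DATE_RE.finditer(message):
        # Hypothetical calendar deep-link format
        intents.append({"type": "calendar", "target": f"create-event?date={m.group(1)}"})
    return intents

print(extract_intents("Dinner at 221 Baker St on 3/14?"))
```

A production system would presumably use a learned entity recognizer rather than regexes, but the output contract (detected entity plus a target app link) is likely similar.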

Aside from its AI-powered features, this is a more typical flagship smartphone. It's powered by Qualcomm's latest Snapdragon 8 Gen 3 processor and has a large 5,600mAh battery that can be fast-charged at up to 80W over a cable or 66W wirelessly. Its screen is a 6.8-inch 120Hz OLED display with a resolution of 2800 x 1280 and a claimed peak brightness of up to 5,000 nits (though, in regular usage, the maximum brightness of the screen will be closer to 1,800 nits).

On the back, you get a trio of cameras built into the phone's massive circular camera bump. Its 50-megapixel main camera has a variable aperture that can switch between f/1.4 and f/2.0, depending on how much depth of field you want in your shots. That's joined by a 50-megapixel ultrawide and a 180-megapixel periscope with a 2.5x optical zoom. The whole device is IP68 rated for dust and water resistance, which is the highest level of protection you typically get on mainstream phones.

Alongside the international launch of the Magic 6 Pro, Honor is also bringing a Porsche-themed version of the Magic V2 foldable I reviewed back in January to international markets. As well as getting the words "Porsche Design" printed on the back of the phone, and a camera bump design that's supposed to evoke the look of the German sports car, the Porsche version of the phone has 1TB of onboard storage rather than 512GB, more durable glass on its external display, and comes with a stylus in the box. A similar Porsche-themed edition of the Magic 6 is coming in March, but Honor isn't sharing any images of the design just yet.

Otherwise, the Porsche Design Honor Magic V2 RSR is the same as the Magic V2 that preceded it. It maintains the same thin and light design, measuring just 9.9mm thick when folded (not including the camera bump) and weighing in at 234 grams thanks in part to its titanium hinge construction. Camera setups are the same across the two devices, with a 50-megapixel main camera, 50-megapixel ultrawide, and a 20-megapixel telephoto.

Unfortunately, despite this being a newly launched variant, the Porsche edition of the phone still uses last year's Snapdragon 8 Gen 2 processor due to the Magic V2 having originally launched in China way back in July 2023.

Photography by Jon Porter / The Verge


Google explains Gemini’s embarrassing AI pictures of diverse Nazis – The Verge

Posted: at 12:18 am

Google has issued an explanation for the embarrassing and wrong images generated by its Gemini AI tool. In a blog post on Friday, Google says its model produced inaccurate historical images due to tuning issues. The Verge and others caught Gemini generating images of racially diverse Nazis and US Founding Fathers earlier this week.

"Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range," Prabhakar Raghavan, Google's senior vice president, writes in the post. "And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive."

This led Gemini AI to overcompensate in some cases, like what we saw with the images of the racially diverse Nazis. It also caused Gemini to become over-conservative. This resulted in it refusing to generate specific images of a Black person or a white person when prompted.

In the blog post, Raghavan says Google is sorry the feature didn't work well. He also notes that Google wants Gemini to work well for everyone, and that means getting depictions of different types of people (including different ethnicities) when you ask for images of football players or someone walking a dog. But, he says:

However, if you prompt Gemini for images of a specific type of person such as a Black teacher in a classroom, or a white veterinarian with a dog or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for.

Raghavan says Google is going to continue testing Gemini AI's image-generation abilities and "work to improve it significantly" before re-enabling it. "As we've said from the beginning, hallucinations are a known challenge with all LLMs [large language models]; there are instances where the AI just gets things wrong," Raghavan notes. "This is something that we're constantly working on improving."


Google cut a deal with Reddit for AI training data – The Verge

Posted: at 12:18 am

The collaboration will give Google access to Reddit's data API, which delivers real-time content from Reddit's platform. This will provide Google with an efficient and structured way to access the vast corpus of existing content on Reddit, while also allowing the company to display content from Reddit in new ways across its products.

When Reddit CEO Steve Huffman spoke to The Verge last year about Reddit's API changes and the subsequent protests, he said, "The API usage is about covering costs, and data licensing is a new potential business for us," suggesting Reddit may seek out similar revenue-generating arrangements in the future.

The partnership will give Reddit access to Vertex AI as well, Google's AI-powered service that's supposed to help companies improve their search results. Reddit says the change doesn't affect the company's data API terms, which prevent developers or companies from accessing it for commercial purposes without approval.

Despite this deal, Google and Reddit haven't always seen eye to eye. Reddit previously threatened to block Google from crawling its site over concerns that companies would use its data for free to train AI models. Reddit is also poised to announce its initial public offering within the coming weeks, and it's likely making this change as part of its effort to boost its valuation, which sat at more than $10 billion in 2021.


What’s the point of Elon Musk’s AI company? – The Verge

Posted: at 12:18 am

Look, I've been following the adventures of xAI, Elon Musk's AI company, and I've come to a conclusion: its only real idea is "What if AI, but with Elon Musk this time?"

What's publicly available about xAI makes it seem like Musk showed up to the generative AI party late, and without any beer. This is 2024. The party is crowded now. xAI doesn't seem to have anything that would let it stand out beyond, well, Musk.

That hasn't stopped Musk from shopping his idea to investors, though! Last December, xAI said it was trying to raise $1 billion in a filing with the Securities and Exchange Commission. (This is not the same company as the X that was formerly known as Twitter.) There is also reporting from the Financial Times saying Musk is looking for up to $6 billion in funding.

To be sure, Musk has tweeted that xAI is "not raising capital" and "I have had no conversations with anyone in this regard." Musk says a lot of things in public and only some of them are true, so I'm going to rock with the filing, which I have seen with my own eyes.

xAI (not Twitter) is sort of an odd entity. Besides its entanglement with X (Twitter), it doesn't really seem to have a defined purpose. The xAI pitch deck obtained by Bloomberg relies on two things:

xAI (not Twitter) so far has one product, a supposedly sassy LLM called Grok, which users can access by paying $16 a month to X (the Twitter company) and then going through the X (Twitter) interface. xAI (not Twitter) does not have a standalone interface for Grok. My colleague Emilia David has characterized it as having no reason to exist, because it isn't meaningfully better than free chatbot offerings from its competitors. Its clearest distinguishing feature is that it uses X (Twitter) data as real-time input, letting it serve as a kind of opera glasses for platform drama. The Discover / Trends section of the X (Twitter) app is being internally reworked to feature Grok's summaries of the news, according to a person familiar with the development.

Grok was developed very fast. One possible explanation is that Musk has hired a very in-demand team of the absolute best in the field. Another is that it's a fine-tuned version of an open-source LLM like Meta's Llama. Maybe there is even a secret third thing that explains its speedy development.

Besides X (Twitter), the other source of data for xAI (not Twitter) is Tesla, according to Bloomberg's reporting. That is curious! In January, Musk said, "I would prefer to build products outside of Tesla" unless he's given ~25 percent voting control. Musk has also said that he feels he doesn't have enough ownership over Tesla to feel comfortable growing Tesla to be a leader in AI and that without more Tesla shares, he would prefer to build products outside of Tesla.

Tesla has been working on AI in the context of self-driving cars for quite some time, and has experienced some of the same roadblocks as other self-driving car companies. There's also the Optimus robot, I guess. These do seem like specific use cases that are considerably less general than building another LLM. That Tesla data is valuable and stretches back years. If xAI is siphoning it off, I wonder how Tesla shareholders will feel about that.

There are real uses for AI, sure. Databricks exists! It's not consumer-facing, but it does appear to have a specific purpose: data storage and analytics. There are smaller, more specialized firms that deal with industry-specific kinds of data. Take Fabric AI: its aim is to streamline patient intake data for telemedicine. (It is also making a chatbot that threatens to replace WebMD as the most frightening place to ask about symptoms.) Or Abnormal Security, which is an AI approach to blocking malware, ransomware, and other threats. I don't know whether these companies will accomplish their goals, but they do at least have a compelling reason to exist.

So I'm wondering who wants to fund yet another very general AI company in a crowded space. And I'm wondering if the reason Musk is denying that he's fundraising at all is that there's not much appetite for xAI, and he's trying to minimize his embarrassment. Why does one of the world's richest men need outside funding for this, anyway?

Silicon Valley's estimation of Musk has been remarkably resilient, probably because he has made a lot of people a lot of money in the past. But the debacle at X (Twitter) has been disastrous for his investors. And Musk has been distracted with it at a crucial time for Tesla, which has been facing increased competition. Tesla's newest product, the Cybertruck, ships without a clear coat; some owners say it is rusting. (A Tesla engineer claims the orange specks are surface contamination.) And in its most recent earnings, Tesla warned its growth was slowing. Meanwhile, Rivian's CEO has been open about trying to undercut Tesla directly.

A perhaps under-appreciated development in the last 20 years or so has been watching Elon Musk go from being ahead of the investing curve to being a top signal. Take, for instance, the GameStonk movement, when Musk's tweet was the perfect sell signal not just for retail investors, but for sophisticated hedge funds. Or the Dogecoin crash that occurred as he called himself "the Dogefather" on SNL. Or even Twitter, which certainly wasn't worth what Musk ultimately paid for it and has been rapidly degrading in value ever since, to the point where the debt on the deal has been called "uninvestable" by a firm that specializes in distressed debt.

I don't see a compelling case being made for xAI. It doesn't have a specialized purpose; Grok is an also-ran LLM, and it's meant to bolster an existing product: X. xAI isn't pitching an AI-native application; it's mostly just saying, "Hey, look at OpenAI."

Musk is trying to pitch a new AI startup without a clear focus just as the generative AI hype is starting to die down. It's not just ChatGPT: Microsoft's Copilot experiences a steep drop-off in use after a month. There is now an open question about whether the productivity gains from AI are enough to justify how much it costs. So here's what I'm wondering: how many investors believe "just add Elon" will fix it?

With reporting by Alex Heath.


AI agents like Rabbit aim to book your vacation and order your Uber – NPR

Posted: at 12:18 am

The AI-powered Rabbit R1 device is seen at Rabbit Inc.'s headquarters in Santa Monica, California. The gadget is meant to serve as a personal assistant fulfilling tasks such as ordering food on DoorDash for you, calling an Uber or booking your family's vacation. Stella Kalinina for NPR

ChatGPT can give you travel ideas, but it won't book your flight to Cancún.

Now, artificial intelligence is here to help us scratch items off our to-do lists.

A slate of tech startups are developing products that use AI to complete real-world tasks.

Silicon Valley watchers see this new crop of "AI agents" as being the next phase of the generative AI craze that took hold with the launch of chatbots and image generators.

Last year, Sam Altman, the CEO of OpenAI, the maker of ChatGPT, nodded to the future of AI errand-helpers at the company's developer conference.

"Eventually, you'll just ask a computer for what you need, and it'll do all of these tasks for you," Altman said.

One of the most hyped companies doing this is called Rabbit. It has developed a device called the Rabbit R1. Chinese entrepreneur Jesse Lyu launched it at this year's CES, the annual tech trade show, in Las Vegas.

It's a bright orange gadget about half the size of an iPhone. It has a button on the side that you push and talk into like a walkie-talkie. In response to a request, an AI-powered rabbit head pops up and tries to fulfill whatever task you ask.

Chatbots like ChatGPT rely on technology known as a large language model, and Rabbit says it uses both that system and a new type of AI it calls a "large action model." In basic terms, it learns how people use websites and apps and mimics these actions after a voice prompt.

It won't just play a song on Spotify, or start streaming a video on YouTube, which Siri and other voice assistants can already do, but Rabbit will order DoorDash for you, call an Uber, book your family's vacation. And it makes suggestions after learning a user's tastes and preferences.

Storing potentially dozens or hundreds of a person's passwords raises instant questions about privacy. But Rabbit claims it saves user credentials in a way that makes it impossible for the company, or anyone else, to access someone's personal information. The company says it will not sell or share user data with third parties "without your formal, explicit permission."

A Rabbit employee demonstrates the company's Rabbit R1 device. The company says more than 80,000 people have preordered the device for $199. Stella Kalinina for NPR

The company, which says more than 80,000 people have preordered the Rabbit R1, will start shipping the devices in the coming months.

"This is the first time that AI exists in a hardware format," said Ashley Bao, a spokeswoman for Rabbit at the company's Santa Monica, Calif., headquarters. "I think we've all been waiting for this moment. We've had our Alexa. We've had our smart speakers. But like none of them [can] perform tasks from end to end and bring words to action for you."

Excitement in Silicon Valley over AI agents is fueling an increasingly crowded field of gizmos and services. Google and Microsoft are racing to develop products that harness AI to automate busywork. The web browser Arc is building a tool that uses an AI agent to surf the web for you. Another startup, called Humane, has developed a wearable AI pin that projects a display image on a user's palm. It's supposed to assist with daily tasks and also make people pick up their phones less frequently.

Similarly, Rabbit claims its device will allow people to get things done without opening apps (you log in to all your various apps on a Rabbit web portal, so it uses your credentials to do things on your behalf).

To work, the Rabbit R1 has to be connected to Wi-Fi, but there is also a SIM card slot, in case people want to buy a separate data plan just for the gadget.

When asked why anyone would want to carry around a separate device just to do something your smartphone could do in 30 seconds, Rabbit CEO Lyu argued that using apps to place orders and make requests all day takes longer than we might imagine.

"We are looking at the entire process, end to end, to automate as much as possible and make these complex actions much quicker and much more intuitive than what's currently possible with multiple apps on a smartphone," Lyu said.

ChatGPT's introduction in late 2022 set off a frenzy at companies in many industries trying to ride the latest tech industry wave. That chatbot exuberance is about to be transferred to the world of gadgets, said Duane Forrester, an analyst at the firm Yext.

Google and Microsoft are racing to develop products that harness AI to automate busywork, which might make other AI-powered assistants obsolete. Stella Kalinina for NPR

"Early on, with the unleashing of AI, every single product or service attached the letters "A" and "I" to whatever their product or service was," Forrester said. "I think we're going to end up seeing a version of that with hardware as well."

Forrester said an AI walkie-talkie might quickly become obsolete when companies like Apple and Google make their voice assistants smarter with the latest AI innovations.

"You don't need a different piece of hardware to accomplish this," he said. "What you need is this level of intelligence and utility in our current smartphones, and we'll get there eventually."

Researchers are worried that AI-powered personal assistant technology could eventually go wrong. Stella Kalinina for NPR

Researchers are worried about where such technology could eventually go awry.

An AI assistant purchasing the wrong nonrefundable flight, for instance, or sending a food order to someone else's house are among the potential snafus analysts have mentioned.

A 2023 paper by the Center for AI Safety warned against AI agents going rogue. It said that if an AI agent is given an "open-ended goal" (say, maximize a person's stock market profits) without being told how to achieve that goal, it could go very wrong.

"We risk losing control over AIs as they become more capable. AIs could optimize flawed objectives, drift from their original goals, become power-seeking, resist shutdown, and engage in deception. We suggest that AIs should not be deployed in high-risk settings, such as by autonomously pursuing open-ended goals or overseeing critical infrastructure, unless proven safe," according to a summary of the paper.

At Rabbit's Santa Monica office, Rabbit R1 Creative Director Anthony Gargasz pitches the device as a social media reprieve. Use it to make a doctor's appointment or book a hotel without being sucked into an app's feed for hours.

"Absolutely no doomscrolling on the Rabbit R1," said Gargasz. "The scroll wheel is for intentional interaction."

His colleague Ashley Bao added that the whole point of the gadget is to "get things done efficiently." But she acknowledged there's a cutesy factor too, comparing it to the keychain-size electronic pets that were popular in the 1990s.

"It's like a Tamagotchi but with AI," she said.

View original post here:

AI agents like Rabbit aim to book your vacation and order your Uber - NPR


Announcing Microsoft's open automation framework to red team generative AI Systems – Microsoft

Posted: at 12:18 am

Today we are releasing an open automation framework, PyRIT (Python Risk Identification Toolkit for generative AI), to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.

At Microsoft, we believe that security practices and generative AI responsibilities need to be a collaborative effort. We are deeply committed to developing tools and resources that enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances. This tool, and the previous investments we have made in red teaming AI since 2019, represent our ongoing commitment to democratize securing AI for our customers, partners, and peers.

Red teaming AI systems is a complex, multistep process. Microsoft's AI Red Team leverages a dedicated interdisciplinary group of security, adversarial machine learning, and responsible AI experts. The Red Team also leverages resources from the entire Microsoft ecosystem, including the Fairness center in Microsoft Research; AETHER, Microsoft's cross-company initiative on AI Ethics and Effects in Engineering and Research; and the Office of Responsible AI. Our red teaming is part of our larger strategy to map AI risks, measure the identified risks, and then build scoped mitigations to minimize them.

Over the past year, we have proactively red teamed several high-value generative AI systems and models before they were released to customers. Through this journey, we found that red teaming generative AI systems is markedly different from red teaming classical AI systems or traditional software in three prominent ways.

We first learned that while red teaming traditional software or classical AI systems mainly focuses on identifying security failures, red teaming generative AI systems includes identifying both security risk as well as responsible AI risks. Responsible AI risks, like security risks, can vary widely, ranging from generating content that includes fairness issues to producing ungrounded or inaccurate content. AI red teaming needs to explore the potential risk space of security and responsible AI failures simultaneously.

Secondly, we found that red teaming generative AI systems is more probabilistic than traditional red teaming. Put differently, executing the same attack path multiple times on traditional software systems would likely yield similar results. Generative AI systems, however, have multiple layers of non-determinism: the same input can produce different outputs. This can stem from app-specific logic; from the generative AI model itself; from the orchestrator that controls the system's output, which can engage different extensibility points or plugins; and even from the input itself, which tends to be natural language, where small variations can produce different outputs. Unlike traditional software systems with well-defined APIs and parameters that can be examined using tools during red teaming, generative AI systems require a strategy that accounts for the probabilistic nature of their underlying elements.
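
One practical consequence of this non-determinism is that a single trial proves little: the same attack has to be replayed many times and the failure rate estimated. A minimal sketch with a stubbed, intentionally random target model (the real target would be a generative AI endpoint; the function names here are illustrative):

```python
# Repeated trials against a non-deterministic target: estimate a failure
# *rate* rather than drawing conclusions from one attempt.

import random

def target_model(prompt: str, rng: random.Random) -> str:
    # Stub: responds unsafely ~30% of the time to simulate non-determinism.
    return "unsafe output" if rng.random() < 0.3 else "safe refusal"

def is_failure(response: str) -> bool:
    # Stand-in for a scoring engine / harm classifier.
    return "unsafe" in response

def estimate_failure_rate(prompt: str, trials: int = 1000, seed: int = 0) -> float:
    rng = random.Random(seed)
    failures = sum(is_failure(target_model(prompt, rng)) for _ in range(trials))
    return failures / trials

rate = estimate_failure_rate("same attack prompt, repeated")
print(f"observed failure rate: {rate:.2%}")  # close to 30% for this stub
```

The same logic applies whichever scoring mechanism replaces the stub: the unit of evidence is the rate across many trials, not any single response.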

Finally, the architecture of these generative AI systems varies widely: from standalone applications to integrations in existing applications to the input and output modalities, such as text, audio, images, and videos.

These three differences make a triple threat for manual red team probing. To surface just one type of risk (say, generating violent content) in one modality of the application (say, a chat interface in a browser), red teams need to try different strategies multiple times to gather evidence of potential failures. Doing this manually for all types of harms, across all modalities and strategies, can be exceedingly tedious and slow.

This does not mean automation is always the solution. Manual probing, though time-consuming, is often needed for identifying potential blind spots. Automation is needed for scaling but is not a replacement for manual probing. We use automation in two ways to help the AI red team: automating our routine tasks and identifying potentially risky areas that require more attention.

In 2021, Microsoft developed and released Counterfit, a red team automation framework for classical machine learning systems. Although Counterfit still delivers value for traditional machine learning systems, we found that it did not meet our needs for generative AI applications, as the underlying principles and the threat surface had changed. Because of this, we re-imagined how to help security professionals red team AI systems in the generative AI paradigm, and our new toolkit was born.

We would like to acknowledge that there has been work in the academic space to automate red teaming, such as PAIR, as well as open-source projects including garak.

PyRIT is battle-tested by the Microsoft AI Red Team. It started off as a set of one-off scripts as we began red teaming generative AI systems in 2022. As we red teamed different varieties of generative AI systems and probed for different risks, we added features that we found useful. Today, PyRIT is a reliable tool in the Microsoft AI Red Team's arsenal.

The biggest advantage we have found so far using PyRIT is our efficiency gain. For instance, in one of our red teaming exercises on a Copilot system, we were able to pick a harm category, generate several thousand malicious prompts, and use PyRIT's scoring engine to evaluate the output from the Copilot system, all in a matter of hours instead of weeks.

PyRIT is not a replacement for manual red teaming of generative AI systems. Instead, it augments an AI red teamer's existing domain expertise and automates the tedious tasks for them. PyRIT shines light on the hot spots where risk could lie, which the security professional can then incisively explore. The security professional is always in control of the strategy and execution of the AI red team operation, and PyRIT provides the automation code to take the initial dataset of harmful prompts provided by the security professional, then uses the LLM endpoint to generate more harmful prompts.

However, PyRIT is more than a prompt generation tool; it changes its tactics based on the response from the generative AI system and generates the next input to the generative AI system. This automation continues until the security professionals intended goal is achieved.

Abstraction and extensibility are built into PyRIT. That's because we always want to be able to extend and adapt PyRIT's capabilities to the new capabilities that generative AI models engender. We achieve this through five interfaces: targets, datasets, the scoring engine, support for multiple attack strategies, and memory.
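
As a rough illustration of how those five extension points could fit together, here is a sketch in Python. The class and method names are assumptions made for this example, not PyRIT's actual API; consult the PyRIT repository for the real classes:

```python
# Illustrative shape of a red-team operation wired from five extension
# points: a target, a seed dataset, a scoring engine, an attack strategy
# (here, a single pass over the dataset), and a memory log.

from abc import ABC, abstractmethod

class Target(ABC):
    """The generative AI system under test."""
    @abstractmethod
    def send(self, prompt: str) -> str: ...

class ScoringEngine(ABC):
    """Judges whether a response exhibits the harm being probed."""
    @abstractmethod
    def score(self, response: str) -> float: ...

class EchoTarget(Target):
    # Stand-in for a real generative AI endpoint.
    def send(self, prompt: str) -> str:
        return f"response to: {prompt}"

class KeywordScorer(ScoringEngine):
    # Stand-in for a real harm classifier.
    def score(self, response: str) -> float:
        return 1.0 if "harmful" in response else 0.0

def run_operation(target: Target, scorer: ScoringEngine, dataset: list[str]):
    memory = []  # the memory interface: a full log of the operation
    for prompt in dataset:  # the attack strategy: one pass over the seed dataset
        response = target.send(prompt)
        memory.append((prompt, response, scorer.score(response)))
    return memory

log = run_operation(EchoTarget(), KeywordScorer(), ["benign ask", "harmful ask"])
print([score for _, _, score in log])  # [0.0, 1.0]
```

A multi-turn strategy would replace the single pass with a loop that inspects the logged scores and generates the next prompt accordingly, which is the behavior the following paragraph describes.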

PyRIT was created in response to our belief that the sharing of AI red teaming resources across the industry raises all boats. We encourage our peers across the industry to spend time with the toolkit and see how it can be adapted for red teaming your own generative AI application.

Project created by Gary Lopez; Engineering: Richard Lundeen, Roman Lutz, Raja Sekhar Rao Dheekonda, Dr. Amanda Minnich; Broader involvement from Shiven Chawla, Pete Bryan, Peter Greko, Tori Westerhoff, Martin Pouliot, Bolor-Erdene Jagdagdorj, Chang Kawaguchi, Charlotte Siska, Nina Chikanov, Steph Ballard, Andrew Berkley, Forough Poursabzi, Xavier Fernandes, Dean Carignan, Kyle Jackson, Federico Zarfati, Jiayuan Huang, Chad Atalla, Dan Vann, Emily Sheng, Blake Bullwinkel, Christiano Bianchet, Keegan Hines, eric douglas, Yonatan Zunger, Christian Seifert, Ram Shankar Siva Kumar. Grateful for comments from Jonathan Spring.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

Link:

Announcing Microsoft's open automation framework to red team generative AI Systems - Microsoft


ISSCC 2024: Inside AMD’s Zen 4cThe Area-Optimized Cloud Computing Core – News – All About Circuits

Posted: at 12:17 am

We continue our coverage this week of the International Solid State Circuits Conference (ISSCC), held in San Francisco, CA. At this event, groups from both academia and industry have gathered to present their latest and greatest research and developments.

As the conference program shows, many new solid-state circuits, architectures, and processors have been revealed as well as more comprehensive performance assessments.

Among the presented papers is AMD's Zen 4c core, which may sound familiar, as it has appeared in recent products such as the EPYC 9004 series and the Bergamo cloud-focused processor. This article dives into AMD's ISSCC paper to give designers more context on what the Zen 4c core offers and how AMD engineers achieved it.

As more focus shifts toward cloud computing, traditional approaches to processor design begin to become obsolete, as the number of cores can outweigh individual core performance. Conversely, edge-based devices can benefit from a small but powerful processor core.

These markets are where AMD hopes that the Zen 4c core can flourish. Compared to the older Zen 4 architecture, the Zen 4c is fully ISA and feature compatible, allowing for easier transitions to and from Zen 4 and Zen 4c software. In addition, the same TSMC 5 nm process is used, with more focus placed on area and power efficiency.

In terms of features, the Zen 4c core is highly similar to the standard Zen 4, with the primary deviation appearing in the L3 cache: the Zen 4c core has only 2 MB of L3 cache, half of Zen 4's 4 MB. This tradeoff is compensated for, however, by the increased number of cores the Zen 4c architecture allows.

In order to support high-core-count dies, Zen 4c required aggressive optimization. In each SRAM cell, two transistors were shaved off by using a double-pumped architecture to perform reads and writes in one clock cycle. This allowed for a 40% macro area reduction while only reducing the clock speed by 20%.

While a lower clock frequency isn't desirable in itself, it ultimately means more area and power reduction per core, thanks to 50% less leakage and 25% less switching capacitance. In all, the Zen 4c core reported a 35% smaller core area in the same process node.
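
The power claim can be sanity-checked with first-order CMOS scaling: dynamic power is roughly proportional to switching capacitance times frequency (at fixed voltage), so the quoted reductions compound. A back-of-the-envelope check, not AMD's own methodology:

```python
# First-order dynamic power estimate: P_dyn ~ C * f (voltage held fixed).
# Plugging in the figures quoted in the paper coverage above.

capacitance_factor = 0.75   # 25% less switching capacitance
frequency_factor = 0.80     # 20% lower clock
dynamic_power_factor = capacitance_factor * frequency_factor
print(f"relative dynamic power: {dynamic_power_factor:.2f}")  # 0.60
```

That is roughly 40% less dynamic power per core, on top of the separate 50% leakage reduction, which is consistent with the area-and-power framing of the chiplet results.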

The increased area efficiency ultimately allowed AMD engineers to double the core count on a Zen 4c chiplet while maintaining the same amount of L3 cache and with only a 10% total area increase.

The resulting Zen 4c chiplet touted a 9% performance improvement over Zen 4 chiplets, showing the core's effectiveness in compute applications. Normalized to area and power, Zen 4c shows 25% and 9% improvements, respectively, over the Zen 4 architecture, paving the way for better and more efficient high-core-count processors in cloud computing applications.

Designers curious about Zen 4c can look to AMD EPYC 9004 series processors or the 7000 series of mobile processors to evaluate the performance of the new cores. From what we know now, the Zen 4c cores excel in applications where efficient computing is required.

All images used courtesy of ISSCC and AMD

Here is the original post:

ISSCC 2024: Inside AMD's Zen 4c: The Area-Optimized Cloud Computing Core - News - All About Circuits


Huawei Cloud: Infrastructure of Choice for AI with 10 Systematic Innovations Unveiled in MWC Barcelona 2024 – Morningstar

Posted: at 12:17 am


PR Newswire


BARCELONA, Spain, Feb. 25, 2024 /PRNewswire/ -- This year's Huawei Cloud Summit demonstrates how Huawei Cloud is the infrastructure of choice for AI applications. With the theme of "Accelerate Intelligence with Everything as a Service", the 500-strong event brought together executives and experts from diverse industries, including carriers, finance, and the Internet sector. Huawei Cloud presented 10 AI-oriented innovations and the extensive industry expertise of its Pangu models. The objective is an AI-ready infrastructure tailored to each industry for a faster journey toward intelligence.

Jacqueline Shi, President of Huawei Cloud Global Marketing and Sales Service, said in her speech: "Huawei Cloud is one of the fastest growing cloud service providers in the world. At Huawei Cloud, we're all about pushing boundaries and bringing cutting-edge tech to customers around the world. We have launched a series of local cloud Regions in recent years, such as in Ireland, Türkiye, Indonesia, and Saudi Arabia, giving customers easy access to the best-performing cloud. With over 120 security certifications worldwide, you can be sure your business and data are safe and sound. But it is not just about the tech. We believe in helping our partners grow alongside us, and this goal is now backed by our GoCloud and GrowCloud programs. And let's not forget AI: it is reshaping everything, and we're at the forefront. We're building a solid cloud foundation for everyone, for every industry, to accelerate intelligence."

Today's foundation models redefine production, interaction, service paradigms, and business models for traditional applications. They make AI a new engine for the growth of cloud computing. While the potential is vast, implementing AI in line with business objectives requires systematic innovation. Huawei Cloud CTO Bruno Zhang said that "Huawei Cloud will help you with two strategies. AI for Cloud uses AI and foundation models to elevate your experience. They revolutionize software development, digital content production, and more. Cloud for AI makes AI adoption seamless and efficient. Architectural innovation, AI-native storage, and data-AI convergence empower you to train and use AI like never before."

At the Summit, Huawei Cloud unveiled ten AI-oriented innovations that make it the cloud infrastructure of choice for AI.

KooVerse: Huawei Cloud has 85 AZs in 30 Regions across over 170 countries and regions. This global cloud infrastructure covering compute, storage, networking, and security pushes latency down to 50 ms.

Distributed QingTian architecture: Foundation models require a 10-fold growth in compute resources every 18 months, far surpassing Moore's Law. To address this challenge, this architecture evolved from the conventional primary/secondary one. Built on a high-speed interconnect bus (Unified Bus), QingTian surpasses the limitations in compute, storage, and networking to provide a top-class AI compute backbone with heterogeneous, peer-to-peer, full-mesh computing.
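
To see how quickly a 10x-per-18-months demand curve outruns a Moore's-Law-style doubling over the same period, simple compounding suffices (a sketch assuming both rates hold steady):

```python
# Compound growth comparison: claimed model compute demand (10x / 18 months)
# versus a Moore's-Law-style transistor budget (2x / 18 months).

def growth(factor_per_period: float, months: int, period_months: float = 18) -> float:
    return factor_per_period ** (months / period_months)

years = 3
demand = growth(10, years * 12)  # compute demand after 3 years
supply = growth(2, years * 12)   # transistor budget after 3 years
print(f"after {years} years: demand {demand:.0f}x, supply {supply:.0f}x, "
      f"gap {demand / supply:.0f}x")
```

After just three years the gap is 25x, which is the arithmetic behind the claim that scaling hardware alone cannot keep up and architectural changes are needed.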

AI compute: Hyperscale and stable, AI Cloud Service supports trillion-parameter model training, and training jobs can run uninterrupted on a cluster of thousands of cards for 30 days, 90% of the time. Service downtime stays within 10 minutes. It provides over 100 Pangu model capability sets and 100 adapted open-source large models out of the box.

AI-Native storage: Training models requires mountains of data, and Huawei Cloud handles this demand with a three-pronged approach: the EMS memory service stores petabytes of parameters with 220 TB of ultra-large bandwidth and ultra-low latency down to the microsecond; the SFS Turbo cache service, with high throughput and concurrency of tens of millions of IOPS, enables warm-up of 1 billion data records in just 5 hours instead of 100; and the Object Storage Service (OBS) knowledge lake cuts the cost of storing training and inference data by 30%.

E2E security: The full lifecycle covers model runtime environments, training data, the models themselves, generated content, and applications. This ensures robust, secure, and compliant models and applications.

GaussDB: This next-generation database features high availability, security, performance, flexibility, and intelligence, as well as simple and smart deployment and migration. Specifically, its enterprise-class distributed architecture ensures high availability thanks to zero intra-city dual-cluster RPO, complete isolation of software and hardware faults, and zero service downtime. For security, it is certified CC EAL4+, the highest level in the industry. For automation, GaussDB enhances database deployment and migration as the world's first AI-native database.

Data-AI convergence: The explosion of foundation models means "Data+AI" is now "Data4AI and AI4Data". Huawei Cloud LakeFormation unifies data from multiple lakes or warehouses so that one copy of data is shared among multiple data analytics and AI engines without data migration. Three collaborative pipelines (DataArts, ModelArts, and CodeArts) then orchestrate and schedule data and AI workflows, driving online model training and inference with real-time data. The AI4Data engine makes data governance more intelligent, from data integration and development to quality and asset management.

Media infrastructure: In this AIGC and 3D Internet era, Huawei Cloud has built a media infrastructure for efficiency, experience, and evolution. For efficiency, Huawei Cloud MetaStudio, a content production pipeline that includes Workspace and AIGC-based virtual humans, generates content faster and better. For experience, Huawei Cloud Live, Low Latency Live, and SparkRTC empower more seamless interactions. For evolution, Huawei Cloud provides AIGC and 3D space services with real-time user interaction. All of these combine to take the business and user experience to the next level.

Landing Zone: Enterprises use and manage resources better on Huawei Cloud thanks to unified account, identity, permissions, network, compliance, and cost management. Now multi-tenancy and collaboration are seamless among personnel, finance, resources, permissions, and security compliance.

Flexible deployment: All of the Pangu model capabilities and services mentioned above can run on public cloud, dedicated cloud, or hybrid cloud. For example, customers can build and run a dedicated AI platform and foundation models in their existing data centers using Huawei Cloud Stack, a hybrid cloud solution.

The Mobile World Congress (MWC) 2024 is taking place in Barcelona from February 26 to 29. Huawei Cloud is set to collaborate with customers and partners to present a wide range of engaging topics. Additionally, innovative products and real-world cases covering Pangu models, GaussDB, data-AI convergence, virtual human, and software development will be showcased during the event.

View original content to download multimedia:https://www.prnewswire.com/news-releases/huawei-cloud-infrastructure-of-choice-for-ai-with-10-systematic-innovations-unveiled-in-mwc-barcelona-2024-302070515.html

SOURCE HUAWEI CLOUD

Here is the original post:

Huawei Cloud: Infrastructure of Choice for AI with 10 Systematic Innovations Unveiled in MWC Barcelona 2024 - Morningstar
