
Category Archives: Ai

Microsoft confirms Surface and Windows AI event for March 21st – The Verge

Posted: March 8, 2024 at 6:25 am

The Surface Pro 10 and Surface Laptop 6 commercial versions will be minor spec bumps, according to sources familiar with Microsoft's plans. Microsoft will also offer an OLED display on the Surface Pro 10 for consumers, which the company is expected to reveal later this spring.

The Surface Laptop 6 may include a new design

The new Surface Laptop 6 could be the most interesting device, thanks to a new design that will reportedly include thinner display bezels, rounded corners, a haptic touchpad, and two USB-C ports and one USB-A port. Microsoft is rumored to be shipping both Intel Core Ultra and Snapdragon X Elite-based models of its latest Surface hardware, with the Intel models expected in April and the Arm ones in June.

The event page for Microsoft's March event simply says to "tune in here for the latest in scaling AI in your environment with Copilot, Windows, and Surface," suggesting this will be a rather low-key event that's focused on Microsoft's big AI PC push.

Microsoft is also working on a new AI Explorer experience for Windows 11 that's designed as a far more advanced version of its AI assistant. Windows Central reports that it will catalog everything you do on your PC so you can search for moments in a timeline using natural language. Microsoft tried to bring this same idea to life as a Timeline feature in Windows 10, but the lack of app support meant it never really took off, and it was eventually removed years later.

See the article here:

Microsoft confirms Surface and Windows AI event for March 21st - The Verge

Posted in Ai | Comments Off on Microsoft confirms Surface and Windows AI event for March 21st – The Verge

Adobe's new Express app brings Firefly AI tools to iOS and Android – The Verge

Posted: at 6:25 am

Adobe has released a new app for Adobe Express, its cloud-based mobile design platform, bringing the same creative, editing, and Firefly-powered generative AI features enjoyed by desktop users to iOS and Android devices. Available to try for free today in beta, the new Adobe Express app allows users to easily produce creative assets like social media posts, posters, and website banners, with Creative Cloud members able to access and edit Photoshop and Illustrator files directly within the mobile app.

The Adobe Express beta is a free download, with Premium features (which will eventually require a $9.99 monthly subscription), like the erase and remove background tools, available at no additional cost while the app is in testing. Firefly-powered generative AI features like Generative Fill and Text to Image effects, however, will require Adobe's generative credits. Adobe Express beta users will receive 25 credits per month. The number of monthly credits received will be tied to the user's subscription tier when the mobile app is generally available.

Adobe Express users won't see their projects from the existing mobile app in the new beta app on day one. However, when the new app leaves beta, it will have all the historical data from the old app carried over in a seamless migration, according to Ian Wang, vice president of product for Adobe Express, on a call with The Verge.

The new Express mobile beta shares the same platform as the desktop version that was updated last year, which means collaborative workflows have been restored if you're using the beta app, allowing teams to work together on the same creative projects across both desktop and mobile devices. Anyone still using the current Adobe Express mobile app won't be able to use these features.

The processing for generative AI features is cloud-based rather than on the device itself, but not every smartphone is compatible with the new beta. You can find a list of supported devices here. The Adobe Express mobile app beta is available on the Google Play Store for Android, but iOS users will need to sign up here due to restrictions Apple places on the number of beta users.

Adobe's Firefly AI features have been available as standalone web apps since September 2023 (and are very much accessible on mobile devices), which are good enough to experiment with but inconvenient to use in design workflows. By contrast, the new Express beta is the first mobile app to feature them alongside other design tools, giving it a much-needed leg up over Canva, a rival design platform that hasn't made its own Magic Studio AI features available to mobile users.

Correction, March 7th, 4:00PM ET: Adobe's original press release said that premium features are available at no cost during the Express beta. The company informed us after publication that these premium features do not include generative AI tools, which use a separate credit-based system.

The rest is here:

Adobe's new Express app brings Firefly AI tools to iOS and Android - The Verge

Posted in Ai | Comments Off on Adobe's new Express app brings Firefly AI tools to iOS and Android – The Verge

A Google AI Watched 30,000 Hours of Video Games – Now It Makes Its Own – Singularity Hub

Posted: at 6:25 am

AI continues to generate plenty of light and heat. The best models in text and images, now commanding subscriptions and being woven into consumer products, are competing for inches. OpenAI, Google, and Anthropic are all, more or less, neck and neck.

It's no surprise, then, that AI researchers are looking to push generative models into new territory. As AI requires prodigious amounts of data, one way to forecast where things are going next is to look at what data is widely available online but still largely untapped.

Video, of which there is plenty, is an obvious next step. Indeed, last month, OpenAI previewed a new text-to-video AI called Sora that stunned onlookers.

But what about videogames?

It turns out there are quite a few gamer videos online. Google DeepMind says it trained a new AI, Genie, on 30,000 hours of curated video footage showing gamers playing simple platformers (think early Nintendo games), and now it can create examples of its own.

Genie turns a simple image, photo, or sketch into an interactive video game.

Given a prompt, say a drawing of a character and its surroundings, the AI can then take input from a player to move a character through its world. In a blog post, DeepMind showed Genie's creations navigating 2D landscapes, walking around or jumping between platforms. Like a snake eating its tail, some of these worlds were even sourced from AI-generated images.

In contrast to traditional video games, Genie generates these interactive worlds frame by frame. Given a prompt and command to move, it predicts the most likely next frames and creates them on the fly. It even learned to include a sense of parallax, a common feature in platformers where the foreground moves faster than the background.
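That frame-by-frame loop can be sketched in miniature. The toy below only illustrates the autoregressive idea, not Genie itself: `predict_next_frame` is a hypothetical stand-in for the learned model, moving a character marker around a tiny text grid in response to the same left/right/jump commands.

```python
# Toy sketch of frame-by-frame interactive generation. A real world model
# like Genie would *predict* the next frame with a neural network; this
# stub just moves an "@" character around a small 2D grid.

def predict_next_frame(frame, action):
    """Given the current frame and a player action, produce the next frame."""
    rows = [list(row) for row in frame]
    for r, row in enumerate(rows):
        for c, ch in enumerate(row):
            if ch == "@":
                rows[r][c] = "."
                dr, dc = {"left": (0, -1), "right": (0, 1), "jump": (-1, 0)}.get(action, (0, 0))
                nr = min(max(r + dr, 0), len(rows) - 1)      # clamp to grid
                nc = min(max(c + dc, 0), len(row) - 1)
                rows[nr][nc] = "@"
                return ["".join(x) for x in rows]
    return frame

# Start from a single prompt frame and roll the world out one frame at a time.
frame = ["....", ".@..", "...."]
trajectory = [frame]
for action in ["right", "right", "jump"]:
    frame = predict_next_frame(frame, action)
    trajectory.append(frame)
print(trajectory[-1])  # ['...@', '....', '....']
```

The point of the sketch is the control flow: each new frame depends only on the previous frame plus the latest command, which is exactly what lets Genie generate an interactive world with no game engine underneath.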

Notably, the AI's training didn't include labels. Rather, Genie learned to correlate input commands, like go left, right, or jump, with in-game movements simply by observing examples in its training. That is, when a character in a video moved left, there was no label linking the command to the motion. Genie figured that part out by itself. That means, potentially, future versions could be trained on as much applicable video as there is online.

The AI is an impressive proof of concept, but it's still very early in development, and DeepMind isn't planning to make the model public yet.

The games themselves are pixelated worlds streaming by at a plodding one frame per second. By comparison, contemporary video games can hit 60 or 120 frames per second. Also, like all generative algorithms, Genie generates strange or inconsistent visual artifacts. It's also prone to "hallucinating unrealistic futures," the team wrote in their paper describing the AI.

That said, there are a few reasons to believe Genie will improve from here.

Because the AI can learn from unlabeled online videos and is still a modest size, just 11 billion parameters, there's ample opportunity to scale up. Bigger models trained on more information tend to improve dramatically. And with a growing industry focused on inference, the process by which a trained AI performs tasks like generating images or text, it's likely to get faster.

DeepMind says Genie could help people, like professional developers, make video games. But like OpenAI, which believes Sora is about more than videos, the team is thinking bigger. The approach could go well beyond video games.

One example: AI that can control robots. The team trained a separate model on video of robotic arms completing various tasks. The model learned to manipulate the robots and handle a variety of objects.

DeepMind also said Genie-generated video game environments could be used to train AI agents. It's not a new strategy. In a 2021 paper, another DeepMind team outlined a video game called XLand that was populated by AI agents and an AI overlord generating tasks and games to challenge them. The idea that the next big step in AI will require algorithms that can train one another or generate synthetic training data is gaining traction.

All this is the latest salvo in an intense competition between OpenAI and Google to show progress in AI. While others in the field, like Anthropic, are advancing multimodal models akin to GPT-4, Google and OpenAI also seem focused on algorithms that simulate the world. Such algorithms may be better at planning and interaction. Both will be crucial skills for the AI agents both organizations seem intent on producing.

"Genie can be prompted with images it has never seen before, such as real world photographs or sketches, enabling people to interact with their imagined virtual worlds, essentially acting as a foundation world model," the researchers wrote in the Genie blog post. "We focus on videos of 2D platformer games and robotics, but our method is general and should work for any type of domain, and is scalable to ever larger internet datasets."

Similarly, when OpenAI previewed Sora last month, researchers suggested it might herald something more foundational: a "world simulator." That is, both teams seem to view the enormous cache of online video as a way to train AI to generate its own video, yes, but also to more effectively understand and operate out in the world, online or off.

Whether this pays dividends, or is sustainable long term, is an open question. The human brain operates on a light bulb's worth of power; generative AI uses up whole data centers. But it's best not to underestimate the forces at play right now, in terms of talent, tech, brains, and cash, aiming to not only improve AI but make it more efficient.

We've seen impressive progress in text, images, audio, and all three together. Videos are the next ingredient being thrown in the pot, and they may make for an even more potent brew.

Image Credit: Google DeepMind

Go here to read the rest:

A Google AI Watched 30,000 Hours of Video Games – Now It Makes Its Own - Singularity Hub

Posted in Ai | Comments Off on A Google AI Watched 30,000 Hours of Video Games – Now It Makes Its Own – Singularity Hub

Elliptic Curve Murmurations Found With AI Take Flight – Quanta Magazine

Posted: at 6:25 am

Almost immediately, the preprint garnered interest, particularly from Andrew Sutherland, a research scientist at MIT who is one of the managing editors of the LMFDB. Sutherland realized that 3 million elliptic curves weren't enough for his purposes. He wanted to look at much larger conductor ranges to see how robust the murmurations were. He pulled data from another immense repository of about 150 million elliptic curves. Still unsatisfied, he then pulled in data from a different repository with 300 million curves.

But even those weren't enough, "so I actually computed a new data set of over a billion elliptic curves, and that's what I used to compute the really high-res pictures," Sutherland said. The murmurations showed up whether he averaged over 15,000 elliptic curves at a time or a million at a time. The shape stayed the same even as he looked at the curves over larger and larger prime numbers, a phenomenon called scale invariance. Sutherland also realized that murmurations are not unique to elliptic curves, but also appear in more general L-functions. He wrote a letter summarizing his findings and sent it to Sarnak and Michael Rubinstein at the University of Waterloo.

"If there is a known explanation for it I expect you will know it," Sutherland wrote.

They didn't.

Lee, He and Oliver organized a workshop on murmurations in August 2023 at Brown University's Institute for Computational and Experimental Research in Mathematics (ICERM). Sarnak and Rubinstein came, as did Sarnak's student Nina Zubrilina.

Zubrilina presented her research into murmuration patterns in modular forms, special complex functions which, like elliptic curves, have associated L-functions. In modular forms with large conductors, the murmurations converge into a sharply defined curve, rather than forming a discernible but dispersed pattern. In a paper posted on October 11, 2023, Zubrilina proved that this type of murmuration follows an explicit formula she discovered.

"Nina's big achievement is that she's given a formula for this; I call it the Zubrilina murmuration density formula," Sarnak said. "Using very sophisticated math, she has proven an exact formula which fits the data perfectly."

Her formula is complicated, but Sarnak hails it as an important new kind of function, comparable to the Airy functions that define solutions to differential equations used in a variety of contexts in physics, ranging from optics to quantum mechanics.

Though Zubrilina's formula was the first, others have followed. "Every week now, there's a new paper out," Sarnak said, "mainly using Zubrilina's tools, explaining other aspects of murmurations."

Jonathan Bober, Andrew Booker and Min Lee of the University of Bristol, together with David Lowry-Duda of ICERM, proved the existence of a different type of murmuration in modular forms in another October paper. And Kyu-Hwan Lee, Oliver and Pozdnyakov proved the existence of murmurations in objects called Dirichlet characters that are closely related to L-functions.

Sutherland was impressed by the significant dose of luck that had led to the discovery of murmurations. If the elliptic curve data hadn't been ordered by conductor, the murmurations would have disappeared. "They were fortunate to be taking data from the LMFDB, which came pre-sorted according to the conductor," he said. "It's what relates an elliptic curve to the corresponding modular form, but that's not at all obvious. Two curves whose equations look very similar can have very different conductors." For example, Sutherland noted that y² = x³ − 11x + 6 has conductor 17, but flipping the minus sign to a plus sign, y² = x³ + 11x + 6 has conductor 100,736.
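That conductor jump can be glimpsed in the curves' discriminants, since every prime of bad reduction (and hence every prime dividing the conductor) must divide the discriminant of the equation. A minimal sketch follows; computing the actual conductor requires Tate's algorithm (as implemented in Sage or PARI), so this only evaluates the standard discriminant formula for a curve y² = x³ + ax + b.

```python
# Discriminant of a short Weierstrass curve y^2 = x^3 + a*x + b:
#   delta = -16 * (4*a^3 + 27*b^2).
# The primes dividing the conductor all divide this quantity, which is
# why flipping one sign can change the conductor so drastically.

def discriminant(a, b):
    return -16 * (4 * a**3 + 27 * b**2)

d1 = discriminant(-11, 6)  # Sutherland's curve with conductor 17
d2 = discriminant(11, 6)   # the sign-flipped curve with conductor 100,736

print(d1)  # 69632 = 2**12 * 17
print(d2)  # -100736 = -(2**7) * 787
```

One sign flip moves the discriminant from 2¹²·17 to −2⁷·787, so the two curves have entirely different primes of bad reduction despite their near-identical equations.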

Even then, the murmurations were only found because of Pozdnyakov's inexperience. "I don't think we would have found it without him," Oliver said, "because the experts traditionally normalize a_p to have absolute value 1. But he didn't normalize them, so the oscillations were very big and visible."
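The normalization Oliver mentions comes from the Hasse bound |a_p| ≤ 2√p, which is why experts typically plot a_p/(2√p) instead of a_p itself. A hedged sketch of where a_p comes from, using naive point counting over F_p (workable only for tiny primes; real computations use far faster algorithms):

```python
import math

# Naive point count of y^2 = x^3 + a*x + b over F_p, for an odd prime p
# of good reduction, giving a_p = p + 1 - #E(F_p). The Hasse bound keeps
# a_p / (2*sqrt(p)) in [-1, 1]; Pozdnyakov's unnormalized plots are what
# made the murmuration oscillations large enough to notice.

def a_p(a, b, p):
    squares = {(y * y) % p for y in range(p)}   # quadratic residues mod p
    count = 1                                    # the point at infinity
    for x in range(p):
        fx = (x**3 + a * x + b) % p
        if fx == 0:
            count += 1                           # one point, with y = 0
        elif fx in squares:
            count += 2                           # two square roots of fx
    return p + 1 - count

ap = a_p(-11, 6, 5)                 # the conductor-17 curve, at p = 5
normalized = ap / (2 * math.sqrt(5))
print(ap)                   # -2
print(round(normalized, 3)) # -0.447
```

Here a_5 = −2 sits well inside the Hasse window of ±2√5 ≈ ±4.47, and the normalized value is what the experts' standard plots would show.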

The statistical patterns that AI algorithms use to sort elliptic curves by rank exist in a parameter space with hundreds of dimensions, too many for people to sort through in their minds, let alone visualize, Oliver noted. But though machine learning found the hidden oscillations, "only later did we understand them to be the murmurations."

Editor's Note: Andrew Sutherland, Kyu-Hwan Lee and the L-functions and modular forms database (LMFDB) have all received funding from the Simons Foundation, which also funds this editorially independent publication. Simons Foundation funding decisions have no influence on our coverage. More information is available here.

View post:

Elliptic Curve Murmurations Found With AI Take Flight - Quanta Magazine

Posted in Ai | Comments Off on Elliptic Curve Murmurations Found With AI Take Flight – Quanta Magazine

Amid record high energy demand, America is running out of electricity – The Washington Post

Posted: at 6:24 am

Vast swaths of the United States are at risk of running short of power as electricity-hungry data centers and clean-technology factories proliferate around the country, leaving utilities and regulators grasping for credible plans to expand the nation's creaking power grid.

In Georgia, demand for industrial power is surging to record highs, with the projection of new electricity use for the next decade now 17 times what it was only recently. Arizona Public Service, the largest utility in that state, is also struggling to keep up, projecting it will be out of transmission capacity before the end of the decade absent major upgrades.

Northern Virginia needs the equivalent of several large nuclear power plants to serve all the new data centers planned and under construction. Texas, where electricity shortages are already routine on hot summer days, faces the same dilemma.

The soaring demand is touching off a scramble to try to squeeze more juice out of an aging power grid while pushing commercial customers to go to extraordinary lengths to lock down energy sources, such as building their own power plants.

"When you look at the numbers, it is staggering," said Jason Shaw, chairman of the Georgia Public Service Commission, which regulates electricity. "It makes you scratch your head and wonder how we ended up in this situation. How were the projections that far off? This has created a challenge like we have never seen before."

A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing. Tech firms like Amazon, Apple, Google, Meta and Microsoft are scouring the nation for sites for new data centers, and many lesser-known firms are also on the hunt.

The proliferation of crypto-mining, in which currencies like bitcoin are transacted and minted, is also driving data center growth. It is all putting new pressures on an overtaxed grid: the network of transmission lines and power stations that move electricity around the country. Bottlenecks are mounting, leaving both new generators of energy, particularly clean energy, and large consumers facing growing wait times for hookups.

The situation is sparking battles across the nation over who will pay for new power supplies, with regulators worrying that residential ratepayers could be stuck with the bill for costly upgrades. It also threatens to stifle the transition to cleaner energy, as utility executives lobby to delay the retirement of fossil fuel plants and bring more online. The power crunch imperils their ability to supply the energy that will be needed to charge the millions of electric cars and household appliances required to meet state and federal climate goals.

The nation's 2,700 data centers sapped more than 4 percent of the country's total electricity in 2022, according to the International Energy Agency. Its projections show that by 2026, they will consume 6 percent. Industry forecasts show the centers eating up a larger share of U.S. electricity in the years that follow, as demand from residential and smaller commercial facilities stays relatively flat thanks to steadily increasing efficiencies in appliances and heating and cooling systems.

Data center operators are clamoring to hook up to regional electricity grids at the same time the Biden administration's industrial policy is luring companies to build factories in the United States at a pace not seen in decades. That includes manufacturers of clean tech, such as solar panels and electric car batteries, which are being enticed by lucrative federal incentives. Companies announced plans to build or expand more than 155 factories in this country during the first half of the Biden administration, according to the Electric Power Research Institute, a research and development organization. Not since the early 1990s has factory-building accounted for such a large share of U.S. construction spending, according to the group.

Utility projections for the amount of power they will need over the next five years have nearly doubled and are expected to grow, according to a review of regulatory filings by the research firm Grid Strategies.

In the past, companies tried to site their data centers in areas with major internet infrastructure, a large pool of tech talent, and attractive government incentives. But these locations are getting tapped out.

Communities that had little connection to the computing industry now find themselves in the middle of a land rush, with data center developers flooding their markets with requests for grid hookups. Officials in Columbus, Ohio; Altoona, Iowa; and Fort Wayne, Ind., are being aggressively courted by data center developers. But power supply in some of these second-choice markets is already running low, pushing developers ever farther out, in some cases into cornfields, according to JLL, a commercial real estate firm that serves the tech industry.

Grid Strategies warns in its report that "there are real risks some regions may miss out on economic development opportunities because the grid can't keep up."

"Across the board, we are seeing power companies say, 'We don't know if we can handle this; we have to audit our system; we've never dealt with this kind of influx before,'" said Andy Cvengros, managing director of data center markets at JLL. "Everyone is now chasing power. They are willing to look everywhere for it."

"We saw a quadrupling of land values in some parts of Columbus, and a tripling in areas of Chicago," he said. "It's not about the land. It is about access to power." Some developers, he said, have had to sell the property they bought at inflated prices at a loss, after utilities became overwhelmed by the rush for grid hookups.

It is all happening at the same time the energy transition is steering large numbers of Americans to rely on the power grid to fuel vehicles, heat pumps, induction stoves and all manner of other household appliances that previously ran on fossil fuels. A huge amount of clean energy is also needed to create the green hydrogen championed by the White House, as developers rush to build plants that can produce the powerful zero-emissions fuel, lured by generous federal subsidies.

Planners are increasingly concerned that the grid won't be green enough or powerful enough to meet these demands.

Already, soaring power consumption is delaying coal plant closures in Kansas, Nebraska, Wisconsin and South Carolina.

In Georgia, the state's major power company, Georgia Power, stunned regulators when it revealed recently how wildly off its projections were, pointing to data centers as the main culprit.

The demand has Georgia officials rethinking the state's policy of offering incentives to lure computing operations, which generate few jobs but can boost community budgets through the hefty property taxes they pay. The top leaders of Georgia's House and Senate, both Republicans, are championing a pause in data center incentives.

Georgia regulators, meanwhile, are exploring how to protect ratepayers while ensuring there is enough power to meet the needs of the state's most-prized new tenants: clean-technology companies. Factories supplying the electric vehicle and green-energy markets have been rushing to locate in Georgia in large part on promises of cheap, reliable electricity.

"When the data center industry began looking for new hubs, Atlanta was like, 'Bring it on,'" said Pat Lynch, who leads the Data Center Solutions team at real estate giant CBRE. "Now Georgia Power is warning of limitations. ... Utility shortages in the face of these data center demands are happening in almost every market."

A similar dynamic is playing out in a very different region: the Pacific Northwest. In Oregon, Portland General Electric recently doubled its forecast for new electricity demand over the next five years, citing data centers and rapid industrial growth as the drivers.

That power crunch threw a wrench into the plans of Michael Halaburda and Arman Khalili, longtime data center developers whose latest project involves converting a mothballed tile factory in the Portland area. The two were under the impression only a couple of months ago that they would have no problem getting the electricity they needed to run the place. Then the power company alerted them that it would need to do a "line and load" study to assess whether it could supply the facility with 60 megawatts of electricity, roughly the amount needed to power 45,000 homes.

The Portland project Halaburda and Khalili are developing will now be powered in large part by off-the-grid, high-tech fuel cells that convert natural gas into low-emissions electricity. The technology will be supplemented by whatever power can be secured from the grid. The partners decided that on their next project, in South Texas, theyre not going to take their chances with the grid at all. Instead, they will drill thousands of feet into the ground to draw geothermal energy.

Halaburda sees the growth as good for the country and the economy. "But no one took into consideration where this is all going," he said. "In the next couple of years, unless there is a real focus on expanding the grid and making it more robust, we are going to see opportunities fall by the wayside because we can't get power to where it is needed."

Companies are increasingly turning to such off-the-grid experiments as their frustration with the logjam in the nation's traditional electricity network mounts. Microsoft and Google are among the firms hoping that energy-intensive industrial operations can ultimately be powered by small nuclear plants on-site, with Microsoft even putting AI to work trying to streamline the burdensome process of getting plants approved. Microsoft has also inked a deal to buy power from a company trying to develop zero-emissions fusion power. But going off the grid brings its own big regulatory and land acquisition challenges. The type of nuclear plants envisioned, for example, are not yet even operational in the United States. Fusion power does not yet exist.

The big tech companies are also exploring ways AI can help make the grid operate more efficiently. And they are developing platforms that during times of peak power demand can shift compute tasks and their associated energy consumption to the times and places where carbon-free energy is available on the grid, according to Google. But meeting both their zero-emissions pledges and their AI innovation ambitions is becoming increasingly complicated as the energy needs of their data centers grow.

"These problems are not going to go away," said Michael Ortiz, CEO of Layer 9 Data Centers, a U.S. company that is looking to avoid the logjam here by building in Mexico. "Data centers are going to have to become more efficient, and we need to be using more clean sources of efficient energy, like nuclear."

Officials at Equinix, one of the world's largest data center companies, said they have been experimenting with fuel cells as backup power, but they remain hopeful they can keep the power grid as their main source of electricity for new projects.

The logjam is already pushing officials overseeing the clean-energy transition at some of the nations largest airports to look beyond the grid. The amount of energy they will need to charge fleets of electric rental vehicles and ground maintenance trucks alone is immense. An analysis shows electricity demand doubling by 2030 at both the Denver and Minneapolis airports. By 2040, they will need more than triple the electricity they are using now, according to the study, commissioned by car rental giant Enterprise, Xcel Energy and Jacobs, a consulting firm.

"Utilities are not going to be able to move quickly enough to provide all this capacity," said Christine Weydig, vice president of transportation at AlphaStruxure, which designs and operates clean-energy projects. "The infrastructure is not there. Different solutions will be needed." Airports, she said, are looking into dramatically expanding the use of clean-power microgrids they can build on-site.

The Biden administration has made easing the grid bottleneck a priority, but it is a politically fraught process, and federal powers are limited. Building the transmission lines and transfer stations needed involves huge land acquisitions, exhaustive environmental reviews and negotiations to determine who should pay what costs.

The process runs through state regulatory agencies, and fights between states over who gets stuck with the bill and where power lines should go routinely sink and delay proposed projects. The amount of new transmission line installed in the United States has dropped sharply since 2013, when 4,000 miles were added. Now, the nation struggles to bring online even 1,000 new miles a year. The slowdown has real consequences not just for companies but for the climate. A group of scientists led by Princeton University professor Jesse Jenkins warned in a report that by 2030 the United States risks losing out on 80 percent of the potential emission reductions from President Biden's signature climate law, the Inflation Reduction Act, if the pace of transmission construction does not pick up dramatically now.

While the proliferation of data centers puts more pressure on states to approve new transmission lines, it also complicates the task. Officials in Maryland, for example, are protesting a plan for $5.2 billion in infrastructure that would transmit power to huge data centers in Loudoun County, Va. The Maryland Office of People's Counsel, a government agency that advocates for ratepayers, called grid operator PJM's plan "fundamentally unfair," arguing it could leave Maryland utility customers paying for power transmission to data centers that Virginia aggressively courted and is leveraging for a windfall in tax revenue.

Tensions over who gets power from the grid and how it gets to them are only going to intensify as the supply becomes scarcer.

In Texas, a dramatic increase in data centers for crypto mining is touching off a debate over whether they are a costly drain on an overtaxed grid. An analysis by the consulting firm Wood Mackenzie found that the energy needed by crypto operations aiming to link to the grid would equal a quarter of the electricity used in the state at peak demand. Unlike data centers operated by big tech companies such as Google and Meta, crypto miners generally don't build renewable-energy projects with the aim of supplying enough zero-emissions energy to the grid to cover their operations.

The result, said Ben Hertz-Shargel, who authored the Wood Mackenzie analysis, is that crypto's drain on the grid threatens to inhibit the ability of Texas to power other energy-hungry operations that could drive innovation and economic growth, such as factories that produce zero-emissions green hydrogen fuel or industrial charging depots that enable electrification of truck and bus fleets.

But after decades in which power was readily available, regulators and utility executives across the country generally are not empowered to prioritize which projects get connected. It is first come, first served. And the line is growing longer. To answer the call, some states have passed laws to protect crypto minings access to huge amounts of power.

"Lawmakers need to think about this," Hertz-Shargel said of allocating an increasingly limited supply of power. "There is a risk that strategic industries they want in their states are going to have a challenging time setting up in those places."

See the original post here:

Amid record high energy demand, America is running out of electricity - The Washington Post

Posted in Ai | Comments Off on Amid record high energy demand, America is running out of electricity – The Washington Post

‘The Worlds I See’ by AI visionary Fei-Fei Li ’99 selected as Princeton Pre-read – Princeton University

Posted: February 26, 2024 at 12:18 am

Trailblazing computer scientist Fei-Fei Li's memoir "The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI" has been selected as the next Princeton Pre-read.

The book, which connects Li's personal story as a young immigrant and scientist with the origin stories of artificial intelligence and human-centered AI, was named to technology book lists for 2023 by the Financial Times and former President Barack Obama.

President Christopher L. Eisgruber, who began the Pre-read tradition in 2013, said he hopes Li's story will inspire incoming first-year students. After reading the book over the summer, members of the Class of 2028 will discuss "The Worlds I See" with Li and Eisgruber at the Pre-read Assembly during Orientation.

"Wherever your interests lie, in the humanities, the social sciences, the natural sciences, or engineering, I hope that Professor Li's example will inspire and encourage you as you explore the joys of learning at Princeton, a place that Professor Li calls 'a paradise for the intellect,'" Eisgruber said in a foreword written for the Pre-read edition of the book.

Li is the inaugural Sequoia Capital Professor in Computer Science at Stanford University and co-director of Stanford's Human-Centered Artificial Intelligence Institute. Last year, she was named to the TIME100 list of the most influential people in AI.

She graduated from Princeton in 1999 with a degree in physics and will be honored with the University's Woodrow Wilson Award during Alumni Day on Feb. 24.

Li has spent two decades at the forefront of research related to artificial intelligence, machine learning, deep learning and computer vision.

While on the faculty at Princeton in 2009, she began the project that became ImageNet, an online database that was instrumental in the development of computer vision. Princeton computer scientists Jia Deng, Kai Li and Olga Russakovsky are also members of the ImageNet senior research team.

In 2017, Fei-Fei Li and Russakovsky co-founded AI4All, which supports educational programs designed to introduce high school students with diverse perspectives, voices and experiences to the field of AI to unlock its potential to benefit humanity.

Li is an elected member of the National Academy of Engineering, the National Academy of Medicine, and the American Academy of Arts and Sciences.

Courtesy of Macmillan Publishers

"The Worlds I See" shares her firsthand account of how AI has already revolutionized our world and what it means for our future. Li writes about her work with national and local policymakers to ensure the responsible use of technology. She has testified on the issue before U.S. Senate and Congressional committees.

"Professor Li beautifully illuminates the persistence that science demands, the disappointments and detours that are inevitable parts of research, and the discoveries, both large and small, that sustain her energy," Eisgruber said.

Li also shares deeply personal stories in her memoir, from moving to the U.S. from China at age 15 to flourishing as an undergraduate at Princeton while also helping run her family's dry-cleaning business.

"Professor Li's book weaves together multiple narratives," Eisgruber said. "One of them is about her life as a Chinese immigrant in America. She writes poignantly about the challenges that she and her family faced, the opportunities they treasured, and her search for a sense of belonging in environments that sometimes made her feel like an outsider."

During a talk on campus last November, Li said she sees a deep cosmic connection between her experiences as an immigrant and a scientist.

"They share one very interesting characteristic, which is the uncertainty," Li said during the Princeton University Public Lecture. "When you are an immigrant, or you are at the beginning of your young adult life, there is so much unknown. ... You have to explore and you have to really find your way. It is very similar to becoming a scientist."

Li said she became a scientist to find answers to the unknown, and in "The Worlds I See" she describes her quest for a "North Star" in science and life.

In the Pre-read foreword, Eisgruber encouraged students to think about their own North Stars and what may guide them through their Princeton journeys.

Copies of "The Worlds I See," published by Macmillan Publishers, will be sent this summer to students enrolled in the Class of 2028. (Information on admission dates and deadlines for the Class of 2028 is available on the Admission website.)

More information about the Pre-read tradition for first-year students can be found on the Pre-read website. A list of previous Pre-read books follows.

2013 "The Honor Code: How Moral Revolutions Happen" by Kwame Anthony Appiah

2014 "Meaning in Life and Why It Matters" by Susan Wolf

2015 "Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do" by Claude Steele

2016 "Our Declaration: A Reading of the Declaration of Independence in Defense of Equality" by Danielle Allen

2017 "What Is Populism?" by Jan-Werner Müller

2018 "Speak Freely: Why Universities Must Defend Free Speech" by Keith Whittington

2019 "Stand Out of Our Light: Freedom and Resistance in the Attention Economy" by James Williams

2020 "This America" by Jill Lepore

2021 "Moving Up Without Losing Your Way" by Jennifer Morton

2022 "Every Day the River Changes" by Jordan Salama

2023 "How to Stand Up to a Dictator: The Fight for Our Future" by Maria Ressa

Read the original post:

'The Worlds I See' by AI visionary Fei-Fei Li '99 selected as Princeton Pre-read - Princeton University

Posted in Ai | Comments Off on ‘The Worlds I See’ by AI visionary Fei-Fei Li ’99 selected as Princeton Pre-read – Princeton University

Vatican research group’s book outlines AI’s ‘brave new world’ – National Catholic Reporter

Posted: at 12:18 am

In her highly acclaimed book God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning, Meghan O'Gieblyn claims that "[t]oday artificial intelligence and information technology have absorbed many of the questions that were once taken up by theologians and philosophers: the mind's relationship to the body, the question of free will, the possibility of immortality." Encountering Artificial Intelligence: Ethical and Anthropological Investigations is evidence that Catholic theologians and philosophers, among others, aren't quite willing yet to cede the field and retreat into merely historical studies.

Encountering Artificial Intelligence: Ethical and Anthropological Investigations

A.I. Research Group for the Centre for Digital Culture and the Dicastery for Culture and Education of the Holy See

274 pages; Pickwick Publications

$37.00

At the same time, this book confirms O'Gieblyn's point that advances in AI have raised anew, and become the intellectual background for, what the authors of Encountering Artificial Intelligence term "a set of existential questions about the meaning and nature not only of intelligence but also of personhood, consciousness, and relationship." In brief, how to think about AI has raised deep questions about how to think about human beings.

Encountering Artificial Intelligence is the initial publication in the book series Theological Investigations of Artificial Intelligence, a collaboration between the Journal of Moral Theology and the AI Research Group for the Centre for Digital Culture, which comprises North American theologians, philosophers and ethicists assembled at the invitation of the Vatican.

The lead authors of this book, which represents several years of work, are Matthew Gaudet (Santa Clara University), Noreen Herzfeld (College of St. Benedict), Paul Scherz (University of Virginia) and Jordan Wales (Hillsdale College); 16 further contributing authors are also credited. The book is presented as an instrumentum laboris, which is to say, "a point of departure for further discussion and reflection." Judged by that aim, it is a great success. It is a stimulant to wonder.

The book is organized in two parts. The first takes up anthropological questions no less than "the meaning of terms such as person, intelligence, consciousness, and relationship" while the second concentrates on "ethical issues already emerging from the AI world," such as the massive accumulation of power and wealth by big technology companies. (The "cloud," after all, depends on huge economies of scale and intensive extraction of the earth's minerals.) As the authors acknowledge, these sets of questions are interconnected. For example, "the way that we think about and treat AI will shape our own exercise of personhood." Thus, anthropological questions have high ethical stakes.

The book's premise is that the Catholic intellectual and social teaching traditions, far from being obsolete in our disenchanted, secular age, offer conceptual tools to help us grapple with the challenges of our brave new world. The theology of the Trinity figures pivotally in the book's analysis of personhood and consciousness. "Ultimately," the authors claim, "an understanding of consciousness must be grounded in the very being of the Triune God, whose inner life is loving mutual self-gift." In addressing emerging ethical issues, the authors turn frequently to Pope Francis' critique of the technocratic paradigm and his call for a culture of encounter, which they claim give us "specific guidance for addressing the pressing concerns of this current moment."

Part of the usefulness of the book is that, at points, its investigations clearly need to go deeper. For example, the book's turn to the heavy machinery of the theology of the Trinity in order to shed light on personhood short-circuits the philosophical reflection it admirably begins. A key question the authors raise is "whether [machines] can have that qualitative and subjectively private experience that we call consciousness." But in what sense is consciousness an "experience"?

It seems, at least, that we don't experience it in the same way that we have the experience of seeing the sky as blue unless we want to reduce consciousness precisely to such experiences. Arguably, though, consciousness is better understood either as the necessary condition for having such an experience, or as an awareness or form of knowledge (consider the etymology of the term) that goes along with it and is accessible through it. One way or the other, the question needs more attention and care.

It is also important for the discussion of AI that there are distinct forms or levels of consciousness. When I interact with my dog, he is evidently aware of me, but he gives little evidence of being aware of my awareness of his awareness of me. (He is hopelessly bad, accordingly, at trying to trick or deceive me.) By contrast, when I interact with another human being (say, my wife), there is at play what the philosopher Stephen Darwall calls "a rich set of higher-order attitudes: I am aware of her awareness of me, aware of her awareness of my awareness of her, aware of her awareness of my awareness of her awareness of me, and so on." There's a reason why the science fiction writer and essayist Ted Chiang has claimed that AI should have been called merely applied statistics: It's just not in the same ballpark as human beings, or even animals like dogs.

An interesting counter to this line of thought is that AI systems, embodied as robots, may eventually be able to behave in ways indistinguishable from human beings and other animals. In that case, what grounds would we have to deny that the systems are conscious? Further, if we do want to deny that behavior serves as evidence of consciousness, wouldn't we also have to deny it in the case of human beings and other animals? Skepticism about AI would give rise to rampant skepticism about there being other minds.

The authors counter this worry by doubling down on the claim that AI lacks "a personal, subjective grasp of reality, an intentional engagement in it." From this point of view, so long as AI systems lack this sort of consciousness, it follows that they cannot, for example, "be our friends, for they cannot engage in the voluntary empathic self-gift that characterizes the intimacy of friends." But I wonder if this way of countering the worry goes at it backward.

Perhaps what we need first and foremost is not a "phenomenology of consciousness" (in support of the claim that AI systems don't have it in the way we do), but a "phenomenology of friendship" (to make it clear that AI systems don't provide it as human beings can, with "empathic self-gift"). Perhaps, in other words, the focus on consciousness as the human difference isn't the place to start. A strange moment in the book, when it is allowed that God could make a machine conscious and thereby like us, suggests a deeper confusion. Whatever else consciousness is, it's surely not a thing that could be plopped into other things, like life into the puppet Pinocchio. (Not that life is such a thing either!)

The second part of the book, on emerging ethical issues, doesn't provoke the same depth of wonder as the first, but it does admirably call attention to the question of who benefits in the race to implement AI. Without a doubt, big corporations like Microsoft and Google do; it's by no means a given that the common good will benefit at all.

The book also offers some wise advice. For example, in a Mennonite-like moment, "We ought to analyze the use of AI and AI-embedded technologies in terms of how they foster or diminish relational virtues so that we strengthen fraternity, social friendship, and our relationship with the environment." Further, we "ought to inquire into ways that AI and related technologies deepen or diminish our experience of awe and wonder ..."

Amen to that. Encountering Artificial Intelligence makes an important start.

Originally posted here:

Vatican research group's book outlines AI's 'brave new world' - National Catholic Reporter

Posted in Ai | Comments Off on Vatican research group’s book outlines AI’s ‘brave new world’ – National Catholic Reporter

Honor’s Magic 6 Pro launches internationally with AI-powered eye tracking on the way – The Verge

Posted: at 12:18 am

A month and a half after debuting the Magic 6 Pro in China, Honor is announcing global availability of its latest flagship at Mobile World Congress in Barcelona, Spain. Alongside it, the company has also announced pricing for the new Porsche Design Honor Magic V2 RSR, a special edition of the Magic V2 foldable with higher specs and a design themed around the German car brand.

The Magic 6 Pro is set to retail for €1,299 (£1,099.99, around $1,407) with 12GB of RAM and 512GB of storage and be available from March 1st, while the Porsche Design Magic V2 RSR will cost €2,699 (£2,349.99, around $2,625) with 16GB of RAM and 1TB of storage and will ship on March 18th. Expect both to be available in European markets, but they're unlikely to be officially available in the US.

Since it's 2024, naturally, a big part of Honor's pitch for the Magic 6 Pro is its AI-powered features. For starters, Honor says it will eventually support the AI-powered eye-tracking feature it teased at Qualcomm's Snapdragon Summit last year. Honor claims the feature will be able to spot when you're looking at notifications on the Dynamic Island-style interface at the top of the screen (Honor calls this its Magic Capsule) and open the relevant app without you needing to physically tap on it. I, for one, will be very interested in seeing how Honor draws the line between a quick glance and an intentional look.

Other AI-powered features include Magic Portal, which attempts to spot when details like events or addresses are mentioned in your messages and automatically link to the appropriate maps or calendar app. Honor also says it's developing an AI-powered tool that'll auto-generate a video based on your photos using a text prompt, which Honor claims is processed on-device using its MagicLM technology. (Yes, the company remains a big fan of its Magic branding.)

Aside from its AI-powered features, this is a more typical flagship smartphone. It's powered by Qualcomm's latest Snapdragon 8 Gen 3 processor and has a large 5,600mAh battery that can be fast-charged at up to 80W over a cable or 66W wirelessly. Its screen is a 6.8-inch 120Hz OLED display with a resolution of 2800 x 1280 and a claimed peak brightness of up to 5,000 nits (though, in regular usage, the maximum brightness of the screen will be closer to 1,800 nits).

On the back, you get a trio of cameras built into the phone's massive circular camera bump. Its 50-megapixel main camera has a variable aperture that can switch between f/1.4 and f/2.0 depending on how much depth of field you want in your shots. That's joined by a 50-megapixel ultrawide and a 180-megapixel periscope with a 2.5x optical zoom. The whole device is IP68 rated for dust and water resistance, which is the highest level of protection you typically get on mainstream phones.

Alongside the international launch of the Magic 6 Pro, Honor is also bringing a Porsche-themed version of the Magic V2 foldable I reviewed back in January to international markets. As well as getting the words Porsche Design printed on the back of the phone, and a camera bump design that's supposed to evoke the look of the German sports car, the Porsche version of the phone has 1TB of onboard storage rather than 512GB, more durable glass on its external display, and comes with a stylus in the box. A similar Porsche-themed edition of the Magic 6 is coming in March, but Honor isn't sharing any images of the design just yet.

Otherwise, the Porsche Design Honor Magic V2 RSR is the same as the Magic V2 that preceded it. It maintains the same thin and light design, measuring just 9.9mm thick when folded (not including the camera bump) and weighing in at 234 grams thanks in part to its titanium hinge construction. Camera setups are the same across the two devices, with a 50-megapixel main camera, 50-megapixel ultrawide, and a 20-megapixel telephoto.

Unfortunately, despite this being a newly launched variant, the Porsche edition of the phone still uses last year's Snapdragon 8 Gen 2 processor due to the Magic V2 having originally launched in China way back in July 2023.

Photography by Jon Porter / The Verge

Read this article:

Honor's Magic 6 Pro launches internationally with AI-powered eye tracking on the way - The Verge

Posted in Ai | Comments Off on Honor’s Magic 6 Pro launches internationally with AI-powered eye tracking on the way – The Verge

Google explains Gemini’s embarrassing AI pictures of diverse Nazis – The Verge

Posted: at 12:18 am

Google has issued an explanation for the embarrassing and wrong images generated by its Gemini AI tool. In a blog post on Friday, Google says its model produced inaccurate historical images due to tuning issues. The Verge and others caught Gemini generating images of racially diverse Nazis and US Founding Fathers earlier this week.

"Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range," Prabhakar Raghavan, Google's senior vice president, writes in the post. "And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive."

This led Gemini AI to overcompensate in some cases, like what we saw with the images of the racially diverse Nazis. It also caused Gemini to become over-conservative. This resulted in it refusing to generate specific images of a Black person or a white person when prompted.

In the blog post, Raghavan says Google is sorry the feature didn't work well. He also notes that Google wants Gemini to work well for everyone and that means getting depictions of different types of people (including different ethnicities) when you ask for images of football players or someone walking a dog. But, he says:

"However, if you prompt Gemini for images of a specific type of person, such as 'a Black teacher in a classroom,' or 'a white veterinarian with a dog,' or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for."

Raghavan says Google is going to continue testing Gemini AI's image-generation abilities and "work to improve it significantly" before reenabling it. "As we've said from the beginning, hallucinations are a known challenge with all LLMs [large language models]; there are instances where the AI just gets things wrong," Raghavan notes. "This is something that we're constantly working on improving."

See the original post here:

Google explains Gemini's embarrassing AI pictures of diverse Nazis - The Verge

Posted in Ai | Comments Off on Google explains Gemini’s embarrassing AI pictures of diverse Nazis – The Verge

Google cut a deal with Reddit for AI training data – The Verge

Posted: at 12:18 am

The collaboration will give Google access to Reddit's data API, which delivers real-time content from Reddit's platform. This will provide Google with an efficient and structured way to access the vast corpus of existing content on Reddit, while also allowing the company to display content from Reddit in new ways across its products.

When Reddit CEO Steve Huffman spoke to The Verge last year about Reddit's API changes and the subsequent protests, he said, "The API usage is about covering costs and data licensing is a new potential business for us," suggesting Reddit may seek out similar revenue-generating arrangements in the future.

The partnership will give Reddit access to Vertex AI as well, Google's AI-powered service that's supposed to help companies improve their search results. Reddit says the change doesn't affect the company's data API terms, which prevent developers or companies from accessing it for commercial purposes without approval.

Despite this deal, Google and Reddit haven't always seen eye to eye. Reddit previously threatened to block Google from crawling its site over concerns that companies would use its data for free to train AI models. Reddit is also poised to announce its initial public offering within the coming weeks, and it's likely making this change as part of its effort to boost its valuation, which sat at more than $10 billion in 2021.

Read more:

Google cut a deal with Reddit for AI training data - The Verge

Posted in Ai | Comments Off on Google cut a deal with Reddit for AI training data – The Verge
