Daily Archives: April 12, 2024

Nvidia, Google Expand Partnership With Nvidia Blackwell Coming to Google Cloud in 2025 – Investopedia

Posted: April 12, 2024 at 5:53 am

Key Takeaways

Alphabet's (GOOGL) Google announced that Blackwell, Nvidia's (NVDA) latest and most capable artificial intelligence (AI) tech, will be available to cloud customers in early 2025, ahead of the Google Cloud Next keynote on Tuesday.

The Blackwell platform is coming to Google Cloud in two variations, the HGX B200 and the GB200 NVL72. Nvidia said the HGX B200 is designed for "the most demanding AI," data analytics, and high-performance computing workloads, while the GB200 NVL72 is designed for "next-frontier, massive-scale" model training and real-time inferencing.

Nvidia unveiled the Blackwell platform, the latest version of its AI-powering tech, in March, with analysts calling it the "most ambitious project in Silicon Valley." The chipmaker, which has established itself as an early leader of the AI boom, had named Amazon (AMZN) and Microsoft (MSFT) alongside Google as partners set to use the new hardware.

Google also announced an initiative to help startups accelerate the creation of generative AI (genAI) applications and services in collaboration with Nvidia. The two companies teamed up to offer some members of Nvidia Inception, the chipmaker's startup program, up to $350,000 of Google Cloud credits for startups focusing on AI.

Shares of Google parent Alphabet were up 0.8% at $156.20 as of 11:15 a.m. ET Tuesday and have gained close to 13% year to date. While Nvidia shares were down 3% at $843.68 in early trading Tuesday, they have surged 75% since the start of the year.


Exclusive: Google Workers Revolt Over $1.2 Billion Israel Contract – TIME

Posted: at 5:52 am

In midtown Manhattan on March 4, Google's managing director for Israel, Barak Regev, was addressing a conference promoting the Israeli tech industry when a member of the audience stood up in protest. "I am a Google Cloud software engineer, and I refuse to build technology that powers genocide, apartheid, or surveillance," shouted the protester, wearing an orange t-shirt emblazoned with a white Google logo. "No tech for apartheid!"

The Google worker, a 23-year-old software engineer named Eddie Hatfield, was booed by the audience and quickly bundled out of the room, a video of the event shows. After a pause, Regev addressed the act of protest. "One of the privileges of working in a company which represents democratic values is giving space for different opinions," he told the crowd.

Three days later, Google fired Hatfield.

Hatfield is part of a growing movement inside Google that is calling on the company to drop Project Nimbus, a $1.2 billion contract with Israel, jointly held with Amazon. The protest group, called No Tech for Apartheid, now has more than 200 Google employees closely involved in organizing, according to members, who say there are hundreds more workers sympathetic to their goals. TIME spoke to five current and five former Google workers for this story, many of whom described a growing sense of anger at the possibility of Google aiding Israel in its war in Gaza. Two of the former Google workers said they had resigned from Google in the last month in protest against Project Nimbus. These resignations, and Hatfield's identity, have not previously been reported.

No Tech for Apartheid's protest is as much about what the public doesn't know about Project Nimbus as what it does. The contract is for Google and Amazon to provide AI and cloud computing services to the Israeli government and military, according to the Israeli finance ministry, which announced the deal in 2021. Nimbus reportedly involves Google establishing a secure instance of Google Cloud on Israeli soil, which would allow the Israeli government to perform large-scale data analysis, AI training, database hosting, and other forms of powerful computing using Google's technology, with little oversight by the company. Google documents, first reported by the Intercept in 2022, suggest that the Google services on offer to Israel via its Cloud have capabilities such as AI-enabled facial detection, automated image categorization, and object tracking.

Further details of the contract are scarce or non-existent, and much of the workers' frustration lies in what they say is Google's lack of transparency about what else Project Nimbus entails and the full nature of the company's relationship with Israel. Neither Google, nor Amazon, nor Israel has described the specific capabilities on offer to Israel under the contract. In a statement, a Google spokesperson said: "We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education. Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services." All Google Cloud customers, the spokesperson said, must abide by the company's terms of service and acceptable use policy. That policy forbids the use of Google services to violate the legal rights of others, or to engage in violence that can cause death, serious harm, or injury. An Amazon spokesperson said the company is focused on "making the benefits of our world-leading cloud technology available to all our customers, wherever they are located," adding it is supporting employees affected by the war and working with humanitarian agencies. The Israeli government did not immediately respond to requests for comment.

There is no evidence Google or Amazon's technology has been used in killings of civilians. The Google workers say they base their protests on three main sources of concern: the Israeli finance ministry's explicit statement in 2021 that Nimbus would be used by the ministry of defense; the nature of the services likely available to the Israeli government within Google's cloud; and the apparent inability of Google to monitor what Israel might be doing with its technology. Workers worry that Google's powerful AI and cloud computing tools could be used for surveillance, military targeting, or other forms of weaponization. Under the terms of the contract, Google and Amazon reportedly cannot prevent particular arms of the government, including the Israeli military, from using their services, and cannot cancel the contract due to public pressure.

Recent reports in the Israeli press indicate that airstrikes are being carried out with the support of an AI targeting system; it is not known which cloud provider, if any, provides the computing infrastructure likely required for such a system to run. Google workers note that, for security reasons, tech companies often have very limited insight, if any, into what occurs on the sovereign cloud servers of their government clients. "We don't have a lot of oversight into what cloud customers are doing, for understandable privacy reasons," says Jackie Kay, a research engineer at Google's DeepMind AI lab. "But then what assurance do we have that customers aren't abusing this technology for military purposes?"

With new revelations continuing to trickle out about AI's role in Israel's bombing campaign in Gaza; the recent killings of foreign aid workers by the Israeli military; and even President Biden now urging Israel to begin an immediate ceasefire, No Tech for Apartheid's members say their campaign is growing in strength. A previous bout of worker organizing inside Google successfully pressured the company to drop a separate Pentagon contract in 2018. Now, in a wider climate of growing international indignation at the collateral damage of Israel's war in Gaza, many workers see Google's firing of Hatfield as an attempt at silencing a growing threat to its business. "I think Google fired me because they saw how much traction this movement within Google is gaining," says Hatfield, who agreed to speak on the record for the first time for this article. "I think they wanted to cause a kind of chilling effect by firing me, to make an example out of me."

Hatfield says that his act of protest was the culmination of an internal effort, during which he questioned Google leaders about Project Nimbus but felt he was getting nowhere. "I was told by my manager that I can't let these concerns affect my work," he tells TIME. "Which is kind of ironic, because I see it as part of my work. I'm trying to ensure that the users of my work are safe. How can I work on what I'm being told to do, if I don't think it's safe?"

Three days after he disrupted the conference, Hatfield was called into a meeting with his Google manager and an HR representative, he says. He was told he had damaged the company's public image and would be terminated with immediate effect. "This employee disrupted a coworker who was giving a presentation, interfering with an official company-sponsored event," the Google spokesperson said in a statement to TIME. "This behavior is not okay, regardless of the issue, and the employee was terminated for violating our policies."

Seeing Google fire Hatfield only confirmed to Vidana Abdel Khalek that she should resign from the company. On March 25, she pressed send on an email to company leaders, including CEO Sundar Pichai, announcing her decision to quit in protest over Project Nimbus. "No one came to Google to work on offensive military technology," the former trust and safety policy employee wrote in the email, seen by TIME, which noted that over 13,000 children had been killed by Israeli attacks on Gaza since the beginning of the war; that Israel had fired upon Palestinians attempting to reach humanitarian aid shipments; and had fired upon convoys of evacuating refugees. "Through Nimbus, your organization provides cloud AI technology to this government and is thereby contributing to these horrors," the email said.

Workers argue that Google's relationship with Israel runs afoul of the company's AI principles, which state that the company will not pursue applications of AI that are likely to cause overall harm, contribute to weapons or other technologies whose purpose is to cause injury, or build technologies whose purpose contravenes widely accepted principles of international law and human rights. "If you are providing cloud AI technology to a government which you know is committing a genocide, and which you know is misusing this technology to harm innocent civilians, then you're far from being neutral," Khalek says. "If anything, you are now complicit."

Two workers for Google DeepMind, the company's AI division, expressed fears that the lab's ability to prevent its AI tools from being used for military purposes had been eroded following a restructure last year. When it was acquired by Google in 2014, DeepMind reportedly signed an agreement that said its technology would never be used for military or surveillance purposes. But a series of governance changes ended with DeepMind being bound by the same AI principles that apply to Google at large. Those principles haven't prevented Google from signing lucrative military contracts with the Pentagon and Israel. "While DeepMind may have been unhappy to work on military AI or defense contracts in the past, I do think this isn't really our decision any more," said one DeepMind employee who asked not to be named because they were not authorized to speak publicly. "Google DeepMind produces frontier AI models that are deployed via [Google Cloud's Vertex AI platform] that can then be sold to public-sector and other clients. One of those clients is Israel."

"For me to feel comfortable with contributing to an AI model that is released on [Google] Cloud, I would want there to be some accountability where usage can be revoked if, for example, it is being used for surveillance or military purposes that contravene international norms," says Kay, the DeepMind employee. "Those principles apply to applications that DeepMind develops, but it's ambiguous if they apply to Google's Cloud customers."

A Google spokesperson did not address specific questions about DeepMind for this story.

Other Google workers point to what they know about Google Cloud as a source of concern about Project Nimbus. The cloud technology that the company ordinarily offers to its clients includes a tool called AutoML that allows a user to rapidly train a machine learning model using a custom dataset. Three workers interviewed by TIME said that the Israeli government could theoretically use AutoML to build a surveillance or targeting tool. There is no evidence that Israel has used Google Cloud to build such a tool, although the New York Times recently reported that Israeli soldiers were using the freely available facial recognition feature on Google Photos, along with other non-Google technologies, to identify suspects at checkpoints. "Providing powerful technology to an institution that has demonstrated the desire to abuse and weaponize AI for all parts of war is an unethical decision," says Gabriel Schubiner, a former researcher at Google. "It's a betrayal of all the engineers that are putting work into Google Cloud."

A Google spokesperson did not address a question asking whether AutoML was provided to Israel under Project Nimbus.

Members of No Tech for Apartheid argue it would be naive to imagine Israel is not using Google's hardware and software for violent purposes. "If we have no oversight into how this technology is used," says Rachel Westrick, a Google software engineer, "then the Israeli military will use it for violent means."

"Construction of massive local cloud infrastructure within Israel's borders, [the Israeli government] said, is basically to keep information within Israel under their strict security," says Mohammad Khatami, a Google software engineer. "But essentially we know that means we're giving them free rein to use our technology for whatever they want, and beyond any guidelines that we set."

Current and former Google workers also say that they are fearful of speaking up internally against Project Nimbus or in support of Palestinians, due to what some described as fear of retaliation. "I know hundreds of people that are opposing what's happening, but there's this fear of losing their jobs, [or] being retaliated against," says Khalek, the worker who resigned in protest against Project Nimbus. "People are scared." Google's firing of Hatfield, Khalek says, was "direct, clear retaliation ... it was a message from Google that we shouldn't be talking about this."

The Google spokesperson denied that the company's firing of Hatfield was an act of retaliation.

Regardless, internal dissent is growing, workers say. "What Eddie did, I think Google wants us to think it was some lone act, which is absolutely not true," says Westrick, the Google software engineer. "The things that Eddie expressed are shared very widely in the company. People are sick of their labor being used for apartheid."

"We're not going to stop," says Zelda Montes, a YouTube software engineer, of No Tech for Apartheid. "I can say definitively that this is not something that is just going to die down. It's only going to grow stronger."

Correction, April 10

The original version of this story misstated the number of Google staff actively involved in No Tech for Apartheid. It is more than 200, not 40.


Google’s new Arm-based CPU will challenge Microsoft and Amazon in the AI race – The Verge

Posted: at 5:52 am

Google is making its own custom Arm-based CPU to support its AI work in data centers and introducing a more powerful version of its Tensor Processing Unit (TPU) AI chips. Google's new Arm-based CPU, dubbed Axion, will be used to support Google's AI workloads before it rolls out to business customers of Google Cloud later this year.

The Axion chips are already powering YouTube ads, the Google Earth Engine, and other Google services. "We're making it easy for customers to bring their existing workloads to Arm," says Mark Lohmeyer, Google Cloud's vice president and general manager of compute and machine learning infrastructure, in a statement to Reuters. "Axion is built on open foundations but customers using Arm anywhere can easily adopt Axion without re-architecting or re-writing their apps."

Google says customers will be able to use its Axion CPU in cloud services like Google Compute Engine, Google Kubernetes Engine, Dataproc, Dataflow, Cloud Batch, and more. Reuters reports that the Axion Arm-based CPU will also offer 30 percent better performance than general-purpose Arm chips and 50 percent more than Intel's existing processors.

Google is also updating its TPU AI chips that are used as alternatives to Nvidia's GPUs for AI acceleration tasks. "TPU v5p is a next-generation accelerator that is purpose-built to train some of the largest and most demanding generative AI models," says Lohmeyer. A single TPU v5p pod contains 8,960 chips, more than double the 4,096 chips in a TPU v4 pod.

Googles announcement of an Arm-based CPU comes months after Microsoft revealed its own custom silicon chips designed for its cloud infrastructure. Microsoft has built its own custom AI chip to train large language models and a custom Arm-based CPU for cloud and AI workloads. Amazon has also offered Arm-based servers for years through its own custom CPU, with the latest workloads able to use Graviton3 servers on AWS.

Google won't be selling these chips to customers, instead making them available for cloud services that businesses can rent and use. "Becoming a great hardware company is very different from becoming a great cloud company or a great organizer of the world's information," says Amin Vahdat, the executive in charge of Google's in-house chip operations, in a statement to The Wall Street Journal.

Google, like Microsoft and Amazon before it, can now reduce its reliance on partners like Intel and Nvidia, while also competing with them on custom chips to power AI and cloud workloads.


Google releases first Android 15 beta with improved performance and edge-to-edge display by default – SiliconANGLE News

Posted: at 5:52 am

Google LLC today announced the release of the first beta test version of Android 15, opening up the upcoming operating system for both developers and early adopters to test out.

The new release of the Android platform includes updates that improve underlying performance, such as OS-level support for app archiving, improved communications and better support for apps targeting large screens. Accessibility has also been updated with better Braille display support, and the beta prioritizes security and privacy.

Apps in Android 15 are displayed edge-to-edge by default now, meaning that they no longer need to explicitly call Window.setDecorFitsSystemWindows(false) or enableEdgeToEdge() to show content behind the system bars. However, apps should still use enableEdgeToEdge() to get edge-to-edge display on earlier versions of Android.

Many of the Material 3 composables and components will handle insets for developers, based on how they are placed in the app according to framework specifications. This will assist the app in going edge-to-edge and keep prominent touch targets in the app from being overlapped by the system bars or other elements.

App archiving capability, which allows users to free up space by partially removing infrequently used apps from the device, was announced last year by Android and Google Play.

Android 15 has added OS-level support for app archiving and unarchiving, which will make it easier for app stores to take advantage of the feature and give users more free space on their devices. When an app is archived, the Android package kit (the package that contains the files an app needs to install and function on a device) and any cached data are removed, while user data is retained.

Apps can now collect profiling information from within the app itself to help improve performance, using a new ProfilingManager class. The Android team said it plans to wrap this with an Android Jetpack application programming interface that will simplify collecting heap dumps, stack sampling and more. The API also includes a rate limiter so that profiling does not impact performance.

Android's built-in screen reader, TalkBack, has been updated with support for Braille displays using the Human Interface Device (HID) standard over both USB and secure Bluetooth. The HID standard is a device class definition that opened up USB drivers to a variety of devices such as mice, keyboards and game controllers, so it should provide more accessibility for Braille displays on Android devices over time.

Android 15 has made additional changes to how background apps operate to prevent them from bringing other apps to the foreground, elevating privileges and abusing user interactions. This will give users more control over their devices and prevent malicious apps from controlling devices behind the scenes. Background activity launches have been restricted since Android 10 and this enhancement should further solidify that control.

Now that Android 15 has entered its beta phase, users can enroll any supported Pixel device to get this release and future Android Beta updates installed over the air. It is also possible to test out this beta release with a 64-bit system image on the Android Emulator in Android Studio.


Google’s Gemini 1.5 Pro can now hear – The Verge

Posted: at 5:52 am

Google's update to Gemini 1.5 Pro gives the model ears. The model can now listen to uploaded audio files and churn out information from things like earnings calls or audio from videos without the need to refer to a written transcript.

During its Google Cloud Next event, Google also announced it'll make Gemini 1.5 Pro available to the public for the first time through Vertex AI, its platform for building AI applications. Gemini 1.5 Pro was first announced in February.
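For developers, access comes through the Vertex AI SDK. Below is a minimal sketch of prompting the model with an uploaded audio file, assuming the preview Vertex AI Python SDK as of April 2024; the project ID, bucket path, and preview model name are illustrative placeholders, not details from Google's announcement.

import vertexai
from vertexai.preview.generative_models import GenerativeModel, Part

# Placeholder project and region; substitute your own.
vertexai.init(project="my-project", location="us-central1")

# Preview model ID circa April 2024; the exact name may differ.
model = GenerativeModel("gemini-1.5-pro-preview-0409")

# Point the model at an audio file (e.g., an earnings call) uploaded to Cloud Storage.
audio = Part.from_uri("gs://my-bucket/earnings-call.mp3", mime_type="audio/mpeg")

response = model.generate_content([audio, "Summarize the key figures and guidance from this call."])
print(response.text)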

This new version of Gemini Pro, which is supposed to be the middle-weight model of the Gemini family, already surpasses the biggest and most powerful model, Gemini Ultra, in performance. Gemini 1.5 Pro can understand complicated instructions and eliminates the need to fine-tune models, Google claims.

Gemini 1.5 Pro is not available to people without access to Vertex AI and AI Studio. Right now, most people encounter Gemini language models through the Gemini chatbot. Gemini Ultra powers the Gemini Advanced chatbot, and while it is powerful and also able to understand long commands, it's not as fast as Gemini 1.5 Pro.

Gemini 1.5 Pro is not the only large AI model from Google getting an update. Imagen 2, the text-to-image generation model that helps power Gemini's image-generation capabilities, will also add inpainting and outpainting, which let users add or remove elements from images. Google also made its SynthID digital watermarking feature available on all pictures created through Imagen models. SynthID adds a watermark, invisible to the viewer, that marks an image's provenance when viewed through a detection tool.

Google says it's also publicly previewing a way to ground its AI responses with Google Search so they answer with up-to-date information. That's not always a given with the responses produced by large language models, sometimes by design; Google has intentionally kept Gemini from answering questions related to the 2024 US election.
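In SDK terms, grounding is exposed as a tool attached to a request. Here is a minimal sketch along the same lines as the audio example above, again assuming the preview Vertex AI Python SDK of April 2024 with placeholder project details:

import vertexai
from vertexai.preview.generative_models import GenerativeModel, Tool, grounding

vertexai.init(project="my-project", location="us-central1")  # placeholder project

# Attach Google Search retrieval so responses are grounded in current results.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.5-pro-preview-0409")
response = model.generate_content(
    "What did Google Cloud announce at Cloud Next 2024?",
    tools=[search_tool],
)
print(response.text)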


WPP and Google Cloud forge groundbreaking new collaboration to lead generative AI-driven marketing into its next … – WPP

Posted: at 5:52 am

9 Apr 2024

Gemini 1.5 Pro integration with WPP Open marketing operating system sets new standards in marketing creativity, personalisation and efficiency

Cloud Next '24, Las Vegas, 9 April 2024: Today on the keynote stage at Google Cloud Next, WPP and Google Cloud will announce a new collaboration that will redefine marketing through the integration of Google's Gemini models with WPP Open, WPP's AI-powered marketing operating system already used by more than 35,000 of its people and adopted by key clients including The Coca-Cola Company, L'Oréal and Nestlé.

By combining Google's deep expertise in data analytics, generative AI (gen AI) technology and cyber security with WPP's end-to-end marketing capabilities, global creative scale and understanding of its clients' brands, this partnership seeks to drive a step-change in marketing efficiency and effectiveness.

As part of the collaboration, Google Cloud's advanced gen AI tools will be used with WPP's proprietary marketing and advertising data. This will enable WPP's clients to create brand- and product-specific content using gen AI, to gain deeper insights into their target audiences, to accurately predict and explain content effectiveness, and to optimise campaigns with ongoing adaptive processes.

Applying WPP's decades of experience serving enterprise clients, WPP Open integrates with any client, partner or technology vendor to create optimised and automated marketing capabilities.

The inaugural phase of the partnership is focusing on the development of the following four innovative use cases:

Stephan Pretorius, Chief Technology Officer at WPP, said: "This collaboration marks a pivotal moment in marketing innovation. Our integration of Gemini 1.5 Pro into WPP Open has significantly accelerated our gen AI innovation and enables us to do things we could only dream of a few months ago. With Gemini models, we're not only able to enhance traditional marketing tasks but also to integrate the end-to-end marketing process for continuous, adaptive optimisation. I believe this will be a game-changer for our clients and the marketing industry at large."

Thomas Kurian, CEO of Google Cloud, said: "AI has the potential to unlock new levels of effectiveness for marketers, whether it is optimising campaigns, automating repetitive tasks like brand descriptions, or sparking entirely new ideas. This partnership brings the power of Google Cloud's gen AI capabilities together with WPP's marketing domain expertise to help our mutual customers create better campaigns that resonate with consumers in a deeper way."

The move forms part of WPP's ongoing annual investment of £250 million in AI, data and technology, and its strategy to capitalise on its lead in the space by partnering with AI experts like Google.

Further information

Susie Metnaoui, WPP +44 (0)7557 591 879

Louise Lacourarie, WPP +44 (0)7741 360 931

Laura Wheeler, Google Cloud

About WPP

WPP is the creative transformation company. We use the power of creativity to build better futures for our people, planet, clients and communities. For more information, visit http://www.wpp.com.

About Google Cloud

Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.


These Google Photo Editing Tools Will Be Free Soon – Lifehacker

Posted: at 5:52 am

Google has announced that some of its AI-powered editing tools will soon be free to all Google Photos users. Previously, to use some of these tools, you would either need to be a Google Pixel user or have a paid Google One subscription.

The new features available to all Google Photos users include Magic Eraser, Photo Unblur, and Portrait Light.

These features will be available within Google Photos on the Android and iOS apps as well as on Chromebooks. You'll need an Android phone running Android 8.0 or newer, or an iOS device with iOS 15 or newer. For Chromebooks, you'll need at least 3GB of RAM and ChromeOS 118 or newer.

Magic Editor, which was exclusive to the Pixel 8 series, uses generative AI to let you easily drag and drop objects in a photo. There is one catch: Android and iOS Google Photos users will be limited to ten Magic Editor saves per month. If you want more Magic Editor saves, you'll need a Pixel device or a Google One subscription (the 2TB plan and above).

The AI tools will gradually start to roll out to Google Photos users starting May 15.


I shot the eclipse with an iPhone 15 Pro Max, Google Pixel 8 Pro and a Samsung Galaxy S23 Ultra here’s which one … – TechRadar

Posted: at 5:52 am

I had three flagship phones on three different tripods all aimed at a sun rapidly being crowded by a nuisance moon, and all I wanted was one or two excellent eclipse shots.

Turns out that photographing a solar eclipse with your smartphone is not that easy. In fact, figuring out a repeatable process without cauterizing your retinas is downright challenging. But I did it. I grabbed some of the best smartphones money can buy (the iPhone 15 Pro Max, Google Pixel 8 Pro, and the Samsung Galaxy S23 Ultra) and prepared for 180 minutes of celestial excitement.

That last selection might turn a few heads. It is, after all, a now aging flagship Android phone that does not have the latest image processing or even the fastest Qualcomm Snapdragon 8 Gen 3 chip found in the Galaxy S24 Ultra (the S23 Ultra has the Gen 2). However, one thing it has that none of my other flagship smartphones offer is a 10X optical zoom (not even the S24 Ultra has that).

Throughout this endeavor I committed to not using any enhancements, leaving the phones' zoom lenses to do their best work without digital magic. I never pinched and zoomed. I just pointed each phone at the eclipse and hit the shutter.

Except as soon as I did this, I realized it wasn't going to work. The sun naturally blows out the exposure on all the phones. It's not that I haven't taken pictures of the sun before. I've snapped quite a few with the iPhone and to get over the blowout, I tap the sun on screen and that speeds up the exposure to lower the light and bring out the sun's definition.

An eclipse wreaks havoc with a smartphone's exposure controls, and the more the moon occludes the sun, the sharper that light becomes. My solution was simple and likely one you've seen elsewhere. I took my Celestron eclipse glasses and carefully placed the film of one sunglass lens over each phone's zoom lens. If you ever have trouble identifying which camera is the zoom, just open the camera app, select the max optical zoom, and put your finger over each camera lens until you see your finger on the screen.

The solar sunglasses helped with cutting down the massive glare. After that, I tapped on the screen and adjusted the exposure until I could see the sun getting the Pac-man treatment from the moon. In most cases, the result was a very orange-looking sun.


For the next hour or so, I shifted from one phone to the other, repositioning my tripods, lining up the sun, and snapping away.

There were some non-smartphone-related glitches, like cloud cover right before our peak totality (90% where I live) but I was more successful than I expected and the smartphones, for the most part, were up to the challenge.

You'll see some of my comparisons above and below (I've used the best from all the phones in the above shots) which I did not resize or enhance, other than cropping them where possible to show them side-by-side.

While the iPhone 15 Pro Max and Pixel 8 Pro shoot at 12MP (the latter is binned from a 48MP sensor, meaning four pixels combined into each one), the Samsung Galaxy S23 Ultra's 10X zoom camera is only 10MP. I think those numbers do factor into the overall quality.

The Google Pixel 8 Pro matched the iPhone 15 Pro Max's 5x zoom and sometimes seemed sharper than either the iPhone or Galaxy S23 Ultra, but I also struggled the most with the Pixel 8 to capture a properly exposed shot. It was also the only phone that forced a long exposure after the peak 90% coverage. The good news is that some of those long exposures offered up the most atmosphere, managing to collect some of the cloud cover blocking my full view of the eclipse.

Things got more interesting with the iPhone 15 Pro Max and its 5x tetraprism lens. The eclipse appears a little closer than on the Pixel 8 Pro, but also more vibrant. There are a handful of iPhone 15 Pro Max pictures where I can see the clouds, and it's quite beautiful. As with all the phones, this image capture process was a bit hit-and-miss. Colors shifted from orange to almost black and white, and sticking the focus was a challenge. When I did manage to capture a decent photo, I was thrilled.

The Samsung Galaxy S23 Ultra's 10x optical zoom pulled me thrillingly close to the eclipse. It was certainly easier to get the exposure and focus right. At a glance, the S23's images are better, but closer examination reveals significant graininess, so much so that some appear almost like paintings on rough canvas.

As I dug deeper into all the photos, I noted how each phone camera used ISO settings to manage the image capture and quality. The iPhone 15 Pro Max ranged from ISO 50 (for ultra-bright situations and action shots) to ISO 800 (for very low light, which usually introduces a lot of grain). Naturally, those at the upper end of the spectrum are just as grainy as those from the Galaxy S23 Ultra, which ranges from as low as ISO 250 to 800.

The Google Pixel 8 Pro has the widest range from as low as ISO 16 to an astonishing ISO 1,536. It used that for a capture of the 90% eclipsed sun behind clouds. Aesthetically, it is one of the better shots.

If I had to choose a winner here, it would be the Samsung Galaxy S23 Ultra by a nose. That extra optical zoom means you have more detail before the graininess kicks in.

The iPhone 15 Pro Max is a very close second, but only because it was easier to capture a decent shot. I also think that if it had a bigger optical zoom, the iPhone's powerful image processing might've outdone the year-old Galaxy.

The Google Pixel 8 Pro has some great shots but also a lot of bad ones, because I couldn't get it to lock in on the converging sun and moon. It also suffered the most when it came to exposure. Even so, I am impressed with the ISO range and the sharpness of some shots.

The iPhone 15 Pro Max and Google Pixel 8 Pro also deserve special mention for producing my two favorite shots. They're not the closest or clearest ones, but by capturing some of the clouds, they add an ethereal, atmospheric element.

If I live long enough to see another eclipse (there's one in the American Midwest in 2044), I'll look for special smartphone eclipse filters and give it another try. By then we could well have 200x optical zoom cameras with 1,000MP sensors.

Correction: An earlier version of this article transposed the description of ISO performance.


Responses To Google Search About Amarillo – 101.9 The Bull

Posted: at 5:52 am

It's always fun to listen to people talk about the things they love and hate about living in Amarillo. The things they love about it always seem to be the same. Nice people for the most part, predictable weather, and lots of restaurants.

It's seeing all the comments and things people hate about the city that crack me up, and if you look at a history of things people have said over time, it'll amuse you. Some things change, some stay the same.

Just for fun, I decided to Google "things we hate about Amarillo". It started bringing up old Reddit threads going back to 2018, and it's hilarious to read some of these.

In one of the threads I found, the person had a job offer here in Amarillo and was looking at moving here from Augusta, GA. They had asked about the pros and cons of the city. The post was from six years ago, so I'm not sure if they accepted the job and maybe still live here.

The responses for the most part were positive about the city, but even six years ago, things such as "crime rate is ridiculous", "the amount of meth and crack addicts are unbelievable", "corrupt small town politics", "crumbling infrastructure...except for the plethora of churches and gas stations" were popping up. None of these things have seemingly changed.

However, my favorite response to this was simple, basic, and to the point. The user simply stated, "Stay in Augusta". I might have laughed a little, read it again, and just burst out in laughter.

I'm not saying Amarillo is a bad place to live, it's definitely grown on me over time. Just like any city, there is good and bad. You have to take the bad with the good in order to truly enjoy a city. I just thought the responses were funny.

So go ahead, Google "things we hate about Amarillo" and enjoy the plethora of comments. It's good for a laugh.

I love Google. It can literally answer ANYTHING you need it to answer. Whether it's right or wrong is a totally different question.

Recently, I found myself wondering about something and went to Google. And that's when I started noticing the "people also asked..." section, and BOY... some of them made perfect sense, some of them were interesting, and one of them was downright baffling... and it was a top 10 question, which is even more absurd.

So let's see what we've got. Here are the top 10 questions asked of Google about Amarillo.


Amarillo is a pretty quirky place. We've got the Big Texan, the weird signs, and (obviously) the famed Cadillac Ranch. But more than that, there's a distinct culture of close-knit community, eccentrics, a thriving arts scene, and much more.

The point is, living in Amarillo is a unique experience. Because of that, it's really pretty easy to tell who's a native or a long-time resident. And I can prove it.

Here's a couple of ways you can say you're from Amarillo... without saying you're from Amarillo.



Google releases ‘prompting guide’ with tips for Gemini in Workspace – 9to5Google

Posted: at 5:52 am

At Cloud Next 2024, Google published a prompting guide for Gemini in Workspace, especially in Gmail and Docs.

This handbook (April 2024 edition) identifies four main areas to consider when writing an effective prompt: persona, task, context, and format.

Prompts, which should be written naturally as complete thoughts in full sentences, don't need all four areas, especially in the initial message to Gemini/Help me write. Google says using a few will help, while what you write should be concise and avoid jargon. For example, an illustrative prompt touching all four areas might read: "You are a regional sales manager (persona). Summarize the attached Q3 pipeline report (context) into three bullet points highlighting the biggest risks (task), written as a short email to my team (format)."

You will likely need to try a few different approaches for your prompt if you don't get your desired outcome the first time.

There's a particularly big emphasis on "mak[ing] it a conversation" with follow-up prompts that include more of the four areas.

Referring to prompting as an art, Google found during the Workspace Labs beta program that the most successful prompts average around 21 words.

Coming in at 45 pages, the guide includes example personas and prompts that go through refinements for customer service, executives and entrepreneurs, human resources, marketing, project management, and sales. Ultimately, Google says to review outputs for clarity, relevance, and accuracy before using them.

Additional tips include:

You can download the guide here.

