
Category Archives: Google

Google awards Tulane professor for work promoting diversity and fairness in AI systems – News from Tulane

Posted: April 17, 2021 at 11:50 am

Nicholas Mattei, an assistant professor of computer science, is an expert on the theory and practice of artificial intelligence. (Photo by Paula Burch-Celentano)

Artificial intelligence experts from Tulane University and the University of Maryland have received a Google Research Scholar award to develop better ways of promoting diversity and fairness in a variety of pipeline and selection problems, including hiring, graduate school admission and customer acquisition.

Nicholas Mattei, an assistant professor of computer science at Tulane, and John Dickerson, an assistant professor of computer science at the University of Maryland, received the $60,000 award as part of the Google Research Scholar Program, an initiative by Google to support early-career professors who are pursuing research in fields relevant to Google.

“Our work aims to operationalize and incorporate responsible AI practices and techniques into real-world systems, informed by data from real processes at our two universities,” Mattei said. “We’re not going to develop a solution in a year. Our intention is to produce an open-source toolkit, preliminary studies and a whitepaper to be discussed by policymakers.”


The work is an outgrowth of their recent paper, “We Need Fairness and Explainability in Algorithmic Hiring,” presented virtually at the 2020 Autonomous Agents and Multi-Agent Systems Conference.

“Our proposal focuses on graduate admissions but is broadly applicable to any problem where we need to expend a limited set of resources to validate and select a small group,” Mattei said.

In focusing specifically on graduate admissions, a form of academic hiring, Mattei and Dickerson will look at two key factors related to admissions: how to allocate aspects of the process, such as budget and interview slots, and how to explain decisions made by their algorithm in a transparent and compliant way.

“Several recent reports related to algorithmic hiring, including one from the non-profit Upturn, motivate us to focus on how to allocate additional human resources to these problems, and we feel that we must treat issues of bias and fairness as first-order concerns in any system that may have an impact on people,” Dickerson said.

“This research directly addresses questions of transparency, constraints, and fairness when working with complex, multi-stage decision-making problems where we need to end up with a recommendation or selection at the end. This type of sequential decision-making problem is typically optimized using multi-armed bandit algorithms,” Mattei said. “But these algorithms may optimize for criteria that we may not intend, or that may not even be legal.”
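To make the bandit framing concrete, here is a minimal epsilon-greedy sketch in Python. This illustrates the general technique only, not the researchers’ system; the arm names and success rates below are invented.

```python
import random

def epsilon_greedy(arms, pulls, epsilon=0.1):
    """Toy epsilon-greedy bandit: 'arms' maps an arm id to its true success rate."""
    counts = {a: 0 for a in arms}
    values = {a: 0.0 for a in arms}  # running mean reward per arm
    for _ in range(pulls):
        if random.random() < epsilon:
            arm = random.choice(list(arms))    # explore a random arm
        else:
            arm = max(values, key=values.get)  # exploit the best estimate so far
        reward = 1 if random.random() < arms[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts

# Hypothetical example: three interview-allocation policies with unknown yields.
estimates, pull_counts = epsilon_greedy({"A": 0.20, "B": 0.50, "C": 0.35}, pulls=5000)
print(estimates, pull_counts)
```

The fairness concern Mattei raises maps directly onto this loop: a bandit that maximizes raw yield may concentrate its pulls (interview slots, ad impressions) in discriminatory ways, so constraints have to be layered on top of the basic algorithm.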

“Such algorithms are in widespread use across many of Google’s core areas, including recommendation and advertising, and hence understanding them in detail is critically important,” he said.

Mattei and Dickerson believe their work will support their thesis that data-driven approaches to measuring and promoting fairness at a single stage of the talent sourcing process can be extended beyond graduate admissions. They said the technologies could be applied to internal product ideation and review, academic proposal reviewing, advertising selection or any setting that involves the collection of recommendations from experts.

“We envision these as algorithms and workflows that could be deployed internally at Google or offered broadly as screening tools throughout GCP (Google Cloud Platform) and other Google products.”

Mattei and Dickerson have worked closely together for many years, and the fact that they work at two very different academic institutions, one a large public university in a wealthy geographic area and the other a smaller private university in a lower-income part of the country, will actually be an advantage in their research.

“The student application profiles at both schools are very different, and will lead to different concerns and distributions of data,” Mattei said. “We believe this diversity strengthens the results that will arise from the data-driven validation of our model.”


Amazon, Google, GM, Starbucks and hundreds of companies join to oppose voting restrictions | NewsChannel 3-12 – KEYT

Posted: at 11:50 am

Hundreds of prominent executives from high-profile companies, including Amazon, Google, BlackRock and Starbucks, signed a statement that opposes discriminatory legislation that makes voting harder.

The statement, printed Wednesday in an advertisement in the New York Times, was organized by Ken Chenault and Ken Frazier, two of America’s most prominent Black corporate leaders. The statement called democracy “a beautifully American ideal” and said that for it to work, “we must ensure the right to vote for all of us.”

“We all should feel a responsibility to defend the right to vote and to oppose any discriminatory legislation or measures that restrict or prevent any eligible voter from having an equal and fair opportunity to cast a ballot,” it continued.

The statement, which is described as nonpartisan, doesn’t directly address any specific legislation, notably in Georgia, Texas, and other key states where Republican lawmakers are trying to clamp down on ballot access.

Wednesday’s statement was a sign of solidarity after weeks of warring between Republicans and a smaller group of Georgia-based companies that spoke out after public pressure.

Three of those companies, Coca-Cola, Delta Air Lines and Home Depot, declined to add their names, according to the New York Times. All three are based in Georgia and were wary because of the blowback they had received after their earlier statements on voting rights but also did not feel the need to speak again, it reported.

It’s the latest high-profile letter from companies in the last month.

A few weeks ago, chief executives and other high-ranking leaders from more than 100 companies, including Target, Snapchat and Uber, issued a public statement opposing any measures that deny eligible voters the right to cast ballots. That letter was organized by the Civic Alliance, a coalition that recognizes that a strong democracy is good for business, according to its website.

Another letter, released in late March by Black business executives, challenged their fellow corporate leaders to be more forceful in condemning what they describe as deliberate attempts by Republicans to limit the number of Black Americans casting ballots in key states.


Google executive’s 2020 move to Coinbase worth $US646 million – Sydney Morning Herald

Posted: at 11:50 am

Even for Silicon Valley, Surojit Chatterjee’s rise to extraordinary wealth was lightning fast.

The chief product officer has been with Coinbase, the biggest US crypto exchange, for just 15 months. The former Google executive’s Coinbase stake was worth about $US180.8 million after its first volatile day of trading in New York on Wednesday (US time). He’s also set to receive share options within the next five years that are currently worth about $US465.5 million, according to data compiled by Bloomberg.

Coinbase chief product officer Surojit Chatterjee with CEO and co-founder Brian Armstrong.

Chatterjee, who was 46 as of February and oversees product management and design, joins Coinbase founders Brian Armstrong and Fred Ehrsam as major winners of the firm’s Nasdaq debut. Together, their stakes are worth more than $US16 billion, according to the Bloomberg Billionaires Index.

The listing wasn’t without drama. Coinbase, listed on the Nasdaq under the ticker COIN, opened at $US381. The price rose to almost $US430 before retreating to close at $US328.28, up 31 per cent from the $US250 reference price set by Nasdaq ahead of the first trade. That puts Coinbase’s market value at $US85.78 billion.

It’s another milestone, though, in putting crypto further into the mainstream. The company went public as Bitcoin - which together with Ethereum made up most of its 2020 trading volume - is trading around its all-time high. Bitcoin has more than doubled in price this year to about $US64,000.


Chatterjee’s Coinbase stake is a dramatic example of the instant equity employees can receive when joining startups. Gone are the days when equity was mainly distributed in tranches over many years, a reward for loyalty as well as performance.

Chatterjee joined Coinbase last February after three years at Alphabet’s Google, where he led the company’s shopping platform during his second stint at the search giant. He previously headed product and delivery for mobile search ads and AdSense before a brief stop at Indian e-commerce site Flipkart. His experience at the Bangalore-based company appealed to Armstrong.


Google Maps has a wild new feature that will guide you through indoor spaces like airports – CNBC

Posted: March 31, 2021 at 5:31 am

Google Maps Live View AR indoors. (Image: Google)

Google on Tuesday announced several new features that are coming to the Google Maps app. The coolest one will help you find your way through indoor spaces like airports, malls and train stations using augmented reality.

The updated Live View AR feature, which overlays digital guides on top of the real world to provide directions as you look through your phone's display, now works indoors. So, say you're in an airport and need to find your gate or an ATM. You search for what you're looking for in Google Maps and markers will guide you with arrows and other digital indicators.

Here's an example: Google Maps Live View indoors. (Image: Google)

Live View for Google Maps first launched for Android and iPhone in 2019, but it initially only provided these sorts of directions outdoors. You can access Live View by searching for something in Google Maps on your phone, tapping "Directions" and then, when available, tapping the "Live View" option next to "Start."

Google said it's first rolling out in some malls in Chicago, Long Island, New York, Los Angeles, Newark, New Jersey, San Francisco, San Jose, California, and Seattle. In the coming months, it will also launch in airports, malls and transit stations in Tokyo and Zurich. Other cities and locations will eventually support the feature, too.

Google's big maps update also includes other features that will roll out in the coming months, like air quality information, integration with grocery stores for curbside pickup, and an option to select the most eco-friendly route when driving.


Google announces new Nest Hub with Soli ‘Sleep Sensing’ …

Posted: at 5:31 am

In deciding to not put a camera on what was initially called the Home Hub, Google made its first Assistant Smart Display ideal for nightstands. The new 2nd-generation Nest Hub is further embracing that role with Sleep Sensing to track your rest. Otherwise, it looks the same and is not that much more expensive than the original.

Google has retained the floating display design of the Nest Hub and larger Hub Max. The 7-inch screen (1024 x 600) is still surrounded by bezels that are rather thick and feature three cutouts at the top. That said, a new edgeless glass display gets rid of the raised perimeter lip to make cleaning easier.

The company told 9to5Google that the white borders help mimic a real-life photo frame, though the 2nd-gen unit benefits from a slightly lighter middle piece. It’s available in Chalk (white), Charcoal (black), Sand (pink), and a new Mist (blue). Those colors are dyed in a process that uses less water, while the entire thing is made with 54% post-consumer recycled plastic.

Inside, a machine learning chip allows your most common Assistant commands to be processed faster, locally. The Nest Hub has 50% more bass than the original, thanks to a 1.7-inch driver, for richer sound, as well as a third far-field microphone. There’s also a Thread radio that makes possible Project Connected Home over IP (CHIP) support to make it a better smart home hub, though the capability is not yet live at launch.

The biggest addition is a hidden Soli radar chip located in the upper-right corner. At launch, it allows you to play/pause media by tapping in the air, with Google not yet supporting the ability to skip tracks. However, there is a wave gesture to snooze alarms, which brings us to the biggest selling point of Google’s latest Smart Display.

Sleep quality is a top concern among adults, but the company found that today’s bedroom gadgets haven’t caught on because people forget to charge smartwatches/fitness trackers, while others don’t like wearing jewelry to sleep. Meanwhile, 20% of original Hubs were placed in bedrooms.

This 2nd-gen Nest Hub has Sleep Sensing powered by the Soli technology that first debuted on the Pixel 4 and was later used on the new Nest Thermostat for screen wake. Google set out to make sleep tracking, which is opt-in, more effortless and help people make nightly improvements.

Sensors analyze your sleep based on movement and breathing, while also identifying disturbances (coughing, snoring, light fluctuations, and temperature changes) that make an impact. Soli is an ideal technology for this purpose because it precisely tracks movement at both macro (limbs flailing) and micro (chest moving up/down as you breathe) levels.

Soli is not a camera that can see or identify you, with the Nest Hub only generating movement graphs that, along with audio data, are processed locally and do not leave your device. Only high-level sleep occurrences/events, such as number of coughs and snore minutes, are sent to Google, and a night’s results can be easily deleted in the morning. Meanwhile, you can disable cough and snore detection but retain sleep tracking.

To use Soli for this purpose, Google had thousands of people submit over 100,000 nights to create the sensing algorithms. Results were validated against sleep (polysomnography) studies, as well as consumer and clinical-grade sleep trackers. Nest Hub’s performance matched or surpassed those existing offerings.

The custom ML model efficiently processes a continuous stream of 3D radar tensors (summarizing activity over a range of distances, frequencies, and time) to automatically compute probabilities for the likelihood of user presence and wakefulness (awake or asleep).
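As a purely hypothetical sketch of what such a pipeline could look like (Google has not published this code; the window size, tensor shape, features, and weights below are all invented for illustration), a classifier over a stream of radar tensors might be structured like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(window):
    """Collapse a (time, range, frequency) radar tensor into a few summary stats."""
    return np.array([window.mean(), window.std(),
                     np.abs(np.diff(window, axis=0)).mean()])  # crude motion energy

def predict(window, w_presence, w_wake):
    """Logistic scores for P(user present) and P(awake) from one tensor window."""
    x = features(window)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(x @ w_presence), sigmoid(x @ w_wake)

# Fake 30-frame window over 16 range bins x 8 frequency bins, with fake weights.
window = rng.normal(size=(30, 16, 8))
p_present, p_awake = predict(window, rng.normal(size=3), rng.normal(size=3))
print(f"P(present)={p_present:.2f}  P(awake)={p_awake:.2f}")
```

The real model is presumably far richer, but the shape of the problem is the same: summarize each window of radar returns, then score the likelihood of presence and wakefulness continuously through the night.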

The Sleep Sensing experience starts with a calibration tutorial that tells you to point the Smart Display at your body. It can be placed 1-2 feet away from you, with the Nest Hub emitting low-energy radar waves to create a tracking bubble that ignores everything outside it.

When you wake up in the morning, you can say “Hey Google, how did I sleep?” or tap the Sleep Summary button on screen. A graphic that uses overlapping circles will note duration, schedule (consistency), and the number of disturbances/rest quality.

“Good morning. You slept for 5 hours and 15 minutes. It looks like you went to bed and woke up a little late, and had quite a short sleep with some disturbances.”

The three-page Sleep details view provides a granular breakdown. Quality uses timelines to note how much light was in your room and the temperature, which the Nest Hub now has a sensor for; at the moment, it’s just used for sleep tracking. Google will also identify snoring, coughing, and changes in light. The sleep bar notes when you’re asleep and marks restless periods.

There are weekly summaries to quickly see breaths per minute (RPM), the number of minutes you snored, and cough count. All this data syncs with Google Fit, as we reported, and can be seen on your Android or iOS device. The company worked with the American Academy of Sleep Medicine to provide sleep science and guidance to users. That said, it’s not intended to diagnose, cure, mitigate, prevent or treat any disease or condition.

Besides showing your sleep stats, Google will analyze them to find issues and suggest actionable recommendations that are personalized to you. This might include setting up a Relaxation routine and setting reminders. Over time (14 or more days), the Nest Hub will understand your sleep patterns and recommend an ideal schedule.

Sleep Sensing will be available as a free preview until next year. Google has not yet determined how it will charge users since Fitbit, which offers a Premium subscription, was only recently acquired. It’s exploring how best to integrate with the existing sleep tracking features. Plenty of warning will be given if there are changes.

The new 2nd-generation Nest Hub costs $99.99, only a $10 price increase compared to the current model. Google Store pre-orders start today in the US, Canada, UK, Germany, France, and Australia, with a launch on March 30.


Google starts trialing its FLoC cookie alternative in Chrome – Yahoo Tech

Posted: at 5:31 am

Google today announced that it is rolling out Federated Learning of Cohorts (FLoC), a crucial part of its Privacy Sandbox project for Chrome, as a developer origin trial.

FLoC is meant to be an alternative to the kind of cookies that advertising technology companies use today to track you across the web. Instead of a personally identifiable cookie, FLoC runs locally and analyzes your browsing behavior to group you into a cohort of like-minded people with similar interests (and doesn't share your browsing history with Google). That cohort is specific enough to allow advertisers to do their thing and show you relevant ads, but without being so specific as to allow marketers to identify you personally.

This "interest-based advertising," as Google likes to call it, allows you to hide within the crowd of users with similar interests. All the browser displays is a cohort ID and all your browsing history and other data stay locally.
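Conceptually, the local computation is a similarity-preserving hash over browsing history. The toy Python sketch below illustrates the SimHash-style idea described in the FLoC proposal; it is not Chrome's implementation, and the hash width and prefix length here are arbitrary choices for illustration.

```python
import hashlib

def simhash(domains, bits=16):
    """Toy SimHash over visited domains: similar browsing -> similar fingerprint."""
    totals = [0] * bits
    for d in domains:
        h = int.from_bytes(hashlib.sha256(d.encode()).digest()[:4], "big")
        for i in range(bits):
            totals[i] += 1 if (h >> i) & 1 else -1
    return sum((1 << i) for i, t in enumerate(totals) if t > 0)

def cohort_id(domains, prefix_bits=8):
    """Truncate the fingerprint so many users share one coarse cohort ID."""
    return simhash(domains) >> (16 - prefix_bits)

print(cohort_id(["news.example", "cycling.example", "recipes.example"]))
```

Because the fingerprint is truncated, thousands of users land in the same cohort, which is what lets an individual hide within the crowd while still giving advertisers a coarse interest signal.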

Image Credits: Google / Getty Images

The trial will start in the U.S., Australia, Brazil, Canada, India, Indonesia, Japan, Mexico, New Zealand and the Philippines. Over time, Google plans to scale it globally. As we learned earlier this month, Google is not running any tests in Europe because of concerns around GDPR and other privacy regulations (in part, because it's unclear whether FLoC IDs should be considered personal data under these regulations).

Users will be able to opt out from this origin trial, just like they will be able to do so with all other Privacy Sandbox trials.

Unsurprisingly, given how FLoC upends many of the existing online advertising systems in place, not everybody loves this idea. Advertisers obviously love the idea of being able to target individual users, though Google's preliminary data shows that using these cohorts leads to similar results for them and that advertisers can expect to see "at least 95% of the conversions per dollar spent when compared to cookie-based advertising."


Google notes that its own advertising products will get the same access to FLoC IDs as its competitors in the ads ecosystem.

But it's not just the advertising industry that is eyeing this project skeptically. Privacy advocates aren't fully sold on the idea either. The EFF, for example, argues that FLoC will make it easier for marketing companies to fingerprint users based on the various FLoC IDs they expose. That's something Google is addressing with its Privacy Budget proposal, but how well that will work remains to be seen.

Meanwhile, users would probably prefer to just browse the web without seeing ads (no matter what the advertising industry may want us to believe) and without having to worry about their privacy. But online publishers continue to rely on advertising income to fund their sites.

With all of these divergent interests, it was always clear that Google's initiatives weren't going to please everyone. That friction was always built into the process. And while other browser vendors can outright block ads and third-party cookies, Google's role in the advertising ecosystem makes this a bit more complicated.

"When other browsers started blocking third-party cookies by default, we were excited about the direction, but worried about the immediate impact," Marshall Vale, Google's product manager for Privacy Sandbox, writes in today's announcement. "Excited because we absolutely need a more private web, and we know third-party cookies aren't the long-term answer. Worried because today many publishers rely on cookie-based advertising to support their content efforts, and we had seen that cookie blocking was already spawning privacy-invasive workarounds (such as fingerprinting) that were even worse for user privacy. Overall, we felt that blocking third-party cookies outright without viable alternatives for the ecosystem was irresponsible, and even harmful, to the free and open web we all enjoy."

It's worth noting that FLoC, as well as Google's other Privacy Sandbox initiatives, is still under development. The company says the idea here is to learn from these initial trials and evolve the project accordingly.


Facebook and Google reveal plans to build subsea cables between U.S. and Southeast Asia – CNBC

Posted: at 5:31 am

The vessel used to lay one of Google's other subsea cables. (Image: Google)

Facebook and Google are planning to lay two huge subsea cables that will link the U.S. West Coast to Singapore and Indonesia, Southeast Asia's biggest economy and home to a growing number of smartphone users.

The Echo and Bifrost trans-Pacific cables will increase the data capacity between the regions by 70% and improve internet reliability, Facebook said Monday.

While Facebook is investing in both cables, Google is only investing in Echo. The cost of the projects, which are still subject to regulatory approvals, has not been disclosed.

"We are committed to bringing more people online to a faster internet," Facebook's vice president of network investments, Kevin Salvadori, and network investment manager Nico Roehrich wrote in a joint blog post. "As part of this effort, we're proud to announce that we have partnered with leading regional and global partners to build two new subsea cables Echo and Bifrost that will provide vital new connections between the Asia-Pacific region and North America."

Partners include Indonesian firms Telin and XL Axiata, and Singapore-based Keppel.

The aim is for Echo to be completed by late 2023, while Bifrost is set to be finished by late 2024.

Last May, Facebook announced plans to build a 37,000-kilometer (22,991-mile) long undersea cable around Africa to provide it with better internet access.

Google is also working on an underwater cable called Equiano, which aims to connect Africa with Europe. The web search titan has another unit, Loon, which makes high-altitude balloons that deliver 4G internet to rural communities. It recently announced an expansion of that plan to Mozambique.

Facebook previously had plans to beam internet to remote areas using solar-powered drones. The company shuttered that project, called Aquila, in 2018 but has reportedly been working with Airbus to test similar drones again in Australia.


Games, Teams, and Moonshots: Google Cloud’s Will Grannis – MIT Sloan

Posted: at 5:31 am


The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.


Will Grannis discovered his love for technology playing Tron and Oregon Trail as a child. After attending West Point and The Wharton School at the University of Pennsylvania, he translated his passion for game theory into an aptitude for solving problems for companies, a central component of his role as founder and leader of the Office of the CTO at Google Cloud. Will leads a team of customer-facing technology leaders who, while tasked with bringing machine learning solutions to market, approach their projects with a user-first mindset, ensuring that they first identify the problem to be solved.

Will Grannis is the founder and leader of Google Cloud’s CTO Office, a team of senior engineers whose mission is to foster collaborative innovation between Google and its largest customers. Prior to joining Google in 2015, Grannis spent two decades as an entrepreneur, enterprise technology executive, and investor, building and scaling technical platforms that today power commerce, transportation, and the public sector. He’s been a developer, product manager, CTO, SVP of engineering, and CEO, building a wide variety of platforms and teams along the way.


In Season 2, Episode 2, of Me, Myself, and AI, Will makes it clear that great ideas don’t only come from the obvious subject-area experts in the room; diverse perspectives, coupled with a codified approach to innovation, lead to the best ideas. The collaboration principles and processes Google Cloud relies on can be applied at other organizations across industries.

Read more about our show and follow along with the series at https://sloanreview.mit.edu/aipodcast.

Subscribe to Me, Myself, and AI on Apple Podcasts, Spotify, or Google Podcasts.

Shervin Khodabandeh: Can you get to the moon without first getting to your own roof? This will be the topic of our conversation with Will Grannis, Google Cloud CTO.

Sam Ransbotham: Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities and really transform the way organizations operate.

Sam Ransbotham: We’re talking with Will Grannis today; he’s the founder and leader of the Office of the CTO at Google Cloud. Thank you for joining us today, Will.

Will Grannis: Great to be here. Thanks for having me.

Sam Ransbotham: So it’s quite a difference between being at Google Cloud and your background. Can you tell us a little bit about how you ended up where you are?

Will Grannis: Call it maybe a mix of formal education and informal education. Formally, Arizona public school system and then, later on, West Point, math and engineering undergrad. And then, later on, UPenn, University of Pennsylvania, Wharton for my MBA. Now, maybe the more interesting part is the informal education, and this started in the third grade, and back then, I think it was gaming that originally spiked my curiosity in technology, and so this was Pong, Oregon Trail, Intellivision, Nintendo, all the gaming platforms. I was just fascinated that you could turn a disk on a handset and you could see Tron move around on a screen; that was like the coolest thing ever.

And so today’s manifestation, Khan Academy, edX, Codecademy, platforms like that: you have this entire online catalog of knowledge, thanks to my current employer, Google. And just as an example, this week I’m porting some machine learning code to a microcontroller and brushing up on my C thanks to these, what I call, informal education platforms. So [it’s] a journey that started with formal education but was really accelerated by others, by curiosity, and by these informal platforms where I could go explore the things I was really interested in.

Sam Ransbotham: I think, particularly with artificial intelligence, we’re so focused on games and whether or not the machines beat a human at this game or that game, when there seems to be such a difference between games and business scenarios. So how can we make that connection? How can we move from what we can learn from games to what businesses can learn from artificial intelligence?

Will Grannis: Gaming is exciting and it is interesting, but let’s take a foundational element of games: understanding the environment that you’re in and defining the problem you want to solve, what’s the objective function, if you will. That is exactly the same question that every manufacturer, every retailer, every financial services organization asks themselves when they’re first starting to apply machine learning. And so in games, the objective functions tend to be a little bit more fun; it could be an adversarial game, where you’re trying to win and beat others, but those underpinnings of how to win in a game actually are very, very relevant to how you design machine learning in the real world to maximize any other type of objective function that you have. So for example, in retail, if you’re trying to decrease the friction of a consumer’s online experience, you actually have some objectives that you’re trying to optimize, and thinking about it like a game is actually a useful construct at the beginning of problem definition: What is it that we really want to achieve? And I’ll tell you that, being around AI and machine learning now for a couple of decades, when it was cool and when it wasn’t cool, I can tell you that the problem definition and really getting a rich sense of the problem you’re trying to solve is absolutely the No. 1 most important criterion for being successful with AI and machine learning.

Shervin Khodabandeh: I think that’s quite insightful, Will, and it’s probably a very good segue to my question. That is, it feels like in almost any sector, what we are seeing is that there are winners and losers in terms of getting impact from AI. There are a lot fewer winners than there are losers, and I’m sure that many CEOs are looking at this wondering what is going on, and I deeply believe that a lot of it is what you said, which is it absolutely has to start with the problem definition and getting the perspective of business users and process owners and line managers into that problem definition, which is critical. And since we’re talking about this, it would be interesting to get your views on what are some of the success factors, from where you’re sitting and where you’re observing, to get maximum impact from AI.

Will Grannis: Well, I can’t speak to exactly why every company is successful or unsuccessful with AI, but I can give you a couple of principles that we try to apply and that I try to apply generally. I think today we hear and we see a lot about AI and the magic that it creates. And I think sometimes it does a disservice to people who are trying to implement it in production. I’ll give you an example: Where did we start with AI at Google? Well, it was in a place where we already had really well-constructed data pipelines, where we had already exhausted the heuristics that we were using to determine performance, and instead we looked at machine learning as one option to improve our lift on advertising, for example.

And it was only because we already had all the foundational work done (we understood how to curate, extract, transform, [and] load data; how to share it; how to think about what that data might yield in terms of outcomes; how to construct experiments, [the] design of experiments; and utilize that data effectively and efficiently) that we were able to test the frontier of machine learning within our organization. And maybe, to your question, maybe one of the biggest opportunities for most organizations today, maybe it will be machine learning, but maybe today it’s actually in how they leverage data: how they share, how they collaborate around data, how they enrich it, how they make it easy to share with groups that have high sophistication levels, like data scientists, but also analysts and business intelligence professionals who are trying to answer a difficult question in a short period of time for the head of a line of business. And unless you have that level of data sophistication, machine learning will probably be out of reach for the foreseeable future.

Shervin Khodabandeh: Yeah, Will, one other place I thought you might go is building on what you were saying earlier about the analog between gaming and business, all around problem definition, how it’s important to get the problem definition right. And what resonated with me when you were saying that was, probably a lot of companies just don’t know how to make that connection and don’t know where to get started, which is actually, What is the actual problem that we’re trying to solve with AI? And many are focusing on, What are all the cool things AI can do, and what’s all the data and technology we need? rather than actually starting with the problem definition and working their way backwards from the problem definition to the data and then how can AI help them solve that problem.

Will Grannis: It’s really a mindset. I’ll share a little inside scoop: At Google, we have an internal document that our engineers have written to help each other out with getting started on machine learning. And No. 1 (there’s a list of like 72 factors, things you need to do to be successful in machine learning) is you don’t need machine learning. And the reason why it’s stated so strongly is actually to get the mindset of uncovering the richness of the problem, and the nuances of that problem actually create all of the downstream, to your point, all of the downstream implementation decisions. So if you want to reduce friction in online checkout, that is a different problem than trying to optimize really great recommendations within someone’s e-commerce experience online for retail. Those are two very different problems, and you might approach them very differently; they might have completely different data sets, they might have completely different outcomes on your business. And so one of the things that we’ve done here at Google over time is we’ve tried to take our internal shorthand for innovation, [our] approach to innovation and creativity, and we’ve tried to codify it so that we can be consistent in how we execute projects, especially the ones that venture into the murkiness of the future.

And this framework, it really has three principles. And the first one, as you might expect, is to focus on the user, which is really a way of saying, Let’s get after the problem, the pain that they care the most about. The second step is to think 10x, because we know [that] if it’s going to be worth the investment of all of these cross-functional teams’ time to create the data pipelines, and to curate them, and to test for potential bias within these pipelines and within data sets, to build models and to test those models, that’s a significant investment of time and expertise and attention, and so we want to make sure we’re solving for a problem that also has the scale that will be worth it and really advances whatever we’re trying to do, not in a small way, but in a really big way. And then the third one is rapid prototyping, and you can’t get to the rapid prototyping unless you’ve thought through the problem and you’ve constructed your environment so that you can conduct these experiments rapidly. And sometimes we’ll proxy outcomes just to see if we’d care about them at all without running them at full production. So that framework, that focusing on the user, thinking 10x, and then rapid prototyping, is an approach that we use across Google, regardless of product domain.

Shervin Khodabandeh: That’s really insightful, especially the think 10x piece, which I think is really, really helpful. I really like that.

Sam Ransbotham: You’re lobbying, I think, for what I would call a very strong exploration mindset in your approach to artificial intelligence, versus more of an incremental Let’s do what we have, better. Is that right for everybody? Do you think that’s idiosyncratic to Google? Almost everyone listening today is not going to be working at Google. Is that something that you think works in all kinds of places? That may be beyond what you can speak to, but how well do you think that that works across all organizations?

Will Grannis: Well, I think there’s a difference between a mindset and then the way that these principles manifest themselves. Machine learning, just in its nature, is exploration, right? It’s approximations, and you’re looking through the math, and you’re looking for the places where you’re pretty confident that things have changed significantly for the better or for the worse so that you can do your feature engineering and you can understand the impact of choices that you’re making. And in a lot of ways, the mathematical exploration is an analog to the human exploration, in that we try to encourage people. By the way, just because we have a great idea doesn’t mean it gets funded at Google. Yes, we are a very large company, yes we’re doing pretty well, but most of our big breakthroughs have not come from some top-down-mandated gigantic project that everybody said was going to be successful.

Gmail was built by people who were told very early on that it would never succeed. And we find that this is a very common path, and before Google, I’ve been an entrepreneur a couple of times, my own company and somebody else’s, and I’ve worked in other large companies that had world-class engineering teams as well. And I can tell you this is a pattern, which is, just giving people just enough freedom to think about what the future could look like. We have a way of describing 10x at Google, which you may have heard, called moonshots. Well, our internal engineering team has also coined the term roof shots, because the moonshots are often accomplished by a series of these roof shots, and if people don’t believe in the end state, the big transformation, they’re usually much less likely to journey across those roof shots and to keep going when things get hard. And we don’t flood people with resources and help at the beginning because, and this is hard for me to say as a senior executive leading technology innovation, quite often I don’t have perfect knowledge of what will be the most impactful project that teams are working on. My job is to create an environment where people feel empowered, encouraged, and excited to try, and [I] try to demotivate them as little as possible, because they’ll find their way to the roof shot, and then the next one, and then the next one, and then pretty soon you’re three years in, and I couldn’t stop a project if I wanted to; it’s going to happen because of that spirit, that [Star Trek:] Voyager spirit.

Shervin Khodabandeh: Tell us a bit about your role at Google Cloud.

Will Grannis: I think I have the best job in the industry, which is I get to lead a collective of CTOs who have come from every industry and every geography and every place in the stack, from hardware engineering all the way up to SaaS and quantum security, and I get to lead this incredible team. And our mission is to create this bridge between our customers, our top customers and our top partners of Google, who are trying to do incredible things with technology, and the people who are building these foundational platforms at Google, and to try to harmonize them. Because with the evolution of Google now, especially with our cloud business, we have become a partner to many of the world’s top organizations.

And so, for example, if Major League Baseball wants to create a new immersive experience for you at home through a digital device or, eventually when we get back to it, in the stadiums, it’s not just us creating technology, surfacing it to them, them telling us what they like about it, and then sending it back, and then we spin it; it’s actually collaborative innovation. So we have these approaches to machine learning that we think could be pretty interesting: We have technologies in AR/VR [augmented reality and virtual reality], we have content delivery networks, we have all of these different platforms at Google. And in this exploratory mode, we get together with these large customers and they help guide not only the features, but they help us think about what we’re going to build next. And then they layer on top of these foundational platforms the experience that they want as Major League Baseball [for] us as baseball fans. And that intertwined, collaborative technology development, that collaborative innovation, is at the heart of what we do here in the CTO group.

Shervin Khodabandeh: That’s a great example. Can you say a bit more about how you set strategy for projects like that?

Will Grannis: I’m very, very bullish about having the CTO and the CIO at the top table in an organization, because the CIO often is involved in the technology that a company uses for itself, for its own innovation. And I’ve often found that the tools and the collaboration and the culture that you have internally manifest themselves in the technology that you build for others. And so a CIO’s perspective on how to collaborate, the tools, how people are working together, how they could be working together, is just as important as the CTO’s view into what technology could be most impactful, most disruptive coming from the outside in. But you also want them sitting next to the CMO. You want them sitting next to the chief revenue officer, you want them with the CEO and the CFO. And the reason is because it creates a tension, right? I would never advocate that all of my ideas are great. Some of them are, but some of them [haven’t] panned out. And it’s really important that that unfiltered tension is created at the point at which corporate strategy is delivered. In fact, one of the things I learned from working for a couple of CEOs, both outside of Google and here, is that it’s a shared responsibility: [It’s] the responsibility of the CTO to put themselves in the room, to add that value, and it’s the responsibility of the CEO to pull it through the organization when the mode of operation may not be that way today.

Shervin Khodabandeh: That’s very true. And it corroborates our work, Sam, to a large extent, that it’s not just about building the layers of tech; it’s about process change, it’s about strategy alignment, and also it’s ultimately about what humans have to do differently to work with AI collaboratively. It’s also about how managers and midmanagers and the folks that are using AI can be more productive, more precise, more innovative, more imaginative in their day-to-day work. Can you comment a bit on that, in terms of how it could have changed the roles of individual employees, let’s say, in different roles, whether it’s in marketing or in pricing or customer servicing? Any thoughts or ideas on that?

Will Grannis: We had an exercise like this with a large retail customer, on a problem from the physical security and monitoring organization, and it turns out that one of the most disruptive and interesting and impactful framings of that problem came from someone who was in a product team totally unrelated to this area, who just got invited to this workshop as a representative of their org. So we can’t have everybody in every brainstorming session, despite the technology [that] allows us to put a lot of people in one place at one time, but choosing who is in those moments is absolutely critical. Just going to default roles or going to default responsibilities is one way to just keep the same information coming back again and again and again.

Sam Ransbotham: That’s certainly something we’re thinking about at a humanities-based university, that blend and that role of people. It’s interesting to me that in all your examples, you talked about joining people, and people from cross-functional teams, [but] you’ve never mentioned a machine in one of these roles, as a player. Is that too far-fetched? How are these combinations of humans going to add the combination of machine in here? We’ve got a lot of learning from machines, and I think certainly at a task level, at what point does it get elevated to a more strategic level? Is that too far away?

Will Grannis: No, I don’t think so, but [it’s] certainly in its early days. One of the ways you can see this manifest is [in] natural language processing, for example. I remember one project we had, we were training a chatbot, and it turned out we used raw logs, all privacy assured and everything, but we used these logs that a customer had provided because they wanted to see if we could build a better model. And it turns out that the chat agent wasn’t exactly speaking the way we’d want another human being to speak to us. And why? Because people get pretty upset when they’re talking to customer support, and the language that they use isn’t necessarily language I think we would use with each other on this podcast. And so we do think that machines will be able to offer some interesting response inputs, generalized inputs, at some point, but I can tell you right now, you want to be really careful about letting loose a natural language-enabled partner that is a machine inside of your creativity and innovation session, because you may not hear things that you like.

Sam Ransbotham: Well, it seems like there’s a role here, too, that, I don’t know, these machines, there’s going to be bias in these things. This is inevitable. And in some sense, I’m often happy to see biased decisions coming out of these AI and ML systems, because then it’s at least surfaced. We’ve got a lot of that unconsciously going on in our world right now, and if one of the things that we’re learning is that the machines are pointing out how ugly we’re talking to chatbots or how poorly we’re making other decisions, that may be a Step 1 to improving overall.

Will Grannis: Yeah. The responsible AI push, it’s never over; it’s one of those things. Ensuring those responsible and ethical practices requires a focus across the entire activity chain. And two areas that we’ve seen as really impactful are when you can focus on principles as an organization. So, what are the principles through which you will take your projects and shine the light on them and examine them and think about the ramifications? Because you can’t a priori define all of the potential outputs that machine learning and AI may generate.

And that’s where I refer to it as a journey, and I’m not sure that there is a final destination. I think it’s one that is a constant and, in the theme of a lot of what we talked about today, it’s iterative. You think about how you want to approach it: You have principles, you have governance, and then you see what happens, and then you make the adjustments along the way. But not having that foundation means you’re dealing with every single instance as its own unique instance, and that becomes untenable at scale, even small scale. This isn’t just a Google-scale thing; any company that wants to distinguish itself with AI at any type of scale is going to bump into that.

Sam Ransbotham: Will, we really appreciate you taking the time to talk with us today. It’s been fabulous. We’ve learned so much.

Shervin Khodabandeh: Really, really an insightful and candid conversation. Really appreciate it.

Will Grannis: Oh, absolutely. My pleasure. Thanks for having me.

Shervin Khodabandeh: Sam, I thought that was a really good conversation. We’ve been talking with Will Grannis, founder and leader of the Office of the CTO at Google Cloud.

Sam Ransbotham: Well, I think we may have lost some listeners saying that you don’t need ML as item one on his checklist, but I think he had 71 other items on his checklist that do involve machine learning.

Shervin Khodabandeh: But I thought he was making a really important point: Don’t get hung up on the technology and the feature functionality, and think about the business problem and the impact, and shoot really, really big for the impact. And then also, don’t think you have to achieve the moonshot in one jump; you could get there in progressive jumps, but you always have to keep your eye on the moon, which I think is really, really insightful.

Sam Ransbotham: That’s a great way of putting it, because I do think we got focused on thinking about the 10x, and we maybe paid less attention to his No. 1, which was the user focus and the problem.

Shervin Khodabandeh: The other thing I thought was an important point is collaboration. I think it’s really an overused term, because in every organization, every team would say, Yes, yes, we’re completely collaborative; everybody’s collaborating; they’re keeping each other informed. But I think the true meaning of what Will was talking about is beyond that. There are multiple meanings to collaboration. You could say, As long as I’m keeping people informed or sending them documents, then I’m collaborating. But what he said is, There’s not a single person on my team that can succeed on his or her own, and that’s a different kind of collaboration; it actually means you’re so interlinked with the rest of your team that your own outcome and output depends on everybody else’s work, so you can’t succeed without them and they can’t succeed without you. It’s really beyond collaboration. It’s like the team is an amalgam of all the people and they’re all embedded in each other as just one substance. What’s the chemical term for that?

Sam Ransbotham: Yeah, see, I knew you were going to make a chemical reference there. There you go: amalgam.

Shervin Khodabandeh: Amalgam or amalgam? I should know this as a chemical engineer.

Sam Ransbotham: Exactly. Were not going to be tested on this part of the program.

Shervin Khodabandeh: I hope my Caltech colleagues aren’t listening to this.

Sam Ransbotham: Yeah, actually, the collaboration thing. It’s easy to espouse collaboration. If you think about it, nobody we interview is going to say, All right, you know, I really think people should not collaborate. I mean, just, no one’s going to [say] that, but what’s different about what he said is they have process around it. And they had, it sounded like, structure and incentives so that people were incentivized to align well.

Shervin Khodabandeh: I like the gaming analog: the objective function in the game, whether it’s adversarial or you’re trying to beat a course or unleash some hidden prize somewhere; that there is some kind of an optimization or simulation or approximation or correlation going on in these games, and so the analog of that to a business problem rests so heavily on the very definition of the objective function.

Sam Ransbotham: Yeah, I thought the twist that he put on games was important, because he did pull out immediately that we can think about these as games, but what have we learned from games? We’ve learned from games that we need an objective, we need a structure, we need to define the problem. And he tied that really well into the transition from what we think of as super well-defined games of perfect information to unstructured problems. It still needs that problem definition. I thought that was a good switch.

Shervin Khodabandeh: Thats right.

Sam Ransbotham: Will brought out the importance of having good data for ML to work. He also highlighted how Google Cloud collaborates both internally and with external customers. Next time we’ll talk with Amit Shah, president of 1-800-Flowers, about the unique collaboration challenges that it uses AI to address through its platform. Please join us next time.

Allison Ryder: Thanks for listening to Me, Myself, and AI. If you’re enjoying the show, take a minute to write us a review. If you send us a screenshot, we’ll send you a collection of MIT SMR’s best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback@mit.edu.

Sam Ransbotham (@ransbotham) is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for MIT Sloan Management Review’s Artificial Intelligence and Business Strategy Big Ideas initiative. Shervin Khodabandeh is a senior partner and managing director at BCG and the coleader of BCG GAMMA (BCG’s AI practice) in North America. He can be contacted at shervin@bcg.com.

Me, Myself, and AI is a collaborative podcast from MIT Sloan Management Review and Boston Consulting Group and is hosted by Sam Ransbotham and Shervin Khodabandeh. Our engineer is David Lishansky, and the coordinating producers are Allison Ryder and Sophie Rüdinger.


Google Is Putting Money Into That Fast Undersea Internet Cable Between Eureka and Singapore, and They Think It’ll be Done by 2023 – Lost Coast Outpost

Posted: at 5:31 am

Surely a first: Google has literally put Eureka on one of its maps.

###

You’ve heard about that big, new, fat fiber optic pipe they’re going to lay down between Singapore and Eureka? (If you haven’t, check the links below.) The plan is that it’ll descend down into the deep from one of the world’s major financial capitals and make its way eastward, with stopovers in Indonesia and Guam, before crawling up to shore via the old Samoa pulp mill’s outfall pipe, which extends about a mile out to sea.

This is an exciting prospect! Already a big, international datacenter company has laid down plans for a new facility in Arcata, and it’s everyone’s dearest hope that even more tech dolla will rain down upon our shores once our slick new line is in place.

Which is when? Well, according to Google, it’s expected to be operational by the summer of 2023.

And when I say according to Google, I’m not saying that I just Googled it. I am saying that literally Google is saying that. Because, as the company announced on its Google Cloud Blog yesterday, it’s throwing a bunch of money at the Singapore-Eureka undersea line, which is codenamed Echo.

“Echo’s architecture is designed for maximum resilience,” writes the Goog. “Its unique trans-Pacific route to Southeast Asia avoids crowded, traditional paths to the north and is expected to be ready for service in 2023. We look forward to the expanded connectivity that Echo will bring to Southeast Asia, enabling new opportunities for people and businesses in the region.”

And this region too, hopefully.


Google Earth didn’t block the Suez Canal. The difference in color is from lighting, camera angles – PolitiFact

Posted: at 5:31 am

A giant container ship that found itself stuck sideways in the Suez Canal has finally been freed after creating a maritime traffic jam and disrupting global shipping.

But the incident spurred some online to claim that it was no accident and that Google Earth suspiciously blocked people from seeing the canal.

“Why’d Google earth block the Suez Canal?” one Facebook post questions while displaying screenshots from Google Earth that appear to show a large portion of the canal in a starkly darker blue compared to the rest of the passage.

"Very VERY interesting and worrisome," one commenter wrote.

"You know, because..censorship," another said.

The post was flagged as part of Facebook’s efforts to combat false news and misinformation on its News Feed. (Read more about our partnership with Facebook.)

There is nothing nefarious about Google Earth’s imagery of the canal, which was there before the ship blockage occurred. The program does not operate in real time, and includes satellite, aerial, 3D and street view images that are collected over time from various providers and platforms.

“The mosaic of satellite and aerial photographs you can see in Google Maps and Google Earth is sourced from many different providers, including state agencies, geological survey organizations and commercial imagery providers,” Matt Manolides, a satellite imagery expert at Google, said in a company blog post. “These images are taken on different dates and under different lighting and weather conditions.”

The inconsistency in the water color of the Suez Canal is due to a difference of day-to-day lighting conditions and camera angles, and a similar effect can be seen at other locations on Google Earth.

For example, the first image shared in the post has a moderate amount of surface reflection, making it look blue, and waves can be seen in it. The third image has a high amount of surface reflection and is brighter than the first image due to more sun reflecting back up to the satellite.

The artifacts are sometimes produced when Google Earth stitches satellite images together. They normally aren't so noticeable, but satellite imagery of water can sometimes make stitching more visible, according to Google.

Other places that show a similar effect include the Panama Canal, Straits of Gibraltar and Loch Ness.

We rate this post False.

