Category Archives: Google

Google isn't ready to turn search into a conversation – The Verge

Posted: May 31, 2021 at 2:31 am

The future of search is a conversation, at least according to Google.

It's a pitch the company has been making for years, and it was the centerpiece of last week's I/O developer conference. There, the company demoed two groundbreaking AI systems, LaMDA and MUM, that it hopes, one day, to integrate into all its products. To show off the technology's potential, Google had LaMDA speak as the dwarf planet Pluto, answering questions about the celestial body's environment and the flyby by the New Horizons probe.

As this tech is adopted, users will be able to talk to Google: using natural language to retrieve information from the web or their personal archives of messages, calendar appointments, photos, and more.

This is more than just marketing for Google. The company has evidently been contemplating what would be a major shift to its core product for years. A recent research paper from a quartet of Google engineers, titled "Rethinking Search," asks exactly this: is it time to replace classical search engines, which provide information by ranking webpages, with AI language models that deliver these answers directly instead?

There are two questions to ask here. First: can it be done? After years of slow but definite progress, are computers really ready to understand all the nuances of human speech? And second: should it be done? What happens to Google if the company leaves classical search behind? Appropriately enough, neither question has a simple answer.

There's no doubt that Google has been pushing a vision of speech-driven search for a long time now. It debuted Google Voice Search in 2011, then upgraded it to Google Now in 2012; launched Assistant in 2016; and in numerous I/Os since has foregrounded speech-driven, ambient computing, often with demos of seamless home life orchestrated by Google.

Despite clear advances, I'd argue that the actual utility of this technology falls far short of the demos. Take the introduction of Google Home in 2016, for example, where Google promised that the device would soon let users control things beyond the home, like booking a car, ordering dinner, or sending flowers to mom, and much, much more. Some of these things are now technically feasible, but I don't think they're common: speech has not proven to be the flexible and faultless interface of our dreams.

Everyone will have different experiences, of course, but I find that I only use my voice for very limited tasks. I dictate emails on my computer, set timers on my phone, and play music on my smart speaker. None of these constitute a conversation. They are simple commands, and experience has taught me that if I try anything more complicated, words will fail. Sometimes this is due to not being heard correctly (Siri is atrocious on that score), but often it just makes more sense to tap or type my query into a screen.

Watching this year's I/O demos, I was reminded of the hype surrounding self-driving cars, a technology that has so far failed to deliver on its biggest claims (remember Elon Musk promising that a self-driving car would take a cross-country trip in 2018? It hasn't happened yet). There are striking parallels between the fields of autonomous driving and speech tech. Both have seen major improvements in recent years thanks to the arrival of new machine learning techniques coupled with abundant data and cheap computation. But both also struggle with the complexity of the real world.

In the case of self-driving cars, we've created vehicles that don't perform reliably outside of controlled settings. In good weather, with clear road markings, and on wide streets, self-driving cars work well. But steer them into the real world, with its missing signs, sleet and snow, and unpredictable drivers, and they are clearly far from fully autonomous.

It's not hard to see the similarity with speech. The technology can handle simple, direct commands that require the recognition of only a small number of verbs and nouns (think "play music," "check the weather," and so on) as well as a few basic follow-ups, but throw these systems into the deep waters of conversation and they flounder. As Google's CEO Sundar Pichai commented at I/O last week: "Language is endlessly complex. We use it to tell stories, crack jokes, and share ideas. [...] The richness and flexibility of language make it one of humanity's greatest tools and one of computer science's greatest challenges."

However, there are reasons to think things are different now (for speech, anyway). As Google noted at I/O, it's had tremendous success with a new machine learning architecture known as Transformers, a model that now underpins the world's most powerful natural language processing (NLP) systems, including OpenAI's GPT-3 and Google's BERT. (If you're looking for an accessible explanation of the underlying tech and why it's so good at parsing language, I highly recommend this blog post from Google engineer Dale Markowitz.)
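
For readers who want a feel for what a Transformer actually computes, here is a minimal sketch of scaled dot-product self-attention, the core operation shared by models like BERT and GPT-3. This is a toy illustration in plain NumPy, not Google's code; the shapes and names are invented for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: every token attends to every other.

    Q, K, V: arrays of shape (seq_len, d) holding query, key, and
    value vectors for each token in the sequence.
    """
    d = Q.shape[-1]
    # Similarity of every query with every key, scaled to keep the
    # softmax numerically well-behaved.
    scores = Q @ K.T / np.sqrt(d)
    # Softmax over keys: each row becomes a distribution saying how
    # much this token should "listen" to every other token.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each token's output is a weighted mix of all value vectors.
    return weights @ V

# Toy example: a 4-token "sentence" with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

Real models stack dozens of these attention layers (with learned projections for Q, K, and V), which is why adding compute keeps paying off.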

The arrival of Transformers has created a truly incredible, genuinely awe-inspiring flowering of AI language capabilities. As has been demonstrated with GPT-3, AI can now generate a seemingly endless variety of text, from poetry to plays, creative fiction to code, and much more, often with surprising ingenuity and verve. These models also deliver state-of-the-art results in various speech and linguistic tests and, what's better, they scale incredibly well: pump in more computational power and you get reliable improvements. The supremacy of this paradigm is sometimes known in AI as "the bitter lesson," and it is very good news for companies like Google. After all, they've got plenty of compute, and that means there's lots of road ahead to improve these systems.

Google channeled this excitement at I/O. During a demo of LaMDA, which has been trained specifically on conversational dialogue, the AI model pretended first to be Pluto, then a paper airplane, answering questions with imagination, fluency, and (mostly) factual accuracy. "Have you ever had any visitors?" a user asked LaMDA-as-Pluto. The AI responded: "Yes I have had some. The most notable was New Horizons, the spacecraft that visited me."

A demo of MUM, a multimodal model that understands not only text but also images and video, had a similar focus on conversation. When the model was asked, "I've hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?" it was smart enough to know that the questioner is not only looking to compare mountains, but that preparation means finding weather-appropriate gear and relevant terrain training. If this sort of subtlety can transfer into a commercial product (and that's obviously a huge, skyscraper-sized "if"), then it would be a genuine step forward for speech computing.

That, though, brings us to the next big question: even if Google can turn speech into a conversation, should it? I won't pretend to have a definitive answer to this, but it's not hard to see big problems ahead if Google goes down this route.

First are the technical problems. The biggest is that it's impossible for Google (or any company) to reliably validate the answers produced by the sort of language AI the company is currently demoing. There's no way of knowing exactly what these sorts of models have learned or what the source is for any answer they provide. Their training data usually consists of sizable chunks of the internet and, as you'd expect, this includes both reliable data and garbage misinformation. Any response they give could be pulled from anywhere online. This can also lead them to produce output that reflects the sexist, racist, and biased notions embedded in parts of their training data. And these are criticisms that Google itself has seemingly been unwilling to reckon with.

Similarly, although these systems have broad capabilities and are able to speak on a wide array of topics, their knowledge is ultimately shallow. As Google's researchers put it in their paper "Rethinking Search," these systems learn assertions like "the sky is blue," but not associations or causal relationships. That means they can easily produce bad information based on their own misunderstanding of how the world works.

Kevin Lacker, a programmer and former Google search quality engineer, illustrated these sorts of errors in GPT-3 in this informative blog post, noting how you can stump the program with common-sense questions like "Which is heavier, a toaster or a pencil?" (GPT-3 says: "A pencil") and "How many eyes does my foot have?" ("Your foot has two eyes").

To quote Google's engineers again from "Rethinking Search": these systems do not have a true understanding of the world, they are prone to hallucinating, and, crucially, they are incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over.

These issues are amplified by the sort of interface Google is envisioning. Although it's possible to overcome difficulties with things like sourcing (you can train a model to provide citations, for example, noting the source of each fact it gives), Google imagines every answer being delivered ex cathedra, as if spoken by Google itself. This potentially creates a burden of trust that doesn't exist with current search engines, where it's up to the user to assess the credibility of each source and the context of the information they're shown.
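
One way to ease that burden of trust, hinted at in the sourcing aside above, is a retrieve-then-answer design: fetch a supporting passage first, then attach its source to the reply. The sketch below shows the pattern with a toy two-document corpus and naive word-overlap scoring; the URLs and passages are invented for illustration, and real systems use learned retrievers rather than word counts.

```python
from collections import Counter

# Hypothetical corpus: source URL -> passage.
CORPUS = {
    "nasa.gov/new-horizons": "New Horizons performed a flyby of Pluto in July 2015.",
    "example.com/pluto": "Pluto is a dwarf planet in the Kuiper belt.",
}

def retrieve(query: str) -> tuple[str, str]:
    """Return the (source_url, passage) with the best word overlap."""
    q = Counter(query.lower().split())
    def score(text: str) -> int:
        return sum((q & Counter(text.lower().split())).values())
    return max(CORPUS.items(), key=lambda kv: score(kv[1]))

def answer_with_citation(query: str) -> str:
    source, passage = retrieve(query)
    # A real system would generate prose from the passage; here we quote it,
    # but either way the source travels with the answer.
    return f"{passage} [source: {source}]"

print(answer_with_citation("Which spacecraft performed a flyby of Pluto"))
```

The design choice matters: an answer that carries its source lets the user judge credibility, which is exactly the context a single spoken reply strips away.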

The pitfalls of removing this context are obvious when we look at Google's featured snippets and knowledge panels, the cards that Google shows at the top of the Google.com search results page in response to specific queries. These panels highlight answers as if they're authoritative, but the problem is they're often not, an issue that former search engine blogger (and now Google employee) Danny Sullivan dubbed the "one true answer" problem.

These snippets have made headlines when users discover particularly egregious errors. One example from 2017 involved asking Google "Is Obama planning martial law?" and receiving the answer (cited from a conspiracy news site) that, yes, of course he is (if he was, it didn't happen).

In the demos Google showed at I/O this year of LaMDA and MUM, it seems the company is still leaning toward this "one true answer" format. You ask and the machine answers. In the MUM demo, Google noted that users will also be given pointers to go deeper on topics, but it's clear that the interface the company dreams of is a direct back-and-forth with Google itself.

This will work for some queries, certainly; for simple demands that are the search equivalent of asking Siri to set a timer on my phone (e.g., asking when Madonna was born, who sang "Lucky Star," and so on). But for complex problems, like those Google demoed at I/O with MUM, I think they'll fall short. Tasks like planning holidays, researching medical problems, shopping for big-ticket items, looking for DIY advice, or digging into a favorite hobby all require personal judgment rather than a computer's summary.

The question, then, is: will Google be able to resist the lure of offering one true answer? Tech watchers have noted for a while that the company's search products have become more Google-centric over time. The company increasingly buries results under ads that are both external (pointing to third-party companies) and internal (directing users to Google services). I think the "talk to Google" paradigm fits this trend. The underlying motivation is the same: it's about removing intermediaries and serving users directly, presumably because Google believes it's best positioned to do so.

In a way, this is the fulfillment of Google's corporate mission "to organize the world's information and make it universally accessible and useful." But this approach could also undermine what makes the company's product such a success in the first place. Google isn't useful because it tells you what you need to know; it's useful because it helps you find this information for yourself. Google is the index, not the encyclopedia, and it shouldn't sacrifice search for results.


Virus alert apps powered by Apple and Google have had limited success. – The New York Times

Posted: at 2:31 am

When Apple and Google collaborated last year on a smartphone-based system to track the spread of the coronavirus, the news was seen as a game changer. The software uses Bluetooth signals to detect app users who come into close contact. If a user later tests positive, they can anonymously notify other app users they may have crossed paths with in restaurants, on trains or elsewhere.
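
The key design choice in the Apple-Google system is that matching happens on the phone, not on a central server. Below is a deliberately simplified sketch of that idea; real deployments derive rotating Bluetooth identifiers cryptographically rather than using raw random tokens, so treat this as a conceptual model only.

```python
import secrets

class Phone:
    """Toy model of decentralized Bluetooth exposure notification."""
    def __init__(self):
        self.my_tokens = [secrets.token_hex(8) for _ in range(14)]  # one per day
        self.heard = set()  # tokens broadcast by nearby phones

    def near(self, other: "Phone", day: int):
        # When two phones are in close contact, each records the
        # other's rotating token for that day.
        self.heard.add(other.my_tokens[day])
        other.heard.add(self.my_tokens[day])

    def check_exposure(self, published_tokens: list[str]) -> bool:
        # Matching is local: the server only ever sees the tokens of
        # users who chose to report a positive test.
        return any(t in self.heard for t in published_tokens)

alice, bob, carol = Phone(), Phone(), Phone()
alice.near(bob, day=3)                  # Alice and Bob share a train carriage
published = bob.my_tokens               # Bob tests positive and uploads his tokens
print(alice.check_exposure(published))  # True
print(carol.check_exposure(published))  # False
```

The accuracy complaints described below stem partly from the physical layer this model glosses over: Bluetooth signal strength is a noisy proxy for actual distance.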

Soon countries around the world and some two dozen American states introduced virus apps based on the Apple-Google software. To date, the apps have been downloaded more than 90 million times, according to an analysis by Sensor Tower, an app research firm. Public health officials say the apps have provided modest but important benefits.

But Natasha Singer of The New York Times reports that some researchers say the two companies' product and policy choices have limited the system's usefulness, raising questions about the power of Big Tech to set global standards for public health tools.

Computer scientists have reported accuracy problems with the Bluetooth technology. Some app users have complained of failed notifications, and there has been little rigorous research on whether the apps' potential to accurately alert people of virus exposures outweighs potential drawbacks, like falsely warning unexposed people or failing to detect users exposed to the virus.


Google now lets you password-protect the page that shows all your searches – The Verge

Posted: at 2:31 am

Google has added a way to put a password on your Web & App Activity page, which shows all your activity from across Google services, including your searches, YouTube watch history, and Google Assistant queries (via Android Police). Without the verification, anyone who picks up a device you're logged into could see that activity.

To activate the verification, you can go to activity.google.com and click the "Manage My Activity verification" link. From there, you can select the "Require Extra Verification" option, save, and enter your password to confirm that you're the one trying to make the change.

If you don't have the verification turned on, visiting activity.google.com will show a stream of your Google activity from across your devices, without asking for a password.

Turning on verification, however, will require whoever's trying to see the information to click the "Verify" button and enter the Google account password before it'll show any history. For those who share a computer, or who sometimes let others who aren't exactly trustworthy use their device, this could be a very useful toggle.

While you're on the Web & App Activity page, you can also take a look at what activity Google is saving, and whether it's being auto-deleted. Then, you can decide if you're happy with those settings. If not, this is the page to change them.

At its I/O keynote last week, Google talked a lot about privacy with its announcements of Android's new Private Compute Core, a locked photos folder, and the ability to quickly delete your past 15 minutes of browsing in Chrome.


Google Photos to end free unlimited storage from tomorrow: Plans, how to check space and more – The Indian Express

Posted: at 2:31 am

Google Photos will officially end its unlimited free storage policy for pictures uploaded at High quality and Express quality starting tomorrow, June 1. The policy change was announced in November last year. If you've relied only on Google Photos to back up all your smartphone pictures, you will soon need to start worrying about the storage space on your account.

The policy change also means Google wants more consumers to pay up for the cloud storage service. Here's everything to keep in mind as Google changes its policy on cloud storage for Photos.

Google offers 15GB of free storage space. This space is divided across Gmail, Google Drive and Photos. Under the earlier policy, photos at High or Express quality, which are both compressed formats, did not count towards free storage. This meant one could upload photos for free without worrying about running out of space.

From June 1, these photos will count towards the 15GB free quota. If you are continuously uploading photos to your Google account, then you will perhaps need to buy some extra storage space.

Google One is the paid subscription that adds 100GB or more of storage to your account, depending on the plan you choose. The basic plan offers 100GB at Rs 130 per month or Rs 1,300 per year.

The 200GB plan costs Rs 210 per month. The other plans are 2TB at Rs 650 per month or Rs 6,500 per year, 10TB at Rs 3,250 per month, 20TB at Rs 6,500 per month, and 30TB at Rs 9,750 per month.

Google says earlier photos are not impacted by the policy change. So even if you are not a paying Google One customer, photos uploaded before June 1 will not count towards your storage, and you don't need to worry about transferring or deleting them in order to get extra space. But all photos uploaded from June 1 will count towards your storage space.

Just go to your Google account and log in to the account storage management tool; the link can be found at one.google.com/storage/management. Google will show which extra files can be deleted, including from Photos, Gmail and Drive.

Typically Pixel users get free unlimited storage on Google Photos, but the new policy will bring some changes as well.

Those with a Pixel 3a or higher (up to Pixel 5) can continue to upload photos in High quality for free without worrying about storage space. But photos in Original quality will count towards the free storage space.

Those with an older Pixel 3 continue to get unlimited free storage at Original quality for all photos and videos uploaded till January 31, 2022. Photos and videos uploaded on or before that date will remain free at Original quality.

After January 31, 2022, new photos and videos will be uploaded at High quality for free. If you upload new photos and videos at Original quality, they will count against the free storage quota.

Pixel 2 owners were given free storage at Original quality for all photos and videos uploaded till January 16, 2021. Photos and videos uploaded on or before that date will remain free at Original quality. After January 16, 2021, new photos and videos will be uploaded at High quality for free. If you upload new photos and videos at Original quality, they will count against your storage quota.

Those with the original Pixel (2016) get unlimited free storage at Original quality. They won't be able to upload in High quality, according to the support page.


Google Photos finally stops pretending its compressed photos are high quality – The Verge

Posted: at 2:31 am

Are you planning to stick with Google Photos when its free unlimited storage disappears on June 1st? If you're anything like me, you're probably still struggling to figure out whether you can afford to procrastinate that decision a tad longer, and today, Google has made that reckoning a little bit easier.

First off, the company's finally telling it like it is: Google will no longer pretend its compressed, lower-quality photos and videos are "High quality," something that would have saved me a lengthy explanation just last week! (After June 1st, existing Google Pixel phone owners still get unlimited "High quality" photos, but if you're on, say, a Samsung or iPhone instead, it's not like there was ever a "Normal quality" photo that doesn't count against the new 15GB limit.)

Soon, "Storage saver" will be the name for Google's normal-quality photos, formerly known as "High quality." You'll be able to upload at either the "Storage saver" or "Original quality" tiers, both of which will count against your storage quota, with Original quality using more data.

What if you've already got 10GB worth of Gmail and 2GB of documents stored in Google Drive, like yours truly, leaving just 3GB for photos before you'll need to pay? First off, know that your existing "High quality" photos from before June 1st don't count against the quota. But also, Google has a new tool to help you find and delete blurry photos and large videos to help you free up even more space.

You can find the tool in the "Manage storage" section of the app. It'll also help you find and delete screenshots, though that's been a feature of Google Photos for a while now. Google also promises to notify users who are nearing their quota, and you can click here for a storage estimate if you're logged into your account.
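
If you'd rather do the arithmetic yourself, here's a back-of-the-envelope sketch of how long that 3GB of headroom from the example above might last once new uploads start counting. The 3MB-per-photo figure and the 300-photos-a-month rate are assumptions for illustration, not Google numbers.

```python
FREE_QUOTA_GB = 15.0  # shared across Gmail, Drive, and Photos

def months_until_full(gmail_gb: float, drive_gb: float,
                      photos_per_month: int, mb_per_photo: float = 3.0) -> float:
    """Rough estimate of how long the free tier lasts from June 1.

    mb_per_photo is an assumed average for compressed "Storage saver"
    uploads; actual sizes vary by camera and scene.
    """
    remaining_gb = FREE_QUOTA_GB - gmail_gb - drive_gb
    monthly_gb = photos_per_month * mb_per_photo / 1024
    return remaining_gb / monthly_gb

# The example above: 10GB of Gmail + 2GB of Drive leaves 3GB for photos.
print(round(months_until_full(10, 2, photos_per_month=300), 1))  # ~3.4 months
```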

Still confused, perhaps? I wouldn't blame you; it took a while for me to get it all straight in my head, particularly considering that Google offers different levels of grandfathered free storage depending on which Pixel phone you own. Here's an attempt to condense that info for you:

- Original Pixel (2016): unlimited free storage at Original quality.
- Pixel 2: uploads through January 16th, 2021 stay free at Original quality; after that, new uploads are free only at High quality.
- Pixel 3: uploads through January 31st, 2022 stay free at Original quality; after that, new uploads are free only at High quality.
- Pixel 3a through Pixel 5: unlimited free High quality uploads; Original quality uploads count against the quota.

Future Google phones won't have these perks: existing Pixels will be the last to come with free unlimited "High quality" uploads, Google confirmed to The Verge in November.


Google's new tool will identify skin conditions: what will people do with that information? – The Verge

Posted: at 2:31 am

Google announced last Tuesday that it developed a new artificial intelligence tool to help people identify skin conditions. Like any other symptom-checking tool, it'll face questions over how accurately it can perform that task. But experts say it should also be scrutinized for how it influences people's behavior: does it make them more likely to go to the doctor? Less likely?

These types of symptom-checking tools (which usually clarify that they can't diagnose health conditions but can give people a read on what might be wrong) have proliferated over the past decade. Some have millions of users and are valued at tens of millions of dollars. Dozens popped up over the past year to help people check to see if they might have COVID-19 (including one by Google).

Despite their growth, there's little information available about how symptom-checkers change the way people manage their health. It's not the type of analysis companies usually do before launching a product, says Jac Dinnes, a senior researcher at the University of Birmingham's Institute of Applied Health Research who has evaluated smartphone apps for skin conditions. They focus on the answers the symptom-checkers give, not the way people respond to those answers.

"Without actually evaluating the tools as they're intended to be used, you don't know what the impact is going to be," she says.

Google's dermatology tool is designed to let people upload three photos of a skin issue and answer questions about symptoms. Then, it offers a list of possible conditions that the artificial intelligence-driven system thinks are the best matches. It shows textbook images of the condition and prompts users to then search the condition in Google. Users have the option to save the case to review it later or delete it entirely. The company aims to launch a pilot version later this year.

It also may introduce ways for people to continue research on a potential problem outside the tool itself, a Google spokesperson told The Verge.

When developing artificial intelligence tools like the new Google program, researchers tend to evaluate the accuracy of the machine learning program. They want to know exactly how well it can match an unknown thing, like an image of a strange rash someone uploads, with a known problem. Google hasn't published data on the latest iteration of its dermatology tool, but the company says it includes an accurate match to a skin problem in the top three suggested conditions 84 percent of the time.
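
That 84 percent figure is a top-3 accuracy metric: the true condition appears somewhere among the model's three highest-ranked suggestions. Here's a minimal sketch of how such a metric is computed, using invented toy labels rather than Google's evaluation data:

```python
def top_k_accuracy(predictions: list[list[str]], truths: list[str], k: int = 3) -> float:
    """Fraction of cases where the true label is in the top-k suggestions."""
    hits = sum(truth in preds[:k] for preds, truth in zip(predictions, truths))
    return hits / len(truths)

# Toy example: ranked condition lists for three hypothetical cases.
preds = [
    ["eczema", "psoriasis", "contact dermatitis"],
    ["acne", "rosacea", "folliculitis"],
    ["melanoma", "benign nevus", "seborrheic keratosis"],
]
truths = ["psoriasis", "eczema", "melanoma"]
print(top_k_accuracy(preds, truths))  # 2 of 3 cases hit: ~0.67
```

Note what top-3 accuracy does not measure: whether users correctly interpret a three-item list, which is exactly the behavioral question the researchers below raise.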

There's typically less focus on what users do with that information. This makes it hard to tell if a tool like this could actually meet one of its stated goals: to give people access to information that might take some of the load off dermatologists, who are stretched thin all over the world. "There's no doubt that there's such a huge demand for dermatologists," Dinnes says. "There's a desire to use tools that are perceived as helping the situation, but we don't actually know if they're going to help."

"It's a big gap in our understanding," says Hamish Fraser, an associate professor of medical science at Brown University who studies symptom-checkers. "In addition to the basic problem of whether people can even interpret the systems correctly and use them correctly, there's also this question about whether people will actually respond to anything that is fed back to them from the system."

Filling that gap is key as more of these tools come onto the market, Fraser says. There are more and more emerging technologies, and understanding how they could change people's behavior is important because their role in healthcare will likely grow.

"People are already voting with their feet, in terms of using Google and other search engines to check symptoms and look up diseases," Fraser says. "There's obviously a need there."

Ideally, Fraser says, future studies would ask people using a symptom-checker for permission to follow up and ask what they did next or ask for permission to contact their doctor.

"You would start to very quickly get a sense as to whether a random sample of millions of people using it got something from the system that related to what was actually going on, or what their family doctor said, or whether they went to the emergency department," he says.

One of the few studies that have asked some of these questions followed up with around 150,000 people who used a virtual medical chatbot called Buoy Health. Researchers checked how likely people said they were to go to the doctor before using the bot and how likely they were to go after they saw what the bot had to say. Around a third of people said they would seek less urgent care (maybe wait to see a primary care doctor rather than go to the emergency room). Only 4 percent said they would take more urgent steps than before they used the chatbot. The rest stayed around the same.

It's only one study, and it evaluates a checker for general medical symptoms, like reproductive health issues and gastrointestinal pain. But the findings were, in some ways, counterintuitive: many doctors worry that symptom-checkers lead to overuse of the health system and send people to get unnecessary treatment. "This seemed to show the opposite," Fraser says. The findings also showed how important accuracy is: diverting people from treatment could be a big problem if done improperly.

"If you've got something that you're concerned about on your skin, and an app tells you it's low risk or it doesn't think it's a problem, that could have serious consequences if it delays your decision to go and have a medical consultation," Dinnes says.

Still, that type of analysis tends to be uncommon. The company behind an existing app for checking skin symptoms, called Aysa, hasn't yet explicitly surveyed users to find out what steps they took after using the tool. Based on anecdotal feedback, the company thinks many people use the tool as a second opinion to double-check information they got from a doctor, says Art Papier, the chief executive officer of VisualDx, the company behind Aysa. But he doesn't have quantitative data.

"We don't know if they went somewhere else after," he says. "We don't ask them to come back to the app and tell us what the doctor said." Papier says the company is working to build those types of feedback loops into the app.

Google has planned follow-up studies for its dermatology tool, including a partnership with Stanford University to test the tool in a health setting. The company will monitor how well the algorithm performs, Lily Peng, a physician-scientist and product manager for Google, said in an interview with The Verge. The team has not announced any plans to study what people do after they use the tool.

Understanding the way people tend to use the information from symptom-checkers could help ensure the tools are deployed in a way that will actually improve people's experience with the healthcare system. Information on what steps groups of people take after using a checker would also give developers and doctors a more complete picture of the stakes of the tools that they're building. People with the resources to see a specialist might be able to follow up on a concerning rash, Fraser says. "If things deteriorate they'll probably take action," he says.

Others without that access might only have the symptom-checker. "That puts a lot of responsibility on us; people who are particularly vulnerable and less likely to get a formal medical opinion may well be relying most on these tools," he says. "It's especially important that we do our homework and make sure they're safe."


Skin in the frame: black photographers welcome Google initiative – The Guardian

Posted: at 2:31 am

Christina Ebenezer first started taking photos with a group of friends when she was a 17-year-old student. Even then, she noticed the difference in how her camera captured people of different skin tones.

"I didn't think much about this until I got older and became more experienced in photography. It was when I learned that the early Kodak Vericolor Shirley Cards were based on various white women that I thought, OK, this was an industry standard that was not made with people like me in mind," Ebenezer, who has photographed for British Vogue, British GQ, and Vanity Fair, said.

Kodak's Shirley Cards were used by photo labs for calibrating skin tones, shadows and light in photographs. The card, named after the original model who worked for Kodak, ensured Shirley looked good, to the detriment of people with darker skin colour.

Robert Taylor, who has been a photographer for 30 years, remembers working with well-intentioned white photographers who had plainly done their best, but just hadn't got to grips with the technical and aesthetic challenges of doing black people and black skin right.

Taylor, whose work is held in several permanent collections including the National Portrait Gallery, the Victoria & Albert Museum, and the Royal Society, added: "And in some cases, the settings and the choices of how things are set up in analogue as well as in digital just didn't work as well with dark skin."

It is this bias that Google's new equitable camera initiative hopes to tackle. The company has partnered with 17 professional image-makers to make changes to its computational photo algorithms to address long-standing problems, a spokesperson said.

The initiative has been welcomed by black photographers in the UK. "It's definitely an important step forward. It's amazing and commendable what they want to do," said Daniel Oluwatobi, a photographer and videographer who has worked with a range of musicians, including Ella Mai, Pop Smoke, Burna Boy and the group NSG.

But, Oluwatobi added, people need to be more conscious not to put too much blame on the equipment itself. "I want to have a balanced approach," he explained. "A lot of the time, it's the person behind the camera, and also the preferences involved in post-production. I've taken pictures on absolutely dreadful cameras and I've made black people look amazing because of how I am about lighting, post-production, and even the style I seek."

Ebenezer agrees that the racial bias in photography goes much further than the equipment itself. Though she started off taking pictures of family and friends, from a range of different skin tones, she was pressured to focus on white models when she got into fashion. "I was told you really need to do this for your portfolio to be taken seriously," she said.

"It got to a point where I thought, why am I trying to mould myself into something that I'm not? I've grown up around so much beauty when it came to people of different races and ethnicities. So why would I now make my portfolio based on people that I didn't have a personal connection with? I see my family members, I see my friends, I see those are the people that are around me 24/7, so why would I shy away from highlighting people like them in my work?"

For Ebenezer and many other black creatives, the past year has been a busy one as the industry responded to the Black Lives Matter movement by commissioning them for work. Ebenezer describes this progress as mixed. While more people are listening and trusting her skill, she is still often the only black person on set.

"I'm less clear that anything really different is going on. The things that will make a change are more opportunities for high-quality work, and sincere, sensitive engagement between people who are not alike. That's what will make the breakthrough," Taylor said.


Skiff, an end-to-end encrypted alternative to Google Docs, raises $3.7M seed – TechCrunch

Posted: at 2:31 am

Imagine if Google Docs was end-to-end encrypted so that not even Google could access your documents. That's Skiff, in a nutshell.

Skiff is a document editor with a similar look and feel to Google Docs, allowing you to write, edit and collaborate in real time with colleagues, with privacy baked in. Because the document editor is built on a foundation of end-to-end encryption, Skiff doesn't have access to anyone's documents; only users, and those who are invited to collaborate, do.

It's an idea that has already attracted the attention of investors. Skiff's co-founders Andrew Milich (CEO) and Jason Ginsberg (CTO) announced today that the startup has raised $3.7 million in seed funding from venture firm Sequoia Capital, just over a year since Skiff was founded in March 2020. Alphabet chairman John Hennessy, former Yahoo chief executive Jerry Yang and Eventbrite co-founders Julia and Kevin Hartz also participated in the round.

Milich and Ginsberg told TechCrunch that the company will use the seed funding to grow the team and build out the platform.

Underneath its document editor, Skiff isn't that much different from WhatsApp or Signal, which are also end-to-end encrypted. "Instead of using it to send messages to a bunch of people, we're using it to send little pieces of documents and then piecing those together into a collaborative workspace," said Milich.
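
To make Milich's description concrete, here is a minimal sketch of that pattern: each small piece of a document is encrypted separately on the client, the server stores only ciphertext, and a collaborator holding the key reassembles the plaintext locally. It uses the `cryptography` package's Fernet recipe for brevity; Skiff's actual protocol is certainly more involved, and this shows only the general shape of chunk-level end-to-end encryption.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # held by the user, never by the server
f = Fernet(key)

document = "Quarterly plan: hire two engineers. Budget: $400k."
chunks = document.split(". ")  # toy chunking; a real editor chunks by block

# The "server" stores only opaque ciphertext blobs, one per chunk.
server_store = [f.encrypt(chunk.encode()) for chunk in chunks]

# A collaborator with the key reassembles the plaintext locally.
restored = ". ".join(f.decrypt(blob).decode() for blob in server_store)
print(restored == document)  # True
```

In a real collaborative editor, keys would be shared with invited collaborators (typically via public-key cryptography), which is what keeps the provider itself locked out.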

But the co-founders acknowledged that putting your sensitive documents in the cloud requires users to put a lot of trust in the startup, particularly one that hasn't been around for long. That's why Skiff published a whitepaper with technical details of how its technology works, and has begun to open source parts of its code, allowing anyone to see how the platform works. Milich said Skiff has also gone through at least one comprehensive security audit, and the company counts advisors from the Signal Foundation to Trail of Bits.

It seems to be working. In the months since Skiff soft-launched through an invite-only program, thousands of users (including journalists, research scientists and human rights lawyers) use Skiff every day, with another 8,000 users on a waitlist.

"The group of users that we're most excited about are just regular people that care about privacy," said Ginsberg. "There are just so many privacy communities and people that are advocates for these types of products that really care about how they're built and have sort of lost trust in big companies."

"They're using us because they're really excited about the vision and the future of end-to-end encryption," he said.


Newly unredacted documents show Google shared location with other apps and more – Arizona Mirror

Posted: at 2:31 am

A new version of Arizona Attorney General Mark Brnovich's lawsuit against tech behemoth Google alleges the company tracked users' location across third-party apps and still gathered that information when devices connected to WiFi, even if location services were off.

And company employees voiced concerns that the media, including The New York Times, would find out.

"So there is no way to give a third party app your location and not Google?" a Google employee is quoted as saying in the complaint, in a newly unredacted section. "This doesn't sound like something we would want on the front page of the NYT."

The complaint is part of an ongoing consumer fraud lawsuit Brnovich first filed in May 2020, alleging that Google's data collection schemes violated the state's Consumer Fraud Act, though large portions of the lawsuit were redacted by the court at Google's request. What has followed is a legal battle over what can be released.

The AG began investigating Google after an Associated Press article in 2018, and Brnovich has been part of a 48-state antitrust investigation into Google since 2019. Brnovich has been widely critical of Google in the past.

Some of the redacted portions of the lawsuit and its exhibits have since been made public, including internal Google emails. Many remain fully redacted and had been previously filed under seal.

Previously unsealed documents showed that Google's own software engineers did not understand how its privacy functions worked with regard to location history settings.

"I agree with the article," an unnamed Google employee wrote in an internal email, referring to the AP article in 2018. "Location off should mean location off; not except for this case or that case."

The newly released documents shed further light on Google employees' feelings about location settings and other privacy issues.

"Real people just think in terms of location is on, location is off because that is exactly what you have on the front screen of your phone," one unnamed Google employee says in one of the newly released documents, in response to a colleague sharing a story about how users were confused by the setting.

The newly released documents also revealed that location history would still get some information from your phone, even with it turned off.

When any Android user with GPS activated on their phone checked into the Google Play Store, where apps are downloaded and updated, the Play Store grabbed location information, an unnamed Google employee said in an internal email. The only way to stop the Play Store from doing so is to install a whole new operating system on the phone, the employee said.

"[G]iven what you seem to want to do (not have any contact with any Google service whatsoever), your only option is flashing LineageOS for microG on your phone and getting away entirely from the Google ecosystem," the Google employee suggested in the email, addressing a user who wanted to remove their location information from Google.

Google also seemed aware that location history settings were confusing and not easy for its users to access.

"Today, a collection of device usage and diagnostic data is smeared across 5 settings resulting in conditions that are difficult for Googlers, let alone users, to understand," a highly redacted document says.

The company also seemed aware of the possible fallout that the AP article would bring: an internal group chat used to share news was quickly shut down when employees began discussing the article.

"Although I know how it works and what the difference between Location and Location History is, I did not know Web and App activity had anything to do with location," one Googler said in reply to the AP article. "Also seems we are not very good at explaining this to users."

"Please don't comment!" the next response in the thread read.

The new complaint also reveals that Brnovich is accusing Google of continuing to collect location data through WiFi connectivity.

"Google makes it so a user cannot opt out of this form of location tracking unless the user actually completely disables the WiFi functionality on his or her device," the complaint alleges, meaning the device cannot connect to the internet through WiFi.

Google is still able to get location information on users from their IP addresses, which can often reveal personally identifying information.

Googlers recognized that this behavior could be seen in a bad light by users, according to the new complaint.

"[W]e probably don't want it to be seen as hiding information from the user. As in we estimate where you are at the zip code level, but we will show you very local ads so that you don't freak out," the unnamed employee said.

The new filing also reveals that Google allegedly pressured LG into moving the placement of the location toggle on its phones to the second page of settings, and it appears Google may have attempted to pressure others to do the same, according to the new complaint.

"Google tried to convince these carriers and manufacturers to conceal the location settings or make them less prominent through active misrepresentations and/or concealments, suppression, or omission of facts available to Google concerning user experience in order to assuage their privacy concerns," Brnovich's lawsuit alleges. "In reality, Google was simply trying to boost the location attach rate, which is critical for Google's own advertising revenue."

Google rejected Brnovich's new claims and said he isn't truthfully representing the company's products and services.

"The Attorney General has gone out of his way to mischaracterize our services. We have always built privacy features into our products and provided robust controls for location data," a Google spokesperson told the Arizona Mirror in a written statement. "We look forward to setting the record straight."

Google said it has requested that the court redact portions of the documents that were released in order to protect proprietary and confidential information from competitors like Oracle.

The company contends that it has cooperated with the attorney general, providing tens of thousands of documents related to Brnovich's investigation, and that it has made many of the settings easier to navigate.

Google did not respond to a request for comment, or to questions about how location settings interact with WiFi connectivity and the Play Store.


Google to open first physical store in New York this summer – Reuters

Posted: May 22, 2021 at 10:10 am

[Photo: The logo of Google is seen at the high-profile startups and high-tech leaders gathering Viva Tech in Paris, France, May 16, 2019. REUTERS/Charles Platiau/File Photo]

Alphabet Inc's (GOOGL.O) Google said on Thursday it would open its first physical store in New York City this summer, mirroring a retail approach that has helped Apple Inc (AAPL.O) rake in billions of dollars in the last two decades.

The Google store will be located in the city's Chelsea neighborhood, near its New York City campus, which houses over 11,000 employees.

Google, which has set up pop-up stores in the past to promote its products, said it would sell Pixel smartphones, Pixelbooks and Fitbit fitness trackers along with Nest smart home devices at the retail outlet.

Visitors will also be able to get customer service for their devices and pick up their online orders at the store. (https://bit.ly/3wrqXjX)

The announcement signals the internet giant has taken a leaf out of Apple's playbook of operating physical stores and providing in-person services to boost sales.

Apple, which opened its first two retail stores in Virginia in 2001, has 270 stores in the United States and many more around the world that drive its sales and also provide shoppers hands-on customer service.
