Monthly Archives: May 2021

SpaceX Falcon 9 to send 128 glowing baby squids and 5,000 water bears to ISS – Republic World

Posted: May 31, 2021 at 2:34 am

NASA announced on Saturday that it is ready to launch a Falcon 9 rocket on June 3 at 1:29 p.m. ET on SpaceX's 22nd cargo resupply mission, which will carry 5,000 tardigrades (micro-animals dubbed 'water bears'), 128 glow-in-the-dark baby squid, the Butterfly iQ Ultrasound, and new solar panels into space. The microscopic creatures will reach the International Space Station next week, where astronauts will use them to study stress factors that affect humans in space. NASA's resupply mission, carrying scientific research and technology, will launch from the Kennedy Space Center in Florida.

Experiments aboard include studying how water bears tolerate space, whether microgravity affects symbiotic relationships, analyzing the formation of kidney stones, and more, NASA explained in a release.

Tardigrades, just 0.04 inches (1 millimeter) long, are tiny bear-like creatures that tolerate environments more extreme than most life forms can. These microorganisms will therefore assist NASA in further research on biological survival under extreme conditions in space. Scientists will study how different environmental conditions affect tardigrade gene expression both on Earth and in space. On 11 April 2019, the Israeli spacecraft Beresheet, carrying these microbial creatures, crashed into the moon. The life-forms survived the crash, however, because they were stored in a dehydrated "tun" state and could be resuscitated later.

[Baby bobtail squid just hours after hatching. Image credit: NASA/Jamie S. Foster/University of Florida]

"Spaceflight can be a really challenging environment for organisms, including humans, who have evolved to the conditions on Earth," principal investigator Thomas Boothby said in a NASA release. "One of the things we are really keen to do is understand how tardigrades are surviving and reproducing in these environments and whether we can learn anything about the tricks that they are using and adapt them to safeguard astronauts."

[Understanding of Microgravity on Animal-Microbe Interactions (UMAMI) investigation by NASA. Credit: NASA]

"Some of the things that tardigrades can survive include being dried out, being frozen and being heated up past the boiling point of water. They can survive thousands of times as much radiation as we can and they can go for days or weeks with little or no oxygen," Thomas Boothby, assistant professor of molecular biology at the University of Wyoming and principal investigator for the experiment, separately told a news briefing.

NASA's Understanding of Microgravity on Animal-Microbe Interactions (UMAMI) experiment examines the effects of spaceflight on the molecular and chemical interactions between beneficial microbes and their animal hosts. According to NASA, gravity's role in shaping these interactions is not well understood, so scientists aim to study the bobtail squid, Euprymna scolopes, and its beneficial microbes to determine whether spaceflight alters the mutually beneficial relationship between animal hosts and their microbes. This could, in turn, help NASA develop protective measures and mitigations to preserve astronaut health on long-duration space missions.

[A cotton seedling for the TICTOC investigation prepared for flight. Credit: NASA]

Scientists are also testing the Butterfly iQ Ultrasound, a commercial off-the-shelf technology that demonstrates the use of a portable ultrasound with a mobile computing device. To study how such a device performs in microgravity, astronauts aboard the ISS will assess the quality of the ultrasound images, including image acquisition, display, and storage.

"This type of commercial off-the-shelf technology could provide important medical capabilities for future exploration missions beyond low-Earth orbit, where immediate ground support is not available," Kadambari Suri, integration manager for the Butterfly iQ Technology Demonstration, said in NASA's release.

The investigation also examines how effective just-in-time instructions are for autonomous use of the device by the crew. The technology also has potential applications for medical care in remote and isolated settings on Earth, Suri added.

Read the original post:

SpaceX Falcon 9 to send 128 glowing baby squids and 5,000 water bears to ISS - Republic World


SpaceX launched a very special mission exactly one year ago – Digital Trends

Posted: at 2:34 am

It's exactly a year since SpaceX's historic Demo-2 mission, which saw crewed launches and landings return to U.S. soil for the first time since the Space Shuttle program ended in 2011.

The mission also marked the first astronaut use of SpaceX's Crew Dragon spacecraft, which carried NASA's Doug Hurley and Bob Behnken to the International Space Station for a two-month stay. It was also the first time NASA used a commercially built and operated American spacecraft for human spaceflight.

The Crew Dragon spacecraft began the Demo-2 mission atop a SpaceX Falcon 9 rocket at Cape Canaveral, Florida, on May 30, 2020.

Liftoff! pic.twitter.com/DRBfdUM7JA

— SpaceX (@SpaceX) May 30, 2020

During their time aboard the ISS, Hurley and Behnken worked on various scientific experiments, while Behnken participated in four spacewalks with fellow American astronaut Chris Cassidy. Both Hurley and Behnken also took part in events with media outlets and students back on Earth, answering questions about their trip up on the Crew Dragon, as well as about life on the space station.

After 64 days aboard the orbiting outpost, attention switched to the return journey, with the Crew Dragon about to embark on its first-ever crewed re-entry into Earth's atmosphere, a procedure that puts huge stresses and strains on a space capsule. Seated inside Crew Dragon, Hurley and Behnken began their trip home on August 1, 2020, splashing down in the Gulf of Mexico the following day. A short time after the astronauts' safe return, SpaceX CEO Elon Musk hailed the mission as a success, saying it marked a new age of space exploration that will see commercially built technology used in future crewed voyages to the moon, Mars, and even beyond.

Speaking later about the ride home aboard the Crew Dragon, Behnken said the spacecraft really came alive and sounded like an animal as the vehicle felt the full force of re-entering Earth's atmosphere.

"As we descended through the atmosphere, the thrusters were firing almost continuously. It doesn't sound like a machine, it sounds like an animal coming through the atmosphere, with all the puffs that are happening from the thrusters and the atmosphere," Behnken said.

Enjoy SpaceX's historic Demo-2 mission all over again with this comprehensive collection of images that tells its story from launch to splashdown.

Since last year's Demo-2 mission, SpaceX has used its Crew Dragon capsule two more times, sending four astronauts to the space station on each of the flights. The Crew-1 astronauts have already returned, while the Crew-2 astronauts are currently aboard the ISS after arriving there in April.

Original post:

SpaceX launched a very special mission exactly one year ago - Digital Trends


SpaceX Rival OneWeb Plans Next-Gen Constellation Thats Better Than Starlink – Observer

Posted: at 2:34 am

With more than 1,000 Starlink satellites beaming internet signals from the sky, SpaceX is leading the race in constellation-based broadband service. But its ambitious launch plan makes space environmentalists nervous: SpaceX has applied for regulatory permission to deploy 42,000 Starlink satellites over the next few years in low Earth orbit, an area already increasingly crowded with man-made objects and debris. That's perhaps why Starlink's main competitor, U.K.-based OneWeb, despite having launched fewer than 200 satellites, is looking to develop a more efficient version of the emerging technology.

A consortium of space firms led by OneWeb has secured $45 million in funding from the British government to launch a beam-hopping satellite next year to test a second-generation network it aims to launch in 2025.

The new satellite, called Joey-Sat, is designed to direct beams to increase capacity in specific areas in response to demand spikes or emergencies. "From helping during a disaster to providing broadband on planes, this amazing technology will show how next-generation 5G connectivity can benefit all of us on Earth," U.K. Science Minister Amanda Solloway said in a statement Monday.

OneWeb is teaming up with antenna maker SatixFy, ground station builder Celestia and space debris removal startup Astroscale. The pilot mission is funded by the U.K. Space Agency through the European Space Agency's Sunrise program.

SatixFy, which receives the largest chunk of the funding ($35 million), will be tasked with building Joey-Sat's beam-hopping payload and user terminals.

In March, SatixFy agreed to build an in-flight connectivity terminal for OneWeb's existing LEO constellation. The company has a similar deal with the Canadian satellite operator Telesat, providing modem chips that will support beam hopping for Telesat's Lightspeed LEO constellation project.

Celestia will build and test ground stations for Joey-Sat that feature a new multi-beam, electronically steered antenna. Astroscale is commissioned to develop technologies that could safely de-orbit these satellites when they're dead so that they won't become free-floating space junk.

"This ambitious project with OneWeb is the next step towards maturing our technologies and refining our U.K. capabilities to develop a full-service Active Debris Removal offering by 2024," Astroscale U.K. managing director John Auburn said in a statement.

OneWeb is partly owned by the U.K. government. The company aims to begin satellite broadband service to areas north of 50 degrees latitude by June, which would cover the U.K., northern Europe, Greenland, Iceland, Canada and Alaska.

Go here to read the rest:

SpaceX Rival OneWeb Plans Next-Gen Constellation Thats Better Than Starlink - Observer


Why Apple and Googles Virus Alert Apps Had Limited Success – The New York Times

Posted: at 2:31 am

Sarah Cavey, a real estate agent in Denver, was thrilled last fall when Colorado introduced an app to warn people of possible coronavirus exposures.

Based on software from Apple and Google, the state's smartphone app uses Bluetooth signals to detect users who come into close contact. If a user later tests positive, the person can anonymously notify other app users whom the person may have crossed paths with in restaurants, on trains or elsewhere.

Ms. Cavey immediately downloaded the app. But after testing positive for the virus in February, she was unable to get the special verification code she needed from the state to warn others, she said, even after calling Colorado's health department three times.

"They advertise this app to make people feel good," Ms. Cavey said, adding that she had since deleted the app, called CO Exposure Notifications, in frustration. "But it's not really doing anything."

The Colorado health department said it had improved its process and now automatically issues the verification codes to every person in the state who tests positive.

When Apple and Google announced last year that they were working together to create a smartphone-based system to help stem the virus, their collaboration seemed like a game changer. Human contact tracers were struggling to keep up with spiking virus caseloads, and the trillion-dollar rival companies, whose systems run 99 percent of the world's smartphones, had the potential to quickly and automatically alert far more people.

Soon Austria, Switzerland and other nations introduced virus apps based on the Apple-Google software, as did some two dozen American states, including Alabama and Virginia. To date, the apps have been downloaded more than 90 million times, according to an analysis by Sensor Tower, an app research firm.

But some researchers say the companies' product and policy choices limited the system's usefulness, raising questions about the power of Big Tech to set global standards for public health tools.

Computer scientists have reported accuracy problems with the Bluetooth technology used to detect proximity between smartphones. Some users have complained of failed notifications. And there is little rigorous research to date on whether the apps' potential to accurately alert people of virus exposures outweighs potential drawbacks like falsely warning unexposed people, over-testing or failing to detect users exposed to the virus.

"It is still an open question whether or not these apps are assisting in real contact tracing, are simply a distraction, or whether they might even cause problems," Stephen Farrell and Doug Leith, computer science researchers at Trinity College in Dublin, wrote in an April report on Ireland's virus alert app.

In the United States, some public health officials and researchers said the apps had demonstrated modest but important benefits. In Colorado, more than 28,000 people have used the technology to notify contacts of possible virus exposures. In California, which introduced a virus-tracking app called CA Notify in December, about 65,000 people have used the system to alert other app users, the state said.

"Exposure notification technology has shown success," said Dr. Christopher Longhurst, the chief information officer of UC San Diego Health, which manages California's app. "Whether it's hundreds of lives saved or dozens or a handful, if we save lives, that's a big deal."

In a joint statement, Apple and Google said: "We're proud to collaborate with public health authorities and provide a resource, which many millions of people around the world have enabled, that has helped protect public health."

Based in part on ideas developed by Singapore's government and by academics, Apple and Google's system incorporated privacy protections that gave health agencies an alternative to more invasive apps. Unlike virus-tracing apps that continuously track users' whereabouts, the Apple and Google software relies on Bluetooth signals, which can estimate the distance between smartphones without needing to know people's locations. And it uses rotating ID codes, not real names, to log app users who come into close contact for 15 minutes or more.
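The mechanics described above can be sketched in a few lines of Python. This is a heavily simplified illustration, not the actual protocol: the real Apple-Google system derives its rotating identifiers cryptographically from daily keys and weights contacts by signal strength, and every name here (`Device`, `rotate_id`, and so on) is invented for the example.

```python
import secrets
from collections import defaultdict

ROTATION_MINUTES = 15        # the real system rotates identifiers every few minutes
EXPOSURE_THRESHOLD_MIN = 15  # 15 minutes or more of close contact triggers an alert

class Device:
    """Toy model of one phone running an exposure notification app."""

    def __init__(self):
        self.own_ids = []              # random IDs this phone has broadcast
        self.heard = defaultdict(int)  # remote ID -> minutes of contact observed

    def rotate_id(self):
        """Broadcast a fresh random identifier; no name or location attached."""
        rpi = secrets.token_hex(16)
        self.own_ids.append(rpi)
        return rpi

    def observe(self, remote_id, minutes):
        """Log time spent near another phone's current broadcast ID."""
        self.heard[remote_id] += minutes

    def published_ids(self):
        """On a verified positive test, anonymously publish recent IDs."""
        return list(self.own_ids)

    def check_exposure(self, positive_ids):
        """Locally sum contact time across an infected phone's rotating IDs."""
        positives = set(positive_ids)
        total = sum(m for rid, m in self.heard.items() if rid in positives)
        return total >= EXPOSURE_THRESHOLD_MIN

# Two phones near each other on a train for two 15-minute windows.
alice, bob = Device(), Device()
for _ in range(2):
    bob.observe(alice.rotate_id(), ROTATION_MINUTES)

# Alice tests positive and publishes her IDs; Bob's phone matches them locally,
# so no server ever learns who met whom.
print(bob.check_exposure(alice.published_ids()))  # True: 30 minutes of contact
```

Note how the privacy property falls out of the design: the published list contains only random tokens, and matching happens on each user's own device.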

Some health agencies predicted last year that the tech would be able to notify users of virus exposures faster than human contact tracers. Others said they hoped the apps could warn commuters who sat next to an infected stranger on a bus, train or plane: at-risk people whom contact tracers would not typically be able to identify.

"Everyone who uses the app is helping to keep the virus under control," Chancellor Angela Merkel of Germany said last year in a video promoting the country's alert system, called Corona-Warn-App.

But the apps never received the large-scale efficacy testing typically done before governments introduce public health interventions like vaccines. And the software's privacy features, which prevent government agencies from identifying app users, have made it difficult for researchers to determine whether the notifications helped hinder virus transmission, said Michael T. Osterholm, the director of the Center for Infectious Disease Research and Policy at the University of Minnesota.

"The apps played virtually no role at all in our being able to investigate outbreaks that occurred here," Dr. Osterholm said.

Some limitations emerged even before the apps were released. For one thing, some researchers note, exposure notification software inherently excludes certain vulnerable populations, such as elderly people who cannot afford smartphones. For another thing, they say, the apps may send out false alarms because the system is not set up to incorporate mitigation factors like whether users are vaccinated, wearing masks or sitting outside.

Proximity detection in virus alert apps can also be inconsistent. Last year, a study of Google's system for Android phones, conducted on a light-rail tram in Dublin, reported that the metal walls, flooring and ceilings distorted Bluetooth signal strength to such a degree that the chance of accurate proximity detection was similar to that of triggering notifications by randomly selecting passengers.
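To see why a metal tram defeats this approach, consider how apps turn signal strength into distance in the first place. A common textbook method, shown here purely as an illustration with made-up calibration values, is the log-distance path-loss model: received power falls off predictably with distance only if the environment's path-loss exponent is known, and reflective metal surfaces change that exponent.

```python
import math

def estimate_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance in meters from received signal strength (RSSI)
    using the log-distance path-loss model:
        RSSI = TxPower - 10 * n * log10(d)
    where TxPower is the calibrated reading at 1 meter and n is the
    path-loss exponent (about 2 in free space)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# With free-space assumptions (n = 2), a -69 dBm reading suggests ~3.2 m.
print(round(estimate_distance(-69), 1))

# The same -69 dBm reading in an environment where reflections make the
# effective exponent 1.5 implies ~4.6 m: the identical measurement maps
# to a very different distance, which is the tram study's core problem.
print(round(estimate_distance(-69, path_loss_exponent=1.5), 1))
```

Since the phone cannot know the true exponent of its surroundings, two people two meters apart and two people six meters apart can produce indistinguishable readings.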

Such glitches have irked early adopters like Kimbley Craig, the mayor of Salinas, Calif. Last December, when virus rates there were spiking, she said, she downloaded the state's exposure notification app on her Android phone and soon after tested positive for Covid-19. But after she entered the verification code, she said, the system failed to send an alert to her partner, whom she lives with and who had also downloaded the app.

"If it doesn't pick up a person in the same household, I don't know what to tell you," Mayor Craig said.

In a statement, Steph Hannon, Google's senior director of product management for exposure notifications, said that there were known challenges with using Bluetooth technology to approximate the precise distance between devices and that the company was continuously working to improve accuracy.

The companies' policies have also influenced usage trends. In certain U.S. states, for instance, iPhone users can activate the exposure notifications with one click, simply by turning on a feature in their settings, but Android users must download a separate app. As a result, about 9.6 million iPhone users in California had turned on the notifications as of May 10, the state said, far outstripping the 900,000 app downloads on Android phones.

Google said it had built its system for states to work on the widest range of devices and be deployed as quickly as possible.

Some public health experts acknowledged that the exposure alert system was an experiment in which they, and the tech giants, were learning and incorporating improvements as they went along.

One issue they discovered early on: To hinder false alarms, states verify positive test results before a person can send out exposure notifications. But local labs can sometimes take days to send test results to health agencies, limiting the ability of app users to quickly alert others.

In Alabama, for instance, the state's GuideSafe virus alert app has been downloaded about 250,000 times, according to Sensor Tower. But state health officials said they had been able to confirm the positive test results of only 1,300 app users. That is a much lower number than health officials would have expected, they said, given that more than 10 percent of Alabamians have tested positive for the coronavirus.

"The app would be a lot more efficient if those processes were less manual and more automated," said Dr. Scott Harris, who oversees the Alabama Department of Public Health.

Colorado, which automatically issues the verification codes to people who test positive, has reported higher usage rates. And in California, UC San Diego Health has set up a dedicated help line that app users can call if they did not receive their verification codes.

Dr. Longhurst, the medical centers chief information officer, said the California app had proved useful as part of a larger statewide public health push that also involved mask-wearing and virus testing.

"It's not a panacea," he said. "But it can be an effective part of a pandemic response."

Read this article:

Why Apple and Googles Virus Alert Apps Had Limited Success - The New York Times


Google isnt ready to turn search into a conversation – The Verge

Posted: at 2:31 am

The future of search is a conversation, at least according to Google.

It's a pitch the company has been making for years, and it was the centerpiece of last week's I/O developer conference. There, the company demoed two groundbreaking AI systems, LaMDA and MUM, that it hopes, one day, to integrate into all its products. To show off its potential, Google had LaMDA speak as the dwarf planet Pluto, answering questions about the celestial body's environment and its flyby from the New Horizons probe.

As this tech is adopted, users will be able to talk to Google: using natural language to retrieve information from the web or their personal archives of messages, calendar appointments, photos, and more.

This is more than just marketing for Google. The company has evidently been contemplating what would be a major shift to its core product for years. A recent research paper from a quartet of Google engineers, titled "Rethinking Search," asks exactly this: is it time to replace classical search engines, which provide information by ranking webpages, with AI language models that deliver those answers directly instead?

There are two questions to ask here. The first: can it be done? After years of slow but definite progress, are computers really ready to understand all the nuances of human speech? And the second: should it be done? What happens to Google if the company leaves classical search behind? Appropriately enough, neither question has a simple answer.

There's no doubt that Google has been pushing a vision of speech-driven search for a long time now. It debuted Google Voice Search in 2011, then upgraded it to Google Now in 2012; launched Assistant in 2016; and in numerous I/Os since has foregrounded speech-driven, ambient computing, often with demos of seamless home life orchestrated by Google.

Despite clear advances, I'd argue that the actual utility of this technology falls far short of the demos. Check out the introduction of Google Home in 2016, for example, where Google promises that the device will soon let users control things beyond the home, like booking a car, ordering dinner, or sending flowers to mom, and much, much more. Some of these things are now technically feasible, but I don't think they're common: speech has not proven to be the flexible and faultless interface of our dreams.

Everyone will have different experiences, of course, but I find that I only use my voice for very limited tasks. I dictate emails on my computer, set timers on my phone, and play music on my smart speaker. None of these constitute a conversation. They are simple commands, and experience has taught me that if I try anything more complicated, words will fail. Sometimes this is due to not being heard correctly (Siri is atrocious on that score), but often it just makes more sense to tap or type my query into a screen.

Watching this year's I/O demos, I was reminded of the hype surrounding self-driving cars, a technology that has so far failed to deliver on its biggest claims (remember Elon Musk promising that a self-driving car would take a cross-country trip in 2018? It hasn't happened yet). There are striking parallels between the fields of autonomous driving and speech tech. Both have seen major improvements in recent years thanks to the arrival of new machine learning techniques coupled with abundant data and cheap computation. But both also struggle with the complexity of the real world.

In the case of self-driving cars, we've created vehicles that don't perform reliably outside of controlled settings. In good weather, with clear road markings, and on wide streets, self-driving cars work well. But steer them into the real world, with its missing signs, sleet and snow, and unpredictable drivers, and they are clearly far from fully autonomous.

It's not hard to see the similarity with speech. The technology can handle simple, direct commands that require the recognition of only a small number of verbs and nouns (think "play music," "check the weather" and so on) as well as a few basic follow-ups, but throw these systems into the deep waters of conversation and they flounder. As Google's CEO Sundar Pichai commented at I/O last week: "Language is endlessly complex. We use it to tell stories, crack jokes, and share ideas. [...] The richness and flexibility of language make it one of humanity's greatest tools and one of computer science's greatest challenges."

However, there are reasons to think things are different now (for speech, anyway). As Google noted at I/O, it's had tremendous success with a new machine learning architecture known as the Transformer, a model that now underpins the world's most powerful natural language processing (NLP) systems, including OpenAI's GPT-3 and Google's BERT. (If you're looking for an accessible explanation of the underlying tech and why it's so good at parsing language, I highly recommend this blog post from Google engineer Dale Markowitz.)
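The operation at the heart of the Transformer architecture is compact enough to sketch. Below is a minimal, illustrative NumPy version of scaled dot-product self-attention, the mechanism that lets each word in a sentence weigh every other word when building its representation; real systems like BERT and GPT-3 stack many such layers with learned projection matrices, which are omitted here.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: each position attends to every other
    position, weighted by the similarity of its query to their keys."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarities, scaled by sqrt(d_k)
    # Softmax over positions turns similarities into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # each output is a weighted mix of values

# Toy "sentence" of 3 tokens, each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))

# Self-attention: queries, keys and values all come from the same tokens.
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (3, 4): one context-mixed vector per token
```

The scaling property mentioned in the surrounding text comes from the fact that this computation is almost entirely matrix multiplication, which parallelizes well on modern accelerators.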

The arrival of Transformers has created a truly incredible, genuinely awe-inspiring flowering of AI language capabilities. As has been demonstrated with GPT-3, AI can now generate a seemingly endless variety of text, from poetry to plays, creative fiction to code, and much more, always with surprising ingenuity and verve. These systems also deliver state-of-the-art results in various speech and linguistic tests and, what's better, they scale incredibly well: pump in more computational power and you get reliable improvements. The supremacy of this paradigm is sometimes known in AI as "the bitter lesson," and it is very good news for companies like Google. After all, they've got plenty of compute, and that means there's lots of road ahead to improve these systems.

Google channeled this excitement at I/O. During a demo of LaMDA, which has been trained specifically on conversational dialogue, the AI model pretended first to be Pluto, then a paper airplane, answering questions with imagination, fluency, and (mostly) factual accuracy. "Have you ever had any visitors?" a user asked LaMDA-as-Pluto. The AI responded: "Yes I have had some. The most notable was New Horizons, the spacecraft that visited me."

A demo of MUM, a multimodal model that understands not only text but also images and video, had a similar focus on conversation. When the model was asked, "I've hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?" it was smart enough to know that the questioner is not only looking to compare mountains, but that preparation means finding weather-appropriate gear and relevant terrain training. If this sort of subtlety can transfer into a commercial product (and that's obviously a huge, skyscraper-sized "if"), then it would be a genuine step forward for speech computing.

That, though, brings us to the next big question: even if Google can turn search into a conversation, should it? I won't pretend to have a definitive answer to this, but it's not hard to see big problems ahead if Google goes down this route.

First are the technical problems. The biggest is that it's impossible for Google (or any company) to reliably validate the answers produced by the sort of language AI the company is currently demoing. There's no way of knowing exactly what these sorts of models have learned or what the source is for any answer they provide. Their training data usually consists of sizable chunks of the internet and, as you'd expect, this includes both reliable data and garbage misinformation. Any response they give could be pulled from anywhere online. This can also lead them to produce output that reflects the sexist, racist, and biased notions embedded in parts of their training data. And these are criticisms that Google itself has seemingly been unwilling to reckon with.

Similarly, although these systems have broad capabilities and are able to speak on a wide array of topics, their knowledge is ultimately shallow. As Google's researchers put it in their paper "Rethinking Search," these systems learn assertions like "the sky is blue," but not associations or causal relationships. That means they can easily produce bad information based on their own misunderstanding of how the world works.

Kevin Lacker, a programmer and former Google search quality engineer, illustrated these sorts of errors in GPT-3 in an informative blog post, noting how you can stump the program with common-sense questions like "Which is heavier, a toaster or a pencil?" (GPT-3 says: "A pencil") and "How many eyes does my foot have?" (answer: "Your foot has two eyes").

To quote Google's engineers again from "Rethinking Search": these systems "do not have a true understanding of the world, they are prone to hallucinating, and crucially they are incapable of justifying their utterances by referring to supporting documents in the corpus they were trained over."

These issues are amplified by the sort of interface Google is envisioning. Although it's possible to overcome difficulties with things like sourcing (you can train a model to provide citations, for example, noting the source of each fact it gives), Google imagines every answer being delivered ex cathedra, as if spoken by Google itself. This potentially creates a burden of trust that doesn't exist with current search engines, where it's up to the user to assess the credibility of each source and the context of the information they're shown.

The pitfalls of removing this context are obvious when we look at Google's featured snippets and knowledge panels, the cards that Google shows at the top of the Google.com search results page in response to specific queries. These panels highlight answers as if they're authoritative, but the problem is they're often not, an issue that former search engine blogger (and now Google employee) Danny Sullivan dubbed the "one true answer" problem.

These snippets have made headlines when users discover particularly egregious errors. One example from 2017 involved asking Google "Is Obama planning martial law?" and receiving the answer, cited from a conspiracy news site, that, yes, of course he is. (If he was, it didn't happen.)

In the demos of LaMDA and MUM that Google showed at I/O this year, it seems the company is still leaning toward this one-true-answer format. You ask and the machine answers. In the MUM demo, Google noted that users will also be given pointers to go deeper on topics, but it's clear that the interface the company dreams of is a direct back-and-forth with Google itself.

This will work for some queries, certainly; for simple demands that are the search equivalent of asking Siri to set a timer on my phone (e.g. asking when Madonna was born, who sang "Lucky Star," and so on). But for complex problems, like those Google demoed at I/O with MUM, I think they'll fall short. Tasks like planning holidays, researching medical problems, shopping for big-ticket items, looking for DIY advice, or digging into a favorite hobby all require personal judgement rather than computer summary.

The question, then, is: will Google be able to resist the lure of offering one true answer? Tech watchers have noted for a while that the company's search products have become more Google-centric over time. The company increasingly buries results under ads that are both external (pointing to third-party companies) and internal (directing users to Google services). I think the talk-to-Google paradigm fits this trend. The underlying motivation is the same: it's about removing intermediaries and serving users directly, presumably because Google believes it's best positioned to do so.

In a way, this is the fulfillment of Google's corporate mission to organise the world's information and make it universally accessible and useful. But this approach could also undermine what makes the company's product such a success in the first place. Google isn't useful because it tells you what you need to know; it's useful because it helps you find this information for yourself. Google is the index, not the encyclopedia, and it shouldn't sacrifice search for results.

Read more:

Google isn't ready to turn search into a conversation - The Verge

Posted in Google | Comments Off on Google isn't ready to turn search into a conversation – The Verge

Virus alert apps powered by Apple and Google have had limited success. – The New York Times

Posted: at 2:31 am

When Apple and Google collaborated last year on a smartphone-based system to track the spread of the coronavirus, the news was seen as a "game changer." The software uses Bluetooth signals to detect app users who come into close contact. If a user later tests positive, they can anonymously notify other app users they may have crossed paths with in restaurants, on trains or elsewhere.
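The anonymous matching the article describes can be illustrated with a toy sketch. This is not the real Exposure Notification cryptography (which uses AES-based key derivation defined in the Apple/Google specification); it is a simplified model, with hypothetical function names, of the core idea: phones broadcast rotating identifiers derived from a secret key, and when an infected user's keys are published, other phones re-derive the identifiers locally and check for overlap with what they heard.

```python
import hashlib
import secrets

def new_daily_key() -> bytes:
    """A device's secret daily key (the real system rotates these)."""
    return secrets.token_bytes(16)

def rolling_ids(daily_key: bytes, count: int = 4) -> list:
    """Derive short-lived broadcast identifiers from the daily key."""
    return [hashlib.sha256(daily_key + bytes([i])).digest()[:16]
            for i in range(count)]

def exposure_check(heard_ids: set, published_keys: list) -> bool:
    """After a positive test, daily keys are published; other phones
    re-derive the identifiers locally and look for an intersection,
    so no one learns who was near whom."""
    for key in published_keys:
        if heard_ids.intersection(rolling_ids(key)):
            return True
    return False

# Example: phone B heard one of phone A's identifiers; A later tests positive.
key_a = new_daily_key()
heard_by_b = {rolling_ids(key_a)[2]}
print(exposure_check(heard_by_b, [key_a]))        # True: overlap found
print(exposure_check(heard_by_b, [new_daily_key()]))  # False: no overlap
```

The key design property, which the sketch preserves, is that matching happens on-device against published keys, rather than by uploading contact lists to a server.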

Soon countries around the world and some two dozen American states introduced virus apps based on the Apple-Google software. To date, the apps have been downloaded more than 90 million times, according to an analysis by Sensor Tower, an app research firm. Public health officials say the apps have provided modest but important benefits.

But Natasha Singer of The New York Times reports that some researchers say the two companies' product and policy choices have limited the system's usefulness, raising questions about the power of Big Tech to set global standards for public health tools.

Computer scientists have reported accuracy problems with the Bluetooth technology. Some app users have complained of failed notifications, and there has been little rigorous research on whether the apps' potential to accurately alert people of virus exposures outweighs potential drawbacks like falsely warning unexposed people or failing to detect users exposed to the virus.

Read more here:

Virus alert apps powered by Apple and Google have had limited success. - The New York Times


Google now lets you password-protect the page that shows all your searches – The Verge

Posted: at 2:31 am

Google has added a way to put a password on your Web & App Activity page, which shows all your activity from across Google services, including your searches, YouTube watch history, and Google Assistant queries (via Android Police). Without the verification, anyone who picks up a device you're logged into could see that activity.

To activate the verification, go to activity.google.com and click the "Manage My Activity verification" link. From there, select the "Require Extra Verification" option, save, and enter your password to confirm that you're the one trying to make the change.

If you don't have the verification turned on, visiting activity.google.com will show a stream of your Google activity from across your devices without asking for a password.

Turning on verification, however, will require whoever's trying to see the information to click the "Verify" button and enter the Google account password before it'll show any history. For those who share a computer, or who sometimes let others who aren't exactly trustworthy use their device, this could be a very useful toggle.

While you're on the Web & App Activity page, you can also take a look at what activity Google is saving and whether it's being auto-deleted. Then you can decide if you're happy with those settings. If not, this is the page to change them.

At Google's I/O keynote last week, the company talked a lot about privacy with its announcements of Android's new Private Compute Core, a locked photos folder, and the ability to quickly delete your past 15 minutes of browsing in Chrome.

More here:

Google now lets you password-protect the page that shows all your searches - The Verge


Google Photos to end free unlimited storage from tomorrow: Plans, how to check space and more – The Indian Express

Posted: at 2:31 am

Google Photos will officially end its unlimited free storage policy for pictures at "High quality" and "Express quality" starting tomorrow, June 1. The policy change was announced in November last year. If you've relied only on Google Photos to back up all your smartphone pictures, you will soon need to start worrying about the storage space on your account.

The policy change also means Google wants more consumers to pay up for the cloud storage service. Here's everything to keep in mind as Google changes its policy on cloud storage for Photos.

Google offers 15GB of free storage space, divided across Gmail, Google Drive and Photos. Under the earlier policy, photos at High or Express quality, which are both compressed formats, did not count towards the free storage. This meant one could upload photos for free without worrying about running out of storage.

From June 1, these photos will count towards the 15GB free quota. If you continuously upload photos to your Google account, you may eventually need to buy extra storage space.
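The quota arithmetic described above can be sketched in a few lines. This is an illustrative model (the figures in the example are hypothetical): only Gmail, Drive, and photos uploaded after the June 1 change draw from the shared 15GB pool, since earlier photos remain exempt.

```python
FREE_QUOTA_GB = 15.0

def remaining_free_gb(gmail_gb: float, drive_gb: float,
                      photos_after_june1_gb: float) -> float:
    """Free space left in the shared 15GB pool. Photos uploaded
    before June 1, 2021 are grandfathered and excluded."""
    used = gmail_gb + drive_gb + photos_after_june1_gb
    return max(FREE_QUOTA_GB - used, 0.0)

# Example: 4GB of mail, 3GB in Drive, 2GB of new photos -> 6GB left.
print(remaining_free_gb(4.0, 3.0, 2.0))  # 6.0
```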

Google One is the paid subscription that adds 100GB or more of storage to your account, depending on the plan you choose. The basic plan offers 100GB at Rs 130 per month or Rs 1,300 per year.

The 200GB plan costs Rs 210 per month. The other plans are 2TB at Rs 650 per month or Rs 6,500 per year, 10TB at Rs 3,250 per month, 20TB at Rs 6,500 per month, and 30TB at Rs 9,750 per month.
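One way to compare the plans listed above is effective price per GB. A small sketch, using the monthly prices from the article (binary TB, i.e. 1TB = 1024GB, is an assumption for the arithmetic):

```python
# Monthly Google One prices in INR, as listed in the article.
PLANS_INR_PER_MONTH = {
    100: 130,      # 100GB
    200: 210,      # 200GB
    2048: 650,     # 2TB
    10240: 3250,   # 10TB
    20480: 6500,   # 20TB
    30720: 9750,   # 30TB
}

def cost_per_gb(gb: int) -> float:
    """Effective monthly price per GB for a given plan size."""
    return PLANS_INR_PER_MONTH[gb] / gb

for gb in sorted(PLANS_INR_PER_MONTH):
    print(f"{gb}GB: Rs {PLANS_INR_PER_MONTH[gb]}/month "
          f"-> Rs {cost_per_gb(gb):.2f} per GB")
```

As with most tiered storage pricing, the per-GB cost drops sharply at the larger tiers: the 2TB plan works out to roughly a quarter of a rupee per GB versus Rs 1.30 on the basic plan.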

Google says earlier photos are not impacted by the policy change. So even if you were not a paying customer for Google One, the earlier photos will not count towards your storage, and you don't need to worry about transferring or deleting these in order to get extra space. But all photos uploaded from June 1 will be counted towards your storage space.

Just go to your Google account and open the storage management tool; the link can be found at one.google.com/storage/management. Google will show what extra files can be deleted, including from Photos, Gmail and Drive.

Typically, Pixel users get free unlimited storage on Google Photos, but the new policy brings some changes for them as well.

Those with a Pixel 3a and higher (up to Pixel 5) can continue to upload photos in "High quality" for free without worrying about storage space. But photos in "Original quality" will count towards the free storage space.

Those with an older Pixel 3 continue to get unlimited free storage at Original quality for all photos and videos uploaded till January 31, 2022. Photos and videos uploaded on or before that date will remain free at Original quality.

After January 31, 2022, new photos and videos will be uploaded at High quality for free. If you upload new photos and videos at Original quality, they will count against the free storage quota.

Pixel 2 users were given free storage at Original quality for all photos and videos uploaded till January 16, 2021. Photos and videos uploaded on or before that date will remain free at Original quality. After January 16, 2021, new photos and videos will be uploaded at High quality for free. If you upload new photos and videos at Original quality, they will count against your storage quota.

Those with the original Pixel (2016) get unlimited free storage at Original quality. They won't be able to upload in High quality, according to the support page.
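The per-device rules above can be condensed into a single decision function. This is a sketch of the article's rules, not Google's implementation; the device labels are hypothetical identifiers, while the cutoff dates come from the article.

```python
from datetime import date

# Cutoff dates for free "Original quality" uploads, per the article.
ORIGINAL_QUALITY_CUTOFF = {
    "pixel_2": date(2021, 1, 16),
    "pixel_3": date(2022, 1, 31),
}

def upload_counts_against_quota(device: str, quality: str,
                                upload_date: date) -> bool:
    """True if an upload draws from the 15GB free quota.
    device: 'pixel_1', 'pixel_2', 'pixel_3', 'pixel_3a_to_5', or 'other';
    quality: 'high' or 'original'."""
    if device == "pixel_1":
        return False                        # unlimited Original quality forever
    if device == "pixel_3a_to_5":
        return quality == "original"        # High quality stays free
    cutoff = ORIGINAL_QUALITY_CUTOFF.get(device)
    if cutoff is not None:
        if upload_date <= cutoff:
            return False                    # grandfathered at Original quality
        return quality == "original"        # after cutoff, only High is free
    return upload_date >= date(2021, 6, 1)  # everyone else: counts from June 1
```

For example, a Pixel 3 upload in December 2021 is still free at any quality, while the same upload in February 2022 is free only at High quality.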

Here is the original post:

Google Photos to end free unlimited storage from tomorrow: Plans, how to check space and more - The Indian Express


Google Photos finally stops pretending its compressed photos are high quality – The Verge

Posted: at 2:31 am

Are you planning to stick with Google Photos when its free unlimited storage disappears on June 1st? If you're anything like me, you're probably still struggling to figure out whether you can afford to procrastinate on that decision a tad longer. Today, Google has made that reckoning a little bit easier.

First off, the company's finally telling it like it is: Google will no longer pretend its compressed, lower-quality photos and videos are "High quality," something that would have saved me a lengthy explanation just last week! (After June 1st, existing Google Pixel phone owners still get unlimited "High quality" photos, but if you're on, say, a Samsung or iPhone instead, it's not like there was ever a "Normal quality" photo that doesn't count against the new 15GB limit.)

Soon, "Storage saver" will be the name for Google's normal-quality photos, formerly known as "High quality." You'll be able to upload at either the Storage saver or Original quality tiers, both of which will count against your storage quota, with Original quality using more data.

What if you've already got 10GB worth of Gmail and 2GB of documents stored in Google Drive, like yours truly, leaving just 3GB left for photos before you'll need to pay? First off, know that your existing "High quality" photos from before June 1st don't count against the quota. But also, Google has a new tool to help you find and delete blurry photos and large videos to free up even more space.

You can find it in the "Manage storage" section of the app. It'll also help you find and delete screenshots, though that's been a feature of Google Photos for a while now. Google also promises to notify users who are nearing their quota, and you can get a storage estimate if you're logged into your account.

Still confused, perhaps? I wouldn't blame you; it took a while for me to get it all straight in my head, particularly considering that Google offers different levels of grandfathered free storage depending on which Pixel phone you own.

Future Google phones won't have these perks: existing Pixels "will be the last to come with free unlimited High quality uploads," Google confirmed to The Verge in November.

Read the original:

Google Photos finally stops pretending its compressed photos are high quality - The Verge


Google's new tool will identify skin conditions: what will people do with that information? – The Verge

Posted: at 2:31 am

Google announced last Tuesday that it has developed a new artificial intelligence tool to help people identify skin conditions. Like any other symptom-checking tool, it'll face questions over how accurately it can perform that task. But experts say it should also be scrutinized for how it influences people's behavior: does it make them more likely to go to the doctor? Less likely?

These types of symptom-checking tools, which usually clarify that they can't diagnose health conditions but can give people a read on what might be wrong, have proliferated over the past decade. Some have millions of users and are valued at tens of millions of dollars. Dozens popped up over the past year to help people check to see if they might have COVID-19 (including one by Google).

Despite their growth, there's little information available about how symptom-checkers change the way people manage their health. It's not the type of analysis companies usually do before launching a product, says Jac Dinnes, a senior researcher at the University of Birmingham's Institute of Applied Health Research who has evaluated smartphone apps for skin conditions. They focus on the answers the symptom-checkers give, not the way people respond to those answers.

"Without actually evaluating the tools as they're intended to be used, you don't know what the impact is going to be," she says.

Googles dermatology tool is designed to let people upload three photos of a skin issue and answer questions about symptoms. Then, it offers a list of possible conditions that the artificial intelligence-driven system thinks are the best matches. It shows textbook images of the condition and prompts users to then search the condition in Google. Users have the option to save the case to review it later or delete it entirely. The company aims to launch a pilot version later this year.

It also may introduce ways for people to continue research on a potential problem outside the tool itself, a Google spokesperson told The Verge.

When developing artificial intelligence tools like the new Google program, researchers tend to evaluate the accuracy of the machine learning program. They want to know exactly how well it can match an unknown thing, like an image of a strange rash someone uploads, with a known problem. Google hasn't published data on the latest iteration of its dermatology tool, but the company says it includes an accurate match to a skin problem in the top three suggested conditions 84 percent of the time.
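The "84 percent in the top three" figure is an example of a standard metric usually called top-k accuracy. A minimal sketch of how such a number is typically computed, using entirely hypothetical condition names and cases (this is not Google's evaluation data or code):

```python
def top_k_accuracy(predictions: list, truths: list, k: int = 3) -> float:
    """Fraction of cases where the true label appears among the
    model's top-k suggestions."""
    hits = sum(truth in preds[:k]
               for preds, truth in zip(predictions, truths))
    return hits / len(truths)

# Hypothetical example: 4 cases, the true condition appears in the
# top three suggestions for 3 of them.
preds = [
    ["eczema", "psoriasis", "dermatitis"],
    ["acne", "rosacea", "eczema"],
    ["melanoma", "nevus", "keratosis"],
    ["hives", "eczema", "psoriasis"],
]
truths = ["psoriasis", "rosacea", "keratosis", "ringworm"]
print(top_k_accuracy(preds, truths))  # 0.75
```

Note what the metric does not capture, which is the article's point: it says nothing about what users do after seeing the suggestions.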

There's typically less focus on what users do with that information. This makes it hard to tell if a tool like this could actually meet one of its stated goals: to give people access to information that might take some of the load off dermatologists, who are stretched thin all over the world. "There's no doubt that there's such a huge demand for dermatologists," Dinnes says. "There's a desire to use tools that are perceived as helping the situation, but we don't actually know if they're going to help."

"It's a big gap in our understanding," says Hamish Fraser, an associate professor of medical science at Brown University who studies symptom-checkers. "In addition to the basic problem of whether people can even interpret the systems correctly and use them correctly, there's also this question about whether people will actually respond to anything that is fed back to them from the system."

Filling that gap is key as more and more of these tools come onto the market, Fraser says. There are more and more emerging technologies. Understanding how they could change people's behavior is so important because their role in healthcare will likely grow.

"People are already voting with their feet, in terms of using Google and other search engines to check symptoms and look up diseases," Fraser says. "There's obviously a need there."

Ideally, Fraser says, future studies would ask people using a symptom-checker for permission to follow up and ask what they did next or ask for permission to contact their doctor.

"You would start to very quickly get a sense as to whether a random sample of millions of people using it got something from the system that related to what was actually going on, or what their family doctor said, or whether they went to the emergency department," he says.

One of the few studies to ask some of these questions followed up with around 150,000 people who used a virtual medical chatbot called Buoy Health. Researchers checked how likely people said they were to go to the doctor before using the bot and how likely they were to go after they saw what the bot had to say. Around a third of people said they would seek less urgent care, for instance waiting to see a primary care doctor rather than going to the emergency room. Only 4 percent said they would take more urgent steps than before they used the chatbot. The rest stayed about the same.

It's only one study, and it evaluates a checker for general medical symptoms, like reproductive health issues and gastrointestinal pain. But the findings were, in some ways, counterintuitive: many doctors worry that symptom-checkers lead to overuse of the health system and send people to get unnecessary treatment. This seemed to show the opposite, Fraser says. The findings also showed how important accuracy is: diverting people from treatment could be a big problem if done improperly.

"If you've got something that you're concerned about on your skin, and an app tells you it's low risk or it doesn't think it's a problem, that could have serious consequences if it delays your decision to go and have a medical consultation," Dinnes says.

Still, that type of analysis tends to be uncommon. The company behind an existing app for checking skin symptoms, called Aysa, hasn't yet explicitly surveyed users to find out what steps they took after using the tool. Based on anecdotal feedback, the company thinks many people use the tool as a second opinion to double-check information they got from a doctor, says Art Papier, the chief executive officer of VisualDx, the company behind Aysa. But he doesn't have quantitative data.

"We don't know if they went somewhere else after," he says. "We don't ask them to come back to the app and tell us what the doctor said." Papier says the company is working to build those types of feedback loops into the app.

Google has planned follow-up studies for its dermatology tool, including a partnership with Stanford University to test the tool in a health setting. The company will monitor how well the algorithm performs, Lily Peng, a physician-scientist and product manager for Google, said in an interview with The Verge. The team has not announced any plans to study what people do after they use the tool.

Understanding the way people tend to use the information from symptom-checkers could help ensure the tools are deployed in a way that will actually improve people's experience with the healthcare system. Information on what steps groups of people take after using a checker would also give developers and doctors a more complete picture of the stakes of the tools they're building. People with the resources to see a specialist might be able to follow up on a concerning rash, Fraser says. "If things deteriorate, they'll probably take action," he says.

Others without that access might only have the symptom-checker. "That puts a lot of responsibility on us: people who are particularly vulnerable and less likely to get a formal medical opinion may well be relying most on these tools," he says. "It's especially important that we do our homework and make sure they're safe."

See the original post:

Google's new tool will identify skin conditions: what will people do with that information? - The Verge
