This is the biggest ‘WhatsApp mistake’ you are making on Android phones – Gadgets Now

NEW DELHI: WhatsApp users tend to back up all their chats, relevant or not. Storage is not the issue, since WhatsApp backups don't count toward your Google Drive quota; the bigger concern is that these backups may not be secure. For Android users, WhatsApp chats backed up to Google Drive lose the default end-to-end (E2E) encryption. WhatsApp backups to iCloud for iPhone users, by contrast, are encrypted. So the next time you are about to back up a sensitive chat on WhatsApp on Android, remember that it is probably not a good idea, as the backup will not be encrypted at all. This is one of the biggest weaknesses in the entire WhatsApp E2E environment on Android.

Thankfully, WhatsApp is reportedly working on encrypting Google Drive chat backups. Android users may soon be able to password-protect their WhatsApp chat backups on Google Drive, though there is no official date yet for when this feature will be available. "The feature is in an alpha stage of development, so what we're showing now is very poor but it's enough to understand what's its purpose. Basically the feature allows you to encrypt your backup with a password, so you're sure that nobody (neither WhatsApp nor Google) will be able to see its content," according to WABetaInfo.

Meanwhile, you can disable automatic chat backups to Google Drive or iCloud. Instead of backing up everything, you can export specific chats: if certain chats are important and need to be saved, export them and store them securely somewhere else. Unnecessarily backing up all WhatsApp chats takes up storage and is often of little use. Also, disable the option that automatically adds all WhatsApp media files to your phone's gallery. WhatsApp photos consume your phone's internal storage when they are saved to the gallery, so unless you explicitly want all those silly good-morning photographs appearing there, turn this option off in the settings menu.

Read this article:
This is the biggest 'WhatsApp mistake' you are making on Android phones - Gadgets Now

RIT professor explores the art and science of statistical machine learning – RIT University News Services

Statistical machine learning is at the core of modern-day advances in artificial intelligence, but a Rochester Institute of Technology professor argues that applying it correctly requires equal parts science and art. Professor Ernest Fokou of RIT's School of Mathematical Sciences emphasized the human element of statistical machine learning in his primer on the field that graced the cover of a recent edition of Notices of the American Mathematical Society.

"One of the most important commodities in your life is common sense," said Fokou. "Mathematics is beautiful, but mathematics is your servant. When you sit down and design a model, data can be very stubborn. We design models with assumptions of what the data will show or look like, but the data never looks exactly like what you expect. You may have a nice central tenet, but there's always something that's going to require your human intervention. That's where the art comes in. After you run all these statistical techniques, when it comes down to drawing the final conclusion, you need your common sense."

Statistical machine learning is a field that combines mathematics, probability, statistics, computer science, cognitive neuroscience and psychology to create models that learn from data and make predictions about the world. One of its earliest applications was when the United States Postal Service used it to accurately learn and recognize handwritten letters and digits to autonomously sort letters. Today, we see it applied in a variety of settings, from facial recognition technology on smartphones to self-driving cars.

Researchers have developed many different learning machines and statistical models that can be applied to a given problem, but there is no one-size-fits-all method that works well for all situations. Fokou said selecting the appropriate method requires mathematical and statistical rigor along with practical knowledge. His paper explains the central concepts and approaches, which he hopes will get more people involved in the field and harvesting its potential.

"Statistical machine learning is the main tool behind artificial intelligence," said Fokou. "It's allowing us to construct extensions of the human being so our lives, transportation, agriculture, medicine and education can all be better. Thanks to statistical machine learning, you can understand the processes by which people learn and slowly and steadily help humanity access a higher level."

This year, Fokou has been on sabbatical traveling the world exploring new frontiers in statistical machine learning. Fokou's full article is available on the AMS website.

Here is the original post:
RIT professor explores the art and science of statistical machine learning - RIT University News Services

Machine learning could replace subjective diagnosis of mental health disorders – techAU

AI is taking over almost every industry, and the health industry has some of the biggest benefits to gain. Machine learning, a discipline of AI, is showing promising signs of being able to supersede human capabilities in accurately identifying mental health disorders.

CSIRO has announced the results of a study of 101 participants that used ML to diagnose bipolar disorder or depression. The error rate was between 20% and 30%, and while that isn't yet better than human clinicians and isn't ready for clinical use, it is a promising sign for the future.

The machine-learning system detects patterns in data, a process known as training. As with the use of computer vision in autonomous driving, the results get better the more data you can provide. The system is expected to improve as it is fed more data on how people play the game.

One of the big challenges in psychiatry is misdiagnosis.

It gives us first-hand information about what is happening in the brain, which can be an alternative route of information for diagnosis.

The immediate aim is to build a tool that will help clinicians, but the long-term goal is to replace subjective diagnosis altogether. Between depression and bipolar disorder, there's a significant incidence of misdiagnosis of bipolar people as being depressed, as much as 60%. Around one third of them remain misdiagnosed for more than 10 years.

It is estimated that within five years, computers rather than humans could be making the diagnosis, as AI's ability to understand the complex human brain improves.

The study involved having participants play a simple game where you select between two boxes on screen. One box rewards you more frequently than the other, and the goal is to collect the most points.

Whether you stick with orange, experiment with blue, or just randomly alternate, these decisions paint a picture of how your brain works.
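The task described here is essentially a two-armed bandit game, and the "picture of how your brain works" comes from summarizing play style as numbers a model can learn from. The Python sketch below is a rough illustration of that idea, not CSIRO's actual system: it simulates choices, derives a few behavioral features (switch rate, win-stay and lose-shift tendencies), and fits a logistic regression on invented labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_player(p_explore, n_trials=150, p_reward=(0.7, 0.3)):
    """Simulate one player's choices in a two-box reward game.

    p_explore is a crude stand-in for how often the player abandons the
    box they are currently on; the reward probabilities are illustrative.
    """
    choices, rewards = [], []
    current = 0
    for _ in range(n_trials):
        if rng.random() < p_explore:
            current = 1 - current          # switch boxes
        rewarded = rng.random() < p_reward[current]
        choices.append(current)
        rewards.append(rewarded)
    return np.array(choices), np.array(rewards)

def behavioral_features(choices, rewards):
    """Summarize play style: switch rate, win-stay rate, lose-shift rate."""
    switches = choices[1:] != choices[:-1]
    prev_rewarded = rewards[:-1]
    win_stay = np.mean(~switches[prev_rewarded]) if prev_rewarded.any() else 0.0
    lose_shift = np.mean(switches[~prev_rewarded]) if (~prev_rewarded).any() else 0.0
    return [switches.mean(), win_stay, lose_shift]

# Toy dataset: label 0 = "control-like" play, label 1 = "patient-like" play.
X, y = [], []
for label, p_explore in [(0, 0.1), (1, 0.4)]:
    for _ in range(100):
        X.append(behavioral_features(*simulate_player(p_explore)))
        y.append(label)

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```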

Often the traditional signatures of mental illness are too subtle for humans to notice, but it's the kind of thing machine-learning AI thrives on. Now CSIRO researchers have developed a system they say can peer into the mind with significant accuracy, and could revolutionise mental health diagnosis.

Last year, researchers reported they had found a way of analysing language from Facebook status updates to predict future diagnoses of depression.

More information at CSIRO.

Read the original:
Machine learning could replace subjective diagnosis of mental health disorders - techAU

Next-gen supercomputers are fast-tracking treatments for the coronavirus in a race against time – CNBC

A computer image created by Nexu Science Communication together with Trinity College in Dublin shows a model structurally representative of a betacoronavirus, the type of virus linked to COVID-19.

Source: NEXU Science Communication | Reuters

Research has gone digital, and medical science is no exception. As the novel coronavirus continues to spread, for instance, scientists searching for a treatment have drafted IBM's Summit supercomputer, the world's most powerful high-performance computing facility, according to the Top500 list, to help find promising candidate drugs.

One way of treating an infection could be with a compound that sticks to a certain part of the virus, disarming it. With tens of thousands of processors spanning an area as large as two tennis courts, the Summit facility at Oak Ridge National Laboratory (ORNL) has more computational power than 1 million top-of-the-line laptops. Using that muscle, researchers digitally simulated how 8,000 different molecules would interact with the virus, a Herculean task for your typical personal computer.

"It took us a day or two, whereas it has traditionally taken months on a normal computer," said Jeremy Smith, director of the University of Tennessee/ORNL Center for Molecular Biophysics and principal researcher in the study.

Simulations alone can't prove a treatment will work, but the project was able to identify 77 candidate molecules that other researchers can now test in trials. The fight against the novel coronavirus is just one example of how supercomputers have become an essential part of the process of discovery. The $200 million Summit and similar machines also simulate the birth of the universe, explosions from atomic weapons and a host of events too complicated or too violent to recreate in a lab.
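The pattern behind those 77 hits is a virtual screen: score every molecule in a large library against the viral target, in parallel, and keep only the best binders for follow-up testing. The Python sketch below shows that pipeline shape with a placeholder dock_score function standing in for the molecular simulations Summit actually ran; the molecule IDs, score range and the number kept are illustrative.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def dock_score(molecule_id: int) -> float:
    """Placeholder for an expensive docking/molecular-dynamics run that
    estimates how strongly one candidate binds a viral protein (lower =
    better). The real Summit study used full simulations, not this stub."""
    random.seed(molecule_id)                 # deterministic stand-in
    return random.uniform(-12.0, 0.0)

def screen(candidates, keep=77, workers=8):
    """Score every candidate in parallel and return the best binders."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(dock_score, candidates))
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1])
    return ranked[:keep]

if __name__ == "__main__":
    hits = screen(range(8000))
    print(f"top hit: molecule {hits[0][0]} with score {hits[0][1]:.2f}")
```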

The current generation's formidable power is just a taste of what's to come. Aurora, a $500 million Intel machine currently under installation at Argonne National Laboratory, will herald the long-awaited arrival of "exaflop" facilities capable of a billion billion calculations per second (five times more than Summit) in 2021 with others to follow. China, Japan and the European Union are all expected to switch on similar "exascale" systems in the next five years.

These new machines will enable new discoveries, but only for the select few researchers with the programming know-how required to efficiently marshal their considerable resources. What's more, technological hurdles lead some experts to believe that exascale computing might be the end of the line. For these reasons, scientists are increasingly attempting to harness artificial intelligence to accomplish more research with less computational power.

"We as an industry have become too captive to building systems that execute the benchmark well without necessarily paying attention to how systems are used," says Dave Turek, vice president of technical computing for IBM Cognitive Systems. He likens high-performance computing record-seeking to focusing on building the world's fastest race car instead of highway-ready minivans. "The ability to inform the classic ways of doing HPC with AI becomes really the innovation wave that's coursing through HPC today."

Just getting to the verge of exascale computing has taken a decade of research and collaboration between the Department of Energy and private vendors. "It's been a journey," says Patricia Damkroger, general manager of Intel's high-performance computing division. "Ten years ago, they said it couldn't be done."

While each system has its own unique architecture, Summit, Aurora, and the upcoming Frontier supercomputer all represent variations on a theme: they harness the immense power of graphical processing units (GPUs) alongside traditional central processing units (CPUs). GPUs can carry out more simultaneous operations than a CPU can, so leaning on these workhorses has let Intel and IBM design machines that would have otherwise required untold megawatts of energy.

IBM's Summit supercomputer currently holds the record for the world's fastest supercomputer.

Source: IBM

That computational power lets Summit, which is known as a "pre-exascale" computer because it runs at 0.2 exaflops, simulate one single supernova explosion in about two months, according to Bronson Messer, the acting director of science for the Oak Ridge Leadership Computing Facility. He hopes that machines like Aurora (1 exaflop) and the upcoming Frontier supercomputer (1.5 exaflops) will get that time down to about a week. Damkroger looks forward to medical applications. Where current supercomputers can digitally model a single heart, for instance, exascale machines will be able to simulate how the heart works together with blood vessels, she predicts.

But even as exascale developers take a victory lap, they know that two challenges mean the add-more-GPUs formula is likely approaching a plateau in its scientific usefulness. First, GPUs are strong but dumb, best suited to simple operations such as arithmetic and geometric calculations that they can crowdsource among their many components. Researchers have written simulations to run on flexible CPUs for decades, and shifting to GPUs often requires starting from scratch.

GPUs have thousands of cores for simultaneous computation, but each handles simple instructions.

Source: IBM

"The real issue that we're wrestling with at this point is how do we move our code over" from running on CPUs to running on GPUs, says Richard Loft, a computational scientist at the National Center for Atmospheric Research, home of Top500's 44th ranking supercomputerCheyenne, a CPU-based machine "It's labor intensive, and they're difficult to program."

Second, the more processors a machine has, the harder it is to coordinate the sharing of calculations. For the climate modeling that Loft does, machines with more processors better answer questions like "what is the chance of a once-in-a-millennium deluge," because they can run more identical simulations simultaneously and build up more robust statistics. But they don't ultimately enable the climate models themselves to get much more sophisticated.
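In other words, extra processors mostly buy ensemble size rather than model fidelity. The toy Python sketch below runs many identical (placeholder) simulations in parallel and estimates how often a rare threshold is exceeded; the one_run stand-in, the threshold and the ensemble size are invented for illustration and are not a real climate model.

```python
from multiprocessing import Pool
import random

def one_run(seed: int) -> float:
    """Placeholder for one full climate simulation; returns the peak daily
    rainfall it produced. A real model would integrate physics for weeks
    of compute time instead of drawing random numbers."""
    random.seed(seed)
    return max(random.gauss(100.0, 30.0) for _ in range(365))

def exceedance_probability(threshold=200.0, ensemble_size=1000, workers=8):
    """Run many identical simulations and count how often the threshold
    (a stand-in for a once-in-a-millennium deluge) is exceeded."""
    with Pool(workers) as pool:
        peaks = pool.map(one_run, range(ensemble_size))
    return sum(p > threshold for p in peaks) / ensemble_size

if __name__ == "__main__":
    print(f"estimated exceedance probability: {exceedance_probability():.4f}")
```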

For that, the actual processors have to get faster, a feat that bumps up against what's physically possible. Faster processors need smaller transistors, and current transistors measure about 7 nanometers. Companies might be able to shrink that size, Turek says, but only to a point. "You can't get to zero [nanometers]," he says. "You have to invoke other kinds of approaches."

If supercomputers can't get much more powerful, researchers will have to get smarter about how they use the facilities. Traditional computing is often an exercise in brute forcing a problem, and machine learning techniques may allow researchers to approach complex calculations with more finesse.

Take drug design. A pharmacist considering a dozen ingredients faces countless possible recipes, each varying the amounts of the compounds, which could take a supercomputer years to simulate. An emerging machine learning technique known as Bayesian optimization asks: does the computer really need to check every single option? Rather than systematically sweeping the field, the method helps isolate the most promising drugs by implementing common-sense assumptions. Once it finds one reasonably effective solution, for instance, it might prioritize seeking small improvements with minor tweaks.
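A minimal sketch of that loop, assuming a one-dimensional "recipe" and a placeholder scoring function: fit a Gaussian process to the experiments run so far, pick the next candidate with an expected-improvement rule, and repeat for a small budget instead of sweeping every option. The objective, search grid and parameters below are illustrative, not drawn from the article.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def potency(dose):
    """Placeholder for an expensive simulation scoring one candidate recipe
    (here a single compound's dose); a real objective would be a docking or
    trial simulation, not this analytic stand-in."""
    x = dose[0]
    return -(x - 0.3) ** 2 + 0.05 * np.sin(15 * x)

def expected_improvement(candidates, gp, best_so_far):
    """Score each untried dose by how much improvement we expect over the best."""
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_so_far) / sigma
    return (mu - best_so_far) * norm.cdf(z) + sigma * norm.pdf(z)

# Start with a handful of random experiments, then let the model pick the rest.
X = rng.uniform(0, 1, size=(5, 1))
y = np.array([potency(x) for x in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):                      # 20 evaluations instead of a full sweep
    gp.fit(X, y)
    grid = np.linspace(0, 1, 200).reshape(-1, 1)
    next_x = grid[np.argmax(expected_improvement(grid, gp, y.max()))]
    X = np.vstack([X, next_x])
    y = np.append(y, potency(next_x))

print(f"best dose found: {X[np.argmax(y)][0]:.3f} with score {y.max():.4f}")
```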

In trial-and-error fields like materials science and cosmetics, Turek says that this strategy can reduce the number of simulations needed by 70% to 90%. Recently, for instance, the technique has led to breakthroughs in battery design and the discovery of a new antibiotic.

Fields like climate science and particle physics use brute-force computation in a different way, by starting with simple mathematical laws of nature and calculating the behavior of complex systems. Climate models, for instance, try to predict how air currents conspire with forests, cities, and oceans to determine global temperature.

Mike Pritchard, a climatologist at the University of California, Irvine, hopes to figure out how clouds fit into this picture, but most current climate models are blind to features smaller than a few dozen miles wide. Crunching the numbers for a worldwide layer of clouds, which might be just a couple hundred feet tall, simply requires more mathematical brawn than any supercomputer can deliver.

Unless the computer understands how clouds interact better than we do, that is. Pritchard is one of many climatologists experimenting with training neural networks, a machine learning technique that looks for patterns by trial and error, to mimic cloud behavior. This approach takes a lot of computing power up front to generate realistic clouds for the neural network to imitate. But once the network has learned how to produce plausible cloudlike behavior, it can replace the computationally intensive laws of nature in the global model, at least in theory. "It's a very exciting time," Pritchard says. "It could be totally revolutionary, if it's credible."
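The pattern is "train an emulator, then swap it in": generate training data from the expensive physics once, fit a small network to it, and call the cheap network inside the climate model's time loop. The Python sketch below uses a made-up stand-in for the cloud physics and a small scikit-learn network purely to show the shape of the idea; it is not Pritchard's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

def expensive_cloud_physics(humidity, temperature):
    """Stand-in for a high-resolution cloud-resolving calculation; returns a
    made-up 'cloud fraction'. The real training target would come from a
    fine-scale model, which is what makes the data costly to generate."""
    return 1 / (1 + np.exp(-(10 * humidity - 0.05 * temperature - 4)))

# Step 1 (costly, done once): generate training data from the detailed model.
humidity = rng.uniform(0, 1, 20000)
temperature = rng.uniform(200, 310, 20000)
X = np.column_stack([humidity, temperature])
y = expensive_cloud_physics(humidity, temperature)

# Step 2: train a small neural-network emulator.
emulator = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0),
)
emulator.fit(X, y)

# Step 3: inside the global model's time loop, call the cheap emulator
# instead of the expensive physics routine.
test = np.array([[0.8, 280.0], [0.2, 300.0]])
print("emulator :", emulator.predict(test))
print("reference:", expensive_cloud_physics(test[:, 0], test[:, 1]))
```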

Companies are preparing their machines so researchers like Pritchard can take full advantage of the computational tools they're developing. Turek says IBM is focusing on designing AI-ready machines capable of extreme multitasking and quickly shuttling around huge quantities of information, and the Department of Energy contract for Aurora is Intel's first that specifies a benchmark for certain AI applications, according to Damkroger. Intel is also developing an open-source software toolkit called oneAPI that will make it easier for developers to create programs that run efficiently on a variety of processors, including CPUs and GPUs.

As exascale and machine learning tools become increasingly available, scientists hope they'll be able to move past the computer engineering and focus on making new discoveries. "When we get to exascale that's only going to be half the story," Messer says. "What we actually accomplish at the exascale will be what matters."

Go here to see the original:
Next-gen supercomputers are fast-tracking treatments for the coronavirus in a race against time - CNBC

Decoding the Future Trajectory of Healthcare with AI – ReadWrite

Artificial Intelligence (AI) is getting increasingly sophisticated day by day in its application, with enhanced efficiency and speed at a lower cost. Every single sector has been reaping benefits from AI in recent times. The Healthcare industry is no exception. Here is decoding the future trajectory of healthcare with AI.

The impact of artificial intelligence on the healthcare industry through machine learning (ML) and natural language processing (NLP) is transforming care delivery. Additionally, patients are expected to gain far greater access to their health-related information than before through applications such as smart wearable devices and mobile electronic medical records (EMR).

Personalized healthcare will empower patients to take the wheel of their own well-being, facilitate high-end healthcare, and extend better patient-provider communication to underprivileged areas.

For instance, IBM Watson for Health is helping healthcare organizations apply cognitive technology to unlock vast amounts of health data and power diagnosis.

In addition, Google's DeepMind Health is collaborating with researchers, clinicians, and patients in order to solve real-world healthcare problems. Additionally, the company has combined systems neuroscience with machine learning to develop strong general-purpose learning algorithms within neural networks to mimic the human brain.

Companies are working towards developing AI technology to solve several existing challenges, especially within the healthcare space. A strong focus on funding and starting AI healthcare programs played a significant role in Microsoft Corporation's decision to launch a five-year, US$40 million program known as AI for Health in January 2019.

The Microsoft program will use artificial intelligence tools to resolve some of the greatest healthcare challenges including global health crises, treatment, and disease diagnosis. Microsoft has also ensured that academia, non-profit, and research organizations have access to this technology, technical experts, and resources to leverage AI for care delivery and research.

In January 2020, these factors influenced Takeda Pharmaceutical Company and MIT's School of Engineering to join hands for three years to drive innovation and the application of AI in the healthcare industry and drug development.

AI applications are centered on three main investment areas: diagnostics, engagement, and digitization. With the rapid advancement of these technologies, there are exciting breakthroughs in incorporating AI into medical services.

The most interesting aspect of AI is robots. Robots are not only replacing trained medical staff but also making them more efficient in several areas. Robots help control costs while potentially providing better care and performing accurate surgery in confined spaces.

China and the U.S. have started investing in the development of robots to support doctors. In November 2017, a robot in China passed a medical licensing exam using only an AI brain. Also, it was the first-ever semi-automated operating robot that was used to suture blood vessels as fine as 0.03 mm.

To prevent coronavirus from spreading, American doctors are relying on a robot that can measure a patient's actions and vitals. In addition, robots are being used for recovery and consulting assistance and as transport units. These robots are showing significant potential to revolutionize medical procedures in the future.

Precision medicine is an emerging approach to disease prevention and treatment. The precision medicine approach allows researchers and doctors to predict more accurate treatment and prevention strategies.

The advent of precision medicine technology has allowed healthcare to actively track patients' physiology in real time, take multi-dimensional data, and create predictive algorithms that use collective learnings to calculate individual outcomes.

In recent years, there has been an immense focus on enabling direct-to-consumer genomics. Now, companies are aiming to create patient-centric products within digitization processes and genomics related to ordering complex testing in clinics.

In January 2020, ixLayer, a start-up based in San Francisco, launched a one-of-its-kind precision health testing platform to enhance the delivery of diagnostic testing and to streamline the complex relationship among physicians, precision health tests, and patients.

Personal health monitoring is a promising example of AI in healthcare. With the emergence of advanced AI and Internet of Medical Things (IoMT), demand for consumer-oriented products such as smart wearables for monitoring well-being is growing significantly.

Owing to the rapid proliferation of smart wearables and mobile apps, enterprises are introducing varied options to monitor personal health.

In October 2019, Gali Health, a health technology company, introduced its Gali AI-powered personal health assistant for people suffering from inflammatory bowel diseases (IBD). It offers health tracking and analytical tools, medically-vetted educational resources, and emotional support to the IBD community.

Similarly, start-ups are also coming forward with innovative devices integrated with state-of-the-art AI technology to contribute to the growing demand for personal health monitoring.

In recent years, AI has been used in numerous ways to support the medical imaging of all kinds. At present, the biggest use for AI is to assist in the analysis of images and perform single narrow recognition tasks.

In the United States, AI is considered highly valuable in enhancing business operations and patient care. It has the greatest impact on patient care by improving the accuracy of clinical outcomes and medical diagnosis.

The strong presence of leading market players in the country is bolstering demand for medical imaging in hospitals and research centers.

In January 2020, Hitachi Healthcare Americas announced it would start a new dedicated R&D center in North America, which will leverage advancements in machine learning and artificial intelligence to bring about the next generation of medical imaging technology.

With a plethora of issues driven by the growing rate of chronic disease and the aging population, the need for new, innovative solutions in the healthcare industry is on the upswing.

Unleashing AI's complete potential in the healthcare industry is not an easy task. Both healthcare providers and AI developers together will have to tackle all the obstacles on the path towards the integration of new technologies.

Clearing all the hurdles will require a combination of technological refinement and shifting mindsets. As the AI trend becomes more deep-rooted, it is giving rise to a ubiquitous discussion: will AI replace doctors and medical professionals, especially radiologists and physicians? The answer is that it will instead increase the efficiency of medical professionals.

Initiatives by IBM Watson and Google's DeepMind will soon unlock the critical answers. However, while AI aims to mimic the human brain, healthcare depends on human judgment and intuition that cannot be substituted.

Even though AI is augmenting the industry's existing capabilities, it is unlikely to fully replace human intervention. An AI-skilled workforce will displace only those who don't want to embrace the technology.

Healthcare is a dynamic industry with significant opportunities. However, uncertainty, cost concerns, and complexity are making it an unnerving one.

The best opportunity for healthcare in the near future is hybrid models, in which clinicians and physicians are supported in treatment planning, diagnosis, and identifying risk factors. Also, with the growth of the geriatric population and the rise of health-related concerns across the globe, the overall burden of disease management has increased.

Patients are also expecting better treatment and care. Owing to growing innovations in the healthcare industry with respect to improved diagnosis and treatment, AI has gained traction among patients and doctors.

In order to develop better medical technology, entrepreneurs, healthcare service providers, investors, policy developers, and patients are coming together.

These factors are set to create a brighter future for AI in the healthcare industry. It is extremely likely that there will be widespread use of, and massive advancements in, AI-integrated technology in the next few years. Moreover, healthcare providers are expected to invest in adequate IT infrastructure solutions and data centers to support new technological development.

Healthcare companies should continually integrate new technologies to build strong value and to keep patients' attention.

-

The insights presented in the article are based on a recent research study on Global Artificial Intelligence In Healthcare Market by Future Market Insights.

Abhishek Budholiya is a tech blogger, digital marketing pro, and has contributed to numerous tech magazines. Currently, as a technology and digital branding consultant, he offers his analysis on the tech market research landscape. His forte is analysing the commercial viability of a new breakthrough, a trait you can see in his writing. When he is not ruminating about the tech world, he can be found playing table tennis or hanging out with his friends.

See the article here:
Decoding the Future Trajectory of Healthcare with AI - ReadWrite

The EARN IT Act revives the legal war over encryption – Scot Scoop News

The new bill, the EARN IT Act, would make tech companies more liable for what occurs on their platforms, but critics worry that it could weaken privacy and security on the internet.

Historically, internet companies have not been legally responsible for the content on their service. The hope is that if tech companies are held accountable for abuse on their platforms, they will mitigate it.

But that comes with a catch: if the bill passes, services that use effective encryption, often to protect users' privacy, would be in legal jeopardy and would likely weaken their security. That would make it easier for hackers to break into phones and internet connections, reducing the security of internet finance and social media.

Unbreakable cryptography is readily available in the modern world, but it has not always been that way. The United States government and its allies have long fought to keep secure encryption from the public, partly through restricting the export of encryption.

In his 1996 book, Applied Cryptography, Bruce Schneier wrote, "According to the U.S. government, cryptography is a munition. This means it is covered under the same rules as a TOW missile or an M1 Abrams Tank. If you sell cryptography overseas without the proper export license, you are an international arms smuggler."

With those restrictions, the U.S. was trying to prevent hostile governments from using unbreakable encryption. But law enforcement was also trying to keep cryptography out of the hands of U.S. citizens. In 1993, the Clinton administration proposed the Clipper Chip, which, in theory, would have prevented criminals from eavesdropping on phone calls but still allowed the government to listen in. In practice, however, the Clipper Chip had vulnerabilities that allowed anyone to disable the part of the chip that provided government access. Phone manufacturers did not implement it, and by 1996 it was defunct.

Tech companies state that they cannot assure privacy for their users while building a flaw in their security that allows law enforcement to access user data.

Matt Blaze, the computer scientist who broke the Clipper Chip, said, "When I hear 'If we can put a man on the moon, we can do this,' it is like saying 'If we can put a man on the moon, well, surely we can put a man on the sun.'"

In 1996, the U.S. government passed the Communications Decency Act. Section 230 of that law states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This gives tech companies legal protection against abuse on their platforms. However, the EARN IT Act would make companies earn this protection by following a set of "best practices."

What worries critics of the bill is that those best practices would be determined by a commission dominated by law enforcement.

The Electronic Frontier Foundation, an internet freedom advocacy organization partially responsible for the widespread use of secure encryption, said in its statement on the EARN IT Act, "We know how [Attorney General William Barr] is going to use his power on the 'best practices' panel: to break encryption."

While the bill brings harsh criticism from the tech industry and advocates for free speech on the internet, it has bipartisan support.

"This bill is a major first step. For the first time, you will have to earn blanket liability protection when it comes to protecting minors. Our goal is to do this in a balanced way that doesn't overly inhibit innovation, but forcibly deals with child exploitation," said Republican Sen. Lindsey Graham, who co-sponsored the bill with Democrat Richard Blumenthal.

Continued here:
The EARN IT Act revives the legal war over encryption - Scot Scoop News

Three things central bankers can learn from Bitcoin – MIT Technology Review

For central bankers, the game changed last summer when Facebook unveiled its proposal for Libra. Many have responded by seriously exploring whether and how they should issue their own digital money.

Arguably, though, the more fundamental change is more than a decade old. It was Bitcoin that first made it possible to transfer digital value without the need for an intermediary, a model that competes directly with the traditional financial system. The network's resilience against attackers suggests there is another way of setting up the system.

Last weekend at the MIT Bitcoin Expo held on campus in Cambridge, Massachusetts, I sat down with experts familiar with central banking as well as cryptocurrency. We discussed the practical concerns central bankers should be considering as they begin to design their own digital money systems. One common theme: central bankers have plenty to learn from Bitcoin.

Security can be achieved through resilience.

The US Federal Reserve has no current plans to issue a central bank digital currency (CBDC). But if it ever did, nine out of the top 10 requirements would pertain to security, said Bob Bench, director of applied fintech research at the Boston Fed. "Because the second that thing goes live," he said, "it's the most attacked program in the world."

Bitcoin, with its mix of transparency, cryptography, and economic incentives, has something to teach central bankers about data security, according to Robleh Ali, a research scientist at the MIT Media Lab's Digital Currency Initiative. "It's a system that exists in a very hostile environment, and it's proved to be resilient to that," said Ali. It's also a fundamentally different way of achieving security compared with how it is done in the traditional system: "Rather than try to hide the data behind walls, it's trying to make the system so it's inherently resilient."

Keep it simple.

CBDCs can be thought of as third-generation digital currencies, said Ali. If Bitcoin is the first generation, Ethereum and other so-called smart-contract platforms, which include relatively complicated programming languages, can be seen as the second generation. While it may be tempting to add even more bells and whistles to a CBDC system, that would be the wrong approach, Ali said, because the more complexity you have, the more opportunities you give attackers to break in. "What you want in the third generation is a much simpler system even than Bitcoin," he said. "It's more about taking things away than adding things, and I think in terms of making it secure, that should be the mindset."

Privacy is going to be very tricky.

Ali said he expects that not all central banks choosing to issue digital currency will use the same system; many will likely pursue a hybrid between blockchain-based cryptocurrencies like Bitcoin and more traditional, centralized systems.

Such permissioned blockchain systems, also called distributed ledger technologies, could give central banks new tools, like the ability to program the currency to perform specific functions, said Sonja Davidovic, an economist at the International Monetary Fund. For instance, it may let banks automate their responses to certain kinds of economic changes and give central bankers more precise control over the money supply. They would also have much more detailed visibility into the goings-on in their respective economies. There's a problem, however, said Davidovic: "We haven't really seen yet how privacy could be protected."

Bitcoin privacy is tricky. Though users are pseudonymous, its public accounting ledger, called the blockchain, makes all transactions traceable. How would a blockchain-based CBDC system keep transaction data private? How would it represent people on the blockchain? Unless the system allows only small transactions, users will have to identify themselves somehow in order to comply with anti-money-laundering rules. How will their identity data be protected from theft, fraud, or even government surveillance?
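That traceability is easy to see in miniature: because every transfer between pseudonymous addresses is published, anyone can replay the ledger and rebuild each address's full history. The Python sketch below does exactly that for a handful of invented transactions; the addresses and amounts are made up, and real chain-analysis tools layer clustering heuristics on top of this.

```python
from collections import defaultdict

# A toy public ledger: every transfer between pseudonymous addresses is
# visible to anyone who downloads the chain. Addresses and amounts are invented.
ledger = [
    {"from": "addr_1a2b", "to": "addr_9f3c", "amount": 0.50},
    {"from": "addr_9f3c", "to": "addr_77aa", "amount": 0.20},
    {"from": "addr_1a2b", "to": "addr_77aa", "amount": 1.10},
    {"from": "addr_77aa", "to": "addr_9f3c", "amount": 0.05},
]

def activity_profile(ledger):
    """Rebuild each pseudonym's history: counterparties, totals sent and
    received. Linking one address to a real person then exposes that
    person's entire transaction graph."""
    profile = defaultdict(lambda: {"sent": 0.0, "received": 0.0, "counterparties": set()})
    for tx in ledger:
        profile[tx["from"]]["sent"] += tx["amount"]
        profile[tx["from"]]["counterparties"].add(tx["to"])
        profile[tx["to"]]["received"] += tx["amount"]
        profile[tx["to"]]["counterparties"].add(tx["from"])
    return profile

for address, info in activity_profile(ledger).items():
    print(address, info)
```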

In the cryptocurrency world, so-called privacy coins like Zcash and Monero, which use advanced cryptographic techniques to hide blockchain transaction data from public view, have arisen as alternatives to Bitcoin. But even if central banks are able to do something similar, it still might be possible to construct profiles of people based on their metadata, said Davidovic: "I'm not entirely sure that this is a problem that technology alone can solve."

Read the original post:
Three things central bankers can learn from Bitcoin - MIT Technology Review

Only 6% of the ad industry is happy with the digital advertising ecosystem – AdNews

Just 6% of the industry is satisfied with the current digital advertising ecosystem, according to a survey by Industry Index.

The figures come as the Australian Competition and Consumer Commission (ACCC) kicks off two inquiries into online advertising, with one focusing on the adtech industry.

The survey was conducted in partnership with TV advertising solutions company MadHive and AdLedger, a nonprofit research consortium which has members such as Publicis Media, GroupM and OMG.

More than 100 brand marketers, agencies and digital publishers were surveyed, with 6% saying they're satisfied with the current digital advertising ecosystem. Another 92% believe there is a need for industry-wide standardisation.

"Digital advertising is still suffering from the same issues of transparency, fraud and fragmentation," Christiana Cacciapuoti, executive director at AdLedger, says.

"And it's because we just keep slapping band-aids on a fundamentally broken system, when we need to be developing a new infrastructure that's driven by innovative technologies."

The Australian watchdog is calling for feedback from the industry as it begins its inquiry into the sector, which it has described as opaque. It's expected to hand down its interim report by December.

"The adtech ecosystem absolutely needs to be changed," says Alysia Borsa, executive vice president and chief business and data officer at Meredith Corporation.

"There is a lack of transparency, which leads to fraud, which leads to low quality, which leads to poor performance. It's a really bad cycle."

The survey also found that 83% of respondents believe cryptography can be used to create transparencies and efficiencies, most often agreeing that cryptography could improve problems associated with fraud (66%) and the ability to track results (66%).

"Sooner or later, the industry is going to realise that this dysfunctional relationship has got to end, and the only way to fix it is with next-generation technologies," MadHive CEO Adam Helfgott says.

"And with blockchain and cryptography already weeding out fraud and solving similar issues on OTT, it's only a matter of time till the industry stands together and overhauls the system."

More:
Only 6% of the ad industry is happy with the digital advertising ecosystem - AdNews

Big B and Priyanka Gandhis corona advice, Italy gets freebies & WHO let the dogs out – ThePrint

New Delhi: Crashing stock markets, free porn and lots of advisories. Twitter captures the global rollercoaster that coronavirus has caused.

First, another high-profile coronavirus case: Canadian Prime Minister Justin Trudeau informed Twitter that his wife had tested positive for the virus and is now in quarantine.

Former Congress president Rahul Gandhi slams the Modi government, again, and it's not the economy.

His sister Priyanka, meanwhile, lectures us on keeping safe in these challenging times.

While Twitter talks about globalisation's role in the spread of coronavirus, author Kavitha Rao shares a different, witty perspective.

Actions speak louder than words, and superstar Amitabh Bachchan is doing just that. Check out his advice

There's good news from WHO for our best friend on COVID-19, and author Liam Hackett has something delightful to say about that.

Writer Rupa Gulab finds something funny in these pandemic times.

Actor Tom Hanks offers a thank you note to all those who wished his wife and him a speedy recovery from coronavirus.

Don't worry, Hanks, we know who's the stronger one in this battle!

Amidst all the conspiracy theories about which country might have engineered COVID-19 and why, Chinese foreign ministry spokesperson Zhao Lijian offers his version

The world might be facing a massive shutdown, but at least the Italians are being granted some interesting freebies. Ahem.

Discover why US whistleblower Edward Snowden contemplates investing in crypto-currency.

Finally, Farooq Abdullah's daughter Safia Abdullah Khan took to Twitter to celebrate her father's release from detention in the Valley today.

-Inputs by Yimkumla Longkumer

Read more:
Big B and Priyanka Gandhis corona advice, Italy gets freebies & WHO let the dogs out - ThePrint

What is social distancing? And what does it have to do with COVID-19? – Deseret News

SALT LAKE CITY: Just a couple of months ago, few people outside of science and emergency preparedness had ever used the term "social distancing." But Americans were doing it already, in an incremental yet revolutionary change enabled by technology.

We distanced ourselves from other people when we checked ourselves in at an airport kiosk or checked ourselves out at the grocery store. We became a little more socially distant once we stopped pulling over and asking for directions because we had GPS in our cars. We began eating restaurant food without going to restaurants (thanks to DoorDash and Uber Eats), and we watch movies on big screens at home, instead of seeing them with our neighbors in crowded theaters.

In short, even before COVID-19, America had been preparing for coronavirus-driven social distancing for years.

Facebook founder Mark Zuckerberg famously said that his company's goal was to bring people closer together. British economist Frances Cairncross said technology promised "the death of distance."

But the togetherness of technology, which allows an increasing number of Americans to work remotely as the new coronavirus spreads, is a different kind of togetherness than what families enjoy at the dinner table, or the banter shared at an office or coffee shop. It also comes at a cost.

As author John Horvat wrote, "Commerce is based on more than just transactions. It has always relied upon organic relationships."

The pleasantries exchanged at the cash register do more than pass time, Horvat said. These seemingly minor exchanges help knit communities together. They tend to produce what sociologists call "social capital."

As Americans retreat even further from each other out of fear of contracting COVID-19, we will likely experience more of the negative effects of the social distancing we've already been doing, including loneliness and depression, sociologists and other experts say.

But in this case, there could be a benefit once the pandemic has passed: This mandatory social distancing might be the catalyst that brings us closer together again.

The rapid spread of the new coronavirus, which emerged in Wuhan, China, in late 2019, is enabled, in part, because of how long it takes for symptoms to occur (five days or longer) and because it can be transmitted from one person to another within a space of about six feet, according to the Centers for Disease Control and Prevention.

While people are most contagious when they are sick, medical experts believe that transmission can also occur before symptoms emerge, which is why governments across the world were calling for social distancing measures weeks before the World Health Organization on March 11 declared the coronavirus to be a pandemic.

The term "social distancing" itself isn't new.

A report prepared 10 years ago by the Association of State and Territorial Health Officials, in conjunction with the CDC, deemed social distancing an effective nonpharmaceutical intervention to combat pandemics and argued that the practice was effective during the flu pandemic of 1918-19.

Then, the social distancing ordered by the city of St. Louis, Missouri, stood in stark contrast to that of Philadelphia, which held a parade and became the U.S. city with the greatest number of deaths, 16,000 in six months.

More than a century later, Philadelphia is still being punished for this on Twitter, where people are writing, "Don't be Philadelphia; be St. Louis."

To be St. Louis nationwide, health officials are urging Americans to take a drastic and unsettling step: to stay home as much as possible. "Cancel everything. Now," Yascha Mounk, an assistant professor at Johns Hopkins University, wrote in The Atlantic.

For introverts, the overly stressed and people who are uncomfortable in crowds, remote work and widespread cancellations may sound like a vacation, government permission to do what they yearn to do anyway. "I've been ahead of the curve. I've been socially distancing myself for the past 20 years," Fox News personality and comedian Greg Gutfeld posted on Twitter. And National Security Agency whistleblower Edward Snowden, now living in Russia, posted on Wednesday, "Social distancing is underrated."

The idea of social distancing can seem like an anomaly in an age of hyper-connectivity, said Dan Rothwell, professor emeritus of communication at Cabrillo University in Aptos, California.

But as Rothwell points out in his book In Mixed Company: Communicating in Small Groups and Teams, virtual connection has resulted in a society-wide erosion of civility.

Research has shown that virtual interaction is more likely to be negative and disapproving than when people communicate face to face, and social distance can promote misunderstandings, Rothwell said.

Moreover, the ease with which technology allows us to retreat, even from our own family members, is troubling, he added.

"My next-door neighbor is my daughter, her husband and our four grandkids. It's wonderful, but I can't tell you the number of times we've texted to see if they're there, or picked up the phone to ask one of the grandkids to send some milk over to us, when what would have happened before is we would have had to wander over there and knock on the door."

That has happened in offices as well, as researchers have found that people who work near each other will text instead of getting up and walking over to another's desk to ask a question. And one study has shown that nearly 7 in 10 millennials have been told via text or Facebook that a romantic relationship was over, Rothwell said.

That said, technology is also making social distancing and quarantine more bearable than it was in centuries past, when being banished from a community, as for leprosy, meant you might never see your family again.

"We can quarantine ourselves and still be connected. And that's an interesting contradiction," Rothwell said.

Some people aren't bothered by health officials' recommendation that we keep close to our homes, like the person who responded to Snowden on Twitter: "It's times like these I appreciate my social anxiety and hermit-like tendencies."

Although social distancing may be easier for introverted people than extroverts, the response to COVID-19 will expose the myth that introverts don't like being around people, said Susan Cain, author of "Quiet: The Power of Introverts in a World That Can't Stop Talking."

While introverted people are energized by quiet, they're not antisocial. They just want to interact quietly and with fewer people at a time, Cain said.

But at a time like this, anyone's preferred way of being around people is going to be difficult.

And for both introverts and extroverts, too much solitude can morph into loneliness, which is increasing among all age groups in the U.S.

As Claire Pomeroy reported for Scientific American last year, nearly half of Americans say they frequently feel alone and without meaningful connection to other people. Loneliness itself has been described as an epidemic.

"Biologists have shown that feelings of loneliness trigger the release of stress hormones that in turn are associated with higher blood pressure, decreased resistance to infection and increased risk of cardiovascular disease and cancer," Pomeroy wrote, adding that there's even some evidence that loneliness accelerates cognitive decline.

Although Cain thrives on quiet, she said she likes to write in a busy coffee shop near her home in the Northeast. She didn't go there on Wednesday, however, because of the coronavirus warnings, and she noticed on Tuesday that it was much less crowded than usual.

"I feel like we're at a tipping-point kind of moment," she said. "It has been striking me how much it affects us all, even though it affects us in different ways."

Georganne Bender, a consultant and speaker with Kizer & Bender in St. Charles, Illinois, calls herself a consumer anthropologist because she researches consumer behavior in its natural environment, which she still considers to be brick-and-mortar stores. Even though online shopping and speedy delivery were keeping many people at home even before they were told to practice social distancing, she sees pockets of hope.

For example, she cited Wegmans, a chain of grocery stores in the Northeast and Mid-Atlantic states that offers cafes in some locations, as well as live music, and is creating a sense of community for people who might otherwise be lonely. "The first time I went was on a Friday night, and I saw a band playing and people dancing. I was blown away. I saw young couples there, and also men and women there with their elderly parents. When stores do things like that, it does bring people together, and they start making friendships," she said.

Similarly, Matthew Stern reported for RetailWire that a Dutch chain, Jumbo Supermarkets, now has a chatter checkout line for people who want to talk, and a coffee area where lonely shoppers can socialize with volunteers.

Such measures could help combat the isolation of technology-driven societies, Bender said, as well as the negative side effects of social distance, both culturally and government-imposed.

"We are growing generations of people who don't know how to communicate," Bender said. "We're losing a lot of camaraderie, and knowing your neighbors, how to talk to people, how to make eye contact." But the yearning for interaction still exists even as we retreat into our homes, she said.

"You watch people at a conference or trade show, and the interaction is off the charts because we're hungry for that," she said.

"My hope is that when this is all over and we're feeling safe again, we all start coming out of our houses and going back to the malls and sporting events and concerts, and having friends over and interacting with each other again. I think we're going to be starving for that. And hopefully, the coronavirus isolating us will be a catalyst for people getting back together."

More:
What is social distancing? And what does it have to do with COVID-19? - Deseret News