EARN IT Act threatens end-to-end encryption – Naked Security

While we're all distracted by stockpiling latex gloves and toilet paper, there's a bill tiptoeing through the US Congress that could inflict on encryption the backdoor "virus" that law enforcement agencies have been pushing for years.

At least, that's the interpretation of digital rights advocates who say that the proposed EARN IT Act could harm free speech and data security.

Sophos is in that camp. For years, Naked Security and Sophos have said #nobackdoors, agreeing with the Information Technology Industry Council that "weakening security with the aim of advancing security simply does not make sense."

The first public hearing on the proposed legislation took place on Wednesday. You can view the 2+ hours of testimony here.

Called the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act), the bill would require tech companies to meet safety requirements for children online before obtaining immunity from lawsuits. You can read the discussion draft here.

To kill that immunity, the bill would strip Section 230 of the Communications Decency Act (CDA) protections from certain apps and companies so that they could be held responsible for user-uploaded content. Section 230, considered the most important law protecting free speech online, states that websites aren't liable for user-submitted content.

Here's how the Electronic Frontier Foundation (EFF) frames the importance of Section 230:

Section 230 enforces the common-sense principle that if you say something illegal online, you should be the one held responsible, not the website or platform where you said it (with some important exceptions).

EARN IT is a bipartisan effort, having been introduced by Republican Lindsey Graham, Democrat Richard Blumenthal and other legislators who've used the specter of online child exploitation to argue for the weakening of encryption. This comes as no surprise: in December 2019, while grilling Facebook and Apple, Graham and other senators threatened to regulate encryption unless the companies give law enforcement access to encrypted user data, pointing to child abuse as one reason.

What Graham threatened at the time:

You're going to find a way to do this or we're going to go do it for you. We're not going to live in a world where a bunch of child abusers have a safe haven to practice their craft. Period. End of discussion.

One of the problems with the EARN IT bill: the proposed legislation offers no meaningful solutions to the problem of child exploitation, as the EFF says:

It doesn't help organizations that support victims. It doesn't equip law enforcement agencies with resources to investigate claims of child exploitation or training in how to use online platforms to catch perpetrators. Rather, the bill's authors have shrewdly used defending children as the pretense for an attack on our free speech and security online.

If passed, the legislation will create a National Commission on Online Child Sexual Exploitation Prevention tasked with developing "best practices" for owners of Internet platforms to prevent, reduce, and respond to child exploitation online. But, as the EFF maintains, "best practices" would essentially translate into legal requirements:

If a platform failed to adhere to them, it would lose essential legal protections for free speech.

The "best practices" approach came after pushback over the bill's predicted effects on privacy and free speech, pushback that caused its authors to roll out the new structure. The best practices would be subject to approval or veto by the Attorney General (currently William Barr, who's issued a public call for backdoors), the Secretary of Homeland Security (ditto), and the Chair of the Federal Trade Commission (FTC).

The bill doesn't explicitly mention encryption. It doesn't have to: policy experts say that the guidelines set up by the proposed legislation would require companies to provide "lawful access", a phrase that could well encompass backdoors.

CNET talked to Lindsey Barrett, a staff attorney at Georgetown Law's Institute for Public Representation Communications and Technology Clinic, who said that the way the bill is structured is a clear indication that it's meant to target encryption:

When you're talking about a bill that is structured for the attorney general to give his opinion and have decisive influence over what the best practices are, it does not take a rocket scientist to concur that this is designed to target encryption.

If the bill passes, the choice for tech companies comes down to either weakening their own encryption and endangering the privacy and security of all their users, or forgoing Section 230 protections and potentially facing liability in a wave of lawsuits.

Kate Ruane, a senior legislative counsel for the American Civil Liberties Union, had this to say to CNET:

The removal of Section 230 liability essentially makes the best practices a requirement. The cost of doing business without those immunities is too high.

Tellingly, one of the bill's lead sponsors, Sen. Richard Blumenthal, told the Washington Post that he's unwilling to include a measure that would stipulate that encryption is off-limits in the proposed commission's guidelines. This is what he told the newspaper:

I doubt I am the best qualified person to decide what best practices should be. Better-qualified people to make these decisions will be represented on the commission. So, to ban or require one best practice or another [beforehand] I just think leads us down a very perilous road.

The EARN IT Act joins an ongoing string of legal assaults against the CDA's Section 230. Most recently, in January 2019, the US Supreme Court refused to consider a case over defamatory reviews on Yelp.

We've also seen actions taken against Section 230-protected sites such as those dedicated to revenge porn, for one.

In March 2018, we also saw the passage of H.R. 1865, the Fight Online Sex Trafficking Act (FOSTA) bill, which makes online prostitution ads a federal crime and which amended Section 230.

In response to the overwhelming vote to pass the bill (it sailed through on a 97-2 vote, over the protests of free-speech advocates, constitutional law experts and sex-trafficking victims), Craigslist shut down its personals section.

Besides containing no tools to actually stop online child abuse, the proposed bill would make it much harder to prosecute pedophiles, according to an analysis from The Center for Internet and Society at Stanford Law School. As explained by Riana Pfefferkorn, Associate Director of Surveillance and Cybersecurity, as it now stands, online providers proactively, and voluntarily, scan for child abuse images by comparing their hash values to known abusive content.

Apple does it with iCloud content, Facebook has used hashing to stop the spread of millions of images of child nudity, and Google has released a free artificial-intelligence tool to help stamp out abusive material, among other voluntary efforts by major online platforms.

The key word is "voluntarily," Pfefferkorn says. Those platforms are all private companies, as opposed to government agencies, which are required by Fourth Amendment protections against unreasonable search to get warrants before they search our digital content, including our email, chat discussions, and cloud storage.

The reason that private companies like Facebook can, and do, do exactly that is that they are not the government; they're private actors, so the Fourth Amendment doesn't apply to them.
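The voluntary scanning Pfefferkorn describes boils down to hash matching: compute a fingerprint of an uploaded file and look it up in a database of fingerprints of known abusive images. Production systems use perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding; the sketch below uses plain SHA-256 and a made-up one-entry database purely to illustrate the matching step.

```python
import hashlib

# Hypothetical database of digests of known abusive files. This sample
# entry is simply the SHA-256 of b"foo", used here as a stand-in.
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def flag_upload(data: bytes) -> bool:
    """Return True if the uploaded bytes match a known hash."""
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_HASHES

print(flag_upload(b"foo"))  # True: digest is in the database
print(flag_upload(b"bar"))  # False: unknown content passes through
```

Note the design consequence that matters for the EARN IT debate: this check only works where the service can see the plaintext bytes, which is exactly what end-to-end encryption removes.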

Turning the private companies that provide those communications into agents of the state would, ironically, result in courts' suppression of evidence of the child sexual exploitation crimes targeted by the bill, she said.

That means the EARN IT Act would backfire for its core purpose, while violating the constitutional rights of online service providers and users alike.

Besides the EFF, the EARN IT bill is facing opposition from civil rights groups that include the American Civil Liberties Union and Americans for Prosperity, Access Now, Mozilla, the Center for Democracy & Technology, Fight for the Future, the Wikimedia Foundation, the Surveillance Technology Oversight Project, the Consumer Technology Association, the Internet Association, and the Computer & Communications Industry Association.

Earlier this month, Sen. Ron Wyden, who introduced the CDA's Section 230, said in a statement that the disastrous legislation is a Trojan horse that will give President Trump and Attorney General Barr the power to control online speech and require government access to every aspect of Americans' lives.


Wyden's statement didn't specifically mention encryption, but his office told Ars Technica that when [the senator] discusses weakening security and requiring government access to every aspect of Americans' lives, that is referring to encryption.


The EARN IT Bill Is the Government’s Plan to Scan Every Message Online – EFF

Imagine an Internet where the law required every message sent to be read by government-approved scanning software. Companies that handle such messages wouldn't be allowed to securely encrypt them, or they'd lose legal protections that allow them to operate.


That's what the Senate Judiciary Committee has proposed and hopes to pass into law. The so-called EARN IT bill, sponsored by Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT), will strip Section 230 protections away from any website that doesn't follow a list of "best practices," meaning those sites can be sued into bankruptcy. The best-practices list will be created by a government commission, headed by Attorney General Barr, who has made it very clear he would like to ban encryption and guarantee law enforcement "legal access" to any digital message.

The EARN IT bill had its first hearing today, and its supporters' strategy is clear. Because they didn't put the word "encryption" in the bill, they're going to insist it doesn't affect encryption.

"This bill says nothing about encryption," co-sponsor Sen. Blumenthal said at today's hearing. "Have you found a word in this bill about encryption?" he asked one witness.

It's true that the bill's authors avoided using that word. But they did propose legislation that enables an all-out assault on encryption. It would create a 19-person commission that's completely controlled by the Attorney General and law enforcement agencies. And, at the hearing, a vice-president at the National Center for Missing and Exploited Children (NCMEC) made it clear [PDF] what he wants the best practices to be. NCMEC believes online services should be made to screen their messages for material that NCMEC considers abusive; use screening technology approved by NCMEC and law enforcement; report what they find in the messages to NCMEC; and be held legally responsible for the content of messages sent by others.

You can't have an Internet where messages are screened en masse and also have end-to-end encryption, any more than you can create backdoors that can only be used by the good guys. The two are mutually exclusive. Concepts like "client-side scanning" aren't a clever route around this; such scanning is just another way to break end-to-end encryption. Either the message remains private to everyone but its recipients, or it's available to others.
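The incompatibility is structural, not cryptographic. A client-side scanner has to run on the plaintext before the cipher is ever applied, so the content is exposed to third-party policy code regardless of how strong the encryption is. A toy sketch (no real protocol, and a deliberately trivial XOR "cipher" standing in for a real E2E scheme) makes the ordering visible:

```python
# Illustrative only: where a client-side scanner sits in the send path.
BLOCKLIST = {"forbidden"}
reports = []  # messages forwarded off the device, in the clear

def scanner_sees(message: str) -> bool:
    return any(word in message for word in BLOCKLIST)

def toy_encrypt(message: str, key: int = 42) -> bytes:
    # stand-in for a real end-to-end cipher
    return bytes(b ^ key for b in message.encode())

def send(message: str) -> bytes:
    # The scan runs on the PLAINTEXT, before encryption, so the
    # message was never private to its endpoints to begin with.
    if scanner_sees(message):
        reports.append(message)
    return toy_encrypt(message)

send("lunch at noon?")
send("this is forbidden")
print(reports)  # the scanner captured the second message in full
```

Whoever controls `BLOCKLIST` controls what gets reported, which is why the EFF argues the same hook that scans for abuse imagery today can scan for disfavored speech tomorrow.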

The 19-person draft commission isn't any better than the 15-person commission envisioned in an early draft of the bill. It's completely dominated by law enforcement and allied groups like NCMEC. Not only will those groups have a majority of votes on the commission, but the bill gives Attorney General Barr the power to veto or approve the list of best practices. Even if other commission members do disagree with law enforcement, Barr's veto power will put him in a position to strong-arm them.

The Commission won't be a body that seriously considers policy; it will be a vehicle for creating a law enforcement wish list. Barr has made clear, over and over again, that breaking encryption is at the top of that wish list. Once it's broken, authoritarian regimes around the world will rejoice, as they gain the ability to add their own types of mandatory scanning, not just for child sexual abuse material but for self-expression that those governments want to suppress.

The privacy and security of all users will suffer if U.S. law enforcement is able to achieve its dream of breaking encryption. Senators should reject the EARN IT bill.



WhatsApp Users To Get This Killer New Feature: Here's How It Works – Forbes


It looks like WhatsApp will soon add some serious security changes to its messaging app, now used by 2 billion people worldwide, as it strives to fend off competition from other platforms seen as being more secure. WhatsApp has always positioned itself as a security and privacy champion, and so this comes as little surprise. All that said, given WhatsApps scale, such changes have a huge impact.

Last year, reports emerged that the Facebook-owned messaging giant was testing disappearing or self-destructing messages for group chats. The platform's sister app, Facebook Messenger, has had the functionality for some time within its so-called "secret conversations," which are end-to-end encrypted.

Now, according to WABetaInfo, WhatsApp is stepping up plans to launch the functionality some time soon. As GSMArena explains, the option to set self-destructing messages in private chats can be found in two beta versions of the app, 2.20.83 and 2.20.84, and you can choose an expiry period for the messages of one hour, one day, one week, one month or one year.

With a timer set, users can see a clock icon which warns them that the message will disappear, providing an indication of how long remains. There is no confirmation as to the timing of the launch of the new functionality, or whether other security protections such as reporting screenshots or preventing text-copying will be added as part of this shift to more secure endpoint functionality.

Disappearing messages.
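WhatsApp hasn't published how the timer is implemented, but the behaviour described above, a per-message expiry chosen from fixed periods, can be sketched in a few lines. Everything here (the names, the purge step) is hypothetical illustration, not WhatsApp's code:

```python
from dataclasses import dataclass

# The expiry periods reported for the beta, in seconds.
EXPIRY_CHOICES = {
    "1 hour": 3_600,
    "1 day": 86_400,
    "1 week": 604_800,
    "1 month": 2_592_000,   # 30 days
    "1 year": 31_536_000,
}

@dataclass
class Message:
    text: str
    sent_at: float   # epoch seconds when the message was sent
    ttl: int         # seconds until the message disappears

    def expired(self, now: float) -> bool:
        return now >= self.sent_at + self.ttl

def purge(chat: list, now: float) -> list:
    """Drop messages whose timer has elapsed."""
    return [m for m in chat if not m.expired(now)]

msg = Message("see you at 6", sent_at=0.0, ttl=EXPIRY_CHOICES["1 hour"])
print(purge([msg], now=600.0))    # ten minutes in: still visible
print(purge([msg], now=4000.0))   # past the hour: gone
```

One caveat the article hints at: a timer like this deletes the local copy on schedule, but it can't stop a recipient from screenshotting or copying the text first, which is why the unconfirmed screenshot-reporting protections matter.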

WhatsApp does not compete with Messenger; they are from the same stable, after all. But it does compete with Signal, which is getting more traction in the market and has recently confirmed its plans to become more mainstream. Ironically, Signal's shift into WhatsApp territory has been helped by an investment from departed WhatsApp co-founder Brian Acton.

Signal is starting to encroach on WhatsApp territory, which had been the secure message platform of choice for businesses, government officials and other groups for anything other than officially restricted data. Recently we have seen the EU mandate Signal as an alternative to WhatsApp, citing security considerations, although for the EU's diplomatic staff, even Signal is not secure enough.

Signal operates a much more secure client-side app than WhatsApp, with security prioritized over ease of functionality. So much so that even transferring chat history to a new phone is tricky on an Android and impossible on an iPhone. WhatsApp uses its own version of Signal's open-source security protocol to protect WhatsApp chats, but Signal's version remains open source and so is seen as more secure. Signal offers a disappearing-message function for every chat.

Signal

It has been a difficult 12 months for WhatsApp on the security front. The platform that has done more than anyone else to promote end-to-end security for the masses has been hit with reports of nation-state attacks, malicious media files and the risk of a backdoor to lock targeted individuals out. In this, WhatsApp has become a victim of its own success. When attackers are looking for new vulnerabilities, a ubiquitous app like WhatsApp or Facebook provides a likely access point onto most phones if an exploit can be found. That goes with the territory.

Beefing up security will not help WhatsApp, or Signal for that matter, defend against the latest moves by the U.S. government to undermine end-to-end encryption, allowing law enforcement access to private chats. Disappearing messages will not plug gaps in the encryption armour. And so while WhatsApp beefs up its security, the real battle looks like being with lawmakers, not other platforms.


Patent hints that encrypted displays could appear on future Apple devices – TechSpot

Why it matters: Regardless of the security and privacy measures we take on our devices, the content reaching us through their screens is ultimately susceptible to shoulder surfing and is often the source of amusement for overly inquisitive peers. Although third-party and in-built privacy screens have tried to address the problem and succeeded to some extent, Apple seems to be developing a solution that visually encrypts the display itself to make it impossible for unwanted observers to figure out the actual screen content.

Shoulder surfing remains a common practice among folk who have little regard for user privacy and often engage in this unethical activity, either for personal amusement or to social engineer their way to someone's sensitive information.

There have been attempts to curb this phenomenon with products like HP's Sure View display technology, built into some of its laptops, and third-party privacy filters for devices of several form factors. Apple users, however, might not have to worry about this problem for long, as the company recently filed for a patent on technology that tracks the user's gaze as they operate the device and visually encrypts content to protect it from unwanted observers.

PhoneArena reports that Apple's 'gaze-dependent display encryption' technology could appear in multiple Apple devices in the future, including iPhones, iPads, monitors, the Apple Watch - basically anything with a display and other hardware required for the tech to function.

Using the camera to identify and track the user's gaze, along with special processing circuitry, the device's screen can generate visually encrypted frames when an onlooker is detected. These frames are made up of two regions: one that includes unmodified content for the intended user, based on their gaze and proximity from the camera, and a second obscured region that shows manipulated content through text scrambling, color altering, and image warping techniques.

The area within these circles represents unmodified information currently under user view

The patent also suggests that content manipulation will take place dynamically as "display content is not to be visually encrypted" when an onlooker's gaze is away from the display. When they do take a peek (intentionally or otherwise), the processing circuitry will begin generating visually encrypted frames, seemingly unnoticeable to the user.
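The two-region idea, readable content under the user's gaze and scrambled content everywhere else, can be mocked up in a few lines. This is a hypothetical sketch only: the patent describes hardware that warps rendered frames, not a function shuffling strings, and the gaze model here is reduced to a single row index.

```python
import random

def visually_encrypt(lines, gaze_row, radius=1, seed=0):
    """Keep rows near the gaze readable; scramble the rest."""
    rng = random.Random(seed)  # deterministic for the demo
    out = []
    for row, line in enumerate(lines):
        if abs(row - gaze_row) <= radius:
            out.append(line)            # unmodified region under gaze
        else:
            chars = list(line)
            rng.shuffle(chars)          # obscured region (text scrambling)
            out.append("".join(chars))
    return out

screen = ["account balance", "PIN 4321", "recent activity"]
print(visually_encrypt(screen, gaze_row=1, radius=0))
# Only the gazed-at row ("PIN 4321") stays readable; the rest is shuffled.
```

The dynamic behaviour the patent describes would amount to calling something like this only while an onlooker's gaze is detected on the display, and passing frames through untouched otherwise.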

The whole idea is to make sure that information reaches only its intended user, much like the Compubody Sock from several years ago, which set out to achieve the same objective, albeit in a much simpler, low-tech fashion.

The Compubody Sock was certainly effective but risked you getting more attention than usual

It remains to be seen if Apple implements this technology in its future products or simply decides to add this patent to its ever-growing pile of unused ones. The company's Face ID tech could eventually evolve to support this feature, further improving the user privacy of its devices; however, the processing and financial costs associated with this technology are likely going to make for even more expensive Apple products in the future.


Cloud Encryption Technology Market Outlooks 2020: Industry Analysis, Growth rate, Cost Structures and Future Forecasts to 2025 – 3rd Watch News

The global Cloud Encryption Technology Market is influenced by several strategic factors and demand dynamics, a detailed study of which is presented in this report. The growth of the Cloud Encryption Technology market can be attributed to governmental regulations in key regions and the emerging business landscape. The report on the global Cloud Encryption Technology market covers these notable developments and evaluates their impact on the global market landscape.

The report presents a comprehensive analysis of market dynamics, including growth drivers and notable trends impacting the future growth of the market. The report studies prominent opportunities, recent technological advances, and market-changing factors in various nations. The factors affecting the revenue share of key regional markets are briefly analyzed in the report.


The major players covered in this report: Gemalto, Sophos, Symantec, SkyHigh Networks, Netskope

The rise of the cloud encryption technology industry has stimulated competition between established market players and new entrants. Demand is growing as a large share of organizations depend on cloud encryption to meet their day-to-day requirements. The industry is known for its high standards of manufacturing, product quality, packaging and constant innovation. Further, prominent companies in the global industry are focused on providing more reliable cloud encryption technology for various applications, and on delivering high-performance devices and equipment to all sectors.



This is the biggest ‘WhatsApp mistake’ you are making on Android phones – Gadgets Now

NEW DELHI: WhatsApp users generally have a tendency to back up all chats, relevant or not. While storage is not an issue, since WhatsApp backups don't count toward your Google Drive space, what is more concerning is that these backups may not be secure. For Android users, WhatsApp chats backed up to Google Drive lose the default end-to-end (E2E) encryption; WhatsApp backups to iCloud, for iPhone users, are encrypted. So, the next time you are trying to back up a sensitive chat on WhatsApp on Android, remember it is probably not a good idea, as there would be no encryption at all. This is one of the biggest weaknesses in the entire WhatsApp E2E environment on Android.

Thankfully, WhatsApp is reportedly working on encrypting Google Drive chat backups. Android users can soon expect to password-protect their WhatsApp chat backups on Google Drive; however, there is no official date yet for when this feature will be available. "The feature is in an alpha stage of development, so what we're showing now is very poor but it's enough to understand its purpose. Basically the feature allows you to encrypt your backup with a password, so you're sure that nobody (neither WhatsApp nor Google) will be able to see its content," according to WABetaInfo.

Meanwhile, you can opt to disable automatic chat backup to Google Drive or iCloud. Instead of chat backups, you can export specific chats. If some chats are important and need to be saved, it's a good idea to export them and save them securely somewhere else. Unnecessarily backing up all WhatsApp chats takes up storage and is often of little use.

Also, disable the option to automatically add all WhatsApp media files to your phone's gallery. WhatsApp photos consume your phone's internal storage when they are saved to the gallery. So, unless you explicitly want all those silly good-morning photographs in your gallery, disable this option in the settings menu.
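The point of a password-protected backup is that the key is derived from something only the user knows, so neither WhatsApp nor Google ever holds it. WhatsApp hasn't published its scheme; as a generic illustration, here is how a key could be derived from a password with PBKDF2 from Python's standard library (a real implementation would then feed the key to an authenticated cipher such as AES-GCM, omitted here):

```python
import hashlib
import os

def derive_backup_key(password: str, salt: bytes,
                      iterations: int = 200_000) -> bytes:
    """Stretch a password into a 256-bit key via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)   # random salt, stored alongside the backup
key = derive_backup_key("correct horse battery staple", salt)
assert len(key) == 32   # 256-bit key for the cipher

# Same password + salt reproduces the key; a wrong password does not.
assert derive_backup_key("correct horse battery staple", salt) == key
assert derive_backup_key("wrong password", salt) != key
```

The high iteration count is deliberate: it makes brute-forcing the password against a stolen backup expensive, which is the whole protection once the file sits on someone else's server.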


RIT professor explores the art and science of statistical machine learning – RIT University News Services

Statistical machine learning is at the core of modern-day advances in artificial intelligence, but a Rochester Institute of Technology professor argues that applying it correctly requires equal parts science and art. Professor Ernest Fokoué of RIT's School of Mathematical Sciences emphasized the human element of statistical machine learning in his primer on the field, which graced the cover of a recent edition of Notices of the American Mathematical Society.

"One of the most important commodities in your life is common sense," said Fokoué. "Mathematics is beautiful, but mathematics is your servant. When you sit down and design a model, data can be very stubborn. We design models with assumptions of what the data will show or look like, but the data never looks exactly like what you expect. You may have a nice central tenet, but there's always something that's going to require your human intervention. That's where the art comes in. After you run all these statistical techniques, when it comes down to drawing the final conclusion, you need your common sense."

Statistical machine learning is a field that combines mathematics, probability, statistics, computer science, cognitive neuroscience and psychology to create models that learn from data and make predictions about the world. One of its earliest applications was when the United States Postal Service used it to accurately learn and recognize handwritten letters and digits to autonomously sort letters. Today, we see it applied in a variety of settings, from facial recognition technology on smartphones to self-driving cars.
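The fit-then-predict loop at the heart of the field can be shown with one of its simplest learners, a nearest-centroid classifier. This toy (not drawn from the paper, and far simpler than real digit-recognition models) averages the training points of each class, then labels a new point by the closest average:

```python
def fit_centroids(X, y):
    """Learn one centroid (feature-wise mean) per class label."""
    sums, counts = {}, {}
    for point, label in zip(X, y):
        s = sums.setdefault(label, [0.0] * len(point))
        for i, v in enumerate(point):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def predict(centroids, point):
    """Assign the class whose centroid is nearest (squared Euclidean)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, point))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl]))

# Two toy clusters standing in for, say, two handwritten digits.
X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
y = ["a", "a", "a", "b", "b", "b"]
model = fit_centroids(X, y)
print(predict(model, (0.5, 0.5)))  # "a"
print(predict(model, (5.5, 5.5)))  # "b"
```

Even this tiny model illustrates Fokoué's point about assumptions: averaging only works if each class forms one compact blob, and when the data refuses to look like that, choosing a better-suited model is where the practitioner's judgment, the "art," comes in.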

Researchers have developed many different learning machines and statistical models that can be applied to a given problem, but there is no one-size-fits-all method that works well for all situations. Fokoué said selecting the appropriate method requires mathematical and statistical rigor along with practical knowledge. His paper explains the central concepts and approaches, which he hopes will get more people involved in the field and help them harvest its potential.

"Statistical machine learning is the main tool behind artificial intelligence," said Fokoué. "It's allowing us to construct extensions of the human being so our lives, transportation, agriculture, medicine and education can all be better. Thanks to statistical machine learning, you can understand the processes by which people learn and slowly and steadily help humanity access a higher level."

This year, Fokoué has been on sabbatical, traveling the world exploring new frontiers in statistical machine learning. Fokoué's full article is available on the AMS website.


Machine learning could replace subjective diagnosis of mental health disorders – techAU

AI is taking over almost every industry, and the health industry has some of the biggest benefits to gain. Machine learning, a discipline of AI, is showing good signs of being able to supersede human capabilities in accurately identifying mental health disorders.

CSIRO have announced the results of a study of 101 participants that used ML to diagnose bipolar disorder or depression. The error rate was between 20 and 30%, and while that isn't yet better than humans and isn't ready for clinical use, it does show a promising sign for the future.

The machine-learning system detects patterns in data, a process known as training. As with autonomous driving's use of computer vision, the results get better with the more data you can provide. It's expected to improve as it's fed more data on how people play the game.

One of the big challenges in psychiatry is misdiagnosis.

It gives us first-hand information about what is happening in the brain, which can be an alternative route of information for diagnosis.

The immediate aim is to build a tool that will help clinicians, but the long-term goal is to replace subjective diagnosis altogether. Between depression and bipolar disorder, there's a significant incidence of misdiagnosis: as many as 60% of people with bipolar disorder are misdiagnosed as depressed, and around one third of them remain misdiagnosed for more than 10 years.

It is estimated that within five years, computers could be making the diagnosis rather than humans, as we improve the ability of AI to understand the complex human brain.

The study involved having users play a simple game in which you select between two boxes on screen. One box rewards you with greater frequency than the other; you have to collect the most points.

Whether you stick with orange, experiment with blue, or just randomly alternate, these decisions paint a picture of how your brain works.
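CSIRO hasn't published the exact features its model uses, but the kind of signal a learning system could extract from such choice sequences is well known from behavioural research: how often the player switches boxes, and how often they stay after a win. The sketch below is a hedged illustration with made-up feature names, not the study's pipeline:

```python
def choice_features(choices, rewards):
    """choices: e.g. ['orange', 'blue', ...]; rewards: 1 = point won.

    Returns hypothetical behavioural features a classifier might use:
    overall switch rate, and the 'win-stay' rate (staying on the same
    box after it just paid out).
    """
    switches = sum(a != b for a, b in zip(choices, choices[1:]))
    win_stay = sum(r == 1 and a == b
                   for a, b, r in zip(choices, choices[1:], rewards))
    wins = sum(rewards[:-1])  # wins that had a following choice
    return {
        "switch_rate": switches / (len(choices) - 1),
        "win_stay_rate": win_stay / wins if wins else 0.0,
    }

choices = ["orange", "orange", "blue", "blue", "orange"]
rewards = [1, 0, 1, 1, 0]
print(choice_features(choices, rewards))
# {'switch_rate': 0.5, 'win_stay_rate': 0.666...}
```

A real study would compute many such features over hundreds of trials per participant and feed them to a trained classifier; the point here is only that raw button presses become numbers a model can learn from.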

Often the traditional signatures of mental illness are too subtle for humans to notice, but they're the kind of thing machine-learning AI thrives on. Now CSIRO researchers have developed a system they say can peer into the mind with significant accuracy, and could revolutionise mental health diagnosis.

Last year, researchers reported they had found a way of analysing language from Facebook status updates to predict future diagnoses of depression.

More information at CSIRO.


Next-gen supercomputers are fast-tracking treatments for the coronavirus in a race against time – CNBC

A computer image created by Nexu Science Communication together with Trinity College in Dublin, shows a model structurally representative of a betacoronavirus which is the type of virus linked to COVID-19.

Source: NEXU Science Communication | Reuters

Research has gone digital, and medical science is no exception. As the novel coronavirus continues to spread, for instance, scientists searching for a treatment have drafted IBM's Summit supercomputer, the world's most powerful high-performance computing facility, according to the Top500 list, to help find promising candidate drugs.

One way of treating an infection could be with a compound that sticks to a certain part of the virus, disarming it. With tens of thousands of processors spanning an area as large as two tennis courts, the Summit facility at Oak Ridge National Laboratory (ORNL) has more computational power than 1 million top-of-the-line laptops. Using that muscle, researchers digitally simulated how 8,000 different molecules would interact with the virus, a Herculean task for your typical personal computer.

"It took us a day or two, whereas it has traditionally taken months on a normal computer," said Jeremy Smith, director of the University of Tennessee/ORNL Center for Molecular Biophysics and principal researcher in the study.

Simulations alone can't prove a treatment will work, but the project was able to identify 77 candidate molecules that other researchers can now test in trials. The fight against the novel coronavirus is just one example of how supercomputers have become an essential part of the process of discovery. The $200 million Summit and similar machines also simulate the birth of the universe, explosions from atomic weapons and a host of events too complicated or too violent to recreate in a lab.
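In miniature, that screening workflow amounts to scoring a large library of candidates and keeping the top performers. The sketch below is purely illustrative: `binding_score` is a hypothetical stand-in for the physics-based docking simulations the ORNL team actually ran, and the molecule names are invented.

```python
# Toy sketch of a virtual-screening loop. The real study ran
# physics-based docking on Summit; binding_score here is a hypothetical
# stand-in that just returns a number between 0 and 1.

def binding_score(molecule):
    """Placeholder for an expensive docking simulation of one molecule."""
    return (sum(ord(c) for c in molecule) % 100) / 100.0

def screen(candidates, top_n):
    """Rank all candidates by score and keep the most promising ones."""
    ranked = sorted(candidates, key=binding_score, reverse=True)
    return ranked[:top_n]

library = [f"mol-{i}" for i in range(8000)]  # 8,000 compounds, as in the study
shortlist = screen(library, top_n=77)        # 77 candidates, as reported
```

The per-molecule scoring step is where the expense lies; because each molecule scores independently, the work parallelizes almost perfectly across a supercomputer's processors.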

The current generation's formidable power is just a taste of what's to come. Aurora, a $500 million Intel machine currently under installation at Argonne National Laboratory, will herald the long-awaited arrival of "exaflop" facilities capable of a billion billion calculations per second (five times more than Summit) in 2021 with others to follow. China, Japan and the European Union are all expected to switch on similar "exascale" systems in the next five years.

These new machines will enable new discoveries, but only for the select few researchers with the programming know-how required to efficiently marshal their considerable resources. What's more, technological hurdles lead some experts to believe that exascale computing might be the end of the line. For these reasons, scientists are increasingly attempting to harness artificial intelligence to accomplish more research with less computational power.

"We as an industry have become too captive to building systems that execute the benchmark well without necessarily paying attention to how systems are used," says Dave Turek, vice president of technical computing for IBM Cognitive Systems. He likens high-performance computing record-seeking to focusing on building the world's fastest race car instead of highway-ready minivans. "The ability to inform the classic ways of doing HPC with AI becomes really the innovation wave that's coursing through HPC today."

Just getting to the verge of exascale computing has taken a decade of research and collaboration between the Department of Energy and private vendors. "It's been a journey," says Patricia Damkroger, general manager of Intel's high-performance computing division. "Ten years ago, they said it couldn't be done."

While each system has its own unique architecture, Summit, Aurora, and the upcoming Frontier supercomputer all represent variations on a theme: they harness the immense power of graphics processing units (GPUs) alongside traditional central processing units (CPUs). GPUs can carry out more simultaneous operations than a CPU can, so leaning on these workhorses has let Intel and IBM design machines that would otherwise have required untold megawatts of energy.
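The trade-off can be felt even from Python, using NumPy's bulk array arithmetic as a stand-in for GPU-style throughput (one simple instruction applied across millions of values) and an element-at-a-time loop as a stand-in for serial CPU flexibility. This is an analogy for the programming model, not a benchmark of real hardware.

```python
# Analogy for the GPU design point: many identical simple operations at
# once. NumPy's vectorised arithmetic stands in for the GPU (one bulk
# operation over a whole array); the Python loop stands in for the
# serial CPU path.
import numpy as np

def saxpy_serial(a, x, y):
    """One element at a time: flexible, but low throughput."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_bulk(a, x, y):
    """One bulk operation over the whole array: high throughput."""
    return a * x + y

x = np.linspace(0.0, 1.0, 1_000_000)
y = np.ones_like(x)
serial = saxpy_serial(2.0, x[:5], y[:5])  # same arithmetic, element by element
bulk = saxpy_bulk(2.0, x, y)              # one fused pass over a million values
```

Both paths compute identical results; the difference is how much of the work can be expressed as one uniform operation, which is exactly what GPUs are built to exploit.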

IBM's Summit supercomputer currently holds the record for the world's fastest supercomputer.

Source: IBM

That computational power lets Summit, which is known as a "pre-exascale" computer because it runs at 0.2 exaflops, simulate one single supernova explosion in about two months, according to Bronson Messer, the acting director of science for the Oak Ridge Leadership Computing Facility. He hopes that machines like Aurora (1 exaflop) and the upcoming Frontier supercomputer (1.5 exaflops) will get that time down to about a week. Damkroger looks forward to medical applications. Where current supercomputers can digitally model a single heart, for instance, exascale machines will be able to simulate how the heart works together with blood vessels, she predicts.

But even as exascale developers take a victory lap, they know that two challenges mean the add-more-GPUs formula is likely approaching a plateau in its scientific usefulness. First, GPUs are strong but dumb, best suited to simple operations such as arithmetic and geometric calculations that they can crowdsource among their many components. Researchers have written simulations to run on flexible CPUs for decades, and shifting to GPUs often requires starting from scratch.

GPUs have thousands of cores for simultaneous computation, but each handles only simple instructions.

Source: IBM

"The real issue that we're wrestling with at this point is how do we move our code over" from running on CPUs to running on GPUs, says Richard Loft, a computational scientist at the National Center for Atmospheric Research, home of Top500's 44th ranking supercomputerCheyenne, a CPU-based machine "It's labor intensive, and they're difficult to program."

Second, the more processors a machine has, the harder it is to coordinate the sharing of calculations. For the climate modeling that Loft does, machines with more processors better answer questions like "what is the chance of a once-in-a-millennium deluge," because they can run more identical simulations simultaneously and build up more robust statistics. But they don't ultimately enable the climate models themselves to get much more sophisticated.
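The ensemble logic Loft describes can be illustrated with a toy Monte Carlo experiment: more independent runs do not make the model smarter, they just make the rare-event statistics more trustworthy. Everything here, the exponential "annual peak rainfall" model and the threshold, is invented for illustration.

```python
# Toy illustration of ensemble statistics: each "member" is an
# independent run of the same simple model, and rare-event probability
# estimates only stabilise as the ensemble grows.
import random

def simulate_year(rng):
    """Stand-in for one simulated year; returns a peak-rainfall value."""
    return rng.expovariate(1.0)  # heavy-ish upper tail for rare extremes

def tail_probability(n_members, threshold, seed=0):
    """Estimate P(rainfall > threshold) from an ensemble of n_members runs."""
    rng = random.Random(seed)
    exceed = sum(simulate_year(rng) > threshold for _ in range(n_members))
    return exceed / n_members

# For this model the true P(X > 7) is exp(-7), roughly 1 in 1,100 -- a
# once-in-a-millennium event. A 1,000-member ensemble may see it once or
# not at all; a 1,000,000-member ensemble pins the probability down.
small = tail_probability(1_000, threshold=7.0)
large = tail_probability(1_000_000, threshold=7.0)
```

More processors let more members run side by side, tightening estimates like `large`; they do nothing to make `simulate_year` itself more sophisticated, which is Loft's point.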

For that, the actual processors have to get faster, a feat that bumps up against what's physically possible. Faster processors need smaller transistors, and current transistors measure about 7 nanometers. Companies might be able to shrink that size, Turek says, but only to a point. "You can't get to zero [nanometers]," he says. "You have to invoke other kinds of approaches."

If supercomputers can't get much more powerful, researchers will have to get smarter about how they use the facilities. Traditional computing is often an exercise in brute forcing a problem, and machine learning techniques may allow researchers to approach complex calculations with more finesse.


Take drug design. A pharmacist considering a dozen ingredients faces countless possible recipes, varying the amounts of each compound, which could take a supercomputer years to simulate. An emerging machine learning technique known as Bayesian optimization asks: does the computer really need to check every single option? Rather than systematically sweeping the field, the method helps isolate the most promising drugs by implementing common-sense assumptions. Once it finds one reasonably effective solution, for instance, it might prioritize seeking small improvements with minor tweaks.
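A heavily simplified sketch of that idea: sample a few recipes at random, then spend the rest of a fixed evaluation budget nudging the best one found so far. Real Bayesian optimization fits a probabilistic surrogate model and an acquisition function; this toy keeps only the explore-then-refine spirit, and the `effectiveness` objective is invented.

```python
# Simplified explore-then-refine search, standing in for Bayesian
# optimisation. A "recipe" is a vector of 12 ingredient amounts in
# [0, 1]; effectiveness is a hypothetical score that a real pipeline
# would obtain from an expensive simulation or lab assay.
import random

def effectiveness(recipe):
    """Toy objective: peaks when every ingredient amount is 0.5."""
    return -sum((x - 0.5) ** 2 for x in recipe)

def optimise(n_ingredients=12, budget=200, seed=42):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for trial in range(budget):
        if best is None or trial < budget // 4:
            # Exploration: try a completely random recipe.
            cand = [rng.random() for _ in range(n_ingredients)]
        else:
            # Refinement: small tweaks around the current best recipe.
            cand = [min(1.0, max(0.0, x + rng.gauss(0, 0.05))) for x in best]
        score = effectiveness(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

best_recipe, best_score = optimise()
```

The budget of 200 evaluations replaces an exhaustive sweep that would be astronomically larger, which is where Turek's 70% to 90% savings figure comes from in spirit.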

In trial-and-error fields like materials science and cosmetics, Turek says that this strategy can reduce the number of simulations needed by 70% to 90%. Recently, for instance, the technique has led to breakthroughs in battery design and the discovery of a new antibiotic.

Fields like climate science and particle physics use brute-force computation in a different way, by starting with simple mathematical laws of nature and calculating the behavior of complex systems. Climate models, for instance, try to predict how air currents conspire with forests, cities, and oceans to determine global temperature.

Mike Pritchard, a climatologist at the University of California, Irvine, hopes to figure out how clouds fit into this picture, but most current climate models are blind to features smaller than a few dozen miles wide. Crunching the numbers for a worldwide layer of clouds, which might be just a couple hundred feet tall, simply requires more mathematical brawn than any supercomputer can deliver.

Unless the computer understands how clouds interact better than we do, that is. Pritchard is one of many climatologists experimenting with training neural networks, a machine learning technique that looks for patterns by trial and error, to mimic cloud behavior. This approach takes a lot of computing power up front to generate realistic clouds for the neural network to imitate. But once the network has learned how to produce plausible cloudlike behavior, it can replace the computationally intensive laws of nature in the global model, at least in theory. "It's a very exciting time," Pritchard says. "It could be totally revolutionary, if it's credible."
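This train-once, run-cheap pattern can be sketched with any surrogate model. Below, a NumPy polynomial fit stands in for the neural network and a smooth toy function stands in for the expensive cloud-resolving physics; both stand-ins are assumptions for illustration only.

```python
# Sketch of the emulator pattern: pay a one-time cost to sample an
# "expensive" physics routine, fit a cheap surrogate to the samples,
# then let the larger model query the surrogate instead. A polynomial
# fit plays the neural network's role here; the "physics" is a toy
# smooth function, not a cloud model.
import numpy as np

def expensive_physics(x):
    """Stand-in for a costly cloud-resolving calculation."""
    return np.sin(2 * np.pi * x) * np.exp(-x)

# One-time cost: sample the expensive routine to build training data.
x_train = np.linspace(0.0, 1.0, 200)
y_train = expensive_physics(x_train)

# Fit the cheap surrogate (a neural network in the real application).
emulator = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# The global model now queries the emulator instead of the physics.
x_new = np.linspace(0.0, 1.0, 1001)
max_error = float(np.max(np.abs(emulator(x_new) - expensive_physics(x_new))))
```

The open question Pritchard raises, whether the emulator stays "credible", corresponds to keeping `max_error` acceptably small on inputs the surrogate was never trained on.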

Companies are preparing their machines so researchers like Pritchard can take full advantage of the computational tools they're developing. Turek says IBM is focusing on designing AI-ready machines capable of extreme multitasking and quickly shuttling around huge quantities of information, and the Department of Energy contract for Aurora is Intel's first that specifies a benchmark for certain AI applications, according to Damkroger. Intel is also developing an open-source software toolkit called oneAPI that will make it easier for developers to create programs that run efficiently on a variety of processors, including CPUs and GPUs.

As exascale and machine learning tools become increasingly available, scientists hope they'll be able to move past the computer engineering and focus on making new discoveries. "When we get to exascale that's only going to be half the story," Messer says. "What we actually accomplish at the exascale will be what matters."

Go here to see the original:
Next-gen supercomputers are fast-tracking treatments for the coronavirus in a race against time - CNBC

Decoding the Future Trajectory of Healthcare with AI – ReadWrite

Artificial intelligence (AI) is getting increasingly sophisticated in its applications, with enhanced efficiency and speed at lower cost. Every sector has been reaping the benefits of AI in recent times, and the healthcare industry is no exception. Here is a look at the future trajectory of healthcare with AI.

The impact of artificial intelligence on the healthcare industry, through machine learning (ML) and natural language processing (NLP), is transforming care delivery. Additionally, patients are expected to gain greater access to their health-related information than before through applications such as smart wearable devices and mobile electronic medical records (EMR).

Personalized healthcare will empower patients to take the wheel of their own well-being, facilitate high-end healthcare, and extend better patient-provider communication to underprivileged areas.

For instance, IBM Watson for Health is helping healthcare organizations apply cognitive technology to unlock vast amounts of diagnostic and health-related information.

In addition, Google's DeepMind Health is collaborating with researchers, clinicians, and patients to solve real-world healthcare problems. The company has also combined systems neuroscience with machine learning to develop strong general-purpose learning algorithms within neural networks that mimic the human brain.

Companies are working to develop AI technology that solves existing challenges, especially within the healthcare space. A strong focus on funding and launching AI healthcare programs played a significant role in Microsoft Corporation's decision to launch AI for Health, a five-year, US$40 million program, in January 2019.

The Microsoft program will use artificial intelligence tools to tackle some of the greatest healthcare challenges, including global health crises, treatment, and disease diagnosis. Microsoft has also ensured that academic, non-profit, and research organizations have access to this technology, technical experts, and resources to leverage AI for care delivery and research.

In January 2020, these factors influenced Takeda Pharmaceutical Company and MIT's School of Engineering to join hands for three years to drive innovation and application of AI in the healthcare industry and drug development.

AI applications are mainly centered on three investment areas: diagnostics, engagement, and digitization. With the rapid advancement of these technologies, there are exciting breakthroughs in incorporating AI into medical services.

One of the most interesting aspects of AI is robotics. Robots are not only taking over tasks from trained medical staff but also making them more efficient in several areas. Robots help control costs while potentially providing better care and performing accurate surgery in confined spaces.

China and the U.S. have started investing in the development of robots to support doctors. In November 2017, a robot in China passed a medical licensing exam using only an AI brain. China also fielded the first-ever semi-automated operating robot, used to suture blood vessels as fine as 0.03 mm.

To prevent the coronavirus from spreading, American doctors are relying on a robot that can measure a patient's condition and vitals. Robots are also being used for recovery and consulting assistance and as transport units. These robots are showing significant potential to revolutionize medical procedures in the future.

Precision medicine is an emerging approach to disease prevention and treatment. It allows researchers and doctors to predict more accurate treatment and prevention strategies for the individual patient.

The advent of precision medicine technology has allowed healthcare to actively track patients' physiology in real time, take in multi-dimensional data, and create predictive algorithms that use collective learnings to calculate individual outcomes.

In recent years, there has been an immense focus on enabling direct-to-consumer genomics. Now, companies are aiming to create patient-centric products that digitize genomics workflows, such as ordering complex testing in clinics.

In January 2020, ixLayer, a start-up based in San Francisco, launched a first-of-its-kind precision health testing platform to enhance the delivery of diagnostic testing and simplify the complex relationship among physicians, precision health tests, and patients.

Personal health monitoring is a promising example of AI in healthcare. With the emergence of advanced AI and the Internet of Medical Things (IoMT), demand for consumer-oriented products such as smart wearables for monitoring well-being is growing significantly.

Owing to the rapid proliferation of smart wearables and mobile apps, enterprises are introducing varied options to monitor personal health.

In October 2019, Gali Health, a health technology company, introduced its Gali AI-powered personal health assistant for people suffering from inflammatory bowel diseases (IBD). It offers health tracking and analytical tools, medically-vetted educational resources, and emotional support to the IBD community.

Similarly, start-ups are also coming forward with innovative devices integrated with state-of-the-art AI technology to contribute to the growing demand for personal health monitoring.

In recent years, AI has been used in numerous ways to support medical imaging of all kinds. At present, the biggest use for AI is assisting in the analysis of images and performing single, narrow recognition tasks.

In the United States, AI is considered highly valuable for enhancing business operations and patient care. It has the greatest impact on patient care by improving the accuracy of clinical outcomes and medical diagnosis.

The strong presence of leading market players in the country is bolstering the demand for medical imaging in hospitals and research centers.

In January 2020, Hitachi Healthcare Americas announced a new dedicated R&D center in North America that will leverage advancements in machine learning and artificial intelligence to bring about the next generation of medical imaging technology.

With a plethora of issues driven by the growing rate of chronic disease and an aging population, the need for innovative new solutions in the healthcare industry is on the upswing.

Unleashing AI's full potential in the healthcare industry is no easy task. Healthcare providers and AI developers will have to work together to tackle the obstacles on the path toward integrating new technologies.

Clearing all the hurdles will require a combination of technological refinement and shifting mindsets. As the AI trend becomes more deep-rooted, it is giving rise to a now-ubiquitous question: will AI replace doctors and medical professionals, especially radiologists and physicians? The answer is that it will increase the efficiency of those medical professionals.

Initiatives by IBM Watson and Google's DeepMind will soon unlock critical answers. However, while AI aims to mimic the human brain in healthcare, human judgment and intuition cannot be substituted.

Even though AI is augmenting the industry's existing capabilities, it is unlikely to fully replace human intervention. AI-skilled workers will displace only those who don't want to embrace the technology.

Healthcare is a dynamic industry with significant opportunities. However, uncertainty, cost concerns, and complexity are making it an unnerving one.

The best opportunity for healthcare in the near future lies in hybrid models, where clinicians and physicians are supported in treatment planning, diagnosis, and identifying risk factors. Also, with the growth of the geriatric population and the rise of health-related concerns across the globe, the overall burden of disease management has increased.

Patients are also expecting better treatment and care. With growing innovation in the healthcare industry around improved diagnosis and treatment, AI has gained acceptance among patients and doctors.

In order to develop better medical technology, entrepreneurs, healthcare service providers, investors, policy developers, and patients are coming together.

These factors are set to create a brighter future for AI in the healthcare industry. It is highly likely that there will be widespread use of, and massive advancements in, AI-integrated technology in the next few years. Moreover, healthcare providers are expected to invest in adequate IT infrastructure solutions and data centers to support new technological development.

Healthcare companies should continually integrate new technologies to build strong value and keep patients' attention.

-

The insights presented in this article are based on a recent research study on the Global Artificial Intelligence in Healthcare Market by Future Market Insights.

Abhishek Budholiya is a tech blogger, digital marketing pro, and has contributed to numerous tech magazines. Currently, as a technology and digital branding consultant, he offers his analysis on the tech market research landscape. His forte is analysing the commercial viability of a new breakthrough, a trait you can see in his writing. When he is not ruminating about the tech world, he can be found playing table tennis or hanging out with his friends.

See the article here:
Decoding the Future Trajectory of Healthcare with AI - ReadWrite