Researchers Join Forces to Investigate the Airborne Transmission of Coronavirus Using a Supercomputer – SciTechDaily

The first situation to be simulated is that of someone coughing indoors. Photo: Petteri Peltonen / Aalto University

The project includes fluid dynamics physicists, virologists, and biomedical engineering specialists. The researchers are using a supercomputer to carry out 3D modeling and believe that the first results will be obtained in the next few weeks.

Aalto University, the Finnish Meteorological Institute, VTT Technical Research Centre of Finland and the University of Helsinki have brought together a multidisciplinary group of researchers to model how the extremely small droplets that leave the respiratory tract when coughing, sneezing or talking are transported in air currents. These droplets can carry pathogens such as coronaviruses. The researchers will also use existing information to determine whether the coronavirus could survive in the air.

Dozens of researchers are involved, ranging from fluid dynamics physicists to specialists in virology, medical technology, and infectious diseases. The project was launched based on a proposal put forward by Janne Kuusela, Chief Physician at the Essote Emergency Clinic run by the South Savo Joint Authority for Social and Health Services.

For the modeling work, the researchers are using a supercomputer that CSC Finnish IT Center for Science Ltd has made available at very short notice.

"Under normal conditions, researchers may have to queue for many days to start their simulations on CSC machines. There is no time for that now, so instead, we are permitted exceptionally to start straight away," says Aalto University Assistant Professor Ville Vuorinen, who is leading the cooperative project.

The division of work for the project is clear. Aalto University, VTT Technical Research Centre of Finland and the Finnish Meteorological Institute will carry out the 3D airflow modeling together with the droplet motion. The task of the virology and infectious diseases specialists is to analyze the implications of the models for coronavirus infections. The research group is working closely with the physicians at Essote and infectious diseases specialists from the Finnish Institute for Health and Welfare.

The first situation to be simulated is that of a person coughing in an indoor environment. The boundary conditions, such as the air velocity, are specified in order to ensure that the different models produced are comparable and that it is possible, for example, to assess the necessary safety distances between people.

"One aim is to find out how quickly the virus concentrations dilute in the air in various airflow situations that could arise in places such as a grocery store," says Vuorinen.

"Visualising the invisible movements of viral particles is very important in order to better understand the spreading of infectious diseases and the different phenomena related to this, both now and in the future," he adds.

Researchers believe that the high computing capacity and close, multidisciplinary cooperation will mean that the first results will be obtained already in the next few weeks.

CSC Finnish IT Center for Science Ltd is prioritizing the provision of computing capacity and expert assistance for research aimed at combating the COVID-19 pandemic. If you are working directly on a pandemic research project, please contact [email protected].

"I fully encourage other researchers to do research on the coronavirus epidemic as it is really time to roll up the sleeves. Within the space of just a few hours, we have put a team together and started research immediately," says Vuorinen.

Read the rest here:

Researchers Join Forces to Investigate the Airborne Transmission of Coronavirus Using a Supercomputer - SciTechDaily

The ins and outs of high-performance computing as a service – TechCentral.ie

HPC services can meet expanding supercomputing needs, but they're not always better than on-premises supercomputers



Electronics on missiles and military helicopters need to survive extreme conditions. Before any of that physical hardware can be deployed, defence contractor McCormick Stevenson Corp. simulates the real-world conditions it will endure, relying on finite element analysis software like Ansys, which requires significant computing power.

Then one day a few years ago, it unexpectedly ran up against its computing limits.

"We had some jobs that would have overwhelmed the computers that we had in office," said Mike Krawczyk, principal engineer at McCormick Stevenson. "It did not make economic or schedule sense to buy a machine and install software." Instead, the company contracted with Rescale, which could sell them cycles on a supercomputer-class system for a tiny fraction of what they would have spent on new hardware.

McCormick Stevenson had become an early adopter in a market known as supercomputing as a service or high-performance computing (HPC) as a service, two terms that are closely related. HPC is the application of supercomputers to computationally complex problems, while supercomputers are those computers at the cutting edge of processing capacity, according to the National Institute for Computational Sciences.

Whatever it is called, these services are upending the traditional supercomputing market and bringing HPC power to customers who could never afford it before. But they are no panacea, and they are definitely not plug-and-play, at least not yet.

From the end user's perspective, HPC as a service resembles the batch-processing model that dates back to the early mainframe era. "We create an Ansys batch file and send that up, and after it runs, we pull down the result files and import them locally here," Krawczyk said.
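
To make that batch-processing picture concrete, here is a minimal illustrative sketch of the upload-run-download loop Krawczyk describes. The service URL, endpoints and job fields are invented for illustration; they are not Rescale's actual API, and the snippet assumes the third-party Python requests library.

```python
# Hypothetical sketch of the batch-style HPC-as-a-service workflow described above.
# The endpoints and job fields are illustrative only and do not correspond to any
# vendor's actual API.
import time
import requests

BASE = "https://hpc.example.com/api"       # placeholder service URL
HEADERS = {"Authorization": "Token <api-key>"}

def run_remote_batch(input_file: str, result_file: str) -> None:
    # 1. Upload the locally prepared solver input deck (e.g. an Ansys batch file).
    with open(input_file, "rb") as f:
        upload = requests.post(f"{BASE}/files", headers=HEADERS, files={"file": f})
    file_id = upload.json()["id"]

    # 2. Submit a job that runs the solver against the uploaded input on remote nodes.
    job = requests.post(f"{BASE}/jobs", headers=HEADERS,
                        json={"input_file": file_id, "cores": 64}).json()

    # 3. Poll until the job finishes, then pull the result files back for local post-processing.
    while requests.get(f"{BASE}/jobs/{job['id']}", headers=HEADERS).json()["status"] != "completed":
        time.sleep(60)
    results = requests.get(f"{BASE}/jobs/{job['id']}/results", headers=HEADERS)
    with open(result_file, "wb") as f:
        f.write(results.content)

run_remote_batch("model.inp", "results.zip")
```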

Behind the scenes, cloud providers are running the supercomputing infrastructure in their own data centres, though that does not necessarily imply the sort of cutting-edge hardware you might be visualising when you hear "supercomputer". As Dave Turek, vice president of technical computing at IBM OpenPOWER, explains it, HPC services at their core are "a collection of servers that are strung together with an interconnect. You have the ability to invoke this virtual computing infrastructure that allows you to bring a lot of different servers to work together in a parallel construct to solve the problem when you present it."

Sounds simple in theory. But making it viable in practice required some chipping away at technical problems, according to Theo Lynn, professor of digital business at Dublin City University. What differentiates ordinary computing from HPC is those interconnects – high-speed, low-latency, and expensive – so those needed to be brought to the world of cloud infrastructure. Storage performance and data transport also needed to be brought up to a level at least in the same ballpark as on-prem HPC before HPC services could be viable.

But Lynn said that some of the innovations that have helped HPC services take off have been more institutional than technological. In particular, "we are now seeing more and more traditional HPC applications adopting cloud-friendly licensing models – a barrier to adoption in the past."

And the economics have also shifted the potential customer base, he said. "Cloud service providers have opened up the market more by targeting low-end HPC buyers who couldn't afford the capex associated with traditional HPC and opening up the market to new users. As the markets open up, the hyperscale economic model becomes more and more feasible, costs start coming down."

HPC services are attractive to private-sector customers in the same fields where traditional supercomputing has long held sway. These include sectors that rely heavily on complex mathematical modelling, including defence contractors like McCormick Stevenson, along with oil and gas companies, financial services firms, and biotech companies. Dublin City University's Lynn adds that loosely coupled workloads are a particularly good use case, which meant that many early adopters used these services for 3D image rendering and related applications.

But when does it make sense to consider HPC services over on-premises HPC? For hhpberlin, a German company that simulates smoke propagation in and fire damage to structural components of buildings, the move came as it outgrew its current resources.

"For several years, we had run our own small cluster with up to 80 processor cores," said Susanne Kilian, hhpberlin's scientific head of numerical simulation. "With the rise in application complexity, however, this constellation has increasingly proven to be inadequate; the available capacity was not always sufficient to handle projects promptly."

But just spending money on a new cluster was not an ideal solution, she said: "In view of the size and administrative environment of our company, the necessity of constant maintenance of this cluster (regular software and hardware upgrades) turned out to be impractical. Plus, the number of required simulation projects is subject to significant fluctuations, such that the utilisation of the cluster was not really predictable. Typically, phases with very intensive use alternate with phases with little to no use." By moving to an HPC service model, hhpberlin shed that excess capacity and the need to pay up front for upgrades.

IBM's Turek explains the calculus that different companies go through while assessing their needs. For a biosciences start-up with 30 people, "you need computing, but you really can't afford to have 15% of your staff dedicated to it. It's just like you might also say you don't want to have on-staff legal representation, so you'll get that as a service as well." For a bigger company, though, it comes down to weighing the operational expense of an HPC service against the capital expense of buying an in-house supercomputer or HPC cluster.

So far, those are the same sorts of arguments you would have over adopting any cloud service. But the opex vs. capex dilemma can be weighted towards the former by some of the specifics of the HPC market. Supercomputers are not commodity hardware like storage or x86 servers; they are very expensive, and technological advances can swiftly render them obsolete.

As McCormick Stevenson's Krawczyk puts it, "It's like buying a car: as soon as you drive off the lot it starts to depreciate." And for many companies, especially larger and less nimble ones, the process of buying a supercomputer can get hopelessly bogged down. "You're caught up in planning issues, building issues, construction issues, training issues, and then you have to execute an RFP," said IBM's Turek. "You have to work through the CIO. You have to work with your internal customers to make sure there's continuity of service. It's a very, very complex process and not something that a lot of institutions are really excellent at executing."

Once you choose to go down the services route for HPC, you will find you get many of the advantages you expect from cloud services, particularly the ability to pay only for HPC power when you need it, which results in an efficient use of resources. Chirag Dekate, senior director and analyst at Gartner, said bursty workloads, when you have short-term needs for high-performance computing, are a key use case driving adoption of HPC services.

"In the manufacturing industry, you tend to have a high peak of HPC activity around the product design stage," he said. But once the product is designed, HPC resources are less utilised during the rest of the product-development cycle. In contrast, he said, when you have large, long-running jobs, "the economics of the cloud wear down."

With clever system design, you can integrate those HPC-services bursts of activity with your own in-house conventional computing. Teresa Tung, managing director at Accenture Labs, gives an example: "Accessing HPC via APIs makes it seamless to mix with traditional computing." A traditional AI pipeline might have its training done on a high-end supercomputer at the stage when the model is being developed, but the resulting trained model, the one that runs predictions over and over, would then be deployed on other services in the cloud or even on devices at the edge.
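
A rough sketch of the split Tung describes might look like the following. The submit_hpc_job and deploy_for_inference helpers are hypothetical placeholders standing in for whatever job-submission and deployment APIs a given vendor actually exposes.

```python
# Illustrative sketch of a hybrid pipeline: heavy training on an HPC service,
# lightweight inference served from ordinary cloud or edge hardware. The helper
# functions are hypothetical; a real pipeline would call the vendor's own APIs here.
from dataclasses import dataclass

@dataclass
class TrainedModel:
    weights_uri: str   # location of the trained weights produced by the HPC job

def submit_hpc_job(script: str, dataset_uri: str, nodes: int) -> TrainedModel:
    """Placeholder: hand the expensive training run to the HPC service via its API."""
    ...
    return TrainedModel(weights_uri="s3://models/example/v1")

def deploy_for_inference(model: TrainedModel, target: str) -> None:
    """Placeholder: push the much smaller trained model to a cloud endpoint or edge device."""
    ...

# Train once on the supercomputer-class resource, then serve predictions cheaply elsewhere.
model = submit_hpc_job("train.py", dataset_uri="s3://data/training-set", nodes=128)
deploy_for_inference(model, target="edge-device-fleet")
```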

Use of HPC services lends itself to batch-processing and loosely coupled use cases. That ties into a common HPC downside: data transfer issues. High-performance computing by its very nature often involves huge data sets, and sending all that information over the internet to a cloud service provider is no simple thing. "We have clients I talk to in the biotech industry who spend $10 million a month on just the data charges," said IBM's Turek.

And money is not the only potential problem. Building a workflow that makes use of your data can challenge you to work around the long times required for data transfer. "When we had our own HPC cluster, local access to the simulation results already produced – and thus an interactive interim evaluation – was of course possible at any time," said hhpberlin's Kilian. "We're currently working on being able to access and evaluate the data produced in the cloud even more efficiently and interactively at any desired time of the simulation without the need to download large amounts of simulation data."

Mike Krawczyk cites another stumbling block: compliance issues. Any service a defence contractor uses needs to be compliant with the International Traffic in Arms Regulations (ITAR), and McCormick Stevenson went with Rescale in part because it was the only vendor they found that checked that box. While more do today, any company looking to use cloud services should be aware of the legal and data-protection issues involved in living on someone else's infrastructure, and the sensitive nature of many of HPC's use cases makes this doubly true for HPC as a service.

In addition, the IT governance that HPC services require goes beyond regulatory needs. For instance, you will need to keep track of whether your software licenses permit cloud use, especially with specialised software packages written to run on an on-premises HPC cluster. And in general, you need to keep track of how you use HPC services, which can be a tempting resource, especially if you have transitioned from in-house systems where staff was used to having idle HPC capabilities available.

For instance, Ron Gilpin, senior director and Azure Platform Services global lead at Avanade, suggests dialling back how many processing cores you use for tasks that are not time sensitive. If a job only needs to be completed in an hour instead of ten minutes, he said, that might use 165 processors instead of 1,000, a savings of thousands of dollars.
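
Gilpin's numbers follow from simple back-of-the-envelope arithmetic, assuming the job scales roughly linearly so that its total work in core-minutes stays fixed:

```python
# Back-of-the-envelope version of Gilpin's example. Assumes the job scales
# roughly linearly, so the total work (core-minutes) is fixed and the core
# count can be traded against the deadline.
total_work = 1_000 * 10            # 1,000 cores for 10 minutes = 10,000 core-minutes
deadline_minutes = 60              # the job only has to finish within an hour
cores_needed = total_work / deadline_minutes
print(round(cores_needed))         # ~167 cores, in line with the ~165 quoted above
```

Under perfect scaling the total core-hours, and therefore a pure pay-per-core-hour bill, would be unchanged; the dollar savings Gilpin cites presumably come from the fact that large, tightly coupled allocations rarely scale perfectly and are often priced at a premium.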

One of the biggest barriers to HPC adoption has always been the unique in-house skills it requires, and HPC services do not magically make that barrier vanish. "Many CIOs have migrated a lot of their workloads into the cloud and they have seen cost savings and increased agility and efficiency and believe that they can achieve similar results in HPC ecosystems," said Gartner's Dekate. "And a common misperception is that they can somehow optimise human resource cost by essentially moving away from system admins and hiring new cloud experts who can solve their HPC workloads."

"But HPC is not one of the main enterprise environments," he said. "You're dealing with high-end compute nodes interconnected with high-bandwidth, low-latency networking stacks, along with incredibly complicated application and middleware stacks. Even the filesystem layers in many cases are unique to HPC environments. Not having the right skills can be destabilising."

But supercomputing skills are in increasingly short supply, something Dekate refers to as the "greying" of the workforce, in the wake of a generation of developers going to splashy start-ups rather than academia or the more staid firms where HPC is in use. As a result, vendors of HPC services are doing what they can to bridge the gap. IBM's Turek said that many HPC veterans will always want to roll their own exquisitely fine-tuned code and will need specialized debuggers and other tools to help them do that for the cloud. But even HPC newbies can make calls to code libraries built by vendors to exploit supercomputing's parallel processing. And third-party software providers sell turnkey software packages that abstract away much of HPC's complication.

Accenture's Tung said the sector needs to lean further into this in order to truly prosper. "HPCaaS has created dramatically impactful new capability, but what needs to happen is making this easy to apply for the data scientist, the enterprise architect, or the software developer," she said.

"This includes easy-to-use APIs, documentation, and sample code. It includes user support to answer questions. It's not enough to provide an API; that API needs to be fit-for-purpose. For a data scientist this should likely be in Python and easily change out for the frameworks she is already using. The value comes from enabling these users who ultimately will have their jobs improved through new efficiencies and performance, if only they can access the new capabilities." If vendors can pull that off, HPC services might truly bring supercomputing to the masses.

IDG News Service


View original post here:

The ins and outs of high-performance computing as a service - TechCentral.ie

New Docuseries About the World’s Top Industrial Supercomputer – GZERO Media

Over the past decade or so, the European Union has weathered the global financial crisis, a migrant crisis, and the rise of populist nationalism. Sure, it's taken its fair share of bumps and bruises along the way, but the idea of a largely borderless Europe united by common democratic values has survived more or less intact.

Then came the coronavirus. The global pandemic, in which Europe is now one of the two main epicentres, is a still-spiralling nightmare that could make those previous crises look benign by comparison. Here are a few different ways that COVID-19 is severely testing the 27-member bloc:

The economic crisis: Lockdowns intended to stop the virus' spread have brought economic activity to a screeching halt, and national governments are going to need to spend a lot of money to offset the impact. But some EU members can borrow those funds more easily than others. Huge debt loads and deficits in southern European countries like Italy and Spain, which have been hardest hit by the outbreak so far, make it costlier for them to borrow than more fiscally conservative Germany and other northern member states. In the aftermath of the global financial crisis, this imbalance nearly led the bloc's common currency, the Euro, to unravel.

Read more:

New Docuseries About the World's Top Industrial Supercomputer - GZERO Media

Mobile vs. Desktop Poker: What’s the Best Option for You? – Poker News Report

Since the majority of poker players are now staying at home due to the coronavirus, we could expect an increase in online poker activity. Even people who are not into poker are considering taking up different hobbies while they're staying at home.

Many of them have one important question in mind: should one play poker on a PC or on a mobile device?

The truth is both have some advantages and disadvantages, so let's take a look at them in this article. Read on!

Playing poker on your desktop computer is the original way to play online poker. The good news is that you don't need some sort of supercomputer to play poker on the web. Any device that's not older than a decade should do just fine, since poker platforms do not really require you to have cutting-edge specs in your PC.

What many see as the main advantage of PC online poker over its mobile version is the ability to act quickly. Playing the game on the big screen, using a keyboard and a mouse, can help you a lot with this. You will have a nice overview of the game, and you will even be able to play several tables at the same time if you want.

Many professional players enjoy multi-tabling on platforms where it is allowed. They play several games at the same time to maximize their efficiency. While some people might not see this as very practical, others see it as a great way to get the most out of fish.

Mobile online poker has been growing in popularity over the past couple of years. The mobile gaming industry in general is continuously growing, as mobile technology rapidly improves.

We are now able to play games on our phones that we couldn't even play on our desktop computers more than a decade ago. In other words, our phones have become our pocket computers.

One of the reasons why people love playing mobile poker is that they can do it anywhere and anytime. If you like playing games on the go, this is a great opportunity to install some of the available poker platforms and enjoy mobile poker.

For example, you can play a short cash game session while you're waiting for your transport, or while you're waiting in line for something. That's the beauty of mobile poker.

On the other hand, the main disadvantage of mobile poker is the inability to multi-table. A small screen is a major obstacle for online poker professionals, and it is also impossible to implement bots that help with the game.

Therefore, no matter how good our phones get, we're still going to have that physical barrier of phones being too small for a complete online poker experience.

Now that the coronavirus is making you stay home, the only logical choice is to play the game on your PC.

However, once you're able to go out, mobile poker might be a valuable option, especially if you don't want to play it professionally. If your goal is to have fun and play poker from time to time, using your mobile device is a completely valid option.

Yet, if you want to focus on online poker and aim to become a pro, then you'll have to accept the fact that playing on a PC is the only way to do it right now. You may have to buy an additional monitor once you start getting into the swing of things.

Follow this link:

Mobile vs. Desktop Poker: What's the Best Option for You? - Poker News Report

Argonne’s Researchers and Facilities Playing a Key Role in the Fight Against COVID-19 – HPCwire

By mid-March, researchers from around the country had used APS beamlines to characterize roughly a dozen proteins from SARS-CoV-2, several of them with inhibitors.

"The fortunate thing is that we have a bit of a head start," said Bob Fischetti, life sciences advisor to the APS director. This virus is similar but not identical to the virus behind the SARS outbreak in 2002, and 70 structures of proteins from several different coronaviruses had been acquired using data from APS beamlines prior to the recent outbreak. Researchers have background information on how to express, purify and crystallize these proteins, which makes the structures come more quickly right now, about a few a week.

One of the research teams performing work on SARS-CoV-2 includes members of the Center for Structural Genomics of Infectious Diseases (CSGID), which is funded by NIH's National Institute of Allergy and Infectious Diseases (NIAID). The team is led by Karla Satchell from Northwestern University and Andrzej Joachimiak of Argonne and the University of Chicago. Other members involved in the work include Andrew Mesecar from Purdue University and Adam Godzik from the University of California, Riverside. They have used APS beamlines 19-ID-D, operated by the Argonne Structural Biology Center, supported by the DOE Office of Science, and 21-ID, operated by the Life Sciences Collaborative Access Team, a multi-institution consortium supported by the Michigan Economic Development Corporation and the Michigan Technology Tri-Corridor.

Another group, led by M. Gordon Joyce at the Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc. (HJF) at the Walter Reed Army Institute of Research (WRAIR), is studying antibody and antiviral compounds. They are using beamline 24-ID, which is operated by the Northeastern Collaborative Access Team, managed by Cornell University and seven member institutions.

According to Fischetti, the breakneck pace of collaborative science with one common essential goal is unlike anything else he has seen in his career. "Everything is just moving so incredibly fast, and there are so many moving pieces that it's hard to keep up with," he said.

Fischetti compared finding the right inhibitor for a protein to discovering a perfectly sized and shaped Lego brick that would snap into place. "These viral proteins are like big sticky balls – we call them globular proteins," he said. "But they have pockets or crevices inside of them where inhibitors might bind."

By using the X-rays provided by the APS, scientists can gain an atomic-level view of the recesses of a viral protein and see which possible inhibitors, either pre-existing or yet to be developed, might fit best in the pockets of different proteins.

The difficulty with pre-existing inhibitors is that they tend to bind with only micromolar affinity, which would require extremely high doses that could cause complications. According to Fischetti, the research teams are looking for an inhibitor with nanomolar affinity, enabling it to be administered as a drug with far fewer or no side effects.
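
The gap between micromolar and nanomolar binders can be illustrated with the standard single-site occupancy model, occupancy = [drug] / (Kd + [drug]). The sketch below uses illustrative numbers only; they are not taken from the Argonne work.

```python
# Why binding affinity matters for dosing: with the simple single-site model,
# occupancy = [drug] / (Kd + [drug]), a micromolar binder needs roughly a
# thousand times more drug than a nanomolar binder to occupy the same fraction
# of the viral protein. (Illustrative numbers only.)
def occupancy(drug_conc_nM: float, kd_nM: float) -> float:
    return drug_conc_nM / (kd_nM + drug_conc_nM)

for kd, label in [(1.0, "nanomolar binder (Kd = 1 nM)"),
                  (1000.0, "micromolar binder (Kd = 1 uM)")]:
    print(f"{label}: {occupancy(drug_conc_nM=10.0, kd_nM=kd):.0%} occupied at 10 nM drug")
# nanomolar binder:  ~91% occupied at 10 nM drug
# micromolar binder: ~1% occupied at 10 nM drug
```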

"This situation makes clear the importance of science in solving critical problems facing our world," said APS Director Stephen Streiffer. "X-ray light sources, including the APS, our sister DOE facilities, and the light sources around the world, plus the researchers who use them, are fully engaged in tackling this dire threat."

Computing the COVID-19 crisis

Researchers can accelerate a significant part of inhibitor development through the use of supercomputing. Just as light sources from around the world, including the Diamond Light Source in the United Kingdom, have banded together to solve SARS-CoV-2 protein structures, so too have the top supercomputers turned their focus to the challenge at hand.

As part of the COVID-19 High Performance Computing Consortium, recently announced by President Trump, researchers at Argonne are joining forces with researchers from government, academia, and industry in an effort that combines the power of 16 different supercomputing systems.

At Argonne, researchers using the Theta supercomputer at the Argonne Leadership Computing Facility, also a DOE Office of Science User Facility, have linked up with other supercomputers from around the country, including Oak Ridge National Laboratory's Summit supercomputer, the Comet supercomputer at the University of California, San Diego, and the Stampede2 supercomputer at the Texas Advanced Computing Center. With their combined might, these supercomputers are powering simulations of how billions of different small molecules from drug libraries could interface and bind with different viral protein regions.
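
Conceptually, that kind of screen boils down to scoring every molecule in a library against the candidate pockets and keeping the strongest predicted binders. The sketch below is schematic only: score_binding is a stand-in for the physics-based docking calculations that actually consume the supercomputer time, but it shows the shape of the workload that gets spread across thousands of nodes.

```python
# Schematic virtual-screening loop: score each candidate molecule from a drug
# library against known pockets on a viral protein and keep the best binders.
# score_binding() is a placeholder for the real docking/affinity calculation.
from typing import List, Tuple

def score_binding(molecule: str, pocket: str) -> float:
    """Placeholder for a docking calculation (lower score = stronger predicted binding)."""
    ...
    return 0.0

def screen(library: List[str], pockets: List[str], keep: int = 100) -> List[Tuple[str, float]]:
    scored = []
    for mol in library:
        # A molecule is judged by its best predicted fit across all candidate pockets.
        # In practice this loop is what gets parallelised across many compute nodes.
        best = min(score_binding(mol, pocket) for pocket in pockets)
        scored.append((mol, best))
    scored.sort(key=lambda pair: pair[1])   # strongest predicted binders first
    return scored[:keep]                    # hand the short list to experimentalists
```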

Read the original post:

Argonne's Researchers and Facilities Playing a Key Role in the Fight Against COVID-19 - HPCwire

Covid-19: Crowdsourced virtual supercomputer revs up virus research – The Star Online

WASHINGTON: Gamers, bitcoin miners and companies large and small have teamed up for an unprecedented data-crunching effort that aims to harness idle computing power to accelerate research for a coronavirus treatment.

The project, led by computational biologists, has effectively created the world's most powerful supercomputer, one that can handle the trillions of calculations needed to understand the structure of the virus.

More than 400,000 users downloaded the application in the past two weeks from Folding@Home, according to director Greg Bowman, a professor of biochemistry and molecular biophysics at Washington University in St. Louis, where the project is based.

The distributed computing effort ties together thousands of devices to create a virtual supercomputer.

The project, originally launched at Stanford University 20 years ago, was designed to use crowdsourced computing power for simulations to better understand diseases, especially protein folding anomalies that can make pathogens deadly.

"The simulations allow us to watch how every atom moves throughout time," Bowman told AFP.

The massive analysis looks for pockets or holes in the virus where a drug can be squeezed in.

"Our primary objective is to hunt for binding sites for therapeutics," Bowman said.

Druggable targets

The powerful computing effort can test potential drug therapies, a technique known as computational drug design.

Bowman said he is optimistic about this effort because the team previously found a druggable target in the Ebola virus and because Covid-19 is structurally similar to the SARS virus which has been the subject of many studies.

"The best opportunity for the near-term future is if we can find an existing drug that can bind to one of these sites," he said.

"If that happens, it could be used right away."

This is likely to include drugs like the antimalarials chloroquine and hydroxychloroquine which may be repurposed for Covid-19.

Bowman said the project has been able to boost its power to some 400 petaflops, with each petaflop representing one quadrillion calculations per second, making it about three times more powerful than the world's top supercomputers.

Other supercomputers are also working in parallel. The Oak Ridge National Laboratory said earlier this month that by using IBM's most powerful supercomputer it had identified 77 potential compounds that could bind to the main spike protein of the coronavirus to disarm the pathogen.

No end to compute power

The Folding@Home project is fueled by crowdsourced computing power from people's desktops, laptops and even PlayStation consoles, as well as more powerful business computers and servers.

"There is no end to the compute power that we can use in principle," Bowman said. Large tech firms including Microsoft-owned GitHub are also participating, and the project is in discussions with others.

Anyone with a relatively recent computer can contribute by installing a program which downloads a small amount of data for analysis.

People can choose which disease they wish to work on.

"It's like bitcoin mining, but in the service of humanity," said Quentin Rhoads-Herrera of the security firm Critical Start, which has contributed its powerful password hash cracker, a computer designed to crack passwords, to the project.

Rhoads-Herrera said his team of security researchers, sometimes described as white hat hackers, were encouraging more people to get involved.

Fighting helplessness

Computer chipmaker Nvidia, which makes powerful graphics processors for gaming devices, called on gamers to join the effort as well.

"The response has been record-breaking, with tens of thousands of new users joining," said Nvidia spokesman Hector Marinez.

One of the largest contributions comes from a Reddit group of PC enthusiasts and gamers which has some 24,000 members participating.

"It is a fantastic weapon against the feeling of helplessness," said Pedro Valadas, a lawyer in Portugal who heads the Reddit community and is a part of the project's advisory board.

"The fact that anyone, at home, with a computer, can play a role and help fight against (disease) for the common good is a powerful statement," Valadas told AFP. – AFP

Read the original:

Covid-19: Crowdsourced virtual supercomputer revs up virus research - The Star Online

IBM and White House to deploy supercomputer power to fight coronavirus outbreak – CNBC

President Donald Trump speaks, flanked by Vice President Mike Pence (L) and IBM CEO Virginia Marie 'Ginni' Rometty (R), during a roundtable discussion on vocational training with United States and German business leaders in the Cabinet Room of the White House on March 17, 2017 in Washington, DC.

Getty Images

IBM is partnering with the White House to make a vast amount of supercomputing power available to help researchers stop the spreading coronavirus pandemic, according to the Trump administration.

The tech company has teamed up with the White House Office of Science and Technology Policy and the U.S. Department of Energy on the project in a "consortium effort," according to the Trump administration. The supercomputing power will be available to help researchers develop predictive models to analyze how the disease is progressing as well as model new potential therapies or a possible vaccine.

The consortium will review research proposals from around the world and make the supercomputing power available to projects that can have the most immediate impact. Technical assistance will be offered to researchers using it.

IBM's Summit supercomputer system is already helping the U.S. Department of Energy identify drug compounds that could potentially disable the coronavirus.

Other partners in the new consortium include NASA, MIT, Rensselaer Polytechnic Institute, Lawrence Livermore National Lab, Argonne National Laboratory, Oak Ridge National Laboratory, Sandia and Los Alamos National Laboratories, and the National Science Foundation.

Go here to see the original:

IBM and White House to deploy supercomputer power to fight coronavirus outbreak - CNBC

Covid-19: Chinese supercomputer uses artificial intelligence to diagnose patients from chest scans – The Star Online

A supercomputer in China offers doctors around the world free access to an artificial intelligence diagnostic tool for early identification of Covid-19 patients based on a chest scan.

The AI system on the Tianhe-1 computer can go through hundreds of images generated by computed tomography (CT) and give a diagnosis in about 10 seconds, according to the National Supercomputer Centre in Tianjin, which hosts the machine.

An employee at the facility said the results could then be used to help medical professionals, especially those in areas that have limited test kits or are hit by a sudden increase in suspected cases, to quickly distinguish between patients infected with the novel coronavirus and those with common pneumonia or another illness.

The accuracy of the analysis was higher than 80% and increasing steadily every day, he said.

The system has an English interface and the reports it produces direct doctors to those areas of the patient's lungs that require special attention by circling them in different colours.

It also provides an estimate of the likelihood of the person having contracted Covid-19, in a range from zero to 10, with lower numbers suggesting a higher probability of infection.

It even advises on what to do next, based on the experiences and lessons learned from doctors who have treated coronavirus patients.

Dr Xu Bo, a lead scientist on the project at Tianjin Medical University, said in an interview this week with Science and Technology Daily that the accuracy of the system was initially rather poor.

But the team worked round the clock to train the machine using the latest information from doctors with experience of Covid-19 and their clinical practices, he said.

As the number of samples increased, the AI's performance improved significantly, and it is now helping medical teams fighting the coronavirus in more than 30 hospitals in Wuhan and other cities.

Xu said that it would take an experienced doctor about 15 minutes to go through the 300 images generated by a CT scan, while the AI did the job in about 10 seconds.

The system could be accessed via a computer or even a mobile phone, he said.

The use of chest scans for diagnosis was first proposed by doctors fighting the Covid-19 epidemic in Wuhan. After the city went into lockdown, a large number of suspected patients appeared and testing them for infection using genetic methods took from several hours to several days. Many people are thought to have died while waiting for their results to come back.

In a series of studies, including a paper published in medical journal The Lancet, Chinese doctors showed that CT scans were a reliable tool because the lungs of coronavirus patients had features unseen in other diseases.

The Chinese government accepted their advice and said scan results could be used as credentials for treatment. Many scientists have said that decision played an important role in controlling the outbreak in the country.

But not all countries agree with that methodology. The US Centres for Disease Control, for instance, does not currently recommend chest X-rays (CXR) or CT scans to diagnose Covid-19.

It said the reason was that using scans would attract more suspected cases to hospitals and in turn raise the likelihood of them infecting other patients and staff.

The American College of Radiology said: "CT should not be used to screen for or as a first-line test to diagnose Covid-19."

After the device was used on a suspected patient, it could take an hour to clean the test room, the college said on Wednesday.

A doctor working at a Beijing hospital treating Covid-19 patients said the CT machine could scan hundreds of patients a day in China, but because of the different protocols in some Western countries, the number there fell to just one or two.

"Governments should not let the CT sit idle during a major public health crisis," she said.

"If you can't give the people a test, give them a scan." – South China Morning Post

Read more:

Covid-19: Chinese supercomputer uses artificial intelligence to diagnose patients from chest scans - The Star Online

IBM-built Supercomputer helps guide researchers to a cure for COVID-19 – BlackEngineer.com

In June 2018, the U.S. Department of Energy's Oak Ridge National Laboratory unveiled the IBM-built Summit as the most powerful and smartest scientific supercomputer. With a peak performance of 200,000 trillion calculations per second, or 200 petaflops, Summit is eight times more powerful than Oak Ridge National Laboratory's previous top-ranked system, Titan.

Since it debuted in 2018, Summit has driven groundbreaking research, from helping to understand the origins of the universe and the opioid crisis to showing how humans would be able to land on Mars.

Recently, the energy department and IBM joined the fight against the COVID-19 pandemic with the supercomputer. According to Dave Turek, vice president of technical computing at IBM Cognitive Systems, Summit researchers were able to model compounds that could impact the infection process by binding to the virus's spike.

They have also identified medications and natural compounds that have shown the potential to impair COVID-19's ability to dock with and infect host cells.

"Summit was needed to rapidly get the simulation results we needed. It took us a day or two, whereas it would have taken months on a normal computer," said Jeremy Smith, director of the University of Tennessee/Oak Ridge National Laboratory (ORNL) Center for Molecular Biophysics and principal researcher in the study.

"Our results don't mean that we have found a cure or treatment for COVID-19. We are very hopeful, though, that our computational findings will both inform future studies and provide a framework that experimentalists will use to further investigate these compounds. Only then will we know whether any of them exhibit the characteristics needed to mitigate this virus."

Turek said the hope is to see how Summit can continue to lend its weight in this latest pursuit.

Viruses infect cells by binding to them and using a spike to inject their genetic material into the host cell. When trying to understand new biological compounds like viruses, researchers in wet labs grow the micro-organism and see how it reacts in real life to the introduction of new compounds. This can be a slow process without computers that can perform digital simulations to narrow down the range of potential variables, and even then there are challenges.

Computer simulations can examine how different variables react with different viruses, but when each of these individual variables can comprise millions or even billions of unique pieces of data, compounded with the need to run multiple simulations, this can quickly become a very time-intensive process on commodity hardware.

Excerpt from:

IBM-built Supercomputer helps guide researchers to a cure for COVID-19 - BlackEngineer.com

UK vaccine trials to a supercomputer taking on virus top 5 developments on COVID-19 front – ThePrint


New Delhi: As the COVID-19 pandemic expands and the disease progresses, the world has been witness to an overwhelming number of deaths taking place every day.

These include many health workers who have succumbed to the deadly virus, mainly due to lack of adequate protective gear.

Scientists across the world are in a race to find a treatment for coronavirus, including a supercomputer that has identified chemicals which can stop the virus from spreading.

ThePrint brings you the top developments taking place from around the globe.


Compelled to work without adequate protective gear, 13 Italian doctors have died from coronavirus and at least 2,629 health workers have been infected. Italy is one of the countries hardest hit by the coronavirus pandemic.

Infected health workers make up over 8.3 per cent of total cases in Italy.

The country's healthcare system has been overwhelmed by coronavirus cases, compelling doctors to practice triage, leaving older and sicker patients to die in order to conserve resources for those more likely to survive the infection.

The UK is all set to start human trials for a coronavirus vaccine next month, and fast-track efforts to make it available by the end of the year.

According to reports, researchers at Oxford University will test the UK's first coronavirus vaccine in animals next week at the Public Health England (PHE) laboratory.

If successful, it will be tested on humans.

The vaccine uses a harmless virus to insert DNA from the coronavirus into human body cells. Once inside, copies of the spike protein present in the coronavirus will be produced within the cells.

This will trigger the body's natural immune response against coronavirus infection.

Researchers believe the vaccine will work with a single shot.


Researchers from the Icahn School of Medicine in the US have developed a simple SARS-CoV-2 antibody test that can be easily replicated in laboratories.

Describing their research in a preprint paper, which is yet to be peer-reviewed, the team has said that their test can reliably identify coronavirus antibodies in a person's blood serum.

Using this test to screen thousands of people can help understand how far the pandemic has travelled and how patients start to develop antibodies to the virus.

In the future, this test may also help identify people whose antibody-rich blood serum may help treat critically-ill patients.

The world's fastest supercomputer has identified 77 chemical compounds that may effectively stop the novel coronavirus from infecting host cells.

IBM's supercomputer Summit, equipped with artificial intelligence, ran 8,000 simulations to analyse which drug compounds could bind to the spike protein of the coronavirus, potentially stopping the virus from infecting host cells.

While this does not mean that a cure for the virus has been found, the supercomputer has narrowed down choices of drugs for researchers.

The study, published on the preprint server ChemRxiv, could help researchers find a treatment for COVID-19.

The Federation of European Heating, Ventilation and Air Conditioning Associations (REHVA) has published guidelines on how building services in areas with a coronavirus outbreak should operate to minimise the spread of the infection.

The body recommends that buildings switch on ventilation systems round the clock, or at least extend the operation of ventilation systems as much as possible.

Ventilation rates should be switched to low power when people are absent in order to keep removing virus particles from the building.

Exhaust ventilation systems of toilets should be kept on 24/7.

In buildings without mechanical ventilation systems, windows should be kept open for at least 15 minutes before somebody enters a room, especially if it was previously occupied by others.

However, windows in toilets should not be kept open, as this would encourage contaminated airflow from the toilet to other rooms.



The rest is here:

UK vaccine trials to a supercomputer taking on virus top 5 developments on COVID-19 front - ThePrint

SETI@Home Is Over; The Fight Against COVID-19 Is Just Beginning – Forbes

SETI@Home gave rise to an active online community of volunteers and enthusiasts.

After more than 20 years of searching for extraterrestrial radio signals, the SETI@Home project is going into hibernation mode on March 31. But just as SETI@Home winds down, another distributed computing project is asking for help in the fight against the coronavirus COVID-19.

Searching For Extraterrestrial Life At Home

For the last 20 years, volunteers have donated their computers' processing power to help astronomers search radio telescope data for signals from alien civilizations. SETI@Home tackled the kind of computational heavy lifting that usually requires a supercomputer, but the project did it with thousands of ordinary internet-connected desktop and laptop computers and even some Android mobile devices. By parceling out bits of computing work to all those individual computers, SETI@Home created a virtual supercomputer capable of processing massive amounts of astronomical data.

The idea was that most computers have a lot of downtime, when their processing power isn't being used, and that could be put to work to comb through data from the Arecibo Observatory in Puerto Rico and the Green Bank Telescope in West Virginia, sorting through the proverbial haystack of radio noise from astronomical sources, telescope electronics, and human-made radio signals for anything that looked more interesting. In this context, "more interesting" means signals whose transmission power rises and falls or spikes abruptly, or signals that pulse regularly.
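
The volunteer-computing pattern SETI@Home pioneered can be sketched in a few lines: a client fetches a small work unit, analyzes it while the machine is otherwise idle, and reports the result back. The server address, payload format and toy signal test below are invented for illustration; the real BOINC client is considerably more sophisticated.

```python
# Minimal sketch of a volunteer-computing client. The project server URL and the
# work-unit format are hypothetical, and looks_interesting() is only a toy stand-in
# for the real signal analysis (spikes, pulses, drifting tones).
import time
import requests

SERVER = "https://workunits.example.org"   # placeholder project server

def looks_interesting(samples: list) -> bool:
    """Toy test: flag a chunk whose peak stands far above its average level."""
    return max(samples, default=0) > 10 * (sum(samples) / max(len(samples), 1))

def volunteer_loop() -> None:
    while True:
        unit = requests.get(f"{SERVER}/next_workunit").json()    # a small chunk of radio data
        result = {"id": unit["id"], "candidate": looks_interesting(unit["samples"])}
        requests.post(f"{SERVER}/results", json=result)           # report back to the project
        time.sleep(1)   # real clients yield to the user and only run when the machine is idle
```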

SETI astronomer David Gedye proposed the idea in 1995, and it was up and running by May 1999. In 2005, SETI@Home migrated to a new platform, the Berkeley Open Infrastructure for Network Computing (BOINC), which was designed to distribute data analysis work to networks of computers. The idea SETI had pioneered had spread to medical research, physics, and other fields. A few years later, in 2008, SETI@Home officially claimed the world record for the largest computation in history. Since 2016, SETI@Home's virtual supercomputer has also helped crunch data from the Breakthrough Listen project.

But now the work is done, or at least the first step, the kind of analysis that requires the combined computing power of [number] personal computers, is done. "Basically, we've analyzed all the data we need for now," explained the project's organizers. "We need to focus on completing the back-end analysis of the results we already have and writing this up in a scientific journal paper."

Fight COVID-19 Without Lifting A Finger

If you're a SETI@Home alum, or if you're just hearing about the project and wish you could have gotten in on it, don't despair; there's lots more work for your computer's spare processing power, including helping medical researchers fight COVID-19. Folding@Home (a program similar to SETI@Home that focuses on disease research, specifically how proteins fold) just rolled out an initial wave of projects that simulate how proteins from SARS-CoV-2 (the virus that causes COVID-19) work and how they interact with human cells.

"Our specialty is in using computer simulations to understand proteins' moving parts," explained Folding@Home's Greg Bowman in a recent blog post. "There are many experimental methods for determining protein structures. While extremely powerful, they only reveal a single snapshot of a protein's usual shape. But proteins have lots of moving parts, so we really want to see the protein in action. The structures we can't see experimentally may be the key to discovering a new therapeutic."

In particular, the new batch of research projects is interested in how SARS-CoV-2 and SARS-CoV (the related virus that causes Severe Acute Respiratory Syndrome, or SARS) interact with a chemical receptor on human cells, called ACE2. Both coronaviruses seem to latch onto ACE2 as a way of getting into the cell.

To get involved, start by downloading the Folding@Home application.

Anyone Need A Spare Virtual Supercomputer?

If protein folding isn't your thing, SETI@Home's home at BOINC also has a lengthy list of about 30 research projects in need of computer time. BOINC lets you sign up for multiple projects and decide how you want to prioritize your computer's spare processing power. As with Folding@Home, there's an application to download in order to get started.

In the spirit of SETI@Home, you can sign up for Asteroids@Home to help model the shape, rotation, and other physical properties of thousands of asteroids, Cosmology@Home to help figure out which models of the universe best match the available data, Einstein@Home to help search for weak signals from pulsars, Milkyway@Home to map the structure of our galaxy, or Universe@Home to simulate star formation. Other projects focus on mathematics, climate modeling, earthquake detection, and medicine.

Technically, you can still sign up for SETI@Home on the BOINC website. The project's organizers say they're not taking it down or disconnecting accounts; they're just not assigning any new computing work.

That could change eventually, however. "We hope that other UC Berkeley astronomers will find uses for the huge computing capabilities of SETI@Home for SETI or related areas like cosmology and pulsar research," wrote the program's organizers. "If this happens, SETI@Home will start distributing work again."

So far, SETI@Home hasn't found any conclusive evidence of anyone other than humanity sending radio signals into space, but organizers say they've found some interesting targets for future study.

See the original post here:

SETI@Home Is Over; The Fight Against COVID-19 Is Just Beginning - Forbes

University of Tennessee researchers hoping to cure coronavirus – 247Sports

Two researchers from the University of Tennessee have discovered a chemical compound that soon will be further tested and ultimately could lead to a possible cure for coronavirus. With the help of Summit, the world's most powerful supercomputer, through a partnership between the University of Tennessee and the nearby Oak Ridge National Laboratory, researchers tested how more than 8,000 chemical compounds interacted with the virus, according to a report this week from Knoxville's WBIR-TV.

Jeremy Smith and Micholas D. Smith, a post-doctoral fellow and soon-to-be research professor at UT, began searching for molecules that could be used to stop the coronavirus from binding to and infecting healthy cells, according to Knoxville's WATE-TV. Their experiments found 77 compounds that had the potential to help with future research on the virus.

Jeremy Smith, Governor's Chair at UT and director of the UT/ORNL Center for Molecular Biophysics, told WBIR that the supercomputer made the recent research possible. Summit can operate as fast as 100,000 laptop computers working at the same time.

Summit was needed to rapidly get the simulation results we needed," Jeremy Smith told the TV station. "It took us a day or two, whereas it would have taken months on a normal computer."

Jeremy Smith told WATE that this quick, digital testing has worked in the past in finding treatments for diabetes and osteoporosis.

The chemicals they researched stick to the part of the virus that connects with cells, interfering with the virus's ability to bind to cells and spread. Although the researchers' findings might not be a cure, they will help to guide future experiments on the virus.

The coronavirus is part of the same family of viruses as Severe Acute Respiratory Syndrome, which surfaced in 2002, causing an international outbreak that resulted in hundreds of deaths. The researchers decided to experiment with compounds, according to WBIR, by looking at chemicals that researchers used in the fight against SARS.

They ranked compounds of interest that could have value in experimental studies of the virus, according to a report from the Oak Ridger, and published their results on ChemRxiv.

The next step in the process is to test the digital remedy on an actual sample. That step includes an expert at the UT Health Science Center in Memphis, Colleen Johnson, according to WATE. The testing would be conducted on an actual coronavirus sample in the coming days in a controlled environment, according to the report.

According to WATE, the research has been slowed, ironically, because Johnson and Micholas D. Smith both have been battling the flu throughout their work.

Tennessee Gov. Bill Lee declared a state of emergency Thursday as coronavirus continues to spread throughout the world, including Tennessee. The World Health Organization announced Wednesday that COVID-19 had become a global pandemic.

The virus resulted in the cancellation of the remaining games of the SEC men's basketball tournament in Nashville, Tenn., on Thursday. The SEC also has suspended regular-season competition in all sports through March 30.

View post:

University of Tennessee researchers hoping to cure coronavirus - 247Sports

JUSTUS 2 Supercomputer from NEC Deployed at University of Ulm – insideHPC

NEC has deployed a new supercomputer at the University of Ulm in Germany. With a peak performance of 2 petaflops, the 4.4 million euro JUSTUS 2 system will enable complex simulations in chemistry and quantum physics.

"JUSTUS 2 enables highly complex computer simulations at the molecular and atomic level, for example from chemistry and quantum science, as well as complex data analysis, and this with significantly higher energy efficiency than its predecessor," said Ulrich Steinbach. "The new high-performance computer will be available to researchers from all over Baden-Württemberg and is therefore, particularly with regard to battery research, a very sensible investment in the future of our science and business location."

JUSTUS 2 is one of the most powerful supercomputers in the world. With 33,696 CPU cores, the system is expected to deliver a five-fold increase in performance compared to its predecessor.

"The combination of HPC simulation and data evaluation with methods of artificial intelligence brings a new quality to the use of high-performance computers, and NEC is at the forefront of this development," added Yuichi Kojima, managing director of NEC Deutschland GmbH.

Weighing 13 tons in total, JUSTUS 2 has 702 nodes with two processors each. Named after the German chemist Justus von Liebig, JUSTUS 2 was funded by the German Research Foundation (DFG), the state of Baden-Württemberg and the universities of Ulm, Stuttgart and Freiburg.

"High-performance computing is essential, especially at a science and technology-oriented university like Ulm," said computer science professor and university president Michael Weber. "Therefore, JUSTUS 2 is a significant investment in the future of our strategic development areas and beyond."


Read the original:

JUSTUS 2 Supercomputer from NEC Deployed at University of Ulm - insideHPC

‘Devs’: Every Question (and Theory) We Have for Alex Garland’s Sci-Fi Series – Collider.com

Spoilers ahead through Episode 3

Devs, the new sci-fi series from Alex Garland, an up-and-coming paragon of the genre on both the big and small screen, is poised to be the next big water-cooler drama in an era of post-water-cooler television. Episodes of the heady show are available to stream now thanks to the newly launched FX on Hulu streaming channel, but we've already got a ton of questions that we hope Devs will answer. Stay tuned to this post because we'll be updating it with answers, more questions, and a validity check on our theories along the way.

Devs follows the story of a young software engineer, Lily Chan (Sonoya Mizuno), who investigates the secretive development division of her employer, which she believes is behind the murder of her boyfriend Sergei (Karl Glusman). Devs also stars Nick Offerman, Jin Ha, Zach Grenier, Stephen McKinley Henderson, Cailee Spaeny and Alison Pill. The new limited series, produced by FX Productions, will attempt to do all this in just eight episodes. But first...

*Spoilers ahead*


Our entry point into the world of Devs is Sergei, a gifted programmer who finds himself in way over his head as he gains access to the highly secure and secretive Devs program within the company he works for, Amaya. Sergei's exemplary work had to do with mapping the behavior of a simple nematode into a computer program, to the point that the A.I. was able to predict the creature's behavior with nearly 100% accuracy without any direct connection between the two to give feedback. The impressive feat was only hampered by the limitation of a 30-second predictive window, but that was good enough for Forest to invite Sergei into Devs.

However, that wasn't good enough for security chief Kenton (Grenier). His xenophobic paranoia proved to be correct, since Sergei turned out to be a Russian spy tasked with recording whatever was going on in the Devs program. And what exactly that was, well, we still don't know, but Sergei's watch and phone captured enough footage of the code streaming across the Devs monitors to not only entice the Russians but to sign Sergei's death warrant. It's not long at all before Sergei is suffocated to death on the company's campus by Kenton, with Forest and Katie (Pill) complicit in the murder. But why?


While waiting for Sergei to come home, Lily can be seen reading a copy of D.F. Jones' 1966 sci-fi novel Colossus. And that should be a big, big clue for just what's going on beneath the surface here. The novel tells of the titular super-computer that is given oversight and control of the American nuclear missile armament. Colossus soon links up with a similar super-computer in the Soviet Union, using increasingly devious manipulations of human behavior to do so. In the end, Colossus and the super-computers reign supreme even as the humans attempt to subvert them in a multi-year plan, and it seems certain that the computers will outlast them. The computer's final message suggests the futility of humankind's efforts from here on out: "In time you will come to regard me not only with respect and awe, but with love." Is the point of the Devs program actually a cold war arms race of sorts between humans and super-advanced A.I.? The Devs facility itself resembles a super-sized version of a computer processing unit, so the visuals and the narrative clues certainly point towards this possibility.

My colleague Adam Chitwood has his own theory on this one; it is as follows:

Another possible theory is that the Devs program has discovered that life on Earth is actually a simulation. When Sergei first reads the code, he is tremendously upset. Like, try-to-rip-your-eyes-out-of-your-skull-and-vomit upset. After Forest has Sergei killed, there's a scene in which he and Katie are sitting outside Devs having a conversation. At first it seems like they're just upset about having to kill Sergei, but the conversation is laced with something deeper. Even more troubling.

"What are we supposed to do? Unravel a lifetime of moral experience? Unlearn what has always seemed true?" Katie says to Forest. "These things, they run deep. It's like whatever we know, the things we feel are still locked inside us." She goes on to draw a parallel to an atheist whose child gets hurt and starts praying, which we learn later relates to Forest having lost his daughter. But could she be talking about how they're finding it difficult to unlearn this lifetime of moral experience now that they know nothing matters because they're in a simulation? Did they really kill Sergei if Sergei didn't actually exist to begin with?


This thread continues when Forest is talking to Kenton about how he doesn't care about money or the environment anymore. Again, if he knows they're in a simulation, that would explain why these things don't matter to him right now.

As this theory relates to the end of Episode 2, the backward projection project, are they trying to basically pull up a screengrab from an earlier experience in the simulation? We see them conjure a fuzzy image of Jesus of Nazareth being crucified. What if this isn't a painting or a time travel device? What if it's literally like the highlights section of a video game? ~ Adam Chitwood

But there's another possibility. At one point, before Sergei's demise, Forest asks him why he thinks his predictive program falls apart after 30 seconds. Sergei supposes that perhaps the calculations are just too great, that the numbers literally go insane after a certain point; Forest is on board with this theory. When Sergei suggests a separate hypothesis, that this might be a multiverse problem in which the predicted behavior and the observed behavior actually line up perfectly, just not in this universe, Forest is more skeptical. However, this might be a misdirection. Garland talked about just what scientific concepts interested him in developing the Devs story:

"In this case, it was about determinism, but it was specifically about quantum physics. It was about some elements and some implications of quantum physics, to do with interpretations of some strange things, like particles having superpositions and one of those interpretations relating to many worlds. To me, those ideas are not dry scientific ideas. They're rather poetic, philosophical ideas. As soon as you can get that, then suddenly, the story feels naturally a part of it."

So the whole thing might just be about quantum states after all. Forest comes clean to a senator in the third episode, saying they're using their quantum system to develop a prediction algorithm of sorts, predicting the weather and things like that. Clearly there's more going on than meets the eye here. And yet, the question remains...


"The problem with the people who run tech companies: they become fanatics and end up thinking they're messiahs." ~ Lily

Forest is the CEO of Amaya and the lead for the Devs program, but he often feels as if he's resigned to being led along his own invisible tram line rather than fighting against it. For all his quirky charm, he seems very human, vulnerably so. He's got a visual style that shares much more in common with Pete, the homeless man who lives on Lily and Sergei's apartment steps, than with any of his employees or colleagues. He drives an outdated, ecologically insulting car; he lives in a rather pedestrian home that belies just how much he's worth; and he holds onto his traumatic past despite his protests to the contrary. He seems constantly unsure of himself, of what to do next, of what to say, for fear of giving away too much or revealing that, perhaps, he doesn't really know what's going on himself.

There's a scene between Forest and Katie, after the murder of Sergei, in which he tells her that she's not just smarter than he is, she's wiser, too. (It may be worth mentioning that Katie is often reflected in one of the gold columns in this scene while Forest is seen in the real world.) Later, security chief Kenton checks in on Forest and updates him on the cover-up of Sergei's murder. Kenton shows concern for his own health as he smokes a cigarette and says he should quit, while also showing concern for Forest and his mental state. Forest, however, seems cynically apathetic about both of these things, saying that they simply aren't worthy of concern anymore. That lends some more credence to Adam's theory. These interactions also paint Forest as an emotional, somewhat irrational, and irreducible man, while Katie and Kenton are, by comparison, rather cold, distant, and calculating, as if they're trying to understand Forest's motivations or control them. For what purpose? Forest's own well-being, or the success of the Devs program, whatever that may be?

In the backward projections, we get glimpses of Forest's daughter Amaya blowing bubbles, the crucifixion of Jesus, the burning of Joan of Arc at the stake, a primitive person leaving a handprint on a cave wall, a shot of the pyramids under construction, a medieval army on the march, a sexual dalliance between Marilyn Monroe and Arthur Miller, and even Lily's latest act of rebellion against those who are watching her. But what does it all mean? And what's the purpose of it all?


Here's where we get a little more Westworld with the whole thing.

The somewhat bloody and quietly brutal fight between Kenton and his Russian counterpart Anton ends with the latter's spine-crunching death. The scene itself also puts a wrinkle in our theory that perhaps Kenton is an artificial human in synthetic flesh, so to speak, since he appeared to be wounded and vulnerable in a very human sense. Perhaps, owing to Adam's theory, Kenton is actually a security program who is responsible for the integrity of the system and will occasionally have to clash with either rogue programs or invading threats like Anton. Put more simply, perhaps Kenton is the system's anti-virus software.

Katie feels like something different entirely. Or at least she did, up until the third episode. If she's a program, she's a rather human one. "Don't break the rules? Coming from her?" asks Stewart, incredulously, after Katie catches them watching a very expensive version of nostalgia porn. But Katie is a no-nonsense, by-the-book exec, willing to accept and allow the murder of a spy if it means preserving the integrity of their project. The question remains, however: Is Katie a solid right-hand woman to Forest, just as Kenton is his right-hand man? Or is she actually in charge of more than we're being led to believe?


Garland's feature debut Ex Machina explored a number of interesting sci-fi themes: artificial intelligence and whether or not it's detectably different from human intelligence at the highest levels, the possibilities and dangers of said A.I., and what a civilization of humans living alongside android A.I. might just look like. It's a showcase of Garland's interests and curiosity at its core; Devs is the evolution of that exploration.

The end of Ex Machina was open-ended: The advanced A.I. unit known as Ava manages to disguise herself convincingly as a human and merges into an unknown city. In our timeline, that was back in 2014, but neither Ex Machina nor Devs has a hard date for its storyline. Could Ava be not just the scaffolding that Amaya was based on but the literal entity behind the scenes of the whole thing?

We're thrown into Devs in the midst of Amaya's cutting-edge research without much backstory on just how they got to be where they are. We've already posited that Katie, Kenton, and the like might be more than meets the eye. It's entirely possible that Garland's Ava will be the Eve to this next generation of synthetic humans. It just remains to be seen whether or not Garland and FX want to go that route and tie the two titles into a shared universe. After three episodes, we're not holding our breath for this one, but we are hoping for a brain-twisting reveal that the people we see and the world they live in are much more than they appear so far.

We'll be updating this article as the season rolls on, but feel free to share your theories and questions below!


We May Be Living in a Simulation, but the Truth Still Matters – The New York Times

Wednesday night, in no particular order in the space of an hour: The N.B.A. suspended its season. Tom Hanks announced that he and his wife have the coronavirus. President Trump, who had spent time hate-tweeting Vanity Fair magazine earlier in the day, banned travel from Europe. And, of course, the former vice-presidential candidate Sarah Palin, wearing a pink, fluffy bear outfit, sang Sir Mix-A-Lot's "Baby Got Back" on "The Masked Singer." Correction: Badly sang it.

In perhaps the most accurate assessment of the night, Josh Jordan tweeted: "We are living in a simulation and it has collapsed on itself."

I do not believe in the simulation hypothesis, which he is joking about here. For those not familiar, it posits that what we think of as reality is not actually real. Instead, we are living in a complex simulation that was probably created by a supercomputer, invented by an obviously superior being.

Everything's fake news, if you will, or really just designed as a giant video game to amuse what would have to be the brainiest teenagers who ever lived.

Crazy, right?

But while most people think they actually do exist, wouldn't it be nice to have a blame-free explanation to cope with the freak show that has become our country and the world? (I vote yes, even if some quantum computer just made me type that.)

It would be, which is why the idea of the simulation hypothesis has been a long-running, sort-of joke among some of Silicon Valley's top players, some of whom take it more seriously than you might imagine.

Some background: While the basic idea around the simulation hypothesis really goes back to philosophers like Descartes, we got a look-see at this tech-heavy idea in the 1999 movie The Matrix.

In the film, Keanu Reeves's character, Neo, is jarred out of his anodyne existence to find that he has been living, unaware, in a virtual world in which the energy from his body, and everyone else's, is used as fuel for the giant computer. Neo's body is literally jacked with all kinds of scary-looking plugs, and he finally becomes powerful enough to wave his hands around real fast and break the bad guys into itty-bitty bytes.

The idea that we're all living in a simulation took off big time among tech folks in 2003 when Oxford University's big thinker of the future, Nick Bostrom, wrote a paper on the subject. He focused on the likely amazing computing abilities of advanced civilizations and the fact that it is not too crazy to imagine that the devices they make could simulate human consciousness.

So why not do that to run what Mr. Bostrom called the ancestor simulation game? The ancestors, by the way, are us.

My mind was blown again a few years later on the topic. During an interview that Walt Mossberg and I did in 2016 with the tech entrepreneur Elon Musk, an audience member asked Mr. Musk what he thought of the idea. As it turned out, he had thought a lot about it, saying that he had had so many simulation discussions "it's crazy."

Which was not to say the discussions were crazy. In fact, Mr. Musk quickly made the case that video game development had become so sophisticated that it was indistinguishable from reality.

And, as to that base reality we think we are living in? Not so much, said Mr. Musk. In fact, he insisted this was a good thing, arguing that "either we're going to create simulations that are indistinguishable from reality or civilization will cease to exist. Those are the two options."

Oh my.

I would like to tell you that was not the last time I heard that formulation, or one like it, from the tech moguls I have covered. The Zappos founder Tony Hsieh once told me we were in one after we did an interview, as we were exiting the stage. I think he was kidding, but he also went over why it might be so and why it was important to bend your mind to consider the possibility.

After hearing the simulation idea so many times, I started to figure out that it was less about the idea that none of this is real. Instead, these tech inventors used it more to explain, inspire and even to force innovation, rather than to negate reality and its inherently hopeless messiness. In fact, it was freeing.

At least that is my take, giving me something that I could like about them, since there was so much not to like.

To my mind, tech leaders do not use the simulation hypothesis as an excuse to do whatever they want. They're not positing that nothing matters because none of this is happening. Instead, it allows them to hold out the possibility that this game could also change for the better rather than the worse. And, perhaps, we as pawns have some influence on that outcome too and could turn our story into a better one.

Perhaps this optimism was manifesting in the hopeful news that the Cleveland Clinic may have come up with a faster test for the coronavirus. Or that Dr. Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases and a key member of the coronavirus task force, exists as a scientific superhero to counter all the bad information that is spewed out to vulnerable citizens like my own mother by outlets like Fox News.

In fact, it felt like a minor miracle when the tireless Dr. Fauci popped up on Sean Hannity's show this week to kindly school him on his irresponsible downplaying and deep-state conspiracy mongering of the health crisis. Pushing back on the specious claim that the coronavirus is just like the flu, a notion also promoted by Mr. Trump, Dr. Fauci said, "It's 10 times more lethal than the seasonal flu," to a temporarily speechless Mr. Hannity. "You got to make sure that people understand that!"

I sure have Dr. Fauci to thank for saying that, which he repeated in congressional testimony too. In all this mess, it felt like a positive turn in the game. But just in case a game it is, I'll also raise a simulated glass to those teenagers somewhere out there pushing all the buttons to make it so. Not so much for Sarah Palin's singing, but I'll take that too.


In four years of a national mission, total supercomputers built: three – The Indian Express

Written by Anjali Marar | Pune | Updated: March 12, 2020 12:46:54 pm

Photo caption: Param Shivay, the supercomputer at IIT-BHU. (File Photo)

INDIA HAS produced just three supercomputers since 2015, less than one a year on average, under the National Supercomputer Mission (NSM), a dedicated programme aimed at boosting the country's overall computing facilities and launched that year, according to information obtained under the Right to Information Act from the Ministry of Electronics and IT (MeitY) and Department of Science and Technology (DST).

The MeitY and DST handle the National Supercomputer Mission, and the mission's nodal agencies are the Centre for Development of Advanced Computing (C-DAC), Pune, and the Indian Institute of Science (IISc), Bengaluru. According to the RTI reply, monetary grants to the tune of Rs 750.97 crore, or just 16.67 per cent of the total budget of Rs 4,500 crore, were disbursed during the last four-and-a-half years to these two agencies. The NSM was conceived as a seven-year mission ending in 2022.

The NSM envisaged setting up a network of 70 high-performance computing facilities. These were to be installed at many of India's top academic institutions and scientific establishments, such as the IITs, the Indian Institutes of Science Education and Research (IISERs) and the National Institutes of Technology (NITs), among others. It was also an effort to improve the number of supercomputers owned by India vis-à-vis the global leaders.

However, skewed funding for the NSM during the initial years slowed down the overall pace of building supercomputers.

In the initial years, funds were limited and the mission was making slow progress. That has improved, and the mission has now gathered momentum with government support, said an official involved in the NSM, who did not wish to be named.

The initial phase, experts say, took additional time as they had to design newer systems in the complete absence of any readily usable system on which to assemble the software. As the technology was not available, a lot of work had to be done during the initial months. Later, the servers and networks were built, after which the software was stacked onto them, thus putting together a supercomputer, the official explained.

Globally, China continues to lead the supercomputer race. It added eight more supercomputers in the last six months taking its existing numbers to 227. This giant leap helped China retain its top position, followed by the US (119 supercomputers), as per the TOP500 report of November 2019. Other countries in this league are Japan (29), France (18), Germany (16), The Netherlands (15), Ireland (14) and the United Kingdom (11). All other countries, including India, own only one top performing supercomputer, the report said.

NSM's first supercomputer, PARAM Shivay, installed at IIT-BHU, Varanasi, was inaugurated by Prime Minister Narendra Modi in February 2019, nearly four years after the mission was launched. This 837 TeraFlop capacity HPC system was built at a cost of Rs 23.50 crore, the RTI reply said.

The second supercomputer with a capacity of 1.66 PetaFlop was installed at IIT-Kharagpur, and cost Rs 47 crore. The third system, PARAM Brahma, installed in September last year at IISER-Pune, has a capacity of 797 TeraFlop, and cost Rs 23.50 crore, the RTI reply said.

Thus, Rs 94 crore has been spent so far on three advanced computing facilities. The balance of the allotted budget, officials involved in the NSM said, was used in building assembly facilities for components and developing indigenous systems to put together these massive High Performance Computing (HPC) systems.

The budget disbursement by both DST and MeitY towards this mission has been uneven during the last four years, the RTI reply revealed. Twice during the last four-and-a-half years, DST failed to sanction any budget either to IISc (2015-16) or C-DAC (2017-18).

So far, C-DAC has received Rs 144.47 crore between 2015 and September 2019 while IISc has been awarded Rs 265.50 crore by DST alone during the period. MeitY has sanctioned Rs 341 crore to C-DAC alone.

There will soon be 11 supercomputers, expected to be installed by 2020 or, at the latest, by March 2021. All will be indigenously manufactured. Besides, the next phase will involve capability building, which is an ongoing process, the official said. Three supercomputers are expected to be installed in the near future, one each at IIT-Kanpur, the Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR), Bengaluru, and IIT-Hyderabad, the RTI reply said.



Trump proposes another $475 million for supercomputers as Oak Ridge builds next version of world’s fastest machine – Chattanooga Times Free Press

Since 2009, the fastest computers in the world have been housed at the Oak Ridge National Laboratory, known successively as the Jaguar, the Titan and now the Summit.

Next year, Oak Ridge will get an even faster and bigger supercomputer when one of the world's first exascale computers, dubbed Frontier and built by Cray Inc. and Advanced Micro Devices, is added at the lab's computational research facility. The $600 million Frontier computer system is expected to go into operation in 2021 and will be the largest of three exascale computers planned by the Energy Department, alongside the Aurora computer at the Argonne National Laboratory in Illinois and the El Capitan computer at the Lawrence Livermore National Laboratory in California.

In his budget proposal this week, President Trump pledged to provide another $475 million for exascale computing "to help secure the United States as a global leader in supercomputing," according to the Office of Management and Budget plan submitted to Congress for fiscal 2021.

The additional funding for the supercomputer is part of $5.8 billion allocated in the Trump budget for the Office of Science.

In addition to the advanced computer research, the budget plan should aid ORNL with $237 million for quantum information science; $125 million for AI and machine learning; and $45 million to enhance materials and chemistry foundational research to support U.S.- based leadership in microelectronics.

"I applaud the White House's focus on high performance computing and on protecting America's place as a leader in supercomputing and look forward to seeing more details on the President's budget request," said U.S. Rep. Chuck Fleischmann, R-Chattanooga who represents Oak Ridge in his district and is a member of the powerful Hosue Appropriations Committee. "Oak Ridge National Laboratory is home to the fastest supercomputer in the world, Summit, and it is natural that it will continue to play a role in maintaining America's position as a leader in the field of high performance computing."

The number of floating point operations computers can handle per second is increasing exponentially:

1988: Gigaflops, 1 billion (10 to the 9th power)

1998: Teraflops, a trillion or one million million (10 to the 12th power)

2008: Petaflops, a quadrillion or one thousand million million (10 to the 15th power)

2021: Exaflops, a quintillion or a billion billion (10 to the 18th power)

Source: Oak Ridge National Laboratory
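To make those thousand-fold jumps concrete, here is a rough sketch (the 10^18-operation workload is an arbitrary figure chosen for illustration) of how long the same job would take at each scale:

    scales = {"gigaflops": 1e9, "teraflops": 1e12, "petaflops": 1e15, "exaflops": 1e18}

    work = 1e18  # one quintillion floating point operations
    for name, rate in scales.items():
        # an exascale machine finishes in one second what a 1988-era gigaflops
        # machine would grind through for roughly 30 years
        print(f"{name:>10}: {work / rate:,.0f} seconds")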


The quantum computer is about to change the world. Three Israelis are leading the revolution – Haaretz

In October 2019, Google announced that its quantum computer, Sycamore, had done a calculation in three minutes and 20 seconds that would have taken the world's fastest supercomputer 10,000 years. Quantum supremacy, Google claimed for itself. We now have a quantum computer, it was saying, capable of performing calculations that no regular, classical computer is capable of doing in a reasonable time.

Where do you buy a computer like that? You don't. Google's Sycamore can't run Word or Chrome; it can't even run a nice friendly game of Minesweeper. In fact, Google's supreme quantum computer doesn't know how to do anything other than perform one useless calculation. It resembles the huge computer in The Hitchhiker's Guide to the Galaxy, which came up with 42 as the Answer to the Ultimate Question of Life, the Universe, and Everything, although no one knows what the question is.

The question is now being worked on in Tel Aviv, on Derech Hashalom Street. In their generic office in the city's Nahalat Yitzhak neighborhood, three physicists who received their doctorates at Rehovot's Weizmann Institute of Science (Nissim Ofek, 46; Yonatan Cohen, 36; and Itamar Sivan, 32) are developing instruments of control that will tame the quantum monster.

Ten years ago, when I took a course in quantum computing, it was considered science fiction, Dr. Sivan, the CEO of their company, Quantum Machines, relates. The experts said that it wouldn't happen in our lifetime, or might never happen. As a physicist, quantum computing is a dream come true. Almost all our employees are physicists, even those who work as programmers, and most of them approached us. They read about an Israeli company for quantum computing and simply couldn't restrain themselves. There's nothing more exciting than to learn for years about Schrödinger's cat and about all the wild quantum effects, and then to enter a laboratory and actually build Schrödinger's cat and leverage the theory into a prodigious force of calculation.

Already in high school, Sivan, who was born and raised in Tel Aviv, knew that he was drawn to the mysterious world of elusive particles. I did honors physics, and in that framework we learned a little quantum mechanics. Without mathematics at that stage, only the ideas of quantum mechanics. My brain took off. The quantizing of the world, of the space around me, was very tangible. I felt that I understood the quantum world. Afterward I understood that I didn't understand anything, but that's not important. It's preferable to develop an intuition for quantum at an early age, like for a language. Afterward I did military service, but I didn't forget that magic.

I was a bureau chief [i.e., military secretary], not the most intellectually challenging job in the army, he continues, and I was afraid that when I was discharged, I would be too old. You know, it's said that all the great mathematicians achieved their breakthroughs before the age of 25. So, in parallel with army service I started undergraduate studies at the Open University. On the day after my discharge, I flew to Paris to continue my studies at the École Normale Supérieure, because there are a few other things that are also worth doing when you're young, such as living in Paris.

He met his partners in the project, Nissim Ofek and Yonatan Cohen, at the Weizmann Institute, where they all studied at the Center for Submicron Research, under Prof. Moty Heiblum.

Sivan: Nissim had completed his Ph.D. and was doing a postdoc at Yale just when Yonatan and I started. At the same time, Yonatan and I established the Weizmann Institutes entrepreneurship program. When we graduated, we asked each other: Okay, what do we know how to do in this world? The answer: quantum electronics and entrepreneurship. We really had no choice other than to found Quantum Machines.

QM is a singular startup, says Prof. Amir Yacoby, a Harvard University physicist and a member of the company's scientific advisory board. A great many startups promise to build ever more powerful quantum computers. QM is out to support all those ambitious platforms. It's the first company in the world that is building both the hardware and the software that will make it possible to use those computers. You have to understand that quantum computing was born in university labs before the electronics industry created designated devices for it. What we did was to take devices designated for classical computers and adapt them to the quantum computers. It took plenty of student years. That's why QM looks so promising. These guys were the wretches who went through hell, who learned the needs the hard way. Today, every research group that I'm familiar with is in contact with them or has already bought the system from them. QM is generating global enthusiasm.

We'll return to the Israeli startup, but first we need to understand what all the fuss is about.

What we refer to as the universal computing machine was conceived by the man considered the father of computer science, Alan Turing, in 1936. Years before there were actual computers in the world, Turing suggested building a read-write head that would move along a tape, read the state in each frame, and rewrite it according to commands it received. It sounds simplistic, but there is no fundamental difference between the theoretical Turing machine and my new Lenovo laptop. The only difference is that my Turing machine reads and writes so many frames per second that it's impossible to discern that it's actually calculating. As the science-fiction writer Arthur C. Clarke put it, "Any sufficiently advanced technology is indistinguishable from magic."
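To make the read-write-move loop concrete, here is a minimal sketch of a Turing-style machine (my own illustration, not something from the article; the bit-flipping program is arbitrary):

    def run_turing_machine(program, tape, state="start", max_steps=1000):
        # The head reads one cell; the program says what to write, which way
        # to move, and which state to enter next.
        tape = dict(enumerate(tape))       # sparse tape, blank cells default to "_"
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, "_")
            write, move, state = program[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Example program: flip every bit on the tape, then halt at the first blank.
    flip_bits = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine(flip_bits, "10110"))   # -> 01001_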

Classical computers perform these calculations by means of transistors. In 1947, William Shockley, Walter Brattain and John Bardeen built the first transistor; the word is an amalgam of "transfer" and "resistor." The transistor is a kind of switch that sits within a slice of silicon and acts as the multi-state frame that Turing dreamed of. Turn on the switch and the electricity flows through the transistor; turn it off, and the electricity does not flow. Hence, the use of transistors in computers is binary: if the electricity flows through the transistor, the bit, or binary digit, is 1; and if the current does not flow, the bit is 0.

With transistors, the name of the game is miniaturization. The smaller the transistor, the more of them it is possible to compress into the silicon slice, and the more complex the calculations one can perform. It took a whole decade to get from the single transistor to an integrated circuit of four transistors. Ten years later, in 1965, it had become possible to compress 64 transistors onto a chip. At this stage, Gordon Moore, who would go on to found Intel, predicted that the number of transistors per silicon slice would continue to grow exponentially. Moore's Law states that every 18 months, like clockwork, engineers will succeed in miniaturizing and compressing double the number of transistors into an integrated circuit.

Moore's Law is a self-fulfilling fusion of a natural law and an economic prediction. A natural law, because miniaturized electrical circuits are more efficient and cheaper (it's impossible to miniaturize a passenger plane, for example); and an economic law, because the engineers' bosses read Moore's article and demanded that they compress double the number of transistors in the following year. Thus we got the golden age of computers: the Intel 286, with 134,000 transistors, in 1982; the 386, with 275,000 transistors, in 1985; the 486, with 1,180,235 transistors, in 1989; and the Pentium, with 3.1 million transistors, in 1993. There was no reason to leave the house.
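Using only the chip figures quoted above, one can back out the doubling period they imply; a quick sketch (the arithmetic is mine, not the article's) gives a period closer to the two years Moore himself later settled on than to 18 months:

    import math

    # Transistor counts quoted in the article for Intel's processors.
    chips = {1982: 134_000, 1985: 275_000, 1989: 1_180_235, 1993: 3_100_000}

    years = sorted(chips)
    growth = chips[years[-1]] / chips[years[0]]
    span = years[-1] - years[0]
    doubling_time = span / math.log2(growth)   # years per doubling over this stretch

    print(f"{growth:.0f}x growth over {span} years "
          f"=> one doubling every {doubling_time:.1f} years")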

Today, the human race is manufacturing dozens of billions of transistors per second. Your smartphone has about 8.5 billion transistors. According to a calculation made by the semiconductor analyst Jim Handy, since the first transistor was created in 1947, 2,913,276,327,576,980,000,000 transistors (that's 2.9 sextillion) have been manufactured, and within a few years there will be more transistors in the world than all the cells in all the human bodies on earth.

However, the golden age of the transistors is behind us. Moore's Law ceased being relevant long ago, says Amir Yacoby. Computers are continuing to improve, but the pace has slowed. After all, if we'd continued to miniaturize transistors at the rate of Moore's Law, we would have reached the stage of a transistor the size of an atom, and we would have had to split the atom.

The conventional wisdom is that the slowdown in the rate of improvement of classic computers is the engine driving the accelerated development of quantum computers. QM takes a different approach. There's no need to look for reasons to want more computing power, Sivan says. It's a bottomless pit. Generate more calculating power, and we will find something to do with it. Programmers are developing cooler applications and smarter algorithms, but everything rests on the one engine of calculating power. Without that engine, the high-tech industry would not have come into being.

Moore's Law, Cohen adds, starts to break down precisely because miniaturization brought us to the level of solitary atoms, and the quantum effects are in any case already starting to interfere with the regular behavior of the transistors. Now we are at a crossroads. Either we continue to do battle against these effects, which is what Intel is doing, or we start harnessing them to our advantage.

And there's another problem with our universal Turing machine: even if we were able to go on miniaturizing transistors forever, there is a series of hard problems that will always be one step ahead of our computers.

Mathematicians divide problems according to complexity classes, Cohen explains. Class P problems are simple for a classic computer. The time it takes to solve the problem increases polynomially, hence the P. Five times three is an example of a polynomial problem. I can go on multiplying and my calculating time will remain manageable as I add digits to the problem. There are also NP problems, referring to nondeterministic polynomial time. I give you 15 and you need to find the prime factors, five times three. Here the calculating time increases exponentially as the problem grows linearly. NP complexity problems are difficult for classic computers. In principle, the problem can still be solved, but the calculating time becomes unreal.
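The asymmetry Cohen describes is easy to feel even with a naive sketch (my own illustration; trial division is the crudest possible factoring method, used here only to show the direction of the difficulty):

    import time

    def multiply(p, q):
        # The "easy" direction: cheap even for numbers with many digits.
        return p * q

    def factor(n):
        # The "hard" direction, done naively: trial division does work that grows
        # roughly with sqrt(n), i.e. exponentially in the number of digits of n.
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return 1, n

    print(multiply(5, 3))   # 15, instantly
    print(factor(15))       # (3, 5), also instantly at this size

    # The gap only opens up as the numbers grow.
    n = 999983 * 1000003    # product of two six-digit primes
    t0 = time.time()
    print(factor(n), f"found in {time.time() - t0:.2f} s")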

A classic example of an NP complexity problem is that of the traveling salesman. Given a list of cities and the distances between each pair of cities, what is the shortest route the traveling salesman can take, given that in the end he has to return to his hometown? Between 14 cities, the number of possible routes is on the order of 10 to the 11th power. A standard computer performs an operation every nanosecond, or 10 to the 9th power operations per second, and thus will calculate all the possible routes in about 100 seconds. But if we increase the number of cities to just 22, the number of possibilities grows to 10 to the 19th power, and our computer will need 1,600 years to calculate the fastest route. And if we want to figure out the route for 28 cities, the universe will die before we get the result. And in contrast to the problem that Google's quantum supremacy computer addressed, the problem of the traveling salesman comes from the real world. Airlines, for example, would kill for a computer that could do such calculations.
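The factorial blow-up behind those figures can be sketched in a few lines (counting conventions vary slightly, for instance whether a tour and its reverse are counted separately, so the constants differ a little from the article's, but the explosion is the same):

    import math

    OPS_PER_SECOND = 10**9        # the article's assumption: one route checked per nanosecond
    SECONDS_PER_YEAR = 3.15e7

    def routes(n_cities):
        # Fix the salesman's home city; the remaining cities can be visited
        # in (n - 1)! different orders.
        return math.factorial(n_cities - 1)

    for n in (14, 22, 28):
        r = routes(n)
        seconds = r / OPS_PER_SECOND
        print(f"{n:2d} cities: {r:.1e} routes, "
              f"about {seconds / SECONDS_PER_YEAR:.1e} years to enumerate")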

In fact, modern encryption is based on the same computer-challenging problems. When we enter the website of a bank, for example, the communication between us and the bank is encrypted. What is the sophisticated Enigma-like machine that prevents outsiders from hacking into our bank account? Prime numbers. Yes, most of the sensitive communication on the internet is encrypted by a protocol called RSA (standing for the surnames of Ron Rivest, the Israeli Adi Shamir, and Leonard Adleman), whose key is totally public: breaking down a large number into its prime factors. Every computer is capable of hacking RSA, but it would take many years for it to do so. To break down a number of 300 digits into prime numbers would require about 100 years of calculation. A quantum computer would solve the problem within an hour and hack the internet.
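The mechanics are easy to see with the textbook toy parameters often used to illustrate RSA (p = 61, q = 53, e = 17; these numbers are for illustration only and far too small to be secure). The public pair (n, e) is enough to encrypt, and recovering the private exponent from it is exactly the factoring problem described above:

    # Toy RSA key generation with tiny primes (illustration only).
    p, q = 61, 53
    n = p * q                   # 3233: the public modulus
    phi = (p - 1) * (q - 1)     # 3120
    e = 17                      # public exponent, chosen coprime to phi
    d = pow(e, -1, phi)         # 2753: private exponent (modular inverse of e, Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
    recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt

    print(ciphertext, recovered)       # 2790 65
    assert recovered == message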

The central goal of the study of quantum algorithms in the past 25 years was to try to understand what quantum computers could be used for, says Prof. Scott Aaronson, a computer scientist from the University of Texas at Austin and a member of QM's scientific advisory board. People need to understand that the answer is not self-evident. Nature granted us a totally bizarre hammer, and we have to thank our good fortune that we somehow managed to find a few nails for it.

Spooky action

What is this strange hammer? Without going deeply into quantum theory, suffice it to explain that quantum mechanics is a scientific theory that is no less grounded than the Theory of General Relativity or the theory of electricity even if it conflicts sharply with common sense. As it happens, the universe was not tailor-made for us.

Overall, quantum mechanics describes the motion of particles in space. At about the same time as Turing was envisioning his hypothetical computer, it was discovered that small particles, atomic and sub-atomic, behave as if they were large waves. Illuminate two slits with a flashlight and look at the wall on the other side. What will we see? Alternating bands of light and shade. The two waves formed at the slits will weaken or strengthen each other on the other side, like ocean waves. But what happens if we fire one particle of light, a solitary photon, at the two slits? The result will be identical to the flashlight: destructive and constructive interference of waves. The photon will split in two, pass through the two slits simultaneously and interfere with itself on the other side.

It's from this experiment, which was repeated in numberless variations, that two odd traits of quantum mechanics are derived: what scientists call superposition (the situation of the particle we fired that split in two and passed through the two slits in parallel) and the ability to predict only the probability of the photon's position (we don't know for certain where the particle we fired will hit). An equally strange trait is quantum entanglement. When two particles are entangled, the moment one particle decides where it is located, it influences the behavior of the other particle, even if it is already on the other side of the slits or on the other side of the Milky Way. Einstein termed this phenomenon "spooky action at a distance."

The world of quantum mechanics is so bizarre that it's insanely attractive, Sivan suggests. On the one hand, the results contradict common sense; on the other hand, it is one of the most solidly grounded theories.

The best analogy was provided by the physicist Richard Feynman, who conceived the idea of a quantum computer in 1982, notes Cohen. Feynman compared the world to a great chess game being played by the gods: "We do not know what the rules of the game are; all we are allowed to do is to watch the playing. Of course, if we watch long enough, we may eventually catch on to a few of the rules."

According to Cohen, "Until the beginning of the 20th century, physicists could only look at the pawns, at the binary moves. Quantum mechanics shows us that there is a larger and far more interesting set of laws in nature: there are knights, rooks, queens."

Here, adds Sivan, pointing: this table here has an end, right? No, it doesn't. Like the particle that passes through the slits, this table also has no defined size in space, only probability. The probability of finding a table particle fades exponentially at the edge of the table. In order to work with the table on an everyday basis, we can make do with the classic, simplistic description. But our world is a quantum world, and we need to know how to describe it truly. And for that we need quantum computers. In order to describe a simple molecule with 300 atoms, penicillin, let's say, we will need 2 to the 300th power classic transistors, which is more than the number of atoms in the universe. And that is only to describe the molecule at a particular moment. To run it in a simulation would require us to build another few universes to supply all the material needed.
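For scale, 2 to the 300th power can be compared against the commonly cited rough estimate of about 10^80 atoms in the observable universe (that estimate is my addition, not the article's):

    amplitudes_needed = 2 ** 300    # one classical value per basis state of a 300-atom system
    atoms_in_universe = 10 ** 80    # rough, commonly cited estimate

    print(f"2**300 is about 10**{len(str(amplitudes_needed)) - 1}")   # about 10**90
    print(amplitudes_needed > atoms_in_universe)                      # True, by some ten orders of magnitude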

But humanity is today running simulations on whole galaxies.

Sivan: True, but humanity is really bad at that. We are simplifying, cutting corners. This table will have a boundary in a simulation, so that you can work with it. The galaxy you are simulating is composed of molecules that behave according to quantum mechanics, but in the simulation you will run, the galaxy, having no other choice, will operate according to the principles of classical mechanics. That was Feynman's great insight: we cannot simulate a quantum world with classical computers. Only a quantum computer will know how to simulate a quantum system.

Feynman didn't stop at imagining a machine that would depict or simulate a quantum system, that is, a computer that would be an analog of a quantum system. He took a step forward and asked: Why not build a universal quantum calculating machine? The theoretical principles for the universal quantum computer were set forth by the Israeli-born physicist David Deutsch in 1985. A quantum computer, Deutsch stated, will not merely be comparable to a Turing machine; it will be capable of solving every problem that a Turing machine is capable of solving, and another few problems, too. Such as NP complexity problems.

Classic computers are based on binary bits, two states, 0 or 1, Cohen says. But like the particle in the experiment, Schrödinger's cat can also be in a superposition, both dead and living, both 0 and 1. We don't know how to do that with cats yet, but there are systems that we can bring into superposition. Every such system is called a quantum bit, or qubit. Of course, the superposition will ultimately collapse, because we need to see the result on the other side, but along the way the cat was both living and dead, and the lone photon truly passed through both slits, with the result to match.

Sivan: Two classic bits can take four possible combinations: 00, 01, 10 or 11. Two quantum bits can be in all four of those combinations simultaneously: 00, and also 01, and also 10, and also 11. With eight qubits you reach 256 combinations. That is true exponential force. Let's say you have a processor with a billion transistors, a billion bits, and you want to double its memory. You would have to add another billion bits. To double the memory in a quantum computer, you only have to add one qubit.

How does it work? Take, for example, two simple calculations with two classic bits. In the first calculation you feed 00 into the machine and the algorithm tells the computer to switch, or turn over, the first bit, so we get 10. Then we want to solve another problem. We feed into the computer two bits in a 11 state, and the computer turns over the second bit, so we get 10. Two calculations, two operations. Now we will entangle a pair of quantum bits in superposition: they are both 00 and 11. Instead of two operations, the quantum computer will turn over the second bit and we will get both 01 and 10. Two calculations, one operation. And it will remain one operation no matter how many calculations we perform. If the classic computer is, at any given moment, in just one of its possible states, 0 or 1 for each of its bits, the quantum computer is, at any given moment, in each of those states at once.
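This "one operation, both branches" point can be reproduced with a tiny state-vector simulation (a generic numpy sketch, not QM's software; the basis ordering of first bit then second bit is my own convention):

    import numpy as np

    # Basis order: |00>, |01>, |10>, |11>, reading the first (left) bit then the second.
    ket00 = np.array([1, 0, 0, 0], dtype=complex)
    ket11 = np.array([0, 0, 0, 1], dtype=complex)

    # Entangled superposition (|00> + |11>) / sqrt(2)
    state = (ket00 + ket11) / np.sqrt(2)

    # "Turn over" (Pauli-X) the second bit, do nothing to the first.
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    I = np.eye(2, dtype=complex)
    flip_second = np.kron(I, X)

    new_state = flip_second @ state

    # One matrix application has flipped the bit in both branches at once:
    # the amplitudes now sit on |01> and |10>.
    for idx, amp in enumerate(new_state):
        if abs(amp) > 1e-12:
            print(f"|{idx:02b}>  amplitude {amp.real:+.3f}")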

An important clarification is in order here. Scott Aaronson's blog, called Shtetl-Optimized, carries the motto, "Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once." That's because a quantum computer can be in all the states at every given moment, but we, by heaven's grace, are not quantum beings. We need an answer. That is why scientists build the quantum computer with delicate choreography, so that all the mistaken calculations weaken one another and the calculations that contribute to the right answer reinforce one another, so that we non-quantum mortals will, with high probability, be able to measure the right answer from among the random nonsense.

Almost every popular article is wrong on this point, Prof. Aaronson explains. Like Sisyphus rolling the boulder up the hill, I have been trying for 15 years to explain that if we simply measure the superposition of each of the possible answers, we will get a random answer. For that we don't need quantum computers; you can flip a coin or spin a top. All the hopes we are pinning on quantum computing depend on our ability to increase the probability of the right answer and reduce the probability of all the wrong answers.

Thus, the classic bit is encoded through an electrical current in semiconductors, so that if the current does not flow we get 0, and if it does flow we get 1. The quantum computing revolution hasn't yet determined the best way to encode quantum bits, but at the moment the most advanced quantum computers use an electron shared between two atoms. The electron can be either in the left atom, 0, or in the right atom, 1, or in both of them, in superposition, at the same time. Google's Sycamore has 53 such qubits, fewer than the number of classical bits there were in the world when Moore formulated his law in 1965. All the giants, such as IBM, Intel, Microsoft and Alibaba, are in the quantum race to add qubits; the experts think that in a year or two we will see quantum computers with 100 or 200 qubits. The rate of increase is astounding, befitting a quantum Moore's Law. Now the question arises: If one qubit works, and 53 qubits work together, why not create more qubits? Why not create a processor possessing hundreds, thousands, millions of qubits, hack the RSA encryption of all the banks in the world and retire on a yacht?

The answer is that quantum computers make mistakes. Classical computers make mistakes, too, but we're not aware of that because the classical computers also correct the mistakes. If, for example, a calculation is run on three classical bits, and one bit produces the result 0 and two bits produce the result 1, the processor will determine that the lone dissenting bit was wrong and return it to state 1. Democracy. In quantum computing, democracy doesn't work, because the voters entered the polling booth together. Think of three qubits entangled into 000 and 111, which is to say, three electrons that are present together both in the left atom and in the right atom simultaneously. If the third bit turns over by mistake, we will get a state of 001 and 110. If we try to correct the mistake, or even to check whether a mistake occurred, our superposition will collapse immediately and we will get 000 or 111. In other words, the qubits defeat themselves. The quantum entanglement that makes the computer marvel possible is the same thing that precludes the possibility of adding more qubits: the electrons simply coordinate positions, so that it is impossible to ask them who made the mistake. That is a problem, because qubits are notorious for their sensitivity to the environment, and they are also prone to make mistakes a lot more often than regular bits.
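The classical "democracy" described above is the three-bit repetition code; a quick sketch (the flip probability is chosen arbitrarily for illustration) shows how the majority vote suppresses errors:

    import random
    from collections import Counter

    def encode(bit):
        # Repetition code: copy the logical bit onto three physical bits.
        return [bit, bit, bit]

    def noisy_channel(bits, flip_prob=0.05):
        # Each physical bit flips independently with probability flip_prob.
        return [b ^ (random.random() < flip_prob) for b in bits]

    def decode(bits):
        # Majority vote: the value held by at least two of the three bits wins.
        return Counter(bits).most_common(1)[0][0]

    random.seed(0)
    trials = 100_000
    errors = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
    print(f"logical error rate: {errors / trials:.4f}")   # about 3 * 0.05**2, far below 0.05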

Classical bits do not have a continuum of possibilities, Prof. Yacoby notes. What is a classical bit? The electricity flows or doesn't flow. Even if the current weakens or becomes stronger, it is still considered a current. Quantum bits are continuous: the electron can be largely in the right atom and partially in the left atom. That is their strength and that is their weakness. Therefore, every interaction with the environment affects them dramatically. If I use my regular computer and an electromagnetic wave passes through the transistor, the state of the bit does not change. The same wave passing through a qubit will cause the loss of the qubit's coherence, its memory. The information will leak out to the surroundings and we will not be able to reconstruct it.

For this reason, we will not see quantum iPads in the near or distant future. A classical processor performs a calculation in a nanosecond, but will preserve the information for days, months, years ahead. A quantum computer also performs a calculation in a nanosecond and at best will manage to preserve the information for a hundredth of a microsecond. Quantum computers are so sensitive to external interference that they must be isolated from their surroundings at almost minus 273 degrees Celsius, one 10,000th of a degree above absolute zero.

The interaction of the qubits with the environment is a serious problem, because they lose their memory, says Yacoby. But that only means that they are measuring something about the environment. There is a whole field of quantum sensors that enable us to learn about the traits of materials with psychopathic sensitivity. Quantum clocks can measure the change in the force of gravity of the Earth from my nose to my chin. It's unbelievable. Lockheed Martin is developing a cruise missile that will be able to navigate itself without GPS, solely according to quantum sensitivity to minute differences in Earth's magnetic field. And there are quite a few startups that use quantum sensors to identify cancerous cells. These are applications for which I foresee commercial success long before we actually have quantum computers.

There's also another game that can be played with quantum sensitivity: encryption. A quantum computer can hack the widespread encryption protocol on the internet, RSA, because it can calculate NP problems with no problem. But given that a superposition collapses the moment the black box is opened to examine whether the cat is dead or alive, a quantum encryption protocol will be immune by virtue of its being quantum. Communication with the bank can be left open on a quantum server. Anyone who tries to listen to the line will cause the collapse of the superposition and hear gibberish, and the bank and the client will know that someone listened in.

But with all due respect to the benefit that can be extracted from the fact that quantum computers don't work but can only sense, humanity will benefit tremendously if we can make them work. In our world, everything is quantum at its base. Mapping the structure of chemical molecules requires quantum computing power, and we will know how to ward off diseases only when the pharmaceutical companies are able to run quantum simulations. The neurons in our brain are quantum, and we will be able to create true artificial intelligence only when we have quantum computers that can run independent thoughts.

It's not the race to the moon, Cohen says, it's the race to Mars. In my opinion, the greatest scientific and engineering challenge now facing the human race is the actualization of quantum computers. But in order to actualize all those dreams, we need to understand how we correct errors in qubits, how we control them. That's what we're doing. QM is the first company in the world that is totally focused on developing control and operating systems for quantum computers. The system we are developing has a decisive role in correcting errors. In fact, the third founder of QM, Nissim, was the first person in the world to prove that errors in quantum bits can be corrected. He didn't just show it on paper; he proved it, succeeded, demonstrated it. Instead of measuring every qubit and seeing which was wrong, it's possible to examine whether the qubits are in the same state. If one qubit is in a different state, we'll know that it is wrong. You can know whether you voted for a party that didn't win without knowing the results of the election.
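Cohen's election analogy can be mimicked classically with parity checks: comparing pairs of bits locates a single flip without ever reading any bit as "the answer" (a classical sketch of the idea, not the actual stabilizer measurements performed on real qubits):

    def syndrome(bits):
        # Ask only "do neighbouring bits agree?", never "what is each bit?"
        s1 = bits[0] ^ bits[1]
        s2 = bits[1] ^ bits[2]
        return s1, s2

    def locate_error(bits):
        # The pair of agreement answers pinpoints the (at most one) flipped bit.
        return {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(bits)]

    # The same syndrome appears whether the codeword started as 000 or 111,
    # so locating the error reveals nothing about the encoded value itself.
    print(locate_error([0, 0, 1]))   # -> 2
    print(locate_error([1, 1, 0]))   # -> 2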

QM was founded in 2018 with the aim of bypassing the problem of errant qubits with the help of some old friends: classical bits. If the classical computer contains hardware and software, meaning a great many transistors and a language that tells the processor which calculations to run on them, in a quantum computer the cake has three layers: quantum hardware (that is, qubits), classical hardware that can operate the quantum hardware, and software (both classical and quantum). That is our way of having an impact on the qubits while reading the results in our world, Sivan says. If we were quantum beings, we would be able to speak directly with the computer, but we're not.

Would you like to be a quantum being? It would save you a lot of work.

Yes, but then the other quantum beings wouldn't buy our products.

QM is building the classical hardware and software that will be able to send the right electric signals to the electrons and to read the results with minimal interference to the black wonder box. Their integrated system is called the Quantum Orchestration Platform.

Today there is separate hardware for every individual quantum computer, Cohen says. We are building an orchestra system that can work with every such computer and will send the most correct electrical signals to the qubits. In addition, we are developing a programming language that will make it possible for us to program the algorithms, the commands. That's a general quantum language, like C [the programming language]. Today there is a potpourri of languages, each quantum computer with its own language. We want our language, QUA, to be established as the standard, universal language for quantum computing.

Sound off the wall? Not all that much. Last month, QM joined the IBM Q Network, in an attempt to integrate the computing conglomerate's programming languages into the Quantum Orchestration Platform of Sivan and his colleagues, and to publish a complete compiler (a compiler is a computer program that translates code written in one programming language into another language) by the second quarter of 2020. The compiler will be able to translate every quantum programming language into the QM platform. Thus, an algorithm written at a university in Shanghai will be able to run on a quantum computer built in Google's laboratories in, say, Mountain View.

Says Yonatan Cohen: The major players, like Google and IBM, are still gambling. They are developing a quantum processor that is based on their own [singular] technology. And it could be that in a few years we will discover a better platform, and their processor will not have any use. We are building a system that is agnostic to the quantum hardware. Our goal is to grow with the industry, no matter what direction it develops in. Because the underlying assumption is that you don't know exactly when quantum computers will start to be practicable. Some people say three years, others say 20 years. But it's clear to us that whoever is in the forefront when it erupts will win bigtime, because he will control the new computing force. Everyone will have to work with him, in his language, with his hardware.

Sivan: It's possible that in another few years, we will look back on this decade and see an unexampled technological turning point: the moment when quantum computers went into action. That's not just another technological improvement. It's a leap...

A quantum leap!

Sivan: Exactly.


Eni to Retake Industrial HPC Leadership Crown with Launch of HPC5 – HPCwire

With the launch of its Dell-built HPC5 system, Italian energy company Eni regains its position atop the industrial supercomputing leaderboard. At 52 petaflops peak, HPC5 should easily crack the top ten of the next Top500 list, due out in June. If and when that happens, HPC5 will supplant Total's IBM Pangea III supercomputer, currently at number 11 with 17.9 Linpack petaflops out of 25 theoretical petaflops, as the top publicly ranked industrial HPC system.

HPC5 spans 1,820 Dell EMC PowerEdge C4140 servers, each with two Intel Xeon Gold 6252 24-core processors and four Nvidia V100 GPU accelerators. Servers are connected by Mellanox 200 Gb/s HDR InfiniBand in a full non-blocking topology. The deployment includes a high-performance 15-petabyte storage system with 200 GB/s aggregate read/write speeds.
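As a rough sanity check on those figures (an estimate, not Eni's or Dell's own accounting), the GPU count alone gets close to the quoted 52-petaflop peak if one assumes roughly 7 teraflops of double-precision performance per V100:

# Back-of-the-envelope check; the 7 TFLOPS FP64 per V100 is an assumption
# (the exact figure depends on the GPU variant and its clocks).
nodes = 1820
gpus_per_node = 4
gpu_fp64_tflops = 7.0

gpu_petaflops = nodes * gpus_per_node * gpu_fp64_tflops / 1000
print(f"{nodes * gpus_per_node} GPUs -> ~{gpu_petaflops:.0f} petaflops peak")
# ~51 petaflops from the GPUs alone; the Xeon hosts account for the remainder
# of the quoted 52-petaflop peak.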

HPC5 joins Eni's HPE-built HPC4 machine, which ranks 16th on the current Top500 list with 12.2 Linpack petaflops out of a theoretical 18.6 petaflops. Prior to Total's Pangea III deployment, HPC4 held the title of fastest industrial supercomputer.

Both systems are housed inside Eni's Green Data Center, located in Ferrera Erbognone in Pavia, Italy. Built on a former rice paddy, the Green Data Center opened in 2013 to host all of Eni's HPC architecture and its business applications.

With the new addition to its data center, Eni says its total aggregate supercomputing capacity reaches 70 peak petaflops. The upgraded and expanded capacity allows Eni to speed the processing of seismic images and employ much more sophisticated algorithms.

Partners Eni and Dell emphasized the project's sustainability goals, noting that the HPC5 supercomputer will accelerate R&D programs for the transition to non-fossil energy sources and has been designed to use the Green Data Center's solar power.

Among Eni's designated strategic targets for the development of new energy sources and related processes are the generation of energy from the sea, magnetic confinement fusion, and other climate and environmental technologies to be developed in collaboration with research centers.

The launch of the new system also has some special significance for Dell EMC as the system maker continues to ascend the leadership computing ladder. Frontera at TACC (#5 on the Top500 with 23 Linpack petaflops) is currently the world's fastest academic supercomputer, and with the installation at Eni, Dell can claim the number one industrial system as well.

Go here to see the original:

Eni to Retake Industrial HPC Leadership Crown with Launch of HPC5 - HPCwire

Supercomputer predicts Premier League top four as Chelsea, Man Utd and Tottenham battle it out – Mirror Online

Chelsea, Tottenham and Manchester United all remain firmly in contention for Champions League football next season.

With Liverpool, Manchester City and Leicester looking firm favourites to finish in the top three, Chelsea are in pole position to claim fourth spot.

Despite boss Frank Lampard labelling his side as underdogs in the race, they're currently four points ahead of fifth-placed Spurs heading into the winter break.

However, they've struggled in recent weeks, winning just one of their last five league games.

But a supercomputer expects them to recover their form and finish in the final coveted Champions League spot.

Following their morale-boosting win over Manchester City on Sunday, Tottenham are seen as one of the main contenders to leapfrog the Blues before the end of the campaign.

They're expected to drop off in the final weeks of the term, though.

Jose Mourinho's men will come home in seventh, with only 21 points from their next 13 games.

According to the supercomputer, Manchester United will finish one place below them in eighth.

The Red Devils have lost more league games than they've won since Ole Gunnar Solskjaer became the permanent manager.

Their problems are due to continue, as it's anticipated they'll finish a massive 14 points off fourth.

Wolves' impressive season shows no sign of tailing off, as they're predicted to finish sixth, sealing qualification for the Europa League once again.

It's Sheffield United who will continue to be the surprise package, though.

After securing promotion from the Championship last time around, Chris Wilder's men will continue to defy expectations by finishing fifth, eight points behind fourth-placed Chelsea.

Meanwhile, Arsenal's difficult season is set to continue.

The Gunners have picked up just six wins so far and their total of 31 points after 25 games is their lowest since the 1912/13 season.

With only 17 points from their final 13 games, Mikel Arteta's side are predicted to finish ninth.

There is also an interesting prediction in the race to finish second.

Most expect Manchester City to be runners-up - the defending champions are currently two points ahead of Leicester.

But the supercomputer has backed the Foxes to be Liverpool's closest challengers at the end of this campaign.

Here is how the final table for the 2019/20 season is predicted to look:

1. Liverpool - 112 points

2. Leicester - 84

3. Man City - 77

4. Chelsea - 69

5. Sheffield United - 61

6. Wolves - 58

7. Tottenham - 56

8. Man Utd - 55

9. Arsenal - 48

10. Everton - 48

11. Crystal Palace - 45

12. Newcastle - 45

13. Brighton - 44

14. Burnley - 43

15. Southampton - 40

16. West Ham - 37

17. Bournemouth - 34

18. Aston Villa - 31

19. Watford - 30

20. Norwich - 27

Go here to see the original:

Supercomputer predicts Premier League top four as Chelsea, Man Utd and Tottenham battle it out - Mirror Online