Atos engages in the fight against Covid-19 – insideHPC

"An effective response to COVID-19 requires global action and collaboration between public and private actors," said Elie Girard, Atos CEO. "We are proud to be helping those working intensively on the frontline in an effort to counteract the pandemic by sharing our resources and solutions. We are now in uncharted territory, but Atos' commitment and willingness to help citizens live and work sustainably and confidently in a changing digital landscape, during the pandemic and in the long term, remain intact."

Helping local authorities contain the spread of the virus

The increasing mobility of the population makes diseases harder to contain, and the need to act quickly and effectively at every level is essential.

Leveraging its public sector and healthcare experience, Atos has designed EpiSYS, an Epidemic Management System (EMS) that gives healthcare professionals a precise overview of an epidemiological situation by storing and managing all patient data and data related to the virus, including tracking and tracing patient incident reports, in real time. EpiSYS was adopted in Austria in early March to help control the spread of Covid-19 and take strategic decisions in the current crisis.

Mobilizing supercomputers and machine learning to speed up research

Atos' high-performance computers can calculate thousands of times faster than standard computers. Whether they are used for simulation, to build predictive models, to analyze the progress of the disease or to develop new treatments, these powerful machines perform very demanding calculations that are proving essential in today's race against the clock.

Atos supercomputers are at work around the world. Two of the most powerful supercomputers in France, Joliot-Curie, operated at the CEA's supercomputing center (TGCC), and Occigen, operated at the CINES supercomputing center, are providing urgent computing access to large computer resources to European research teams involved in the fight against COVID-19.

In the UK, the Atos BullSequana supercomputer at the Hartree Centre is providing computing power to run simulations of Covid-19 protein behavior. In addition, the JADE national AI supercomputing facility provided and managed by Atos is being used by UK researchers from the Universities of Oxford, Cambridge and Southampton working in the area of biomolecular simulations. This supercomputer, owned by Oxford University and hosted at the Hartree Centre, part of the Science and Technology Facilities Council (STFC), contributes towards efforts to develop a vaccine against SARS-CoV-2 and anti-viral drugs, and to better understand the molecular architecture of the virus and how it functions.

Similarly, Atos supercomputers in Brazil and the Czech Republic are also driving research around Covid-19.

Providing business continuity with safe and flexible digital environments

Atos technologies also play a critical role in preserving business continuity amid the pandemic. To address the challenges organizations face now that working from home has become the norm, Atos has built and adapted a comprehensive workplace services offering and delivers end-to-end solutions, from remote management to wireless technology, while ensuring the security of any enterprise environment.

It is also essential to ensure government and public authorities are able to continue to supply the basic daily needs of every citizen's life. In the past weeks, Atos has been able to guarantee service continuity with QoS (Quality of Service) for power utilities and water supplies, like Scottish Water, in various countries thanks to an unwavering commitment and expertise in these industries.

Sharing our data science skills with the research community

Atos is taking part in the Covid-19 Dataset Challenge, an international competition launched by the White House, asking AI researchers to apply machine learning tools and techniques to help provide answers to key questions about the disease. Atos has a dedicated team of 10 experts working on the project.


La Liga lending their super computer to coronavirus fight – From piracy tracking to disease tracking – ESPN

Apr 21, 2020

Sid Lowe, Spain writer

MADRID -- On the fifth floor of La Liga headquarters on Torrelaguna Street, in the northeast part of the city, is a computer 4,232 times as powerful as the one this piece is being written on and quicker than Iñaki Williams. The building, which towers over the road heading out of Madrid toward the airport, is virtually empty, but while it stands alone, the computer hasn't stopped working. Built to make sure you're not watching illegal streams of games you shouldn't be -- watching any game would be a fine thing right now -- the computer is now being used to help in the fight against the coronavirus.

There has not been a game in Spain since Eibar played Real Sociedad behind closed doors over a month ago, and no one knows when football is coming back. The sport has come up with plenty of initiatives during these days of confinement: There have been FIFA 20 tournaments and fundraising, online concerts and lots of silly stuff to pass the time. Some players are phoning their club's oldest members to check up on them; others are providing food. We've seen Leganés' daily workouts and Saúl Ñíguez's plan to help small businesses get back to their feet.


Then there's La Liga's computer, heading into the monster's mouth. And, yes, that really is how La Liga refers to it: the computer is called Demogorgon.

"It's about the size of a normal computer, but it's capacity is more than 4,000 times the size. The processing speed is huge," says Emilio Fernandez del Castillo, the head of technological content protection at La Liga and person responsible for overseeing the computer. "It is built to detect and prevent piracy: We're searching for our content Monday to Sunday all over the world, and this is the tool that enables us to find it. Imagine how big it has to be to do that. Other systems, other computers, simply wouldn't be able to."


The computer in La Liga HQ can do that and more, so some of the league's processing capacity was plugged into the folding@home platform, a project in which people volunteer to run computer simulations typically focused on medical research, specifically around viruses and proteins. La Liga is essentially lending its capacity so, in a way, everything is run through them but at the same time nothing is. The machine is like the thousands of home computers involved in this effort, only it is so much more powerful -- and because it's La Liga, there is a relationship of sorts there. There are hundreds of thousands of normal computers; it is like 4,000 of them in one go. The processing speed is off the scale.

"We have engineers, IT experts, people who know the systems so well and they thought: 'Look, we can hand this over, we haven't got games every day -- Barcelona aren't on every day,'" Emilio says. "So, we 'loan' some of our 'space' for that research. We were helping investigations into cancer. But then when all this happened, attention shifted and we handed it over to fight against coronavirus."

Now that there are no games at all and no goals to pursue except recovery, even more processing speed can be harnessed to help the investigation -- along with more than 700,000 people across the globe who have joined the grid in the past month.

"La Liga contacted us and wanted to learn more about our work: The technical department of La Liga found folding@home and joined our grid, installing our client on their super computer in Madrid," says Gregory Bowman, associate professor in the Department of Biochemistry & Molecular Biophysics at the Washington University School of Medicine in St. Louis. That dramatically increases the chance of successfully modelling and understanding the coronavirus.

"In brief, we're trying to understand how all the moving parts of the SARS-CoV-2 virus' proteins contribute to their function, identify new therapeutic opportunities based on this insight, design new therapeutics and engage with experimental collaborators to test them," Bowman says.


The programme's models simulate the dynamics of COVID-19 proteins, searching for ways to attack the virus and identify points at which treatment can be most effectively inserted. It seeks to understand how the virus functions, watching the way that atoms might move. The simulations seek to understand the spike on the surface of the virus that identifies and attaches itself to human cells, watching and mapping how it might open up for its "attack": It is like the mouth of the Demogorgon monster from TV series "Stranger Things," or so analysts think. Hence the name.

The variables, though, are incalculable ... or they would be on a normal computer. Now, my laptop might produce the finest football articles known to man, but if it tried to simulate the virus' movement alone, I could be here a hundred years and not get anywhere near it. But with the combined power of processors worldwide, including super computers such as La Liga's, investigators are more optimistic.

"We're doing everything we can to accelerate the development of therapies," Bowman says. "We hope to start submitting papers soon, and we're keeping everyone up to date. Great progress is being made. We can't guarantee the outcome or timeline of our work, but we've already been successful on related problems like Ebola virus, and we're making progress on understanding how SARS-CoV-2 infects human cells and will soon share insight into novel therapeutic opportunities our simulations have uncovered, called cryptic pockets."

For now, work continues. And so, with a little help from La Liga, once more into the mouth of the monster.


One Supercomputer's HPC And AI Battle Against The Coronavirus – The Next Platform

Normally, supercomputers installed at academic and national laboratories get configured once, acquired as quickly as possible before the money runs out, installed and tested, qualified for use, and put to work for a four-year, five-year, or possibly longer tour of duty. It is a rare machine that is upgraded even once, much less a few times.

But that is not the case with the Corona system at Lawrence Livermore National Laboratory, which was commissioned in 2017, when North America had a total solar eclipse, hence its nickname. While this machine, procured under the Commodity Technology Systems (CTS-1) contract not only to do useful work but also to assess the CPU and GPU architectures provided by AMD, was not named after the coronavirus pandemic that is now spreading around the Earth, the machine is being upgraded one more time to be put into service as a weapon against the SARS-CoV-2 virus, which causes the COVID-19 illness that has infected at least 2.75 million people (confirmed by test, with the number very likely being higher) and killed at least 193,000 people worldwide.

The Corona system was built by Penguin Computing, which has a long-standing relationship with Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories, the so-called Tri-Labs, which are part of the US Department of Energy and coordinate on their supercomputer procurements. The initial Corona machine installed in 2018 had 164 compute nodes, each equipped with a pair of Naples Epyc 7401 processors, which have 24 cores each running at 2 GHz with an all-core turbo boost of 2.8 GHz. The Penguin Tundra Extreme servers that comprise this cluster have 256 GB of main memory and 1.6 TB of PCI-Express flash. When the machine was installed in November 2018, half of the nodes were equipped with four of AMD's Radeon Instinct MI25 GPU accelerators, which had 16 GB of HBM2 memory each and which had 768 gigaflops of FP64 performance, 12.29 teraflops of FP32 performance, and 24.6 teraflops of FP16 performance. The 7,872 CPU cores in the system delivered 126 teraflops at FP64 double precision all by themselves, and the Radeon Instinct MI25 GPU accelerators added another 251.9 teraflops at FP64 double precision. The single precision performance for the machine was obviously much higher, at 4.28 petaflops across both the CPUs and GPUs. Interestingly, this machine was equipped with 200 Gb/sec HDR InfiniBand switching from Mellanox Technologies, which was obviously one of the earliest installations of this switching speed.

In November last year, just before the coronavirus outbreak (or at least we think that was before the outbreak; that may turn out not to be the case), AMD and Penguin worked out a deal to install four of the much more powerful Radeon Instinct MI60 GPU accelerators, based on the 7 nanometer Vega GPUs, in the 82 nodes in the system that didn't already have GPU accelerators in them. The Radeon Instinct MI60 has 32 GB of HBM2 memory, and has 6.6 teraflops of FP64 performance, 13.3 teraflops of FP32 performance, and 26.5 teraflops of FP16 performance. Now the machine has 8.9 petaflops of FP32 performance and 2.54 petaflops of FP64 performance, which is a much more balanced ratio of 64-bit to 32-bit performance, and it makes these nodes more useful for certain kinds of HPC and AI workloads. That turns out to be very important to Lawrence Livermore in its fight against the COVID-19 disease.
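
As a rough sanity check on those figures, here is a minimal Python sketch that recomputes the aggregate peak performance before and after the MI60 upgrade using only the node counts and per-device ratings quoted above. The assumption that the CPUs deliver twice their FP64 rate at FP32 is mine, not the article's, and all numbers are the article's approximations rather than official specifications.

```python
# Rough back-of-the-envelope check of the Corona performance figures quoted above.
# All inputs are taken from the article; treat them as approximate peak numbers.

MI25_NODES = 82          # half of the 164 nodes originally had 4x Radeon Instinct MI25
MI60_NODES = 82          # the other half received 4x Radeon Instinct MI60 in the upgrade
GPUS_PER_NODE = 4

CPU_FP64_TF = 126.0      # 7,872 Epyc 7401 cores, per the article
CPU_FP32_TF = 2 * CPU_FP64_TF   # assumption: FP32 peak is twice FP64 for these CPUs

MI25_FP64_TF, MI25_FP32_TF = 0.768, 12.29
MI60_FP64_TF, MI60_FP32_TF = 6.6, 13.3

def totals(mi60_installed: bool) -> tuple[float, float]:
    """Return (FP64, FP32) peak for the whole machine, in teraflops."""
    mi25 = MI25_NODES * GPUS_PER_NODE
    mi60 = MI60_NODES * GPUS_PER_NODE if mi60_installed else 0
    fp64 = CPU_FP64_TF + mi25 * MI25_FP64_TF + mi60 * MI60_FP64_TF
    fp32 = CPU_FP32_TF + mi25 * MI25_FP32_TF + mi60 * MI60_FP32_TF
    return fp64, fp32

for label, upgraded in (("original (2018)", False), ("after MI60 upgrade", True)):
    fp64, fp32 = totals(upgraded)
    print(f"{label}: ~{fp64 / 1000:.2f} PF FP64, ~{fp32 / 1000:.2f} PF FP32")
```

With those inputs the FP64 total matches the 2.54 petaflops quoted above and the original FP32 total matches 4.28 petaflops, while the upgraded FP32 total lands a touch under the quoted 8.9 petaflops, which suggests some rounding, or a different assumed CPU FP32 rate, in the original numbers.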

To find out more about how the Corona system and others are being deployed in the fight against COVID-19, and how HPC and AI workloads are being intertwined in that fight, we talked to Jim Brase, deputy associate director for data science at Lawrence Livermore.

Timothy Prickett Morgan: It is kind of weird that this machine was called Corona. Foreshadowing is how you tell the good literature from the cheap stuff. The doubling of performance that just happened late last year for this machine could not have come at a better time.

Jim Brase: It pretty much doubles the overall floating point performance of the machine, which is great because what we are mainly running on Corona is both the molecular dynamics calculations of various viral and human protein components and then machine learning algorithms for both predictive models and design optimization.

TPM: That's a lot more oomph. So what specifically are you doing with it in the fight against COVID-19?

Jim Brase: There are two basic things we're doing as part of the COVID-19 response, and this machine is almost entirely dedicated to this, although several of our other clusters at Lawrence Livermore are involved as well.

We have teams that are doing both antibody and vaccine design. They are mainly focused on therapeutic antibodies right now. They are basically designing proteins that will interact with the virus or with the way the virus interacts with human cells. That involves hypothesizing different protein structures and computing what those structures actually look like in detail, then computing, using molecular dynamics, the interaction between those protein structures and the viral proteins or the viral and human cell interactions.

With this machine, we do this iteratively to basically design a set of proteins. We have a bunch of metrics that we try to optimize (binding strength, the stability of the binding, stuff like that), and then we do detailed molecular dynamics calculations to figure out the effective energy of those binding events. These metrics determine the quality of the potential antibody or vaccine that we design.
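
The loop Brase describes, propose candidates, score them with expensive physics, keep the best, repeat, can be sketched schematically. The Python below is an illustration only: the one-mutation candidate generator and the toy scoring function are placeholders standing in for the lab's generative models and molecular dynamics codes, not anything Lawrence Livermore actually runs.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def propose_variants(sequence: str, n: int) -> list[str]:
    """Stand-in for a generative model: mutate one random position per variant."""
    variants = []
    for _ in range(n):
        pos = random.randrange(len(sequence))
        variants.append(sequence[:pos] + random.choice(AMINO_ACIDS) + sequence[pos + 1:])
    return variants

def binding_score(sequence: str) -> float:
    """Placeholder for the expensive molecular dynamics step that estimates
    binding energy; here it is just a cheap deterministic toy function."""
    return (-sum(ord(c) for c in sequence)) % 97 / 97.0

def design_loop(seed: str, rounds: int = 5, per_round: int = 20) -> str:
    """Iterate: propose variants, score them, keep the best predicted binder."""
    best, best_score = seed, binding_score(seed)
    for _ in range(rounds):
        for candidate in propose_variants(best, per_round):
            score = binding_score(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best

if __name__ == "__main__":
    print(design_loop("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))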

TPM: To wildly oversimplify, this SARS-CoV-2 virus is a ball of fat with some spikes on it that wreaks havoc as it replicates using our cells as raw material. This is a fairly complicated molecule at some level. What are we trying to do? Stick goo to it to try to keep it from replicating or tear it apart or dissolve it?

Jim Brase: In the case of antibodies, which is what we're mostly focusing on right now, we are actually designing a protein that will bind to some part of the virus, and because of that the virus then changes its shape, and the change in shape means it will not be able to function. These are little molecular machines that depend on their shape to do things.

TPM: There's not something that will physically go in and tear it apart like a white blood cell eats stuff.

Jim Brase: No. That's generally done by biology, which comes in after this and cleans up. What we are trying to do is what we call neutralizing antibodies. They go in and bind and then the virus can't do its job anymore.

TPM: And just for a reference, what is the difference between a vaccine and an antibody?

Jim Brase: In some sense, they are the opposite of each other. With a vaccine, we are putting in a protein that actually looks like the virus but doesn't make you sick. It stimulates the human immune system to create its own antibodies to combat that virus. And those antibodies produced by the body do exactly the same thing we were just talking about. Producing antibodies directly is faster, but the effect doesn't last. So it is more of a medical treatment for somebody who is already sick.

TPM: I was alarmed to learn that for certain coronaviruses, immunity doesn't really last very long. With the common cold, the reason we get them is not just because they change every year, but because if you didn't have a bad version of it, you don't generate a lot of antibodies and therefore you are susceptible. If you have a very severe cold, you generate antibodies and they last for a year or two. But then you're done and your body stops looking for that fight.

Jim Brase: The immune system is very complicated, and for some things it creates antibodies that remember them for a long time. For others, it's much shorter. It's sort of a combination of what we call the antigen (the thing, the virus or whatever, that triggers it) and the immune system's memory function together that causes the immunity not to last as long. It's not well understood at this point.

TPM: What are the programs you're using to do the antibody and protein synthesis?

Jim Brase: We are using a variety of programs. We use GROMACS, we use NAMD, we use OpenMM, stuff like that. And then we have some specialized homegrown codes that we use as well that operate on the data coming from these programs. But it's mostly the general, open source molecular mechanics and molecular dynamics codes.
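
For readers unfamiliar with these packages, a run in one of them looks roughly like the OpenMM sketch below. The input file name, force field choice and run length are placeholders of mine; the lab's actual workflows are not described in this interview. OpenMM will automatically pick a GPU platform such as CUDA or OpenCL when one is available.

```python
from openmm import LangevinMiddleIntegrator, unit
from openmm import app

# Load a solvated protein structure (placeholder file name).
pdb = app.PDBFile("protein_solvated.pdb")

# A standard Amber force field shipped with OpenMM (one possible choice).
forcefield = app.ForceField("amber14-all.xml", "amber14/tip3pfb.xml")
system = forcefield.createSystem(
    pdb.topology,
    nonbondedMethod=app.PME,
    nonbondedCutoff=1.0 * unit.nanometer,
    constraints=app.HBonds,
)

# Langevin dynamics at 300 K with a 2 fs time step.
integrator = LangevinMiddleIntegrator(
    300 * unit.kelvin, 1.0 / unit.picosecond, 0.002 * unit.picoseconds
)

simulation = app.Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()

# Log energies and temperature every 1,000 steps, then run 50,000 steps (100 ps).
simulation.reporters.append(
    app.StateDataReporter("md_log.csv", 1000, step=True,
                          potentialEnergy=True, temperature=True)
)
simulation.step(50_000)
```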

TPM: Let's contrast this COVID-19 effort with something like the SARS outbreak in 2003. Say you had the same problem. Could you have even done the things you are doing today with SARS-CoV-2 back then with SARS? Was it even possible to design proteins and do enough of them to actually have an impact, to get the antibody therapy or develop the vaccine?

Jim Brase: A decade ago, we could do single calculations. We could do them one, two, three. But what we couldn't do was iterate it as a design optimization. Now we can run enough of these fast enough that we can make this part of an actual design process where we are computing these metrics, then adjusting the molecules. And we have machine learning approaches now that we didn't have ten years ago that allow us to hypothesize new molecules, and then we run the detailed physics calculations against those, and we do that over and over and over.

TPM: So not only do you have a specialized homegrown code that takes the output of these molecular dynamics programs, but you are using machine learning as a front end as well.

Jim Brase: We use machine learning in two places. Even with these machines (and we are using our whole spectrum of systems on this effort), we still can't do enough molecular dynamics calculations, particularly the detailed molecular dynamics that we are talking about here. What does the new hardware allow us to do? It basically allows us to do a higher percentage of detailed molecular dynamics calculations, which give us better answers, as opposed to more approximate calculations. So you can decrease the granularity size, and we can compute whole molecular dynamics trajectories as opposed to approximate free energy calculations. It allows us to go deeper on the calculations, and do more of those. So ultimately, we get better answers.

But even with these new machines, we still can't do enough. If you think about the design space on, say, a protein that is a few hundred amino acids in length, and at each of those positions you can put in 20 different amino acids, you are looking at on the order of 20^200 possible proteins to evaluate by brute force. You can't do that.

So we try to be smart about how we select where those simulations are done in that space, based on what we are seeing. And then we use the molecular dynamics to generate datasets that we then train machine learning models on so that we are basically doing very smart interpolation in those datasets. We are combining the best of both worlds and using the physics-based molecular dynamics to generate data that we use to train these machine learning algorithms, which allows us to then fill in a lot of the rest of the space because those can run very, very fast.
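
That "smart interpolation" idea, run the expensive simulations at selected points, then train a fast model on the results, can be illustrated with a deliberately tiny sketch. The one-dimensional "simulator" and the scikit-learn regressor below are stand-ins chosen for brevity, not the lab's actual codes or models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Stand-in for a molecular dynamics calculation: slow in real life,
    here just a smooth nonlinear function of a 1-D design variable."""
    return np.sin(3 * x) + 0.5 * x**2

# Step 1: run the "expensive" physics at a modest number of sampled design points.
x_train = rng.uniform(-2, 2, size=(200, 1))
y_train = expensive_simulation(x_train[:, 0])

# Step 2: train a fast surrogate model on those results.
surrogate = GradientBoostingRegressor().fit(x_train, y_train)

# Step 3: use the surrogate to interpolate cheaply across the whole design space.
x_query = np.linspace(-2, 2, 5).reshape(-1, 1)
print("surrogate:", np.round(surrogate.predict(x_query), 3))
print("truth:    ", np.round(expensive_simulation(x_query[:, 0]), 3))
```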

TPM: You couldn't do all of that stuff ten years ago? And SARS did not create the same level of outbreak that SARS-CoV-2 has done.

Jim Brase: No, these are all fairly new ideas.

TPM: So, in a sense, we are lucky. We have the resources at a time when we need them most. Did you have the code all ready to go for this? Were you already working on this kind of stuff and then COVID-19 happened or did you guys just whip up these programs?

Jim Brase: No, no, no, no. We've been working on this kind of stuff for a few years.

TPM: Well, thank you. I'd like to personally thank you.

Jim Brase: It has been an interesting development. It has been both in the biology space and the physics space, and those two groups have set up a feedback loop back and forth. I have been running a consortium called Advanced Therapeutic Opportunities in Medicine, or ATOM for short, to do just this kind of stuff for the last four years. It started up as part of the Cancer Moonshot in 2016 and focused on accelerating cancer therapeutics using the same kinds of ideas, where we are using machine learning models to predict properties, using mechanistic simulations like molecular dynamics combined with data, but then also using it the other way around. We also use machine learning to actually hypothesize new molecules: given a set of molecules that we have right now, whose computed properties aren't quite what we want, how do we just tweak those molecules a little bit to adjust their properties in the directions that we want?

The problem with this approach is scale. Molecules are atoms that are bonded with each other. You could just take out an atom, add another atom, change a bond type, or something. The problem with that is that every time you do that randomly, you almost always get an illegal molecule. So we train these machine learning algorithms, these are generative models, to actually be able to generate legal molecules that are close to a set of molecules that we have but a little bit different, and with properties that are probably a little bit closer to what we want. And so that allows us to smoothly adjust the molecular designs to move towards the optimization targets that we want. If you think about optimization, what you want are things with smooth derivatives. And if you do this in sort of the discrete atom-bond space, you don't have smooth derivatives. But if you do it in what we call learned latent spaces that we get from generative models, then you can actually have a smooth response in terms of the molecular properties. And that's what we want for optimization.
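
The point about optimizing in a learned latent space rather than in discrete atom-and-bond space can be made with a toy PyTorch sketch. Everything here, the random "fingerprint" data, the tiny autoencoder and the made-up property function, is a stand-in to show the mechanics (encode, take gradient steps in the latent space, decode), not ATOM's actual generative models.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
D, LATENT = 64, 8        # fingerprint length and latent dimension (toy sizes)

# Toy "molecule fingerprints": random binary vectors standing in for real descriptors.
data = (torch.rand(512, D) > 0.5).float()

encoder = nn.Sequential(nn.Linear(D, 32), nn.ReLU(), nn.Linear(32, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, D), nn.Sigmoid())
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

# Train the autoencoder so the latent space becomes a smooth, continuous code.
for _ in range(500):
    recon = decoder(encoder(data))
    loss = nn.functional.binary_cross_entropy(recon, data)
    opt.zero_grad()
    loss.backward()
    opt.step()

def property_score(x: torch.Tensor) -> torch.Tensor:
    """Made-up differentiable 'property' of a decoded fingerprint."""
    return x.mean(dim=-1)

# Optimize one "molecule" in latent space: small gradient steps give smooth changes
# in the decoded design, which is the behavior Brase describes wanting.
z = encoder(data[:1]).detach().requires_grad_(True)
z_opt = torch.optim.Adam([z], lr=0.05)
for _ in range(100):
    objective = -property_score(decoder(z)).sum()   # maximize the property
    z_opt.zero_grad()
    objective.backward()
    z_opt.step()

print("optimized property:", property_score(decoder(z)).item())
```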

The other part of the machine learning story here is these new types of generative models. So variational autoencoders, generative adversarial models, the things you hear about that generate fake data and so on. We're actually using those very productively to imagine new types of molecules with the kinds of properties that we want for this. And so that's something we were absolutely doing before COVID-19 hit. We have taken projects like the ATOM cancer project and other work we've been doing with DARPA and other places focused on different diseases and refocused those on COVID-19.

One other thing I wanted to mention is that we haven't just been applying this to biology. A lot of these ideas are coming out of physics applications. One of our big things at Lawrence Livermore is laser fusion. We have 192 huge lasers at the National Ignition Facility to try to create fusion in a small hydrogen-deuterium target. There are a lot of design parameters that go into that. The targets are really complex. We are using the same approach. We're running mechanistic simulations of the performance of those targets, and we are then improving those with real data using machine learning. So now we have a hybrid model that has physics in it and machine learning data models, and we are using that to optimize the designs of the laser fusion target. So that's led us to a whole new set of approaches to fusion energy.

Those same methods are actually the things we're also applying to molecular design for medicines. And the two actually go back and forth and sort of feed on each other and support each other. In the last few weeks, some of the teams that have been working on the physics applications have actually jumped over onto the biology side and are using some of the same sort of complex workflows on these big parallel machines that they've developed for physics, applying those to some of the biology applications and helping to speed up the applications on this new hardware that's coming in. So it is a really nice synergy going back and forth.

TPM: I realize that machine learning software uses the GPUs for training and inference, but is the molecular dynamics software using the GPUs, too?

Jim Brase: All of the molecular dynamics software has been set up to use GPUs. The code actually maps pretty naturally onto the GPU.

TPM: Are you using the CUDA variants of the molecular dynamics software, and I presume that it is using the Radeon Open Compute, or ROCm, stack from AMD to translate that code so it can run on the Radeon Instinct accelerators?

Jim Brase: There has been some work to do, but it works. It's getting to be pretty solid now. That's one of the reasons we wanted to jump into the AMD technology pretty early, because, you know, any time you do first-in-kind machines it's not always completely smooth sailing all the way.

TPM: It's not like Lawrence Livermore has a history of using novel designs for supercomputers. [Laughter]

Jim Brase: We seldom work with machines that are not Serial 00001 or Serial 00002.

TPM: What's the machine learning stack you use? I presume it is TensorFlow.

Jim Brase: We use TensorFlow extensively. We use PyTorch extensively. We work with the DeepChem group at Stanford University that does an open chemistry package built on TensorFlow as well.

TPM: If you could fire up an exascale machine today, how much would it help in the fight against COVID-19?

Jim Brase: It would help a lot. There's so much to do.

I think we need to show the benefits of computing for drug design, and we are concretely doing that now. Four years ago, when we started up ATOM, everybody thought this was nuts, the general idea that we could lead with computing rather than experiment and do the experiments to focus on validating the computational models rather than the other way around. Everybody thought we were nuts. As you know, with the growth of data, the growth of machine learning capabilities, more accessibility to sophisticated molecular dynamics, and so on, it's much more accepted that computing is a big part of this. But we still have a long way to go on this.

The fact is, machine learning is not magic. It's a fancy interpolator. You don't get anything new out of it. With the physics codes, you actually get something new out of it. So the physics codes are really the foundation of this. You supplement them with experimental data because they're not necessarily right, either. And then you use the machine learning on top of all that to fill in the gaps, because you haven't been able to sample that huge chemical and protein space adequately to really understand everything at either the data level or the mechanistic level.

So that's how I think of it. Data is truth, sort of, and what you also learn about data is that it is not always the same as you go through this. But data is the foundation. Mechanistic modeling allows us to fill in where we just can't measure enough data: it is too expensive, it takes too long, and so on. We fill in with mechanistic modeling, and then above that we fill in with machine learning. We have this stack of experimental truth, you know, mechanistic simulation that incorporates all the physics and chemistry we can, and then we use machine learning to interpolate in those spaces to support the design operation.

For COVID-19, there are a lot of groups doing vaccine designs. Some of them are using traditional experimental approaches and they are making progress. Some of them are doing computational designs, and that includes the national labs. We've got 35 designs done and we are experimentally validating those now and seeing where we are with them. It will generally take two to three iterations of design, then experiment, then adjusting the designs back and forth. And we're in the first round of that right now.

One thing we're all doing, at least on the public side of this, is putting all this data out there openly. So the molecular designs that we've proposed are openly released. Then the validation data that we are getting on those will be openly released. This is so our group, working with other lab groups, with university groups, and with some of the companies doing this COVID-19 research, can contribute. We are hoping that by being able to look at all the data that all these groups are producing, we can learn faster how to narrow in on the vaccine designs and the antibody designs that will ultimately work.


Could Machine Learning Replace the Entire Weather Forecast System? – HPCwire

Just a few months ago, a series of major new weather and climate supercomputing investments were announced, including a £1.2 billion order for the world's most powerful weather and climate supercomputer and a tripling of the U.S. operational supercomputing capacity for weather forecasting. Weather and climate modeling are among the most power-hungry use cases for supercomputers, and research and forecasting agencies often struggle to keep up with the computing needs of models that are, in many cases, simulating the atmosphere of the entire planet as granularly and as regularly as possible.

What if that all changed?

In a virtual keynote for the HPC-AI Advisory Council's 2020 Stanford Conference, Peter Dueben outlined how machine learning might (or might not) begin to augment and even, eventually, compete with heavy-duty, supercomputer-powered climate models. Dueben is the coordinator for machine learning and AI activities at the European Centre for Medium-Range Weather Forecasts (ECMWF), a UK-based intergovernmental organization that houses two supercomputers and provides 24/7 operational weather services at several timescales. ECMWF is also the home of the Integrated Forecast System (IFS), which Dueben says is probably one of the best forecast models in the world.

Why machine learning at all?

The Earth, Dueben explained, is big. So big, in fact, that apart from being laborious, developing a representational model of the Earth's weather and climate systems brick-by-brick isn't achieving the accuracy that you might imagine. Despite the computing firepower behind weather forecasting, most models remain at a 10 kilometer resolution that doesn't represent clouds, and the chaotic atmospheric dynamics and occasionally opaque interactions further complicate model outputs.

"However, on the other side, we have a huge number of observations," Dueben said. "Just to give you an impression, ECMWF is getting hundreds of millions of observations onto the site every day." Some observations come from satellites, planes, ships, ground measurements and balloons. This data, collected over the last several decades, constituted hundreds of petabytes if simulations and climate modeling results were included.

"If you combine those two points, we have a very complex nonlinear system and we also have a lot of data," he said. "There's obviously lots of potential applications for machine learning in weather modeling."

Potential applications of machine learning

"Machine learning applications are really spread all over the entire workflow of weather prediction," Dueben said, breaking that workflow down into observations, data assimilation, numerical weather forecasting, and post-processing and dissemination. Across those areas, he explained, machine learning could be used for anything from weather data monitoring to learning the underlying equations of atmospheric motions.

By way of example, Dueben highlighted a handful of current, real-world applications. In one case, researchers had applied machine learning to detecting wildfires caused by lightning. Using observations for 15 variables (such as temperature, soil moisture and vegetation cover), the researchers constructed a machine learning-based decision tree to assess whether or not satellite observations included wildfires. The team achieved an accuracy of 77 percent, which, Dueben said, doesn't sound too great in principle, but was actually quite good.
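
As an illustration of the kind of classifier described, not the researchers' actual model or data, here is a minimal scikit-learn decision tree trained on synthetic stand-ins for a few of those satellite variables.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Synthetic stand-ins for 3 of the 15 observed variables mentioned above:
# temperature (C), soil moisture (fraction), vegetation cover (fraction).
n = 5000
X = np.column_stack([
    rng.normal(25, 8, n),        # temperature
    rng.uniform(0.0, 0.5, n),    # soil moisture
    rng.uniform(0.0, 1.0, n),    # vegetation cover
])

# Made-up labeling rule: hot, dry, vegetated pixels are more likely to burn.
p_fire = 1 / (1 + np.exp(-(0.15 * (X[:, 0] - 30) - 8 * X[:, 1] + 2 * X[:, 2])))
y = rng.random(n) < p_fire

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```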

Elsewhere, another team explored the use of machine learning to correct persistent biases in forecast model results. Dueben explained that researchers were examining the use of a weak-constraint algorithm (in this case, 4D-Var), a kind of algorithm that would be able to learn this kind of forecast error and correct it in the data assimilation process.

"We learn, basically, the bias," he said, "and then once we have learned the bias, we can correct the bias of the forecast model by just adding forcing terms to the system." Once 4D-Var was implemented on a sample of forecast model results, the biases were ameliorated. Though Dueben cautioned that the process is still fairly simplistic, a new collaboration with Nvidia is looking into more sophisticated ways of correcting those forecast errors with machine learning.
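
Stripped of the data-assimilation machinery, the basic idea, learn the systematic part of the forecast error and then subtract it as a forcing term, can be sketched in a few lines. The synthetic "forecast" and "analysis" arrays below are placeholders rather than ECMWF data, and a simple per-gridpoint mean error stands in for the weak-constraint 4D-Var machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 365 daily "forecasts" and verifying "analyses" on a small grid.
n_days, grid = 365, (10, 10)
truth = rng.normal(285.0, 5.0, size=(n_days, *grid))          # e.g. temperature in K
systematic_bias = np.linspace(-1.5, 1.5, grid[0])[:, None]    # bias varying with latitude
forecast = truth + systematic_bias + rng.normal(0, 1.0, size=truth.shape)

# "Learn" the bias over a training period: mean forecast error per grid point.
train = slice(0, 300)
learned_bias = (forecast[train] - truth[train]).mean(axis=0)

# Apply the correction as a constant forcing term on the held-out period.
test = slice(300, None)
raw_rmse = np.sqrt(((forecast[test] - truth[test]) ** 2).mean())
corrected_rmse = np.sqrt(((forecast[test] - learned_bias - truth[test]) ** 2).mean())
print(f"RMSE before correction: {raw_rmse:.2f} K, after: {corrected_rmse:.2f} K")
```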

Dueben also outlined applications in post-processing. Much of modern weather forecasting focuses on ensemble methods, where a model is run many times to obtain a spread of possible scenarios and, as a result, probabilities of various outcomes. "We investigate whether we can correct the ensemble spread calculated from a small number of ensemble members via deep learning," Dueben said. Once again, machine learning, when applied to a ten-member ensemble looking at temperatures in Europe, improved the results, reducing error in temperature spreads.

Can machine learning replace core functionality or even the entire forecast system?

"One of the things that we're looking into is the emulation of different parameterization schemes," Dueben said. Chief among those, at least initially, has been the radiation component of forecast models, which accounts for the fluxes of solar radiation between the ground, the clouds and the upper atmosphere. As a trial run, Dueben and his colleagues are using extensive radiation output data from a forecast model to train a neural network. "First of all, it's very, very light," Dueben said. "Second of all, it's also going to be much more portable. Once we represent radiation with a deep neural network, you can basically port it to whatever hardware you want."
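
A scheme emulator of this kind is, at its core, a regression network mapping atmospheric column inputs to radiative flux outputs. The sketch below shows the shape of such an emulator with made-up dimensions and random stand-in training data; the real training set would be the extensive radiation output Dueben describes, and the real inputs and outputs would be those of the model's radiation scheme.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)

# Made-up dimensions: a flattened column of inputs (temperature, humidity, ...)
# with 240 features, and 61 flux values per column as the regression target.
n_features, n_targets = 240, 61
X = rng.normal(size=(10_000, n_features)).astype("float32")   # stand-in columns
Y = rng.normal(size=(10_000, n_targets)).astype("float32")    # stand-in flux profiles

# A small fully connected emulator; a real one would be tuned far more carefully.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_targets),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, batch_size=256, epochs=2, verbose=0)

# Once trained, the emulator replaces the radiation call: one cheap forward pass.
fluxes = model.predict(X[:1], verbose=0)
print(fluxes.shape)   # (1, 61)
```

The lightness and portability Dueben cites come from the fact that a trained network like this is just a stack of matrix multiplies that can run on whatever hardware is available.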

Showing a pair of output images, one from the machine learning model and one from the forecast model, Dueben pointed out that it was hard to notice significant differences and even refused to tell the audience which was which. Furthermore, he said, the model had achieved around a tenfold speedup. ("I'm quite confident that it will actually be much better than a factor of ten," Dueben said.)

Dueben and his colleagues have also scaled their tests up to more ambitious realms. They pulled hourly data on geopotential height (Z500), which is related to air pressure, and trained a deep learning model to predict future changes in Z500 across the globe using only that historical data. "For this, no physical understanding is really required," Dueben said, "and it turns out that it's actually working quite well."
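
The Z500 experiment amounts to learning a mapping from the current geopotential-height field to the field some hours later, using nothing but archived fields. The sketch below mimics that setup with a random stand-in array in place of the reanalysis data and a small convolutional network; grid size, lead time and architecture are all placeholder choices of mine, not the team's.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)

# Stand-in for hourly Z500 fields on a coarse 32x64 lat-lon grid (placeholder data).
n_hours, lat, lon = 2000, 32, 64
z500 = rng.normal(size=(n_hours, lat, lon, 1)).astype("float32")

lead = 6                      # predict the field 6 hours ahead (placeholder lead time)
X, Y = z500[:-lead], z500[lead:]

# A small convolutional network mapping one field to a later field.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(lat, lon, 1)),
    tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 5, padding="same"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, batch_size=32, epochs=1, verbose=0)

# Roll the model forward for a longer-range forecast; this iterated feedback is
# exactly where the instability Dueben mentions tends to show up.
state = X[-1:]
for _ in range(4):            # 4 x 6 h = 24 h ahead
    state = model.predict(state, verbose=0)
print(state.shape)            # (1, 32, 64, 1)
```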

Still, Dueben forced himself to face the crucial question.

"Is this the future?" he asked. "I have to say it's probably not."

There were several reasons for this. First, Dueben said, the simulations were unstable, eventually blowing up if they were stretched too far. "Second of all," he said, "it's also unknown how to increase complexity at this stage. We only have one field here." Finally, he explained, there were only forty years of sufficiently detailed data with which to work.

Still, it wasn't all pessimism. "It's kind of unlikely that it's going to fly and basically feed operational forecasting at one point," he said. "However, having said this, there are now a number of papers coming out where people are looking into this in a much, much more complicated way than we have done, with really sophisticated convolutional networks, and they get, actually, quite good results. So who knows!"

The path forward

"The main challenge for machine learning in the community that we're facing at the moment," Dueben said, "is basically that we need to prove now that machine learning solutions can really be better than conventional tools, and we need to do this in the next couple of years."

There are, of course, many roadblocks to that goal. Forecasting models are extraordinarily complicated; iterations on deep learning models require significant HPC resources to test and validate; and metrics of comparison among models are unclear. Dueben also outlined a series of major unknowns in machine learning for weather forecasting: could our explicit knowledge of atmospheric mechanisms be used to improve a machine learning forecast? Could researchers guarantee reproducibility? Could the tools be scaled effectively to HPC? The list went on.

"Many scientists are working on these dilemmas as we speak," Dueben said, "and I'm sure we will have an enormous amount of progress in the next couple of years." Outlining a path forward, Dueben emphasized a mixture of a top-down and a bottom-up approach to link machine learning with weather and climate models. Per his diagram, this would combine neural networks based on human knowledge of Earth systems with reliable benchmarks, scalability and better uncertainty quantification.

As far as where he sees machine learning for weather prediction in ten years?

"It could be that machine learning will have no long-term effect whatsoever, that it's just a wave going through," Dueben mused. "But on the other hand, it could well be that machine learning tools will actually replace almost all conventional models that we're working with."


Mighty CPU rival to Intel and AMD set to shake up the market – TechRadar India

The announcement of Amazon's Graviton2 may well have made AMD and Intel a little nervous - Amazon is, after all, a customer of both. Now, the two companies have even greater reason to be worried.

AnandTech reports that Parisian company SiPearl recently announced it had signed a major agreement with semiconductor giant ARM. The French firm will use ARM IP (Zeus Neoverse CPU) to develop a new set of CPUs: Rhea, Chronos and another unnamed model.

The company is backed by the European Commission as part of the European Processor Initiative (EPI) project, which aims to design a high-performance, low-power microprocessor for Europe's first exascale supercomputer.

Three generations of processors are expected to be delivered in four years, which is a rather ambitious timeline. SiPearl will also be heavily dependent on technology from two other French companies: Kalray and Menta.

Although SiPearl will not, for the foreseeable future, produce any consumer-focused products, its roadmap gestures towards an automotive PoC (proof of concept) and an automotive central processing unit that could be on the horizon.

So, while SiPearl won't compete just yet with the likes of Ampere's Altra, AMD's Epyc family or Intel's Xeon range, it's one to keep a close eye on as Europe wrestles to build an HPC unit capable of competing with global giants.


How SMU computer science professors are using their resources to help find a coronavirus vaccine – The Dallas Morning News

What if university computer scientists, biologists and historians collaborated to use modern artificial intelligence and machine learning to examine a massive trove of infectious disease research papers, text mining for abstract patterns, elusive insights and hard-to-spot trends related to COVID-19 and the coronavirus family of viruses?

Imagine the energy such a group could generate if their students, working remotely and cut off from the normal distractions of student life, jumped in to volunteer for the project. Welcome to the nascent Southern Methodist University Artificial Intelligence Lab.

COVID-19 has been called the greatest challenge since World War II. Artificial intelligence and machine learning technologies were still young during other recent outbreaks: SARS in 2002, H1N1 in 2009, MERS in 2012 and even Ebola in 2014. Increasingly powerful processors, better algorithms and massive amounts of data have changed what we can do in 2020.

Our charge at the time of this pandemic is to deploy everything we now know about AI to discover as much as possible about what we do not know about COVID-19. The larger challenge, however, is to shape a university response that effectively trains students to grapple with a rapidly-changing and destabilized world.

More than a dozen SMU faculty members and students are volunteering their time to text-mine a large collection of scientific papers made available via the White House Office of Science and Technology Policy and a collection of research groups. These works are packaged as a challenge on the data science site Kaggle. We are using SMU's supercomputer to yield insights from these papers that we hope will aid active infectious disease researchers in their search for a solution. As we discover trends, patterns and insights, it is our hope that our AI and machine learning research can assist in the goal to shorten the time required to discover and develop a vaccine or therapy.
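
A first pass at this kind of text mining is often no more elaborate than indexing the paper abstracts and querying them for a topic of interest. The sketch below does that with TF-IDF over the metadata file distributed with the Kaggle challenge; the metadata.csv file name and its title/abstract columns follow the public CORD-19 release and may need adjusting, and this simple search stands in for the more sophisticated models the SMU lab describes.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Load paper metadata from the CORD-19 release (column names per the public dataset).
meta = pd.read_csv("metadata.csv", usecols=["title", "abstract"]).dropna()

# Index every abstract with TF-IDF.
vectorizer = TfidfVectorizer(stop_words="english", max_features=50_000)
doc_vectors = vectorizer.fit_transform(meta["abstract"])

def search(query: str, top_k: int = 5) -> pd.DataFrame:
    """Return the top_k papers whose abstracts best match the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vectors).ravel()
    best = scores.argsort()[::-1][:top_k]
    return meta.iloc[best].assign(score=scores[best])[["title", "score"]]

print(search("incubation period of the novel coronavirus"))
```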

Teams are meeting virtually, outside of hours scheduled for classes, working from apartments or family homes. By matching the COVID-19 research papers with teams of students directed by faculty, we are making use of two major resources universities have to contribute at this very unusual moment in history: brainpower and the gift of time for creative reflection.

Many of our students have hours on their hands after finishing their virtual classwork, and they are eager to serve the public good. They are telling us that they want to be able to look back on this time and remember that they were part of the fight. There is no more valuable gift that we could give our students than preparing them to face an uncertain world.

While we have placed our near-term priority on assisting in the search for a vaccine or therapy, there are many facets to a global pandemic that can and should be addressed. Here are a few ideas that we are contemplating in our lab:

Tasks of this kind are inherently interdisciplinary. Executing them in an effective manner requires creative work at the juncture between the disciplines. Students can only build these domains if they have knowledge of the technical skills they learn in computer science, as well as textual analysis skills they learn from the social sciences and humanities. Responding to coronavirus thus requires our faculty to think creatively about teaching, for instance, in the form of interdisciplinary labs such as the one we have recently launched.

Even while health-related restrictions are forcing students and teachers to limit their interactions to online spaces, SMU's faculty is experimenting with new ways to connect, bringing the immediacy and liveliness of an engaged, intellectual community to meet the challenge of the pandemic. Even more importantly, we are modeling for our students what it looks like to respond in real time to real-world challenges, regrouping and refocusing our research on the pandemic, and inviting them to make discoveries.

At the end of a disrupted semester, these SMU students will have had an educational experience that is enhanced, rather than diminished, at least in terms of opportunities for research.

Frederick R. Chang is a cyber security professor, chair of the Department of Computer Science at Southern Methodist University and the founding director of the SMU AI Lab.

Jo Guldi is an associate professor of history at SMU, where she teaches text mining and is a founding member of the SMU AI Lab.

They wrote this column for The Dallas Morning News.


5 Companies That Came To Win This Week – CRN: Technology news for channel partners and solution providers

The Week Ending April 24

As the COVID-19 pandemic continues to make headlines, some of this week's 5 Companies That Came to Win roundup remains focused on what IT and channel companies are doing to help mitigate the impact of the pandemic and the economic slowdown.

Topping this week's list is Dell Technologies for its new financing and payment options, including $9 billion in financing, to help partners and customers weather the economic crisis.

Also making the list are Hewlett Packard Enterprise and its Aruba business for suspending partner revenue targets in a move to help solution providers maintain their Partner Ready program levels. AMD and Penguin Computing win applause for boosting the performance of a U.S. government supercomputer that is conducting COVID-19 research. Big Data startup Confluent is on the list for raising $250 million in funding. And McAfee has filled its long-vacant global channel chief post as the security company steps up its channel efforts.

Dell Provides $9B In Financing Through New DFS Payment Program

Dell Technologies wins kudos this week for launching a new Payment Flexibility Program that includes zero-percent interest rates for infrastructure solutions and up to 180-day payment deferral to help channel partners and customers cope with the new normal of the economic slowdown.

Dell's financial arm, Dell Financial Services, this week unveiled the new program that also makes $9 billion in financing available to help fund customers' critical technology needs.

DFS is offering zero-percent interest rates for servers, storage and networking systems with no up-front payment required. First payments are deferred for up to 180 days for all data center infrastructure and services. And the company is offering short-term options for remote work and learning solutions with six- and 12-month terms and refresh options for laptop and desktop computers.

Channel partners hailed Dell's financing options, saying they provide themselves and their customers with the flexibility they need to preserve cash in these uncertain times.

HPE, Aruba Suspend Partner Ready Revenue Targets

Hewlett Packard Enterprise has suspended revenue thresholds for both its HPE and Aruba Partner Ready channel programs as part of a broad relief package for partners during the economic slowdown.

The move ensures partners will maintain the same level in the two Partner Ready programs for 2021 even if they fail to meet revenue commitment levels in the wake of the COVID-19 pandemic and resulting decline in the economy.

HPE has also provided financial relief to distributors aimed at helping partners who are focused on small and mid-size businesses. Those include suspending or significantly reducing strategic development initiative targets and providing extended payment and early payment discount terms.

AMD, Penguin Computing Fight COVID-19 In Supercomputer Deal

AMD and Penguin Computing have teamed up to upgrade the U.S. Department of Energy's Corona supercomputer with AMD Radeon Instinct GPUs in a move that's expected to accelerate coronavirus research.

Under the deal announced this week, AMD is donating hundreds of its Radeon Instinct MI50 GPUs as part of the new COVID-19 HPC (high-performance computing) Fund. Penguin Computing, an HPC integration service provider, is offering free upgrade services for the AMD-based Corona supercomputer that Penguin delivered to the Energy department in 2018.

The move accelerates plans by the DOE's Lawrence Livermore National Laboratory, which operates the supercomputer, to outfit the system with more of the Radeon Instinct GPUs. The upgrade will support COVID-19 research at the laboratory, which is part of the White House-led COVID-19 HPC consortium.

Streaming Big Data Startup Confluent Raises Stunning $250M In Additional Funding

Big data startup Confluent caught everybody's attention this week when it raised an impressive $250 million in Series E funding, pushing the company's total financing to $456 million and its market value to $4.5 billion.

Confluent, started by the developers of the open-source Kafka event stream software, develops a Kafka-based platform that helps businesses and organizations manage and act on huge volumes of real-time, continuously produced data such as streams of financial transactions or data from operational IT systems.

Also winning big in venture funding this week was identity security tech developer ForgeRock, which raised an impressive $93.5 million in its own Series E round of funding.

McAfee Hires Ex-Apple Sales Exec For Global Channel Chief

After a nearly two-year vacancy, platform security vendor McAfee has filled its global channel chief position with the hire of former Apple sales executive Kathleen Curry.

Curry worked at Apple for five-and-a-half years as a sales executive, where she was primarily responsible for global client development and the company's Enterprise Design Lab. She spent her first year at Apple supporting global alliances. Before joining Apple she worked at NCR Corp. managing global retail channels, and before that she led enterprise channel sales at Motorola.

Curry's appointment comes as McAfee is stepping up its channel game. The company plans to launch a new partner program this year, and Curry is tasked with bringing together McAfee's channel, operations, alliances and OEM teams as well as expanding partner program initiatives to accommodate the growing number of remote workers.


How Penguin Computing Is Fighting COVID-19 With Hybrid HPC – CRN: Technology news for channel partners and solution providers

Supporting Research On-Premise And In The Cloud

Penguin Computing President Sid Mair said the company is using its high-performance computing prowess on-premise and in the cloud to help researchers tackle the novel coronavirus.

The Fremont, Calif.-based company this week announced it is working with AMD to upgrade the U.S. Department of Energy's Corona supercomputer (a coincidence of a name) with the chipmaker's Radeon Instinct MI50 GPUs to accelerate coronavirus research. But that's not the only way the system integrator is looking to help researchers study and understand the virus.


In an interview with CRN, Mair said the company is in multiple discussions for additional opportunities to help researchers using Penguin Computing's HPC capabilities to study the virus and COVID-19, the disease it causes. But the company is also using its own internal capabilities, an HPC cloud service called Penguin Computing On Demand, to deploy compute resources when there isn't enough time or money for researchers to stand up new on-premise HPC clusters for research.

"We have several researchers that have joined in and are utilizing that environment, and at the moment, we're doing that at no cost for COVID-19 research, even though it is a production commercial system that we currently sell high-performance computing compute cycles on today," he said.

HPC is seen as a critical tool in accelerating the discovery of drugs and vaccines for COVID-19, as demonstrated by the recent formation of the White House-led COVID-19 High Performance Computing Consortium, which counts chipmakers AMD and Nvidia as well as OEMs and cloud service providers like Hewlett Packard Enterprise and Microsoft as members. The effort is also receiving support from Folding@Home, a distributed computing application that lets anyone with a PC or server contribute.

This strategy of utilizing both on-premise and cloud servers to deliver HPC capabilities is referred to by some experts as "hybrid HPC," which Mair said allows researchers to offload compute jobs into the cloud when there aren't enough resources to deploy new on-premise servers.

"They can't upgrade quick enough in order to continue to do their research, so being able to walk in and move their workflow over into an HPC environment that works and acts and implements just like they would do it on-premise but they're doing it in the cloud is becoming very, very beneficial to our researchers," he said.

William Wu, vice president of marketing and product management at Penguin Computing, said Penguin Computing is also planning to expand its offerings for researchers doing anything related to COVID-19, which could include running simulations to understand the impact of easing stay-at-home restrictions.

"We do intend to roll out something much more broader to allow anybody that is doing anything related to COVID, either directly or indirectly, to take advantage of what we're offering," he said.

In Mair's interview with CRN, he discussed how Penguin Computing's new GPU upgrade deal with AMD for the Corona supercomputer came together, how the company protects its employees during server upgrades, why GPUs are important for accelerating COVID-19 research and whether the pandemic is shifting the demand between on-premise and cloud HPC solutions.

What follows is an edited transcript.


Supercomputer Simulations Illuminate the Origins of Planets – HPCwire

Astronomers believe that many planets, including those of our own solar system, emerged from giant disks of gas and dust spinning around stars. To understand these cosmic mechanisms, researchers have typically used simulations to separately examine planetary development and magnetic field formation. Now, new work by researchers from the University of Zurich and the University of Cambridge has unified these fields of study in a single simulation for the first time ever.

Researchers knew that planets likely formed as a result of gravitational instabilities in the disks that allowed particles to congeal together, slowly forming planets over hundreds of thousands of years. With the new study, the research team aimed to examine the effects that magnetic fields have on planet formation in the context of those gravitational instabilities.

To do that, they modified a hybrid mesh-particle method that calculated the mass and gravity using particles, creating a virtual adaptive mesh that allowed the researchers to simultaneously incorporate magnetic fields, fluid dynamics and gravity.

Applying that method required supercomputing power. The researchers turned to Piz Daint, the in-house Cray supercomputer of the Swiss National Supercomputing Centre (CSCS). Piz Daint's 5,704 XC50 nodes each pack an Intel Xeon E5-2690 v3 CPU and an Nvidia Tesla P100 GPU, and its 1,813 XC40 nodes each carry two Intel Xeon E5-2695 v4 CPUs. The two sections stack up at 21.2 Linpack petaflops and 1.9 Linpack petaflops respectively, placing 6th and 185th on the most recent Top500 list of the world's most powerful supercomputers.

After running the simulations on Piz Daint, the researchers got some very interesting results. For some time, the astronomy community has puzzled over why planets spin slower than the disks from which they were born. But now, it seems, they might have their answer.

"Our new mechanism seems to be able to solve and explain this very general problem," said Lucio Mayer, professor of computational astrophysics at the University of Zurich and project manager at the National Centre of Competence in Research PlanetS.

"The simulation shows that the energy generated by the interaction of the forming magnetic field with gravity acts outwards and drives a wind that throws matter out of the disk," Mayer said. "If this is true, this would be a desirable prediction, because many of the protoplanetary disks studied with telescopes that are a million years old have about 90 percent less mass than predicted by the simulations of disk formation so far."

The researchers believe that that matter-ejecting mechanism is the culprit behind the loss of angular momentum in the disks and, ultimately, the planets they birth. The discovery of this mechanism, of course, was only made possible by conducting their unified simulation on Piz Daint.

To read Simone Ulmer's article for CSCS discussing this research, click here.

Originally posted here:

Supercomputer Simulations Illuminate the Origins of Planets - HPCwire

Weather-focused supercomputer shifting gears to aid fight against COVID-19 – KXAN.com

The National Center for Atmospheric Research (NCAR) is joining the COVID-19 High Performance Computing Consortium by providing one of the nation's leading supercomputers to help research the deadly pandemic caused by the COVID-19 virus.

The NCAR-operated Cheyenne supercomputer, a 5.34-petaflop machine that ranks among the world's 50 fastest, will be available to scientists across the country who are working to glean insights into the novel coronavirus that has spread worldwide. Researchers are mounting a massive effort to learn more about the behavior of the virus, such as transmission patterns and whether it is affected by seasonal changes, even as they work toward the development of treatments and vaccines.

"Advanced computing technology is crucial for better understanding the spread and behavior of COVID-19 and helping to protect society from this deadly virus," said NCAR Director Everette Joseph. "We are very pleased that the Cheyenne supercomputer will contribute to this critical effort."

The Cheyenne supercomputer, built by SGI (now Hewlett Packard Enterprise), is one of the world's leading supercomputers for Earth system sciences. It is funded by the National Science Foundation, which is NCAR's sponsor, and by the State of Wyoming through an appropriation to the University of Wyoming. The system is housed at the NCAR-Wyoming Supercomputing Center in Cheyenne, and it encompasses tens of petabytes of storage capacity in addition to the supercomputer.

The White House last month announced the launch of the COVID-19 High Performance Computing Consortium, a unique public-private consortium spearheaded by the White House Office of Science and Technology Policy, IBM, the U.S. Department of Energy, and the National Science Foundation (NSF). It enables researchers to access the most powerful high-performance computing resources to accelerate understanding of the COVID-19 virus and develop methods for combating it.

"The National Science Foundation is very pleased to be part of the COVID-19 HPC Consortium and provide access to the Cheyenne supercomputer and the NCAR-Wyoming Supercomputing Center," said Anjuli Bamzai, director of the NSF Division of Atmospheric and Geospace Sciences. "Cheyenne and other NSF-funded high-end computing resources will enable the nation's research community to pursue advanced modeling using artificial intelligence techniques and other approaches, to gain vital insights into COVID-19 and potential strategies for protecting society."

COVID-19 researchers can submit research proposals to the consortium via an online portal, which will then be reviewed and matched with computing resources from one of the partner institutions. An expert panel of top scientists and computing researchers will work with proposers to quickly assess the public health benefit of the work and coordinate the allocation of the consortium's powerful computing assets.

The consortium's world-class supercomputers process massive amounts of calculations that can answer complex scientific questions in hours or days instead of weeks or months. Such computing power is at a premium and can be difficult for scientists to procure under normal circumstances.

"With society facing an unprecedented challenge, it is imperative to mobilize the most advanced scientific resources in order to protect lives and livelihoods," said Antonio Busalacchi, president of the University Corporation for Atmospheric Research (UCAR), which manages NCAR on behalf of NSF. "We look forward to contributing to this unique partnership of government, private sector, and academic supercomputing resources, which will provide critical assistance to researchers working to understand COVID-19 and bring it under control."

David Hosansky (NCAR/UCAR)

Original post:

Weather-focused supercomputer shifting gears to aid fight against COVID-19 - KXAN.com

NCAR-Operated Supercomputer to Join National COVID-19 Computing Consortium – HPCwire

April 7, 2020 – The National Center for Atmospheric Research (NCAR) is joining the COVID-19 High Performance Computing Consortium by providing one of the nation's leading supercomputers to help research the deadly pandemic caused by the COVID-19 virus.


Source: David Hosansky, National Center for Atmospheric Research and University Corporation for Atmospheric Research

Follow this link:

NCAR-Operated Supercomputer to Join National COVID-19 Computing Consortium - HPCwire

UTEP researchers are working to develop a COVID-19 vaccine with the help of a supercomputer – El Paso Times

Molly Smith, El Paso Times Published 8:19 a.m. MT April 9, 2020 | Updated 10:05 a.m. MT April 9, 2020


As the worldwide coronavirus death toll climbs daily, Dr. Suman Sirimulla feels the pressure that comes with developing a vaccine in real time in the midst of a pandemic.

That pressure, he said, only motivates him to spend as many hours as he can in the lab.

Sirimulla, an assistant professor of pharmaceutical sciences at the University of Texas at El Paso, is working to develop the molecular structure of a drug that would target the novel coronavirus, which causes the respiratory illness COVID-19. To do that, he and his team are using a supercomputer to screen billions of molecular compounds to find ones that could be a match.

"We have some sophisticated algorithms ... where we can design molecules in such a way that they have optimal properties," Sirimulla said. Those properties include reduced toxicity so a vaccine has fewer side effects.

Without the use of a supercomputer, screening billions of molecules would take millions of years.
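A back-of-envelope calculation shows why, and how a hierarchical screen on a massively parallel machine collapses an otherwise hopeless serial workload. The costs and counts below are illustrative assumptions, not figures from the UTEP project.

```python
# Illustrative arithmetic only: all costs and counts are assumptions.

HOURS_PER_YEAR = 24 * 365

n_compounds = 2_000_000_000   # "billions" of candidate molecules
detailed_cost = 10.0          # assumed CPU-hours for one detailed docking run
cheap_cost = 0.01             # assumed CPU-hours for one fast scoring pass
top_hits = 100_000            # candidates promoted to detailed docking
cores = 100_000               # assumed share of a supercomputer

# Naive approach: detailed docking of every compound on a single core.
serial_years = n_compounds * detailed_cost / HOURS_PER_YEAR
print(f"Serial, detailed-only: ~{serial_years:,.0f} years")   # millions of years

# Practical approach: cheap scoring of everything, detailed docking of the
# top hits only, spread across many cores in parallel.
funnel_cpu_hours = n_compounds * cheap_cost + top_hits * detailed_cost
parallel_days = funnel_cpu_hours / cores / 24
print(f"Hierarchical screen on {cores:,} cores: ~{parallel_days:,.0f} days")
```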

UTEP's Dr. Suman Sirimulla, an assistant professor of pharmaceutical sciences at the University of Texas at El Paso, is working on a vaccine against the coronavirus from his lab. (Photo: Mark Lambie/El Paso Times)

Sirimulla and his team, which consists of UTEP faculty and graduate students, as well as University of New Mexico researchers, believe they can develop a vaccine or antiviral drug within the next 15 to 24 months. While that might seem like a long time to a nonscientist, that's actually incredibly fast. It typically takes up to 10 years to develop a new drug.


To speed up the process, the team has enlisted the help of the general public, who can volunteer to run Sirimulla's application on their personal computers through BOINC@TACC.

"Because of the urgency, we're trying to use all of the resources we have right now," he said.


When researchers find a molecular compound that could inhibit the viral proteins in the coronavirus, UNM's lab, which has samples of the virus, will test its efficacy in combating the disease.

From there, the drug would be tested in animals, and if effective, be put to the test in human clinical trials.

Even if other scientists develop a vaccine before them, Sirimulla and his team will continue their research. Multiple vaccines will be needed because a virus has multiple strains, some of which will become vaccine-resistant.

"We need multiple fronts and vaccine," hesaid.


Molly Smith may be reached at 915-546-6413; mksmith@elpasotimes.com; @smithmollyk on Twitter.

Read or Share this story: https://www.elpasotimes.com/story/news/education/2020/04/09/covid-19-utep-researchers-use-supercomputer-seek-coronavirus-vaccine/2970007001/

Originally posted here:

UTEP researchers are working to develop a COVID-19 vaccine with the help of a supercomputer - El Paso Times

Elon Musk Seemingly Used The Superhuman False Narrative In Advancing Teslas Self-Driving Car Ambitions – Forbes

Elon Musk sent a tweet this week referring to Tesla's self-driving tech as potentially being "superhuman," which raises interesting questions about AI.

Superhuman.

What does that mean?

What does that mean to you?

Well, Elon Musk has suggested that Tesla cars outfitted with self-driving tech can definitely be superhuman (in his tweet on April 7, 2020), which invokes the superhuman moniker and raises questions about what exactly the notion of being superhuman portends.

Regrettably, he is joined by a slew of others, both outside the field of AI and even many within the AI field, continuing to proudly and with apparent abandon bandy around the superhuman signature.

The problem is that superhuman is a lousy form of terminology, allowing for inflated allusions to what AI is today, and stokes excessive over-the-top hype as an outright misnomer that spreads marketing blarney, more so than offering bona fide substance.

Some might say that those with a bitter distaste for the use of superhuman are overly tightly wound and should just loosen up about the matter.

No big deal, it would seem.

The counterargument is that in light of the heaps upon heaps of hyperbole going on about AI, there has to be somebody, someplace, and at some point-in-time with a willingness and verve that will start drawing a line in the sand (see my remarks about the dangers and qualms of the superhuman trope at this link here).

One such line would be at the shameless and mindless invoking of the superhuman imagery.

Why pick on superhuman as the straw that breaks the camel's back?

Because it has a visceral stickiness that is going to keep it in use and likely get worse and worse in expanding usage over time.

In short, it sounds nice and catches the imagination, and akin to a veritable snowball, it just keeps rolling ahead, becoming bigger and bigger in popularity as it lumbers down the AI hysteria mountain.

Other ways of hyping AI are often more scientific-sounding and less catchy for the general public.

The super part in superhuman dovetails into our fascination and beloved adulation of the vaunted superman and superwoman comic books, movies, merchandising, etc., and now has become a kind of general lore in our contemporary society (the character of Superman was first showcased on April 18, 1938, in Action Comics #1).

Let's tackle what superhuman even seems to mean.

Suppose someone creates a checkers playing computer program, using AI, and it is able to beat all comers of a human variety.

In 1994, checkers world champion Marion Tinsley conceded a closely watched and highly publicized match to a checkers-playing program called Chinook, a moment that some assert was the point at which computers exceeded humans at the game of checkers.

It has been said that AI checkers-playing programs have become superhuman.

Really?

Are we really willing to ascribe the notion of being superhuman due to the aspect that a computer program was able to best a top-ranked human checkers player?

By the way, many of the games played were draws.

Does that change your opinion about the superhuman capability of the checkers program?

If it was so superhuman, why didn't it whip the human in each and every game played, knocking the human player for a loop and showcasing how super it really is?

Anyway, the key point is that flinging around the superhuman catchphrase can be done by anyone, for whatever reason they might arbitrarily choose.

You see, there isn't a formal definition per se of superhuman.

At least not a definition that all have agreed upon, nor one that all have agreed to reserve for use only in proper settings (kind of like a break-glass-only-when-superhuman-is-truly-warranted rule).

This brings up another facet.

Checkers is an interesting game, but it certainly isn't the most challenging of games (oops, sorry to you checkers fans, please don't go berserk; it's a great game, but you have to admit it is not as complex as, say, Go, Chess, and the like).

Does being superhuman count when the underlying task itself is not the topmost of challenges per se?

Suppose an AI system is able to cook a souffle and the resulting delicacy receives raves as the best ever by anyone, human hands included.

Superhuman!

Superhuman?

Okay, you might say, let's make the stakes higher and use something that humanity has mentally strained to do well for eons, such as the playing of chess.

Chess is a tough game.

We marvel at those human players that can play chess in ways that are a beauty to behold.

In 1997, an IBM chess-playing program running on the Deep Blue supercomputer was able to win against human chess champion Garry Kasparov.

Was that program something we can rightfully refer to as superhuman?

Chess is something that most humans don't do well, and thus it would seem that the program was pretty impressive, along with beating our considered best at the game.

Keep in mind that the only thing the program could do is play chess.

It couldn't write a song, it couldn't carry on a Socratic open-ended dialogue with you, and it otherwise used various programming tricks, such as having in computer memory tons and tons of prior chess positions that it could rapidly search and make use of.

This doesn't seem to be especially super, nor superhuman.

Don't misunderstand or misinterpret such a condemnation: this does not imply that those superb chess-playing programs and checkers-playing programs aren't tremendous accomplishments.

They are!

And each instance whereby, via the use of AI techniques, we make further progress toward achieving (eventually) true AI is something worthy of applauding and offering some kind of trophy or recognition for those triumphs.

But, using a medal or crown that implies being capable of human efforts, and indeed implies the ability to go beyond human efforts, presumably far beyond human efforts as a result of being super, that's not an appropriate way to offer praise.

Consider too the role of common-sense reasoning.

Humans have common-sense reasoning.

As an aside, I realize some might chuckle and say that they know some people who lack common sense, but, putting aside such snickering, there is something called common sense that humans do undeniably seem to have overall (see my analysis of common-sense reasoning at this link here).

There isn't any AI system today that has anything close to what human common-sense reasoning seems to entail.

So, if an AI system is superhuman, does it count that the AI doesn't have a core aspect of human capability, namely that the AI lacks common-sense reasoning?

Wouldn't you tend to assume that something of a superhuman caliber ought to be able to do everything that a human can do, and on top of that, go beyond human reach and be super?

That just seems logical.

Again, it might appear that this is blowing out of proportion the misuse of superhuman as a means to describe AI systems, yet do realize that many aren't aware of the true limitations and narrowness involved in these AI systems that some are saying are superhuman.

The subtle attachment of superhuman to an AI system provides a glow of incredible essence, and inch by inch is convincing the public that AI can do wondrous things of a superhuman nature, all of which creates outsized expectations and sets people up to be misled and less wary of what AI is able to actually do today.

Take another consideration, brittleness.

Many of the Machine Learning (ML) and Deep Learning (DL) systems that are being deployed today are brittle at the edges of what they do.

A facial recognition system that is developed by using ML/DL could be really good at detecting people by their faces, and yet it also can fail to do so when a face is partially obscured or in other circumstances, which, by the way, other humans might not falter at.

Does that facial recognition deserve the superhuman label?

You might say that it does because in some respects it exceeds human ability to recognize faces, but at the same time, this hides the fact that AI-based facial recognition is actually worse than human capability in many ways.

Plus, as mentioned about common-sense reasoning, the AI facial recognition has no understanding that the face so recognized belongs to a human being, or of what a human being is or does. For the AI system, the face is a mathematical construct, no more significant than counting beans.

If something is superhuman, it seems like it ought to be super in all respects, and not brittle or weak in ways that undermine the super part of what it is getting as accolades.

With all of that as background, now let's turn our attention to true self-driving cars.

Here's the question for today: Do AI-based true self-driving cars deserve to get the superhuman tribute, and if so, when or how will we know that it is appropriate and fair to do so?

That's a great question.

Let's unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to AI-based true self-driving cars.

True self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Pondering Superhuman

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Existing Teslas are not Level 4 and nor are they Level 5.

Most would classify them as Level 2 today.

What difference does that make?

Well, if you have a true self-driving car (Level 4 and Level 5), one that is being driven solely by the AI, there is no need for a human driver and indeed no interaction between the AI and a human driver.

For a Level 2 car, the human driver is still in the driver's seat.

Furthermore, the human driver is considered the responsible party for driving that car.

The twist that's going to mess everyone up is that the AI might seem to be able to drive the Level 2 car when in fact it cannot, and thus the human driver still must be attentive and act as though they are driving the car.

With that as a crucial backdrop, here's the tweet that Elon Musk sent on April 7, 2020: "Humans drive using 2 cameras on a slow gimbal & are often distracted. A Tesla with 8 cameras, radar, sonar & always being alert can definitely be superhuman."

The first part of his tweet makes a physics-clever reference to human eyes, saying that they are like two cameras, and our two eyes and head are mounted on our necks, akin to a slow gimbal that allows us to look back-and-forth while driving a car (for my indication of how Elon Musk is shaped by his physics mindset and how that plays out in terms of his actions as a leader and executive, take a look at this link here).

In terms of human drivers succumbing to being distracted while driving, this indeed is a serious and quite troubling problem, along with drivers being intoxicated and otherwise succumbing to a host of human foibles while at the wheel of a car.

Sadly, in the United States alone, there are about 40,000 deaths each year due to car crashes, and an estimated 2.5 million injuries annually.

The hope is that true self-driving cars will avoid incurring as many of those deaths and injuries as possible.

Some believe that we are going to have zero deaths, but this doesn't make logical sense, since there will still be some deaths from car crashes even if we somehow magically had only self-driving cars on our roadways (for why zero fatalities is a zero chance, see my analysis at this link here).

Suppose that true self-driving cars are able to reduce the number of car-related deaths and injuries. Does that mean the AI and the self-driving car are superhuman?

It is tempting to perhaps give the AI such a prize, especially since the task at hand involves life-or-death.

A checkers- or chess-playing AI system is obviously not involved in life-or-death circumstances (unless, perhaps, there's a duel-to-the-death on the line as part of the match, something we don't do anymore).

In short, the AI for a self-driving car has a lot going for it in terms of possibly being a candidate to get the honor of being considered superhuman.

It involves the complexities of driving a car, it entails life-or-death matters, and if it can drive more reliably than humans, it would presumably seem to drive better than humans do.

Still, does that attain a superhuman quality?

Essentially, the AI is driving as well as humans, minus the foibles of humans.

See the article here:

Elon Musk Seemingly Used The Superhuman False Narrative In Advancing Teslas Self-Driving Car Ambitions - Forbes

BSC Uses Bioinformatics, AI, and Marenostrum Supercomputer in Fight Against COVID-19 – HPCwire

April 1, 2020 Barcelona Supercomputing Center (BSC) collaborates in the fight against the coronavirus from different areas: the application of bioinformatics for the research on the virus and its possible treatments, the use of artificial intelligence and natural language processing to analyze the data about the spread and impact of the pandemic and the use of the MareNostrum 4 supercomputer to enable the fight against the coronavirus.

Bioinformatics to search for treatments

From the bioinformatics side, the BSC is an example of how bioinformatics and supercomputers are nowadays an indispensable tool for research centers that have experimental laboratories to accelerate the fight against the coronavirus. Bioinformatics is used for research on the virus and its possible treatments, analyzing the coronavirus genome and its successive mutations, and searching for drugs and immune therapies (antibodies and vaccines).

Genomics

Understanding how the virus has evolved through different epidemics (such as the SARS epidemic in 2003, MERS in 2012, or the current COVID-19) is important because it allows us to understand how it is possible for the virus to pass from one species to another and what changes it has to undergo to make possible this transmission. It sheds light on the virus mode of transmission and the mechanisms it uses to interact with our immune system and the immune system of other species. This is crucial when looking for treatments and for the prevention and prediction of eventual future outbreaks.

This study is carried out on data available in public databases that house genomic sequences of the different virus mutations and animal species. The information is analyzed with computer programs specifically designed for it, some developed in the BSC itself and others by other teams. The processing of these data requires great computational capacity and therefore the high-performance computing resources of MareNostrum 4 supercomputer, hosted and managed by Barcelona Supercomputing Center, are used.
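At its simplest, this kind of comparative analysis boils down to lining up genome sequences and cataloguing where they differ. The toy Python snippet below counts point differences between two already-aligned fragments; the sequences are made up for illustration, and real pipelines work on full ~30-kilobase genomes with dedicated alignment tools.

```python
# Toy illustration of comparing aligned genome fragments for point mutations.
# The sequences below are invented; they are not real coronavirus data.

def point_mutations(ref: str, sample: str):
    """Return (position, ref_base, sample_base) for each mismatch."""
    assert len(ref) == len(sample), "sequences must already be aligned"
    return [(i, a, b) for i, (a, b) in enumerate(zip(ref, sample))
            if a != b and b != "-"]

reference = "ATGTTTGTTTTTCTTGTTTTATTGCCACTAGTC"
variant   = "ATGTTTGTTTTTCTTGTCTTATTGCCACTAGTT"
for pos, ref_base, alt_base in point_mutations(reference, variant):
    print(f"position {pos}: {ref_base} -> {alt_base}")
```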

Search for treatments

Another important aspect is the search for treatments against the diseases caused by the coronavirus, including simulations that reproduce in silico the possible routes that can be exploited to attack this virus.

This process is known in research as docking and consists in simulating in the computer the interactions between the virus and the molecules that could be used to make vaccines, antibody treatments or drug treatments.

To carry out this process, the researchers use the knowledge generated in the research of the virus genome, information on the structures of its virus proteins and data on drugs and other inorganic molecules, which are stored in computer libraries that contain millions of chemical compounds and the results obtained in previous experiments, collected over years by the scientific community.

Computer search or drug screening is very helpful in speeding up the process of finding and validating disease treatments and vaccines, as it greatly cuts the time and investment required for the first phase of this research. Any treatment or vaccine that computer models predict may be successful must subsequently be validated in experimental laboratories, animal testing, and clinical research, and refined in constant collaboration between different research participants.
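Conceptually, the screening step is a ranking loop: score how well each candidate molecule binds a target protein, keep the best, and hand only those to the expensive follow-up stages. The Python sketch below shows that skeleton with a placeholder scoring function; it is not PELE or any specific docking code, and the names and values are hypothetical.

```python
import heapq
import random

def binding_score(compound_id: str, target: str) -> float:
    """Placeholder for a docking/scoring calculation (lower = better binding).
    A real campaign would call a physics-based engine here."""
    rng = random.Random(hash((compound_id, target)))
    return rng.uniform(-12.0, 0.0)   # pretend kcal/mol-like score

def screen_library(compound_ids, target: str, keep: int = 10):
    """Score every compound against the target and return the `keep` best."""
    scored = ((binding_score(cid, target), cid) for cid in compound_ids)
    return heapq.nsmallest(keep, scored)   # most negative scores first

if __name__ == "__main__":
    library = [f"CMPD-{i:07d}" for i in range(100_000)]   # stand-in for millions
    for score, cid in screen_library(library, target="viral_protease", keep=5):
        print(f"{cid}: {score:6.2f}")
```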

To carry out this work, researchers at the BSC use different computer programs, including the PELE molecular interaction modeling software developed at BSC. This software and the power of the MareNostrum 4 supercomputer enable thousands of computational experiments to be performed optimizing the binding of drugs and proteins in a fast and effective way.

At the BSC, research on the virus and its possible treatments is carried out in close collaboration between the groups of Alfonso Valencia, ICREA researcher, director of the BSC Life Sciences Department and leader of the Computational biology group; Víctor Guallar, also an ICREA researcher, head of the Electronic and atomic protein modeling team and main promoter of the PELE software; and Toni Gabaldón, ICREA researcher and head of the Comparative genomics group. All of them work in cooperation with the BSC operations team, who are in charge of providing them with the computational resources needed.

Currently, there are two projects that channel the research carried out at BSC on the coronavirus and its possible treatments: EXSCALATE4CoV (E4C), funded by the European Commission under the H2020 program, and a collaborative project with the research centers IrsiCaixa and CreSa-IRTA.

E4C especially emphasizes basic and applied research to search for drugs, while the collaboration with IrsiCaixa and CreSa-IRTA is more focused on the search for immunological therapies supported by genomic research and bioinformatics tools.

Artificial intelligence to analyze the spread and social impact of the pandemic

BSC's High Performance Artificial Intelligence (HPAI) research group collaborates with UNICEF and IBM on a project that aims to analyze the socioeconomic impact of the virus locally and globally, with an emphasis on social distancing. The goal is to find impact indicators, patterns and statistics that help the UN and local authorities take better and faster measures. The group that carries out the project is currently made up of about 40 people from eight different countries, and focuses on the cases of three cities: New York, Tokyo and Barcelona. HPAI leads the case of Barcelona.

The same team of BSC's artificial intelligence experts collaborates with Mexican researchers and other researchers at the center in the creation of a data collection and analysis system to assist in decision-making to deal with COVID-19, whose spread there is still at a fairly early stage. The project is carried out in collaboration with Mexico City, Nuevo León and Jalisco: http://dash.covid19.geoint.mx/

MareNostrum 4 and users support

The MareNostrum 4 supercomputer, which despite the current circumstances is still in full operation, provides the necessary computational capacity to accelerate ongoing investigations against the coronavirus.

The BSC's Department of Life Sciences is using it for its own research, but the center also made it available to research teams or external entities that need high-performance computing for their research against the coronavirus.

The BSC Operations Department provides support in the use of the MareNostrum 4, both to internal and external researchers.

About BSC

Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS) is the national supercomputing centre in Spain. The center is specialised in high performance computing (HPC) and manages MareNostrum, one of the most powerful supercomputers in Europe, located in the Torre Girona chapel. BSC is involved in a number of projects to design and develop energy efficient and high performance chips, based on open architectures like RISC-V, for use within future exascale supercomputers and other high performance domains. The centre leads the pillar of the European Processor Initiative (EPI), creating a high performance accelerator based on RISC-V. More information: www.bsc.es

Source: Barcelona Supercomputing Center

See the original post:

BSC Uses Bioinformatics, AI, and Marenostrum Supercomputer in Fight Against COVID-19 - HPCwire

OLCF and Summit Supercomputer Join the COVID-19 High Performance Computing Consortium – HPCwire

March 31, 2020 – The Oak Ridge Leadership Computing Facility and the Summit supercomputer have joined forces with other U.S. Federal agencies, industry, and academic leaders to provide access to the world's most powerful high-performance computing resources in support of COVID-19 research through the COVID-19 High Performance Computing Consortium.

The Consortium is a unique private-public effort spearheaded by the White House Office of Science and Technology Policy, the U.S. Department of Energy and IBM to bring together federal government, industry, and academic leaders who are volunteering compute time and resources on their world-class machines.

Learn more and submit a research proposal: https://www.ibm.com/covid19/hpc-consortium

About the Oak Ridge Leadership Computing Facility

The Oak Ridge Leadership Computing Facility is charged with helping researchers solve some of the world's most challenging scientific problems with a combination of world-class high-performance computing (HPC) resources and world-class expertise in scientific computing.

Source: Katie Bethea, Oak Ridge Leadership Computing Facility

Here is the original post:

OLCF and Summit Supercomputer Join the COVID-19 High Performance Computing Consortium - HPCwire

Supercomputer Testing Probes Viral Transmission in Airplanes – HPCwire

It might be a long time before the general public is flying again, but the question remains: how high-risk is air travel in terms of viral infection? In an article for the Texas Advanced Computing Center (TACC), Faith Singer-Villalobos highlighted new, supercomputer-enabled research that explored how viruses travel and transmit on airplanes.

The new research, which was led by Ashok Srinivasan (a professor of computer science at the University of West Florida), aimed to use pedestrian dynamics models to assess disease spread in airplanes. Typically, pedestrian dynamics researchers have used the Self-Propelled Entity Dynamics model, or SPED, which essentially constitutes a molecular dynamics simulation where the molecules are people and the rules of interaction are social not simply physical. However, like molecular dynamics models, SPED was slow, limiting its utility in urgent situations.

To bridge that gap, the research team developed CALM, loosely short for constrained linear movements in a crowd. CALM, which dropped the molecular dynamics framework of SPED, is targeted at assessing movement in narrow passages, and its lighter foundation allows for 90-second runtimes (a 60-fold speedup relative to SPED). The researchers applied CALM to analyze how passengers disembarked on three different airplanes. Because human behavior is unpredictable, they also ran simulations with a thousand different variables, using the distribution of the results to generate distributions of predicted human behavior.

To run their massive quantity of simulations, the researchers turned to TACC's Frontera supercomputer, the world's fifth largest per the latest Top500 ranking with 23.5 Linpack petaflops. Frontera's 8,008 compute nodes are powered by Intel Xeon Platinum 8280 CPUs and connected by Mellanox HDR100 InfiniBand. Frontera also has two subsystems, both equipped with four Nvidia GPUs per node (Quadro RTX 5000s power one subsystem, while V100s power the other).

"Frontera was the natural choice, given that it was the new NSF-funded flagship machine," Srinivasan said. "One question you have is whether you have generated a sufficient number of scenarios to cover the range of possibilities. We check this by generating histograms of quantities of interest and seeing if the histogram converges. Using Frontera, we were able to perform sufficiently large simulations that we now know what a precise answer looks like."
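That convergence check can be expressed compactly: keep adding batches of randomized scenarios until the normalized histogram of the quantity of interest stops moving. The Python sketch below uses a stand-in random model in place of a real CALM run, so the outcome model, batch size and tolerance are assumptions for illustration only.

```python
import random
from collections import Counter

def simulate_scenario(rng: random.Random) -> int:
    """Stand-in for one CALM run; returns a made-up 'close contacts' count."""
    return max(0, int(rng.gauss(20, 6)))

def normalized_hist(samples, bin_width=5):
    counts = Counter(s // bin_width for s in samples)
    total = len(samples)
    return {b: c / total for b, c in counts.items()}

def max_shift(h1, h2):
    bins = set(h1) | set(h2)
    return max(abs(h1.get(b, 0.0) - h2.get(b, 0.0)) for b in bins)

rng = random.Random(42)
samples, prev = [], None
while True:
    samples += [simulate_scenario(rng) for _ in range(500)]   # one more batch
    hist = normalized_hist(samples)
    if prev is not None and max_shift(prev, hist) < 0.02:     # histogram has settled
        break
    prev = hist
print(f"Histogram converged after {len(samples)} scenarios")
```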

The researchers made particular use of Frontera's GPU-driven subsystem, given that CALM had been designed to leverage GPUs. "Using the GPUs turned out to be a fortunate choice because we were able to deploy these simulations in the COVID-19 emergency," Srinivasan said. "The GPUs on Frontera are a means of generating answers fast."

As for answers, Srinivasan cautions that models aren't an exact proxy for real-world cases due to the impact of outlier events, but the simulations expose flaws in the systems and guide best practices.

"In our approach, we don't aim to accurately predict the actual number of cases," he explained. "Rather, we try to identify vulnerabilities in different policy or procedural options, such as different boarding procedures on a plane. We generate a large number of possible scenarios that could occur and examine whether one option is consistently better than the other. If it is, then it can be considered more robust. In a decision-making setting, one may wish to choose the more robust option, rather than rely on expected values from predictions."
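The "more robust option" idea can likewise be sketched in a few lines: simulate both options across many scenarios and ask how often one beats the other, instead of comparing only their average outcomes. The outcome model and policy names below are invented purely for illustration and do not represent any real boarding-procedure result.

```python
import random

def outcome(policy: str, rng: random.Random) -> float:
    """Invented outcome model (e.g. a proxy for exposures); illustration only."""
    base = {"policy_A": 3.0, "policy_B": 3.4}[policy]
    return max(0.0, rng.gauss(base, 1.0))

rng = random.Random(7)
n_scenarios = 10_000
a_wins = sum(outcome("policy_A", rng) < outcome("policy_B", rng)
             for _ in range(n_scenarios))
print(f"policy_A better in {a_wins / n_scenarios:.0%} of simulated scenarios")
```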

To read the original article discussing this research, click here.

Read the rest here:

Supercomputer Testing Probes Viral Transmission in Airplanes - HPCwire

Coronavirus Massive Simulations Completed on Supercomputer – UC San Diego Health

A coronavirus envelope all-atom computer model is being developed by the Amaro Lab of UC San Diego on the NSF-funded Frontera supercomputer of TACC at UT Austin. Biochemist Rommie Amaro hopes to build on her recent success with all-atom influenza virus simulations (left) and apply them to the coronavirus (right). Credit: Lorenzo Casalino (UC San Diego), TACC

Scientists are preparing a massive computer model of the coronavirus that they expect will give insight into how it infects in the body. They've taken the first steps, testing the first parts of the model and optimizing code on the Frontera supercomputer at the University of Texas at Austin. The knowledge gained from the full model can help researchers design new drugs and vaccines to combat the coronavirus.

UC San Diego's Rommie Amaro is leading efforts to build the first complete all-atom model of the SARS-CoV-2 coronavirus envelope, its exterior component.

Rommie Amaro, Professor of Chemistry and Biochemistry, University of California, San Diego.

"If we have a good model for what the outside of the particle looks like and how it behaves, we're going to get a good view of the different components that are involved in molecular recognition," said Amaro, a professor of chemistry and biochemistry.

Molecular recognition involves how the virus interacts with the angiotensin converting enzyme 2 (ACE2) receptors and possibly other targets within the host cell membrane.

The coronavirus model is anticipated by Amaro to contain roughly 200 million atoms, a daunting undertaking, as the interaction of each atom with one another has to be computed. Her team's workflow takes a hybrid, or integrative modeling approach.

"We're trying to combine data at different resolutions into one cohesive model that can be simulated on leadership-class facilities like Frontera," Amaro said. "We basically start with the individual components, where their structures have been resolved at atomic or near atomic resolution. We carefully get each of these components up and running and into a state where they are stable. Then we can introduce them into the bigger envelope simulations with neighboring molecules."

On March 12-13, the Amaro Lab ran molecular dynamics simulations on up to 4,000 nodes, or about 250,000 processing cores, on Frontera at the Texas Advanced Computing Center at the University of Texas at Austin.
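Some quick arithmetic conveys why runs of that size are needed. The figures below are rough, generic estimates rather than the Amaro Lab's actual numbers; production MD codes avoid the naive all-pairs force sum by using interaction cutoffs and particle-mesh methods for long-range electrostatics.

```python
# Rough, illustrative scale estimates for a 200-million-atom MD system.

n_atoms = 200_000_000

# A naive all-pairs force evaluation would touch ~2e16 atom pairs every step.
naive_pairs = n_atoms * (n_atoms - 1) // 2
print(f"All-pairs interactions per step: {naive_pairs:.1e}")

# Just the core per-atom state (positions, velocities, forces) in double precision:
bytes_per_atom = 3 * 3 * 8            # three 3-vectors of 8-byte floats
print(f"Core state alone: ~{n_atoms * bytes_per_atom / 1e9:.0f} GB")
```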

Amaro's work with the coronavirus builds on her success with an all-atom simulation of the influenza virus envelope, published in ACS Central Science, in February 2020. She said that the influenza work will have a remarkable number of similarities to what they're now pursuing with the coronavirus.

The NSF-funded Frontera supercomputer of the Texas Advanced Computing Center at UT Austin is ranked #5 fastest in the world and #1 for academic systems, according to the November 2019 Top500 rankings. (Credit: TACC)

"It's a brilliant test of our methods and our abilities to adapt to new data and to get this up and running right off the fly," Amaro said. "It took us a year or more to build the influenza viral envelope and get it up and running on the national supercomputers. For influenza, we used the Blue Waters supercomputer, which was in some ways the predecessor to Frontera. The work, however, with the coronavirus obviously is proceeding at a much, much faster pace. This is enabled, in part, because of the work that we did on Blue Waters earlier."

According to Amaro, these simulations will provide new insights into the different parts of the coronavirus that are required for infectivity.

"And why we care about that is because if we can understand these different features, scientists have a better chance to design new drugs; to understand how current drugs work and potential drug combinations work. The information that we get from these simulations is multifaceted and multidimensional and will be of use for scientists on the front lines immediately and also in the longer term," Amaro explained. "Hopefully, the public will understand that there's many different components and facets of science to push forward to understand this virus. These simulations on Frontera are just one of those components, but hopefully an important and a gainful one."


More here:

Coronavirus Massive Simulations Completed on Supercomputer - UC San Diego Health

UAH researchers and the world’s fastest supercomputer join the fight against the COVID-19 virus – alreporter.com

The number of people applying for unemployment in Alabama continues to skyrocket amid the COVID-19 outbreak, but there are fewer people handling those claims this month than last.

The Alabama Department of Labor closed an office in Birmingham and let some workers go earlier this month. That staffing shortage, coupled with an onslaught of new claims, has slowed the time it's taking to process them, one worker told APR.

Approximately 74,056 people filed unemployment claims during the week that ended March 28, according to the department's preliminary data. That was far more than had ever been filed for any week going back to 1987, when the U.S. Department of Labor began keeping data on weekly unemployment claims.

"Where we would have alerted a claimant that it would take two to three weeks, now the verbiage is, as soon as administratively possible," the employee at the department told APR by phone Saturday. The person asked not to be identified as they're still employed with the state.

It's currently taking between six and seven weeks to process claims, the worker said, and people who have applied are expressing concern over the long wait.

"It's an issue," the worker said.

The employee said workers at the now-closed Birmingham office were called into a meeting on Feb. 18 and told the office would close for good on March 13. Anyone who wanted to continue working for the department had to report to the Montgomery office on March 16, the worker said, or they would be considered to have quit.

In a response to APR's questions, Alabama Department of Labor spokeswoman Tara Hutchison wrote that "Eleven employees found other positions in a career center or tax office, three employees resigned in lieu of transferring, two are retiring, and six conditional employees were separated."


There was no discussion in that Feb. 18 meeting of the novel coronavirus or the possibility of mass filings, the workers said. There was discussion of what might happen if another recession hit, the person said, but administrators didn't have a plan for that.

China informed the World Health Organization about the novel coronavirus on Dec. 31. President Donald Trump on Jan. 31 banned foreign nationals entry into the country if they had traveled to China within the last two weeks.

According to the Centers for Disease Control and Prevention there were 18 confirmed COVID-19 cases in the U.S. as of Feb. 18, the day workers were told the Birmingham office would be closing.

A day after the Feb. 18 meeting at the Birmingham office, Iran's COVID-19 outbreak began.

By March 8, eight days before workers were ordered to show up to the Montgomery office, Italy ordered a lockdown of 60 million residents. Three days later the World Health Organization classified COVID-19 as a pandemic.

By March 13, the day the Birmingham office closed, there were 2,611 confirmed COVID-19 cases in the U.S.

The worker said just 15 of the 37 employees made the move to the Montgomery office, and those who did are faced with an overwhelming workload and are spending hours each day doing jobs that others had done before the move. All but one of the 15 adjudicate claims, the person said, meaning they process them and determine whether the person should receive unemployment benefits.

Hutchison told APR that the decision to close the Birmingham office was made because of funding and budget issues.

"The Unemployment Insurance program's budget has been cut repeatedly for several years. The building's rental and overhead costs were eliminated by transferring those employees to the Montgomery Call Center," Hutchison said in the message.

The worker questioned, however, why the department waited until a month before the planned closure to inform the staff.

"As you know, we are taking in remarkable numbers of new claims due to COVID-19. There was no way to know at the time that this situation would occur. We are working constantly to improve service, and one of those ways is by reutilizing those employees who transferred to other positions, and having them accept claims," Hutchison said. "We are also looking to bring back those conditional employees who have separated, if they haven't found other work. Additionally, the federal government is providing increased funding to assist with staffing issues."

The Birmingham office was already short-staffed enough to have been allowing staff there overtime pay to handle existing claims, the employee said.

"This just added just a whole new level," the person said.

The workers said staff at the department want the public to know that they care and are working hard to get claims processed as quickly as possible.

"We want to make sure that we're doing the job right. We want to make sure that we're following guidelines that we've had in place all throughout our employment with how to do these claims," the person said. "If the public knew that, that would be great."

Here is the original post:

UAH researchers and the world's fastest supercomputer join the fight against the COVID-19 virus - alreporter.com

BSC uses bioinformatics, AI and supercomputer in the fight against the coronavirus – Science Business

Barcelona Supercomputing Center (BSC) collaborates in the fight against the coronavirus from different areas: the application of bioinformatics for the research on the virus and its possible treatments, the use of artificial intelligence and natural language processing to analyse the data about the spread and impact of the pandemic and the use of the MareNostrum 4 supercomputer to enable the fight against the coronavirus.

Bioinformatics to search for treatments

From the bioinformatics side, the BSC is an example of how bioinformatics and supercomputers are nowadays an indispensable tool for research centres that have experimental laboratories to accelerate the fight against the coronavirus. Bioinformatics is used for research on the virus and its possible treatments, analysing the coronavirus genome and its successive mutations, and searching for drugs and immune therapies (antibodies and vaccines).

Genomics

Understanding how the virus has evolved through different epidemics (such as the SARS epidemic in 2003, MERS in 2012, or the current Covid-19) is important because it allows us to understand how it is possible for the virus to pass from one species to another and what changes it has to undergo to make possible this transmission. It sheds light on the virus mode of transmission and the mechanisms it uses to interact with our immune system and the immune system of other species. This is crucial when looking for treatments and for the prevention and prediction of eventual future outbreaks.

This study is carried out on data available in public databases that house genomic sequences of the different virus mutations and animal species. The information is analysed with computer programs specifically designed for it, some developed in the BSC itself and others by other teams. The processing of these data requires great computational capacity and therefore the high-performance computing resources of MareNostrum 4 supercomputer, hosted and managed by Barcelona Supercomputing Center, are used.

Search for treatments

Another important aspect is the search for treatments against the diseases caused by the coronavirus, including simulations that reproducein silicothe possible routes that can be exploited to attack this virus.

This process is known in research as "docking" and consists in simulating in the computer the interactions between the virus and the molecules that could be used to make vaccines, antibody treatments or drug treatments.

To carry out this process, the researchers use the knowledge generated in the research of the virus genome, information on the structures of its virus proteins and data on drugs and other inorganic molecules, which are stored in computer libraries that contain millions of chemical compounds and the results obtained in previous experiments, collected over years by the scientific community.

Computer search or drug screening is very helpful in speeding up the process of finding and validating disease treatments and vaccines, as it greatly cuts the time and investment required for the first phase of this research. Any treatment or vaccine that computer models predict may be successful must subsequently be validated in experimental laboratories, animal testing, and clinical research, and refined in constant collaboration between different research participants.

To carry out this work, researchers at the BSC use different computer programs, including the PELE molecular interaction modelling software developed at BSC. This software and the power of the MareNostrum 4 supercomputer enable thousands of computational experiments to be performed optimizing the binding of drugs and proteins in a fast and effective way.

At the BSC, research on the virus and its possible treatments is carried out in close collaboration between the groups of Alfonso Valencia, ICREA researcher, director of the BSC Life Sciences Department and leader of the Computational biology group; Víctor Guallar, also an ICREA researcher, head of the Electronic and atomic protein modeling team and main promoter of the PELE software; and Toni Gabaldón, ICREA researcher and head of the Comparative genomics group. All of them work in cooperation with the BSC operations team, who are in charge of providing them with the computational resources needed.

Currently there are two projects that channel the research carried out at BSC on the coronavirus and its possible treatments: EXSCALATE4CoV (E4C), funded by the European Commission under the H2020 program, and a collaborative project with the research centres IrsiCaixa and CreSa-IRTA.

E4C especially emphasises basic and applied research to search for drugs, while the collaboration with IrsiCaixa and CreSa-IRTA is more focused on the search for immunological therapies supported by genomic research and bioinformatics tools.

Artificial intelligence to analyse the spread and social impact of the pandemic

BSC's High Performance Artificial Intelligence (HPAI) research group collaborates with UNICEF and IBM on a project that aims to analyse the socioeconomic impact of the virus locally and globally, with an emphasis on social distancing. The goal is to find impact indicators, patterns and statistics that help the UN and local authorities take better and faster measures. The group that carries out the project is currently made up of about 40 people from eight different countries, and focuses on the cases of three cities: New York, Tokyo and Barcelona. HPAI leads the case of Barcelona.

The same team of BSC's artificial intelligence experts collaborates with Mexican researchers and other researchers at the centre in the creation of a data collection and analysis system to assist in decision-making to deal with COVID-19, whose spread there is still at a fairly early stage. The project is carried out in collaboration with Mexico City, Nuevo León and Jalisco: http://dash.covid19.geoint.mx/

MareNostrum 4 and users support

The MareNostrum 4 supercomputer, which despite the current circumstances is still in full operation, provides the necessary computational capacity to accelerate ongoing investigations against the coronavirus.

The BSC's Department of Life Sciences is using it for its own research, but the center also made it available to research teams or external entities that need high-performance computing for their research against the coronavirus.

The BSC Operations Department provides support in the use of the MareNostrum 4, both to internal and external researchers.

Read more:

BSC uses bioinformatics, AI and supercomputer in the fight against the coronavirus - Science Business

Researchers Join Forces to Investigate the Airborne Transmission of Coronavirus Using a Supercomputer – SciTechDaily

The first situation to simulate is of someone coughing indoors. Photo: Petteri Peltonen / Aalto University

The researchers are using a supercomputer to carry out 3D modeling and believe that the first results will be obtained in the next few weeks.

The project includes fluid dynamics physicists, virologists, and biomedical engineering specialists. The researchers are using a supercomputer to carry out 3D modeling and believe that the first results will be obtained in the next few weeks.

Aalto University, the Finnish Meteorological Institute, VTT Technical Research Centre of Finland and the University of Helsinki have brought together a multidisciplinary group of researchers to model how the extremely small droplets that leave the respiratory tract when coughing, sneezing or talking are transported in air currents. These droplets can carry pathogens such as coronaviruses. The researchers will also use existing information to determine whether the coronavirus could survive in the air.

Dozens of researchers are involved, ranging from fluid dynamics physicists to specialists in virology, medical technology, and infectious diseases. The project was launched based on a proposal put forward by Janne Kuusela, Chief Physician at the Essote Emergency Clinic run by the South Savo Joint Authority for Social and Health Services.

For the modeling work, the researchers are using a supercomputer that CSC Finnish IT Center for Science Ltd has made available at very short notice.

"Under normal conditions, researchers may have to queue for many days to start their simulations on CSC machines. There is no time for that now, so instead, we are permitted exceptionally to start straight away," says Aalto University Assistant Professor Ville Vuorinen, who is leading the cooperative project.

The division of work for the project is clear. Aalto University, VTT Technical Research Centre of Finland and the Finnish Meteorological Institute will carry out the 3D airflow modeling together with the droplet motion. The task of the virology and infectious diseases specialists is to analyze the implications of the models for coronavirus infections. The research group is working closely with the physicians at Essote and infectious diseases specialists from the Finnish Institute for Health and Welfare.

The first situation to be simulated is that of a person coughing in an indoor environment. The boundary conditions, such as the air velocity, are specified in order to ensure that the different models produced are comparable and that it is possible, for example, to assess the necessary safety distances between people.

"One aim is to find out how quickly the virus concentrations dilute in the air in various airflow situations that could arise in places such as a grocery store," says Vuorinen.
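As a crude point of reference for that dilution question, a well-mixed-room model says an aerosol concentration decays exponentially with the ventilation rate. The Python sketch below uses assumed air-change rates purely for illustration; it is far simpler than the 3D flow simulations the Finnish team is running and is not part of their work.

```python
import math

def concentration(c0: float, air_changes_per_hour: float, minutes: float) -> float:
    """Exponential decay of a well-mixed aerosol concentration (no sources)."""
    return c0 * math.exp(-air_changes_per_hour * minutes / 60.0)

for ach in (2.0, 6.0):                       # assumed ventilation rates
    for t in (5, 15, 30):
        c = concentration(1.0, ach, t)
        print(f"ACH={ach}: after {t:>2} min, {c:5.1%} of the initial concentration remains")
```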

"Visualising the invisible movements of viral particles is very important in order to better understand the spreading of infectious diseases and the different phenomena related to this, both now and in the future," he adds.

Researchers believe that the high computing capacity and close, multidisciplinary cooperation will mean that the first results will be obtained already in the next few weeks.

CSC Finnish IT Center for Science Ltd is prioritizing the provision of computing capacity and expert assistance for research aimed at combating the COVID-19 pandemic. If you are working directly on a pandemic research project, please contact [emailprotected].

"I fully encourage other researchers to do research on the coronavirus epidemic as it is really time to roll up the sleeves. Within the space of just a few hours, we have put a team together and started research immediately," says Vuorinen.

Read the rest here:

Researchers Join Forces to Investigate the Airborne Transmission of Coronavirus Using a Supercomputer - SciTechDaily