Supercomputer predicts Premier League top four as Chelsea, Man Utd and Tottenham battle it out – Mirror Online

Chelsea, Tottenham and Manchester United all remain firmly in contention for Champions League football next season.

With Liverpool, Manchester City and Leicester looking firm favourites to finish in the top three, Chelsea are in pole position to claim fourth spot.

Despite boss Frank Lampard labelling his side as underdogs in the race, they're currently four points ahead of fifth-placed Spurs heading into the winter break.

However, they've struggled in recent weeks, winning just one of their last five league games.

But a supercomputer expects them to recover their form and finish in the final coveted Champions League spot.

Following their morale-boosting win over Manchester City on Sunday, Tottenham are seen as one of the main contenders to leapfrog the Blues before the end of the campaign.

They're expected to drop off in the final weeks this term though.

Jose Mourinho's men will come home in seventh, with only 21 points from their next 13 games.

According to the supercomputer, Manchester United will finish one place below them in eighth.

The Red Devils have lost more league games than they've won since Ole Gunnar Solskjaer became the permanent manager.

Their problems are due to continue as it's anticipated they'll finish a massive 14 points off fourth.

Wolves' impressive season shows no sign of tailing off as they're predicted to be sixth, sealing qualification for the Europa League once again.

It's Sheffield United who will continue to be the surprise package though.

After securing promotion from the Championship last time around, Chris Wilder's men will continue to defy expectations in finishing fifth, eight points behind fourth-placed Chelsea.

Meanwhile, Arsenal's difficult season is set to continue.

The Gunners have picked up just six wins so far and their total of 31 points after 25 games is their lowest since the 1912/13 season.

With only 17 points from their final 13 games, Mikel Arteta's side are predicted to finish ninth.

There is also an interesting prediction in the race to finish second.

Most expect Manchester City to be runners-up - the defending champions are currently two points ahead of Leicester.

But the supercomputer has backed the Foxes to be Liverpool's closest challengers at the end of this campaign.

Here is how the final table for the 2019/20 season is predicted to look:

1. Liverpool - 112 points

2. Leicester - 84

3. Man City - 77

4. Chelsea - 69

5. Sheffield United - 61

6. Wolves - 58

7. Tottenham - 56

8. Man Utd - 55

9. Arsenal - 48

10. Everton - 48

11. Crystal Palace - 45

12. Newcastle - 45

13. Brighton - 44

14. Burnley - 43

15. Southampton - 40

16. West Ham - 37

17. Bournemouth - 34

18. Aston Villa - 31

19. Watford - 30

20. Norwich - 27


Leeds fans react as super computer tips them for the title – FootballFanCast.com

7/2/2020 | 08:00pm

There are a few signs that we're coming towards the end of winter.

Leaves are growing back on trees, the sun is staying out for a bit longer and TalkSPORT have once again wheeled out their infamous super computer.

Indeed, the radio station continued their regular tradition of using their groundbreaking piece of technology to predict the Championship table, and it makes for good reading for Leeds fans.

Yes, according to the machine, the Whites' 16-year wait for a place in the Premier League is finally going to come to an end as they've been tipped to win the title.

Understandably, a number of United fans were happy to see their side top this table.

Of course, there's still a long way to go, but after winning just two of their last nine games it seems as though this was a much-needed boost for some members of the fanbase.

Others had their doubts about this prediction.


One fan jokingly asked whether or not the computer predicted Leeds to lose to Wigan, while others commented that a similar prediction was made last year after the Elland Road outfit were touted for a second-place finish and automatic promotion.

In other news, Leeds may miss Phillips and Forshaw more than ever on Saturday.


Follow-up: Virologists and supercomputer need to conquer the coronavirus – Innovation Origins

In our weekly follow-up column we feature a sequel to the best-read article of the past week. This week: An Austrian start-up discovers an already existing drug that could potentially be used against the coronavirus.

The number of people who have died from the coronavirus has now risen to over 800. The virus has thus claimed more victims than the SARS epidemic did in 2002 and 2003. At the moment, almost 35,000 people worldwide are infected with the coronavirus according to the World Health Organization.

Scientists all over the world are trying to find a cure for the virus. However, before there is any such cure, nothing else can be done except take precautions. "Make sure precautionary measures are taken so that the virus cannot spread any further," Harald Wychgel of the RIVM explains. "In China you see that entire cities are on lock down. The number of infections in the EU is not that high, but it is important that we are vigilant about this. We're taking precautions in order to prevent it from spreading."

Virologists claim that it will take at least another year before a drug against the virus is released on the market. Research is being done on vaccines where a weakened version of the virus is injected into the body. This causes the body to produce antibodies, which become active when the body becomes infected by the virus. Research is also underway to find a means of preventing the virus from spreading more widely. "Just like the way HIV inhibitors work. But before such a drug is approved, a lot of time is wasted on trial and error," Wychgel says.

But what if you could tackle the coronavirus with an established drug that has already been approved for use in human beings? Which is exactly what Innophore does. They're an Austrian company that originated as a spin-off from the University of Graz. They do what's referred to as drug repurposing, as in when an established drug is applied in a new way. "Which in itself is not so novel," says founder Christian Gruber. "Viagra was originally intended to regulate blood pressure. Thanks to repurposing, it has been given a whole new purpose."

Gruber believes that the main advantage of this research method is the time it takes. "It is no longer necessary to conduct clinical trials as the drug has already been approved for use in humans." But how do you discover other applications for established medicines? Gruber and his team developed a powerful search engine for this purpose. "Normally, a platform searches for a match between a compound (a substance that has the potential to fight a disease) and the virus. But we're not looking for a compound. We look, so to speak, inside the void where a compound binds to the virus. This is based on machine learning and we've been working on it since 2011."
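Gruber's description is high-level, and Innophore's actual pipeline and descriptors are not public, but the general idea of comparing binding sites ("voids") rather than compounds can be illustrated with a deliberately simplified sketch. Everything below is an assumption for illustration only: each pocket is summarized as a short, fixed-length numeric descriptor (shape, charge, hydrophobicity features and so on, with invented values), and known drug targets are ranked by similarity to a pocket on the new virus.

```python
import numpy as np

# Toy illustration of binding-site ("void") matching for drug repurposing.
# Descriptor values and target names are invented, not real data.
known_pockets = {
    "hiv_protease_inhibitor_target": np.array([0.90, 0.10, 0.40, 0.70]),
    "ace_inhibitor_target":          np.array([0.20, 0.80, 0.50, 0.10]),
    "sars_protease_target":          np.array([0.85, 0.15, 0.45, 0.65]),
}

# Descriptor of a pocket on the new virus's protease (again, invented numbers).
new_virus_pocket = np.array([0.88, 0.12, 0.42, 0.68])

def cosine(a, b):
    # Cosine similarity between two descriptor vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank existing drug targets by how closely their pockets resemble the new one;
# the drugs approved for the top-ranked targets become repurposing candidates.
ranked = sorted(known_pockets.items(),
                key=lambda kv: cosine(new_virus_pocket, kv[1]),
                reverse=True)
for name, vec in ranked:
    print(f"{name}: similarity {cosine(new_virus_pocket, vec):.3f}")
```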

Gruber got involved when the genome sequence of the virus was catalogued in one of the three largest DNA databases in the world. "We decided right away that whatever happens, we don't want to make a profit from this. This is because we have contacts in China too; it's terrible what's happening there right now."

And that worked, because within a few hours the Gruber team came up with what are known as protease inhibitors (substances that prevent the virus from spreading further). "The virus has the same structure as the SARS virus. So we explored all the databases that we can access, looking for possible targets. These include HIV inhibitors, for example."

The model that Gruber published was downloaded by researchers all over the world. "Incredible. Normally, a handful of researchers in that particular area look at that kind of model. Since we published the model, our inboxes have been overflowing. We're getting proposals for research collaborations from universities and institutes that we would never have dreamed of before."

Gruber is proud of this, yet he doesn't want to take too much credit either. "We were the first to publicize it and share it with the rest of the world. But in China, scientists have been working behind the scenes for much longer, reviewing and testing our findings so that they can be quickly tested on people. But it's great that the Centers for Disease Control and Prevention in China are grateful to us and want to continue working with us."

Gruber is currently busy drafting a research proposal for the European Union. The EU has set aside an emergency budget of €10 million for research into the coronavirus. "We have scientists from all over the world: Oxford, Graz, Harvard, medical universities in Germany and the technical university in Wuhan. We're working on the proposal together with a group of fifty to seventy people."

In the proposal, the scientists want to link various research platforms and databases and provide them with an automated response platform. Think of it as a kind of robot that immediately springs into action in the event of a new outbreak of a virus and searches for available medication that can also be used for that new virus. By joining forces, it should even be possible to find other compounds that may help prevent viruses. The coronavirus in this case.

"The best-case scenario is that the virus is already under control and we are able to focus on other diseases or viruses," Gruber says. "We also want to ensure that all of the information is always available. Luckily it has never happened before, but what if an outbreak prevents you from being able to access that information? We want to have secure cloud storage. And we need to make sure that all available platforms can bundle information in a worthwhile way. I am very excited about this project. When it gets off the ground we will be using the most advanced technology available, a dream come true for us."

However, the priority right now is to contain the coronavirus. "When I read the reports about cruise ships where people have been infected, I get the shivers. Imagine being aboard one of those ships. I can very well imagine how frightened passengers are. That's why it's so important to have an automated search engine that will quickly come up with viable options. I'm not a virologist and I don't have much to say about epidemics, but the sooner resources are available to contain viruses, the better."


Sometimes The Road To Petaflops Is Paved With Gold And Platinum – The Next Platform

Supercomputing, with a few exceptions, is a shared resource that is allocated to users in a particular field or geography to run their simulations and models on systems that are much larger than they might otherwise be able to buy on their own. Call it a conservation of core-hour-dollars that allows a faster time to model in exchange for limited access.

So it is with the Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen (HLRN) supercomputing alliance in Northern Germany. The HLRN consortium, which provides calculating oomph for the German federal states of Berlin, Brandenburg, Bremen, Hamburg, Mecklenburg-Western Pomerania, Lower Saxony, and Schleswig-Holstein, has used a variety of different architectures from different vendors over the past several decades, and as such is representative of mainstream HPC shops that, as we pointed out recently, comprise the majority of the revenue stream in the HPC sector and account for thousands of HPC facilities worldwide. HLRN in particular has a very large number of university and research institution users, at close to 200, all jockeying for time on the system, so adding capacity makes the lines a bit shorter, at least in theory.

The second phase of the HLRN-IV supercomputer, known by the nickname Lise after Lise Meitner, an Austrian-Swedish physicist who was one of the discoverers of nuclear fission in 1939, has fired up recently, and the machine is noteworthy for a few reasons. First, Atos is the prime contractor on the machine, and second, it is based on the doubled-up Cascade Lake-AP Xeon SP-9200 Platinum processors that Intel launched last April and that are employed in custom enclosures that Intel itself manufactures.

Since its founding in 2001, the HLRN consortium has operated a distributed system across two datacenters; one is usually at the Zuse Institute Berlin and the other has been located at Leibniz University in Hannover or at the University of Göttingen. The initial HLRN-I system, which was called Hanni and Berni across its two halves, comprised on each side a 16-node cluster of IBM's RS/6000 p690 servers based on its dual-core Power4 processors, which debuted that year. The p690 machines had 32 sockets and 64 GB of main memory each and were connected by a proprietary federation interconnect that IBM created for its parallel NUMA systems. This HLRN-I machine had 26 TB of disk capacity and had a peak performance of 2 teraflops at 64-bit double precision. You can get a graphics card with way more floating point performance these days, and it fits in your hand instead of taking up two datacenters.

In 2008, these systems were upgraded with a pair of Altix ICE supercomputers from Silicon Graphics in Berlin and Hannover, called Bice and Hice naturally. This system had a mix of NUMA and scale-out nodes. The NUMA nodes were comprised of a mix of two-socket Altix XE 250 nodes and two-socket Altix UV 1000 nodes using a mix of Xeon processors from Intel (four-core and eight-core chips with fatter memory) and the NUMAlink5 interconnect to share the memory across the 2,816 cores and 12.5 TB of main memory across the 200 nodes in the machine. The regular, scale-out part of each side of the HLRN-II system had a mix of two generations of Xeon processors across its 10,240 cores in 1,280 nodes and a total of 12.1 TB of main memory. Add it all up and the HLRN-II machine had 124.76 teraflops of double precision floating point calculating capacity; this was balanced out by an 810 TB Lustre parallel file system.

Enter HLRN-III in 2013, which we wrote about five years later. This machine, which cost $39 million, was built in phases like prior systems using a mix of generations, in this case by Cray based on its Cascade XC30 and XC40 system designs and their Aries interconnect. The HLRN-III systems were nicknamed Konrad and Gottfried and they each used a mix of Ivy Bridge and Haswell Xeon processors, with the Berlin system having a total of 1,872 nodes with 44,928 cores and 117 TB of memory yielding a peak performance of 1.4 petaflops, and Leibniz University Hannover (which is where the Gottfried name comes from, the mathematician and co-creator of calculus) having a total of 1.24 petaflops of oomph and 105 TB of memory across its 1,680 nodes and 40,320 cores. Each machine had a 3.7 PB Lustre file system and a 500 TB GPFS file system.

With the HLRN-IV system, the two halves are not just a little bit different, but really distinct systems that were installed at different times. The Emmy system at the University of Göttingen, which was operational in October 2018, was named after groundbreaking German mathematician Amalie Emmy Noether, who blazed a trail for women in that field as much as Meitner did in physics. The Emmy system at Göttingen had 449 nodes, with 448 of them having just Skylake Xeon SP-6148 Gold processors and one of them having four Volta Tesla V100 GPU accelerators from Nvidia added. Not counting that GPU-accelerated system, Emmy had 17,920 cores across its 448 nodes and 93 TB of memory. These nodes were interlinked with a 100 Gb/sec Omni-Path interconnect from Intel, and its performance was never divulged. Presumably Emmy will be upgraded at some point to deliver the expected 16 petaflops of aggregate performance.

The Lise half of the system in Berlin, which is just coming online, has significantly more computational power than that initial Emmy partition in Göttingen. This system currently has 1,180 nodes with 113,280 cores in total using a pair of the Xeon AP-9242 Platinum chips per node, which themselves put two 24-core Cascade Lake processors into a single socket for a total of four chips and 96 cores per node. These nodes are also interlinked with 100 Gb/sec Omni-Path interconnect. This machine is noteworthy in that it is showcasing Intel's multichip Cascade Lake-AP processors, which have not really dented the attack by the AMD Epyc processors and which are not exactly taking the HPC market by storm. (We suspect HLRN got a great deal on these Intel Cascade Lake-AP chips and the servers that sport them, with Atos as the system integrator hopefully making some dough.) Back in November 2019, when the Lise system was tested with 103,680 of its cores on the Linpack benchmark, it was rated at 5.36 petaflops, so there must be some pretty big upgrades on the horizon to get to the 16 petaflops and more than 200,000 cores that the final HLRN-IV system (Emmy plus Lise) will eventually encompass. The completed system with all of those 16 petaflops spread across the Berlin and Göttingen sites will cost €30 million, or about $32.6 million.

The interesting bit as far as we are concerned is that the combined HLRN-IV system will have 6.2X more double precision performance at 16.4 percent lower cost than the HLRN-III system it replaces seven years on. This illustrates the principle that we have talked about before, which is that it is far easier to increase the performance of a supercomputer than it is to lower its price. HPC centers have tended to budget linearly over the decades, but it is getting more expensive to make the flops leaps. Still, a 7.4X improvement in bang for the buck over seven years can get a deal done.
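A quick back-of-the-envelope check using only the figures quoted above (2.64 petaflops and $39 million for HLRN-III, about $32.6 million for HLRN-IV, and an assumed aggregate target of roughly 16.4 petaflops, which is what makes the stated ratios come out) reproduces the 6.2X, 16.4 percent and 7.4X claims:

```python
# Price/performance check using this article's own figures (approximate).
hlrn3_pflops = 1.4 + 1.24        # Konrad + Gottfried peak petaflops
hlrn3_cost   = 39.0e6            # dollars
hlrn4_pflops = 16.4              # assumed Emmy + Lise aggregate peak petaflops
hlrn4_cost   = 32.6e6            # ~30 million euros at the quoted exchange rate

print(f"Performance ratio: {hlrn4_pflops / hlrn3_pflops:.1f}X")          # ~6.2X
print(f"Cost change: {(1 - hlrn4_cost / hlrn3_cost) * 100:.1f}% lower")  # ~16.4%

bang3 = hlrn3_pflops / hlrn3_cost   # petaflops per dollar, HLRN-III
bang4 = hlrn4_pflops / hlrn4_cost   # petaflops per dollar, HLRN-IV
print(f"Flops-per-dollar improvement: {bang4 / bang3:.1f}X")             # ~7.4X
```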

We realize that our bang for the buck comparisons are imprecise because of the lack of publicly available data on supercomputer costs over time, but at around $15,000 per teraflops back in 2013, the HLRN-III cluster was twice as expensive per flops as the Tianhe-2 system in China, which used Xeon Phi accelerators, but about half the price of the all-CPU and very custom PrimeHPC systems from Fujitsu that were inspired by the K supercomputer at RIKEN lab in Japan. The price of systems, particularly those that used accelerators, dropped significantly between 2013 and 2018, and GPU-accelerated machines like Summit and Sierra cost just north of $1,000 per teraflops around the time the all-CPU Emmy portion of the HLRN-IV system was going in, which cost $2,038 per teraflops at current euro to dollar exchange rates. Call it two grand.

So in general, all-CPU machines are, it seems, more expensive, and this stands to reason. The programming is harder for GPU-accelerated machines, and that costs money, too. Or you can, as many HPC centers outside of the largest national labs do, stick with all-CPU architectures and pay the premium there. GPU-accelerated exascale machines due to be installed in the United States in 2021 through 2023 will cost on the order of $400 per teraflops, and we suspect that all-CPU systems over that timeframe will cost 2X to 3X that per teraflops. None of that counts the facilities or electricity costs that come with the architecture choices, of course. As best we can figure.


The Role of AI in the Development of Medical Pods – HostReview.com

In the recent period, Artificial Intelligence (AI) has entered the mainstream culture. What's more, AI is growing in popularity at a rapid rate, expanding its reach with each passing day.

Lots of scientists are working on improving the AI experience. They try hard to uplift the human condition and reveal the knowledge that was hidden from the public for so long.

For instance, the so-called Medical Pods are an area where Artificial Intelligence plays a vital role. And yet, few people are aware that such technology even exists.

But don't worry, we got you covered. Our guide on the role of AI will explain everything you need to know about this advanced off-world technology that could save humanity.

In recently published articles, Jared Rand and Ileana, the Star Traveler, have explained the nature of Medical Pods. In essence, these pods are chambers that resemble the beds that we currently use in medicine.

Yet, Medical Pods are not human-created, and this feature provides them with exquisite abilities. For example, they can regenerate tissue and correct imperfections. The entire principle of these beds relies on the use of Tachyon particle energy, which is plasma energy in its raw form. As we all know, plasma energy is everywhere around us.

According to Rand and Ileana, there are three types of medical pods:

Even though they follow the same concept, these three classes can offer different healing capacities. But what is the role of AI in all this? Let's find out.

The collective human consciousness is apprehensive about the potential of Artificial Intelligence. However, the ET technology is free from such prejudice. Moreover, AI plays an integral role in the 3D-5D transition, which is happening as we speak. The so-called Great Awakening is a paradigm shift that should wake up the collective and give birth to a new reality.

When it comes to Medical Pods, AI is not autonomous but it controls many aspects of the chambers. In a way, MRI works on a similar principle.

So, Artificial Intelligence in Medical Pods is a super-computer software with near-endless applications.

Above all else, this system can perform a complete internal analysis of the body. Every tissue and every organ in the body will be subjected to in-depth analysis, down to the micron level. The AI is also able to scan the entire neural network, as well as to perform laparoscopic surgeries.

The bottom line is that the AI can link with the human's vibrational frequency. We can then use the information obtained from the 3D anatomical scanning to cure diseases, heal wounds, and even revive people. With a low error rate and extreme precision, AI-controlled Medical Pods are the future of medicine.

Artificial Intelligence can tackle even the most complex of tasks, and Medical Pods are just the tip of the iceberg. A lot more knowledge lies beyond the reach of the collective human conscience, but we are getting there.

Once Medical Pods are revealed to us, we will be able to use them in a variety of ways. The most important thing is that this advanced technology will save lives and improve the quality of life here on Earth.


Be More Chill Takes Over London Beginning February 12 – Playbill.com

Joe Iconis and Joe Tracz's Be More Chill begins performances at London's The Other Palace February 12. With a Tony-nominated score by Iconis and a book by Tracz, the musical follows the story of an unpopular teen who takes a supercomputer pill to become cool, only to discover that the A.I. wants to take over the world.

Scott Folan (Mother of Him) plays the role of Jeremy Heere in the U.K. debut, with Blake Patrick Anderson (Closer to Heaven) as Michael. Two alums of the musical Six, Renee Lamb and Millie O'Connell, play Jenna and Chloe, respectively.

Joining the quartet are Miracle Chance as Christine Canigula, Stewart Clarke as The Squip, Eloise Davies as Brooke Lohst, James Hameed as Rich Goranski, Miles Paloma as Jake Dillinger, and Christopher Fry as Mr. Heere and Mr. Reyes, with Gabriel Hinchcliffe, Eve Norris, and Jon Tsouras as understudies.

READ: Examining the Legacy of Be More Chill With 5 Members of the Broadway Cast

The creative team includes director Stephen Brackett, choreographer Chase Brock, set designer Beowulf Boritt, costume designer Bobby Frederick Tilley II, lighting designer Tyler Micoleau, sound designer Ryan Rumery, and projection designer Alex Basco Koch. Orchestrations are by Charlie Rosen, with vocal arrangements by Emily Marshall, and musical direction by Louisa Green. U.K. casting is by Will Burton.

A 2015 world premiere of Be More Chill at Two River Theater in New Jersey led to a cast album, which took the internet by storm. An Off-Broadway premiere followed in 2018, quickly selling out and adding an extension to its run, followed by a Broadway production opening in 2019.

Jerry Goehring and Lisa Dozier King serve as executive producers for the U.K. production.


Google claims to have invented a quantum computer, but IBM begs to differ – The Conversation CA

On Oct. 23, 2019, Google published a paper in the journal Nature entitled "Quantum supremacy using a programmable superconducting processor." The tech giant announced its achievement of a much-vaunted goal: quantum supremacy.

This perhaps ill-chosen term (coined by physicist John Preskill) is meant to convey the huge speedup that processors based on quantum-mechanical systems are predicted to exhibit, relative to even the fastest classical computers.

Google's benchmark was achieved on a new type of quantum processor, code-named Sycamore, consisting of 54 independently addressable superconducting junction devices (of which only 53 were working for the demonstration).

Each of these devices allows the storage of one bit of quantum information. In contrast to the bits in a classical computer, which can only store one of two states (0 or 1 in the digital language of binary code), a quantum bit, or qbit, can store information in a coherent superposition state, which can be considered to contain fractional amounts of both 0 and 1.
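For readers who want to see what "fractional amounts of both 0 and 1" means concretely, here is a minimal sketch in plain NumPy (nothing to do with Google's actual control software): a single qbit represented as a two-component vector of complex amplitudes whose squared magnitudes give the measurement probabilities.

```python
import numpy as np

# A classical bit is 0 or 1; a qbit is a normalized 2-vector of complex amplitudes.
zero = np.array([1, 0], dtype=complex)   # the |0> state
one  = np.array([0, 1], dtype=complex)   # the |1> state

# An equal superposition: a measurement yields 0 or 1 with 50% probability each.
psi = (zero + one) / np.sqrt(2)

probs = np.abs(psi) ** 2
print(probs)        # [0.5 0.5]
print(probs.sum())  # sums to 1 (up to floating-point rounding): the state is normalized
```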

Sycamore uses technology developed by the superconductivity research group of physicist John Martinis at the University of California, Santa Barbara. The entire Sycamore system must be kept cold at cryogenic temperatures using special helium dilution refrigeration technology. Because of the immense challenge involved in keeping such a large system near the absolute zero of temperature, it is a technological tour de force.

The Google researchers demonstrated that the performance of their quantum processor in sampling the output of a pseudo-random quantum circuit was vastly better than a classical computer chip like the kind in our laptops could achieve. Just how vastly became a point of contention, and the story was not without intrigue.

An inadvertent leak of the Google group's paper on the NASA Technical Reports Server (NTRS) occurred a month prior to publication, during the blackout period when Nature prohibits discussion by the authors regarding as-yet-unpublished papers. The lapse was momentary, but long enough that The Financial Times, The Verge and other outlets picked up the story.

A well-known quantum computing blog by computer scientist Scott Aaronson contained some oblique references to the leak. The reason for this obliqueness became clear when the paper was finally published online and Aaronson could at last reveal himself to be one of the reviewers.

The story had a further controversial twist when the Google group's claims were immediately countered by IBM's quantum computing group. IBM shared a preprint posted on the arXiv (an online repository for academic papers that have yet to go through peer review) and a blog post dated Oct. 21, 2019 (note the date!).

While the Google group had claimed that a classical (super)computer would require 10,000 years to simulate the same 53-qbit random quantum circuit sampling task that their Sycamore processor could do in 200 seconds, the IBM researchers showed a method that could reduce the classical computation time to a mere matter of days.

However, the IBM classical computation would have to be carried out on the world's fastest supercomputer, the IBM-developed Summit OLCF-4 at Oak Ridge National Laboratory in Tennessee, with clever use of secondary storage to achieve this benchmark.
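The reason IBM needed "clever use of secondary storage" is easy to see with a little arithmetic: holding the full state vector of a 53-qbit system as double-precision complex amplitudes takes on the order of a hundred petabytes, far beyond any machine's main memory, so the amplitudes have to be spilled to Summit's huge parallel file system. A rough estimate, assuming 16 bytes per complex amplitude:

```python
# Memory needed for a brute-force state-vector simulation of n qbits,
# assuming one double-precision complex number (16 bytes) per amplitude.
n_qubits = 53
amplitudes = 2 ** n_qubits
bytes_needed = amplitudes * 16

print(f"{amplitudes:.3e} amplitudes")
print(f"{bytes_needed / 1e15:.0f} petabytes of state vector")  # roughly 144 PB
```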

While of great interest to researchers like myself working on hardware technologies related to quantum information, and important in terms of establishing academic bragging rights, the IBM-versus-Google aspect of the story is probably less relevant to the general public interested in all things quantum.

For the average citizen, the mere fact that a 53-qbit device could beat the world's fastest supercomputer (containing more than 10,000 multi-core processors) is undoubtedly impressive. Now we must try to imagine what may come next.

The reality of quantum computing today is that very impressive strides have been made on the hardware front. A wide array of credible quantum computing hardware platforms now exist, including ion traps, superconducting device arrays similar to those in Google's Sycamore system, and isolated electrons trapped in NV-centres in diamond.

These and other systems are all now in play, each with benefits and drawbacks. So far researchers and engineers have been making steady technological progress in developing these different hardware platforms for quantum computing.

What has lagged quite a bit behind are custom algorithms (computer programs) designed to run on quantum computers and able to take full advantage of possible quantum speed-ups. While several notable quantum algorithms exist (Shor's algorithm for factorization, for example, which has applications in cryptography, and Grover's algorithm, which might prove useful in database search applications), the total set of quantum algorithms remains rather small.

Much of the early interest (and funding) in quantum computing was spurred by the possibility of quantum-enabled advances in cryptography and code-breaking. A huge number of online interactions, ranging from confidential communications to financial transactions, require secure and encrypted messages, and modern cryptography relies on the difficulty of factoring large numbers to achieve this encryption.

Quantum computing could be very disruptive in this space, as Shors algorithm could make code-breaking much faster, while quantum-based encryption methods would allow detection of any eavesdroppers.

The interest various agencies have in unbreakable codes for secure military and financial communications has been a major driver of research in quantum computing. It is worth noting that all these code-making and code-breaking applications of quantum computing ignore to some extent the fact that no system is perfectly secure; there will always be a backdoor, because there will always be a non-quantum human element that can be compromised.

More appealing for the non-espionage and non-hacker communities (in other words, the rest of us) are the possible applications of quantum computation to solve very difficult problems that are effectively unsolvable using classical computers.

Ironically, many of these problems emerge when we try to use classical computers to solve quantum-mechanical problems, such as quantum chemistry problems that could be relevant for drug design and various challenges in condensed matter physics, including a number related to high-temperature superconductivity.

So where are we in the wonderful and wild world of quantum computation?

In recent years, we have had many convincing demonstrations that qbits can be created, stored, manipulated and read using a number of futuristic-sounding quantum hardware platforms. But the algorithms lag. So while the prospect of quantum computing is fascinating, it will likely be a long time before we have quantum equivalents of the silicon chips that power our versatile modern computing devices.



Hands-on review: Alienware m15 r2: supercomputer or superweapon? – FutureFive New Zealand

Alienware's latest creation, the m15 r2 gaming laptop, is a testament to how much punch the evil scientists at Alienware could pack into a fifteen-inch laptop, without skimping on portability.

The m15 is a seriously cool laptop. From unboxing it from its glossy black box to the final chime of the Windows setup, everything feels refined, simplistic and highly polished.

Somehow, the m15 manages to combine the chic of a Tesla with the power of a superweapon.

But with a $2800 price tag, your own Death Star is probably a cheaper option.

With laptops tending to lean towards the bezel-less, paper-thin frame popularised by the MacBook and Surface Pro, it's refreshing to see a computer manufacturer trying something different. The ice-white colour is pretty synonymous with Alienware by this point, and the m15 is no different. It definitely has its curves, and it flaunts them where it can.

Dell seems to have noticed the little things, too. From the glowing ring around the end of the power cord, to the grinning alien logo that doubles as a power button, the attention to detail helps to complete the package.

Visually, it just looks good.

But you can't ignore its flaws.

Mainly, the gigantic horizontal cooling fan Frankenstein-ed to the laptop behind the screen. The mega-fan might have something to do with the weight, too, which is frankly an issue. Coming in at a little over 2kg, it isn't something to hold while you walk around the room in a conference call. Thankfully, with the weight comes sturdiness, with everything from the hinges to the screen feeling sturdy enough to handle even the most careless laptop user.

Dimension-wise, the m15 is somehow sleek and bulky at the same time. What matters most is that it's sleek enough to shimmy into a backpack, which realistically is the greatest test of a laptop, a test it passes with flying colours.

Where the m15 really begins to shine is its performance.

The m15 is powered by the latest and greatest ninth-generation Intel i9 core, and together with an obscenely powerful RTX 2080 graphics card and 16GB of DDR4 RAM, the computer has no difficulty running the hottest games on the market. From Halo Reach to the latest Call of Duty, everything eventually falls to the m15. The RTX graphics card also allows for ray tracing in compatible games, offering a level of futureproofing not typically found in laptops.

The 15-inch OLED display is crisp to a fault and makes gaming and entertainment a pleasure. Colours are vibrant, and the thin bezel surrounding the screen is inoffensive. The laptops speakers are surprisingly good too, with the snazzy honeycomb-style speakers sitting pretty above the keyboard.

Speaking of the keyboard, I found it had a surprising level of functionality when using it, although I wouldn't go writing any novels on the m15 anytime soon.

When I slammed the oversized cooling fan earlier, I may have been a bit harsh. For what it is, the laptop creates some heat, and mounting the fan behind the screen is innovative and pretty smart by Alienware. Keeping it out of sight and out of mind, the fan is surprisingly quiet.

While this is realistically a gaming PC hidden within a laptop, the Alienware m15 definitely doesn't forget its roots. Its brilliant display and crisp sound are definite winners, while the performance is enough to rival most high-end gaming computers. Despite its unforgiving weight, the m15's surprising portability and sturdiness is definitely something to be commended.

The m15 isn't for the faint of heart: it's a commitment, a commitment to the bulky lifestyle the m15 brings. You can expect to groan as you lift it out of your bag, but can also expect a gaming experience that can compete with the best of them.


Giant Planets Could Form Around Tiny Stars in Just a Few Thousand Years – Universe Today

M-type (red dwarf) stars are cooler, low-mass, low-luminosity objects that make up the vast majority of stars in our Universe, accounting for 85% of stars in the Milky Way galaxy alone. In recent years, these stars have proven to be a treasure trove for exoplanet hunters, with multiple terrestrial (aka. Earth-like) planets confirmed around the Solar System's nearest red dwarfs.

But what is even more surprising is the fact that some red dwarfs have been found to have planets that are comparable in size and mass to Jupiter orbiting them. A new study conducted by a team of researchers from the University of Central Lancashire (UCLan) has addressed the mystery of how this could be happening. In essence, their work shows that gas giants only take a few thousand years to form.

The study, which recently appeared in the journal Astronomy & Astrophysics, was the work of Dr. Anthony Mercer and Dr. Dimitris Stamatellos of the UCLan Jeremiah Horrocks Institute for Mathematics, Physics & Astronomy (JHI MPA). Dr. Mercer, an Astrophysics Reader with the JHI MPA, led the research under the supervision of Dr. Stamatellos, who leads the institute's Theoretical Star Formation & Exoplanets group.

Together, they studied how planets could form around red dwarf stars to determine what mechanism would allow for the formation of super-massive gas giants. According to conventional models of planet formation, where the gradual build-up of dust particles leads to progressively bigger bodies, red dwarf systems should not have enough mass to form super-Jupiter-type planets.

To investigate this discrepancy, Mercer and Dr. Stamatellos used the UK Distributed Research using Advanced Computing (DiRAC) supercomputer, which connects facilities at Cambridge, Durham, Edinburgh, and Leicester universities, to simulate the evolution of protoplanetary discs around red dwarf stars. These rotating discs of gas and dust are common around all newly born stars and are what eventually lead to planet formation.

What they found was that if these young discs are large enough, they can fragment into different pieces, which would coalesce due to mutual gravitational attraction to form gas giant planets. However, this would require that the planets form within a few thousand years, a timescale that is extremely fast in astrophysical terms. As Dr. Mercer explained:

"The fact that planets may be able to form on such a short timescale around tiny stars is incredibly exciting. Our work shows that planet formation is particularly robust: other worlds can form even around small stars in a variety of ways, and therefore planets may be more diverse than we previously thought."

Their research also indicated that these planets would be extremely hot after they form, with temperatures reaching thousands of degrees in their cores. Because they don't have an internal energy source, they would become fainter over time. This means that these planets would be easy to observe in the infrared wavelength when they are still young, but the window for direct observation would be small.

Still, such planets could still be observed indirectly based on their effect on their host star, which is how planets orbiting red dwarf stars have typically been found. This is known as the Radial Velocity Method (aka. Doppler Spectroscopy), where changes in the star's spectra indicate that it is moving, which is an indication of planets exerting their gravitational influence on it. As Dr. Stamatellos added:

"This was the first time that we were able not only to see planets forming in computer simulations but also to determine their initial properties with great detail. It was fascinating to find that these planets are of the fast and furious kind: they form quickly and they are unexpectedly hot."

These results are nothing if not timely. Recently, astronomers detected a second extrasolar planet around Proxima Centauri, the closest star to our own. Unlike Proxima b, which is Earth-sized, rocky, and orbits within the star's habitable zone, Proxima c is believed to be 1.5 times the size of Earth, half as massive as Neptune (making it a mini-Neptune), and to orbit well outside Proxima Centauri's habitable zone.

Knowing that there is a possible mechanism that allows gas giants to form around red dwarf stars puts us a step closer to understanding these entirely common but still mysterious stars.

Further Reading: UCLan, Astronomy & Astrophysics



What will happen when robots have taken all the jobs? – Telegraph.co.uk

To some this will sound like a nanny-state hellscape, and Susskind does not shy from calling his proposed solution The Big State. He does not, however, go into detail about how exactly the community will decide which activities are worthy of payment. Perhaps we will be subject to the tyranny of a slim majority that decides dog-breeding, classical music or literary criticism are valueless activities, in which case no one will ever do them again.

But the moral objection to UBI (that it will encourage laziness and anomie) is always at bottom a puritan condescension. If one asked Susskind whether, if he never had to worry about money, he would just spend all day watching reruns of Bake Off and slumping into potato-ish ennui, he would probably deny it. So why assume it of everyone else?

As it turns out, Bertrand Russell anticipated this objection 90 years ago: "It will be said that while a little leisure is pleasant, men would not know how to fill their days if they had only four hours work out of the 24. Insofar as this is true in the modern world it is a condemnation of our civilisation; it would not have been true at any earlier period. There was formerly a capacity for light-heartedness and play which has been to some extent inhibited by the cult of efficiency."

Modern sceptics might still dismiss Russell's argument as a Fabian pipe-dream, but the cult of efficiency is still very much abroad, and it is indeed what is driving the race to automation. Susskind's careful analysis shows that it will be an increasingly unignorable problem, even if his proposed solution will not convince everyone. At the last gasp, he even drops in the alarming recommendation that our future politicians should guide us on what it means to live a flourishing life, in the face of which prospect one might after all be happier to resign oneself to a robot apocalypse.

A World Without Work is published by Allen Lane at £20. To order your copy for £16.99, call 0844 871 1514 or visit the Telegraph Bookshop


We know the earth is warming. We know that will stress water in the West. But we don’t know how. – The Colorado Sun

Flavio Lehner was a graduate student working with computer models simulating Earth's climate at the University of Berne in Switzerland when he had a chance to join a research vessel collecting sea temperatures and measuring ocean currents between Greenland and Svalbard, Norway.

"As a lifestyle, field work is very agreeable," Lehner said. "But for me, it was a watershed moment. I had to decide which way to go."

Was it to be a life in the real world of ocean voyages or mathematical abstractions?

Climate is changing, Colorado researchers agree. But how will it change snow and water in the West?

"They had been measuring ocean currents for 10 years," Lehner said. "In real-world data collection, you look at one fraction of the Earth for a long time. With models, you can look at the big-picture questions."

Two of those big-picture questions are how much snow will fall on the mountains of the West and how much water will be available for the regions forests, farms and cities in a world growing warmer as greenhouse gases build up in the atmosphere.

Today, Lehner, 35, and his colleagues at the National Center for Atmospheric Research in Boulder, are trying to divine answers through a welter of mathematical calculations designed to reflect how the world works.

Those equations are linked together in NCAR's Community Earth System Model, a sort of algorithmic Rube Goldberg machine, which connects a set of algorithms representing the laws of physics that govern the planet (thermodynamics, transfer of radiation and global conservation of momentum and water) and uses them to generate a picture of the future.

It takes years to construct such a model, and it is hoped it accurately reflects the world. "The models do have deficiencies, and we work on those," Lehner said.

For example, there is a tongue of cold water in the tropical Pacific Ocean near the equator. "The position and shape of it don't look realistic in a lot of models," Lehner said, and that, in turn, could affect predicted rain patterns.

If you think of a map of the world overlaid by a checkerboard, you get a vision of the cells into which the data is distributed: Greenland ocean temperatures into the Greenland cell and Rocky Mountain snowfall into the Colorado cell. The cells are big, 60 miles by 60 miles to as much as 150 miles on a side.

The smaller the cells, the better the resolution in the model and the clearer the picture, but more computer power and more detailed data are required.

The model is run on various scenarios to see what will happen to the world. Seeing what happens to rain and snow in the West is trickier.

"Temperature on a regional scale is clear," Lehner said. "If it is particularly warm in Boulder, it is going to be warm in Denver, but it can rain in Boulder and not in Denver."

The mountains make the Western cells even more difficult to model. As a result, predicting what is going to happen to rain and snow in the West is challenging.

The big models dont even agree on whether there will be more or less precipitation in the West as the world warms.

As the air gets hotter, it can hold more moisture (7% more for each 1-degree Celsius increase in temperature) but whether that translates to more rain or just a few heavier storms is unclear.
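That 7 percent per degree compounds, so sustained warming adds up quickly. A rough illustration using only the figure quoted above:

```python
# Rough compounding of the ~7% increase in moisture-holding capacity per 1 C of warming.
rate = 0.07
for warming_c in (1, 2, 3, 4):
    extra = (1 + rate) ** warming_c - 1
    print(f"+{warming_c} C -> ~{extra * 100:.0f}% more moisture capacity")
# +1 C -> ~7%, +2 C -> ~14%, +3 C -> ~23%, +4 C -> ~31%
```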

Hot air that rises at the equator moves hundreds of miles north and south, descending to create a band of deserts around the world, including the Sahara, Gobi and the American Southwest's Sonoran.

The models show the Southwest deserts advancing north as the world heats up, but by just 10 miles in some, and by 100 miles in others. "It is easier to see the movement over the oceans, which are flat," Lehner said, "but more complicated once mountains and valleys are added." So, over land it is not always clear what will happen to the rainfall and deserts.

To better calibrate the Earth system models, 30 of the big model groups around the world, from Japan to China to Russia to Canada to Boulder, are participating in an exercise called the Coupled Model Intercomparison Project, or CMIP, in which they all run the same data set to see where their models differ.

"It isn't straightforward when you see these models don't agree," Lehner said. "Then it's a lot of detective work to figure out why."

NCAR's model is housed in Eagle, the center's supercomputer in Cheyenne, and Lehner can tap into it from his laptop (a bit like checking your bank balance online) and run simulations.

While Eagle is the fastest supercomputer in the world dedicated to energy research, performing 8 million-billion calculations per second, to run the data through the model's mass of algorithms takes at least 24 hours for a 20-year projection; a century will take weeks.

While modelers try to sort out the glitches and differences, all the models do agree on quite a lot, including the basic fact that as the amount of greenhouse gases in the atmosphere rises, the world will be warmer and in many places drier.

"In the political realm, if we don't have the answer for sure, we don't know anything," Lehner said. "We don't yet know with any certainty what will happen to precipitation over the Southwest, but we can anticipate that in a warmer world (and a warmer world is certain) we will see more stresses on water resources."

To get a better sense of what will happen at a regional level, researchers take the data from Earth system models and downscale it to smaller models.

United States Geological Survey researcher Gregory McCabe, for example, constructed a hydrological water balance model taking into account variables such as precipitation, temperature, soil moisture and snow accumulation and melt. The cells in this model were approximately 2 miles by 2 miles.

The model showed that since 1980 there have been lower-than-average snow conditions in the western U.S. that are unprecedented within the context of 20th century climate.

When the model was applied to the Upper Colorado River Basin with a future average temperature increase of 0.86 degrees Celsius, stream flow declined by 8%. When the average temperature was increased to 2 degrees Celsius, stream flow dropped 17%.
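Those two model runs imply a rough rule of thumb for the basin's temperature sensitivity. Assuming the response is approximately linear in warming (a simplification for illustration, not something the study claims), the reported numbers work out to roughly 8 to 9 percent less flow per degree Celsius:

```python
# Implied streamflow sensitivity from the two scenarios reported above.
runs = [(0.86, 0.08), (2.0, 0.17)]  # (warming in degrees C, fractional flow decline)
for warming, decline in runs:
    print(f"+{warming} C -> -{decline * 100:.0f}% flow, "
          f"about {decline / warming * 100:.1f}% per degree C")
# prints roughly 9.3% per degree for the first run and 8.5% for the second
```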

"We've seen a shift in peak runoff to earlier in the year," McCabe said. "So, the water is coming off sooner in several places in the West; that has implications for how much water there will be in the river in July."

Models can be run backward into the past as well into the future. Using paleoclimate data from tree rings going back to 1490, McCabe and his colleagues reconstructed snowfall and stream flow in the Upper Colorado River basin which includes parts of Colorado, New Mexico, Utah and Wyoming.

Using the historical record, they concluded that under the warmer temperatures used in the hydrology model, the basin is likely to experience periods of water shortage more severe than anything in the past 500 years.

Lehner was part of a team that did a similar paleo study of the Upper Rio Grande Basin looking at how much of the snow made it into the river and concluded that the current, steep declining trend is unprecedented in the context of the last 445 years.

Katrina Bennett, a hydrologist at the Los Alamos National Laboratory in New Mexico, downscaled Earth system model output into a hydrological model (with cells about 3.7 miles a side) to look at what would happen to stream flows as forests were lost to fire and pests in the San Juan Basin, and found that their disappearance alone could reduce stream flows in the basin by 6% to 11%.

"Over the next 50 to 100 years, as forests are replaced with shrubs and the water balance shifts," Bennett said, "the question is how far?"


Between 2000 and 2014, the Colorado River suffered the worst 15-year drought on record. Bradley Udall, a research scientist at Colorado State University, and Jonathan Overpeck, at the University of Arizona, sought to parse what was happening to the river.

Using the temperature-water flow sensitives of a hydrological model, they concluded that while droughts in the past were driven by a lack of rain and snow, this drought was in large part caused by high temperatures.

About a third of the lost flow was the result of record-setting temperatures that caused evaporation from streams and soils, as well as evapotranspiration as plants suck up water. By 2050, they projected heat could reduce the Colorado River flows 20%.

"You can already see the effects of heat," Udall said. "I spent a week hiking in the Red River Valley in Utah at 10,000 feet. We'd had a wet winter, but by September, it was extremely dry. Streams were dry, marshy wetlands crunched underfoot."

McCabe's work on stream flows, Lehner's centuries-long look back at the Rio Grande, Bennett's examination of the interplay of San Juan forests and streams and Udall's assessment of the impact of heat on the Colorado River are each smaller pictures of what is happening and what may happen in the future.

"Science is reliant on models, from big global climate models to smaller hydrology models," Udall said. "What we've learned out of these global climate models are the extremes, the best and worst case. It gives you the sense of the range, but not what is most probable."

"The struggle of what we know and what we don't know should not paralyze us," he said. "We know a lot, and it should tell us to be cautious. Since 2000, we've learned a lot, and it is mostly bad."

CORRECTION: This story was updated at 1:42 p.m. on Jan. 22, 2020, to correctly describe the states in the Upper Colorado River basin included in Gregory McCabe's research.



Super Computer model rates Newcastle United chances of beating Everton and relegation probability – The Mag

Interesting overview of Newcastle United for the season and today's match against Everton at Goodison Park.

The super computer model predictions are based on the FiveThirtyEight revision to the Soccer Power Index, which is a rating mechanism for football teams which takes account of over half a million matches, and is based on Optas play-by-play data.

They have analysed all Premier League matches this midweek, including this game against Carlo Ancelotti's team.

Their computer model gives Everton a 62% chance of a win, it is 23% for a draw and a 15% possibility of a Newcastle win (percentage probabilities rounded up/down to nearest whole number).

When it comes to winning the title, they have the probability at 99% Liverpool and the rest (including Man City now) nowhere, a quite remarkable situation with still four months of the season remaining.

Also interesting to see how the computer model now rates the percentage probability chances of relegation:

91% Norwich

67% Bournemouth

50% Villa

31% West Ham

19% Watford

12% Brighton

11% Burnley

10% Newcastle United

4% Southampton

3% Palace

1% Arsenal

So they now rate Newcastle a one in ten chance of going down, Steve Bruce's team seven points clear of the relegation zone.

The bookies have Newcastle at 8/1 to be relegated after the win over Chelsea, pretty much matching the Soccer Power Index model chances of 9/1 (10%).
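For anyone who wants to check that comparison, converting fractional odds to an implied probability is just a matter of dividing the stake by the total return. A short sketch using the figures above:

```python
# Convert fractional odds a/b to an implied probability of b / (a + b).
def implied_probability(numerator, denominator=1):
    return denominator / (numerator + denominator)

print(f"8/1 (bookies):            {implied_probability(8) * 100:.1f}%")  # ~11.1%
print(f"9/1 (Soccer Power Index): {implied_probability(9) * 100:.1f}%")  # 10.0%
```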


ASC20 Finals to be Held in Shenzhen, Tasks Include Quantum Computing Simulation and AI Language Exam – HPCwire

BEIJING, Jan. 21, 2020 – The 2020 ASC Student Supercomputer Challenge (ASC20) announced the tasks for the new season: using supercomputers to simulate quantum circuits and training AI models to take an English test. These tasks pose unprecedented challenges for the more than 300 ASC teams from around the world. From April 25 to 29, 2020, the top 20 finalists will compete fiercely at SUSTech in Shenzhen, China.

ASC20 set up quantum computing tasks for the first time. Teams are going to use QuEST (Quantum Exact Simulation Toolkit) running on supercomputers to simulate 30 qubits in two cases: quantum random circuits (random.c), and quantum fast Fourier transform circuits (GHZ_QFT.c). Quantum computing is a disruptive technology, considered to be the next generation of high performance computing. However, the R&D of quantum computers is lagging behind due to the unique properties of quantum systems, which makes it harder for scientists to use real quantum computers to solve some of the most pressing problems such as particle physics modeling, cryptography, genetic engineering, and quantum machine learning. From this perspective, the quantum computing task presented in the ASC20 challenge will hopefully inspire new algorithms and architectures in this field.
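Judging by the file name, the GHZ_QFT case amounts to preparing a GHZ state and then applying a quantum Fourier transform. The contest itself runs QuEST's C code on full 30-qubit state vectors; the toy sketch below reproduces those same two steps exactly for just 3 qubits with a dense NumPy matrix, which is enough to see what the simulator has to compute and why 30 qubits pushes it onto supercomputer nodes:

```python
import numpy as np

# Toy state-vector version of a GHZ-preparation + QFT circuit (3 qubits here;
# the ASC20 task runs the analogous circuit on 30 qubits with QuEST in C).
n = 3
dim = 2 ** n

# GHZ state: equal superposition of |000> and |111>.
state = np.zeros(dim, dtype=complex)
state[0] = state[-1] = 1 / np.sqrt(2)

# Dense QFT matrix: F[j, k] = omega^(j*k) / sqrt(dim), with omega = exp(2*pi*i/dim).
j, k = np.meshgrid(np.arange(dim), np.arange(dim), indexing="ij")
qft = np.exp(2j * np.pi * j * k / dim) / np.sqrt(dim)

state = qft @ state
print(np.round(np.abs(state) ** 2, 3))  # measurement probabilities after the QFT

# At 30 qubits the state vector holds 2**30 (~1.07e9) complex amplitudes,
# roughly 17 GB in double precision, and every gate touches all of them.
```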

The other task revealed is the Language Exam Challenge. Teams will take on the challenge of training AI models on an English Cloze Test dataset, vying to achieve the highest test scores. The dataset covers multiple levels of English language tests in China, including the college entrance examination, College English Test Band 4 and Band 6, and others. Teaching machines to understand human language is one of the most elusive and long-standing challenges in the field of AI. The ASC20 AI task signifies such a challenge, using human-oriented problems to evaluate the performance of neural networks.

Wang Endong, ASC Challenge initiator, member of the Chinese Academy of Engineering and Chief Scientist at Inspur Group, said that through these tasks, students from all over the world get to access and learn the most cutting-edge computing technologies. ASC strives to foster supercomputing & AI talents of global vision, inspiring technical innovation.

Dr. Lu Chun, Vice President of SUSTech, host of the ASC20 Finals, commented that supercomputers are important infrastructure for scientific innovation and economic development. SUSTech is making focused efforts to develop supercomputing and to host ASC20, hoping to drive the training of supercomputing talent, international exchange and cooperation, and interdisciplinary development at SUSTech.

Furthermore, during January 15-16, 2020, the ASC20 organizing committee held a competition training camp in Beijing to help student teams prepare for the ongoing competition. HPC and AI experts from the State Key Laboratory of High-end Server and Storage Technology, Inspur, Intel, NVIDIA, Mellanox, Peng Cheng Laboratory and the Institute of Acoustics of the Chinese Academy of Sciences gathered to provide on-site coaching and guidance. Previous ASC winning teams also shared their successful experiences.

About ASC

The ASC Student Supercomputer Challenge is the world's largest student supercomputer competition, sponsored and organized by the Asia Supercomputer Community in China and supported by Asian, European, and American experts and institutions. The main objectives of ASC are to encourage exchange and training of young supercomputing talent from different countries, improve supercomputing applications and R&D capacity, boost the development of supercomputing, and promote technical and industrial innovation. The annual ASC Supercomputer Challenge was first held in 2012 and has since attracted over 8,500 undergraduates from all over the world. Learn more about ASC at https://www.asc-events.org/.

Source: ASC

Go here to read the rest:

ASC20 Finals to be Held in Shenzhen, Tasks Include Quantum Computing Simulation and AI Language Exam - HPCwire

AMD Epyc 7742 CPUs Tapped for European Weather-Predicting Supercomputer – Tom’s Hardware

After recently pulling off a big win with the Archer 2 supercomputer installed in Edinburgh, AMD this week announced another big supercomputer contract. This time it's to build the weather-predicting BullSequana XH2000 with Atos. The supercomputer will be installed for the European Centre for Medium-Range Weather Forecasts (ECMWF), a research centre and 24/7 operational weather service, in Bologna, Italy.

Atos has delivered multiple XH2000 supercomputers already, but the one in question will be built on AMD Epyc 7742 processors, a 64-core, 128-thread chip with a 225W TDP.

Exactly how many chips will be installed wasn't detailed, but we do know that each individual 42U XH2000 cabinet will be able to house up to 32 compute blades. These are liquid-cooled 1U blades, each capable of housing three compute nodes, and they support a wide variety of platforms, including AMD's Epyc chips as well as Intel's Xeon and Nvidia's Volta V100. Thanks to the modular platform, the total system can be configured exactly to the customer's liking.
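
Those figures allow a quick back-of-the-envelope estimate of per-cabinet core counts. The calculation below assumes dual-socket nodes, which the announcement does not state, so treat it as a rough sketch rather than the machine's actual configuration.

```python
blades_per_cabinet = 32      # stated: up to 32 blades per 42U cabinet
nodes_per_blade = 3          # stated: three compute nodes per 1U blade
sockets_per_node = 2         # assumption: dual-socket Epyc nodes
cores_per_cpu = 64           # AMD Epyc 7742

cores_per_cabinet = (blades_per_cabinet * nodes_per_blade
                     * sockets_per_node * cores_per_cpu)
print(f"{cores_per_cabinet:,} cores per fully populated cabinet")  # 12,288
```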

"This new solution will optimize ECMWF's current workflow to enable it to deliver vastly improved numerical weather predictions," Sophie Proust, Atos Group CTO, said in a statement. "Most importantly though, this is a long-term collaboration, one in which we will work closely with ECMWF to explore new technologies in order to be prepared for next-generation applications."

Assembly of the XH2000 installation in Bologna will commence this year with the aim of starting its service life in 2021. It will be accessible to researchers from more than 30 countries, and the goal is to run weather predictions at a higher resolution of 10 km.

Original post:

AMD Epyc 7742 CPUs Tapped for European Weather-Predicting Supercomputer - Tom's Hardware

Supercomputer forecasts Liverpool’s chances of winning Premier League title – Daily Star

Liverpool have forged a 13-point lead in the Premier League this season, having gone on an astounding run of form under Jurgen Klopp.

The Reds haven't lost in over a year of league football and have two games in hand over second-placed Manchester City.

Despite not having won the top flight in almost 30 years, Liverpool have a 98 per cent chance of doing so over the next few months.

The experts at FiveThirtyEight have crunched the numbers to rate the chances of Klopp's side coming out on top after missing out by a point last season, and the results are nothing short of conclusive.

The Merseyside outfit have been tipped to claim 100 points over the course of the season to match City's record total from the 2017-18 campaign, having only failed to win once in their first 21 fixtures.

In terms of the rest of the table, Leicester will hang on to third as Chelsea grab the final Champions League qualifying spot.

Manchester United will finish a place above Tottenham in fifth, while Mikel Arteta's Arsenal sit in the bottom half of the table in 11th.

FiveThirtyEight got their predictions by simulating the remaining fixtures 20,000 times, with Norwich, Bournemouth and Aston Villa facing the drop.
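
A simulation of that kind can be sketched in a few lines: replay the remaining fixtures many times using per-match win/draw/loss probabilities and count the outcomes. The toy version below counts title wins; the fixture list, points totals and probabilities are invented placeholders rather than FiveThirtyEight's model.

```python
import random
from collections import Counter

# (home, away, p_home_win, p_draw) - invented probabilities for illustration
fixtures = [
    ("Liverpool", "West Ham", 0.75, 0.15),
    ("Man City", "Leicester", 0.55, 0.25),
    ("Norwich", "Bournemouth", 0.35, 0.30),
]
current_points = {"Liverpool": 73, "Man City": 51, "Leicester": 48,
                  "West Ham": 24, "Norwich": 17, "Bournemouth": 26}

def simulate_once():
    """Play out the fixture list once and return the champion in this run."""
    points = dict(current_points)
    for home, away, p_home, p_draw in fixtures:
        r = random.random()
        if r < p_home:
            points[home] += 3
        elif r < p_home + p_draw:
            points[home] += 1
            points[away] += 1
        else:
            points[away] += 3
    return max(points, key=points.get)

wins = Counter(simulate_once() for _ in range(20_000))
for club, n in wins.most_common():
    print(f"{club}: {100 * n / 20_000:.1f}% title chance")
```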

Liverpool boss Klopp hasn't won a league title since his Bundesliga triumph with Borussia Dortmund back in 2012, and remains cautious over his chances this time around.


He said of United ahead of their meeting at Anfield: "As a fellow manager it is easy to see the progress they make, although others on the outside are often not so quick to recognise.

"They have recruited some exceptional players and the evolution of their squad makes them one of the strongest in Europe, I would say.

"Look at their talent from the 'keeper through to the forwards. They have world-class talent in that team and a squad full of match-winners. Big investment has resulted in a very strong group.

"Of course, looking to build something takes time. We know this ourselves. But they build with outstanding results and performances as a foundation.

"Their win this week in the FA Cup against Wolves was yet more evidence of how dangerous they are.

"United compete in four competitions still; this says a lot about their quality.

"They remain with much to fight for in the league, a semi-final second leg to come in the Carabao Cup, into the fourth round of the FA Cup and the knockout stages in Europe.

"You cannot compete as they do in these competitions if they are not an outstanding football team."

Go here to read the rest:

Supercomputer forecasts Liverpool's chances of winning Premier League title - Daily Star

Scientists Combine AI With Biology to Create Xenobots, the World’s First ‘Living Robots’ – EcoWatch

Formosa's plastics plant is seen dominating the landscape in Point Comfort, Texas. Julie Dermansky / DeSmogBlog

Diane Wilson is seen with volunteers before their meeting across the street from Formosa's Point Comfort manufacturing plant. Julie Dermansky / DeSmogBlog

Within 10 minutes she collected an estimated 300 of the little plastic pellets. Wilson says she will save them as evidence, along with any additional material the group collects, to present to the official and yet-to-be-selected monitor.

Wilson received the waiver forms from Formosa a day after the deadline. The group planned to set out on foot on Jan. 18, which would allow them to cover more ground on their next monitoring trip. They hope to check all of the facility's 14 outtakes where nurdles could still be escaping. Any nurdles discharged on or after Jan. 15 in the area immediately surrounding the plant would be in violation of the court settlement.

Ronnie Hamrick picks up a mixture of new and legacy nurdles near Formosa's Point Comfort plant. Julie Dermansky / DeSmogBlog

Ronnie Hamrick holds a few of the countless nurdles that litter the banks of Cox Creek near Formosa's Point Comfort facility. Julie Dermansky / DeSmogBlog

Lawsuit Against Formosa's Planned Louisiana Plant

On that same afternoon, Wilson learned that conservation and community groups in Louisiana had sued the Trump administration, challenging federal environmental permits for Formosa's planned $9.4 billion plastics complex in St. James Parish.

The news made Wilson smile. "I hope they win. The best way to stop the company from polluting is not to let them build another plant," she told me.

The lawsuit was filed in federal court against the Army Corps of Engineers, accusing the Corps of failing to disclose environmental damage and public health risks and failing to adequately consider environmental damage from the proposed plastics plant. Wilson had met some of the Louisiana-based activists last year when a group of them had traveled to Point Comfort and protested with her outside Formosa's plastics plant that had begun operations in 1983. Among them was Sharon Lavigne, founder of the community group Rise St. James, who lives just over a mile and a half from the proposed plastics complex in Louisiana.

Back then, Wilson offered them encouragement in their fight. A few months after winning her own case last June, she gave them boxes of nurdles she had used in her case against Formosa. The Center for Biological Diversity, one of the environmental groups in the Louisiana lawsuit, transported the nurdles to St. James. The hope was that these plastic pellets would help environmental advocates there convince Louisiana regulators to deny Formosa's request for air permits required for building its proposed St. James plastics complex that would also produce nurdles. On Jan. 6, Formosa received those permits, but it still has a few more steps before receiving full approval for the plant.

Anne Rolfes, founder of the Louisiana Bucket Brigade, holding up a bag of nurdles discharged from Formosa's Point Comfort, Texas plant, at a protest against the company's proposed St. James plant in Baton Rouge, Louisiana, on Dec. 10, 2019. Julie Dermansky / DeSmogBlog

Construction underway to expand Formosa's Point Comfort plant. Julie Dermansky / DeSmogBlog

Silhouette of Formosa's Point Comfort Plant looming over the rural landscape. Julie Dermansky / DeSmogBlog

From the Gulf Coast to Europe

Just a day after Wilson found apparently new nurdles in Point Comfort, the Plastic Soup Foundation, an advocacy group based in Amsterdam, took legal steps to stop plastic pellet pollution in Europe. On behalf of the group, environmental lawyers submitted an enforcement request to a Dutch environmental protection agency, which is responsible for regulating the cleanup of nurdles polluting waterways in the Netherlands.

The foundation is the first organization in Europe to take legal steps to stop plastic pellet pollution. It cites in its enforcement request to regulators Wilson's victory in obtaining a "zero discharge" promise from Formosa and is seeking a similar result against Ducor Petrochemicals, the Rotterdam plastic producer. Its goal is to prod regulators into forcing Ducor to remove tens of millions of plastic pellets from the banks immediately surrounding its petrochemical plant.

Detail of a warning sign near the Point Comfort Formosa plant. The waterways near the plant are polluted by numerous industrial facilities in the area. Julie Dermansky / DeSmogBlog

Nurdles on Cox Creek's bank on Jan. 15. Wilson hopes her and her colleagues' work of the past four years will help prevent the building of more plastics plants, including the proposed Formosa plant in St. James Parish. Julie Dermansky / DeSmogBlog

A sign noting the entrance to the Formosa Wetlands Walkway at Port Lavaca Beach. The San Antonio Estuary Waterkeeper describes the messaging as an example of greenwashing. Julie Dermansky / DeSmogBlog


Read more:

Scientists Combine AI With Biology to Create Xenobots, the World's First 'Living Robots' - EcoWatch

New, detailed pictures of planets, moons, and comets are neither photos nor animations; they're made using data from 50 years of NASA missions -…

Caption: A snapshot of the American Museum of Natural History's Worlds Beyond Earth immersive theater experience. Source: D. Finnin / AMNH

For many years, there were only two ways for astronomers to see distant worlds in our solar system: Either they used a powerful telescope, or they sent spacecraft into the inky blackness to get up close and personal.

But a third option is emerging to offer unprecedented detail and accuracy: data visualization.

At the American Museum of Natural History, a new planetarium show reveals images of Saturn's moon Titan, the 67P comet, and the lunar surface, all generated using data collected during 50 years of space missions.

"We're not making anything up here," Carter Emmart, director of astrovisualization for the show, said at a press conference. "The height, color, and shapes we see come from actual measurements. You get to see these beautiful objects as they actually are, to the best of our abilities."

Carter and his team relied on data gathered by robotic probes, telescopes, and supercomputer simulations from NASA, the European Space Agency (ESA), and the Japan Aerospace Exploration Agency (JAXA) since the 1970s.

"We're taking numbers and turning that into a picture," he told Business Insider. "We've created a 3D world that lives in the computer and can be shown on screen."
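
As a toy illustration of turning a grid of numbers into a picture (not the museum's production pipeline), the snippet below renders a small synthetic elevation grid as a 3D surface with matplotlib; a real visualization would use calibrated mission data instead of generated values.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3D projection)

# Synthetic "elevation" measurements standing in for real mission data.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
elevation = np.exp(-(x**2 + y**2)) * np.cos(3 * x) * np.cos(3 * y)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, elevation, cmap="viridis")
ax.set_title("Toy terrain rendered from a grid of numbers")
plt.show()
```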

Take a look at some of the most impressive visuals from the show, Worlds Beyond Earth, which opened Tuesday.

Other planetariums, like Chicago's Adler Planetarium, also utilize data visualization, relying on research about planetary orbits, surface maps, and the location of spacecraft to create accurate imagery. But the Hayden Planetarium in New York displays the most comprehensive color palette.

In 1971, Falcon carried astronauts David Scott and Jim Irwin, along with the first Lunar Roving Vehicle, down to the moon. That so-called moon buggy helped Scott and Irwin explore a much wider swath of the lunar surface.

"Spacecraft are extensions of ourselves: our eyes, ears, and nose," the show's curator, Denton Ebel, told Business Insider.

The red planet was once akin to Earth, with plentiful water and active volcanoes. But the planet's core cooled just 500 million years after Mars' formation. That cooling, according to museum scientists, caused the decay of Mars' magnetic field, which protected the planet from solar winds. Without it, Mars lost its atmosphere.

According to Ebel, Magellan revealed that Venus was also once like Earth but now has a surface hot enough to melt lead.

"Venus is a hellscape, really," Ebel said, because it, too, lacks a magnetic field. Without that protection, solar winds stripped away any water.

Cassini discovered that Saturn's moon Enceladus sprays plumes of water into space. That told scientists that the moon has an ocean of liquid water under its icy surface.

Titan boasts a thick atmosphere and weather. But it's too cold to hold liquid water; its lakes and rain are made of liquid methane.

Saturn's rings bubble with moonlets: house-sized baby moons that form as space dust coalesces. Ebel said the process by which these moonlets were born parallels how all eight planets in our solar system formed 4.5 billion years ago.

Planets formed within this disk, which resembled Saturn's rings, as did moons, comets, and asteroids. Then eight planets (along with dwarf planet Pluto) grew as they incorporated more material.

In 1979, NASA's Voyager I mission took snapshots of Io, revealing the moon to be the most volcanically active object in the solar system. Some of Io's volcanoes spew lava dozens of miles into the sky; its surface is peppered by lakes of lava.

Ebel's group used data from NASA's Galileo mission to glean insight into Jupiter's magnetic field.

The largest planet in our solar system, Jupiter has hot, liquid, metallic hydrogen churning around its rocky core. That generates a powerful magnetic field.

One of Jupiter's moons, Ganymede, also has such a field.

The comet, called 67P/Churyumov-Gerasimenko, is only a few kilometers across, but it contains frozen water, dust, and amino acids, the basic chemical building blocks of life.

When comets collided with planets and moons earlier in the solar system's history, some delivered life-giving chemicals like phosphorus.

It took Rosetta 10 years to reach 67P, enter its orbit, and send a lander down to the surface.

He added that production work on the Worlds Beyond Earth show lasted a full year so that the team could make "damn sure that every bit of space that we're putting up there on the screen is accurate to the best of our knowledge."

Read more:

New, detailed pictures of planets, moons, and comets are neither photos nor animations theyre made using data from 50 years of NASA missions -...

The race to exascale is on while Canada watches from the sidelines – CBC.ca

This column is an opinion by Kris Rowe, a computational scientist working to get science and engineering applications ready for the next generation of exascale supercomputers. Born and educated in Canada, he has worked at major Canadian and American universities, as well as two U.S. national laboratories. For more information about CBC's Opinion section, please see the FAQ.

Some of the brightest minds from around the globe have been quietly working on technology that promises to turn the world on its head, but so far Canada has been watching from the sidelines.

While it is unlikely that people will be huddled around their televisions to watch the power to these incredible machines being switched on, the scientific discoveries that follow the debut of exascale computers will change our daily lives in unimaginable ways.

So what exactly is an exascale computer?

It's a supercomputer capable of performing more than a billion billion calculations per second, or 1 exaflops.

"Exa"is the metric system prefix for such grandiose numbers, and "flops"is an abbreviation of "floating-point operations per second."

For comparison, my laptop computer is capable of about 124 gigaflops, or 124 billion calculations per second, which sounds fast.

According to the TOP500 list, today's fastest supercomputer is Oak Ridge National Laboratory's Summit, which tops out at a measured 148.6 petaflops, about one million times faster than my laptop.

However, Summit is a mere welterweight relative to an exascale supercomputer, which would be roughly seven times faster.

To put that speed in perspective, if you took all the calculations a modern laptop can perform in a single second, and instead did the arithmetic non-stop with pencil and paper at a rate of one calculation per second, it would take roughly 3,932 years to finish.

In a single second, a supercomputer capable of 1 exaflops could do a series of calculations that would take about 31.7 billion years by hand.
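
Those figures are easy to verify with a few lines of arithmetic; the sketch below reproduces the laptop and exascale numbers quoted above and also compares an exaflops machine with Summit's measured 148.6 petaflops.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600         # 31,536,000

laptop_flops = 124e9                        # 124 gigaflops
exa_flops = 1e18                            # 1 exaflops
summit_flops = 148.6e15                     # Summit's measured performance

print(f"laptop-second by hand: {laptop_flops / SECONDS_PER_YEAR:,.0f} years")              # ~3,932
print(f"exaflops-second by hand: {exa_flops / SECONDS_PER_YEAR / 1e9:.1f} billion years")  # ~31.7
print(f"exascale vs Summit: {exa_flops / summit_flops:.1f}x faster")                       # ~6.7
```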

While colloquially a supercomputer is referred to as a single entity, it is actually composed of thousands of servers or compute nodes connected by a dedicated high-speed network.

You might assume that an exascale supercomputer could be built simply by using roughly seven times more compute nodes than today's fastest supercomputer; however, the cost, power consumption, and other constraints make this approach nearly impossible.

Fortunately, computer scientists have an ace up their sleeves, known as a GPU accelerator.

Graphics processing units (GPUs) are the professional-grade cousins of the graphics card in your personal computer and are capable of performing arithmetic at a rate of several teraflops (i.e. really, really fast). And a feasible route to exascale can be realized by not only making supercomputers larger but also denser.

Sporting six extremely powerful GPUs per compute node, Argonne National Laboratory's Aurora will follow this approach. Scheduled to come online in 2021, Aurora will be the first exascale supercomputer in North America, although the title of first in the world may go to China's Tianhe-3, which is slated to power up sometime in 2020.

Several other machines in the U.S., China, Europe and Japan are scheduled to be brought to life soon after Aurora, using similar architectures.

What exactly does one do with all that computing power? Change the world, of course.

Exascale supercomputers will allow researchers to tackle problems which were impossible to simulate using the previous generation of machines, due to the massive amounts of data and calculations involved.

Small modular nuclear reactor (SMR) design, wind farm optimization and cancer drug discovery are just a few of the applications that are priorities of the U.S. Department of Energy (DOE) Exascale Computing Project. The outcomes of this project will have a broad impact and promise to fundamentally change society, both in the U.S. and abroad.

So why isn't Canada building one?

One reason is that exascale supercomputers come with a pretty steep sticker price. The contracts for the American machines are worth more than $500 million US each. On the other side of the Atlantic, the EU signed off on €1 billion for their own exascale supercomputer.

While the U.S. and Europe have much larger populations, the annual per capita spending on large-scale computing projects demonstrates how much Canada is lagging in terms of investment. The U.S. DOE alone will spend close to $1 billion US on its national supercomputing facilities next year, a number which does not take into account spending by other federal organizations, such as the U.S. National Science Foundation.

In comparison, Compute Canada, the national advanced research computing consortium providing supercomputing infrastructure to Canadian researchers, has a budget that is closer to $114 million Cdn.

In its 2018 budget submission, Compute Canada clearly lays out what it will take to bring our country closer to the forefront of supercomputing on the world stage. Included is the need to increase annual national spending on advanced research computing infrastructure to an estimated $151 million, a 32 per cent increase from where it is now. Given the cost of the American exascale supercomputers, this is likely a conservative estimate.

However, the need for an exascale supercomputer in Canada does not seem to be on the radar of the decision-makers in the federal and provincial governments.

Hanlon's razor would suggest that this is not due to some sinister plot by politicians to punish the nation's computer geeks; rather, our politicians likely don't fully understand the benefits of investing in the technology.

For example, the recent announcement by the premiers of Ontario, Saskatchewan and New Brunswick to collaborate on aggressively developing Canada's small modular reactor (SMR) technology failed to mention the need for advanced computing resources. In contrast, corresponding U.S. DOE projects explicitly state that they will require exascale computing resources to meet their objectives.

Why should the Canadian government and you care?

For the less altruistic, a benefit of supercomputing research is "trickle-down electronics." The quiet but persistent legacy of the space race is technology like the microwave oven found in most kitchens. Similarly, the technological advances necessary to achieve exascale computing will also lead to lower-cost and more energy-efficient laptops, improved high-definition computer graphics, and prevalent AI in our connected devices.

But more importantly for Canada, how we invest our federal dollars says a lot about what we value as a nation.

It's a statement about how we value the sciences. Do we want to attract world-class researchers to our universities? Do we want Canada to be a leader in climate research, renewable energy and medical advances?

It's also a statement about how much we value Canadian businesses and innovation.

The user-facility model of the U.S. DOE provides businesses with access to singular resources, which gives American companies a competitive advantage in the world marketplace. Compute Canada has a similar mandate, and given the large number of startup companies and emerging industries in Canada, we leave our economy on an unequal footing without significant investments in advanced computing infrastructure.

Ultimately, supercomputers are apolitical: they can just as easily be used for oil exploration as wind farming. Their benefits can be applied across the economy and throughout society to develop new products and solve problems.

At a time when Canada seems so divided, building an exascale computer is the kind of project we need to bring the country together.

[Note:The opinions expressed are those of the author and do not necessarily represent the official policy or position of Argonne National Laboratory, the U.S. Department of Energyor the U.S. government.]

Follow this link:

The race to exascale is on while Canada watches from the sidelines - CBC.ca

SC19 Invited Talk: HPC Solutions for Geoscience Application on the Sunway Supercomputer – insideHPC

Lin Gan from Tsinghua University

In this video from SC19, Lin Gan from Tsinghua University presents: HPC Solutions for Geoscience Application on the Sunway Supercomputer.

In recent years, many complex and challenging numerical problems, in areas such as climate modeling and earthquake simulation, have been efficiently solved on Sunway TaihuLight and successfully scaled to over 10 million cores with impressive performance. To deal carefully with issues such as computing efficiency, data locality, and data movement, novel optimization techniques at different levels are proposed, including some that fit well with the unique architectural features of the system and significantly improve performance. While the architectures of current- and next-generation supercomputers have already diverged, it is important to see the pros and cons of what we already have, to work around the bottlenecks, and to maximize the advantages. This talk summarizes the most essential HPC solutions that contributed to the performance boost in our efforts to port geoscience applications, along with discussions, rethinking, and potential improvements we could undertake.

Lin Gan is a postdoctoral research fellow in the Department of Computer Science and Technology at Tsinghua University and the assistant director of the National Supercomputing Center in Wuxi. His research interests include high-performance computing solutions to geoscience applications based on hybrid platforms such as CPUs, FPGAs, and GPUs. Gan received a PhD in computer science from Tsinghua University. He has received the 2016 ACM Gordon Bell Prize, Tsinghua-Inspur Computational Geosciences Youth Talent Award, and the FPL Significant Paper award. He is a member of IEEE.

Download the MP3

See our complete coverage of SC19

Check out our insideHPC Events Calendar

Continue reading here:

SC19 Invited Talk: HPC Solutions for Geoscience Application on the Sunway Supercomputer - insideHPC

Claiming dividends? ‘Big Brother’ tax computer is watching you – The Times

HMRC's Connect system allows it to match different bits of information about people's affairs. (Image: Alamy)

The taxman is cracking down on self-assessment taxpayers who claim dividends on shares or income from rental properties.

HM Revenue & Customs is using data from its Connect super-computer programme, which allows it to match different bits of information about people's affairs.
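
To illustrate the kind of cross-referencing such a system performs (a simplified sketch with made-up data, not HMRC's actual Connect platform), the snippet below joins third-party dividend records against self-assessment returns and flags discrepancies that might trigger a letter.

```python
import pandas as pd

# Made-up third-party data (e.g. figures reported by registrars or brokers)
reported = pd.DataFrame({
    "taxpayer_id": ["A1", "B2", "C3"],
    "dividends_reported": [1200.0, 0.0, 5400.0],
})

# Made-up figures declared on self-assessment returns
declared = pd.DataFrame({
    "taxpayer_id": ["A1", "B2", "C3"],
    "dividends_declared": [1200.0, 0.0, 800.0],
})

matched = reported.merge(declared, on="taxpayer_id", how="outer").fillna(0.0)
matched["discrepancy"] = matched["dividends_reported"] - matched["dividends_declared"]

# Flag cases that might warrant a "please double check" letter
print(matched[matched["discrepancy"].abs() > 100])
```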

Saffery Champness, one of Britain's biggest chartered accountancy firms, reports that more of its clients are receiving warning letters from HMRC after filling in their tax returns last year.

The letters often question whether information people have put down is correct, asking them to double check.

The accountant says clients have been accused of claiming entrepreneurs' relief when not entitled to it; receiving dividends from shares or foreign income that has not been declared for capital gains tax purposes; failing to declare full


Read more:

Claiming dividends? 'Big Brother' tax computer is watching you - The Times