The Case Against Quantum Computing – IEEE Spectrum

Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades, and without any practical results to show for it.

We've been told that quantum computers could "provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex systems, and artificial intelligence." We've been assured that quantum computers will "forever alter our economic, industrial, academic, and societal landscape." We've even been told that "the encryption that protects the world's most sensitive data may soon be broken" by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.

Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.

It's become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world's top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.

In light of all this, it's natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, "Not in the foreseeable future." Having spent decades conducting research in quantum and condensed-matter physics, I've developed my very pessimistic view. It's based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work.

The idea of quantum computing first appeared nearly 40 years ago, in 1980, when the Russian-born mathematician Yuri Manin, who now works at the Max Planck Institute for Mathematics, in Bonn, first put forward the notion, albeit in a rather vague form. The concept really got on the map, though, the following year, when physicist Richard Feynman, at the California Institute of Technology, independently proposed it.

Realizing that computer simulations of quantum systems become impossible to carry out when the system under scrutiny gets too complicated, Feynman advanced the idea that the computer itself should operate in the quantum mode: "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy," he opined. A few years later, University of Oxford physicist David Deutsch formally described a general-purpose quantum computer, a quantum analogue of the universal Turing machine.

The subject did not attract much attention, though, until 1994, when mathematician Peter Shor (then at Bell Laboratories and now at MIT) proposed an algorithm for an ideal quantum computer that would allow very large numbers to be factored much faster than could be done on a conventional computer. This outstanding theoretical result triggered an explosion of interest in quantum computing. Many thousands of research papers, mostly theoretical, have since been published on the subject, and they continue to come out at an increasing rate.

The basic idea of quantum computing is to store and process information in a way that is very different from what is done in conventional computers, which are based on classical physics. Boiling down the many details, it's fair to say that conventional computers operate by manipulating a large number of tiny transistors working essentially as on-off switches, which change state between cycles of the computer's clock.

The state of the classical computer at the start of any given clock cycle can therefore be described by a long sequence of bits corresponding physically to the states of individual transistors. With N transistors, there are 2^N possible states for the computer to be in. Computation on such a machine fundamentally consists of switching some of its transistors between their "on" and "off" states, according to a prescribed program.

Illustration: Christian Gralingen

In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. Although a variety of physical objects could reasonably serve as quantum bits, the simplest thing to use is the electron's internal angular momentum, or spin, which has the peculiar quantum property of having only two possible projections on any coordinate axis: +1/2 or −1/2 (in units of the Planck constant). Whatever the chosen axis, you can denote the two basic quantum states of the electron's spin as ↑ and ↓.

Here's where things get weird. With the quantum bit, those two states aren't the only ones possible. That's because the spin state of an electron is described by a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and according to the rules of quantum mechanics, their squared magnitudes must add up to 1.

That's because those two squared magnitudes correspond to the probabilities for the spin of the electron to be in the basic states ↑ and ↓ when you measure it. And because those are the only outcomes possible, the two associated probabilities must add up to 1. For example, if the probability of finding the electron in the ↑ state is 0.6 (60 percent), then the probability of finding it in the ↓ state must be 0.4 (40 percent); nothing else would make sense.

In contrast to a classical bit, which can only be in one of its two basic states, a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the rather mystical and intimidating statement that a qubit can exist simultaneously in both of its ↑ and ↓ states.
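To make the arithmetic concrete, here is a minimal sketch, in plain Python with NumPy (no quantum hardware or quantum SDK involved), of a qubit state as a pair of complex amplitudes. The 0.6/0.4 split echoes the example above; the specific amplitude values are otherwise arbitrary.

```python
# A single qubit modeled as two complex amplitudes, alpha and beta.
import numpy as np

rng = np.random.default_rng(seed=1)

alpha = 0.6**0.5          # any complex pair works, as long as the
beta = 0.4**0.5 * 1j      # squared magnitudes sum to 1

# The normalization rule from the text: |alpha|^2 + |beta|^2 = 1.
assert np.isclose(abs(alpha)**2 + abs(beta)**2, 1.0)

# Measurement probabilities for the two basic states (up / down):
p_up, p_down = abs(alpha)**2, abs(beta)**2
print(p_up, p_down)       # 0.6 and 0.4, matching the example above

# Each actual measurement still yields one definite basic state:
print(rng.choice(["up", "down"], size=10, p=[p_up, p_down]))
```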

Yes, quantum mechanics often defies intuition. But this concept shouldn't be couched in such perplexing language. Instead, think of a vector positioned in the x-y plane and canted at 45 degrees to the x-axis. Somebody might say that this vector simultaneously points in both the x- and y-directions. That statement is true in some sense, but it's not really a useful description. Describing a qubit as being simultaneously in both ↑ and ↓ states is, in my view, similarly unhelpful. And yet, it's become almost de rigueur for journalists to describe it as such.

In a system with two qubits, there are 2^2, or 4, basic states, which can be written (↑↑), (↑↓), (↓↑), and (↓↓). Naturally enough, the two qubits can be described by a quantum-mechanical wave function that involves four complex numbers. In the general case of N qubits, the state of the system is described by 2^N complex numbers, which are restricted by the condition that their squared magnitudes must all add up to 1.

While a conventional computer with N bits at any given moment must be in one of its 2^N possible states, the state of a quantum computer with N qubits is described by the values of the 2^N quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1). This is the origin of the supposed power of the quantum computer, but it is also the reason for its great fragility and vulnerability.
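A quick back-of-envelope sketch shows why this exponential bookkeeping overwhelms classical simulation; the 16 bytes per complex amplitude is an assumption (two 64-bit floats), chosen only to make the sizes tangible.

```python
# Memory needed just to store the state vector of an N-qubit system,
# assuming one complex number (16 bytes) per amplitude.
for n_qubits in (10, 20, 30, 40, 50):
    amplitudes = 2**n_qubits
    gib = amplitudes * 16 / 2**30
    print(f"{n_qubits} qubits -> {amplitudes:,} amplitudes (~{gib:,.1f} GiB)")
# 50 qubits already demands ~16 million GiB for the state vector alone,
# which is why classically simulating even modest machines stalls.
```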

How is information processed in such a machine? That's done by applying certain kinds of transformations, dubbed "quantum gates," that change these parameters in a precise and controlled manner.
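As an illustration, here is what a one-qubit gate looks like when simulated classically; the Hadamard gate used below is a standard textbook example, not a gate singled out by the author.

```python
# A quantum gate is a unitary matrix; applying it to the state is a
# matrix-vector product that rotates the amplitudes.
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)    # the Hadamard gate

state = np.array([1, 0], dtype=complex)  # start in the basic "up" state
state = H @ state                        # apply the gate

print(state)             # [0.707..., 0.707...]: equal amplitudes
print(np.abs(state)**2)  # [0.5, 0.5]: normalization is preserved
```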

Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That's a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe.

To repeat: A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.
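The figure is easy to check with exact integer arithmetic:

```python
# 2^1,000 written out in decimal has 302 digits, i.e. it is on the
# order of 10^300, consistent with the figure quoted above.
print(len(str(2**1000)))   # -> 302
```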

At this point in a description of a possible future technology, a hardheaded engineer loses interest. But let's continue. In any real-world computer, you have to consider the effects of errors. In a conventional computer, those arise when one or more transistors are switched off when they are supposed to be switched on, or vice versa. This unwanted occurrence can be dealt with using relatively simple error-correction methods, which make use of some level of redundancy built into the hardware.

In contrast, it's absolutely unimaginable how to keep errors under control for the 10^300 continuous parameters that must be processed by a useful quantum computer. Yet quantum-computing theorists have succeeded in convincing the general public that this is feasible. Indeed, they claim that something called the threshold theorem proves it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming logical qubits using multiple physical qubits.

How many physical qubits would be required for each logical qubit? No one really knows, but estimates typically range from about 1,000 to 100,000. So the upshot is that a useful quantum computer now needs a million or more qubits. And the number of continuous parameters defining the state of this hypothetical quantum-computing machine, which was already more than astronomical with 1,000 qubits, now becomes even more ludicrous.
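The arithmetic behind that conclusion is simple enough to spell out; the per-logical-qubit overheads below are the rough estimates quoted in the text, not measured values.

```python
# Physical-qubit counts implied by the overhead estimates above.
logical_needed = 1_000             # low end of the "useful machine" range
for overhead in (1_000, 100_000):  # physical qubits per logical qubit
    print(f"overhead {overhead:,}: {logical_needed * overhead:,} physical qubits")
# -> 1,000,000 to 100,000,000 physical qubits: the "million or more" above.
```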

Even without considering these impossibly large numbers, it's sobering that no one has yet figured out how to combine many physical qubits into a smaller number of logical qubits that can compute something useful. And it's not like this hasn't long been a key goal.

In the early 2000s, at the request of the Advanced Research and Development Activity (a funding agency of the U.S. intelligence community that is now part of Intelligence Advanced Research Projects Activity), a team of distinguished experts in quantum information established a road map for quantum computing. It had a goal for 2012 that "requires on the order of 50 physical qubits" and "exercises multiple logical qubits through the full range of operations required for fault-tolerant [quantum computation] in order to perform a simple instance of a relevant quantum algorithm." It's now the end of 2018, and that ability has still not been demonstrated.

Illustration: Christian Gralingen

The huge amount of scholarly literature that's been generated about quantum computing is notably light on experimental studies describing actual hardware. The relatively few experiments that have been reported were extremely difficult to conduct, though, and they command respect and admiration.

The goal of such proof-of-principle experiments is to show the possibility of carrying out basic quantum operations and to demonstrate some elements of the quantum algorithms that have been devised. The number of qubits used for them is below 10, usually from 3 to 5. Apparently, going from 5 qubits to 50 (the goal set by the ARDA Experts Panel for the year 2012) presents experimental difficulties that are hard to overcome. Most probably they are related to the simple fact that 2^5 = 32, while 2^50 = 1,125,899,906,842,624.

By contrast, the theory of quantum computing does not appear to meet any substantial difficulties in dealing with millions of qubits. In studies of error rates, for example, various noise models are being considered. It has been proved (under certain assumptions) that errors generated by "local" noise can be corrected by carefully designed and very ingenious methods, involving, among other tricks, massive parallelism, with many thousands of gates applied simultaneously to different pairs of qubits and many thousands of measurements done simultaneously, too.

A decade and a half ago, ARDA's Experts Panel noted that "it has been established, under certain assumptions, that if a threshold precision per gate operation could be achieved, quantum error correction would allow a quantum computer to compute indefinitely." Here, the key words are "under certain assumptions." That panel of distinguished experts did not, however, address the question of whether these assumptions could ever be satisfied.

I argue that they can't. In the physical world, continuous quantities (be they voltages or the parameters defining quantum-mechanical wave functions) can be neither measured nor manipulated exactly. That is, no continuously variable quantity can be made to have an exact value, including zero. To a mathematician, this might sound absurd, but this is the unquestionable reality of the world we live in, as any engineer knows.

Sure, discrete quantities, like the number of students in a classroom or the number of transistors in the "on" state, can be known exactly. Not so for quantities that vary continuously. And this fact accounts for the great difference between a conventional digital computer and the hypothetical quantum computer.

Indeed, all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled exactly. They can only be approached with some limited precision. So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? There are no clear answers to these crucial questions.
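A toy numerical experiment illustrates the worry; the 0.01 percent calibration error and the 10,000 gate repetitions are arbitrary assumptions chosen purely for illustration.

```python
# A rotation gate implemented with a tiny, fixed calibration error,
# applied repeatedly: the small imprecision compounds into a large one.
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

target = np.pi / 4            # intended angle (its matrix involves sqrt(2)/2)
actual = target * (1 + 1e-4)  # slightly miscalibrated analog control

ideal = np.array([1.0, 0.0])
real = ideal.copy()
for _ in range(10_000):
    ideal = rotation(target) @ ideal
    real = rotation(actual) @ real

print(np.linalg.norm(real - ideal))  # ~0.77: no longer a small error
```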

While various strategies for building quantum computers are now being explored, an approach that many people consider the most promising, initially undertaken by the Canadian company D-Wave Systems and now being pursued by IBM, Google, Microsoft, and others, is based on using quantum systems of interconnected Josephson junctions cooled to very low temperatures (down to about 10 millikelvins).

The ultimate goal is to create a universal quantum computer, one that can beat conventional computers in factoring large numbers using Shor's algorithm, performing database searches by a similarly famous quantum-computing algorithm that Lov Grover developed at Bell Laboratories in 1996, and other specialized applications that are suitable for quantum computers.

On the hardware front, advanced research is under way, with a 49-qubit chip (Intel), a 50-qubit chip (IBM), and a 72-qubit chip (Google) having recently been fabricated and studied. The eventual outcome of this activity is not entirely clear, especially because these companies have not revealed the details of their work.

While I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems, I'm skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate, on a microscopic level and with enormous precision, a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?

My answer is simple. No, never.

I believe that, appearances to the contrary, the quantum computing fervor is nearing its end. That's because a few decades is the maximum lifetime of any big bubble in technology or science. After a certain period, too many unfulfilled promises have been made, and anyone who has been following the topic starts to get annoyed by further announcements of impending breakthroughs. What's more, by that time all the tenured faculty positions in the field are already occupied. The proponents have grown older and less zealous, while the younger generation seeks something completely new and more likely to succeed.

All these problems, as well as a few others I've not mentioned here, raise serious doubts about the future of quantum computing. There is a tremendous gap between the rudimentary but very hard experiments that have been carried out with a few qubits and the extremely developed quantum-computing theory, which relies on manipulating thousands to millions of qubits to calculate anything useful. That gap is not likely to be closed anytime soon.

To my mind, quantum-computing researchers should still heed an admonition that IBM physicist Rolf Landauer made decades ago when the field heated up for the first time. He urged proponents of quantum computing to include in their publications a disclaimer along these lines: "This scheme, like all other schemes for quantum computation, relies on speculative technology, does not in its current form take into account all possible sources of noise, unreliability and manufacturing error, and probably will not work."

Editor's note: A sentence in this article originally stated that concerns over required precision "were never even discussed." This sentence was changed on 30 November 2018 after some readers pointed out to the author instances in the literature that had considered these issues. The amended sentence now reads: "There are no clear answers to these crucial questions."

Mikhail Dyakonov does research in theoretical physics at Charles Coulomb Laboratory at the University of Montpellier, in France. His name is attached to various physical phenomena, perhaps most famously Dyakonov surface waves.


IonQ and University of Maryland Researchers Demonstrate Fault-Tolerant Error Correction, Critical for Unlocking the Full Potential of Quantum…

COLLEGE PARK, Md.--(BUSINESS WIRE)--Researchers from The University of Maryland and IonQ, Inc. (IonQ) (NYSE: IONQ), a leader in trapped-ion quantum computing, on Monday published results in the journal Nature that show a significant breakthrough in error correction technology for quantum computers. In collaboration with scientists from Duke University and the Georgia Institute of Technology, this work demonstrates for the first time how quantum computers can overcome quantum computing errors, a key technical obstacle to large-scale use cases like financial market prediction or drug discovery.

Quantum computers suffer from errors when qubits encounter environmental interference. Quantum error correction works by combining multiple qubits together to form a logical qubit that more securely stores quantum information. But storing information by itself is not enough; quantum algorithms also need to access and manipulate the information. To interact with information in a logical qubit without creating more errors, the logical qubit needs to be fault-tolerant.

The study, completed at the University of Maryland, peer-reviewed, and published in the journal Nature, demonstrates how trapped-ion systems like IonQ's can soon deploy fault-tolerant logical qubits to overcome the problem of error correction at scale. By successfully creating the first fault-tolerant logical qubit, a qubit that is resilient to a failure in any one component, the team has laid the foundation for quantum computers that are both reliable and large enough for practical uses such as risk modeling or shipping route optimization. The team demonstrated that this could be achieved with minimal overhead, requiring only nine physical qubits to encode one logical qubit. This will allow IonQ to apply error correction only when needed, in the amount needed, while minimizing qubit cost.
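For intuition about how redundancy buys reliability, here is a hedged, purely classical analogy in Python: a nine-bit repetition code with majority voting. This is not the Bacon-Shor quantum code the paper actually demonstrates (quantum codes must also protect against errors with no classical counterpart), and the 5 percent error rate is an arbitrary illustrative assumption.

```python
# Classical analogy only: nine unreliable bits + a majority vote yield
# one far more reliable "logical" bit. Real quantum codes are subtler.
import random

random.seed(0)
p_error = 0.05     # assumed per-physical-bit error rate (illustrative)
trials = 100_000

def store_and_read(bit, copies=9):
    noisy = [bit ^ (random.random() < p_error) for _ in range(copies)]
    return 1 if sum(noisy) > copies // 2 else 0   # majority vote

failures = sum(store_and_read(1) != 1 for _ in range(trials))
print(failures / trials)  # ~3e-5, orders of magnitude below the 5% raw rate
```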

"This is about significantly reducing the overhead in computational power that is typically required for error correction in quantum computers," said Peter Chapman, President and CEO of IonQ. "If a computer spends all its time and power correcting errors, that's not a useful computer. What this paper shows is how the trapped ion approach used in IonQ systems can leapfrog others to fault tolerance by taking small, unreliable parts and turning them into a very reliable device. Competitors are likely to need orders of magnitude more qubits to achieve similar error correction results."

Behind today's study are recently graduated UMD PhD students and current IonQ quantum engineers Laird Egan and Daiwei Zhu, IonQ cofounder Chris Monroe, as well as IonQ technical advisor and Duke Professor Ken Brown. Coauthors of the paper include: UMD and Joint Quantum Institute (JQI) research scientist Marko Cetina; postdoctoral researcher Crystal Noel; graduate students Andrew Risinger and Debopriyo Biswas; Duke University graduate student Dripto M. Debroy and postdoctoral researcher Michael Newman; and Georgia Institute of Technology graduate student Muyuan Li.

The news follows on the heels of other significant technological developments from IonQ. The company recently demonstrated the industry's first Reconfigurable Multicore Quantum Architecture (RMQA) technology, which can dynamically configure 4 chains of 16 ions into quantum computing cores. The company also recently debuted patent-pending evaporated glass traps: technology that lays the foundation for continual improvements to IonQ's hardware and supports a significant increase in the number of ions that can be trapped in IonQ's quantum computers. Furthermore, it recently became the first quantum computer company whose systems are available for use via all major cloud providers. Last week, IonQ also became the first publicly traded, pure-play quantum computing company.

About IonQ

IonQ, Inc. is a leader in quantum computing, with a proven track record of innovation and deployment. IonQ's next-generation quantum computer is the world's most powerful trapped-ion quantum computer, and IonQ has defined what it believes is the best path forward to scale. IonQ is the only company with its quantum systems available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud, as well as through direct API access. IonQ was founded in 2015 by Christopher Monroe and Jungsang Kim based on 25 years of pioneering research. To learn more, visit http://www.ionq.com.

About the University of Maryland

The University of Maryland, College Park is the state's flagship university and one of the nation's preeminent public research universities. A global leader in research, entrepreneurship and innovation, the university is home to more than 40,000 students, 10,000 faculty and staff, and 297 academic programs. As one of the nation's top producers of Fulbright scholars, its faculty includes two Nobel laureates, three Pulitzer Prize winners and 58 members of the national academies. The institution has a $2.2 billion operating budget and secures more than $1 billion annually in research funding together with the University of Maryland, Baltimore. For more information about the University of Maryland, College Park, visit http://www.umd.edu.

Forward-Looking Statements

This press release contains certain forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Some of the forward-looking statements can be identified by the use of forward-looking words. Statements that are not historical in nature, including words such as "anticipate," "expect," "suggests," "plan," "believe," "intend," "estimates," "targets," "projects," "should," "could," "would," "may," "will," "forecast" and other similar expressions, are intended to identify forward-looking statements. These statements include those related to the Company's ability to further develop and advance its quantum computers and achieve scale; and the ability of competitors to achieve similar error correction results. Forward-looking statements are predictions, projections and other statements about future events that are based on current expectations and assumptions and, as a result, are subject to risks and uncertainties. Many factors could cause actual future events to differ materially from the forward-looking statements in this press release, including but not limited to: market adoption of quantum computing solutions and the Company's products, services and solutions; the ability of the Company to protect its intellectual property; changes in the competitive industries in which the Company operates; changes in laws and regulations affecting the Company's business; the Company's ability to implement its business plans, forecasts and other expectations, and identify and realize additional partnerships and opportunities; and the risk of downturns in the market and the technology industry including, but not limited to, as a result of the COVID-19 pandemic. The foregoing list of factors is not exhaustive. You should carefully consider the foregoing factors and the other risks and uncertainties described in the "Risk Factors" section of the registration statement on Form S-4 and other documents filed by the Company from time to time with the Securities and Exchange Commission. These filings identify and address other important risks and uncertainties that could cause actual events and results to differ materially from those contained in the forward-looking statements. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and the Company assumes no obligation and does not intend to update or revise these forward-looking statements, whether as a result of new information, future events, or otherwise. The Company does not give any assurance that it will achieve its expectations.


Quantum computing will break today’s encryption standards – here’s what to do about it – Verizon Communications

"When you come to the fork in the road, take it." - Yogi Berra

For cryptologists, Yogi Berra's words have perhaps never rung more true. As a future with quantum computing approaches, our internet and stored secrets are at risk. The tried-and-true encryption mechanisms that we use every day, like Transport Layer Security (TLS) and Virtual Private Networks (VPN), could be cracked and exposed by a hacker equipped with a large enough quantum computer running Shor's algorithm, a powerful algorithm with exponential speedup over classical algorithms. The result? Security algorithms that would take roughly 10 billion years to break today could be broken in as little as 10 seconds. To prevent this, it's imperative that we augment our security protocols, and we have two options to choose from: one using physics as its foundation, or one using math, our figurative fork in the road.

To understand how to solve the impending security threats in a quantum era, we need to first understand the fundamentals of our current encryption mechanism. The most commonly used in nearly all internet activities, TLS is implemented anytime someone performs an online activity involving sensitive information, like logging into a banking app, completing a sale on an online retailer's website, or simply checking email. It works by combining the data with a 32-byte key of random 1s and 0s in a complicated and specific way so that the data is completely unrecognizable to anyone except for the two end-to-end parties sending and receiving it. This process is called public key encryption, and currently it leverages a few popular algorithms for key exchange, e.g., Elliptic Curve Diffie-Hellman (ECDH) or RSA (each named after cryptologists), each of which is vulnerable to quantum computers. The data exchange has two steps: the key exchange and the encryption itself. The encryption of the data with a secure key will still be safe, but the delivery of the key to unlock that information (key distribution) will not be secure in the future quantum era.
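For readers who want to see the shape of the exchange that is at risk, here is a toy sketch of Diffie-Hellman key agreement with textbook-sized numbers. The parameters (23 and 5) are deliberately tiny and insecure; real deployments use elliptic curves or primes thousands of bits long.

```python
# Toy Diffie-Hellman: both parties derive the same shared secret from
# public values. Its security rests on the hardness of the discrete
# logarithm, which Shor's algorithm on a large quantum computer breaks.
import secrets

p, g = 23, 5                       # tiny public parameters (illustrative only)

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)                   # sent publicly by Alice
B = pow(g, b, p)                   # sent publicly by Bob

assert pow(B, a, p) == pow(A, b, p)  # both ends now hold the same key
```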

To be ready for quantum computers, we need to devise a new method of key distribution, a way to safely deliver the key from one end of the connection to the other.

Imagine a scenario wherein you and a childhood friend want to share secrets, but can only do so once you each have the same secret passcode in front of you (and there are no phones). One friend has to come up with a unique passcode, write it down on a piece of paper (while maintaining a copy for themselves), and then walk it down the block so the other has the same passcode. Once you and your friend have the shared key, you can exchange secrets (encrypted data) that even a quantum computer cannot read.

While walking down the block, though, your friend could be vulnerable to the school bully accosting him or her and stealing the passcode, and we can't let this happen. What if your friend lives across town, and not just down the block? Or, even more difficult, in a different country? (And where is that secret decoder ring we got from a box of sugar-coated-sugar cereal we ate as kids?)

In a world where global information transactions are happening nonstop, we need a safe way of delivering keys no matter the distance. Quantum physics can provide a way to securely deliver shared keys quicker and in larger volume, and, most importantly, immune to interception. Using fiber optic cables (like the ones used by telecommunications companies), special Quantum Key Distribution (QKD) equipment can send tiny particles (or light waves) called photons to each party in the exchange of data. The sequence of the photons encapsulates the identity of the key, a random sequence of 1s and 0s that only the intended recipients can receive to construct the key.

Quantum Key Distribution also has a sort of built-in anti-hacker bonus. Because of the no-cloning theorem (which essentially states that, by their very nature, photons cannot be cloned), QKD also renders the identity of the key untouchable by any hacker. If an attacker tried to grab the photons and alter them, it would automatically be detected, and the affected key material would be discarded.
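A small simulation conveys the idea. This is an idealized BB84-style sketch (perfect photon sources and detectors, an eavesdropper who measures every photon), so the numbers illustrate the principle rather than any real QKD product.

```python
# BB84-style sketch: bits sent in random bases; mismatched bases are
# discarded ("sifting"); an eavesdropper's measurements leave errors.
import random

random.seed(42)
n = 2000
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]
bob_bases   = [random.randint(0, 1) for _ in range(n)]

def transmit(bit, basis_in, basis_out, eavesdrop):
    if eavesdrop:                         # Eve measures in a random basis,
        eve_basis = random.randint(0, 1)  # disturbing the photon state
        if eve_basis != basis_in:
            bit = random.randint(0, 1)
        basis_in = eve_basis
    return bit if basis_in == basis_out else random.randint(0, 1)

for eve in (False, True):
    bob_bits = [transmit(b, ab, bb, eve)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    sifted = [(a, b) for a, b, ab, bb
              in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted) / len(sifted)
    print(f"eavesdropper={eve}: sifted-key error rate ~{errors:.0%}")
# Without Eve the sifted keys agree; with Eve roughly 25% of sifted bits
# disagree, which the two parties detect by comparing a random sample.
```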

The other way we could choose to solve the security threats posed by quantum computers is to harness the power of algorithms. Although it's true the RSA and ECDH algorithms are vulnerable to Shor's algorithm on a suitable quantum computer, the National Institute of Standards and Technology (NIST) is working to develop replacement algorithms that will be safe from quantum computers as part of its post-quantum cryptography (PQC) efforts. Some are already in the process of being vetted, like ones called McEliece, Saber, Crystals-Kyber, and NTRU.

Each of these algorithms has its own strong and weak points that NIST is working through. For instance, McEliece is one of the most trusted by virtue of its longstanding resistance to attack, but it is also handicapped by its excessively long public keys, which may make it impractical for small devices or web browsing. The other algorithms, especially Saber, run very well on practically any device, but, because they are relatively new, cryptographers' confidence in them is still relatively low.

With such a dynamic landscape of ongoing efforts, there is promise that a viable solution will emerge in time to keep our data safe.

The jury is still out. We at Verizon, and most of the world, rely heavily on e-commerce to sell our products and on encryption to communicate via email, messaging, and cellular voice calls. All of these need secure encryption technologies in the coming quantum era. But whether we choose pre-shared keys (implemented by the awesome photon) or algorithms further leveraging mathematics, our communications software will need updating. And while the post-quantum cryptography effort is relatively new, it is not yet clear which algorithms will withstand scrutiny from the cryptographic community. In the meantime, we continue to peer down each fork in the road to seek the best option to take.


Scientists are using quantum computing to help them discover signs of life on other planets – ZDNet

Scientists will use quantum computing tools to eventually help them detect molecules in outer space that could be precursors to life.

Quantum computers are assisting researchers in scouting the universe in search of life outside of our planet -- and although it's far from certain they'll find actual aliens, the outcomes of the experiment could be almost as exciting.

Zapata Computing, which provides quantum software services, has announced a new partnership with the UK's University of Hull, which will see scientists use quantum computing tools to eventually help them detect molecules in outer space that could be precursors to life.

During the eight-week program, quantum resources will be combined with classical computing tools to resolve complex calculations with better accuracy, with the end goal of finding out whether quantum computing could provide a useful boost to the work of astrophysicists, despite the technology's current limitations.


Detecting life in space is as tricky a task as it sounds. It all comes down to finding evidence of molecules that have the potential to create and sustain life -- and because scientists don't have the means to go out and observe the molecules for themselves, they have to rely on alternative methods.

Typically, astrophysicists pay attention to light, which can be analyzed through telescopes. This is because light -- for example, infrared radiation generated by nearby stars -- often interacts with molecules in outer space. And when it does, the particles vibrate, rotate, and absorb some of the light, leaving a specific signature on the spectral data that can be picked up by scientists back on Earth.

Therefore, for researchers, all that is left to do is detect those signatures and trace back to which molecules they correspond.

The problem? MIT researchers have previously established that over 14,000 molecules could indicate signs of life in exoplanets' atmospheres. In other words, there is still a long way to go before astrophysicists have drawn up a database of all the different ways that those molecules might interact with light -- of all the signatures that they should be looking for when pointing their telescopes at other planets.

That's the challenge that the University of Hull has set for itself: the institution's Centre for Astrophysics is effectively hoping to generate a database of detectable biological signatures.

For over two decades, explains David Benoit, senior lecturer in molecular physics and astrochemistry at the University of Hull, researchers have been using classical means to try and predict those signatures. Still, the method is rapidly running out of steam.

The calculations carried out by the researchers at the center in Hull involve describing exactly how electrons interact with each other within a molecule of interest -- think hydrogen, oxygen, nitrogen and so on. "On classical computers, we can describe the interactions, but the problem is this is a factorial algorithm, meaning that the more electrons you have, the faster your problem is going to grow," Benoit tells ZDNet.

"We can do it with two hydrogen atoms, for example, but by the time you have something much bigger, like CO2, you're starting to lose your nerve a little bit because you're using a supercomputer, and even they don't have enough memory or computing power to do that exactly."

Simulating these interactions with classical means, therefore, ultimately comes at the cost of accuracy. But as Benoit says, you don't want to be the one claiming to have detected life on an exoplanet when it was actually something else.

Unlike classical computers, however, quantum systems are built on the principles of quantum mechanics -- those that govern the behavior of particles when they are taken at their smallest scale: the same principles as those that underlie the behavior of electrons and atoms in a molecule.

This prompted Benoit to approach Zapata with a "crazy idea": to use quantum computers to solve the quantum problem of life in space.

"The system is quantum, so instead of taking a classical computer that has to simulate all of the quantum things, you can take a quantum thing and measure it instead to try and extract the quantum data we want," explains Benoit.

Quantum computers, by nature, could therefore allow for accurate calculations of the patterns that define the behavior of complex quantum systems like molecules without calling for the huge compute power that a classical simulation would require.

The data that is extracted from the quantum calculation about the behavior of electrons can then be combined with classical methods to simulate the signature of molecules of interest in space when they come into contact with light.

It remains true that the quantum computers that are currently available to carry out this type of calculation are limited: most systems don't break the 100-qubit count, which is not enough to model very complex molecules.


Benoit explains that this has not put off the center's researchers. "We are going to take something small and extrapolate the quantum behavior from that small system to the real one," says Benoit. "We can already use the data we get from a few qubits, because we know the data is exact. Then, we can extrapolate."
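In spirit, the workflow looks something like the sketch below. This is only a hedged illustration of the extrapolation idea, not the Hull group's actual method, and the data points are made-up placeholders.

```python
# Fit a trend to exact results from small systems, then extrapolate to
# a system size too large to treat exactly. Placeholder values only.
import numpy as np

sizes = np.array([2, 3, 4, 5])               # small systems a NISQ device handles
values = np.array([-1.1, -1.9, -2.6, -3.2])  # hypothetical exact results

coeffs = np.polyfit(sizes, values, deg=1)    # simple linear trend
print(f"extrapolated value at size 8: {np.polyval(coeffs, 8):.2f}")
```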

That is not to say that the time has come to get rid of the center's supercomputers, continues Benoit. The program is only starting, and over the course of the next eight weeks, the researchers will be finding out whether it is possible at all to extract those exact physics on a small scale, thanks to a quantum computer, in order to assist large-scale calculations.

"It's trying to see how far we can push quantum computing," says Benoit, "and see if it really works, if it's really as good as we think it is."

If the project succeeds, it could constitute an early use case for quantum computers -- one that could demonstrate the usefulness of the technology despite its current technical limitations. That in itself is a pretty good achievement; the next milestone could be the discovery of our exoplanet neighbors.


Quantum computing startups pull in millions as VCs rush to get ahead of the game – The Register

Venture capital firms are pouring billions into quantum computing companies, hedging bets that the technology will pay off big time some day.

Rigetti, which makes quantum hardware, announced a $1.5bn merger with Supernova Partners Acquisition Company II, a finance house focusing on strategic acquisitions. Rigetti, which was valued at $1.04bn before the deal, will now be publicly traded.

Before Rigetti's deal, quantum computer hardware and software companies raked in close to $1.02bn from venture capital investments this year, according to numbers provided to The Register by financial research firm PitchBook. That was a significant increase from $684m invested by VC firms in 2020, and $188m in 2019.

Prior to the Rigetti transaction, the biggest deal was a $450m investment in PsiQuantum, which was valued at $3.15bn, in a round led by venture capital firm BlackRock on July 27.

Quantum computers process information differently than classical computers. They encode information in qubits, which can hold a 1, a 0, or a superposition of both, letting them represent exponentially more states. Simply put, quantum computers can evaluate many possibilities simultaneously, while classical computers evaluate them sequentially.

Theoretically, that makes quantum computers significantly more powerful, and enables applications like drug discovery, which are limited by the constraints of classical computers.

Rigetti and PsiQuantum are startups in a growing field of quantum computer makers that includes heavyweights IBM and Google, which are building superconducting quantum systems based on transmon qubits. D-Wave offers a quantum-annealing system based on flux bits to solve limited-sized problems, but this week said it was building a new superconducting system to solve larger problems.

Quantum computers show promise but are still immature, with questions around stability, said Linley Gwennap, president of Linley Group, in a research note last month.

"Solving the error-rate problem will require substantially new approaches. If researchers can meet that challenge, quantum processors will provide an excellent complement to classical processors," Gwennap wrote.

If quantum ever works, there could be a huge market, hence the VC interest, but the technology is years away from significant revenue, Gwennap told The Register.

Deals by SPACs (special purpose acquisition companies) like Supernova Partners tend to be highly speculative, but the venture firm's due diligence on Rigetti centered on the possible rewards if quantum computers live up to their hype.

Rigetti's quantum technology is scalable, practical and manufacturable, said Supernova's chief financial officer Michael Clifton, in a press conference this week related to the deal.

"Quantum is expected to be as important as mobile and cloud have been over the last two decades," Clifton said, adding, "we were focused on large addressable markets, differentiated technologies and excellent management teams."

Rigetti's quantum computer is modular and scalable, with qubit systems linked through fast interconnects. The company's introductory system in 2018 had 8 qubits, and it will be scaled up to an 80-qubit multichip system with high-density I/O and 3D signalling. The company's roadmap includes a 1,000-qubit system in 2024 that is "error mitigating," and a 4,000-qubit system in 2026 with full error correction features.

Rigetti designs and makes its quantum computer chips in its own fabrication plant, which helps accelerate the delivery of chips. Amazon offers access to Rigetti's quantum hardware through AWS.

IT leaders in non-tech companies are taking quantum computing seriously, IDC said in May.

A survey by the analyst house in April revealed companies would allocate more than 19 per cent of their annual IT budgets to quantum computing in 2023, up from 7 per cent in 2021. Investments would go into quantum algorithms and systems available through the cloud to boost AI and cybersecurity.


Zapata, University of Hull researchers take quantum computing to deep space – FierceElectronics

While it could be many years before quantum computing becomes a common presence in daily life, the technology already has been recruited to help search for life in deep space.

Quantum software company Zapata Computing is partnering with the U.K.-based University of Hull on research to evaluate Zapata's Orquestra quantum workflow platform, to enhance a quantum application designed to detect signatures of life in deep space.

Dr. David Benoit, Senior Lecturer in Molecular Physics and Astrochemistry at the University of Hull, said the evaluation is not a controlled demonstration of features, but rather a project involving real-world data. "We are looking at how Orquestra performs in actual workflows that use quantum computing to provide typical real-life data," he told Fierce Electronics via email. "In this project, we are really aiming for real useful data rather than a demo of capabilities."

The evaluation will run for eight weeks before the team publishes an analysis of the research. It is expected to be the first of several collaborations between Zapata and the University of Hull for quantum astrophysics applications, the parties said. The news comes as several giants in quantum computing, including Google, IBM, Amazon and Honeywell, among others, were set to attend a White House forum hosted by the Biden administration to discuss evolving uses for quantum computing.

In some cases, researchers have turned to quantum computing to tackle projects that classical computers would take too long to complete, and the University of Hull is in a similar situation, Benoit said.

He further explained: "The tests envisioned are still something that a classical computer can do; however, the computational time required to obtain the solution has a factorial scaling, meaning that larger size applications are likely to take days/months/years to complete (along with a very large amount of memory). The quantum counterpart is able to solve those problems in a sub-factorial manner (potentially quartic scaling), but this doesn't necessarily mean it's faster for all systems, just that the computational effort is much reduced for large systems. In this application, we are aiming for a scalable way of performing accurate calculations, and this is exactly what we can obtain using quantum computers."

Just how big is the task at hand? A statement from Zapata noted that in 2016 MIT researchers suggested a list of more than 14,000 molecules that could indicate signs of life in atmospheres of far-away exoplanets. However, little is currently known about how these molecules vibrate and rotate in response to infrared radiation generated by nearby stars. The University of Hull is trying to build a database of detectable biological signatures using new computational models of molecular rotations and vibrations.

Though fault tolerance and error correction remain a challenge for quantum computing models, Benoit said researchers are not concerned with the performance of such so-called Noisy Intermediate-Scale Quantum (NISQ) devices.

"Our method actually uses the statistical nature of the noise/errors to try and obtain an accurate answer, so we take the fact that the results will be noisy as a useful thing," he said. "Obviously, the better the error correction or the less noisy the device, the better the outcome. However, using Orquestra enables us to potentially switch platforms without having to re-implement large parts of the code, which means that as better hardware comes along, we can readily compute with it."

Benoit added that Orquestra will help researchers generate valuable insights from NISQ devices, and that researchers can build applications that use these NISQ devices today with the capacity to leverage the more powerful quantum devices of the future. The result should be extremely accurate calculations of the key variable defining atom-atom interactions, electronic correlation, and thus could improve scientists' ability to detect the building blocks of life in space. This is particularly important because even simple molecules, such as oxygen or nitrogen, have complex interactions that require very accurate calculations.



Top 10 Quantum Computing Workshop and Conferences to Attend in 2021 – Analytics Insight

As you know, quantum computing is a type of computation that harnesses the collective properties of quantum states, such as superposition, interference, and entanglement, to perform calculations. To discuss the future of quantum computing, a number of workshops and conferences are taking place in 2021 that are worth attending.

Here are the top ten quantum computing workshops and conferences:

IEEE Quantum Week, the IEEE International Conference on Quantum Computing and Engineering (QCE), is bridging the gap between the science of quantum computing and the development of an industry surrounding it. IEEE Quantum Week is a multidisciplinary quantum computing and engineering venue that gives attendees the unique opportunity to discuss challenges and opportunities with quantum researchers, scientists, engineers, entrepreneurs, and more.

The International Conference on Quantum Communication (ICQOM 2021) will take place at the Jussieu campus in Paris, France, from the 18th to the 22nd of October 2021. The scope of the conference is focused on Quantum Communications, including theoretical and experimental activities related to Quantum Cryptography and Quantum Networks in a broad sense.

Quantum Techniques in Machine Learning (QTML) is an annual international conference focusing on the interdisciplinary field of quantum technology and machine learning. The goal of the conference is to gather leading academic researchers and industry players to interact through a series of scientific talks focused on the interplay between machine learning and quantum physics.

The 23rd Annual SQuInT Workshop is co-organized by the Center for Quantum Information and Control (CQuIC) at the University of New Mexico (UNM) and the Oregon Center for Optical Molecular and Quantum Science (OMQ) at the University of Oregon (UO). The last date of registration is October 11, 2021.

Keysight World 2021 will be held as a virtual conference. As part of a track focusing on "Driving the Digital Transformation," there will be a session titled "Pushing the Envelope on Quantum Computing" that will include panel sessions with authorities from Rigetti, Google, IQC, and Keysight.

The Quantum Startup Foundry at the University of Maryland will be holding an Investment Summit for quantum startups to showcase their companies to potential investors on October 12-13, 2021. The focus of the event is to link the most promising early- and growth-stage companies with investors and to inform key stakeholders about the unique aspects of investing in quantum.

The Inside Quantum Technology (IQT) Fall Conference will be held as a hybrid conference, both in-person and online, in New York City. The conference will be a gathering of business leaders, product developers, marketing strategists, and investors anywhere in the world focused on quantum technology.

The annual Chicago Quantum Summit engages scientific and government leaders, the industries that will scale and drive the applications of emerging quantum research, and the trainees that will lead this future. Focusing on fostering a domestic and international community, experts discuss the future of quantum information science and technology research, the companies in the quantum ecosystem, and strategies to educate and build tomorrow's quantum workforce.

The Quantum Computing Summit Silicon Valley, organized by Informa Tech, will take place on November 3-4, 2021. It will run alongside the AI Summit and has been designed to provide business, technical, research, academic, and innovation insight, qualified via application-based quantum experiences, to showcase how quantum is delivering real business value, driving process efficiency, and optimizing costs.

The Optical Society (OSA) will hold its Quantum Information and Measurement VI as a virtual conference. The conference topics will cover the latest in theoretical developments and experimental implementations of quantum information technology, including the advanced engineering needed to realize such technologies, in addition to the conference's traditional focus on quantum optics.


How science and diplomacy inform each other – SWI swissinfo.ch

The potential of quantum computing is one of the focuses of a summit in Geneva that aims to improve the dialogue between diplomats and the scientific community to safeguard our collective welfare. Two researchers explain the rewards and risks of quantum computing.

Dorian Burkhalter

The scientists, diplomats, captains of industry and investors gathering in Geneva for the first-ever summit of the Science and Diplomacy Anticipator (GESDA) will, among other lofty goals, discuss how policymakers should prepare for quantum computing, provide governance for it, and ensure that it is accessible to all. But what are quantum computers, and what will they be able to do?

Quantum computers perform calculations by exploiting the properties of quantum mechanics, which describes the behaviour of atoms and particles at a subatomic scale, for example, how electrons interact with each other. As quantum computers operate on the same set of rules as molecules do, they are, for instance, much better suited to simulate them than classical computers are.

Today, quantum computers are small and unreliable. They are not yet able to solve problems classical computers cannot.

"There is still some uncertainty, but I don't see any reason to not be able to develop such a quantum computer, although it's a huge engineering challenge," says Nicolas Gisin, professor emeritus at the University of Geneva and at the Schaffhausen Institute of Technology, and an expert in quantum technologies.

Quantum computers could help solve some of the world's most pressing problems. They could accelerate the discovery of materials for longer-lasting batteries, better solar panels, and new medical treatments. They could also break current encryption methods, meaning that information secure today may become at risk tomorrow.

For private companies, winning the race to develop reliable and powerful quantum computers means reaping large economic rewards. For countries, it means gaining a significant national security advantage.

Gisin says quantum computers capable of simulating new molecules could be 5-10 years away, while more powerful quantum computers that can break encryption could become a reality in 10-20 years.

The pace at which these technologies develop will depend on the level of investments made. Large technology firms such as IBM, Microsoft, and Google are all developing quantum computers, while the US, China, and Europe are investing heavily in quantum technologies.

"Anticipating the arrival of these technologies is important, because you play through different scenarios, and some you may like, some you may not like," says Heike Riel, IBM Fellow at IBM Research in Zurich. "Then you can also think of what type of regulations you may need, or what type of research you need to foster."

The Swiss government is a supporter of the GESDA foundation, which organised its first summit in Geneva from October 7-9. The conference brings together scientists, diplomats, and other stakeholders to discuss future scientific developments and to anticipate their impact on society.

To work well, scientists need favourable frameworks. "There is definitely a back and forth between science and diplomacy, and science and politics, because diplomacy can also advance science," Riel says.

Politicians and diplomats are responsible for creating opportunities for researchers to collaborate across borders. Initiatives and funding aimed at addressing specific technical problems influence the direction of research efforts.

"The fact that Switzerland is outside of the European research framework is an absurdity for everyone because this is just going to harm both Switzerland and Europe," Gisin says. "It would be really important that Europe and Switzerland understand that we will both benefit if we talk together more and collaborate more."

Since July 2021, Switzerland has had limited access to Horizon Europe, the European Union's flagship funding program for research and innovation, due to a breakdown in negotiations on regulating bilateral relations.

Many of our problems today, such as climate change or the Covid-19 pandemic, are global in nature. Getting governments across the world to agree to work together on solutions is not easy, but researchers can help.

"The research community likes to work together globally, and this collaboration has helped historically to overcome certain barriers," Riel says, emphasising the importance of communication in this regard.

Researchers working together on a global scale during the pandemic led to vaccines being developed at record-breaking speed. During the Cold War, Soviet scientists remained involved in projects at the European Organization for Nuclear Research (CERN) in Geneva, which allowed some communication to take place.

"In science, we have a common ground and it's kind of universal; the scientists in the United States, Canada, Australia, Europe and China, they all work on the same problems, they all try to solve the same technical issues," Riel says.

Scientists also have an important role to play in informing and sharing facts with both policymakers and the public, even if politicians cannot rely solely on scientific evidence when making decisions. The challenges of communicating fact-based evidence have been laid bare during the pandemic.

"I think it's very important that we also inform the society of what we are doing, so that it's not a mystery that scares people," Riel says.

Ultimately, to successfully address global challenges, scientists, diplomats, and politicians will have to work together.

"It's really a cooperation between the global collaboration of the scientists and the global collaboration of the diplomats to solve the problems together," Riel says.

Read the rest here:
How science and diplomacy inform each other - SWI swissinfo.ch

Is Neuromorphic Computing The Answer For Autonomous Driving And Personal Robotics? – Forbes

Intel's Loihi 2 is the company's second-generation neuromorphic computing chip, a technology that's designed to function like a digital representation of a biological brain, complete with neurons and synapses.

If you follow the latest trends in the tech industry, you probably know that there's been a fair amount of debate about what the next big thing is going to be. The odds-on favorite for many has been augmented reality (AR) glasses, while others point to fully autonomous cars, and a few are clinging to the potential of 5G. With the surprise debut of Amazon's Astro a few weeks back, personal robotic devices and digital companions have also thrown their hat into the ring.

However, while there has been little agreement on exactly what the next thing is, there seems to be little disagreement that whatever it turns out to be, it will be somehow powered, enabled, or enhanced by artificial intelligence (AI). Indeed, the fact that AI and machine learning (ML) are our future seems to be a foregone conclusion.

Yet, if we do an honest assessment of where some of these technologies actually stand on a functionality basis versus initial expectations, it's fair to argue that the results have been disappointing on many levels. In fact, if we extend that thought process out to what AI/ML were supposed to do for us overall, then we start to come to a similarly disappointing conclusion.

To be clear, we've seen some incredible advancements in many areas that AI has powered. Advanced analytics, neural network training, and other related fields (where large chunks of data are used to find patterns, learn rules, and then apply them) have been huge beneficiaries of existing AI approaches.

At the same time, if we look at an application like autonomous driving, it seems increasingly clear that just pushing more and more data into algorithms that crank out ever-refined, yet still flawed, ML models isn't really working. We're still years away from true Level 5 autonomy, and, given the number of accidents and even deaths that efforts like Tesla's Autopilot have led to, it's probably time to consider another approach.

Similarly, though we are still at the dawn of the personal robotics age, it's easy to imagine how the conceptual similarities between autonomous cars and robots will lead to conceptually similar problems in this new field. The problem, ultimately, is that there is simply no way to feed every potential scenario into an AI training model and create a predetermined answer on how to react in any given situation. Randomness and unexpected surprises are simply too strong an influence.

What's needed is a type of computing that can really think and learn on its own and then adapt its learning to those unexpected scenarios. As crazy and potentially controversial as that may sound, that's essentially what researchers in the field of neuromorphic computing are attempting to do. The basic idea is to replicate the structure and function of the most adaptable computing/thinking device we know of, the human brain, in digital form. Following the principles of basic biology, neuromorphic chips attempt to re-create a series of connected neurons using digital synapses that send electrical pulses between them, much as biological brains do.
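
To make the biological analogy concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the simplified neuron model most neuromorphic designs are built around. The Python below is illustrative, with invented parameters, and is not any chip's actual API:

```python
import numpy as np

# A leaky integrate-and-fire (LIF) neuron: charge accumulates from
# incoming spikes, leaks away over time, and a spike fires when the
# membrane potential crosses a threshold. Parameters are invented.
leak, threshold = 0.9, 1.0
potential = 0.0
rng = np.random.default_rng(seed=0)

for t in range(20):
    incoming = rng.random() * 0.3              # weighted input spikes
    potential = potential * leak + incoming    # integrate and leak
    if potential >= threshold:                 # threshold crossed
        print(f"t={t}: spike!")
        potential = 0.0                        # reset after firing
```

A neuromorphic chip implements many thousands of such neurons in silicon, wired together by weighted synapses that spike and adapt in parallel.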

It's an area of academic research that's been around for a few decades now, but only recently has it started to make real progress and gain more attention. In fact, buried in the wave of tech industry announcements that have been made over the last few weeks was news that Intel had released the second generation of its neuromorphic chip, named Loihi 2, along with a new open-source software framework for it that they've dubbed Lava.

To put realistic expectations around all of this, Loihi 2 is not going to be made commercially available (it's termed a research chip), and the latest version offers 1 million neurons, a far cry from the approximately 100 billion found in a human brain. Still, it's an extremely impressive, ambitious project that offers 10x the performance and 15x the density of its 2018-era predecessor (it's built on the company's new Intel 4 chip manufacturing process technology), along with improved energy efficiency. In addition, it provides better (and easier) means of interconnecting its unique architecture with other, more traditional chips.

Intel clearly learned a great deal from the first Loihi, and one of the biggest realizations was that software development for this radically new architecture is extremely hard. As a result, another essential part of the company's news was the debut of Lava, an open-source software framework and set of tools that can be used to write applications for Loihi. The company is also offering tools that can simulate its operation on traditional CPUs and GPUs so that developers can create code without having access to the chips.

What's particularly fascinating about how neuromorphic chips operate is that, despite the fact that they function in a dramatically different fashion from both traditional CPU computing and parallel GPU-like computing models, they can be used to achieve some of the same goals. In other words, neuromorphic chips like Loihi 2 can provide the desired outcomes that traditional AI is shooting for, but in a significantly faster, more energy-efficient, and less data-intensive way. Through a series of event-based spikes that occur asynchronously and trigger digital neurons to respond in various ways, much as a human brain operates (vs. the synchronous, structured processing in CPUs and GPUs), a neuromorphic chip can essentially learn things on the fly. As a result, it's ideally suited for devices that must react to new stimuli in real time.
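
The contrast with clock-driven processing is easiest to see in code. In the sketch below (the three-neuron network and all numbers are invented for illustration), computation happens only when a spike event is pulled off a priority queue, rather than on every tick of a global clock:

```python
import heapq

# Event-driven ("spiking") computation: work happens only when a spike
# event arrives, not on every clock tick as in a CPU or GPU. The
# three-neuron network and all numbers below are invented.
weights = {0: [(1, 0.6), (2, 0.8)],   # neuron 0 feeds neurons 1 and 2
           1: [(2, 0.5)],
           2: []}
potential = {0: 0.0, 1: 0.0, 2: 0.0}
events = [(0.0, 0, 1.0), (0.5, 1, 0.7)]   # (time, target neuron, input)
heapq.heapify(events)

while events:
    t, n, x = heapq.heappop(events)   # handle the earliest spike event
    potential[n] += x
    if potential[n] >= 1.0:           # threshold crossed: neuron fires
        potential[n] = 0.0
        print(f"t={t:.1f}: neuron {n} fires")
        for target, w in weights[n]:  # schedule downstream events
            heapq.heappush(events, (t + 1.0, target, w))
```

Between events, nothing runs and nothing draws power, which is the root of the energy-efficiency claims for this class of hardware.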

These capabilities are why these chips are so appealing to those designing and building robots and robot-like systems, which autonomous cars essentially are. The bottom line is that it could take commercially available neuromorphic chips to power the kind of autonomous cars and personal robots of our science-fiction-inspired dreams.

Of course, neuromorphic computing isn't the only new approach to advancing the world of technology. There's also a great deal of work being done in the more widely discussed world of quantum computing. Like quantum computing, the inner workings of neuromorphic computing are extraordinarily complex and, for now, primarily seen as research projects for corporate R&D labs and academia. Unlike quantum, however, neuromorphic computing doesn't face the extreme physical challenges (temperatures near absolute zero) and power requirements that quantum currently does. In fact, one of the many appealing aspects of neuromorphic architectures is that they're designed to be extremely low power, making them suitable for a variety of mobile or other battery-powered applications (like autonomous cars and robots).

Despite recent advancements, it's important to remember that commercial application of neuromorphic chips is still several years away. However, it's hard not to get excited and intrigued by a technology that has the potential to make AI-powered devices truly intelligent, instead of simply very well-trained. The distinction may seem subtle, but ultimately, it's that kind of new smarts that we'll likely need in order to make some of the next big things really happen in a way that we can all appreciate and imagine.

Disclosure: TECHnalysis Research is a tech industry market research and consulting firm and, like all companies in that field, works with many technology vendors as clients, some of whom may be listed in this article.

See the original post:
Is Neuromorphic Computing The Answer For Autonomous Driving And Personal Robotics? - Forbes

QunaSys to participate in IEEE International Conference on Quantum Computing and Engineering (QCE21) – KEVN Black Hills Fox

Published: Oct. 4, 2021 at 10:00 AM MDT

TOKYO, Oct. 4, 2021 /PRNewswire/ -- QunaSys Inc. is a sponsor of Quantum Week 2021 (Oct. 17-21), the leading quantum computing event that bridges the gap between the science of quantum computing and the development of the surrounding industry.

QunaSys researchers are deeply engaged in the event with an exhibit booth, a hands-on tutorial, and a panel.

"Japan's technology ecosystem is actively advancing quantum computing. QunaSys is a key player in driving business, government, and academia collaboration to enable the quantum chemistry ecosystem and boost the adoption of this technology." Tennin Yan, QunaSys Inc. CEO, and Hausi Mller, General Chair IEEE Quantum Week 2021 and Co-Chair IEEE Quantum Initiative.

"Companies are getting ready by learning the skills to develop and test quantum algorithms. Collaboration within an ecosystem and a multi-platform approach is key to expand use case proliferation that in turn advances the technology." Tennin Yan, QunaSys Inc. CEO.

"As organizers, we are very pleased with the outstanding contributions from the international quantum community for IEEE International Conference on Quantum Computing and Engineering (QCE). We look forward to welcoming 800+ participants from 45+ countries and 220+ companies." Hausi Mller.

Register now to learn how to maximize the power of quantum computing, understand the potential of industry use cases, and see how to implement algorithms that solve complex chemistry-related problems: https://qce.quantum.ieee.org/registration/registration-overview

About QunaSys Inc.

QunaSys is the world's leading developer of innovative algorithms in chemistry, focused on accelerating the practical applicability of quantum technology. QunaSys helps maximize the power of quantum computing through advanced joint research on cutting-edge technologies: providing Qamuy, its quantum chemical calculation cloud software; fostering collaboration through the QPARC industry consortium; and working with research institutions in academia and government. QunaSys software runs on multiple technology platforms, with applicability in all chemistry-related industries, to boost quantum computing adoption.

About IEEE event

IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. IEEE is actively contributing to the global R&D efforts to understand the power and promise of quantum computing. IEEE Quantum Week is bridging the gap between the science of quantum computing and the development of an industry surrounding it.

Hiroki Oka: oka@qunasys.com / pr@qunasys.com, +81-90-6058-9550

View original content to download multimedia:

SOURCE QunaSys Inc.

The above press release was provided courtesy of PRNewswire. The views, opinions and statements in the press release are not endorsed by Gray Media Group nor do they necessarily state or reflect those of Gray Media Group, Inc.

Go here to see the original:
QunaSys to participate in IEEE International Conference on Quantum Computing and Engineering (QCE21) - KEVN Black Hills Fox