OpenSSL 3.0.5 awaits release to fix potential worse-than-Heartbleed flaw – The Register

The latest version of OpenSSL v3, a widely used open-source library for secure networking using the Transport Layer Security (TLS) protocol, contains a memory corruption vulnerability that imperils x64 systems with Intel's Advanced Vector Extensions 512 (AVX512).

OpenSSL 3.0.4 was released on June 21 to address a command-injection vulnerability (CVE-2022-2068) that was not fully addressed with a previous patch (CVE-2022-1292).

But this release itself needs further fixing. OpenSSL 3.0.4 "is susceptible to remote memory corruption which can be triggered trivially by an attacker," according to security researcher Guido Vranken. We're imagining two devices establishing a secure connection between themselves using OpenSSL and this flaw being exploited to run arbitrary malicious code on one of them.

Vranken said that if this bug can be exploited remotely (and it's not certain it can be), it could be more severe than Heartbleed, at least from a purely technical point of view.

However, Vranken notes several mitigating factors, including the continued use of the 1.1.1 tree of the library rather than the v3 tree; the fork of libssl into LibreSSL and BoringSSL; the short amount of time 3.0.4 has been available; and the fact that the error only affects x64 systems with AVX512, which is available on certain Intel chips released between 2016 and early 2022.

Intel this year began disabling AVX512 support on Alder Lake, its 12th Gen Intel Core processors.

The bug, an AVX512-specific buffer overflow, was reported six days ago. It has been fixed, but OpenSSL 3.0.5 has not yet been released.
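For administrators wondering whether a particular host is exposed, the affected combination is OpenSSL 3.0.4 on an AVX512-capable x64 CPU. Below is a minimal sketch of such a check in Python; it assumes a Linux host (for the /proc/cpuinfo probe) and that the Python interpreter is linked against the system OpenSSL, which is not guaranteed, so the openssl version command remains more authoritative.

```python
# Rough exposure check (a sketch, not an official tool): flags hosts running
# OpenSSL 3.0.4 on an AVX512-capable CPU, the combination affected by the bug.
import ssl

def openssl_is_3_0_4() -> bool:
    # ssl.OPENSSL_VERSION looks like "OpenSSL 3.0.4 21 Jun 2022" and reflects
    # the library Python was built against -- not necessarily the system copy.
    return "3.0.4" in ssl.OPENSSL_VERSION

def cpu_has_avx512() -> bool:
    try:
        with open("/proc/cpuinfo") as f:  # Linux-only probe
            return "avx512" in f.read()
    except OSError:
        return False

if openssl_is_3_0_4() and cpu_has_avx512():
    print("Potentially affected: OpenSSL 3.0.4 on an AVX512-capable CPU")
else:
    print("This host likely avoids this specific bug")
```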

Meanwhile, Linux distributions like Gentoo have not yet rolled out OpenSSL 3.0.4 as a result of this bug and a separate test build failure. They therefore still ship OpenSSL 3.0.3, with its command-injection flaw.

In the GitHub Issues thread discussing the bug, Tomáš Mráz, a software developer at the OpenSSL Foundation, argues the bug shouldn't be classified as a security vulnerability.

"I do not think this is a security vulnerability," he said. "It is just a serious bug making [the] 3.0.4 release unusable on AVX512 capable machines."

Xi Ruoyao, a PhD student at Xidian University, also said he disagreed with the policy of calling every heap buffer overflow a security flaw. Vim, he said, started doing so this year and the result has been something like ten "high severity" vim CVEs every month without any proof-of-concept exploit code.

"I think we shouldn't mark a bug as 'security vulnerability' unless we have some evidence showing it can (or at least, may) be exploited," he wrote, adding that nonetheless 3.0.5 should be released as soon as possible because it's very severe.

Alex Gaynor, software resilience engineer with the US Digital Service, however, argues to the contrary.

"I'm not sure I understand how it's not a security vulnerability," responded Gaynor. "It's a heap buffer overflow that's triggerable by things like RSA signatures, which can easily happen in remote contexts (e.g. a TLS handshake)."

Gaynor urged releasing the fix quickly. "I think this issue qualifies as a CRITICAL within OpenSSL's vulnerability severity policy, and it makes it effectively impossible for users to upgrade to 3.0.4 to obtain its security fixes," he said.

See original here:
OpenSSL 3.0.5 awaits release to fix potential worse-than-Heartbleed flaw - The Register

Exploiting symmetries: Speeding up the computational study of solid solutions – EurekAlert

Image: Atomic substitution with La atoms: Ce8Pd24Sb → (Ce5,La3)Pd24Sb. The crystal structure was obtained from the ICSD database (CollCode: 83378). The space group is 221-Pm3m, and the crystal structures are depicted using VESTA.

Credit: Kousuke Nakano from JAIST.

Ishikawa, Japan -- Symmetry is a prevalent feature of nature at all scales. For example, our naked eyes can easily identify symmetries in the bodily shape of countless organisms. Symmetry is also very important in the fields of physics and chemistry, especially in the microscopic realm of atoms and molecules. Crystals, which are highly ordered materials, can even have multiple types of symmetry at the same time, such as rotational symmetry, inversion symmetry, and translational symmetry.

Lately, alongside rapid progress in computer science, researchers have developed computational methods that seek to predict the physical properties of crystals based on their electronic structure. In practice, however, pure and perfectly symmetric crystals are seldom used. This is because a crystal's properties can be tuned as desired by alloying it with other materials or randomly substituting certain atoms with other elements, i.e., doping.

Accordingly, materials scientists are seeking computationally efficient approaches to analyze such alloys and substituted crystals, also known as solid solutions. The supercell method is one such approach and is widely used to model crystal structures with random substitutions of different atoms. The symmetry of crystals, however, is actually a problem when using this technique. In crystals, there can be many substitution patterns that are physically equivalent to other substitutions if we simply translate or rotate them. Finding these symmetric substitution patterns is not very meaningful, and thus their calculation when using the supercell method is a waste of time.

In a recent study, a team of researchers led by Assistant Professor Kousuke Nakano from Japan Advanced Institute of Science and Technology (JAIST) found a solution to this problem. They developed an open-source software package called Suite for High-throughput generation of models with atomic substitutions implemented by Python, or SHRY, which can generate symmetry-distinct substitution patterns in solid solutions and alloys [https://github.com/giprayogo/SHRY]. This work, which was published in the ACS Journal of Chemical Information and Modeling, was co-authored by doctoral student Genki I. Prayogo, Dr. Andrea Tirelli, Professor Ryo Maezono, and Associate Professor Kenta Hongo.

The team approached the problem from the angle of group theory. It turns out that searching for atomic substitution patterns in crystals is analogous to the problem of finding coloring patterns on the vertices of graphs under certain restrictions. This allows one to reformulate the original problem of finding non-symmetric atomic substitutions in crystals as exploring search trees depicting the coloring of vertices in graphs.

However, the way in which the search tree is explored is crucial. A simple, naïve approach in which all possible branches are searched and directly compared is impossible; the time and calculations required grow uncontrollably for large systems. This happens because deciding whether to explore further down a branch requires information about all other branches besides the one being explored, which is technically referred to as non-local information.
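To make that concrete, here is a toy sketch of the naive enumerate-and-deduplicate approach on a hypothetical four-site ring (an illustration of the problem, not SHRY's algorithm): every pattern is reduced against the whole symmetry group and checked against a global "seen" set, exactly the kind of non-local bookkeeping that blows up for realistic supercells.

```python
# Toy illustration of the naive approach (not SHRY's canonical augmentation):
# enumerate every substitution pattern on a 4-site ring, then deduplicate by
# reducing each pattern to a canonical form under the full symmetry group.
# The global "seen" set is the non-local information that makes this approach
# intractable for realistic supercells.
from itertools import combinations

N_SITES = 4
# Symmetry operations of a 4-site ring: 4 rotations, each optionally composed
# with a reflection; each op is a permutation (op[i] = image of site i).
rotations = [tuple((i + r) % N_SITES for i in range(N_SITES)) for r in range(N_SITES)]
symmetry_ops = rotations + [tuple(reversed(p)) for p in rotations]

def canonical(pattern):
    """Lexicographically smallest symmetry image of a substitution pattern."""
    return min(tuple(sorted(op[i] for i in pattern)) for op in symmetry_ops)

seen = set()
for pattern in combinations(range(N_SITES), 2):  # substitute 2 of the 4 sites
    seen.add(canonical(pattern))

# 6 raw patterns collapse to 2 distinct ones: adjacent sites vs. opposite sites.
print(f"{len(seen)} symmetry-distinct patterns out of 6 raw patterns")
```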

To avoid this issue, the researchers implemented in SHRY a technique called canonical augmentation. "This method can decide whether a tree branch should be explored more deeply or not based solely on local information," explains Dr. Nakano. "Most importantly, theorems from group theory guarantee that only distinct substitution patterns will be extracted, without over- or under-exploring the tree structure in terms of symmetry." The team verified that their algorithm was error-free by testing it thoroughly with data from a database of crystal structures.

It is worth noting that SHRY was written in Python 3, one of the most popular cross-platform programming languages, and uploaded to GitHub, a leading project-sharing online platform. "SHRY can be used as a stand-alone program or imported into another Python program as a module," highlights Dr. Nakano. "Our software also uses the widely supported Crystallographic Information File (CIF) format for both the input and output of the sets of substituted crystal structures." The team plans to keep improving SHRY's code based on feedback from other users, boosting its speed and capabilities.

Overall, the software developed in this study could help scientists identify potential atomic substitutions in solids, which is the most common strategy used to tune the properties of materials for practical applications. SHRY will help speed up research and develop substituted crystals with unprecedented functionalities and superior characteristics.

###

Reference

Title of original paper:

SHRY: Application of Canonical Augmentation to the Atomic Substitution Problem

Journal:

Journal of Chemical Information and Modeling

DOI:

10.1021/acs.jcim.2c00389

About Japan Advanced Institute of Science and Technology, Japan

Founded in 1990 in Ishikawa prefecture, the Japan Advanced Institute of Science and Technology (JAIST) was the first independent national graduate school in Japan. Now, after 30 years of steady progress, JAIST has become one of Japan's top-ranking universities. JAIST has multiple satellite campuses and strives to foster capable leaders with a state-of-the-art education system where diversity is key; about 40% of its alumni are international students. The university has a unique style of graduate education based on a carefully designed coursework-oriented curriculum to ensure that its students have a solid foundation on which to carry out cutting-edge research. JAIST also works closely with both local and overseas communities by promoting industry-academia collaborative research.

About Assistant Professor Kousuke Nakano from Japan Advanced Institute of Science and Technology, Japan

Dr. Kousuke Nakano obtained B.Sc. and M.Sc. degrees in Engineering from Kyoto University, Japan, in 2012 and 2014, respectively. He then joined JAIST, where he obtained a Ph.D. in computer and information science in 2017. Since 2019, he has worked there as an Assistant Professor, researching first-principles quantum Monte Carlo simulations, density functional theory, machine learning for materials informatics, and the synthesis of novel inorganic compounds using solid-state reactions. He has over 40 publications to his name on these topics, and his h-index is 14 with over 700 citations (Google Scholar, Jun. 2022).

Funding information

This work was financially supported by JST SPRING (Grant Number JPMJSP2102), MIUR Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) Bando 2017 (Grant Number 2017BZPKSZ), the HPCI System Research Project (Project IDs: hp210019, hp210131, and jh210045), MEXT-KAKENHI (JP16H06439, JP17K17762, JP19K05029, JP19H05169, JP21K03400, JP21H01998, JP22H02170, JP19H04692, and JP21K03400), the U.S. Air Force Office of Scientific Research (Award Number FA2386-20-1-4036, AFOSR-AOARD/FA2386-17-1-4049; FA2386-19-1-4015), JSPS Bilateral Joint Projects (JPJSBP120197714), JSPS Overseas Research Fellowships, a Grant-in-Aid for Early-Career Scientists (Grant Number JP21K17752), and a Grant-in-Aid for Scientific Research (C) (Grant Number JP21K03400).

Journal of Chemical Information and Modeling

SHRY: Application of Canonical Augmentation to the Atomic Substitution Problem

9-Jun-2022

See the original post:
Exploiting symmetries: Speeding up the computational study of solid solutions - EurekAlert

Building An Insights And Analytics Newsletter: From Proof Of Concept To Feature Release – Forbes

As software engineers at Forbes, we're always building innovative features and working on prototypes for potential projects. Some proof-of-concept initiatives remain in the testing state, but some end up being shared with a larger audience. Bertie Bulletin, a monthly email containing user stats, was a project that started as an engineering initiative and turned into a monthly email sent to both active Forbes Staff and Contributors who write for the Forbes website.

Currently, writers have access to their user and story stats through Bertie, Forbes' content management system, where they can write, edit, and publish their articles. Bertie Bulletin was created from a suggestion to better equip writers with the knowledge they need to understand their stats, as well as provide them with a record of historical stats. Bertie Bulletin is similar to a bank statement that helps writers keep track of their performance from month to month. We included the top-performing stories for the previous month, based on how many page views the story received, audience stats, and a referral breakdown indicating where readers are coming from. While these emails have some of the data that already exists in the stats dashboard, they also extend to include insights. Insights are comparisons of data, such as this month or year versus the previous month or year.

As we built Bertie Bulletin, we made sure to fully utilize pre-existing projects in the Forbes Engineering universe. The initial approach used Directed Acyclic Graphs (DAGs), which are written in Python; therefore, so was Bertie Bulletin. To create the mailing list, we consulted with our API team, who utilized cloud functions to generate the mailing lists with the given parameters. We used Mailgun, an email delivery service used by other Forbes projects, to store mailing lists and email templates as well as trigger emails. Our codebase made calls to the Forbes stats API to fetch the numbers and generate insights, which were then stored as recipient variables in Mailgun.

A lot of research went into figuring out which templating engine made sense to use. The initial email was set up in a single HTML file, though it wasn't pretty to look at or easy to edit. Our list of wants included an engine that would allow for template inheritance and control structures. In other words, we could break the email down into sections with conditionals that would then be compiled. This led us to Jinja, which describes itself as "a fast, expressive, extensible templating engine," making it ideal for our purposes.

Setting up the Jinja flow for Bertie Bulletin required three different steps: creating the Jinja files containing our HTML elements, using built-in methods to render a single HTML file, and uploading that file to Mailgun to use as the email template.
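In rough outline, that flow might look like the sketch below; the file names, template variables, and domain are hypothetical, and the upload step assumes Mailgun's v3 HTTP API together with the jinja2 and requests packages.

```python
# A sketch of the three-step flow (hypothetical names throughout): render the
# Jinja sources to one HTML file, then upload it to Mailgun as a template.
import requests
from jinja2 import Environment, FileSystemLoader

# Steps 1-2: load the Jinja sources (with template inheritance and
# conditionals) and render them down to a single HTML document.
env = Environment(loader=FileSystemLoader("templates"))
html = env.get_template("bulletin.html.j2").render(
    month="January",
    show_donut_chart=True,  # drives a conditional section of the email
)

# Step 3: upload the rendered HTML to Mailgun as a reusable email template
# (assumes Mailgun's v3 API; the domain and key are placeholders).
resp = requests.post(
    "https://api.mailgun.net/v3/example.com/templates",
    auth=("api", "YOUR_MAILGUN_API_KEY"),
    data={"name": "bertie-bulletin", "template": html},
)
resp.raise_for_status()
```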

Our Minimum Viable Product (MVP) had a couple of must-haves for our writers. This included top stories, audience stats, total published stories, and information on page views. This first version of the email was released in February 2022, rendering January stats data. With each iteration of these emails, we included new features to enable writers to visualize their information in a more palatable manner, such as an audience stats donut chart and a traffic source types referral table, to name a few.

Screenshot of Bertie Bulletin donut chart and referral breakdown designs.

The process to deploy Bertie Bulletin emails can be split into a few broad steps. Initially, the email template is generated and uploaded, and the mailing list is sanitized. The next steps retrieve each writers data from the stats API and transform it into useful pieces of information. Lastly, each users data is updated and the email is triggered.

In order to figure out the best deployment strategy for Bertie Bulletin, we once again turned to Forbes Engineering. We consulted with our DevOps team and they suggested using Argo Workflows, a container-native engine that enables the orchestration of parallel jobs on Kubernetes, an open-source platform for working with containers. Each step was containerized into a reliable package of the execution environment and applied as a self-contained image. The advantage of using containers was that each one would be small and do a defined unit of work, which could then be scaled horizontally and completed in parallel, thus finishing the batch work more quickly. Optimization was important because we expected to send a couple thousand emails every month. With Argo's fully featured UI, we were able to see each containerized step as a stage in the Argo workflow and visualize its progress. If a step errored out for any reason, Argo could retry the step and provide us with logs to better debug the issue.

As we reflected on the process of creating Bertie Bulletin, we realized that the advice from Jackie Ha's previous article still applied at any level.

Be Inquisitive: In the beginning, we had a lot of conversations with product owners to hash out details. What did the MVP look like? What were the specific requirements at each stage?

Know When To Ask For Help: With a large project like Bertie Bulletin, it was inevitable (like Thanos) that we would run into the unknown (like Elsa). To overcome these roadblocks, we consulted with various people throughout the process for guidance on the best way to execute.

Learn How To Debug: Because Bertie Bulletin had so many moving parts, if something went wrong we needed to be able to pinpoint exactly where the error came from. This meant we had to figure out how to debug effectively and efficiently.

Proofread Your PRs: Whether you are on team unified view or team split view, it's good practice to review your own PR in GitHub before sending it out to others. The visual comparison can help you catch a sneaky print() or syntax errors easily missed within your code editor.

Taking this project from prototype to production was a strenuous effort that required collaboration across many teams. After many hours of pair programming, screen sharing, and debugging sessions, being able to finally see the end result live and hearing positive feedback from the recipients made the process worthwhile.

Go here to read the rest:
Building An Insights And Analytics Newsletter: From Proof Of Concept To Feature Release - Forbes

Calendar of events, awards and opportunities – ASBMB Today

Every week, we update this list with new meetings, awards, scholarships and events to help you advance your career. If you'd like us to feature something that you're offering to the bioscience community, email us with the subject line "For calendar." ASBMB members' offerings take priority, and we do not promote products/services. Learn how to advertise in ASBMB Today.

As we do each year, we'll be hosting a Twitter chat for Pride Month. It will be at 2 p.m. Eastern on June 27 and will feature ASBMB staffers, members and representatives of allied organizations. We hope you can join us! Follow us at @ASBMB.

This webinar will feature the ins and outs of the Early Career Reviewer Program at the National Institutes of Health's Center for Scientific Review, which gives emerging investigators an inside look at the scientific peer-review process. Elyse Schauwecker, a scientific review officer at CSR, will talk about the benefits of participating, eligibility, the application process and recent changes. There will also be time to ask Schauwecker questions about the program and other CSR opportunities for early-career scientists. Anita Corbett of Emory University, a member of the ASBMB Public Affairs Advisory Committee, will moderate. Register.

The National Cancer Institute's Frederick National Laboratory for Cancer Research is the only national laboratory dedicated to biomedical research. FNLCR is conducting a survey to determine how familiar researchers are with the lab and the services, tools and resources it offers to the scientific community. Take the survey.

This five-day conference will be held Aug. 14–18 in person in Cambridge, Massachusetts, and online. It will be an international forum for discussion of the remarkable advances in cell and human protein biology revealed by ever-more-innovative and powerful mass spectrometric technologies. The conference will juxtapose sessions about methodological advances with sessions about the roles those advances play in solving problems and seizing opportunities to understand the composition, dynamics and function of cellular machinery in numerous biological contexts. In addition to celebrating these successes, we also intend to articulate urgent, unmet needs and unsolved problems that will drive the field in the future. The registration deadline is July 1. Learn more.

The Journal of Science Policy & Governance, the United Nations Educational, Scientific and Cultural Organization and the Major Group for Children and Youth announced in February a call for papers for a special issue on "open science policies as an accelerator for achieving the sustainable development goals." The deadline for submissions is July 10. To help authors prepare their submissions, the group will be hosting a series of webinars (April 8 & 29, May 20, and June 10) and a science policy paper-writing workshop (March 26–27). Read the call for submissions and learn more about the events.

This in-person meeting will be held Sept. 29 through Oct. 2 in Snowbird, Utah. Sessions will cover recent advances and new technologies in RNA polymerase II regulation, including the contributions of non-coding RNAs, enhancers and promoters, chromatin structure and post-translational modifications, molecular condensates, and other factors that regulate gene expression. Patrick Cramer of the Max Planck Institute will present the keynote address on the structure and function of transcription regulatory complexes. The deadline for oral presentation abstracts is July 14. The deadline for poster presentation abstracts is Aug. 18. Learn more.

Head to beautiful Denver, Colorado, for a summer experience as a PRIDE (Programs to Increase Diversity Among Individuals Engaged in Health-Related Research) scholar. PRIDE is an initiative of the National Heart, Lung and Blood Institute that trains junior faculty from underrepresented backgrounds and/or with disabilities to advance their scientific careers and make them more competitive for external research funding. The University of Colorado PRIDE (led by Sonia C. Flores, who also leads the ASBMB Minority Affairs Committee) is one of nine national PRIDE sites. Its focus is on the "impact of ancestry and gender on omics of lung and cardiovascular diseases" (which is why it's called PRIDE-AGOLD). The program consists of two consecutive summer institutes (two and one week, respectively) that offer comprehensive formal instruction on multi-omics, data sciences and bioinformatics, with an emphasis on interpretations based on ancestry and/or gender; career development and grant-writing tools; pairing with expert mentors; and pilot funds to develop a small research project. Learn more.

Most meetings on epigenetics and chromatin focus on transcription, while most meetings on genome integrity pay little attention to epigenetics and chromatin. This conference in Seattle will bridge this gap to link researchers who are interested in epigenetic regulation and chromatin with those who are interested in genome integrity. The oral and poster abstract deadline and early registration deadline is Aug. 2. The regular registration deadline is Aug. 29. Learn more.

For Discover BMB, the ASBMB's annual meeting in March in Seattle, we're seeking two types of proposals:

In May, the Howard Hughes Medical Institute launched a roughly $1.5 billion program to "help build a scientific workforce that more fully reflects our increasingly diverse country." The Freeman Hrabowski Scholars Program will fund 30 scholars every other year, and each appointment can last up to 10 years. That represents up to $8.6 million in total support per scholar. HHMI is accepting applications from researchers "who are strongly committed to advancing diversity, equity, and inclusion in science." Learn more.

Save the date for the ASBMB Career Expo. This virtual event aims to highlight the diversity of career choices available to modern biomedical researchers. No matter your career stage, this expo will provide a plethora of career options for you to explore while simultaneously connecting you with knowledgeable professionals in these careers. Each 60-minute session will focus on a different career path and will feature breakout rooms with professionals in those paths. Attendees can choose to meet in a small group with a single professional for the entire session or move freely between breakout rooms to sample advice from multiple professionals. Sessions will feature the following five sectors: industry, government, science communication, science policy and other. The expo will be held from 11 a.m. to 5 p.m. Eastern on Nov. 2. Stay tuned for a link to register!

The ASBMB provides members with a virtual platform to share scientific research and accomplishments and to discuss emerging topics and technologies with the BMB community.

The ASBMB will manage the technical aspects, market the event to tens of thousands of contacts and present the digital event live to a remote audience. Additional tools such as polling, Q&A, breakout rooms and post-event Twitter chats may be used to facilitate maximum engagement.

Seminars are typically one to two hours long. A workshop or conference might be longer and even span several days.

Prospective organizers may submit proposals at any time. Decisions are usually made within four to six weeks.

Propose an event.

If you are a graduate student, postdoc or early-career investigator interested in hosting a #LipidTakeover, fill out this application. You can spend a day tweeting from the Journal of Lipid Research's account (@JLipidRes) about your favorite lipids and your work.

The International Union of Biochemistry and Molecular Biology is offering $500 to graduate students and postdocs displaced from their labs as a result of natural disaster, war or "other events beyond their control that interrupt their training." The money is for travel and settling in. Learn more and spread the word to those who could use assistance.

The Center for Open Bioimaging Analysis maintains the open-source software CellProfiler and ImageJ. COBA has partnered with Bioimaging North America and the Royal Microscopical Society to create a survey to assess the needs of the community for software and training materials. Take the survey.

Read more here:
Calendar of events, awards and opportunities - ASBMB Today

What is quantum computing? – TechTarget

Quantum computing is an area of study focused on the development of computer-based technologies centered around the principles of quantum theory. Quantum theory explains the nature and behavior of energy and matter on the quantum (atomic and subatomic) level. Quantum computing uses a combination of bits to perform specific computational tasks, all at a much higher efficiency than their classical counterparts. The development of quantum computers marks a leap forward in computing capability, with massive performance gains for specific use cases. For example, quantum computing excels at tasks like simulations.

The quantum computer gains much of its processing power through the ability for bits to be in multiple states at one time. They can perform tasks using a combination of 1s, 0s, and both a 1 and 0 simultaneously. Current research centers in quantum computing include MIT, IBM, Oxford University, and the Los Alamos National Laboratory. In addition, developers have begun gaining access to quantum computers through cloud services.

Quantum computing began with finding its essential elements. In 1981, Paul Benioff at Argonne National Labs came up with the idea of a computer that operated with quantum mechanical principles. It is generally accepted that David Deutsch of Oxford University provided the critical idea behind quantum computing research. In 1984, he began to wonder about the possibility of designing a computer that was based exclusively on quantum rules, publishing a breakthrough paper a few months later.

Quantum Theory

Quantum theory's development began in 1900 with a presentation by Max Planck to the German Physical Society, in which Planck introduced the idea that energy and matter exist in individual units. Further developments by a number of scientists over the following thirty years led to the modern understanding of quantum theory.

The Essential Elements of Quantum Theory:

Further Developments of Quantum Theory

Niels Bohr proposed the Copenhagen interpretation of quantum theory. This theory asserts that a particle is whatever it is measured to be, but that it cannot be assumed to have specific properties, or even to exist, until it is measured. This relates to a principle called superposition. Superposition claims that, when we do not know what the state of a given object is, it is actually in all possible states simultaneously -- as long as we don't look to check.

To illustrate this theory, we can use the famous analogy of Schrodinger's Cat. First, we have a living cat and place it in a lead box. At this stage, there is no question that the cat is alive. Then throw in a vial of cyanide and seal the box. We do not know if the cat is alive or if it has broken the cyanide capsule and died. Since we do not know, the cat is both alive and dead, according to quantum law -- in a superposition of states. It is only when we break open the box and see what condition the cat is in that the superposition is lost, and the cat must be either alive or dead.

The principle that, in some way, one particle can exist in numerous states opens up profound implications for computing.

A Comparison of Classical and Quantum Computing

Classical computing relies on principles expressed by Boolean algebra, usually operating with a 3 or 7-mode logic gate principle. Data must be processed in an exclusive binary state at any point in time; either 0 (off/false) or 1 (on/true). These values are binary digits, or bits. The millions of transistors and capacitors at the heart of computers can only be in one state at any point. In addition, there is still a limit as to how quickly these devices can be made to switch states. As we progress to smaller and faster circuits, we begin to reach the physical limits of materials and the threshold for classical laws of physics to apply.

The quantum computer operates with a two-mode logic gate: XOR and a mode called QO1 (the ability to change 0 into a superposition of 0 and 1). In a quantum computer, a number of elemental particles such as electrons or photons can be used. Each particle is given a charge, or polarization, acting as a representation of 0 and/or 1. Each particle is called a quantum bit, or qubit. The nature and behavior of these particles form the basis of quantum computing and quantum supremacy. The two most relevant aspects of quantum physics are the principles of superposition and entanglement.

Superposition

Think of a qubit as an electron in a magnetic field. The electron's spin may be either in alignment with the field, which is known as a spin-up state, or opposite to the field, which is known as a spin-down state. Changing the electron's spin from one state to another is achieved by using a pulse of energy, such as from a laser. If only half a unit of laser energy is used, and the particle is isolated from all external influences, the particle then enters a superposition of states, behaving as if it were in both states simultaneously.

Each qubit utilized could take a superposition of both 0 and 1. This means the number of states a quantum computer could represent is 2^n, where n is the number of qubits used. A quantum computer comprised of 500 qubits would have the potential to do 2^500 calculations in a single step. For reference, 2^500 is vastly more than the number of atoms in the known universe. These particles all interact with each other via quantum entanglement.
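To see where the 2^n in that claim comes from, consider this small NumPy sketch (illustrative only): an n-qubit register is described by a vector of 2^n complex amplitudes, built here as a tensor product of single-qubit states.

```python
# Illustrative only: an n-qubit register holds 2**n complex amplitudes.
import numpy as np

n = 3
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)  # one qubit in equal superposition

state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)  # each extra qubit doubles the vector length

print(state.size)           # 8 = 2**3 amplitudes held simultaneously
print(np.abs(state) ** 2)   # measurement probabilities, each 1/8
```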

In comparison to classical computing, quantum computing counts as true parallel processing. Classical computers today still only truly do one thing at a time; in classical computing, there are just two or more processors to constitute parallel processing.

Entanglement

Particles (like qubits) that have interacted at some point retain a type of connection and can be entangled with each other in pairs, in a process known as correlation. Knowing the spin state of one entangled particle -- up or down -- gives away the spin of the other in the opposite direction. In addition, due to the superposition, the measured particle has no single spin direction before being measured. The spin state of the particle being measured is determined at the time of measurement and communicated to the correlated particle, which simultaneously assumes the opposite spin direction. The reason why this happens is not yet explained.

Quantum entanglement allows qubits that are separated by large distances to interact with each other instantaneously (not limited to the speed of light). No matter how great the distance between the correlated particles, they will remain entangled as long as they are isolated.

Taken together, quantum superposition and entanglement create an enormously enhanced computing power. Where a 2-bit register in an ordinary computer can store only one of four binary configurations (00, 01, 10, or 11) at any given time, a 2-qubit register in a quantum computer can store all four numbers simultaneously. This is because each qubit represents two values. If more qubits are added, the capacity expands exponentially.

Quantum Programming

Quantum computing offers an ability to write programs in a completely new way. For example, a quantum computer could incorporate a programming sequence that would be along the lines of "take all the superpositions of all the prior computations." This would permit extremely fast ways of solving certain mathematical problems, such as factorization of large numbers.

The first quantum computing program appeared in 1994, when Peter Shor developed a quantum algorithm that could efficiently factorize large numbers.

The Problems - And Some Solutions

The benefits of quantum computing are promising, but there are still huge obstacles to overcome. Some problems with quantum computing are discussed below.

There are many problems to overcome, such as how to handle security and quantum cryptography. Long-term quantum information storage has been a problem in the past, too. However, breakthroughs in the last 15 years and in the recent past have made some form of quantum computing practical. There is still much debate as to whether this is less than a decade away or a hundred years into the future. However, the potential that this technology offers is attracting tremendous interest from both the government and the private sector. Military applications include the ability to break encryption keys via brute-force searches, while civilian applications range from DNA modeling to complex material science analysis.

Read the original here:
What is quantum computing? - TechTarget

Quantum Error Correction: Time to Make It Work – IEEE Spectrum

Dates chiseled into an ancient tombstone have more in common with the data in your phone or laptop than you may realize. They both involve conventional, classical information, carried by hardware that is relatively immune to errors. The situation inside a quantum computer is far different: The information itself has its own idiosyncratic properties, and compared with standard digital microelectronics, state-of-the-art quantum-computer hardware is more than a billion trillion times as likely to suffer a fault. This tremendous susceptibility to errors is the single biggest problem holding back quantum computing from realizing its great promise.

Fortunately, an approach known as quantum error correction (QEC) can remedy this problem, at least in principle. A mature body of theory built up over the past quarter century now provides a solid theoretical foundation, and experimentalists have demonstrated dozens of proof-of-principle examples of QEC. But these experiments still have not reached the level of quality and sophistication needed to reduce the overall error rate in a system.

The two of us, along with many other researchers involved in quantum computing, are trying to move definitively beyond these preliminary demos of QEC so that it can be employed to build useful, large-scale quantum computers. But before describing how we think such error correction can be made practical, we need to first review what makes a quantum computer tick.

Information is physical. This was the mantra of the distinguished IBM researcher Rolf Landauer. Abstract though it may seem, information always involves a physical representation, and the physics matters.

Conventional digital information consists of bits, zeros and ones, which can be represented by classical states of matter, that is, states well described by classical physics. Quantum information, by contrast, involves qubits (quantum bits) whose properties follow the peculiar rules of quantum mechanics.

A classical bit has only two possible values: 0 or 1. A qubit, however, can occupy a superposition of these two information states, taking on characteristics of both. Polarized light provides intuitive examples of superpositions. You could use horizontally polarized light to represent 0 and vertically polarized light to represent 1, but light can also be polarized on an angle and then has both horizontal and vertical components at once. Indeed, one way to represent a qubit is by the polarization of a single photon of light.

These ideas generalize to groups of n bits or qubits: n bits can represent any one of 2^n possible values at any moment, while n qubits can include components corresponding to all 2^n classical states simultaneously in superposition. These superpositions provide a vast range of possible states for a quantum computer to work with, albeit with limitations on how they can be manipulated and accessed. Superposition of information is a central resource used in quantum processing and, along with other quantum rules, enables powerful new ways to compute.

Researchers are experimenting with many different physical systems to hold and process quantum information, including light, trapped atoms and ions, and solid-state devices based on semiconductors or superconductors. For the purpose of realizing qubits, all these systems follow the same underlying mathematical rules of quantum physics, and all of them are highly sensitive to environmental fluctuations that introduce errors. By contrast, the transistors that handle classical information in modern digital electronics can reliably perform a billion operations per second for decades with a vanishingly small chance of a hardware fault.

Of particular concern is the fact that qubit states can roam over a continuous range of superpositions. Polarized light again provides a good analogy: The angle of linear polarization can take any value from 0 to 180 degrees.

Pictorially, a qubit's state can be thought of as an arrow pointing to a location on the surface of a sphere. Known as a Bloch sphere, its north and south poles represent the binary states 0 and 1, respectively, and all other locations on its surface represent possible quantum superpositions of those two states. Noise causes the Bloch arrow to drift around the sphere over time. A conventional computer represents 0 and 1 with physical quantities, such as capacitor voltages, that can be locked near the correct values to suppress this kind of continuous wandering and unwanted bit flips. There is no comparable way to lock the qubit's arrow to its correct location on the Bloch sphere.

Early in the 1990s, Landauer and others argued that this difficulty presented a fundamental obstacle to building useful quantum computers. The issue is known as scalability: Although a simple quantum processor performing a few operations on a handful of qubits might be possible, could you scale up the technology to systems that could run lengthy computations on large arrays of qubits? A type of classical computation called analog computing also uses continuous quantities and is suitable for some tasks, but the problem of continuous errors prevents the complexity of such systems from being scaled up. Continuous errors with qubits seemed to doom quantum computers to the same fate.

We now know better. Theoreticians have successfully adapted the theory of error correction for classical digital data to quantum settings. QEC makes scalable quantum processing possible in a way that is impossible for analog computers. To get a sense of how it works, it's worthwhile to review how error correction is performed in classical settings.

Simple schemes can deal with errors in classical information. For instance, in the 19th century, ships routinely carried clocks for determining the ship's longitude during voyages. A good clock that could keep track of the time in Greenwich, in combination with the sun's position in the sky, provided the necessary data. A mistimed clock could lead to dangerous navigational errors, though, so ships often carried at least three of them. Two clocks reading different times could detect when one was at fault, but three were needed to identify which timepiece was faulty and correct it through a majority vote.

The use of multiple clocks is an example of a repetition code: Information is redundantly encoded in multiple physical devices such that a disturbance in one can be identified and corrected.

As you might expect, quantum mechanics adds some major complications when dealing with errors. Two problems in particular might seem to dash any hopes of using a quantum repetition code. The first problem is that measurements fundamentally disturb quantum systems. So if you encoded information on three qubits, for instance, observing them directly to check for errors would ruin them. Like Schrödinger's cat when its box is opened, their quantum states would be irrevocably changed, spoiling the very quantum features your computer was intended to exploit.

The second issue is a fundamental result in quantum mechanics called the no-cloning theorem, which tells us it is impossible to make a perfect copy of an unknown quantum state. If you know the exact superposition state of your qubit, there is no problem producing any number of other qubits in the same state. But once a computation is running and you no longer know what state a qubit has evolved to, you cannot manufacture faithful copies of that qubit except by duplicating the entire process up to that point.

Fortunately, you can sidestep both of these obstacles. We'll first describe how to evade the measurement problem using the example of a classical three-bit repetition code. You don't actually need to know the state of every individual code bit to identify which one, if any, has flipped. Instead, you ask two questions: "Are bits 1 and 2 the same?" and "Are bits 2 and 3 the same?" These are called parity-check questions because two identical bits are said to have even parity, and two unequal bits have odd parity.

The two answers to those questions identify which single bit has flipped, and you can then counterflip that bit to correct the error. You can do all this without ever determining what value each code bit holds. A similar strategy works to correct errors in a quantum system.
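A small sketch of that classical three-bit logic in Python (for illustration; a real quantum code extracts these parities without reading the underlying data) shows how the two parity answers alone pinpoint and undo a single flip:

```python
# Classical three-bit repetition decode, illustrating the parity-check idea:
# the correction is driven entirely by the two parity values.
def correct_three_bit_code(bits):
    b1, b2, b3 = bits
    p12 = b1 ^ b2  # "are bits 1 and 2 the same?" (0 = even parity)
    p23 = b2 ^ b3  # "are bits 2 and 3 the same?"
    # The syndrome (p12, p23) identifies the single flipped bit, if any.
    flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(p12, p23)]
    if flipped is not None:
        bits[flipped] ^= 1  # counterflip to correct the error
    return bits

print(correct_three_bit_code([0, 1, 0]))  # middle bit flipped -> [0, 0, 0]
```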

Learning the values of the parity checks still requires quantum measurement, but importantly, it does not reveal the underlying quantum information. Additional qubits can be used as disposable resources to obtain the parity values without revealing (and thus without disturbing) the encoded information itself.

What about no-cloning? It turns out it is possible to take a qubit whose state is unknown and encode that hidden state in a superposition across multiple qubits in a way that does not clone the original information. This process allows you to record what amounts to a single logical qubit of information across three physical qubits, and you can perform parity checks and corrective steps to protect the logical qubit against noise.

Quantum errors consist of more than just bit-flip errors, though, making this simple three-qubit repetition code unsuitable for protecting against all possible quantum errors. True QEC requires something more. That came in the mid-1990s when Peter Shor (then at AT&T Bell Laboratories, in Murray Hill, N.J.) described an elegant scheme to encode one logical qubit into nine physical qubits by embedding a repetition code inside another code. Shors scheme protects against an arbitrary quantum error on any one of the physical qubits.

Since then, the QEC community has developed many improved encoding schemes, which use fewer physical qubits per logical qubit (the most compact use five) or enjoy other performance enhancements. Today, the workhorse of large-scale proposals for error correction in quantum computers is called the surface code, developed in the late 1990s by borrowing exotic mathematics from topology and high-energy physics.

It is convenient to think of a quantum computer as being made up of logical qubits and logical gates that sit atop an underlying foundation of physical devices. These physical devices are subject to noise, which creates physical errors that accumulate over time. Periodically, generalized parity measurements (called syndrome measurements) identify the physical errors, and corrections remove them before they cause damage at the logical level.

A quantum computation with QEC then consists of cycles of gates acting on qubits, syndrome measurements, error inference, and corrections. In terms more familiar to engineers, QEC is a form of feedback stabilization that uses indirect measurements to gain just the information needed to correct errors.

QEC is not foolproof, of course. The three-bit repetition code, for example, fails if more than one bit has been flipped. What's more, the resources and mechanisms that create the encoded quantum states and perform the syndrome measurements are themselves prone to errors. How, then, can a quantum computer perform QEC when all these processes are themselves faulty?

Remarkably, the error-correction cycle can be designed to tolerate errors and faults that occur at every stage, whether in the physical qubits, the physical gates, or even in the very measurements used to infer the existence of errors! Called a fault-tolerant architecture, such a design permits, in principle, error-robust quantum processing even when all the component parts are unreliable.

A long quantum computation will require many cycles of quantum error correction (QEC). Each cycle would consist of gates acting on encoded qubits (performing the computation), followed by syndrome measurements from which errors can be inferred, and corrections. The effectiveness of this QEC feedback loop can be greatly enhanced by including quantum-control techniques (represented by the thick blue outline) to stabilize and optimize each of these processes.

Even in a fault-tolerant architecture, the additional complexity introduces new avenues for failure. The effect of errors is therefore reduced at the logical level only if the underlying physical error rate is not too high. The maximum physical error rate that a specific fault-tolerant architecture can reliably handle is known as its break-even error threshold. If error rates are lower than this threshold, the QEC process tends to suppress errors over the entire cycle. But if error rates exceed the threshold, the added machinery just makes things worse overall.

The theory of fault-tolerant QEC is foundational to every effort to build useful quantum computers because it paves the way to building systems of any size. If QEC is implemented effectively on hardware exceeding certain performance requirements, the effect of errors can be reduced to arbitrarily low levels, enabling the execution of arbitrarily long computations.

At this point, you may be wondering how QEC has evaded the problem of continuous errors, which is fatal for scaling up analog computers. The answer lies in the nature of quantum measurements.

In a typical quantum measurement of a superposition, only a few discrete outcomes are possible, and the physical state changes to match the result that the measurement finds. With the parity-check measurements, this change helps.

Imagine you have a code block of three physical qubits, and one of these qubit states has wandered a little from its ideal state. If you perform a parity measurement, just two results are possible: Most often, the measurement will report the parity state that corresponds to no error, and after the measurement, all three qubits will be in the correct state, whatever it is. Occasionally the measurement will instead indicate the odd parity state, which means an errant qubit is now fully flipped. If so, you can flip that qubit back to restore the desired encoded logical state.

In other words, performing QEC transforms small, continuous errors into infrequent but discrete errors, similar to the errors that arise in digital computers.
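As a toy model of that discretization (a sketch under simplified assumptions, not a real syndrome measurement), a qubit that has drifted by a small angle theta collapses at each check either back to the ideal state or to a full, correctable flip, the latter with probability roughly sin^2(theta/2):

```python
# Toy model of error discretization (simplified; not a real syndrome circuit):
# a small continuous drift of angle theta shows up, after a projective check,
# as a rare discrete flip with probability ~sin^2(theta/2).
import numpy as np

theta = 0.1
p_flip = np.sin(theta / 2) ** 2  # ~0.0025 for theta = 0.1

rng = np.random.default_rng(seed=0)
flips = rng.random(100_000) < p_flip
print(f"full flip reported in {flips.mean():.3%} of checks")  # ~0.250%
```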

Researchers have now demonstrated many of the principles of QEC in the laboratory, from the basics of the repetition code through to complex encodings, logical operations on code words, and repeated cycles of measurement and correction. Current estimates of the break-even threshold for quantum hardware place it at about 1 error in 1,000 operations. This level of performance hasn't yet been achieved across all the constituent parts of a QEC scheme, but researchers are getting ever closer, achieving multiqubit logic with rates of fewer than about 5 errors per 1,000 operations. Even so, passing that critical milestone will be the beginning of the story, not the end.

On a system with a physical error rate just below the threshold, QEC would require enormous redundancy to push the logical rate down very far. It becomes much less challenging with a physical rate further below the threshold. So just crossing the error threshold is not sufficient; we need to beat it by a wide margin. How can that be done?

If we take a step back, we can see that the challenge of dealing with errors in quantum computers is one of stabilizing a dynamic system against external disturbances. Although the mathematical rules differ for the quantum system, this is a familiar problem in the discipline of control engineering. And just as control theory can help engineers build robots capable of righting themselves when they stumble, quantum-control engineering can suggest the best ways to implement abstract QEC codes on real physical hardware. Quantum control can minimize the effects of noise and make QEC practical.

In essence, quantum control involves optimizing how you implement all the physical processes used in QEC, from individual logic operations to the way measurements are performed. For example, in a system based on superconducting qubits, a qubit is flipped by irradiating it with a microwave pulse. One approach uses a simple type of pulse to move the qubit's state from one pole of the Bloch sphere, along the Greenwich meridian, to precisely the other pole. Errors arise if the pulse is distorted by noise. It turns out that a more complicated pulse, one that takes the qubit on a well-chosen meandering route from pole to pole, can result in less error in the qubit's final state under the same noise conditions, even when the new pulse is imperfectly implemented.

One facet of quantum-control engineering involves careful analysis and design of the best pulses for such tasks in a particular imperfect instance of a given system. It is a form of open-loop (measurement-free) control, which complements the closed-loop feedback control used in QEC.

This kind of open-loop control can also change the statistics of the physical-layer errors to better comport with the assumptions of QEC. For example, QEC performance is limited by the worst-case error within a logical block, and individual devices can vary a lot. Reducing that variability is very beneficial. In an experiment our team performed using IBM's publicly accessible machines, we showed that careful pulse optimization reduced the difference between the best-case and worst-case error in a small group of qubits by more than a factor of 10.

Some error processes arise only while carrying out complex algorithms. For instance, crosstalk errors occur on qubits only when their neighbors are being manipulated. Our team has shown that embedding quantum-control techniques into an algorithm can improve its overall success by orders of magnitude. This technique makes QEC protocols much more likely to correctly identify an error in a physical qubit.

For 25 years, QEC researchers have largely focused on mathematical strategies for encoding qubits and efficiently detecting errors in the encoded sets. Only recently have investigators begun to address the thorny question of how best to implement the full QEC feedback loop in real hardware. And while many areas of QEC technology are ripe for improvement, there is also growing awareness in the community that radical new approaches might be possible by marrying QEC and control theory. One way or another, this approach will turn quantum computing into a reality, and you can carve that in stone.

This article appears in the July 2022 print issue as Quantum Error Correction at the Threshold.

Originally posted here:
Quantum Error Correction: Time to Make It Work - IEEE Spectrum

Quantum computing will revolutionize every large industry – CTech

Israeli Team8 venture group officially opened this year's Cyber Week with an event that took place in Tel Aviv on Sunday. The event, which included international guests and cybersecurity professionals, showcased the country and its cybersecurity industry as a Startup Nation powerhouse.

Opening remarks were made by Niv Sultan, star of Apple TV's Tehran, who also moderated the event. She then welcomed Gili Drob-Heinstein, Executive Director at the Blavatnik Interdisciplinary Cyber Research Center (ICRC) at Tel Aviv University, and Nadav Zafrir, Co-founder of Team8 and Managing Partner of Team8 Platform, to the stage.

"I would like to thank the 100 CSOs who came to stay with us," Zafrir said on stage. Guests from around the world had flown into Israel and spent time connecting with one another ahead of the official start of Cyber Week on Monday. Team8 was also celebrating its 8th year as a VC, highlighting the work it has done in the cybersecurity arena.

The stage was then filled with Admiral Mike Rogers and Nir Minerbi, Co-founder and CEO of Classiq, who together discussed "The Quantum Opportunity" in computing. "Classical computers are great, but for some of the most complex challenges humanity is facing, they are not suitable," said Minerbi. "Quantum computing will revolutionize every large industry."

Classiq develops software for quantum algorithms. Founded in 2020, it has raised a total of $51 million and is funded by Team8 among other VC players in the space. Admiral Mike Rogers is the former director of the NSA, the American intelligence agency, and is an Operating Partner at Team8.

"We are in a race," Rogers told the large crowd. "This is a technology believed to have advantages for our daily lives and national security. I told both presidents I worked under why they should invest billions into quantum," he added, citing the ability to look at multiple qubits simultaneously, thus speeding up the ability to process information. According to Rogers, governments have already publicly announced $29 billion of funding to help develop quantum computing.

Final remarks were made by Renee Wynn, former CIO at NASA, who discussed the potential of cyber in space. "Space may be the final frontier, and if we do not do anything else than what we are doing now, it will be chaos 100 miles above your head," she warned. On stage, she spoke to the audience about the threats in space and how satellites could be hijacked for nefarious reasons.

"Cybersecurity and satellites are so important," she concluded. "Let's bring the space teams together with the cybersecurity teams and help save lives."

After the remarks, the stage was transformed to host the evening's entertainment. Israeli-American puppet band Red Band performed a variety of songs and was then joined by Marina Maximilian, an Israeli singer-songwriter and actress, who shared the stage with the colorful puppets.

The event was sponsored by Meitar, Deloitte, LeumiTech, Valley, Palo Alto, FinSec Innovation Lab, and SentinelOne. It marked the beginning of Cyber Week, a three-day conference hosted by Tel Aviv University that will welcome a variety of cybersecurity professionals for workshops, networking opportunities, and panel discussions. It is understood that this year's edition will draw 9,000 attendees and 400 speakers from 80 different countries.

Red Band performing 'Seven Nation Army'. (Photo: James Spiro)

See the original post here:
Quantum computing will revolutionize every large industry - CTech

QC Ware Announces Q2B22 Tokyo To Be Held July 13-14 – HPCwire

PALO ALTO, Calif., June 28, 2022 – QC Ware, a leading quantum software and services company, today announced the inaugural Q2B22 Tokyo Practical Quantum Computing, to be held exclusively in person at The Westin Tokyo in Japan on July 13-14, 2022. Q2B is the world's largest gathering of the quantum computing community, focusing solely on quantum computing applications and driving the discourse on quantum advantage and commercialization. Registration and other information on Q2B22 Tokyo is available at http://q2b.jp.

Q2B22 Tokyo will feature top academics, industry end users, government representatives, and quantum computing vendors from around the world.

"Japan has led the way with ground-breaking research on quantum computing," said Matt Johnson, CEO of QC Ware. "In addition, the ecosystem includes some of Japan's largest enterprises, forward-thinking government organizations, and a thriving venture-backed startup community. I'm excited to be able to connect the Japanese and international quantum computing ecosystems at this unique event."

QC Ware has been operating in Japan since 2019 and recently opened up an office in Tokyo.

Q2B22 Tokyo will be co-hosted by QunaSys, a leading Japanese developer of innovative algorithms aimed at accelerating the applicability of quantum technology in chemistry, and will be sponsored by IBM Quantum.

"Japan's technology ecosystem is actively advancing quantum computing. QunaSys is a key player in boosting technology adoption, driving business, government, and academia collaboration to enable the quantum chemistry ecosystem. We are pleased to work with QC Ware and co-host Q2B22 Tokyo, bringing Q2B to Japan," said Tennin Yan, CEO of QunaSys.

"IBM Quantum has strategically invested in Japan to accelerate an ecosystem of world-class academic, private sector and government partners, including installation of the IBM Quantum System One at the University of Tokyo, and the co-development of the Quantum Innovation Initiative Consortium (QIIC)," said Aparna Prabhakar, Vice President, Partners and Alliances, IBM Quantum. "We are excited to work with QC Ware and QunaSys to bring experts from a wide variety of quantum computing fields to Q2B22 Tokyo."

Q2B22 Tokyo will feature keynotes from top academics and other speakers, along with Japanese and international end-users discussing active quantum initiatives in automotive, materials and chemistry, finance, and more.

In addition to IBM Quantum, Q2B22 Tokyo is sponsored by D-Wave Systems, Keysight Technologies, NVIDIA, Quantinuum Ltd., Quantum Machines, and Strangeworks, Inc., among others.

Q2B has been run by QC Ware since 2017, with the annual flagship event held in Northern California's Silicon Valley. Q2B Silicon Valley is currently scheduled for December 6-8 at the Santa Clara Convention Center.

About QC Ware

QC Ware is a quantum software and services company focused on ensuring enterprises are prepared for the emerging quantum computing disruption. QC Ware specializes in the development of applications for near-term quantum computing hardware with a team composed of some of the industry's foremost experts in quantum computing. Its growing network of customers includes AFRL, Aisin Group, Airbus, BMW Group, Covestro, Equinor, Goldman Sachs, Itaú Unibanco, and Total. QC Ware Forge, the company's flagship quantum computing cloud service, is built for data scientists with no quantum computing background. It provides unique, performant, turnkey quantum computing algorithms. QC Ware is headquartered in Palo Alto, California, and supports its European customers through its subsidiary in Paris and its Asian customers from a Tokyo office. QC Ware also organizes Q2B, the largest annual gathering of the international quantum computing community.

Source: QC Ware

The rest is here:
QC Ware Announces Q2B22 Tokyo To Be Held July 13-14 - HPCwire

IonQ and GE Research Demonstrate High Potential of Quantum Computing for Risk Aggregation – Business Wire

COLLEGE PARK, Md.--(BUSINESS WIRE)--IonQ (NYSE: IONQ), an industry leader in quantum computing, today announced promising early results with its partner, GE Research, to explore the benefits of quantum computing for modeling multi-variable distributions in risk management.

Leveraging a Quantum Circuit Born Machine-based framework on standardized, historical indexes, IonQ and GE Research, the central innovation hub for the General Electric Company (NYSE: GE), were able to effectively train quantum circuits to learn correlations among three and four indexes. The predictions derived from the quantum framework outperformed those of classical modeling approaches in some cases, confirming that quantum copulas can potentially lead to smarter data-driven analysis and decision-making across commercial applications. A blog post further explaining the research methodology and results is available here.

"Together with GE Research, IonQ is pushing the boundaries of what is currently possible to achieve with quantum computing," said Peter Chapman, CEO and President, IonQ. "While classical techniques face inefficiencies when multiple variables have to be modeled together with high precision, our joint effort has identified a new training strategy that may optimize quantum computing results even as systems scale. Tested on our industry-leading IonQ Aria system, we're excited to apply these new methodologies when tackling real-world scenarios that were once deemed too complex to solve."

While classical techniques that form copulas using mathematical approximations are a great way to build multi-variate risk models, they face limitations when scaling. IonQ and GE Research successfully trained quantum copula models with up to four variables on IonQ's trapped-ion systems, using data from four representative stock indexes covering easily accessible and varying market environments.

By studying the historical dependence structure among the returns of the four indexes during this timeframe, the research group trained its model to understand the underlying dynamics. Additionally, the newly presented methodology includes optimization techniques that potentially allow models to scale by mitigating the local-minima and vanishing-gradient problems common in quantum machine learning practice. Such improvements demonstrate a promising way to perform multi-variable analysis faster and more accurately, which GE researchers hope will lead to new and better ways to assess risk in major manufacturing processes such as product design, factory operations, and supply chain management.
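To make the idea concrete, here is a toy Born machine in numpy: a two-qubit parametrized circuit whose Born-rule output distribution is fit by gradient descent to a correlated target distribution. This is only a schematic sketch of the general technique; the circuit layout, the target data, and the training loop are illustrative assumptions, not IonQ's or GE Research's actual framework.

```python
# Toy quantum circuit Born machine (QCBM) -- illustrative sketch only.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def ry(t):
    # Single-qubit Y-rotation (real-valued, which is all this toy needs).
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def born_probs(theta):
    # Circuit: RY on each qubit, a CNOT to create correlations, RY again.
    psi = np.zeros(4); psi[0] = 1.0                    # start in |00>
    psi = np.kron(ry(theta[0]), ry(theta[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(theta[2]), ry(theta[3])) @ psi
    return psi ** 2                                    # Born rule

# A strongly correlated target distribution over outcomes 00, 01, 10, 11
# (a made-up stand-in for joint moves of two stock indexes).
target = np.array([0.4, 0.1, 0.1, 0.4])

def kl(theta):
    # KL divergence from the target to the circuit's distribution.
    p = born_probs(theta) + 1e-12
    return float(np.sum(target * np.log(target / p)))

theta = np.random.default_rng(0).uniform(0, np.pi, 4)
for _ in range(2000):                                  # finite-difference descent
    grad = np.zeros(4)
    for i in range(4):
        e = np.zeros(4); e[i] = 1e-5
        grad[i] = (kl(theta + e) - kl(theta - e)) / 2e-5
    theta -= 0.1 * grad

print(np.round(born_probs(theta), 3))  # approaches [0.4, 0.1, 0.1, 0.4]
```

On real hardware the probabilities would be estimated from measurement shots rather than computed exactly, and the optimization techniques described above target precisely the local minima and vanishing gradients that this naive descent can run into at larger scale.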

"As we have seen from recent global supply chain volatility, the world needs more effective methods and tools to manage risks where conditions can be so highly variable and interconnected to one another," said David Vernooy, a Senior Executive and Digital Technologies Leader at GE Research. "The early results we achieved in the financial use case with IonQ show the high potential of quantum computing to better understand and reduce the risks associated with these types of highly variable scenarios."

Today's results follow IonQ's recent announcement of the company's new IonQ Forte quantum computing system. The system features novel, cutting-edge optics technology that enables increased accuracy and further enhances IonQ's industry-leading system performance. Partnerships with the likes of GE Research and Hyundai Motors illustrate the growing interest in IonQ's systems and feed into the continued success seen in Q1 2022.

About IonQ

IonQ, Inc. is a leader in quantum computing, with a proven track record of innovation and deployment. IonQ's current generation quantum computer, IonQ Forte, is the latest in a line of cutting-edge systems, including IonQ Aria, a system that boasts industry-leading 20 algorithmic qubits. Along with record performance, IonQ has defined what it believes is the best path forward to scale. IonQ is the only company with its quantum systems available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud, as well as through direct API access. IonQ was founded in 2015 by Christopher Monroe and Jungsang Kim based on 25 years of pioneering research. To learn more, visit http://www.ionq.com.

IonQ Forward-Looking Statements

This press release contains certain forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Some of the forward-looking statements can be identified by the use of forward-looking words. Statements that are not historical in nature, including the words "anticipate," "expect," "suggests," "plan," "believe," "intend," "estimates," "targets," "projects," "should," "could," "would," "may," "will," "forecast" and other similar expressions are intended to identify forward-looking statements. These statements include those related to IonQ's ability to further develop and advance its quantum computers and achieve scale; IonQ's ability to optimize quantum computing results even as systems scale; the expected launch of IonQ Forte for access by select developers, partners, and researchers in 2022 with broader customer access expected in 2023; IonQ's market opportunity and anticipated growth; and the commercial benefits to customers of using quantum computing solutions. Forward-looking statements are predictions, projections and other statements about future events that are based on current expectations and assumptions and, as a result, are subject to risks and uncertainties. Many factors could cause actual future events to differ materially from the forward-looking statements in this press release, including but not limited to: market adoption of quantum computing solutions and IonQ's products, services and solutions; the ability of IonQ to protect its intellectual property; changes in the competitive industries in which IonQ operates; changes in laws and regulations affecting IonQ's business; IonQ's ability to implement its business plans, forecasts and other expectations, and identify and realize additional partnerships and opportunities; and the risk of downturns in the market and the technology industry including, but not limited to, as a result of the COVID-19 pandemic. The foregoing list of factors is not exhaustive. You should carefully consider the foregoing factors and the other risks and uncertainties described in the "Risk Factors" section of IonQ's Quarterly Report on Form 10-Q for the quarter ended March 31, 2022 and other documents filed by IonQ from time to time with the Securities and Exchange Commission. These filings identify and address other important risks and uncertainties that could cause actual events and results to differ materially from those contained in the forward-looking statements. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and IonQ assumes no obligation and does not intend to update or revise these forward-looking statements, whether as a result of new information, future events, or otherwise. IonQ does not give any assurance that it will achieve its expectations.

The rest is here:
IonQ and GE Research Demonstrate High Potential of Quantum Computing for Risk Aggregation - Business Wire

The Spooky Quantum Phenomenon You've Never Heard Of – Quanta Magazine

Perhaps the most famously weird feature of quantum mechanics is nonlocality: Measure one particle in an entangled pair whose partner is miles away, and the measurement seems to rip through the intervening space to instantaneously affect its partner. This "spooky action at a distance" (as Albert Einstein called it) has been the main focus of tests of quantum theory.

"Nonlocality is spectacular. I mean, it's like magic," said Adán Cabello, a physicist at the University of Seville in Spain.

But Cabello and others are interested in investigating a lesser-known but equally magical aspect of quantum mechanics: contextuality. Contextuality says that properties of particles, such as their position or polarization, exist only within the context of a measurement. Instead of thinking of particles' properties as having fixed values, consider them more like words in language, whose meanings can change depending on the context: "Time flies like an arrow. Fruit flies like bananas."

Although contextuality has lived in nonlocality's shadow for over 50 years, quantum physicists now consider it more of a hallmark feature of quantum systems than nonlocality is. "A single particle, for instance, is a quantum system in which you cannot even think about nonlocality, since the particle is only in one location," said Bárbara Amaral, a physicist at the University of São Paulo in Brazil. "So [contextuality] is more general in some sense, and I think this is important to really understand the power of quantum systems and to go deeper into why quantum theory is the way it is."

Researchers have also found tantalizing links between contextuality and problems that quantum computers can efficiently solve that ordinary computers cannot; investigating these links could help guide researchers in developing new quantum computing approaches and algorithms.

And with renewed theoretical interest comes a renewed experimental effort to prove that our world is indeed contextual. In February, Cabello, in collaboration with Kihwan Kim at Tsinghua University in Beijing, China, published a paper in which they claimed to have performed the first loophole-free experimental test of contextuality.

The Northern Irish physicist John Stewart Bell is widely credited with showing that quantum systems can be nonlocal. By comparing the outcomes of measurements of two entangled particles, he showed with his eponymous theorem of 1964 that the high degree of correlation between the particles can't possibly be explained in terms of local hidden variables defining each one's separate properties. The information contained in the entangled pair must be shared nonlocally between the particles.

Bell also proved a similar theorem about contextuality. He and, separately, Simon Kochen and Ernst Specker showed that it is impossible for a quantum system to have hidden variables that define the values of all its properties in all possible contexts.

In Kochen and Specker's version of the proof, they considered a single particle with a quantum property called spin, which has both a magnitude and a direction. Measuring the spin's magnitude along any direction always results in one of two outcomes: 1 or 0. The researchers then asked: Is it possible that the particle secretly knows what the result of every possible measurement will be before it is measured? In other words, could they assign a fixed value (a hidden variable) to all outcomes of all possible measurements at once?

Quantum theory says that the magnitudes of the spin along three perpendicular directions must obey the "1-0-1 rule": the outcomes of two of the measurements must be 1 and the other must be 0. Kochen and Specker used this rule to arrive at a contradiction. First, they assumed that each particle had a fixed, intrinsic value for each direction of spin. They then conducted a hypothetical spin measurement along some unique direction, assigning either 0 or 1 to the outcome. They then repeatedly rotated the direction of their hypothetical measurement and measured again, each time either freely assigning a value to the outcome or deducing what the value must be in order to satisfy the 1-0-1 rule together with the directions they had previously considered.

They continued until, in the 117th direction, the contradiction cropped up. While they had previously assigned a value of 0 to the spin along this direction, the 1-0-1 rule was now dictating that the spin must be 1. The outcome of a measurement could not possibly return both 0 and 1. So the physicists concluded that there is no way a particle can have fixed hidden variables that remain the same regardless of context.
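The 1-0-1 rule itself can be verified with a few lines of linear algebra. The sketch below is an illustrative aside (not from the Quanta article), assuming only numpy and the textbook spin-1 matrices: the squared spin components along three perpendicular axes each yield only 0 or 1, commute with one another, and sum to twice the identity, which forces the two-ones-and-one-zero pattern.

```python
# Numerical check of the 1-0-1 rule for a spin-1 particle (illustrative).
import numpy as np

s = 1 / np.sqrt(2)
Sx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Sz = np.diag([1, 0, -1]).astype(complex)

Sx2, Sy2, Sz2 = Sx @ Sx, Sy @ Sy, Sz @ Sz   # squared spin components

# Each squared component has eigenvalues 0 and 1 only.
print(np.round(np.linalg.eigvalsh(Sx2), 10))          # [0. 1. 1.]

# The three commute, so they can all be measured in a single context...
print(np.allclose(Sx2 @ Sy2, Sy2 @ Sx2))              # True

# ...and they sum to 2*I, so the outcomes are always two 1s and one 0.
print(np.allclose(Sx2 + Sy2 + Sz2, 2 * np.eye(3)))    # True
```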

While the proof indicated that quantum theory demands contextuality, there was no way to actually demonstrate this through 117 simultaneous measurements of a single particle. Physicists have since devised more practical, experimentally implementable versions of the original Bell-Kochen-Specker theorem involving multiple entangled particles, where a particular measurement on one particle defines a context for the others.

In 2009, contextuality, a seemingly esoteric aspect of the underlying fabric of reality, got a direct application: One of the simplified versions of the original Bell-Kochen-Specker theorem was shown to be equivalent to a basic quantum computation.

The proof, named Mermin's star after its originator, David Mermin, considered various combinations of contextual measurements that could be made on three entangled quantum bits, or qubits. The logic of how earlier measurements shape the outcomes of later measurements has become the basis for an approach called measurement-based quantum computing. The discovery suggested that contextuality might be key to why quantum computers can solve certain problems faster than classical computers, an advantage that researchers have struggled mightily to understand.
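The flavor of the argument can be reproduced with a short calculation. The sketch below is an illustrative aside under standard textbook conventions (not code from the article): it evaluates the three-qubit GHZ state on four of the measurement contexts appearing in Mermin's star, XXX, XYY, YXY and YYX. Quantum mechanics yields outcomes whose product is -1, whereas any context-independent assignment of ±1 values would force the product to be +1, since each single-qubit observable appears an even number of times across the four contexts.

```python
# Parity check behind Mermin's star on the three-qubit GHZ state (illustrative).
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)         # (|000> + |111>)/sqrt(2)

product = 1
for ops, label in [((X, X, X), "XXX"), ((X, Y, Y), "XYY"),
                   ((Y, X, Y), "YXY"), ((Y, Y, X), "YYX")]:
    val = np.real(ghz.conj() @ kron3(*ops) @ ghz)   # GHZ is an eigenstate
    product *= round(float(val))
    print(label, round(float(val)))      # XXX -> 1, the rest -> -1

print("product of outcomes:", product)   # -1, impossible for fixed values (+1)
```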

Robert Raussendorf, a physicist at the University of British Columbia and a pioneer of measurement-based quantum computing, showed that contextuality is necessary for a quantum computer to beat a classical computer at some tasks, but he doesn't think it's the whole story. "Whether contextuality powers quantum computers is probably not exactly the right question to ask," he said. "But we need to get there question by question. So we ask a question that we understand how to ask; we get an answer. We ask the next question."

Some researchers have suggested loopholes around Bell, Kochen and Specker's conclusion that the world is contextual. They argue that context-independent hidden variables haven't been conclusively ruled out.

In February, Cabello and Kim announced that they had closed every plausible loophole by performing a loophole-free Bell-Kochen-Specker experiment.

The experiment entailed measuring the spins of two entangled trapped ions in various directions, where the choice of measurement on one ion defined the context for the other ion. The physicists showed that, although making a measurement on one ion does not physically affect the other, it changes the context and hence the outcome of the second ion's measurement.

Skeptics would ask: How can you be certain that the context created by the first measurement is what changed the second measurement outcome, rather than other conditions that might vary from experiment to experiment? Cabello and Kim closed this "sharpness" loophole by performing thousands of sets of measurements and showing that the outcomes don't change if the context doesn't. After ruling out this and other loopholes, they concluded that the only reasonable explanation for their results is contextuality.

Cabello and others think that these experiments could be used in the future to test the level of contextuality and hence, the power of quantum computing devices.

"If you want to really understand how the world is working," said Cabello, "you really need to go into the detail of quantum contextuality."

Read more:
The Spooky Quantum Phenomenon You've Never Heard Of - Quanta Magazine