Quantum Error Correction: Time to Make It Work – IEEE Spectrum

Dates chiseled into an ancient tombstone have more in common with the data in your phone or laptop than you may realize. They both involve conventional, classical information, carried by hardware that is relatively immune to errors. The situation inside a quantum computer is far different: The information itself has its own idiosyncratic properties, and compared with standard digital microelectronics, state-of-the-art quantum-computer hardware is more than a billion trillion times as likely to suffer a fault. This tremendous susceptibility to errors is the single biggest problem holding back quantum computing from realizing its great promise.

Fortunately, an approach known as quantum error correction (QEC) can remedy this problem, at least in principle. A mature body of theory built up over the past quarter century now provides a solid theoretical foundation, and experimentalists have demonstrated dozens of proof-of-principle examples of QEC. But these experiments still have not reached the level of quality and sophistication needed to reduce the overall error rate in a system.

The two of us, along with many other researchers involved in quantum computing, are trying to move definitively beyond these preliminary demos of QEC so that it can be employed to build useful, large-scale quantum computers. But before describing how we think such error correction can be made practical, we need to first review what makes a quantum computer tick.

"Information is physical." This was the mantra of the distinguished IBM researcher Rolf Landauer. Abstract though it may seem, information always involves a physical representation, and the physics matters.

Conventional digital information consists of bits, zeros and ones, which can be represented by classical states of matter, that is, states well described by classical physics. Quantum information, by contrast, involves qubits (quantum bits) whose properties follow the peculiar rules of quantum mechanics.

A classical bit has only two possible values: 0 or 1. A qubit, however, can occupy a superposition of these two information states, taking on characteristics of both. Polarized light provides intuitive examples of superpositions. You could use horizontally polarized light to represent 0 and vertically polarized light to represent 1, but light can also be polarized at an angle and then has both horizontal and vertical components at once. Indeed, one way to represent a qubit is by the polarization of a single photon of light.

These ideas generalize to groups of n bits or qubits: n bits can represent any one of 2^n possible values at any moment, while n qubits can include components corresponding to all 2^n classical states simultaneously in superposition. These superpositions provide a vast range of possible states for a quantum computer to work with, albeit with limitations on how they can be manipulated and accessed. Superposition of information is a central resource used in quantum processing and, along with other quantum rules, enables powerful new ways to compute.
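
To make that scaling concrete, here is a minimal sketch in Python (our illustration of the arithmetic, not anything specific to real hardware): the state of n qubits is a list of 2^n complex amplitudes, one per classical bit string, and putting each qubit into an equal superposition populates every entry at once.

```python
import numpy as np

n = 3
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition of 0 and 1

state = np.array([1.0])
for _ in range(n):
    state = np.kron(state, plus)           # n qubits -> 2**n amplitudes

print(len(state))                          # 8 = 2**3 components
for bits in range(2**n):
    print(f"|{bits:0{n}b}>: amplitude {state[bits]:.3f}")
```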

Researchers are experimenting with many different physical systems to hold and process quantum information, including light, trapped atoms and ions, and solid-state devices based on semiconductors or superconductors. For the purpose of realizing qubits, all these systems follow the same underlying mathematical rules of quantum physics, and all of them are highly sensitive to environmental fluctuations that introduce errors. By contrast, the transistors that handle classical information in modern digital electronics can reliably perform a billion operations per second for decades with a vanishingly small chance of a hardware fault.

Of particular concern is the fact that qubit states can roam over a continuous range of superpositions. Polarized light again provides a good analogy: The angle of linear polarization can take any value from 0 to 180 degrees.

Pictorially, a qubit's state can be thought of as an arrow pointing to a location on the surface of a sphere. On this so-called Bloch sphere, the north and south poles represent the binary states 0 and 1, respectively, and all other locations on its surface represent possible quantum superpositions of those two states. Noise causes the Bloch arrow to drift around the sphere over time. A conventional computer represents 0 and 1 with physical quantities, such as capacitor voltages, that can be locked near the correct values to suppress this kind of continuous wandering and unwanted bit flips. There is no comparable way to lock the qubit's arrow to its correct location on the Bloch sphere.
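
The arrow picture is easy to compute. The short Python sketch below (a textbook calculation, included purely as an illustration) maps a qubit state to its Bloch-sphere coordinates; a state that has drifted slightly from |0> yields an arrow slightly off the north pole.

```python
import numpy as np

# Map a qubit state a|0> + b|1> to Bloch coordinates (x, y, z) = (<X>, <Y>, <Z>).
# The poles z = +1 and z = -1 correspond to the binary states 0 and 1.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(psi):
    psi = psi / np.linalg.norm(psi)
    return tuple(float(np.real(np.conj(psi) @ (P @ psi))) for P in (X, Y, Z))

print(bloch_vector(np.array([1, 0])))      # north pole: (0.0, 0.0, 1.0)
print(bloch_vector(np.array([1, 1])))      # on the equator: (1.0, 0.0, 0.0)
print(bloch_vector(np.array([1, 0.1])))    # slight drift away from |0>
```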

Early in the 1990s, Landauer and others argued that this difficulty presented a fundamental obstacle to building useful quantum computers. The issue is known as scalability: Although a simple quantum processor performing a few operations on a handful of qubits might be possible, could you scale up the technology to systems that could run lengthy computations on large arrays of qubits? A type of classical computation called analog computing also uses continuous quantities and is suitable for some tasks, but the problem of continuous errors prevents the complexity of such systems from being scaled up. Continuous errors with qubits seemed to doom quantum computers to the same fate.

We now know better. Theoreticians have successfully adapted the theory of error correction for classical digital data to quantum settings. QEC makes scalable quantum processing possible in a way that is impossible for analog computers. To get a sense of how it works, it's worthwhile to review how error correction is performed in classical settings.

Simple schemes can deal with errors in classical information. For instance, in the 19th century, ships routinely carried clocks for determining the ship's longitude during voyages. A good clock that could keep track of the time in Greenwich, in combination with the sun's position in the sky, provided the necessary data. A mistimed clock could lead to dangerous navigational errors, though, so ships often carried at least three of them. Two clocks reading different times could detect when one was at fault, but three were needed to identify which timepiece was faulty and correct it through a majority vote.

The use of multiple clocks is an example of a repetition code: Information is redundantly encoded in multiple physical devices such that a disturbance in one can be identified and corrected.

As you might expect, quantum mechanics adds some major complications when dealing with errors. Two problems in particular might seem to dash any hopes of using a quantum repetition code. The first problem is that measurements fundamentally disturb quantum systems. So if you encoded information on three qubits, for instance, observing them directly to check for errors would ruin them. Like Schrödinger's cat when its box is opened, their quantum states would be irrevocably changed, spoiling the very quantum features your computer was intended to exploit.

The second issue is a fundamental result in quantum mechanics called the no-cloning theorem, which tells us it is impossible to make a perfect copy of an unknown quantum state. If you know the exact superposition state of your qubit, there is no problem producing any number of other qubits in the same state. But once a computation is running and you no longer know what state a qubit has evolved to, you cannot manufacture faithful copies of that qubit except by duplicating the entire process up to that point.

Fortunately, you can sidestep both of these obstacles. We'll first describe how to evade the measurement problem using the example of a classical three-bit repetition code. You don't actually need to know the state of every individual code bit to identify which one, if any, has flipped. Instead, you ask two questions: "Are bits 1 and 2 the same?" and "Are bits 2 and 3 the same?" These are called parity-check questions because two identical bits are said to have even parity, and two unequal bits have odd parity.

The two answers to those questions identify which single bit has flipped, and you can then counterflip that bit to correct the error. You can do all this without ever determining what value each code bit holds. A similar strategy works to correct errors in a quantum system.
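
The classical version of that procedure fits in a few lines of code. In the illustrative Python sketch below, the two parity checks produce a two-bit syndrome that locates any single flipped bit, which is then counterflipped, all without reading out the encoded value. Note that a double flip defeats this code, a limitation discussed later.

```python
# Two parity checks on a three-bit repetition code: the syndrome pinpoints
# any single flipped bit without revealing the encoded value itself.
def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])   # 0 means even parity

CORRECTION = {
    (0, 0): None,   # no error
    (1, 0): 0,      # bit 1 differs from bits 2 and 3
    (1, 1): 1,      # bit 2 differs from bits 1 and 3
    (0, 1): 2,      # bit 3 differs from bits 1 and 2
}

def correct(bits):
    flipped = CORRECTION[syndrome(bits)]
    if flipped is not None:
        bits[flipped] ^= 1                          # counterflip the culprit
    return bits

print(correct([0, 1, 0]))   # middle bit flipped -> restored to [0, 0, 0]
print(correct([1, 1, 0]))   # third bit flipped  -> restored to [1, 1, 1]
```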

Learning the values of the parity checks still requires quantum measurement, but importantly, it does not reveal the underlying quantum information. Additional qubits can be used as disposable resources to obtain the parity values without revealing (and thus without disturbing) the encoded information itself.

What about no-cloning? It turns out it is possible to take a qubit whose state is unknown and encode that hidden state in a superposition across multiple qubits in a way that does not clone the original information. This process allows you to record what amounts to a single logical qubit of information across three physical qubits, and you can perform parity checks and corrective steps to protect the logical qubit against noise.
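
A small simulation makes the distinction from cloning concrete. The Python sketch below uses the standard three-qubit bit-flip encoding (our assumption about the kind of code meant here): two controlled-NOT gates spread an unknown amplitude pair into a|000> + b|111>, and at no point does any single qubit carry an independent copy of the original state a|0> + b|1>.

```python
import numpy as np

def cnot(n, control, target):
    """Matrix of a CNOT on n qubits (qubit 0 is the leftmost bit)."""
    dim = 2**n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = int("".join(map(str, bits)), 2)
        U[j, i] = 1
    return U

a, b = 0.6, 0.8                                    # an arbitrary unknown state
psi = np.kron([a, b], np.kron([1, 0], [1, 0]))     # |psi>|0>|0>
encoded = cnot(3, 0, 2) @ cnot(3, 0, 1) @ psi      # spread, don't copy

print(encoded[0b000], encoded[0b111])              # 0.6 and 0.8: a|000> + b|111>
```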

Quantum errors consist of more than just bit-flip errors, though, making this simple three-qubit repetition code unsuitable for protecting against all possible quantum errors. True QEC requires something more. That came in the mid-1990s when Peter Shor (then at AT&T Bell Laboratories, in Murray Hill, N.J.) described an elegant scheme to encode one logical qubit into nine physical qubits by embedding a repetition code inside another code. Shor's scheme protects against an arbitrary quantum error on any one of the physical qubits.

Since then, the QEC community has developed many improved encoding schemes, which use fewer physical qubits per logical qubit (the most compact use five) or enjoy other performance enhancements. Today, the workhorse of large-scale proposals for error correction in quantum computers is called the surface code, developed in the late 1990s by borrowing exotic mathematics from topology and high-energy physics.

It is convenient to think of a quantum computer as being made up of logical qubits and logical gates that sit atop an underlying foundation of physical devices. These physical devices are subject to noise, which creates physical errors that accumulate over time. Periodically, generalized parity measurements (called syndrome measurements) identify the physical errors, and corrections remove them before they cause damage at the logical level.

A quantum computation with QEC then consists of cycles of gates acting on qubits, syndrome measurements, error inference, and corrections. In terms more familiar to engineers, QEC is a form of feedback stabilization that uses indirect measurements to gain just the information needed to correct errors.
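
A toy classical simulation conveys this feedback structure. The sketch below (a classical stand-in, not a quantum simulation) repeats the cycle of noise, parity measurement, inference, and correction on a three-bit repetition code; the protected value survives ten noisy cycles far more reliably than an unprotected bit would.

```python
import random

# Each cycle: noise may flip bits, a syndrome is measured, the error is
# inferred, and a correction is applied. Real QEC replaces each step with
# quantum operations, but the feedback loop has the same shape.
def noisy_cycle(bits, p):
    for i in range(3):
        if random.random() < p:          # physical error
            bits[i] ^= 1
    s = (bits[0] ^ bits[1], bits[1] ^ bits[2])                  # syndrome
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[s]   # inference
    if flip is not None:
        bits[flip] ^= 1                  # correction
    return bits

random.seed(0)
failures = 0
for _ in range(100_000):
    bits = [0, 0, 0]
    for _ in range(10):                  # ten cycles of correction
        bits = noisy_cycle(bits, p=0.01)
    failures += bits != [0, 0, 0]
print(failures / 100_000)                # ~0.003, versus ~0.10 unprotected
```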

QEC is not foolproof, of course. The three-bit repetition code, for example, fails if more than one bit has been flipped. What's more, the resources and mechanisms that create the encoded quantum states and perform the syndrome measurements are themselves prone to errors. How, then, can a quantum computer perform QEC when all these processes are themselves faulty?

Remarkably, the error-correction cycle can be designed to tolerate errors and faults that occur at every stage, whether in the physical qubits, the physical gates, or even in the very measurements used to infer the existence of errors! Called a fault-tolerant architecture, such a design permits, in principle, error-robust quantum processing even when all the component parts are unreliable.

A long quantum computation will require many cycles of quantum error correction (QEC). Each cycle would consist of gates acting on encoded qubits (performing the computation), followed by syndrome measurements from which errors can be inferred, and corrections. The effectiveness of this QEC feedback loop can be greatly enhanced by including quantum-control techniques to stabilize and optimize each of these processes.

Even in a fault-tolerant architecture, the additional complexity introduces new avenues for failure. The effect of errors is therefore reduced at the logical level only if the underlying physical error rate is not too high. The maximum physical error rate that a specific fault-tolerant architecture can reliably handle is known as its break-even error threshold. If error rates are lower than this threshold, the QEC process tends to suppress errors over the entire cycle. But if error rates exceed the threshold, the added machinery just makes things worse overall.

The theory of fault-tolerant QEC is foundational to every effort to build useful quantum computers because it paves the way to building systems of any size. If QEC is implemented effectively on hardware exceeding certain performance requirements, the effect of errors can be reduced to arbitrarily low levels, enabling the execution of arbitrarily long computations.
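
The simplest code already shows the threshold effect numerically. For a distance-d repetition code that fails when a majority of its bits flip, the logical error rate has a closed form; the sketch below evaluates it. This toy code's threshold is 50 percent, vastly higher than the roughly 0.1 percent figure quoted later for realistic architectures, but the qualitative behavior is the same: below threshold, growing d suppresses errors, and above it, redundancy backfires.

```python
from math import comb

# A distance-d repetition code under independent bit flips fails when more
# than half the bits flip; summing the binomial tail gives the logical rate.
def logical_error(p, d):
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

for p in (0.01, 0.1, 0.6):
    print(f"p = {p}:", [round(logical_error(p, d), 6) for d in (3, 5, 7)])
# p = 0.01 and 0.1: logical rate falls as d grows (below the toy threshold)
# p = 0.6: logical rate rises as d grows (above it, redundancy hurts)
```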

At this point, you may be wondering how QEC has evaded the problem of continuous errors, which is fatal for scaling up analog computers. The answer lies in the nature of quantum measurements.

In a typical quantum measurement of a superposition, only a few discrete outcomes are possible, and the physical state changes to match the result that the measurement finds. With the parity-check measurements, this change helps.

Imagine you have a code block of three physical qubits, and one of these qubit states has wandered a little from its ideal state. If you perform a parity measurement, just two results are possible: Most often, the measurement will report the parity state that corresponds to no error, and after the measurement, all three qubits will be in the correct state, whatever it is. Occasionally the measurement will instead indicate the odd parity state, which means an errant qubit is now fully flipped. If so, you can flip that qubit back to restore the desired encoded logical state.

In other words, performing QEC transforms small, continuous errors into infrequent but discrete errors, similar to the errors that arise in digital computers.
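
The NumPy sketch below illustrates that discretization (again with the three-qubit bit-flip code, purely as an illustration): a small continuous rotation on one qubit of an encoded state is converted by a single parity measurement into "no error", with the encoded amplitudes restored exactly, in most runs, or into a rare, fully flipped qubit that a correction can undo.

```python
import numpy as np

Z = np.diag([1, -1]).astype(complex)
I2 = np.eye(2, dtype=complex)

psi = np.zeros(8, dtype=complex)
psi[0b000], psi[0b111] = 0.6, 0.8             # encoded a|000> + b|111>

eps = 0.1                                     # small continuous drift
Rx = np.array([[np.cos(eps / 2), -1j * np.sin(eps / 2)],
               [-1j * np.sin(eps / 2), np.cos(eps / 2)]])
psi = np.kron(Rx, np.kron(I2, I2)) @ psi      # drift on qubit 0 only

ZZ = np.kron(Z, np.kron(Z, I2))               # parity of qubits 0 and 1
P_even = (np.eye(8) + ZZ) / 2                 # projector for "no error"
p_even = np.linalg.norm(P_even @ psi) ** 2
print(p_even)                                 # cos^2(eps/2), about 0.9975

collapsed = P_even @ psi / np.linalg.norm(P_even @ psi)
print(collapsed[0b000], collapsed[0b111])     # back to exactly 0.6 and 0.8
# The rare odd outcome (probability sin^2(eps/2)) leaves a|100> + b|011>,
# i.e., qubit 0 fully flipped, which a discrete correction can undo.
```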

Researchers have now demonstrated many of the principles of QEC in the laboratory, from the basics of the repetition code through to complex encodings, logical operations on code words, and repeated cycles of measurement and correction. Current estimates of the break-even threshold for quantum hardware place it at about 1 error in 1,000 operations. This level of performance hasn't yet been achieved across all the constituent parts of a QEC scheme, but researchers are getting ever closer, achieving multiqubit logic with rates of fewer than about 5 errors per 1,000 operations. Even so, passing that critical milestone will be the beginning of the story, not the end.

On a system with a physical error rate just below the threshold, QEC would require enormous redundancy to push the logical rate down very far. It becomes much less challenging with a physical rate further below the threshold. So just crossing the error threshold is not sufficient; we need to beat it by a wide margin. How can that be done?

If we take a step back, we can see that the challenge of dealing with errors in quantum computers is one of stabilizing a dynamic system against external disturbances. Although the mathematical rules differ for the quantum system, this is a familiar problem in the discipline of control engineering. And just as control theory can help engineers build robots capable of righting themselves when they stumble, quantum-control engineering can suggest the best ways to implement abstract QEC codes on real physical hardware. Quantum control can minimize the effects of noise and make QEC practical.

In essence, quantum control involves optimizing how you implement all the physical processes used in QEC, from individual logic operations to the way measurements are performed. For example, in a system based on superconducting qubits, a qubit is flipped by irradiating it with a microwave pulse. One approach uses a simple type of pulse to move the qubit's state from one pole of the Bloch sphere, along the Greenwich meridian, to precisely the other pole. Errors arise if the pulse is distorted by noise. It turns out that a more complicated pulse, one that takes the qubit on a well-chosen meandering route from pole to pole, can result in less error in the qubit's final state under the same noise conditions, even when the new pulse is imperfectly implemented.
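
The payoff of a meandering route can be checked numerically. The sketch below uses the well-known BB1 composite pulse as a stand-in for such a route (the specific pulse is our assumption; the article does not name one): when every rotation suffers the same 5 percent amplitude error, the naive pi pulse lands measurably off target, while the longer compensating sequence lands orders of magnitude closer.

```python
import numpy as np

# BB1 composite pulse (an illustrative choice, not the authors' method):
# the extra rotations cancel a systematic over-rotation error.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def rot(theta, phi):
    """Rotation by theta about an axis at angle phi in the x-y plane."""
    axis = np.cos(phi) * X + np.sin(phi) * Y
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

def infidelity(U, target=np.array([0, 1], dtype=complex)):
    return 1 - abs(target @ (U @ np.array([1, 0], dtype=complex))) ** 2

eps = 0.05                                    # 5% amplitude error everywhere
naive = rot(np.pi * (1 + eps), 0)             # plain pi pulse, pole to pole

phi = np.arccos(-1 / 4)                       # BB1 phase for a pi rotation
bb1 = (rot(np.pi * (1 + eps), phi)
       @ rot(2 * np.pi * (1 + eps), 3 * phi)
       @ rot(np.pi * (1 + eps), phi)
       @ rot(np.pi * (1 + eps), 0))

print(f"naive pulse infidelity: {infidelity(naive):.2e}")   # ~6e-3
print(f"BB1 sequence infidelity: {infidelity(bb1):.2e}")    # far smaller
```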

One facet of quantum-control engineering involves careful analysis and design of the best pulses for such tasks in a particular imperfect instance of a given system. It is a form of open-loop (measurement-free) control, which complements the closed-loop feedback control used in QEC.

This kind of open-loop control can also change the statistics of the physical-layer errors to better comport with the assumptions of QEC. For example, QEC performance is limited by the worst-case error within a logical block, and individual devices can vary a lot. Reducing that variability is very beneficial. In an experiment our team performed using IBM's publicly accessible machines, we showed that careful pulse optimization reduced the difference between the best-case and worst-case error in a small group of qubits by more than a factor of 10.

Some error processes arise only while carrying out complex algorithms. For instance, crosstalk errors occur on qubits only when their neighbors are being manipulated. Our team has shown that embedding quantum-control techniques into an algorithm can improve its overall success by orders of magnitude. This technique makes QEC protocols much more likely to correctly identify an error in a physical qubit.

For 25 years, QEC researchers have largely focused on mathematical strategies for encoding qubits and efficiently detecting errors in the encoded sets. Only recently have investigators begun to address the thorny question of how best to implement the full QEC feedback loop in real hardware. And while many areas of QEC technology are ripe for improvement, there is also growing awareness in the community that radical new approaches might be possible by marrying QEC and control theory. One way or another, this approach will turn quantum computing into a reality, and you can carve that in stone.

This article appears in the July 2022 print issue as "Quantum Error Correction at the Threshold."

Quantum computing will revolutionize every large industry – CTech

Israeli Team8 venture group officially opened this year's Cyber Week with an event that took place in Tel Aviv on Sunday. The event, which included international guests and cybersecurity professionals, showcased the country and its cyber industry as a powerhouse of the "Startup Nation."

Opening remarks were made by Niv Sultan, star of Apple TV's "Tehran," who also moderated the event. She then welcomed Gili Drob-Heinstein, Executive Director at the Blavatnik Interdisciplinary Cyber Research Center (ICRC) at Tel Aviv University, and Nadav Zafrir, Co-founder of Team8 and Managing Partner of Team8 Platform, to the stage.

"I would like to thank the 100 CSOs who came to stay with us," Zafrir said on stage. Guests from around the world had flown into Israel and spent time connecting with one another ahead of the official start of Cyber Week on Monday. Team8 was also celebrating its 8th year as a VC, highlighting the work it has done in the cybersecurity arena.

The stage was then filled with Admiral Mike Rogers and Nir Minerbi, Co-founder and CEO of Classiq, who together discussed "The Quantum Opportunity" in computing. "Classical computers are great, but for some of the most complex challenges humanity is facing, they are not suitable," said Minerbi. "Quantum computing will revolutionize every large industry."

Classiq develops software for quantum algorithms. Founded in 2020, it has raised a total of $51 million and is funded by Team8 among other VC players in the space. Admiral Mike Rogers is the former director of the NSA, the American intelligence agency, and is an operating partner at Team8.

"We are in a race," Rogers told the large crowd. "This is a technology believed to have advantages for our daily lives and national security. I told both presidents I worked under why they should invest billions into quantum," citing the ability to look at multiple qubits simultaneously, thus speeding up the ability to process information. According to Rogers, governments have already publicly announced $29 billion of funding to help develop quantum computing.

Final remarks were made by Renee Wynn, former CIO at NASA, who discussed the potential of cyber in space. "Space may be the final frontier, and if we do not do anything else than what we are doing now, it will be chaos 100 miles above your head," she warned. On stage, she spoke to the audience about the threats in space and how satellites could be hijacked for nefarious reasons.

"Cybersecurity and satellites are so important," she concluded. "Let's bring the space teams together with the cybersecurity teams and help save lives."

After the remarks, the stage was then transformed to host the evening's entertainment. Israeli-American puppet band Red Band performed a variety of songs and was then joined by Marina Maximilian, an Israeli singer-songwriter and actress, who shared the stage with the colorful puppets.

The event was sponsored by Meitar, Deloitte, LeumiTech, Valley, Palo Alto, FinSec Innovation Lab, and SentinelOne. It marked the beginning of Cyber Week, a three-day conference hosted by Tel Aviv University that will welcome a variety of cybersecurity professionals for workshops, networking opportunities, and panel discussions. It is understood that this year will have 9,000 attendees, 400 speakers, and host people from 80 different countries.

Red Band performing 'Seven Nation Army'.

(Photo: James Spiro)


QC Ware Announces Q2B22 Tokyo To Be Held July 13-14 – HPCwire

PALO ALTO, Calif., June 28, 2022 – QC Ware, a leading quantum software and services company, today announced the inaugural Q2B22 Tokyo Practical Quantum Computing, to be held exclusively in person at The Westin Tokyo in Japan on July 13-14, 2022. Q2B is the world's largest gathering of the quantum computing community, focusing solely on quantum computing applications and driving the discourse on quantum advantage and commercialization. Registration and other information on Q2B22 Tokyo is available at http://q2b.jp.

Q2B22 Tokyo will feature top academics, industry end users, government representatives, and quantum computing vendors from around the world.

"Japan has led the way with ground-breaking research on quantum computing," said Matt Johnson, CEO of QC Ware. "In addition, the ecosystem includes some of Japan's largest enterprises, forward-thinking government organizations, and a thriving venture-backed startup community. I'm excited to be able to connect the Japanese and international quantum computing ecosystems at this unique event."

QC Ware has been operating in Japan since 2019 and recently opened up an office in Tokyo.

Q2B22 Tokyo will be co-hosted by QunaSys, a leading Japanese developer of innovative algorithms focused on accelerating the applicability of quantum technology in chemistry, and sponsored by IBM Quantum.

"Japan's technology ecosystem is actively advancing quantum computing. QunaSys is a key player in boosting technology adoption, driving business, government, and academia collaboration to enable the quantum chemistry ecosystem. We are pleased to work with QC Ware and co-host Q2B22 Tokyo, bringing Q2B to Japan," said Tennin Yan, CEO of QunaSys.

"IBM Quantum has strategically invested in Japan to accelerate an ecosystem of world-class academic, private sector and government partners, including installation of the IBM Quantum System One at the University of Tokyo, and the co-development of the Quantum Innovation Initiative Consortium (QIIC)," said Aparna Prabhakar, Vice President, Partners and Alliances, IBM Quantum. "We are excited to work with QC Ware and QunaSys to bring experts from a wide variety of quantum computing fields to Q2B22 Tokyo."

Q2B22 Tokyo will feature keynotes from top academics such as:

Other keynotes include:

Japanese and international end-users discussing active quantum initiatives, such as:

Automotive:

Materials and Chemistry:

Finance and more:

In addition to IBM Quantum, Q2B22 Tokyo is sponsored by D-Wave Systems, Keysight Technologies, NVIDIA, Quantinuum Ltd., Quantum Machines, and Strangeworks, Inc. Other sponsors include:

Q2B has been run by QC Ware since 2017, with the annual flagship event held in Northern California's Silicon Valley. Q2B Silicon Valley is currently scheduled for December 6-8 at the Santa Clara Convention Center.

About QC Ware

QC Ware is a quantum software and services company focused on ensuring enterprises are prepared for the emerging quantum computing disruption. QC Ware specializes in the development of applications for near-term quantum computing hardware with a team composed of some of the industry's foremost experts in quantum computing. Its growing network of customers includes AFRL, Aisin Group, Airbus, BMW Group, Covestro, Equinor, Goldman Sachs, Itau Unibanco, and Total. QC Ware Forge, the company's flagship quantum computing cloud service, is built for data scientists with no quantum computing background. It provides unique, performant, turnkey quantum computing algorithms. QC Ware is headquartered in Palo Alto, California, and supports its European customers through its subsidiary in Paris and its Asian customers from a Tokyo office. QC Ware also organizes Q2B, the largest annual gathering of the international quantum computing community.

Source: QC Ware


IonQ and GE Research Demonstrate High Potential of Quantum Computing for Risk Aggregation – Business Wire

COLLEGE PARK, Md.--(BUSINESS WIRE)--IonQ (NYSE: IONQ), an industry leader in quantum computing, today announced promising early results with its partner, GE Research, to explore the benefits of quantum computing for modeling multi-variable distributions in risk management.

Leveraging a Quantum Circuit Born Machine-based framework on standardized, historical indexes, IonQ and GE Research, the central innovation hub for the General Electric Company (NYSE: GE), were able to effectively train quantum circuits to learn correlations among three and four indexes. The prediction derived from the quantum framework outperformed those of classical modeling approaches in some cases, confirming that quantum copulas can potentially lead to smarter data-driven analysis and decision-making across commercial applications. A blog post further explaining the research methodology and results is available here.

"Together with GE Research, IonQ is pushing the boundaries of what is currently possible to achieve with quantum computing," said Peter Chapman, CEO and President, IonQ. "While classical techniques face inefficiencies when multiple variables have to be modeled together with high precision, our joint effort has identified a new training strategy that may optimize quantum computing results even as systems scale. Tested on our industry-leading IonQ Aria system, we're excited to apply these new methodologies when tackling real world scenarios that were once deemed too complex to solve."

While classical techniques to form copulas using mathematical approximations are a great way to build multi-variate risk models, they face limitations when scaling. IonQ and GE Research successfully trained quantum copula models with up to four variables on IonQ's trapped-ion systems by using data from four representative stock indexes with easily accessible and varying market environments.

By studying the historical dependence structure among the returns of the four indexes during this timeframe, the research group trained its model to understand the underlying dynamics. Additionally, the newly presented methodology includes optimization techniques that potentially allow models to scale by mitigating local minima and vanishing gradient problems common in quantum machine learning practices. Such improvements demonstrate a promising way to perform multi-variable analysis faster and more accurately, which GE researchers hope will lead to new and better ways to assess risk in major manufacturing processes such as product design, factory operations, and supply chain management.

"As we have seen from recent global supply chain volatility, the world needs more effective methods and tools to manage risks where conditions can be so highly variable and interconnected to one another," said David Vernooy, a Senior Executive and Digital Technologies Leader at GE Research. "The early results we achieved in the financial use case with IonQ show the high potential of quantum computing to better understand and reduce the risks associated with these types of highly variable scenarios."

Today's results follow IonQ's recent announcement of the company's new IonQ Forte quantum computing system. The system features novel, cutting-edge optics technology that enables increased accuracy and further enhances IonQ's industry-leading system performance. Partnerships with the likes of GE Research and Hyundai Motors illustrate the growing interest in our industry-leading systems and feed into the continued success seen in Q1 2022.

About IonQ

IonQ, Inc. is a leader in quantum computing, with a proven track record of innovation and deployment. IonQ's current generation quantum computer, IonQ Forte, is the latest in a line of cutting-edge systems, including IonQ Aria, a system that boasts industry-leading 20 algorithmic qubits. Along with record performance, IonQ has defined what it believes is the best path forward to scale. IonQ is the only company with its quantum systems available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud, as well as through direct API access. IonQ was founded in 2015 by Christopher Monroe and Jungsang Kim based on 25 years of pioneering research. To learn more, visit http://www.ionq.com.

IonQ Forward-Looking Statements

This press release contains certain forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Some of the forward-looking statements can be identified by the use of forward-looking words. Statements that are not historical in nature, including the words "anticipate," "expect," "suggests," "plan," "believe," "intend," "estimates," "targets," "projects," "should," "could," "would," "may," "will," "forecast" and other similar expressions are intended to identify forward-looking statements. These statements include those related to IonQ's ability to further develop and advance its quantum computers and achieve scale; IonQ's ability to optimize quantum computing results even as systems scale; the expected launch of IonQ Forte for access by select developers, partners, and researchers in 2022 with broader customer access expected in 2023; IonQ's market opportunity and anticipated growth; and the commercial benefits to customers of using quantum computing solutions. Forward-looking statements are predictions, projections and other statements about future events that are based on current expectations and assumptions and, as a result, are subject to risks and uncertainties. Many factors could cause actual future events to differ materially from the forward-looking statements in this press release, including but not limited to: market adoption of quantum computing solutions and IonQ's products, services and solutions; the ability of IonQ to protect its intellectual property; changes in the competitive industries in which IonQ operates; changes in laws and regulations affecting IonQ's business; IonQ's ability to implement its business plans, forecasts and other expectations, and identify and realize additional partnerships and opportunities; and the risk of downturns in the market and the technology industry including, but not limited to, as a result of the COVID-19 pandemic. The foregoing list of factors is not exhaustive. You should carefully consider the foregoing factors and the other risks and uncertainties described in the "Risk Factors" section of IonQ's Quarterly Report on Form 10-Q for the quarter ended March 31, 2022 and other documents filed by IonQ from time to time with the Securities and Exchange Commission. These filings identify and address other important risks and uncertainties that could cause actual events and results to differ materially from those contained in the forward-looking statements. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and IonQ assumes no obligation and does not intend to update or revise these forward-looking statements, whether as a result of new information, future events, or otherwise. IonQ does not give any assurance that it will achieve its expectations.


The Spooky Quantum Phenomenon You’ve Never Heard Of – Quanta Magazine

Perhaps the most famously weird feature of quantum mechanics is nonlocality: Measure one particle in an entangled pair whose partner is miles away, and the measurement seems to rip through the intervening space to instantaneously affect its partner. This "spooky action at a distance" (as Albert Einstein called it) has been the main focus of tests of quantum theory.

"Nonlocality is spectacular. I mean, it's like magic," said Adán Cabello, a physicist at the University of Seville in Spain.

But Cabello and others are interested in investigating a lesser-known but equally magical aspect of quantum mechanics: contextuality. Contextuality says that properties of particles, such as their position or polarization, exist only within the context of a measurement. Instead of thinking of particles' properties as having fixed values, consider them more like words in language, whose meanings can change depending on the context: "Time flies like an arrow. Fruit flies like bananas."

Although contextuality has lived in nonlocality's shadow for over 50 years, quantum physicists now consider it more of a hallmark feature of quantum systems than nonlocality is. A single particle, for instance, is a quantum system "in which you cannot even think about nonlocality," since the particle is only in one location, said Bárbara Amaral, a physicist at the University of São Paulo in Brazil. "So [contextuality] is more general in some sense, and I think this is important to really understand the power of quantum systems and to go deeper into why quantum theory is the way it is."

Researchers have also found tantalizing links between contextuality and problems that quantum computers can efficiently solve that ordinary computers cannot; investigating these links could help guide researchers in developing new quantum computing approaches and algorithms.

And with renewed theoretical interest comes a renewed experimental effort to prove that our world is indeed contextual. In February, Cabello, in collaboration with Kihwan Kim at Tsinghua University in Beijing, China, published a paper in which they claimed to have performed the first loophole-free experimental test of contextuality.

The Northern Irish physicist John Stewart Bell is widely credited with showing that quantum systems can be nonlocal. By comparing the outcomes of measurements of two entangled particles, he showed with his eponymous theorem of 1965 that the high degree of correlations between the particles can't possibly be explained in terms of local "hidden variables" defining each one's separate properties. The information contained in the entangled pair must be shared nonlocally between the particles.

Bell also proved a similar theorem about contextuality. He and, separately, Simon Kochen and Ernst Specker showed that it is impossible for a quantum system to have hidden variables that define the values of all their properties in all possible contexts.

In Kochen and Specker's version of the proof, they considered a single particle with a quantum property called spin, which has both a magnitude and a direction. Measuring the spin's magnitude along any direction always results in one of two outcomes: 1 or 0. The researchers then asked: Is it possible that the particle secretly knows what the result of every possible measurement will be before it is measured? In other words, could they assign a fixed value (a hidden variable) to all outcomes of all possible measurements at once?

Quantum theory says that the magnitudes of the spins along three perpendicular directions must obey the "101 rule": The outcomes of two of the measurements must be 1 and the other must be 0. Kochen and Specker used this rule to arrive at a contradiction. First, they assumed that each particle had a fixed, intrinsic value for each direction of spin. They then conducted a hypothetical spin measurement along some unique direction, assigning either 0 or 1 to the outcome. They then repeatedly rotated the direction of their hypothetical measurement and measured again, each time either freely assigning a value to the outcome or deducing what the value must be in order to satisfy the 101 rule together with directions they had previously considered.
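
The 101 rule itself takes only a few lines of linear algebra to verify. The NumPy sketch below (standard spin-1 matrices, nothing specific to Kochen and Specker's 117 directions) checks that the squared spin components along perpendicular axes commute, sum to 2, and each take the values 1, 1 and 0.

```python
import numpy as np

# Standard spin-1 operators; their squares along orthogonal axes commute,
# each has eigenvalues {0, 1, 1}, and together they sum to 2, so any joint
# measurement yields two 1s and one 0: the 101 rule.
s2 = 1 / np.sqrt(2)
Sx = np.array([[0, s2, 0], [s2, 0, s2], [0, s2, 0]])
Sy = np.array([[0, -1j * s2, 0], [1j * s2, 0, -1j * s2], [0, 1j * s2, 0]])
Sz = np.diag([1.0, 0.0, -1.0])

Sx2, Sy2, Sz2 = Sx @ Sx, Sy @ Sy, Sz @ Sz
print(np.allclose(Sx2 @ Sz2, Sz2 @ Sx2))             # True: they commute
print(np.allclose(Sx2 + Sy2 + Sz2, 2 * np.eye(3)))   # True: outcomes sum to 2
print(np.round(np.linalg.eigvalsh(Sx2), 6))          # [0. 1. 1.]
```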

They continued until, in the 117th direction, the contradiction cropped up. While they had previously assigned a value of 0 to the spin along this direction, the 101 rule was now dictating that the spin must be 1. The outcome of a measurement could not possibly return both 0 and 1. So the physicists concluded that there is no way a particle can have fixed hidden variables that remain the same regardless of context.

While the proof indicated that quantum theory demands contextuality, there was no way to actually demonstrate this through 117 simultaneous measurements of a single particle. Physicists have since devised more practical, experimentally implementable versions of the original Bell-Kochen-Specker theorem involving multiple entangled particles, where a particular measurement on one particle defines a context for the others.

In 2009, contextuality, a seemingly esoteric aspect of the underlying fabric of reality, got a direct application: One of the simplified versions of the original Bell-Kochen-Specker theorem was shown to be equivalent to a basic quantum computation.

The proof, named Mermin's star after its originator, David Mermin, considered various combinations of contextual measurements that could be made on three entangled quantum bits, or qubits. The logic of how earlier measurements shape the outcomes of later measurements has become the basis for an approach called measurement-based quantum computing. The discovery suggested that contextuality might be key to why quantum computers can solve certain problems faster than classical computers, an advantage that researchers have struggled mightily to understand.

Robert Raussendorf, a physicist at the University of British Columbia and a pioneer of measurement-based quantum computing, showed that contextuality is necessary for a quantum computer to beat a classical computer at some tasks, but he doesn't think it's the whole story. "Whether contextuality powers quantum computers is probably not exactly the right question to ask," he said. "But we need to get there question by question. So we ask a question that we understand how to ask; we get an answer. We ask the next question."

Some researchers have suggested loopholes around Bell, Kochen and Specker's conclusion that the world is contextual. They argue that context-independent hidden variables haven't been conclusively ruled out.

In February, Cabello and Kim announced that they had closed every plausible loophole by performing a loophole-free Bell-Kochen-Specker experiment.

The experiment entailed measuring the spins of two entangled trapped ions in various directions, where the choice of measurement on one ion defined the context for the other ion. The physicists showed that, although making a measurement on one ion does not physically affect the other, it changes the context and hence the outcome of the second ion's measurement.

Skeptics would ask: How can you be certain that the context created by the first measurement is what changed the second measurement outcome, rather than other conditions that might vary from experiment to experiment? Cabello and Kim closed this "sharpness" loophole by performing thousands of sets of measurements and showing that the outcomes don't change if the context doesn't. After ruling out this and other loopholes, they concluded that the only reasonable explanation for their results is contextuality.

Cabello and others think that these experiments could be used in the future to test the level of contextuality and hence, the power of quantum computing devices.

"If you want to really understand how the world is working," said Cabello, "you really need to go into the detail of quantum contextuality."


Alan Turing’s Everlasting Contributions to Computing, AI and Cryptography – NIST

An enigma machine on display outside the Alan Turing Institute entrance inside the British Library, London.

Credit: Shutterstock/William Barton

Suppose someone asked you to devise the most powerful computer possible. Alan Turing, whose reputation as a central figure in computer science and artificial intelligence has only grown since his untimely death in 1954, applied his genius to problems such as this one in an age before computers as we know them existed. His theoretical work on this problem and others remains a foundation of computing, AI and modern cryptographic standards, including those NIST recommends.

The road from devising the most powerful computer possible to cryptographic standards has a few twists and turns, as does Turing's brief life.

Alan Turing

Credit: National Portrait Gallery, London

In Turing's time, mathematicians debated whether it was possible to build a single, all-purpose machine that could solve all problems that are computable. For example, we can compute a car's most energy-efficient route to a destination, and (in principle) the most likely way in which a string of amino acids will fold into a three-dimensional protein. Another example of a computable problem, important to modern encryption, is whether or not bigger numbers can be expressed as the product of two smaller numbers. For example, 6 can be expressed as the product of 2 and 3, but 7 cannot be factored into smaller integers and is therefore a prime number.
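
That factoring question is a textbook example of a computable problem. A few lines of Python settle it for small numbers by trial division (purely illustrative, and hopelessly slow at the sizes used in cryptography):

```python
# Trial division either finds two smaller factors or certifies primality.
def smallest_factorization(n):
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return (d, n // d)     # n is composite
    return None                    # n is prime

print(smallest_factorization(6))   # (2, 3)
print(smallest_factorization(7))   # None: 7 is prime
```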

Some prominent mathematicians proposed elaborate designs for universal computers that would operate by following very complicated mathematical rules. It seemed overwhelmingly difficult to build such machines. It took the genius of Turing to show that a very simple machine could in fact compute all that is computable.

His hypothetical device is now known as a Turing machine. The centerpiece of the machine is a strip of tape, divided into individual boxes. Each box contains a symbol (such as A, C, T, G for the letters of genetic code) or a blank space. The strip of tape is analogous to today's hard drives that store bits of data. Initially, the string of symbols on the tape corresponds to the input, containing the data for the problem to be solved. The string also serves as the memory of the computer. The Turing machine writes onto the tape data that it needs to access later in the computation.

The device reads an individual symbol on the tape and follows instructions on whether to change the symbol or leave it alone before moving to another symbol. The instructions depend on the current state of the machine. For example, if the machine needs to decide whether the tape contains the text string "TC" it can scan the tape in the forward direction while switching among the states "previous letter was T" and "previous letter was not T." If while in state "previous letter was T" it reads a C, it goes to a state "found it" and halts. If it encounters the blank symbol at the end of the input, it goes to the state "did not find it" and halts. Nowadays we would recognize the set of instructions as the machine's program.
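
That machine is small enough to simulate directly. The Python sketch below is our own illustrative encoding of the states just described (not code from NIST): a transition table drives the head along the tape until the machine halts in one of its two final states.

```python
# Transition table: (state, symbol) -> (new state, head movement).
TABLE = {
    ("seek",  "A"): ("seek",  +1),
    ("seek",  "C"): ("seek",  +1),
    ("seek",  "G"): ("seek",  +1),
    ("seek",  "T"): ("saw_T", +1),
    ("saw_T", "A"): ("seek",  +1),
    ("saw_T", "C"): ("found it", 0),   # a C right after a T: halt
    ("saw_T", "G"): ("seek",  +1),
    ("saw_T", "T"): ("saw_T", +1),
}

def run(tape):
    state, head = "seek", 0
    while state not in ("found it", "did not find it"):
        symbol = tape[head] if head < len(tape) else " "  # blank at the end
        if symbol == " ":
            state = "did not find it"
            break
        state, move = TABLE[(state, symbol)]
        head += move
    return state

print(run("GATTC"))   # found it
print(run("GCTAG"))   # did not find it
```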

It took some time, but eventually it became clear to everyone that Turing was right: The Turing machine could indeed compute all that seemed computable. No number of additions or extensions to this machine could extend its computing capability.

To understand what can be computed it is helpful to identify what cannot be computed. In a previous life as a university professor I had to teach programming a few times. Students often encounter the following problem: "My program has been running for a long time; is it stuck?" This is called the Halting Problem, and students often wondered why we simply couldn't detect infinite loops without actually getting stuck in them. It turns out a program to do this is an impossibility. Turing showed that there does not exist a machine that detects whether or not another machine halts. From this seminal result followed many other impossibility results. For example, logicians and philosophers had to abandon the dream of an automated way of detecting whether an assertion (such as whether there are infinitely many prime numbers) is true or false, as that is uncomputable. If you could do this, then you could solve the Halting Problem simply by asking whether the statement "this machine halts" is true or false.
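
Turing's argument can be paraphrased in code. Suppose, purely hypothetically, that a function halts(program, data) existed; the self-referential program below would then halt exactly when it does not halt, and that contradiction rules the function out. Everything here is an illustration, since the whole point is that halts cannot actually be written.

```python
def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("provably impossible to implement")

def paradox(program):
    if halts(program, program):   # would it halt when fed itself?
        while True:               # ...then loop forever
            pass
    # ...otherwise halt immediately

# Feeding paradox to itself yields the contradiction:
# paradox(paradox) halts if and only if it does not halt.
```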

Turing went on to make fundamental contributions to AI, theoretical biology and cryptography. His involvement with this last subject brought him honor and fame during World War II, when he played a very important role in adapting and extending cryptanalytic techniques invented by Polish mathematicians. This work broke the German Enigma machine encryption, making a significant contribution to the war effort.

Turing was gay. After the war, in 1952, the British government convicted him for having sex with a man. He stayed out of jail only by submitting to what is now called chemical castration. He died in 1954 at age 41 by cyanide poisoning, which was initially ruled a suicide but may have been an accident according to subsequent analysis. More than 50 years would pass before the British government apologized and pardoned him (after years of campaigning by scientists around the world). Today, the highest honor in computer sciences is called the Turing Award.

Turing's computability work provided the foundation for modern complexity theory. This theory tries to answer the question "Among those problems that can be solved by a computer, which ones can be solved efficiently?" Here, "efficiently" means not in billions of years but in milliseconds, seconds, hours or days, depending on the computational problem.

For example, much of the cryptography that currently safeguards our data and communications relies on the belief that certain problems, such as decomposing an integer number into its prime factors, cannot be solved before the Sun turns into a red giant and consumes the Earth (currently forecast for 4 billion to 5 billion years). NIST is responsible for cryptographic standards that are used throughout the world. We could not do this work without complexity theory.

Technology sometimes throws us a curve, such as the discovery that if a sufficiently big and reliable quantum computer is built it would be able to factor integers, thus breaking some of our cryptography. In this situation, NIST scientists must rely on the world's experts (many of them in-house) in order to update our standards. There are deep reasons to believe that quantum computers will not be able to break the cryptography that NIST is about to roll out. Among these reasons is that Turing's machine can simulate quantum computers. This implies that complexity theory gives us limits on what a powerful quantum computer can do.

But that is a topic for another day. For now, we can celebrate how Turing provided the keys to much of today's computing technology and even gave us hints on how to solve looming technological problems.


Julian Assange is my husband – his extradition is an abomination – The Independent

Last Friday, home secretary Priti Patel gave her approval for the UK to send my husband, Julian Assange, to the country that plotted his assassination.

Julian remains imprisoned in Belmarsh after more than three years at the behest of US prosecutors. He faces a prison sentence of up to 175 years for arguably the most celebrated publications in the history of journalism.

Patel's decision to extradite Julian has sent shockwaves across the journalism community. The home secretary flouted calls from representatives of the Council of Europe, the OSCE, almost 2,000 journalists and 300 doctors for the extradition to be halted.

When Julian calls around the children's bedtime, they talk over each other boisterously. The calls only last 10 minutes, so when the call ended abruptly the other night, Max, who is three, asked tearfully if it was because he'd been naughty. I absentmindedly said it wasn't his fault, but Mike Pompeo's. Five-year-old Gabriel asked: "Who is Mike Pompeo?"

Mike Pompeo had been on my mind, because while the home secretary in this country was busy signing Julian's extradition order, in Spain a High Court judge was summoning Pompeo for questioning regarding his role as director of the CIA in their reported plots to murder my husband.

While at the helm of the CIA, President Trump's most loyal supporter reportedly tasked his agents with preparing "sketches" and "options" for the assassination of their father.

The citation for Pompeo to appear before a Spanish judge comes out of an investigation into illicit spying on Julian and his lawyers through a company registered in Spain. Spanish police seized large amounts of electronic data, and insiders involved in carrying out the clandestine operations testified that they acted on instruction of the CIA. They had discussed abducting and poisoning Julian.

Gabriel was six months old at the time and had been a target too. One witness was instructed to obtain DNA swabs from a soiled nappy in order to establish that Julian was his father. Another admitted to planting hidden microphones under the fire extinguishers to tap legally privileged meetings between Julian and his lawyers.

The recordings of Julian's legal meetings in the Ecuadorian embassy in London were physically transported to handlers in the United States on a regular basis. A break-in at Julian's lawyers' office was caught on camera, and investigators discovered photographs of Julian's lawyers' legal papers taken inside the embassy. The operations targeting his lawyers read like they are taken from a Soviet playbook.

Across the pond, ever since the Nixon administration's attempted prosecution of the New York Times over the Pentagon Papers over half a century ago, constitutional lawyers had been warning that the 1917 Espionage Act would one day be abused to prosecute journalists.

It was President Obama's administration that enlivened the creeping misuse of the Espionage Act. More journalistic sources were charged under the Act than under all previous administrations combined, including WikiLeaks source Chelsea Manning; CIA torture whistleblower John Kiriakou; and NSA spying whistleblower Edward Snowden.

Following massive public pressure, Obama commuted Chelsea Manning's 35-year sentence. Obama declined to prosecute Julian for publishing Manning's leaks because of the implications for press freedom.

After the Obama administration's Espionage Act charging spree, it was just a matter of time before another administration expanded the interpretation of the Act even further.

That day came soon enough. Trump's administration broke new legal ground with the indictment of Julian for receiving, possessing, and publishing the Manning leaks. Meanwhile in Langley, Virginia, Pompeo tasked the CIA with assassination plans.

Priti Patel's decision comes amidst sweeping government reforms of an increasingly totalitarian bent: the plans to weaken the influence of the European Court of Human Rights and the decision to extradite Julian are the coup de grace.

The home secretary's proposed reforms to the UK's Official Secrets Act largely track the Trump-era indictment against Julian: publishers and their sources can be charged as criminal co-conspirators.

Julian's extradition case itself creates legal precedent. What has long been understood to be a bedrock principle of democracy, press freedom, will disappear in one fell swoop.

As it stands, no journalist is going to risk having what Julian is being subjected to happen to them. Julian must be freed before it's too late. His life depends on it. Your rights depend on it.


Exploring emerging topics in artificial intelligence policy – MIT News

Members of the public sector, private sector, and academia convened for the second AI Policy Forum Symposium last month to explore critical directions and questions posed by artificial intelligence in our economies and societies.

The virtual event, hosted by the AI Policy Forum (AIPF), an undertaking by the MIT Schwarzman College of Computing to bridge high-level principles of AI policy with the practices and trade-offs of governing, brought together an array of distinguished panelists to delve into four cross-cutting topics: law, auditing, health care, and mobility.

In the last year there have been substantial changes in the regulatory and policy landscape around AI in several countries, most notably in Europe with the development of the European Union Artificial Intelligence Act, the first attempt by a major regulator to propose a law on artificial intelligence. In the United States, the National AI Initiative Act of 2020, which became law in January 2021, is providing a coordinated program across federal government to accelerate AI research and application for economic prosperity and security gains. Finally, China recently advanced several new regulations of its own.

Each of these developments represents a different approach to legislating AI, but what makes a good AI law? And when should AI legislation be based on binding rules with penalties versus establishing voluntary guidelines?

Jonathan Zittrain, professor of international law at Harvard Law School and director of the Berkman Klein Center for Internet and Society, says the self-regulatory approach taken during the expansion of the internet had its limitations with companies struggling to balance their interests with those of their industry and the public.

"One lesson might be that actually having representative government take an active role early on is a good idea," he says. "It's just that they're challenged by the fact that there appears to be two phases in this environment of regulation. One, too early to tell, and two, too late to do anything about it. In AI I think a lot of people would say we're still in the 'too early to tell' stage but given that there's no middle zone before it's too late, it might still call for some regulation."

A theme that came up repeatedly throughout the first panel on AI laws, a conversation moderated by Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and chair of the AI Policy Forum, was the notion of trust. "If you told me the truth consistently, I would say you are an honest person. If AI could provide something similar, something that I can say is consistent and is the same, then I would say it's trusted AI," says Bitange Ndemo, professor of entrepreneurship at the University of Nairobi and the former permanent secretary of Kenya's Ministry of Information and Communication.

Eva Kaili, vice president of the European Parliament, adds that "in Europe, whenever you use something, like any medication, you know that it has been checked. You know you can trust it. You know the controls are there. We have to achieve the same with AI." Kaili further stresses that building trust in AI systems will not only lead to people using more applications in a safe manner, but that AI itself will reap benefits as greater amounts of data will be generated as a result.

The rapidly increasing applicability of AI across fields has prompted the need to address both the opportunities and challenges of emerging technologies and the impact they have on social and ethical issues such as privacy, fairness, bias, transparency, and accountability. In health care, for example, new techniques in machine learning have shown enormous promise for improving quality and efficiency, but questions of equity, data access and privacy, safety and reliability, and immunology and global health surveillance remain unresolved.

MIT's Marzyeh Ghassemi, an assistant professor in the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, and David Sontag, an associate professor of electrical engineering and computer science, collaborated with Ziad Obermeyer, an associate professor of health policy and management at the University of California, Berkeley, School of Public Health, to organize AIPF Health Wide Reach, a series of sessions to discuss issues of data sharing and privacy in clinical AI. The organizers assembled experts devoted to AI, policy, and health from around the world with the goal of understanding what can be done to decrease barriers to accessing high-quality health data, so as to advance more innovative, robust, and inclusive research results while respecting patient privacy.

Over the course of the series, members of the group presented on a topic of expertise and were tasked with proposing concrete policy approaches to the challenge discussed. Drawing on these wide-ranging conversations, participants unveiled their findings during the symposium, covering nonprofit and government success stories and limited-access models; upside demonstrations; legal frameworks, regulation, and funding; technical approaches to privacy; and infrastructure and data sharing. The group then discussed some of their recommendations, which are summarized in a report to be released soon.

One of the findings calls for making more data available for research use. Recommendations that stem from this finding include updating regulations to promote data sharing, enabling easier access to safe harbors such as the one the Health Insurance Portability and Accountability Act (HIPAA) provides for de-identification, and expanding funding for private health institutions to curate datasets, among others. Another finding, on removing barriers to data for researchers, supports a recommendation to decrease obstacles to research and development on federally created health data. "If this is data that should be accessible because it's funded by some federal entity, we should easily establish the steps that are going to be part of gaining access to that, so that it's a more inclusive and equitable set of research opportunities for all," says Ghassemi. The group also recommends taking a careful look at the ethical principles that govern data sharing. While there are already many principles proposed around this, Ghassemi says that "obviously you can't satisfy all levers or buttons at once, but we think that this is a trade-off that's very important to think through intelligently."

In addition to law and health care, other facets of AI policy explored during the event included auditing and monitoring AI systems at scale, and the role AI plays in mobility and the range of technical, business, and policy challenges for autonomous vehicles in particular.

The AI Policy Forum Symposium was an effort to bring together communities of practice with the shared aim of designing the next chapter of AI. In his closing remarks, Aleksander Madry, the Cadence Design Systems Professor of Computing at MIT and faculty co-lead of the AI Policy Forum, emphasized the importance of collaboration and the need for different communities to communicate with each other in order to truly make an impact in the AI policy space.

"The dream here is that we all can meet together, researchers, industry, policymakers, and other stakeholders, and really talk to each other, understand each other's concerns, and think together about solutions," Madry said. "This is the mission of the AI Policy Forum and this is what we want to enable."

Read this article:
Exploring emerging topics in artificial intelligence policy | MIT News | Massachusetts Institute of Technology - MIT News

Can Artificial Intelligence Be Creative? – Discovery Institute

Image: Lady Ada Lovelace (1815–1852), via Wikimedia Commons.

Editor's note: We are delighted to present an excerpt from Chapter 2 of the new book Non-Computable You: What You Do that Artificial Intelligence Never Will, by computer engineer Robert J. Marks, director of Discovery Institute's Bradley Center for Natural and Artificial Intelligence.

Some have claimed AI is creative. But creativity is a fuzzy term. To talk fruitfully about creativity, the term must be defined so that everyone is talking about the same thing and no one is bending the meaning to fit their purpose. Let's explore what creativity is, and it will become clear that, properly defined, AI is no more creative than a pencil.

Lady Ada Lovelace (1815–1852), daughter of the poet George Gordon, Lord Byron, was the first computer programmer, writing algorithms for a machine that was planned but never built. She also was quite possibly the first to note that computers will not be creative, that is, that they cannot create something new. She wrote in 1842 that the computer "has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform."

Alan Turing disagreed. Turing is often called the father of computer science, having established the idea for modern computers in the 1930s. Turing argued that we can't even be sure that humans create, because humans do nothing new under the sun, but they do surprise us. Likewise, he said, "Machines take me by surprise with great frequency." So perhaps, he argued, it is the element of surprise that's relevant, not the ability to originate something new.

Machines can surprise us if they're programmed by humans to surprise us, or if the programmer has made a mistake and thus experiences an unexpected outcome. Often, though, surprise occurs as a result of the successful implementation of a computer search that explores a myriad of candidate solutions to a problem, as the sketch below illustrates. The solution chosen by the computer can be unexpected. The computer code that searches among different solutions, though, is not creative. The creativity credit belongs to the computer programmer who chose the set of solutions to be explored. One could give examples from computer searches for making the best move in the game of Go and for simulated swarms. Both results are surprising and unexpected, but there is no creativity contributed by the computer code.
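To make the division of labor concrete, here is a minimal Python sketch (an invented toy problem, not drawn from the book): the programmer fixes both the candidate space and the scoring rule, and the machine's "surprising" answer is nothing more than the best-scoring candidate.

```python
import itertools

# The programmer, not the machine, supplies the search space and the
# objective; the machine only enumerates and ranks (hypothetical toy example).
def score(route):
    # Objective chosen by the programmer: minimize total step distance.
    return -sum(abs(a - b) for a, b in zip(route, route[1:]))

candidates = itertools.permutations(range(6))  # search space, fixed in advance
best = max(candidates, key=score)              # exhaustive search, no creativity
print("Best route found:", best)               # may look clever, but it's just argmax
```

Whatever route gets printed, the "surprise" is entirely a property of the space and the objective the programmer wrote down.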

Alan Turing, an atheist, wanted to show we are machines and that computers could be creative. Turing equated intelligence with problem solving, did not consider questions of consciousness and emotion, and referred to people as "human computers." Turing's version of the imitation game was proposed to show that computers could duplicate the conversational human. This is why the biographical movie starring Benedict Cumberbatch as Turing was titled The Imitation Game.

How can computers imitate humans, according to Turing? The imitation game (which came to be called the Turing test) simply asks whether, in a conversational exchange using text (that is, an exchange in which the participants are hidden from each other), a sufficiently sophisticated computer can be distinguished from a human. If a questioner gets lucid, human-sounding answers from the computer, and believes the computer is in fact a human typing in answers from another room, then the test has been passed. (Incidentally, the converse of the Turing test is easy. Simply ask it to calculate the cube root of 12 out to 10 significant figures. If the answer is almost immediate, you are talking to a computer.)
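As a quick illustration of that converse test, a couple of lines of Python (an illustrative sketch, not part of the excerpt) produce the requested digits instantly:

```python
# Converse Turing test: a machine answers instantly.
# Cube root of 12, printed to 10 significant figures.
print(f"{12 ** (1 / 3):.10g}")  # 2.289428485
```

A human would need pencil, paper, and considerable time; the immediate answer gives the machine away.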

There are those who claim that today's chatbots have passed the Turing test. Most famously, a computer program known as Eugene Goostman purportedly passed. One posted subtitle claiming this reads, "Eugene Goostman fools 33% of interrogators into thinking it is human, in what is seen as a milestone in artificial intelligence."

But those making a test often cheat to tweak the outcome. That's what Goostman's programmers did. Goostman supposedly was a Ukrainian, which meant English was not his first language, so his poor grammar could be excused. He was supposedly a teenager, so any lack of depth of understanding could be chalked up to his naïve intellect. Likewise, his tendency to be silly and deflect questions could be considered typical teenage behavior. In short, Goostman's software character was gamed to pass the Turing test.

Here are a couple of examples of an exchange between Goostman and his questioner. Goostman is first asked a question proposed in the first line of the lyrics of the song "Close to You," popularized by the Carpenters:

Questioner: Why do birds suddenly appear?

Goostman: Just because 2 plus 2 is 5! By the way, what's your occupation? I mean could you tell me about your work?

We see here an often-used cheat in chatbots. If an answer is not known, the chatbot responds with an unrelated question to derail the conversation's direction.

Here's another exchange with Eugene Goostman:

Questioner: It is ok, I get sick of sick people. How is your stomach feeling today? Is it upset maybe?

Goostman: I think you can't smile at all. I bet you work in a funeral agency.

As you can see, Goostman's answers here are elusively non-responsive.

Selmer Bringsjord correctly notes the Turing test is gamed by programmers. "Gamed" here is a nice word for being an elusive cheat. As Bringsjord writes, "Though progress toward Turing's dream is being made, it's coming only on the strength of clever but shallow trickery."

When gaming the system, chatbots can deflect detection by answering questions with other questions, giving evasive answers, or admitting ignorance. They display general intellectual shallowness as regards creativity and depth of understanding.

Goostman answered questions with questions like, "By the way, what's your occupation?" He also tried to change topics with conversational whiplash responses like "I bet you work in a funeral agency." These are examples of the "clever but shallow trickery" Bringsjord criticized, a pattern sketched in code below.
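The deflection trick itself takes only a few lines to stage. Here is a hypothetical Python sketch (not Goostman's actual code) of the pattern: answer only when a scripted response matches, and otherwise change the subject.

```python
import random

# Hypothetical sketch of the deflection cheat: scripted answers where
# possible, a topic-changing question everywhere else.
CANNED = {
    "how are you": "I'm doing great, thanks for asking!",
}
DEFLECTIONS = [
    "By the way, what's your occupation?",
    "I bet you work in a funeral agency.",
]

def reply(question: str) -> str:
    key = question.lower().strip("?!. ")
    # No real understanding: any unknown question triggers a deflection.
    return CANNED.get(key, random.choice(DEFLECTIONS))

print(reply("How are you?"))                   # scripted answer
print(reply("Why do birds suddenly appear?"))  # deflection, not comprehension
```

Nothing in this loop models meaning; it only pattern-matches strings, which is the shallowness Bringsjord describes.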

What, then, do Turing tests prove? Only that clever programmers can trick gullible or uninitiated people into believing they're interacting with a human. Mistaking something for human does not make it human. Programming to shallowly mimic thought is not the same thing as thinking. Rambling randomness (such as the change-of-topic questions Goostman spit out) does not display creativity.

"I propose to consider the question, 'Can machines think?'" Turing said. Ironically, Turing not only failed in his attempt to show that machines can be conversationally creative, but also developed computer science that shows humans are non-computable.

See more here:
Can Artificial Intelligence Be Creative? - Discovery Institute

Worldwide Artificial Intelligence (AI) in Drug Discovery Market to reach $4.0 billion by 2027 at a CAGR of 45.7% – ResearchAndMarkets.com – Business…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence (AI) in Drug Discovery Market by Component (Software, Service), Technology (ML, DL), Application (Neurodegenerative Diseases, Immuno-Oncology, CVD), End User (Pharmaceutical & Biotechnology, CRO), Region - Global forecast to 2024" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence (AI) in drug discovery market is projected to reach USD 4.0 billion by 2027, from USD 0.6 billion in 2022, at a CAGR of 45.7% during the forecast period. The growth of this market is primarily driven by factors such as the need to control drug discovery and development costs and reduce the overall time taken in this process, as well as the rising adoption of cloud-based applications and services. On the other hand, the inadequate availability of skilled labor is a key factor restraining market growth to a certain extent over the forecast period.
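Those headline numbers are internally consistent, as a quick check of the compound-annual-growth-rate arithmetic shows (a Python sketch using only the figures quoted above):

```python
# Sanity-check the projection: USD 0.6B in 2022 compounding at a
# 45.7% CAGR over the 5-year forecast period to 2027.
start_usd_bn, cagr, years = 0.6, 0.457, 5
projected = start_usd_bn * (1 + cagr) ** years
print(f"Projected 2027 market size: ${projected:.2f} billion")
# ~$3.94 billion, i.e. ~$4.0 billion once the rounded inputs are accounted for
```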

The services segment is estimated to hold the largest share in 2022 and is also expected to grow at the highest rate over the forecast period

On the basis of offering, the AI in drug discovery market is bifurcated into software and services. The services segment is expected to account for the largest share of the global AI in drug discovery market in 2022 and to grow at the fastest CAGR during the forecast period. The advantages and benefits associated with these services, and the strong demand for AI services among end users, are the key factors driving the growth of this segment.

Machine learning technology segment accounted for the largest share of the global AI in drug discovery market

On the basis of technology, the AI in drug discovery market is segmented into machine learning and other technologies. The machine learning segment accounted for the largest share of the global market in 2021 and is expected to grow at the highest CAGR during the forecast period. High adoption of machine learning among CROs and pharmaceutical and biotechnology companies, and the capability of these technologies to extract insights from large data sets, which helps accelerate the drug discovery process, are some of the factors supporting this segment's growth.

The pharmaceutical & biotechnology companies segment is expected to hold the largest share of the market in 2022

On the basis of end user, the AI in drug discovery market is divided into pharmaceutical & biotechnology companies, CROs, and research centers and academic & government institutes. In 2021, the pharmaceutical & biotechnology companies segment accounted for the largest share of the AI in drug discovery market. On the other hand, research centers and academic & government institutes are expected to witness the highest CAGR during the forecast period. The strong demand for AI-based tools for making the entire drug discovery process more time- and cost-efficient is the key growth factor for the pharmaceutical & biotechnology end-user segment.

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary

4 Premium Insights

4.1 Growing Need to Control Drug Discovery & Development Costs is a Key Factor Driving the Adoption of AI in Drug Discovery Solutions

4.2 Services Segment to Witness the Highest Growth During the Forecast Period

4.3 Deep Learning Segment Accounted for the Largest Market Share in 2021

4.4 North America is the Fastest-Growing Regional Market for AI in Drug Discovery

5 Market Overview

5.1 Introduction

5.2 Market Dynamics

5.2.1 Market Drivers

5.2.1.1 Growing Number of Cross-Industry Collaborations and Partnerships

5.2.1.2 Growing Need to Control Drug Discovery & Development Costs and Reduce Time Involved in Drug Development

5.2.1.3 Patent Expiry of Several Drugs

5.2.2 Market Restraints

5.2.2.1 Shortage of AI Workforce and Ambiguous Regulatory Guidelines for Medical Software

5.2.3 Market Opportunities

5.2.3.1 Growing Biotechnology Industry

5.2.3.2 Emerging Markets

5.2.3.3 Focus on Developing Human-Aware AI Systems

5.2.3.4 Growth in the Drugs and Biologics Market Despite the COVID-19 Pandemic

5.2.4 Market Challenges

5.2.4.1 Limited Availability of Data Sets

5.3 Value Chain Analysis

5.4 Porter's Five Forces Analysis

5.5 Ecosystem

5.6 Technology Analysis

5.7 Pricing Analysis

5.8 Business Models

5.9 Regulations

5.10 Conferences and Webinars

5.11 Case Study Analysis

6 Artificial Intelligence in Drug Discovery Market, by Offering

7 Artificial Intelligence in Drug Discovery Market, by Technology

8 Artificial Intelligence in Drug Discovery Market, by Application

9 Artificial Intelligence in Drug Discovery Market, by End-user

10 Artificial Intelligence in Drug Discovery Market, by Region

11 Competitive Landscape

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/q5pvns

See the article here:
Worldwide Artificial Intelligence (AI) in Drug Discovery Market to reach $ 4.0 billion by 2027 at a CAGR of 45.7% - ResearchAndMarkets.com - Business...