Quantum Error Correction: Time to Make It Work – IEEE Spectrum

Dates chiseled into an ancient tombstone have more in common with the data in your phone or laptop than you may realize. They both involve conventional, classical information, carried by hardware that is relatively immune to errors. The situation inside a quantum computer is far different: The information itself has its own idiosyncratic properties, and compared with standard digital microelectronics, state-of-the-art quantum-computer hardware is more than a billion trillion times as likely to suffer a fault. This tremendous susceptibility to errors is the single biggest problem holding back quantum computing from realizing its great promise.

Fortunately, an approach known as quantum error correction (QEC) can remedy this problem, at least in principle. A mature body of theory built up over the past quarter century now provides a solid theoretical foundation, and experimentalists have demonstrated dozens of proof-of-principle examples of QEC. But these experiments still have not reached the level of quality and sophistication needed to reduce the overall error rate in a system.

The two of us, along with many other researchers involved in quantum computing, are trying to move definitively beyond these preliminary demos of QEC so that it can be employed to build useful, large-scale quantum computers. But before describing how we think such error correction can be made practical, we need to first review what makes a quantum computer tick.

Information is physical. This was the mantra of the distinguished IBM researcher Rolf Landauer. Abstract though it may seem, information always involves a physical representation, and the physics matters.

Conventional digital information consists of bits, zeros and ones, which can be represented by classical states of matter, that is, states well described by classical physics. Quantum information, by contrast, involves qubits, or quantum bits, whose properties follow the peculiar rules of quantum mechanics.

A classical bit has only two possible values: 0 or 1. A qubit, however, can occupy a superposition of these two information states, taking on characteristics of both. Polarized light provides intuitive examples of superpositions. You could use horizontally polarized light to represent 0 and vertically polarized light to represent 1, but light can also be polarized at an angle and then has both horizontal and vertical components at once. Indeed, one way to represent a qubit is by the polarization of a single photon of light.

These ideas generalize to groups of n bits or qubits: n bits can represent any one of 2^n possible values at any moment, while n qubits can include components corresponding to all 2^n classical states simultaneously in superposition. These superpositions provide a vast range of possible states for a quantum computer to work with, albeit with limitations on how they can be manipulated and accessed. Superposition of information is a central resource used in quantum processing and, along with other quantum rules, enables powerful new ways to compute.
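To make that counting concrete, here is a minimal sketch (our own illustration in Python with NumPy, not something from the article): the state of n qubits is described by 2^n complex amplitudes, one for every classical bit string, whereas n classical bits hold just one of those 2^n values at a time.

```python
import numpy as np

n = 3                      # number of qubits (illustrative choice)

# One classical register of n bits stores exactly one of the 2^n values.
classical_value = 0b101    # e.g. the single value "101"

# An n-qubit state is a vector of 2^n complex amplitudes, one per classical value.
# Example: an equal superposition of all 2^n basis states.
state = np.ones(2**n, dtype=complex) / np.sqrt(2**n)

print(f"{n} bits  -> one of {2**n} values at a time (here: {classical_value:0{n}b})")
print(f"{n} qubits -> {state.size} amplitudes, norm = {np.linalg.norm(state):.3f}")
```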

Researchers are experimenting with many different physical systems to hold and process quantum information, including light, trapped atoms and ions, and solid-state devices based on semiconductors or superconductors. For the purpose of realizing qubits, all these systems follow the same underlying mathematical rules of quantum physics, and all of them are highly sensitive to environmental fluctuations that introduce errors. By contrast, the transistors that handle classical information in modern digital electronics can reliably perform a billion operations per second for decades with a vanishingly small chance of a hardware fault.

Of particular concern is the fact that qubit states can roam over a continuous range of superpositions. Polarized light again provides a good analogy: The angle of linear polarization can take any value from 0 to 180 degrees.

Pictorially, a qubit's state can be thought of as an arrow pointing to a location on the surface of a sphere. Known as a Bloch sphere, its north and south poles represent the binary states 0 and 1, respectively, and all other locations on its surface represent possible quantum superpositions of those two states. Noise causes the Bloch arrow to drift around the sphere over time. A conventional computer represents 0 and 1 with physical quantities, such as capacitor voltages, that can be locked near the correct values to suppress this kind of continuous wandering and unwanted bit flips. There is no comparable way to lock the qubit's arrow to its correct location on the Bloch sphere.
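As a rough illustration of this picture, the short sketch below (our own; the function name and conventions are assumptions, not anything from the article) converts the two amplitudes of a qubit state into the latitude and longitude of its Bloch-sphere arrow.

```python
import numpy as np

def bloch_angles(alpha, beta):
    """Return (theta, phi) for the state alpha|0> + beta|1> on the Bloch sphere.

    theta = 0 is the north pole (|0>), theta = pi the south pole (|1>);
    phi is the longitude, measured after removing the irrelevant global phase.
    """
    vec = np.array([alpha, beta], dtype=complex)
    vec = vec / np.linalg.norm(vec)
    vec = vec * np.exp(-1j * np.angle(vec[0]))   # make the |0> amplitude real

    theta = 2 * np.arccos(np.clip(np.abs(vec[0]), 0.0, 1.0))
    phi = np.angle(vec[1]) if np.abs(vec[1]) > 1e-12 else 0.0
    return theta, phi

# |0>, |1>, and the equal superposition (|0> + |1>)/sqrt(2):
for amps in [(1, 0), (0, 1), (1 / np.sqrt(2), 1 / np.sqrt(2))]:
    theta, phi = bloch_angles(*amps)
    print(f"theta = {np.degrees(theta):6.1f} deg, phi = {np.degrees(phi):5.1f} deg")
```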

Early in the 1990s, Landauer and others argued that this difficulty presented a fundamental obstacle to building useful quantum computers. The issue is known as scalability: Although a simple quantum processor performing a few operations on a handful of qubits might be possible, could you scale up the technology to systems that could run lengthy computations on large arrays of qubits? A type of classical computation called analog computing also uses continuous quantities and is suitable for some tasks, but the problem of continuous errors prevents the complexity of such systems from being scaled up. Continuous errors with qubits seemed to doom quantum computers to the same fate.

We now know better. Theoreticians have successfully adapted the theory of error correction for classical digital data to quantum settings. QEC makes scalable quantum processing possible in a way that is impossible for analog computers. To get a sense of how it works, it's worthwhile to review how error correction is performed in classical settings.

Simple schemes can deal with errors in classical information. For instance, in the 19th century, ships routinely carried clocks for determining the ship's longitude during voyages. A good clock that could keep track of the time in Greenwich, in combination with the sun's position in the sky, provided the necessary data. A mistimed clock could lead to dangerous navigational errors, though, so ships often carried at least three of them. Two clocks reading different times could detect when one was at fault, but three were needed to identify which timepiece was faulty and correct it through a majority vote.

The use of multiple clocks is an example of a repetition code: Information is redundantly encoded in multiple physical devices such that a disturbance in one can be identified and corrected.
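The clock story translates directly into a few lines of code. The following sketch (our own toy example) encodes one bit as three copies and recovers it by majority vote, which corrects any single flipped copy.

```python
def encode(bit):
    """Repetition code: store one logical bit as three physical copies."""
    return [bit, bit, bit]

def decode(copies):
    """Majority vote: recover the logical bit despite one faulty copy."""
    return 1 if sum(copies) >= 2 else 0

codeword = encode(1)      # [1, 1, 1]
codeword[0] ^= 1          # one "clock" goes wrong: [0, 1, 1]
print(decode(codeword))   # prints 1: the error is outvoted
```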

As you might expect, quantum mechanics adds some major complications when dealing with errors. Two problems in particular might seem to dash any hopes of using a quantum repetition code. The first problem is that measurements fundamentally disturb quantum systems. So if you encoded information on three qubits, for instance, observing them directly to check for errors would ruin them. Like Schrödinger's cat when its box is opened, their quantum states would be irrevocably changed, spoiling the very quantum features your computer was intended to exploit.

The second issue is a fundamental result in quantum mechanics called the no-cloning theorem, which tells us it is impossible to make a perfect copy of an unknown quantum state. If you know the exact superposition state of your qubit, there is no problem producing any number of other qubits in the same state. But once a computation is running and you no longer know what state a qubit has evolved to, you cannot manufacture faithful copies of that qubit except by duplicating the entire process up to that point.

Fortunately, you can sidestep both of these obstacles. We'll first describe how to evade the measurement problem using the example of a classical three-bit repetition code. You don't actually need to know the state of every individual code bit to identify which one, if any, has flipped. Instead, you ask two questions: "Are bits 1 and 2 the same?" and "Are bits 2 and 3 the same?" These are called parity-check questions because two identical bits are said to have even parity, and two unequal bits have odd parity.

The two answers to those questions identify which single bit has flipped, and you can then counterflip that bit to correct the error. You can do all this without ever determining what value each code bit holds. A similar strategy works to correct errors in a quantum system.
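In code, the two parity questions become a two-bit syndrome that points at the faulty bit without ever revealing the stored value. The sketch below is our own classical illustration; the lookup table is just the three-bit repetition code written out.

```python
def syndrome(bits):
    """Answer the two parity-check questions for a 3-bit codeword.

    Returns (parity of bits 1 and 2, parity of bits 2 and 3),
    where 0 means "same" (even parity) and 1 means "different" (odd parity).
    """
    b1, b2, b3 = bits
    return (b1 ^ b2, b2 ^ b3)

# Which single bit (if any) each syndrome points to:
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    flipped = LOOKUP[syndrome(bits)]
    if flipped is not None:
        bits[flipped] ^= 1          # counterflip the faulty bit
    return bits

print(correct([0, 0, 0]))   # no error            -> [0, 0, 0]
print(correct([0, 1, 0]))   # middle bit flipped  -> [0, 0, 0]
print(correct([1, 1, 0]))   # third bit flipped   -> [1, 1, 1]
```

The point to notice is that the syndrome never says whether the codeword holds 0 or 1; it only says which bit, if any, disagrees with the others. That is exactly the property the quantum version needs.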

Learning the values of the parity checks still requires quantum measurement, but importantly, it does not reveal the underlying quantum information. Additional qubits can be used as disposable resources to obtain the parity values without revealing (and thus without disturbing) the encoded information itself.


What about no-cloning? It turns out it is possible to take a qubit whose state is unknown and encode that hidden state in a superposition across multiple qubits in a way that does not clone the original information. This process allows you to record what amounts to a single logical qubit of information across three physical qubits, and you can perform parity checks and corrective steps to protect the logical qubit against noise.
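A short state-vector sketch (our own, in NumPy) makes the distinction concrete: the unknown state α|0⟩ + β|1⟩ becomes the entangled state α|000⟩ + β|111⟩, which is one logical qubit spread over three physical qubits, not three independent copies of the original.

```python
import numpy as np

# An arbitrary (possibly unknown to us) single-qubit state alpha|0> + beta|1>.
alpha, beta = 0.6, 0.8j

# Three-qubit repetition encoding: alpha|000> + beta|111>  (basis ordered 000..111).
encoded = np.zeros(8, dtype=complex)
encoded[0b000] = alpha
encoded[0b111] = beta

# What cloning would produce instead: (alpha|0> + beta|1>) on each of three qubits.
single = np.array([alpha, beta])
cloned = np.kron(np.kron(single, single), single)

# The two 8-dimensional states are genuinely different, so nothing was cloned.
overlap = abs(np.vdot(cloned, encoded))
print(f"|<cloned|encoded>| = {overlap:.3f}  (1.000 would mean they were the same state)")
```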

Quantum errors consist of more than just bit-flip errors, though, making this simple three-qubit repetition code unsuitable for protecting against all possible quantum errors. True QEC requires something more. That came in the mid-1990s when Peter Shor (then at AT&T Bell Laboratories, in Murray Hill, N.J.) described an elegant scheme to encode one logical qubit into nine physical qubits by embedding a repetition code inside another code. Shor's scheme protects against an arbitrary quantum error on any one of the physical qubits.

Since then, the QEC community has developed many improved encoding schemes, which use fewer physical qubits per logical qubit (the most compact use five) or enjoy other performance enhancements. Today, the workhorse of large-scale proposals for error correction in quantum computers is called the surface code, developed in the late 1990s by borrowing exotic mathematics from topology and high-energy physics.

It is convenient to think of a quantum computer as being made up of logical qubits and logical gates that sit atop an underlying foundation of physical devices. These physical devices are subject to noise, which creates physical errors that accumulate over time. Periodically, generalized parity measurements (called syndrome measurements) identify the physical errors, and corrections remove them before they cause damage at the logical level.

A quantum computation with QEC then consists of cycles of gates acting on qubits, syndrome measurements, error inference, and corrections. In terms more familiar to engineers, QEC is a form of feedback stabilization that uses indirect measurements to gain just the information needed to correct errors.
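The cycle can be sketched as a toy simulation of the bit-flip repetition code (our own simplified stand-in; real QEC uses quantum codes and far more sophisticated decoders): in each round, noise acts, a syndrome is extracted, an error is inferred, and a correction is applied.

```python
import random

def qec_cycles(p_flip, n_cycles=100_000, seed=1):
    """Toy QEC loop for a 3-bit repetition code with independent bit-flip noise."""
    random.seed(seed)
    logical_failures = 0
    for _ in range(n_cycles):
        bits = [0, 0, 0]                                  # encoded logical 0
        # 1. Noise: each physical bit flips independently with probability p_flip.
        for i in range(3):
            if random.random() < p_flip:
                bits[i] ^= 1
        # 2. Syndrome measurement: the two parity checks.
        s = (bits[0] ^ bits[1], bits[1] ^ bits[2])
        # 3. Inference and 4. correction: flip the bit the syndrome points to.
        correction = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[s]
        if correction is not None:
            bits[correction] ^= 1
        # A logical failure means the decoded bit is wrong (two or more flips).
        if sum(bits) >= 2:
            logical_failures += 1
    return logical_failures / n_cycles

for p in (0.01, 0.05, 0.2):
    print(f"physical flip rate {p:0.2f} -> logical failure rate {qec_cycles(p):.5f}")
```

In this toy model the logical failure rate scales roughly as 3p² for small p, which beats the bare rate p only when p is small enough; that crossover is the seed of the threshold idea discussed later in the article.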

QEC is not foolproof, of course. The three-bit repetition code, for example, fails if more than one bit has been flipped. What's more, the resources and mechanisms that create the encoded quantum states and perform the syndrome measurements are themselves prone to errors. How, then, can a quantum computer perform QEC when all these processes are themselves faulty?

Remarkably, the error-correction cycle can be designed to tolerate errors and faults that occur at every stage, whether in the physical qubits, the physical gates, or even in the very measurements used to infer the existence of errors! Called a fault-tolerant architecture, such a design permits, in principle, error-robust quantum processing even when all the component parts are unreliable.

A long quantum computation will require many cycles of quantum error correction (QEC). Each cycle would consist of gates acting on encoded qubits (performing the computation), followed by syndrome measurements from which errors can be inferred, and corrections. The effectiveness of this QEC feedback loop can be greatly enhanced by including quantum-control techniques to stabilize and optimize each of these processes.

Even in a fault-tolerant architecture, the additional complexity introduces new avenues for failure. The effect of errors is therefore reduced at the logical level only if the underlying physical error rate is not too high. The maximum physical error rate that a specific fault-tolerant architecture can reliably handle is known as its break-even error threshold. If error rates are lower than this threshold, the QEC process tends to suppress errors over the entire cycle. But if error rates exceed the threshold, the added machinery just makes things worse overall.

The theory of fault-tolerant QEC is foundational to every effort to build useful quantum computers because it paves the way to building systems of any size. If QEC is implemented effectively on hardware exceeding certain performance requirements, the effect of errors can be reduced to arbitrarily low levels, enabling the execution of arbitrarily long computations.

At this point, you may be wondering how QEC has evaded the problem of continuous errors, which is fatal for scaling up analog computers. The answer lies in the nature of quantum measurements.

In a typical quantum measurement of a superposition, only a few discrete outcomes are possible, and the physical state changes to match the result that the measurement finds. With the parity-check measurements, this change helps.

Imagine you have a code block of three physical qubits, and one of these qubit states has wandered a little from its ideal state. If you perform a parity measurement, just two results are possible: Most often, the measurement will report the parity state that corresponds to no error, and after the measurement, all three qubits will be in the correct state, whatever it is. Occasionally the measurement will instead indicate the odd parity state, which means an errant qubit is now fully flipped. If so, you can flip that qubit back to restore the desired encoded logical state.
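The following sketch (our own NumPy illustration, not from the article) encodes a logical state, lets one physical qubit drift by a small rotation, and then works out the probabilities of the two parity outcomes: most of the time the check reports no error and the state snaps back onto the code exactly; occasionally it reports odd parity, meaning that qubit is now fully flipped and a simple counterflip repairs it.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def op_on(qubit_op, position):
    """Embed a single-qubit operator at the given position in a 3-qubit register."""
    ops = [I2, I2, I2]
    ops[position] = qubit_op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

# Encoded logical state alpha|000> + beta|111>.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
encoded = np.zeros(8, dtype=complex)
encoded[0b000], encoded[0b111] = alpha, beta

# A small, continuous drift: rotate qubit 0 by a small angle about X.
delta = 0.1                                   # radians; "a little" off its ideal state
Rx = np.cos(delta) * I2 - 1j * np.sin(delta) * X
drifted = op_on(Rx, 0) @ encoded

# Parity check on qubits 0 and 1 (the observable Z0*Z1): project onto the two outcomes.
ZZ = op_on(Z, 0) @ op_on(Z, 1)
P_even = (np.eye(8) + ZZ) / 2                 # outcome "same": no error flagged
P_odd  = (np.eye(8) - ZZ) / 2                 # outcome "different": error flagged

p_even = np.vdot(drifted, P_even @ drifted).real
p_odd  = np.vdot(drifted, P_odd  @ drifted).real
print(f"P(no error flagged)  = {p_even:.4f}   (= cos^2 delta = {np.cos(delta)**2:.4f})")
print(f"P(full flip flagged) = {p_odd:.4f}   (= sin^2 delta = {np.sin(delta)**2:.4f})")

# After an even-parity outcome the state is projected exactly back onto the code:
post_even = P_even @ drifted
post_even /= np.linalg.norm(post_even)
print(f"overlap with the original encoded state: {abs(np.vdot(encoded, post_even)):.4f}")
```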

In other words, performing QEC transforms small, continuous errors into infrequent but discrete errors, similar to the errors that arise in digital computers.

Researchers have now demonstrated many of the principles of QEC in the laboratory, from the basics of the repetition code through to complex encodings, logical operations on code words, and repeated cycles of measurement and correction. Current estimates of the break-even threshold for quantum hardware place it at about 1 error in 1,000 operations. This level of performance hasn't yet been achieved across all the constituent parts of a QEC scheme, but researchers are getting ever closer, achieving multiqubit logic with rates of fewer than about 5 errors per 1,000 operations. Even so, passing that critical milestone will be the beginning of the story, not the end.

On a system with a physical error rate just below the threshold, QEC would require enormous redundancy to push the logical error rate down very far. It becomes much less challenging with a physical error rate further below the threshold. So just crossing the error threshold is not sufficient; we need to beat it by a wide margin. How can that be done?
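A commonly quoted rule of thumb for the surface code makes the point quantitatively: the logical error rate falls roughly as A(p/p_th)^((d+1)/2), where p is the physical error rate, p_th the threshold, and d the code distance, with roughly 2d^2 physical qubits per logical qubit. The numbers below come from our own back-of-the-envelope sketch using that approximation; the prefactor, threshold, and target are assumed values, not measured ones.

```python
def distance_needed(p_phys, p_logical_target, p_th=1e-2, prefactor=0.1):
    """Smallest odd code distance d with prefactor*(p/p_th)**((d+1)/2) below the target.

    Rule-of-thumb surface-code scaling only; real numbers depend on the hardware,
    the decoder, and the precise noise model.
    """
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) / 2) > p_logical_target:
        d += 2
        if d > 2001:                     # essentially no suppression at this rate
            return None
    return d

target = 1e-12                           # logical error rate needed for long algorithms
for p in (8e-3, 1e-3):                   # just below threshold vs. 10x below
    d = distance_needed(p, target)
    if d is None:
        print(f"p = {p:.0e}: suppression too weak to reach {target:.0e}")
    else:
        print(f"p = {p:.0e}: distance {d}, roughly {2 * d * d} physical qubits per logical qubit")
```

Under these assumed numbers, barely crossing the threshold demands on the order of a hundred thousand physical qubits per logical qubit, while beating it tenfold brings that down to under a thousand.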

If we take a step back, we can see that the challenge of dealing with errors in quantum computers is one of stabilizing a dynamic system against external disturbances. Although the mathematical rules differ for the quantum system, this is a familiar problem in the discipline of control engineering. And just as control theory can help engineers build robots capable of righting themselves when they stumble, quantum-control engineering can suggest the best ways to implement abstract QEC codes on real physical hardware. Quantum control can minimize the effects of noise and make QEC practical.

In essence, quantum control involves optimizing how you implement all the physical processes used in QEC, from individual logic operations to the way measurements are performed. For example, in a system based on superconducting qubits, a qubit is flipped by irradiating it with a microwave pulse. One approach uses a simple type of pulse to move the qubit's state from one pole of the Bloch sphere, along the Greenwich meridian, to precisely the other pole. Errors arise if the pulse is distorted by noise. It turns out that a more complicated pulse, one that takes the qubit on a well-chosen meandering route from pole to pole, can result in less error in the qubit's final state under the same noise conditions, even when the new pulse is imperfectly implemented.
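We do not have the article's specific optimized pulses, so the sketch below substitutes a textbook composite-pulse sequence known as BB1 to illustrate the same principle: a longer, seemingly roundabout sequence of rotations lands much closer to the intended bit flip than a single plain pulse when every rotation is miscalibrated by the same fraction. All parameter choices here are our own.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(theta, phi):
    """Rotation by angle theta about the equatorial Bloch axis at azimuth phi."""
    axis = np.cos(phi) * X + np.sin(phi) * Y
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

def plain_pi(eps):
    """A single pi pulse whose rotation angle is off by a fraction eps."""
    return rot(np.pi * (1 + eps), 0.0)

def bb1_pi(eps):
    """BB1 composite pi pulse: theta_0, then pi_phi1, 2pi_3phi1, pi_phi1."""
    theta = np.pi
    phi1 = np.arccos(-theta / (4 * np.pi))
    seq = [(theta, 0.0), (np.pi, phi1), (2 * np.pi, 3 * phi1), (np.pi, phi1)]
    U = np.eye(2, dtype=complex)
    for ang, phi in seq:
        U = rot(ang * (1 + eps), phi) @ U     # every pulse shares the same error
    return U

target = rot(np.pi, 0.0)                       # the ideal bit flip

def infidelity(U):
    return 1 - abs(np.trace(target.conj().T @ U) / 2) ** 2

for eps in (0.01, 0.05, 0.10):
    print(f"amplitude error {eps:4.0%}:  plain pulse {infidelity(plain_pi(eps)):.2e}"
          f"   BB1 composite {infidelity(bb1_pi(eps)):.2e}")
```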

One facet of quantum-control engineering involves careful analysis and design of the best pulses for such tasks in a particular imperfect instance of a given system. It is a form of open-loop (measurement-free) control, which complements the closed-loop feedback control used in QEC.

This kind of open-loop control can also change the statistics of the physical-layer errors to better comport with the assumptions of QEC. For example, QEC performance is limited by the worst-case error within a logical block, and individual devices can vary a lot. Reducing that variability is very beneficial. In an experiment our team performed using IBM's publicly accessible machines, we showed that careful pulse optimization reduced the difference between the best-case and worst-case error in a small group of qubits by more than a factor of 10.

Some error processes arise only while carrying out complex algorithms. For instance, crosstalk errors occur on qubits only when their neighbors are being manipulated. Our team has shown that embedding quantum-control techniques into an algorithm can improve its overall success by orders of magnitude. This technique makes QEC protocols much more likely to correctly identify an error in a physical qubit.

For 25 years, QEC researchers have largely focused on mathematical strategies for encoding qubits and efficiently detecting errors in the encoded sets. Only recently have investigators begun to address the thorny question of how best to implement the full QEC feedback loop in real hardware. And while many areas of QEC technology are ripe for improvement, there is also growing awareness in the community that radical new approaches might be possible by marrying QEC and control theory. One way or another, this approach will turn quantum computing into a reality, and you can carve that in stone.

This article appears in the July 2022 print issue as "Quantum Error Correction at the Threshold."


IonQ and GE Research Demonstrate High Potential of Quantum Computing for Risk Aggregation – Business Wire

COLLEGE PARK, Md.--(BUSINESS WIRE)--IonQ (NYSE: IONQ), an industry leader in quantum computing, today announced promising early results with its partner, GE Research, to explore the benefits of quantum computing for modeling multi-variable distributions in risk management.

Leveraging a Quantum Circuit Born Machine-based framework on standardized, historical indexes, IonQ and GE Research, the central innovation hub for the General Electric Company (NYSE: GE), were able to effectively train quantum circuits to learn correlations among three and four indexes. The prediction derived from the quantum framework outperformed those of classical modeling approaches in some cases, confirming that quantum copulas can potentially lead to smarter data-driven analysis and decision-making across commercial applications. A blog post further explaining the research methodology and results is available here.

"Together with GE Research, IonQ is pushing the boundaries of what is currently possible to achieve with quantum computing," said Peter Chapman, CEO and President, IonQ. "While classical techniques face inefficiencies when multiple variables have to be modeled together with high precision, our joint effort has identified a new training strategy that may optimize quantum computing results even as systems scale. Tested on our industry-leading IonQ Aria system, we're excited to apply these new methodologies when tackling real world scenarios that were once deemed too complex to solve."

While classical techniques to form copulas using mathematical approximations are a great way to build multi-variate risk models, they face limitations when scaling. IonQ and GE Research successfully trained quantum copula models with up to four variables on IonQ's trapped-ion systems by using data from four representative stock indexes with easily accessible and varying market environments.

By studying the historical dependence structure among the returns of the four indexes during this timeframe, the research group trained its model to understand the underlying dynamics. Additionally, the newly presented methodology includes optimization techniques that potentially allow models to scale by mitigating local minima and vanishing gradient problems common in quantum machine learning practices. Such improvements demonstrate a promising way to perform multi-variable analysis faster and more accurately, which GE researchers hope will lead to new and better ways to assess risk in major manufacturing processes such as product design, factory operations, and supply chain management.

"As we have seen from recent global supply chain volatility, the world needs more effective methods and tools to manage risks where conditions can be so highly variable and interconnected to one another," said David Vernooy, a Senior Executive and Digital Technologies Leader at GE Research. "The early results we achieved in the financial use case with IonQ show the high potential of quantum computing to better understand and reduce the risks associated with these types of highly variable scenarios."

Today's results follow IonQ's recent announcement of the company's new IonQ Forte quantum computing system. The system features novel, cutting-edge optics technology that enables increased accuracy and further enhances IonQ's industry-leading system performance. Partnerships with the likes of GE Research and Hyundai Motors illustrate the growing interest in our industry-leading systems and feed into the continued success seen in Q1 2022.

About IonQ

IonQ, Inc. is a leader in quantum computing, with a proven track record of innovation and deployment. IonQ's current generation quantum computer, IonQ Forte, is the latest in a line of cutting-edge systems, including IonQ Aria, a system that boasts industry-leading 20 algorithmic qubits. Along with record performance, IonQ has defined what it believes is the best path forward to scale. IonQ is the only company with its quantum systems available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud, as well as through direct API access. IonQ was founded in 2015 by Christopher Monroe and Jungsang Kim based on 25 years of pioneering research. To learn more, visit http://www.ionq.com.

IonQ Forward-Looking Statements

This press release contains certain forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Some of the forward-looking statements can be identified by the use of forward-looking words. Statements that are not historical in nature, including the words "anticipate," "expect," "suggests," "plan," "believe," "intend," "estimates," "targets," "projects," "should," "could," "would," "may," "will," "forecast" and other similar expressions are intended to identify forward-looking statements. These statements include those related to IonQ's ability to further develop and advance its quantum computers and achieve scale; IonQ's ability to optimize quantum computing results even as systems scale; the expected launch of IonQ Forte for access by select developers, partners, and researchers in 2022 with broader customer access expected in 2023; IonQ's market opportunity and anticipated growth; and the commercial benefits to customers of using quantum computing solutions. Forward-looking statements are predictions, projections and other statements about future events that are based on current expectations and assumptions and, as a result, are subject to risks and uncertainties. Many factors could cause actual future events to differ materially from the forward-looking statements in this press release, including but not limited to: market adoption of quantum computing solutions and IonQ's products, services and solutions; the ability of IonQ to protect its intellectual property; changes in the competitive industries in which IonQ operates; changes in laws and regulations affecting IonQ's business; IonQ's ability to implement its business plans, forecasts and other expectations, and identify and realize additional partnerships and opportunities; and the risk of downturns in the market and the technology industry including, but not limited to, as a result of the COVID-19 pandemic. The foregoing list of factors is not exhaustive. You should carefully consider the foregoing factors and the other risks and uncertainties described in the "Risk Factors" section of IonQ's Quarterly Report on Form 10-Q for the quarter ended March 31, 2022 and other documents filed by IonQ from time to time with the Securities and Exchange Commission. These filings identify and address other important risks and uncertainties that could cause actual events and results to differ materially from those contained in the forward-looking statements. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and IonQ assumes no obligation and does not intend to update or revise these forward-looking statements, whether as a result of new information, future events, or otherwise. IonQ does not give any assurance that it will achieve its expectations.


Quantum computing will revolutionize every large industry – CTech

Israeli Team8 venture group officially opened this year's Cyber Week with an event that took place in Tel Aviv on Sunday. The event, which included international guests and cybersecurity professionals, showcased the country and the industry as a powerhouse in relation to Startup Nation.

Opening remarks were made by Niv Sultan, star of Apple TV's Tehran, who also moderated the event. She then welcomed Gili Drob-Heinstein, Executive Director at the Blavatnik Interdisciplinary Cyber Research Center (ICRC) at Tel Aviv University, and Nadav Zafrir, Co-founder of Team8 and Managing Partner of Team8 Platform, to the stage.

"I would like to thank the 100 CSOs who came to stay with us," Zafrir said on stage. Guests from around the world had flown into Israel and spent time connecting with one another ahead of the official start of Cyber Week on Monday. Team8 was also celebrating its 8th year as a VC, highlighting the work it has done in the cybersecurity arena.

The stage was then filled with Admiral Mike Rogers and Nir Minerbi, Co-founder and CEO of Classiq, who together discussed "The Quantum Opportunity" in computing. "Classical computers are great, but for some of the most complex challenges humanity is facing, they are not suitable," said Minerbi. "Quantum computing will revolutionize every large industry."

Classiq develops software for quantum algorithms. Founded in 2020, it has raised a total of $51 million and is funded by Team8, among other VC players in the space. Admiral Mike Rogers is the former director of the NSA, the American signals intelligence agency, and is an Operating Partner at Team8.

"We are in a race," Rogers told the large crowd. "This is a technology believed to have advantages for our daily lives and national security. I told both presidents I worked under why they should invest billions into quantum," he said, citing the ability to look at multiple qubits simultaneously and thus speed up the ability to process information. According to Rogers, governments have already publicly announced $29 billion of funding to help develop quantum computing.

Final remarks were made by Renee Wynn, former CIO at NASA, who discussed the potential of cyber in space. "Space may be the final frontier, and if we do not do anything else than what we are doing now, it will be chaos 100 miles above your head," she warned. On stage, she spoke to the audience about the threats in space and how satellites could be hijacked for nefarious reasons.

"Cybersecurity and satellites are so important," she concluded. "Let's bring the space teams together with the cybersecurity teams and help save lives."

After the remarks, the stage was then transformed to host the evening's entertainment. Israeli-American puppet band Red Band performed a variety of songs and was then joined by Marina Maximilian, an Israeli singer-songwriter and actress, who shared the stage with the colorful puppets.

The event was sponsored by Meitar, Deloitte, LeumiTech, Valley, Palo Alto, FinSec Innovation Lab, and SentinelOne. It marked the beginning of Cyber Week, a three-day conference hosted by Tel Aviv University that will welcome a variety of cybersecurity professionals for workshops, networking opportunities, and panel discussions. It is understood that this year will have 9,000 attendees, 400 speakers, and host people from 80 different countries.


Red Band performing 'Seven Nation Army'.

(Photo: James Spiro)


The Spooky Quantum Phenomenon You’ve Never Heard Of – Quanta Magazine

Perhaps the most famously weird feature of quantum mechanics is nonlocality: Measure one particle in an entangled pair whose partner is miles away, and the measurement seems to rip through the intervening space to instantaneously affect its partner. This "spooky action at a distance" (as Albert Einstein called it) has been the main focus of tests of quantum theory.

"Nonlocality is spectacular. I mean, it's like magic," said Adán Cabello, a physicist at the University of Seville in Spain.

But Cabello and others are interested in investigating a lesser-known but equally magical aspect of quantum mechanics: contextuality. Contextuality says that properties of particles, such as their position or polarization, exist only within the context of a measurement. Instead of thinking of particles' properties as having fixed values, consider them more like words in language, whose meanings can change depending on the context: "Time flies like an arrow. Fruit flies like bananas."

Although contextuality has lived in nonlocality's shadow for over 50 years, quantum physicists now consider it more of a hallmark feature of quantum systems than nonlocality is. A single particle, for instance, is a quantum system "in which you cannot even think about nonlocality," since the particle is only in one location, said Bárbara Amaral, a physicist at the University of São Paulo in Brazil. "So [contextuality] is more general in some sense, and I think this is important to really understand the power of quantum systems and to go deeper into why quantum theory is the way it is."

Researchers have also found tantalizing links between contextuality and problems that quantum computers can efficiently solve that ordinary computers cannot; investigating these links could help guide researchers in developing new quantum computing approaches and algorithms.

And with renewed theoretical interest comes a renewed experimental effort to prove that our world is indeed contextual. In February, Cabello, in collaboration with Kihwan Kim at Tsinghua University in Beijing, China, published a paper in which they claimed to have performed the first loophole-free experimental test of contextuality.

The Northern Irish physicist John Stewart Bell is widely credited with showing that quantum systems can be nonlocal. By comparing the outcomes of measurements of two entangled particles, he showed with his eponymous theorem of 1964 that the high degree of correlations between the particles can't possibly be explained in terms of local hidden variables defining each one's separate properties. The information contained in the entangled pair must be shared nonlocally between the particles.

Bell also proved a similar theorem about contextuality. He and, separately, Simon Kochen and Ernst Specker showed that it is impossible for a quantum system to have hidden variables that define the values of all their properties in all possible contexts.

In Kochen and Specker's version of the proof, they considered a single particle with a quantum property called spin, which has both a magnitude and a direction. Measuring the spin's magnitude along any direction always results in one of two outcomes: 1 or 0. The researchers then asked: Is it possible that the particle secretly knows what the result of every possible measurement will be before it is measured? In other words, could they assign a fixed value (a hidden variable) to all outcomes of all possible measurements at once?

Quantum theory says that the magnitudes of the spins along three perpendicular directions must obey the "1-0-1 rule": The outcomes of two of the measurements must be 1 and the other must be 0. Kochen and Specker used this rule to arrive at a contradiction. First, they assumed that each particle had a fixed, intrinsic value for each direction of spin. They then conducted a hypothetical spin measurement along some unique direction, assigning either 0 or 1 to the outcome. They then repeatedly rotated the direction of their hypothetical measurement and measured again, each time either freely assigning a value to the outcome or deducing what the value must be in order to satisfy the 1-0-1 rule together with directions they had previously considered.

They continued until, in the 117th direction, the contradiction cropped up. While they had previously assigned a value of 0 to the spin along this direction, the 1-0-1 rule was now dictating that the spin must be 1. The outcome of a measurement could not possibly return both 0 and 1. So the physicists concluded that there is no way a particle can have fixed hidden variables that remain the same regardless of context.
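The 1-0-1 rule itself is easy to check numerically. This sketch (our own, using the standard spin-1 matrices) verifies that the squared spin components along three perpendicular axes are compatible (they commute), that each can only return 0 or 1, and that they always sum to 2, so any joint measurement must yield two 1s and one 0.

```python
import numpy as np

s = 1 / np.sqrt(2)
Sx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Sz = np.diag([1, 0, -1]).astype(complex)

squares = [Sx @ Sx, Sy @ Sy, Sz @ Sz]

# The three squared components are compatible (they commute pairwise) ...
for A in squares:
    for B in squares:
        assert np.allclose(A @ B, B @ A)

# ... each one can only return 0 or 1 ...
for A in squares:
    assert np.allclose(sorted(np.linalg.eigvalsh(A).round(10)), [0, 1, 1])

# ... and they always sum to 2, so the outcomes must be 1, 0, 1 in some order.
assert np.allclose(sum(squares), 2 * np.eye(3))
print("Squared spin-1 components along x, y, z obey the 1-0-1 rule.")
```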

While the proof indicated that quantum theory demands contextuality, there was no way to actually demonstrate this through 117 simultaneous measurements of a single particle. Physicists have since devised more practical, experimentally implementable versions of the original Bell-Kochen-Specker theorem involving multiple entangled particles, where a particular measurement on one particle defines a context for the others.

In 2009, contextuality, a seemingly esoteric aspect of the underlying fabric of reality, got a direct application: One of the simplified versions of the original Bell-Kochen-Specker theorem was shown to be equivalent to a basic quantum computation.

The proof, named Mermin's star after its originator, David Mermin, considered various combinations of contextual measurements that could be made on three entangled quantum bits, or qubits. The logic of how earlier measurements shape the outcomes of later measurements has become the basis for an approach called measurement-based quantum computing. The discovery suggested that contextuality might be key to why quantum computers can solve certain problems faster than classical computers, an advantage that researchers have struggled mightily to understand.
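To give a flavor of the measurements involved, the sketch below (our own) checks one set of parity identities that appears in Mermin's construction: on the three-qubit GHZ state, the observable XXX returns +1 while XYY, YXY and YYX each return -1. No pre-assigned ±1 values for the individual X and Y outcomes can reproduce this, because multiplying the last three identities together would force XXX to be -1.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Three-qubit GHZ state (|000> + |111>) / sqrt(2).
ghz = np.zeros(8, dtype=complex)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)

observables = {
    "XXX": kron3(X, X, X),
    "XYY": kron3(X, Y, Y),
    "YXY": kron3(Y, X, Y),
    "YYX": kron3(Y, Y, X),
}

for name, op in observables.items():
    value = np.vdot(ghz, op @ ghz).real
    print(f"<{name}> = {value:+.0f}")   # +1 for XXX, -1 for the other three
```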

Robert Raussendorf, a physicist at the University of British Columbia and a pioneer of measurement-based quantum computing, showed that contextuality is necessary for a quantum computer to beat a classical computer at some tasks, but he doesn't think it's the whole story. Whether contextuality powers quantum computers is "probably not exactly the right question to ask," he said. "But we need to get there question by question. So we ask a question that we understand how to ask; we get an answer. We ask the next question."

Some researchers have suggested loopholes around Bell, Kochen and Specker's conclusion that the world is contextual. They argue that context-independent hidden variables haven't been conclusively ruled out.

In February, Cabello and Kim announced that they had closed every plausible loophole by performing a loophole-free Bell-Kochen-Specker experiment.

The experiment entailed measuring the spins of two entangled trapped ions in various directions, where the choice of measurement on one ion defined the context for the other ion. The physicists showed that, although making a measurement on one ion does not physically affect the other, it changes the context and hence the outcome of the second ions measurement.

Skeptics would ask: How can you be certain that the context created by the first measurement is what changed the second measurement outcome, rather than other conditions that might vary from experiment to experiment? Cabello and Kim closed this "sharpness" loophole by performing thousands of sets of measurements and showing that the outcomes don't change if the context doesn't. After ruling out this and other loopholes, they concluded that the only reasonable explanation for their results is contextuality.

Cabello and others think that these experiments could be used in the future to test the level of contextuality and hence, the power of quantum computing devices.

"If you want to really understand how the world is working," said Cabello, "you really need to go into the detail of quantum contextuality."


Global Quantum Computing Market is estimated to be US$ 4531.04 billion by 2030 with a CAGR of 28.2% during the forecast period – By PMI -…

Covina, June 22, 2022 (GLOBE NEWSWIRE) -- The discovery of potential COVID-19 therapeutics has a bright future due to quantum computing. New approaches to drug discovery are being investigated with funding from the Penn State Institute for Computational and Data Sciences, coordinated through the Penn State Huck Institutes of the Life Sciences. For businesses in the quantum computing market, these tendencies are turning into lucrative opportunities during the forecast period. Research initiatives that are assisting in the screening of billions of chemical compounds to uncover suitable medication candidates have been made possible by the convergence of machine learning and quantum physics. Stakeholders in the quantum computing business are expanding the availability of supercomputers and growing R&D in artificial intelligence (AI) to support these studies. The energy and electricity sector offers lucrative potential for businesses in the quantum computing market. With regard to assets, workovers, and infrastructure, this technology is assisting players in the energy and power sector in making crucial investment decisions. Budgetary considerations, resource constraints, and contractual commitments may all be factors in these issues that quantum computing can help to resolve.

Region Analysis:

North America is predicted to hold a large market share for quantum computing due to its early adoption of cutting-edge technology. Additionally, the existence of a competitive market and end-user acceptance of cutting-edge technology may promote market growth. Sales are anticipated to increase throughout Europe as a result of the rise of multiple startups, favourable legislative conditions, and the growing use of cloud technology. In addition, it is anticipated that business expansion by leading companies will accelerate market growth. The market is anticipated to grow in Asia Pacific as a result of the growing need for quantum computing solutions for simulation, optimization, and machine learning.

Key Highlights:

Before purchasing this report, request a sample or make an inquiry by clicking the following link:

https://www.prophecymarketinsights.com/market_insight/Insight/request-sample/571

Key Market Insights from the report:

Global Quantum Computing Market size accounted for US$ 387.3 billion in 2020 and is estimated to be US$ 4531.04 billion by 2030 and is anticipated to register a CAGR of 28.2%. The Global Quantum Computing Market is segmented based on component, application, end-user industry and region.

Competitive Landscape and Key Player Strategies in the Quantum Computing Market:

Key players in the global quantum computing market include Wave Systems Corp, 1QB Information Technologies Inc, QC Ware, Corp, Google Inc, QxBranch LLC, Microsoft Corporation, International Business Machines Corporation, Huawei Technologies Co., Ltd, ID Quantique SA, and Atos SE.

Scope of the Report:

Global Quantum Computing Market, By Component, 2019-2029, (US$ Mn)


Some Important Points Answered in this Market Report Are Given Below:

Browse Related Reports:

1.Photonic Integrated Circuit Market, By Integration (Monolithic Integration, Hybrid Integration, and Module Integration), By Raw Material (Gallium Arsenide, Indium Phosphide, Silica On Silicon, Silicon On Insulator, and Lithium Niobate), By Application (Optical Fiber Communication, Optical Fiber Sensors, Biomedical, and Quantum Computing), and By Region (North America, Europe, Asia-Pacific, Latin America, and Middle East & Africa) - Trends, Analysis, and Forecast till 2029

2.Edge Computing Market, By Component (Hardware, Services, Platform, and Solutions), By Application (Location Services, Analytics, Data Caching, Smart Cities, Environmental Monitoring, Optimized Local Content, Augmented Reality, Optimized Local Content, and Others), By End-User (Telecommunication & IT, Healthcare, Government & Public, Retail, Media & Entertainment, Transportation, Energy & Utilities, and Manufacturing), and By Region (North America, Europe, Asia Pacific, Latin America, and Middle East & Africa) - Trends, Analysis, and Forecast till 2029

3.Global 5G Technology Infrastructure Market, By Communication Infrastructure (Small Cell, Macro Cell, Radio Access Network, and Distributed Antenna System), By Network Technology (Software Defined Networking & Network Function Virtualization, Mobile Edge Computing, and Fog Computing), By Application (Automotive, Energy & Utilities, Healthcare, Retail, and Others), and By Region (North America, Europe, Asia Pacific, Latin America, and Middle East & Africa) - Trends, Analysis and Forecast till 2029


Alan Turing’s Everlasting Contributions to Computing, AI and Cryptography – NIST

An enigma machine on display outside the Alan Turing Institute entrance inside the British Library, London.

Credit: Shutterstock/William Barton

Suppose someone asked you to devise the most powerful computer possible. Alan Turing, whose reputation as a central figure in computer science and artificial intelligence has only grown since his untimely death in 1954, applied his genius to problems such as this one in an age before computers as we know them existed. His theoretical work on this problem and others remains a foundation of computing, AI and modern cryptographic standards, including those NIST recommends.

The road from devising the most powerful computer possible to cryptographic standards has a few twists and turns, as does Turing's brief life.

Alan Turing

Credit: National Portrait Gallery, London

In Turing's time, mathematicians debated whether it was possible to build a single, all-purpose machine that could solve all problems that are computable. For example, we can compute a car's most energy-efficient route to a destination, and (in principle) the most likely way in which a string of amino acids will fold into a three-dimensional protein. Another example of a computable problem, important to modern encryption, is whether or not bigger numbers can be expressed as the product of two smaller numbers. For example, 6 can be expressed as the product of 2 and 3, but 7 cannot be factored into smaller integers and is therefore a prime number.

Some prominent mathematicians proposed elaborate designs for universal computers that would operate by following very complicated mathematical rules. It seemed overwhelmingly difficult to build such machines. It took the genius of Turing to show that a very simple machine could in fact compute all that is computable.

His hypothetical device is now known as a Turing machine. The centerpiece of the machine is a strip of tape, divided into individual boxes. Each box contains a symbol (such as A, C, T, G for the letters of genetic code) or a blank space. The strip of tape is analogous to today's hard drives that store bits of data. Initially, the string of symbols on the tape corresponds to the input, containing the data for the problem to be solved. The string also serves as the memory of the computer. The Turing machine writes onto the tape data that it needs to access later in the computation.

Credit: NIST

The device reads an individual symbol on the tape and follows instructions on whether to change the symbol or leave it alone before moving to another symbol. The instructions depend on the current state of the machine. For example, if the machine needs to decide whether the tape contains the text string "TC", it can scan the tape in the forward direction while switching among the states "previous letter was T" and "previous letter was not T". If while in state "previous letter was T" it reads a C, it goes to a state "found it" and halts. If it encounters the blank symbol at the end of the input, it goes to the state "did not find it" and halts. Nowadays we would recognize the set of instructions as the machine's program.
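The "TC" example can be written out as a tiny simulator. The transition table below is our own rendering of the little scanner described above, with states named after the article's description.

```python
def contains_tc(tape):
    """Simulate the scanner described above on a tape of symbols."""
    BLANK = " "
    rules = {
        # (state, symbol read) -> (next state, symbol to write, head move)
        ("not T", "T"): ("was T", "T", +1),
        ("was T", "T"): ("was T", "T", +1),
        ("was T", "C"): ("found it", "C", 0),
        # any other symbol sends the machine back to "previous letter was not T"
    }
    state, head = "not T", 0
    tape = list(tape) + [BLANK]            # the blank marks the end of the input
    while state not in ("found it", "did not find it"):
        symbol = tape[head]
        if symbol == BLANK:
            state = "did not find it"
        elif (state, symbol) in rules:
            state, tape[head], move = rules[(state, symbol)]
            head += move
        else:
            state, head = "not T", head + 1
        # (each step reads one square, may rewrite it, then moves the head)
    return state == "found it"

print(contains_tc("GATTCA"))   # True: "TC" appears
print(contains_tc("GATTAG"))   # False
```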

It took some time, but eventually it became clear to everyone that Turing was right: The Turing machine could indeed compute all that seemed computable. No number of additions or extensions to this machine could extend its computing capability.

To understand what can be computed it is helpful to identify what cannot be computed. In a previous life as a university professor I had to teach programming a few times. Students often encounter the following problem: "My program has been running for a long time; is it stuck?" This is called the Halting Problem, and students often wondered why we simply couldn't detect infinite loops without actually getting stuck in them. It turns out a program to do this is an impossibility. Turing showed that there does not exist a machine that detects whether or not another machine halts. From this seminal result followed many other impossibility results. For example, logicians and philosophers had to abandon the dream of an automated way of detecting whether an assertion (such as whether there are infinitely many prime numbers) is true or false, as that is uncomputable. If you could do this, then you could solve the Halting Problem simply by asking whether the statement "this machine halts" is true or false.

Turing went on to make fundamental contributions to AI, theoretical biology and cryptography. His involvement with this last subject brought him honor and fame during World War II, when he played a very important role in adapting and extending cryptanalytic techniques invented by Polish mathematicians. This work broke the German Enigma machine encryption, making a significant contribution to the war effort.

Turing was gay. After the war, in 1952, the British government convicted him for having sex with a man. He stayed out of jail only by submitting to what is now called chemical castration. He died in 1954 at age 41 by cyanide poisoning, which was initially ruled a suicide but may have been an accident according to subsequent analysis. More than 50 years would pass before the British government apologized and pardoned him (after years of campaigning by scientists around the world). Today, the highest honor in computer science is called the Turing Award.

Turing's computability work provided the foundation for modern complexity theory. This theory tries to answer the question "Among those problems that can be solved by a computer, which ones can be solved efficiently?" Here, "efficiently" means not in billions of years but in milliseconds, seconds, hours or days, depending on the computational problem.

For example, much of the cryptography that currently safeguards our data and communications relies on the belief that certain problems, such as decomposing an integer number into its prime factors, cannot be solved before the Sun turns into a red giant and consumes the Earth (currently forecast for 4 billion to 5 billion years). NIST is responsible for cryptographic standards that are used throughout the world. We could not do this work without complexity theory.

Technology sometimes throws us a curve, such as the discovery that if a sufficiently big and reliable quantum computer is built it would be able to factor integers, thus breaking some of our cryptography. In this situation, NIST scientists must rely on the world's experts (many of them in-house) in order to update our standards. There are deep reasons to believe that quantum computers will not be able to break the cryptography that NIST is about to roll out. Among these reasons is that Turing's machine can simulate quantum computers. This implies that complexity theory gives us limits on what a powerful quantum computer can do.

But that is a topic for another day. For now, we can celebrate how Turing provided the keys to much of today's computing technology and even gave us hints on how to solve looming technological problems.


Quantum computing: D-Wave shows off prototype of its next quantum annealing computer – ZDNet

Image: Wacomka/Shutterstock

Quantum-computing outfit D-Wave has announced commercial access to an "experimental prototype" of its Advantage2 quantum annealing computer.

D-Wave is beating its own path to qubit processors with its quantum annealing approach. According to D-Wave, the Advantage2 prototype available today features over 500 qubits. It's a preview of a much larger Advantage2 system, with 7,000 qubits, that the company hopes to make available by 2024.

Access to the Advantage2 prototype is restricted to customers who have a D-Wave Leap cloud service subscription, but developers interested in trying D-Wave's quantum cloud can sign up to get "one minute of free use of the actual quantum processing units (QPUs) and quantum hybrid solvers" that run on its earlier Advantage QPU.

The Advantage2 prototype is built with D-Wave's Zephyr connection technology that it claims offers higher connectivity between qubits than its predecessor topology called Pegasus, which is used in its Advantage QPU.

D-Wave says the Zephyr design enables shorter chains in its Advantage2 quantum chips, which can make them friendlier for calculations that require extra precision.


"The Advantage2 prototype is designed to share what we're learning and gain feedback from the community as we continue to build towards the full Advantage2 system," says Emile Hoskinson, director of quantum annealing products at D-Wave.

"With Advantage2, we're pushing that envelope again demonstrating that connectivity and reduction in noise can be a delivery vehicle for even greater performance once the full system is available. The Advantage2 prototype is an opportunity for us to share our excitement and give a sneak peek into the future for customers bringing quantum into their applications."

While quantum computing is still experimental, senior execs are priming up for it as a business disruptor by 2030, according to a survey by consultancy EY. The firm found that 81% of senior UK executives expect quantum computing to play a significant role in their industry by 2030.

Fellow consultancy McKinsey this month noted funding for quantum technology startups doubled in the past two years, from $700 million in 2020 to $1.4 billion in 2021. McKinsey sees quantum computing shaking up pharmaceuticals, chemicals, automotive, and finance industries, enabling players to "capture nearly $700 billion in value as early as 2035" through improved simulation and better machine learning. It expects revenues from quantum computing to exceed $90 billion by 2040.

D-Wave's investors include PSP Investments, Goldman Sachs, BDC Capital, NEC Corp, Aegis Group Partners, and the CIA's VC firm, In-Q-Tel.


Quantum computing: Definition, facts & uses | Live Science

Quantum computing is a new generation of technology that involves a type of computer 158 million times faster than the most sophisticated supercomputer we have in the world today. It is a device so powerful that it could do in four minutes what it would take a traditional supercomputer 10,000 years to accomplish.

For decades, our computers have all been built around the same design. Whether it is the huge machines at NASA, or your laptop at home, they are all essentially just glorified calculators, but crucially they can only do one thing at a time.

The key to the way all computers work is that they process and store information made of binary digits called bits. These bits only have two possible values, a one or a zero. It is these numbers that create binary code, which a computer needs to read in order to carry out a specific task, according to the book Fundamentals of Computers.

Quantum theory is a branch of physics which deals in the tiny world of atoms and the smaller (subatomic) particles inside them, according to the journal Documenta Mathematica. When you delve into this minuscule world, the laws of physics are very different to what we see around us. For instance, quantum particles can exist in multiple states at the same time. This is known as superposition.

Instead of bits, quantum computers use something called quantum bits, 'qubits' for short. While a traditional bit can only be a one or a zero, a qubit can be a one, a zero or it can be both at the same time, according to a paper published from IEEE International Conference on Big Data.

This means that a quantum computer does not have to wait for one process to end before it can begin another, it can do them at the same time.

Imagine you had lots of doors which were all locked except for one, and you needed to find out which one was open. A traditional computer would keep trying each door, one after the other, until it found the one which was unlocked. It might take five minutes, it might take a million years, depending on how many doors there were. But a quantum computer could try all the doors at once. This is what makes them so much faster.

As well as superposition, quantum particles also exhibit another strange behaviour called entanglement which also makes this tech so potentially ground-breaking. When two quantum particles are entangled, they form a connection to each other no matter how far apart they are. When you alter one, the other responds the same way even if they're thousands of miles apart. Einstein called this particle property "spooky action at a distance", according to the journal Nature.
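A small simulation (our own, in NumPy) shows what those correlations look like for a pair of qubits prepared in the entangled state (|00⟩ + |11⟩)/√2: each qubit's own outcome looks like a fair coin flip, yet the two outcomes always agree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entangled pair: (|00> + |11>) / sqrt(2); amplitudes indexed by outcomes 00, 01, 10, 11.
amplitudes = np.array([1, 0, 0, 1]) / np.sqrt(2)
probabilities = np.abs(amplitudes) ** 2

shots = 10_000
outcomes = rng.choice(4, size=shots, p=probabilities)
first, second = outcomes // 2, outcomes % 2

print(f"qubit 1 measured 1 in {first.mean():.1%} of shots")             # ~50%, looks random
print(f"qubit 2 measured 1 in {second.mean():.1%} of shots")            # ~50%, looks random
print(f"outcomes agreed in   {(first == second).mean():.1%} of shots")  # 100%
```

This only shows the perfect correlation in one measurement basis; the genuinely nonclassical behaviour appears when the two sides compare results across different measurement settings.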

As well as speed, another advantage quantum computers have over traditional computers is size. Moore's Law holds that computing power doubles roughly every two years, according to the journal IEEE Annals of the History of Computing. But in order to enable this, engineers have to fit more and more transistors onto a circuit board. A transistor is like a microscopic light switch which can be either off or on. This is how a computer processes a zero or a one that you find in binary code.

To solve more complex problems, you need more of those transistors. But no matter how small you make them there's only so many you can fit onto a circuit board. So what does that mean? It means sooner or later, traditional computers are going to be as smart as we can possibly make them, according to the Young Scientists Journal. That is where quantum machines can change things.

The quest to build quantum computers has turned into something of a global race, with some of the biggest companies and indeed governments on the planet vying to push the technology ever further, prompting a rise in interest in quantum computing stocks on the money markets.

One example is the device created by D-Wave. The company has built the Advantage system, which it says is the first and only quantum computer designed for business use, according to a company press release.

D-Wave says the system has been designed with a new processor architecture offering over 5,000 qubits and 15-way qubit connectivity, which it claims enables companies to solve their largest and most complex business problems.

The firm claims the machine is the first and only quantum computer that lets customers develop and run real-world, in-production quantum applications at scale in the cloud. It says the Advantage is 30 times faster than its previous-generation system and delivers equal or better solutions 94% of the time.

But despite the huge theoretical computational power of quantum computers, there is no need to consign your old laptop to the wheelie bin just yet. Conventional computers will still have a role to play in any new era, and are far more suited to everyday tasks such as spreadsheets, emailing and word processing, according to Quantum Computing Inc. (QCI).

Where quantum computing could really bring about radical change though is in predictive analytics. Because a quantum computer can make analyses and predictions at breakneck speeds, it would be able to predict weather patterns and perform traffic modelling, things where there are millions if not billions of variables that are constantly changing.

Standard computers can do what they are told well enough if they are fed the right computer programme by a human. But when it comes to predicting things, they are not so smart. This is why the weather forecast is not always accurate. There are too many variables, too many things changing too quickly for any conventional computer to keep up.

Because of these limitations, there are some problems an ordinary computer may never be able to solve, or that might take it, quite literally, a billion years. That is not much good if you need a quick prediction or piece of analysis.

But a quantum computer is so fast, almost infinitely so, that it could respond to changing information quickly and examine a limitless number of outcomes and permutations simultaneously, according to research by Rigetti Computing.

Quantum computers are also relatively small because they do not rely on transistors like traditional machines. They also consume comparatively little power, meaning they could, in theory, be better for the environment.

You can read about how to get started in quantum computing in this article by Nature. To learn more about the future of quantum computing, you can watch this TED Talk by PhD student Jason Ball.

Read the original here:
Quantum computing: Definition, facts & uses | Live Science

Businesses brace for quantum computing disruption by end of decade – The Register

While business leaders expect quantum computing to play a significant role in industry by 2030, some experts don't believe the tech is going to be ready for production deployment in the near future.

The findings, from a survey titled "2022 Quantum Readiness" commissioned by consultancy EY, refer to UK businesses, although it is likely that the conclusions are equally applicable to global organizations.

According to EY, 81 percent of senior UK executives expect quantum computing to have a significant impact in their industry within seven and a half years, with almost half (48 percent) believing that quantum technology will begin to transform industries as soon as 2025.

The naysayers who say quantum tech won't be ready for live deployment any time soon can point to the industry's hype problem, with capabilities being exaggerated and even accusations of alleged falsification flying around, as in the case of quantum startup IonQ, which was recently accused by Scorpion Capital of misleading investors about the effectiveness of its quantum hardware.

Joseph Reger, a Fujitsu Fellow, CTO for Central and Eastern Europe at Fujitsu, and a member of the World Economic Forum's Quantum Computing Council, told The Register he is getting some "heat" for saying quantum is not nearly a thing yet.

"There are impressive advantages that pre-quantum or quantum-inspired technologies provide. They are less sexy, but very powerful."

He added: "Some companies are exaggerating the time scales. If quantum computing gets overhyped, we are likely to face the first quantum winter."

Fujitsu is itself developing quantum systems, and announced earlier this year that it was working to integrate quantum computing with traditional HPC technology. The company also unveiled a high performance quantum simulator based on its PRIMEHPC FX 700 systems that it said will serve as an important bridge towards the development of quantum computing applications in future.

Meanwhile, EY claims that respondents were "almost unanimous" in their belief that quantum computing will create a moderate or high level of disruption for their own organization, industry sector, and the broader economy in the next five years.

Despite this, the survey finds that strategic planning for quantum computing is still at an embryonic stage for most organizations, with only 33 percent involved in strategic planning for how quantum will affect them, and only a quarter having appointed specialist leaders or set up pilot teams.

The survey, conducted in February and March 2022, covered 501 UK-based executives, all with senior roles in their organisations, who had to demonstrate at least a moderate (but preferably a high) level of understanding of quantum computing. EY said it originally approached 1,516 executives, but only 501 met this requirement, which in and of itself tells a tale.

EY's Quantum Computing Leader, Piers Clinton-Tarestad, said the survey reveals a disconnect between the pace at which some industry leaders expect quantum to start affecting business and their preparedness for those impacts.

"Maximizing the potential of quantum technologies will require early planning to build responsive and adaptable organisational capabilities," he said, adding that this is a challenge because the progress of quantum has accelerated, but it is "not following a steady trajectory."

For example, companies with quantum processors have increased the power of their hardware dramatically over the past several years, from just a handful of qubits to over a hundred in the case of IBM, which expects to deliver a 4,158-qubit system by 2025. Yet despite these advances, quantum computers remain a curiosity, with most operational systems deployed in research laboratories or made available via a cloud service for developers to experiment with.

Clinton-Tarestad said "quantum readiness" is "not so much a gap to be assessed as a road to be walked," with the next steps in the process being regularly revisited as the landscape evolves. He warned that businesses expecting to see disruption in their industry within the next three to five years need to act now.

According to EY's report, executives in consumer and retail markets are those most likely to believe that quantum will play a significant role by 2025, with just over half of technology, media and telecommunications (TMT) executives expecting an impact within the same time frame. Most respondents among health and life sciences companies think this is more likely to happen later, between 2026 and 2035.

Most organizations surveyed expect to start their quantum preparations within the next two years, with 72 percent aiming to start by 2024.

However, only a quarter of organizations have got as far as recruiting people with the necessary skills to lead quantum computing efforts, although 68 percent said they are aiming to set up pilot teams to explore the potential of quantum for their business by 2024.

Fear of falling behind rivals that are developing their own quantum capabilities is driving some respondents to start quantum projects. The applications industry leaders anticipate would mainly advance operations involving AI and machine learning, especially among financial services, automotive and manufacturing companies. TMT respondents cited potential applications in cryptography and encryption as the most likely use of quantum computing.

While the EY report warns about companies potentially losing out to rivals on the benefits of quantum computing, there are also dangers that organizations should be preparing for now, as Intel warned during its Intel Vision conference last month.

One of these is that quantum computers could be used to break current cryptographic algorithms, meaning that the confidentiality of both personal and enterprise data could be at risk. This is not a far-off threat, but something that organizations need to consider right now, according to Sridhar Iyengar, VP of Intel Labs and Director of Security and Privacy Research.

"Adversaries could be harvesting encrypted data right now, so that they can decrypt it later when quantum computers are available. This could be sensitive data, such as your social security number or health records, which are required to be protected for a long period of time," Iyengar told us.

Organizations may want to address threats like this by taking steps such as evaluating post-quantum cryptography algorithms and increasing the key sizes for current crypto algorithms like AES.
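As a small illustration of the symmetric-key half of that advice, the sketch below uses the widely available Python cryptography package to encrypt data under a 256-bit AES-GCM key. The sample plaintext, nonce handling and associated data are illustrative only, and migrating to post-quantum public-key algorithms is a separate (and larger) exercise.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Grover's algorithm roughly halves the effective strength of a symmetric
# key, so AES-256 rather than AES-128 is the usual hedge against
# "harvest now, decrypt later" attacks on symmetric encryption.
key = AESGCM.generate_key(bit_length=256)     # 256-bit key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                        # 96-bit nonce; never reuse with a key
ciphertext = aesgcm.encrypt(nonce, b"patient health record", b"record-42")
assert aesgcm.decrypt(nonce, ciphertext, b"record-42") == b"patient health record"
```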

Or organizations may simply decide to adopt a wait-and-see attitude. EY will no doubt be on hand to sell consultancy services to help clarify their thinking.

Read the original post:
Businesses brace for quantum computing disruption by end of decade - The Register

McKinsey thinks quantum computing could create $80b in revenue … eventually – The Register

In the hype-tastic world of quantum computing, consulting giant McKinsey & Company claims that the still-nascent field has the potential to create $80 billion in new revenue for businesses across industries.

It's a claim McKinsey has repeated nearly two dozen times on Twitter since March to promote its growing collection of research diving into various aspects of quantum computing, from startup and government funding to use cases and its potential impact on a range of industries.

The consulting giant believes this $80 billion figure represents the "value at stake" for quantum computing players but not the actual value that use cases could create [PDF]. This includes companies working in all aspects of quantum computing, from component makers to service providers.

Despite wildly optimistic numbers, McKinsey does ground the report in a few practical realities. For instance, in a Wednesday report, the firm says the hardware for quantum systems "remains too immature to enable a significant number of use cases," which, in turn, limits the "opportunities for fledgling software players." The authors add that this is likely one of the reasons why the rate of new quantum startups entering the market has begun to slow.

Even the top of McKinsey's page for quantum computing admits that capable systems won't be ready until 2030, which is in line with what various industry players, including Intel, are expecting. Like fusion, it's always a decade or so away.

McKinsey, like every company trying to work out whether quantum computing has any real-world value, is walking a fine line: exploring the possibilities of quantum computing while showing the ways the tech is still disconnected from ordinary enterprise reality.

"While quantum computing promises to help businesses solve problems that are beyond the reach and speed of conventional high-performance computers, use cases are largely experimental and hypothetical at this early stage. Indeed, experts are still debating the most foundational topics for the field," McKinsey wrote in a December 2021 article about how use cases "are getting real."

One could argue the report is something of a metaphor for the quantum industry in 2022: wild optimism about future ecosystem profitability without really understanding what the tech will mean, for whom, and at what scale.

Go here to see the original:
McKinsey thinks quantum computing could create $80b in revenue ... eventually - The Register

Quantum computing can solve EVs' safety woes – Times of India

Recent incidents of electric vehicles (EVs) catching fire have shocked the Indian ecosystem and hindered the broad adoption of these vehicles. Before March of this year, there had been a substantial rise in the demand for electric vehicles and rapid advances in innovation and technology. Improvements in battery technology, through increased efficiency and range, have made EVs more accessible to the mass public, and the sector is currently dominated by two-wheelers and three-wheelers in India. According to Mordor Intelligence, India's electric vehicle market was valued at $1.4 trillion in 2021, and it is expected to reach $15.4 trillion by 2027, recording a CAGR of 47.09% over the forecast period (2022-2027). Since March, the challenge in EVs has shifted from affordability, charging and range anxiety to safety. Safety is of prime importance, and an EV catching fire has had dire, sometimes fatal, consequences.

The question is, why is this happening?

A report by the Defence Research and Development Organisation's (DRDO) Centre for Fire, Explosive and Environment Safety points to the EV batteries. The issues highlighted include poor-quality cells, a lack of fuses, and problems with thermal management and the battery management system (BMS).

These issues cause the batteries to experience thermal runaway, leading to the fires. The phenomenon occurs when an increase in temperature changes conditions in a way that causes a further increase in temperature, often with destructive results. The issues highlighted in the DRDO report are all potential causes of thermal runaway. Let's explain why.

Local atmospheric temperature directly affects the operating temperature of the battery. For efficient performance, a battery's operating temperature should be around 20-35°C. To keep the battery in this range, EVs need a battery thermal management system (BTMS). With temperatures in our cities rising, these systems are being challenged, and a poor thermal management system can let a battery drift into thermal runaway.
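A toy lumped heat-balance model makes the feedback loop concrete. This is a minimal sketch with made-up parameters (the thermal mass, self-heating curve and cooling coefficients below are assumptions, not measured battery data): heat generation grows with temperature, cooling is proportional to the gap to the ambient air, and once generation outpaces cooling the temperature climbs without limit.

```python
import numpy as np

def simulate(h_cooling, T_ambient=45.0, T_start=30.0, hours=2.0, dt=1.0):
    """Toy lumped heat balance: C * dT/dt = Q_gen(T) - h * (T - T_ambient).
    All parameters are illustrative, not real battery data."""
    C = 2000.0            # J/K, thermal mass of the pack (assumed)
    Q0, k = 20.0, 0.06    # W and 1/K, Arrhenius-like self-heating (assumed)
    T = T_start
    for _ in range(int(hours * 3600 / dt)):
        Q_gen = Q0 * np.exp(k * (T - 25.0))        # self-heating grows with T
        T += (Q_gen - h_cooling * (T - T_ambient)) / C * dt
        if T > 200.0:                              # venting / runaway threshold
            return "thermal runaway"
    return f"stabilises near {T:.0f} C"

print(simulate(h_cooling=15.0))  # adequate BTMS on a hot day: settles near ~52 C
print(simulate(h_cooling=2.0))   # weak cooling: crosses the runaway threshold
```

Real battery models are far richer (electrochemistry, cell-to-cell propagation, venting), but even this caricature shows why weak cooling combined with no fail-safe is a dangerous mix.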

Another possible cause of thermal runaway is rapid battery charging. As battery technology evolves, charging technology is advancing too. While fast charging greatly improves the convenience of EVs, it increases the risks related to the battery. Fast charging an EV can overheat the battery system enough to melt electrical wiring and cause short circuits, with explosive consequences, as several charging-related incidents have already shown.

While hot weather and an inadequate battery thermal management system can hurt performance and shorten battery life, they alone need not end in fire. As the DRDO report notes, an inefficient fuse, or the absence of one altogether, removes the fail-safe mechanism that would otherwise interrupt a thermal runaway.

The causes of thermal runaway highlighted above come down to either inefficient design or insufficient testing by EV manufacturers. But manufacturers cannot spend more time on testing because of time-to-market constraints.

What's the solution?

As stated, design and testing are crucial phases of any product's manufacture. Since the era of Industry 4.0, design and testing have moved to the digital realm and are carried out on large-scale, powerful computers through what are called engineering simulations (referred to simply as simulations hereafter). Simulations come in several types: thermal (studying the effect of heat and temperature on an object), structural (studying an object's strength, stress and failure), fluid (studying flow in and around an object), and electrochemical (studying the interplay of chemistry and electricity). Thermal runaway is a complex engineering problem entailing all of these. With the right tools, simulations can mimic every relevant physical condition, such as rising temperature, fast charging or fuse placement, and pinpoint problem areas. Once those are identified, simulations can also be used to test different solutions and so avoid thermal runaway altogether.

The question then becomes: why are we seeing this news at all?

The biggest issue EV manufacturers have with performing numerous simulations is the time they take. Running a series of simulations to obtain results with minimal flaws and defects (high-accuracy simulations) can take months. Manufacturers cannot afford this, as it greatly hampers time to market. So companies opt for simulations that provide solutions with more flaws and defects (low-accuracy simulations), which can lead to large mishaps such as EV explosions and system failures that put human lives at risk. And even when companies do find the time to perform high-accuracy simulations, the cost they incur is very high because of the need for supercomputers, whether on-premises (setup and maintenance costs) or in the cloud (long compute times).

So the real issue is a computing technology bottleneck. This is where quantum computers, the next generation of computing technology, can step in and revolutionize industries such as EV and battery design. The new technology is far more powerful, promising exponential gains for these industries.

Prospect of Quantum-powered simulations

The power of quantum computers is showcased by their ability to perform the same simulations in much less time than classical supercomputers. Hence, this technology can significantly help EV manufacturers shorten their time to market.

Moreover, the ability to obtain high accuracy from simulations is vital to using them in the product development process. Where high-accuracy simulations were once prohibitively slow, quantum-powered simulations could let manufacturers run accurate simulations in a reasonable time, hours instead of months. The added accuracy will not only help companies create more efficient designs and improve the reliability of their vehicles, but also help save something invaluable: lives. In addition, the speedup from quantum computation means less compute time overall, decreasing cost and making the approach affordable for EV manufacturers.

What's next?

In the computing sphere, quantum computing is the revolutionary technology, changing our understanding of computation and showing tremendous potential across various use cases. While the prospect of quantum-powered simulations promises better, faster and cheaper results, development is very challenging because quantum computers work in entirely different ways.

The good news is that companies are already developing and building quantum-powered simulation software that can solve the problems of thermal runaway and BTMS optimization. Quantum computing is here and now!

Views expressed above are the author's own.

END OF ARTICLE

The rest is here:
Quantum computing can solve EVs' safety woes - Times of India

How we learned to break down barriers to machine learning – Ars Technica

Dr. Sephus discusses breaking down barriers to machine learning at Ars Frontiers 2022. Click here for transcript.

Welcome to the week after Ars Frontiers! This article is the first in a short series of pieces that will recap each of the day's talks for the benefit of those who weren't able to travel to DC for our first conference. We'll be running one of these every few days for the next couple of weeks, and each one will include an embedded video of the talk (along with a transcript).

For today's recap, we're going over our talk with Amazon Web Services tech evangelist Dr. Nashlie Sephus. Our discussion was titled "Breaking Barriers to Machine Learning."

Dr. Sephus came to AWS via a roundabout path, growing up in Mississippi before eventually joining a tech startup called Partpic. Partpic was an artificial intelligence and machine-learning (AI/ML) company with a neat premise: Users could take photographs of tooling and parts, and the Partpic app would algorithmically analyze the pictures, identify the part, and provide information on what the part was and where to buy more of it. Partpic was acquired by Amazon in 2016, and Dr. Sephus took her machine-learning skills to AWS.

When asked, she identified access as the biggest barrier to the greater use of AI/ML; in a lot of ways, it's another wrinkle in the old problem of the digital divide. A core component of being able to utilize most common AI/ML tools is having reliable and fast Internet access, and drawing on experience from her background, Dr. Sephus pointed out that a lack of access to technology in primary schools in poorer areas of the country sets kids on a path away from being able to use the kinds of tools we're talking about.

Furthermore, lack of early access leads to resistance to technology later in life. "You're talking about a concept that a lot of people think is pretty intimidating," she explained. "A lot of people are scared. They feel threatened by the technology."

One way of tackling the divide here, in addition to simply increasing access, is changing the way that technologists communicate about complex topics like AI/ML to regular folks. "I understand that, as technologists, a lot of times we just like to build cool stuff, right?" Dr. Sephus said. "We're not thinking about the longer-term impact, but that's why it's so important to have that diversity of thought at the table and those different perspectives."

Dr. Sephus said that AWS has been hiring sociologists and psychologists to join its tech teams to figure out ways to tackle the digital divide by meeting people where they are rather than forcing them to come to the technology.

Simply reframing complex AI/ML topics in terms of everyday actions can remove barriers. Dr. Sephus explained that one way of doing this is to point out that almost everyone has a cell phone, and when you're talking to your phone or using facial recognition to unlock it, or when you're getting recommendations for a movie or for the next song to listen to, these things are all examples of interacting with machine learning. Not everyone groks that, especially technological laypersons, and showing people that these things are driven by AI/ML can be revelatory.

"Meeting them where they are, showing them how these technologies affect them in their everyday lives, and having programming out there in a way that's very approachableI think that's something we should focus on," she said.

Continued here:
How we learned to break down barriers to machine learning - Ars Technica

Keeping water on the radar: Machine learning to aid in essential water cycle measurement – CU Boulder Today

Department of Computer Science assistant professor Chris Heckman and CIRES research hydrologist Toby Minear have been awarded a Grand Challenge Research & Innovation Seed Grant to create an instrument that could revolutionize our understanding of the amount of water in our rivers, lakes, wetlands and coastal areas by greatly increasing the places where we measure it.

The new low-cost instrument would use radar and machine learning to quickly and safely measure water levels in a variety of scenarios.

This work could prove vital as the USDA recently proclaimed the entire state of Colorado to be a "primary natural disaster area" due to an ongoing drought that has made the American West potentially the driest it has been in over a millennium. Other climate records across the globe also continue to be broken, year after year. Our understanding of the changing water cycle has never been more essential at a local, national and global level.

A fundamental part of developing this understanding is knowing how the surface height of bodies of water changes. Currently, measuring changing water surface levels involves high-cost sensors that are easily damaged by floods, difficult to install and time-consuming to maintain.

"One of the big issues is that we have limited locations where we take measurements of surface water heights," Minear said.

Heckman and Minear are aiming to change this by building a low-cost instrument that doesn't need to be in a body of water to read its average water surface level. It can instead be placed several meters away, safely elevated above floodwaters.

The instrument, roughly the size of two credit cards stacked on one another, relies on high-frequency radio waves, often referred to as "millimeter wave", which have only been made commercially accessible in the last decade.

Through radar, these short waves can be used to measure the distance between the sensor and the surface of a body of water with great precision. As the water's surface level rises or falls over time, the distance between the sensor and the water's surface changes.
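The underlying arithmetic is the standard linear-FMCW range relation, R = c * f_beat * T_chirp / (2 * B). The sketch below uses assumed chirp parameters (typical of commercial 77 GHz millimeter-wave chips, not the team's actual configuration) to show how a small shift in the measured beat frequency translates into a millimeter-scale change in range.

```python
C_LIGHT = 299_792_458.0  # speed of light in m/s

def fmcw_range(beat_freq_hz, chirp_duration_s, bandwidth_hz):
    """Range from the beat frequency of a linear FMCW radar chirp:
    R = c * f_beat * T_chirp / (2 * B)."""
    return C_LIGHT * beat_freq_hz * chirp_duration_s / (2.0 * bandwidth_hz)

# Assumed chirp: 4 GHz of bandwidth swept over 100 microseconds.
r_before = fmcw_range(2_667_000, 100e-6, 4e9)   # ~9.994 m to the water surface
r_after = fmcw_range(2_666_000, 100e-6, 4e9)    # ~9.991 m after the level rises
print(round((r_before - r_after) * 1000, 1), "mm")   # ~3.7 mm change in level
```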

The instrument's small form-factor and potential off-the-shelf usability separate it from previous efforts to identify water through radar.

It also streamlines data transmitted over often limited and expensive cellular and satellite networks, lowering the cost.

In addition, the instrument will use machine learning to determine whether a change in measurements could be a temporary outlier, like a bird swimming by, and whether or not a surface is liquid water.

Machine learning is a form of data analysis that seeks to identify patterns from data to make decisions with little human intervention.
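The article does not spell out the team's model, so as a purely illustrative stand-in, here is a robust rolling-median filter in Python that captures the basic idea: readings that deviate sharply from their neighbours (a bird paddling under the sensor, say) are flagged so they can be excluded before a level change is reported.

```python
import numpy as np

def flag_outliers(levels_m, window=15, threshold=4.0):
    """Flag readings that deviate sharply from the local median.
    Illustrative stand-in for the instrument's learned outlier model."""
    levels = np.asarray(levels_m, dtype=float)
    flags = np.zeros(levels.size, dtype=bool)
    for i in range(levels.size):
        lo, hi = max(0, i - window), min(levels.size, i + window + 1)
        local = levels[lo:hi]
        median = np.median(local)
        mad = np.median(np.abs(local - median)) + 1e-6   # robust spread estimate
        flags[i] = abs(levels[i] - median) > threshold * mad
    return flags

# A slowly rising river with one spurious reading (a bird at sample 50).
readings = 2.00 + 0.001 * np.arange(100)
readings[50] += 0.35
print(np.where(flag_outliers(readings))[0])   # [50]
```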

While traditionally radar has been used to detect solid objects, liquids require different considerations to avoid being misidentified. Heckman believes that traditional ways of processing radar may not be enough to measure liquid surfaces at such close proximity.

"We're considering moving further up the radar processing chain and reconsidering how some of these algorithms have been developed in light of new techniques in this kind of signal processing," Heckman said.

In addition to possible fundamental shifts in radar processing, the project could empower communities of citizen scientists, according to Minear.

"Right now, many of the systems that we use need an expert installer. Our idea is to internalize some of those expert decisions, which takes out a lot of the cost and makes this instrument more friendly to a citizen science approach," he said.

By lowering the barrier of entry to water surface level measurement through low-cost devices with smaller data requirements, the researchers broaden opportunities for communities, even in areas with limited cellular networks, to measure their own water sources.

The team is also committing to open-source principles to ensure that anyone can use and build on the technology, allowing for new innovations to happen more quickly and democratically.

Minear, who is a Science Team and Cal/Val Team member for the upcoming NASA Surface Water and Ocean Topography (SWOT) Mission, also hopes that the new instrument could help check the accuracy of water surface level measurements made by satellites.

These sensors could also give local, regional and national communities more insight into their water usage and supply over time and could be used to help make evidence-informed policy decisions about water rights and usage.

"I'm very excited about the opportunities that are presented by getting data in places that we don't currently get it. I anticipate that this could give us better insight into what is happening with our water sources, even in our backyard," said Heckman.

More here:
Keeping water on the radar: Machine learning to aid in essential water cycle measurement - CU Boulder Today

The role of AI and machine learning in revolutionizing clinical research – MedCity News

Advanced technologies such as artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) have become a cornerstone of successful modern clinical trials, integrated into many of the technologies enabling the transformation of clinical development.

The health and life sciences industry's dramatic leap forward into the digital age in recent years has been a game-changer, with innovations and scientific breakthroughs that are improving patient outcomes and population health. Consequently, embracing digital transformation is no longer an option but an industry standard. Let's explore what that truly means for clinical development.

An accelerated path to better results

Over the years, technology has equipped clinical leaders to successfully reduce costs while accelerating stages of research and development. These technologies have aided in structuring complex data environments, a need created by the exponential growth in data sources containing valuable information for clinical research.

Today, the volume, variety and velocity of structured and unstructured data generated by clinical trials are outpacing traditional data management processes. The reality is that there is simply too much data coming from too many sources to be manageable by human teams alone. In response, AI/ML technologies have in recent years shown remarkable potential to automate data standardization while ensuring quality control, easing the burden on researchers and requiring minimal manual intervention.

Once data collection and streamlining are consolidated within a single automated ecosystem, clinical trial leaders begin to benefit from faster and smarter insights driven by machine analysis. These include predictive and prescriptive insights that can help researchers and sites uncover best practices for future processes. Altogether, these capabilities can improve research outcomes and patients' experience and safety.

A look into compliance and privacy

When we think about the use of patient data, privacy and compliance adherence must be a consideration. The bar is set high for any technology being implemented into clinical trial execution.

Efforts must adhere to Good Clinical Practice (GCP) and validation requirements that ensure an outcome is valid because it is predictable and repeatable. Additionally, there must be transparency and explainability around how any AI algorithm makes decisions, to prove correctness and the avoidance of any potential bias. This is becoming more essential than ever from a compliance perspective as regulators look at algorithms as part of what they base their approvals on.

Keeping the h(uman) in healthcare

The goal of implementing AI/ML in clinical research is not to replace humans with digital tools but to increase their productivity through high-efficiency human augmentation and the automation of mundane tasks. Before the application of advanced technologies to clinical trials, there was an unmet need for an agile methodology where researchers and organizers could solely focus on critical requirements and the delivery of results.

The intelligent application of technology allows humans to interact with AI models to bring better outcomes to research, and even in its most advanced stage, data science technology never replaces the human data scientist. It does, however, provide a mutually beneficial circumstance wherein the augmentation of workflows lets data scientists ease the data burden while AI models improve through human feedback. This loop of continuous model improvement is typically operationalized through Continuous Integration/Continuous Delivery (CI/CD) practices.

The integration of human capacity and technology results in accelerated efficiency, improved compliance and superb patient personalization. Furthermore, regardless of how efficient algorithms become, the decision-making power will always belong to humans.

Envisioning a bold future

AI/ML strategies are redefining the clinical development cycle like never before, and as the industry leaps into new frontiers, digital transformation is leading the way to incredible advancements that will revolutionize the space forever. Leaders today have the opportunity to apply advanced technologies to solve historically complicated problems in the field.

Already, we've seen better site selection, more effective risk-based quality management, improved patient monitoring and safety, enhanced patient recruitment and engagement, and improved overall study quality, and this is just the beginning.

Photo: Blue Planet Studio, Getty Images

Continue reading here:
The role of AI and machine learning in revolutionizing clinical research - MedCity News

Link Machine Learning (LML) has a Neutral Sentiment Score, is Rising, and Outperforming the Crypto Market Sunday: What’s Next? – InvestorsObserver

Link Machine Learning (LML) gets a neutral rating from InvestorsObserver Sunday. The token is up 82.14% to $0.004303122367 while the broader crypto market is up 2.46%.

The Sentiment Score provides a quick, short-term look at the crypto's recent performance. This can be useful for both short-term investors looking to ride a rally and longer-term investors trying to buy the dip.

Link Machine Learning's price is currently above resistance. With support set around $0.00162854200285358 and resistance at $0.00377953835819346, Link Machine Learning is potentially in a volatile position if the rally burns out.
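For readers unfamiliar with the jargon, "above resistance" simply means the current price sits above the top of its recent trading band. A trivial Python sketch using the figures quoted above (purely illustrative, not InvestorsObserver's actual scoring method):

```python
def price_position(price, support, resistance):
    """Classify where a price sits relative to its support/resistance band."""
    if price > resistance:
        return "above resistance"
    if price < support:
        return "below support"
    return "inside the band"

# Figures quoted in the report:
print(price_position(0.004303122367,
                     support=0.00162854200285358,
                     resistance=0.00377953835819346))  # above resistance
```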

Link Machine Learning has traded on low volume recently. This means that today's volume is below its average volume over the past seven days.

Due to a lack of data, this crypto may be less suitable for some investors.

Link:
Link Machine Learning (LML) has a Neutral Sentiment Score, is Rising, and Outperforming the Crypto Market Sunday: What's Next? - InvestorsObserver

AiM Future Joins the Edge AI and Vision Alliance – AiThority

AiM Future, a leader in embedded machine learning intellectual property (IP) for edge computing devices, announced it has joined the Edge AI and Vision Alliance.

AiM Future is accelerating the transition from centralized cloud-native AI to the distributed intelligent edge. Its market-proven NeuroMosAIc Processor (NMP) family of machine learning hardware accelerators and software, NeuroMosAIc Studio, enables the efficient execution of deep learning models common to computer vision applications. Shipping in smart home devices since 2019, the co-designed hardware and software offer a highly flexible and scalable solution meeting end-application performance, power, and cost requirements from always-on, battery-operated cameras to high-performance edge infrastructure.

"It is our company's pleasure to join the Edge AI and Vision Alliance," said ChangSoo Kim, founder and CEO of AiM Future. "As a premier organization for technology innovators revolutionizing artificial intelligence across the edge computing spectrum, the partnership is a natural fit. It is clear AiM Future's vision of bringing the impossible to reality is shared by the Alliance and its ecosystem. The field of edge AI is rapidly advancing, and partnerships are fundamental to addressing the many challenges and limitations of today's edge devices."

"Today, more and more devices and systems are gaining the ability to see and understand their environments," said Jeff Bier, founder of the Edge AI and Vision Alliance. "And thanks to visual perception, these machines are becoming more autonomous, safer, more capable and easier to use. With its processing architecture and accompanying toolset, AiM Future is implementing an intriguing approach to deep learning inference acceleration. We welcome AiM Future as one of the Edge AI and Vision Alliance's newest members and look forward to their participation at the Embedded Vision Summit."

Read more:
AiM Future Joins the Edge AI and Vision Alliance - AiThority

NSF award will boost UAB research in machine-learning-enabled plasma synthesis of novel materials – University of Alabama at Birmingham

The $20 million National Science Foundation award will help UAB and eight other Alabama-based universities build research infrastructure. UAB's share will be about $2 million.

Yogesh Vohra, Ph.D., is a co-principal investigator on a National Science Foundation award that will bring the University of Alabama at Birmingham about $2 million over five years.

The total NSF EPSCoR Research Infrastructure Improvement Program award of $20 million, whose principal investigator Gary Zank, Ph.D., is based at the University of Alabama in Huntsville, will help strengthen research infrastructure at UAB, UAH, Auburn University, Tuskegee University, the University of South Alabama, Alabama A&M University, Alabama State University, Oakwood University, and the University of Alabama.

The award, "Future technologies and enabling plasma processes," or FTPP, aims to develop new technologies using plasma in hard and soft biomaterials, food safety and sterilization, and space weather prediction. This project will build plasma expertise, research and industrial capacity, as well as a highly trained and capable plasma science and engineering workforce, across Alabama.

Unlike solids, liquids and gases, plasma, the fourth state of matter, does not exist naturally on Earth. This ionized gaseous substance can be made by heating neutral gases. At UAB, Vohra, a professor and university scholar in the UAB Department of Physics, has employed microwave-generated plasmas to create thin diamond films that have many potential uses, including super-hard coatings and diamond-encapsulated sensors for extreme environments. This new FTPP grant will support research into plasma synthesis of materials that maintain their strength at high temperatures, superconducting thin films, and plasma surface modifications that incorporate antimicrobial materials into biomedical implants.

Vohra says the UAB Department of Physics will mostly use its share of the award to support faculty in the UAB Center for Nanoscale Materials and Biointegration and two full-time postdoctoral scholars, and to support hiring of a new faculty member in computational physics with a background in machine learning. "The machine-learning predictions using the existing databases on materials properties will enable our research team to reduce the time from materials discovery to actual deployment in real-world applications," Vohra said.

The NSF EPSCoR Research Infrastructure Improvement Program helps establish partnerships among academic institutions to make sustainable improvements in research infrastructure, and research and development capacity. EPSCoR is the acronym for Established Program to Stimulate Competitive Research, an effort to level the playing field for states, territories and a commonwealth that historically have received lesser amounts of federal research and development funding.

Jurisdictions can compete for NSF EPSCoR awards if their five-year level of total NSF funding is less than 0.75 percent of the total NSF budget. Current qualifiers include Alabama, 22 other states, and Guam, the U.S. Virgin Islands and Puerto Rico.

Besides Alabama, the other four 2022 EPSCoR Research Infrastructure Improvement Program awardees are Hawaii, Kansas, Nevada and Wyoming.

In 2017, UAB was part of another five-year, $20 million NSF EPSCoR award to Alabama universities.

The Department of Physics is part of the UAB College of Arts and Sciences.

Read the original post:
NSF award will boost UAB research in machine-learning-enabled plasma synthesis of novel materials - University of Alabama at Birmingham

Machine learning innovation among power industry companies dropped off in the last quarter – Power Technology

Research and innovation in machine learning in the power industry operations and technologies sector has declined in the last quarter but remains higher than it was a year ago.

The most recent figures show that the number of related patent applications in the industry stood at 108 in the three months ending March, up from 103 over the same period in 2021.

Figures for patent grants related to machine learning followed a similar pattern to filings, growing from 15 in the three months ending March 2021 to 19 in the same period in 2022.

The figures are compiled by GlobalData, which tracks patent filings and grants from official offices around the world. Using textual analysis, as well as official patent classifications, these patents are grouped into key thematic areas and linked to key companies across various industries.

Machine learning is one of the key areas tracked by GlobalData. It has been identified as a key disruptive force facing companies in the coming years, and is one of the areas in which companies investing resources now are expected to reap rewards. The figures also provide an insight into the largest innovators in the sector.

Siemens was the top innovator in the power industry operations and technologies sector in the latest quarter. The company, which has its headquarters in Germany, filed 83 related patents in the three months ending March. That was up from 77 over the same period in 2021.

It was followed by the Switzerland-based ABB with 11 patent applications, South Korea-based Korea Electric Power Corp (9 applications), and the US-based Honeywell International Inc (9 applications).

ABB has recently ramped up R&D in machine learning. It saw growth of 36.4% in related patent applications in the three months ending March compared to the same period in 2021, the highest percentage growth of all companies tracked with more than 10 quarterly patents in the power industry operations and technologies sector.

Excerpt from:
Machine learning innovation among power industry companies dropped off in the last quarter - Power Technology

Machine Learning to Virtual Reality: Learn from Anywhere with 5 Online Courses by IITs – The Better India

In a welcome move, the Indian Institute of Technology (IIT) has partnered with the online learning platform Coursera. You can now access several courses from the comfort of your home while getting degrees certified by the premier institute.

Here are five courses that you may want to check out.

The course offers a strong foundation in business and technology. This is an opportunity to learn from industry experts at the B-school. Dive into your area of specialisation after choosing from over 55 electives. The curriculum spans business, management, data science and data analytics.

Eligibility criteria: A Bachelor's degree with 65 per cent; four years of relevant work experience after graduation.
Fees: Rs. 10,93,000
Duration: 24 months to 60 months

For more details, click here.

As algorithms shape our world and businesses, it is becoming more important to keep pace. In the course, you will gain exposure to the different algorithms that are needed for machine learning. You will also be trained in the application of Python programming in solving real-world financial problems.

Eligibility criteria: Knowledge of basic mathematics, linear algebra, calculus, statistics and spreadsheets
Fees: Rs. 90,000
Duration: 6 months

For more details, click here.

Learn from industry experts who share with you their knowledge of mechatronics. In this course, you will be introduced to manufacturing processes and how these can be enhanced through computer technology. You will be trained in computer-aided design (CAD) and computer-aided manufacturing (CAM) software.

Eligibility criteria: Bachelor's degree in any technology or engineering field; basic knowledge of programming. Students pursuing BE or BTech may also enrol.
Fees: Rs. 1,12,500
Duration: 6 months

For more details, click here.

If you have been intrigued by the world of virtual reality, this course will help you delve deeper into understanding it. The course gives you a firm footing in the design and development of these technologies. Get expert insights into how to build deep learning models such as Encoder-Decoder.

Eligibility criteria: Bachelor's degree in a related field with basic knowledge of programming.
Fees: Rs. 1,12,500
Duration: 6 months

For more details, click here.

Understand the computational properties of Natural Language Processing with this course. Learn how to integrate machine learning and natural language processing to solve real-world problems across industries.

Eligibility criteria: Bachelor's degree in a related field; a mathematics background in linear algebra, calculus, probability, statistics, data structures and algorithms; knowledge of Python.
Fees: Rs. 1,12,500
Duration: 6 months

For more details, click here.

Read the original:
Machine Learning to Virtual Reality: Learn from Anywhere with 5 Online Courses by IITs - The Better India

UT Researchers Aim to Change the Cancer Equation – UT News – University of Texas

Cancer is arguably the greatest health challenge of our time. During the past 50 years, clinical advances have substantially reduced the mortality rate for people with cancer, but new breakthroughs often require years of trial and error in the lab.

An innovative partnership between The University of Texas at Austin's Machine Learning Lab, Oden Institute for Computational Engineering and Sciences and Dell Medical School aims to speed up those discoveries, saving lives in the process. What would have previously taken years in the lab can potentially be accomplished in days with the appropriate computing simulations.

The research collaboration is possible because of a $10 million leadership gift from Dheeraj and Swapna Pandey.

"The biggest promise of computational oncology is personalized medicine," Dheeraj Pandey said. "The ability for us to answer questions that save precious lives. More importantly, the field is attempting to break silos between physics, biology, and computing researchers who are fighting indefatigably against cancer."

UT researchers will integrate two emerging disciplines, computational oncology and machine learning, to transform the future of cancer care. Machine learning applies algorithms to large data sets to build classifiers that can make accurate predictions, even in complex biological and chemical domains. Computational oncology uses physics-based and data-driven advanced mathematical and computational approaches to model tumors, calibrate patient-specific models, and simulate patient responses to potential treatment options.

Modeling and simulation occur across a spectrum of scales, from the cellular level to the organ level of the human body. The models can be theory-driven, knowledge-driven, or data-driven. Or, increasingly, a combination of all three. Substantial computational skills and capabilities, as well as medical knowledge, are required to capture the individuality of each cancer patients situation for accurate decision making at all levels.

"UT Austin has a unique environment that enables the interdisciplinary research critical to tackling societal grand challenges such as personalized care for cancer patients," said Karen Willcox, director of the Oden Institute. "We are thrilled to build a new partnership with the Machine Learning Lab, building on the Oden Institute's strength in computational oncology and our existing partnerships with Dell Med, MD Anderson Cancer Center and the Texas Advanced Computing Center. Computational medicine is a top priority for the Oden Institute, and the generosity of the Pandey family is a game changer in taking our efforts to a new level."

The Oden Institute and its Center for Computational Oncology sit at the forefront of developing mechanism-based modeling techniques that optimize treatment and outcomes for an individual patient. The Machine Learning Laboratory is the university's headquarters for machine learning and artificial intelligence.

"A new wave of machine learning is creating predictive models that are transforming science," said Adam Klivans, director of the Machine Learning Lab and the NSF-funded Institute for Foundations of Machine Learning. "Our technologies can anticipate new biological and chemical interactions to advance the automated discovery of new treatments."

Currently, cancer biologists and chemists rely on trial and error to determine what treatments will be most effective. Connecting university research with community providers is central to the mission of Dell Med. Through initiatives such as the Livestrong Cancer Institutes, Dell Med translates leading-edge research into high-quality clinical trials and patient-focused precision medicine.

"Time is critical when treating cancer," said Gail Eckhardt, director of the Livestrong Cancer Institutes at Dell Med. "The Pandeys' gift brings us that much closer to the day when clinicians and researchers can integrate patient data and computational methods to individualize therapy, thereby improving the lives of patients with cancer."

"Computational approaches are the key to accelerating progress against cancer," said David Jaffray, chief technology and digital officer at The University of Texas MD Anderson Cancer Center. "This investment will further the collaborative, team science approach we have developed with the leadership at UT Austin. Together, we are building a critical mass of talent to use the power of data and computing to make real progress against this terrible disease."

Read the feature story to learn more about this partnership.

Link:
UT Researchers Aim to Change the Cancer Equation - UT News - University of Texas