The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Quantum Physics
Huawei CFO Meng Wanzhou to appear in Canada court for final phase of extradition hearings – ETTelecom.com
Posted: March 16, 2021 at 2:44 am
Huawei Technologies Co Ltd. Chief Financial Officer Meng Wanzhou will appear in a Canadian court on Monday as her U.S. extradition case enters its last phase of arguments leading to a final hearing in May.
Meng, 49, was arrested in December 2018 at Vancouver International Airport on a U.S. warrant for allegedly misleading HSBC about Huawei's business dealings in Iran and causing the bank to violate U.S. sanctions.
She has since been fighting the case from under house arrest in Vancouver and has said she is innocent.
Beginning Monday, the court will hear arguments regarding allegations that Canadian and U.S. authorities committed legal missteps during Meng's initial questioning and arrest, which her lawyers say should invalidate her extradition.
Witness testimony on these allegations concluded in December 2020.
Meng's team has previously argued that the extradition should be rejected due to alleged political interference in her case by then-U.S. President Donald Trump.
Trump told Reuters in December 2018 that he would intervene in the case if it would serve national security interests or help close a trade deal with China.
Canadian prosecutors representing the federal government assert that appropriate processes were followed. They have argued that now that Trump is no longer president his comments are moot, and that their influence is best judged by a politician, not a judge.
The case has caused a frost in relations between Ottawa and Beijing. Shortly after Meng's arrest, China detained two Canadians - Michael Spavor and Michael Kovrig - on espionage charges, which Canada has called retaliation.
Hearings are scheduled to finish in May, but the potential for appeals from either side means the case could drag on for years.
Read more here:
Posted in Quantum Physics
Comments Off on Huawei CFO Meng Wanzhou to appear in Canada court for final phase of extradition hearings – ETTelecom.com
Quantum Mechanics, the Chinese Room Experiment and the Limits of Understanding – Scientific American
Posted: March 11, 2021 at 12:19 pm
Like great art, great thought experiments have implications unintended by their creators. Take philosopher John Searle's Chinese room experiment. Searle concocted it to convince us that computers don't really think as we do; they manipulate symbols mindlessly, without understanding what they are doing.
Searle meant to make a point about the limits of machine cognition. Recently, however, the Chinese room experiment has goaded me into dwelling on the limits of human cognition. We humans can be pretty mindless too, even when engaged in a pursuit as lofty as quantum physics.
Some background. Searle first proposed the Chinese room experiment in 1980. At the time, artificial intelligence researchers, who have always been prone to mood swings, were cocky. Some claimed that machines would soon pass the Turing test, a means of determining whether a machine thinks.
Computer pioneer Alan Turing proposed in 1950 that questions be fed to a machine and a human. If we cannot distinguish the machine's answers from the human's, then we must grant that the machine does indeed think. Thinking, after all, is just the manipulation of symbols, such as numbers or words, toward a certain end.
Some AI enthusiasts insisted that thinking, whether carried out by neurons or transistors, entails conscious understanding. Marvin Minsky espoused this "strong AI" viewpoint when I interviewed him in 1993. After defining consciousness as a record-keeping system, Minsky asserted that LISP software, which tracks its own computations, is "extremely conscious," much more so than humans. When I expressed skepticism, Minsky called me "racist."
Back to Searle, who found "strong AI" annoying and wanted to rebut it. He asks us to imagine a man who doesn't understand Chinese sitting in a room. The room contains a manual that tells the man how to respond to a string of Chinese characters with another string of characters. Someone outside the room slips a sheet of paper with Chinese characters on it under the door. The man finds the right response in the manual, copies it onto a sheet of paper and slips it back under the door.
Unknown to the man, he is replying to a question, like "What is your favorite color?", with an appropriate answer, like "Blue." In this way, he mimics someone who understands Chinese even though he doesn't know a word. That's what computers do, too, according to Searle. They process symbols in ways that simulate human thinking, but they are actually mindless automatons.
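The manual-plus-clerk setup is, computationally, just a lookup table. Here is a minimal sketch in Python; the entries are invented stand-ins, not anything from Searle's paper:

```python
# Searle's room as pure symbol manipulation: a rulebook that maps input
# strings to output strings and is applied with zero understanding.
# The Chinese entries are hypothetical stand-ins for the manual's rules.
MANUAL = {
    "你最喜欢什么颜色？": "蓝色。",   # "What is your favorite color?" -> "Blue."
    "今天下雨吗？": "没有。",         # "Is it raining today?" -> "No."
}

def man_in_the_room(slip: str) -> str:
    # Find the matching rule and copy its response; the "man" never
    # interprets the symbols, he only matches and copies them.
    return MANUAL.get(slip, "？")    # no rule found: slip back a blank query

print(man_in_the_room("你最喜欢什么颜色？"))  # prints 蓝色。
```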
Searle's thought experiment has provoked countless objections. Here's mine. The Chinese room experiment is a splendid case of begging the question (not in the sense of raising a question, which is what most people mean by the phrase nowadays, but in the original sense of circular reasoning). The meta-question posed by the Chinese room experiment is this: How do we know whether any entity, biological or non-biological, has a subjective, conscious experience?
When you ask this question, you are bumping into what I call the solipsism problem. No conscious being has direct access to the conscious experience of any other conscious being. I cannot be absolutely sure that you or any other person is conscious, let alone that a jellyfish or smartphone is conscious. I can only make inferences based on the behavior of the person, jellyfish or smartphone.
Now, I assume that most humans, including those of you reading these words, are conscious, as I am. I also suspect that Searle is probably right, and that an intelligent program like Siri only mimics understanding of English. It doesn't feel like anything to be Siri, which manipulates bits mindlessly. That's my guess, but I can't know for sure, because of the solipsism problem.
Nor can I know what it's like to be the man in the Chinese room. He may or may not understand Chinese; he may or may not be conscious. There is no way of knowing, again, because of the solipsism problem. Searle's argument assumes that we can know what's going on, or not going on, in the man's mind, and hence, by implication, what's going on or not in a machine. His flawed initial assumption leads to his flawed, question-begging conclusion.
That doesn't mean the Chinese room experiment has no value. Far from it. The Stanford Encyclopedia of Philosophy calls it "the most widely discussed philosophical argument in cognitive science to appear since the Turing Test." Searle's thought experiment continues to pop up in my thoughts. Recently, for example, it nudged me toward a disturbing conclusion about quantum mechanics, which I've been struggling to learn over the last year or so.
Physicists emphasize that you cannot understand quantum mechanics without understanding its underlying mathematics. You should have, at a minimum, a grounding in logarithms, trigonometry, calculus (differential and integral) and linear algebra. Knowing Fourier transforms wouldn't hurt.
That's a lot of math, especially for a geezer and former literature major like me. I was thus relieved to discover Q Is for Quantum by physicist Terry Rudolph. He explains superposition, entanglement and other key quantum concepts with a relatively simple mathematical system, which involves arithmetic, a little algebra and lots of diagrams with black and white balls falling into and out of boxes.
Rudolph emphasizes, however, that some math is essential. Trying to grasp quantum mechanics without any math, he says, is like having van Gogh's "Starry Night" described in words to you by someone who has only seen a black and white photograph. One that a dog chewed.
But here's the irony. Mastering the mathematics of quantum mechanics doesn't make it easier to understand and might even make it harder. Rudolph, who teaches quantum mechanics and co-founded a quantum-computer company, says he feels "cognitive dissonance" when he tries to connect quantum formulas to sensible physical phenomena.
Indeed, some physicists and philosophers worry that physics education focuses too narrowly on formulas and not enough on what they mean. Philosopher Tim Maudlin complains in Philosophy of Physics: Quantum Theory that most physics textbooks and courses do not present quantum mechanics as a theory, that is, a description of the world; instead, they present it as a recipe, or set of mathematical procedures, for accomplishing certain tasks.
Learning the recipe can help you predict the results of experiments and design microchips, Maudlin acknowledges. But if a physics student happens to be unsatisfied with just learning these mathematical techniques for making predictions and asks instead what the theory claims about the physical world, she or he is likely to be met with a canonical response: "Shut up and calculate!"
In his book, Maudlin presents several attempts to make sense of quantum mechanics, including the pilot-wave and many-worlds models. His goal is to show that we can translate the Schrödinger equation and other formulas into intelligible accounts of what's happening in, say, the double-slit experiment. But to my mind, Maudlin's ruthless examination of the quantum models subverts his intention. Each model seems preposterous in its own way.
Pondering the plight of physicists, I'm reminded of an argument advanced by philosopher Daniel Dennett in From Bacteria to Bach and Back: The Evolution of Minds. Dennett elaborates on his long-standing claim that consciousness is overrated, at least when it comes to doing what we need to do to get through a typical day. We carry out most tasks with little or no conscious attention.
Dennett calls this "competence without comprehension." Adding insult to injury, Dennett suggests that we are virtual zombies. When philosophers refer to zombies, they mean not the clumsy, grunting cannibals of The Walking Dead but creatures that walk and talk like sentient humans but lack inner awareness.
When I reviewed Dennett's book, I slammed him for downplaying consciousness and overstating the significance of unconscious cognition. Competence without comprehension may apply to menial tasks like brushing your teeth or driving a car but certainly not to science and other lofty intellectual pursuits. Maybe Dennett is a zombie, but I'm not! That, more or less, was my reaction.
But lately I've been haunted by the ubiquity of competence without comprehension. Quantum physicists, for example, manipulate differential equations and matrices with impressive competence (enough to build quantum computers!) but no real understanding of what the math means. If physicists end up like information-processing automatons, what hope is there for the rest of us? After all, our minds are habituation machines, designed to turn even complex tasks, like being a parent, husband or teacher, into routines that we perform by rote, with minimal cognitive effort.
The Chinese room experiment serves as a metaphor not only for physics but also for the human condition. Each of us sits alone within the cell of our subjective awareness. Now and then we receive cryptic messages from the outside world. Only dimly comprehending what we are doing, we compose responses, which we slip under the door. In this way, we manage to survive, even though we never really know what the hell is happening.
Further Reading:
Is the Schrdinger Equation True?
Will Artificial Intelligence Ever Live Up to Its Hype?
Can Science Illuminate Our Inner Dark Matter?
Read this article:
Quantum Mechanics, the Chinese Room Experiment and the Limits of Understanding - Scientific American
Posted in Quantum Physics
Comments Off on Quantum Mechanics, the Chinese Room Experiment and the Limits of Understanding – Scientific American
Quantum Mischief Rewrites the Laws of Cause and Effect – Quanta Magazine
Posted: at 12:19 pm
Alice and Bob, the stars of so many thought experiments, are cooking dinner when mishaps ensue. Alice accidentally drops a plate; the sound startles Bob, who burns himself on the stove and cries out. In another version of events, Bob burns himself and cries out, causing Alice to drop a plate.
Over the last decade, quantum physicists have been exploring the implications of a strange realization: In principle, both versions of the story can happen at once. That is, events can occur in an indefinite causal order, where both "A causes B" and "B causes A" are simultaneously true.
"It sounds outrageous," admitted Časlav Brukner, a physicist at the University of Vienna.
The possibility follows from the quantum phenomenon known as superposition, where particles maintain all possible realities simultaneously until the moment they're measured. In labs in Austria, China, Australia and elsewhere, physicists observe indefinite causal order by putting a particle of light (called a photon) in a superposition of two states. They then subject one branch of the superposition to process A followed by process B, and subject the other branch to B followed by A. In this procedure, known as the quantum switch, A's outcome influences what happens in B, and vice versa; the photon experiences both causal orders simultaneously.
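In one common formalization (a sketch using standard conventions; notation varies across papers), a control qubit selects the order in which operations A and B act on the target state:

```latex
W \;=\; |0\rangle\langle 0|_c \otimes BA \;+\; |1\rangle\langle 1|_c \otimes AB,
\qquad
W\!\left(\frac{|0\rangle_c + |1\rangle_c}{\sqrt{2}} \otimes |\psi\rangle\right)
= \frac{|0\rangle_c \otimes BA\,|\psi\rangle \;+\; |1\rangle_c \otimes AB\,|\psi\rangle}{\sqrt{2}}.
```

With the control in superposition, neither "A then B" nor "B then A" describes the output on its own.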
Over the last five years, a growing community of quantum physicists has been implementing the quantum switch in tabletop experiments and exploring the advantages that indefinite causal order offers for quantum computing and communication. "It's really something that could be useful in everyday life," said Giulia Rubino, a researcher at the University of Bristol who led the first experimental demonstration of the quantum switch in 2017.
But the practical uses of the phenomenon only make the deep implications more acute.
Physicists have long sensed that the usual picture of events unfolding as a sequence of causes and effects doesn't capture the fundamental nature of things. They say this causal perspective probably has to go if we're ever to figure out the quantum origin of gravity, space and time. But until recently, there weren't many ideas about how post-causal physics might work. "Many people think that causality is so basic in our understanding of the world that if we weaken this notion we would not be able to make coherent, meaningful theories," said Brukner, who is one of the leaders in the study of indefinite causality.
That's changing as physicists contemplate the new quantum switch experiments, as well as related thought experiments in which Alice and Bob face causal indefiniteness created by the quantum nature of gravity. Accounting for these scenarios has forced researchers to develop new mathematical formalisms and ways of thinking. With the emerging frameworks, "we can make predictions without having well-defined causality," Brukner said.
Progress has grown swifter recently, but many practitioners trace the origin of this line of attack on the quantum gravity problem to work 16 years ago by Lucien Hardy, a British-Canadian theoretical physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. "In my case," said Brukner, "everything started with Lucien Hardy's paper."
Hardy was best known at the time for taking a conceptual approach made famous by Albert Einstein and applying it to quantum mechanics.
Einstein revolutionized physics not by thinking about what exists in the world, but by considering what individuals can possibly measure. In particular, he imagined people on moving trains making measurements with rulers and clocks. By using this operational approach, he was able to conclude that space and time must be relative.
In 2001, Hardy applied this same approach to quantum mechanics. He reconstructed all of quantum theory starting from five operational axioms.
He then set out to apply the same approach to an even bigger challenge: the 80-year-old problem of how to reconcile quantum mechanics and general relativity, Einstein's epic theory of gravity. "I'm driven by this idea that perhaps the operational way of thinking about quantum theory may be applied to quantum gravity," Hardy told me over Zoom this winter.
The operational question is: In quantum gravity, what can we, in principle, observe? Hardy thought about the fact that quantum mechanics and general relativity each have a radical feature. Quantum mechanics is famously indeterministic; its superpositions allow for simultaneous possibilities. General relativity, meanwhile, suggests that space and time are malleable. In Einstein's theory, massive objects like Earth stretch the space-time metric, essentially the distance between hash marks on a ruler and the duration between ticks of clocks. The nearer you are to a massive object, for instance, the slower your clock ticks. The metric then determines the light cone of a nearby event, the region of space-time that the event can causally influence.
When you combine these two radical features, Hardy said, two simultaneous quantum possibilities will stretch the metric in different ways. The light cones of events become indefinite, and thus so does causality itself.
Most work on quantum gravity elides one of these features. Some researchers, for instance, attempt to characterize the behavior of gravitons, quantum units of gravity. But the researchers have the gravitons interact against a fixed background time. "We're so used to thinking about the world evolving in time," Hardy noted. He reasons, though, that quantum gravity will surely inherit general relativity's radical feature and lack fixed time and fixed causality. So the idea is really "to throw caution to the wind," said the calm, serious physicist, "and really embrace this wild situation where you have no definite causal structure."
Over Zoom, Hardy used a special projector to film a whiteboard, where he sketched out various thought experiments, starting with one that helped him see how to describe data entirely without reference to the causal order of events.
He imagined an array of probes drifting in space. They're taking data, recording, say, the polarized light spewing out of a nearby exploding star, or supernova. Every second, each probe logs its location, the orientation of its polarizer (a device like polarized sunglasses that either lets a photon through or blocks it depending on its polarization), and whether a detector, located behind the polarizer, detects a photon or not. The probe transmits this data to a man in a room, who prints it on a card. After some time, the experimental run ends; the man in the room shuffles all the cards from all the probes and forms a stack.
The probes then rotate their polarizers and make a new series of measurements, producing a new stack of cards, and repeat the process, so that the man in the room ultimately has many shuffled stacks of out-of-order measurements. "His job is to try to make some sense of the cards," Hardy said. The man wants to devise a theory that accounts for all the statistical correlations in the data (and, in this way, describes the supernova) without any information about the data's causal relationships or temporal order, since those might not be fundamental aspects of reality.
How might the man do this? He could first arrange the cards by location, dealing out cards from each stack so that those pertaining to spacecraft in a certain region of space go in the same pile. In doing this for each stack, he could start to notice correlations between piles. He might note that whenever a photon is detected in one region, there's a high detection probability in another region, so long as the polarizers are angled the same way in both places. (Such a correlation would mean that the light passing through these regions tends to share a common polarization.) He could then combine probabilities into expressions pertaining to larger composite regions, and in this way, he could build up mathematical objects for bigger and bigger regions from smaller regions, Hardy said.
What we normally think of as causal relationships, such as photons traveling from one region of the sky to another (correlating measurements made in the first region with measurements made later in the second region), act, in Hardy's formalism, like data compression. There's a reduction in the amount of information needed to describe the whole system, since one set of probabilities determines another.
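As a toy rendering of the card game (a sketch under simplifying assumptions: two regions, two polarizer settings, and a shared polarization standing in for the supernova's light), stable statistical regularities survive even after all ordering information is destroyed:

```python
import random
from collections import defaultdict

random.seed(0)
stack = []
for _ in range(10_000):                     # one simulated experimental run per loop
    shared = random.choice([0, 90])         # polarization the two regions share
    for region in ("R1", "R2"):
        angle = random.choice([0, 90])      # local polarizer setting on the card
        click = int(angle == shared)        # detector fires when the setting matches
        stack.append((region, angle, click))
random.shuffle(stack)                       # the shuffled stack: no temporal order left

rates = defaultdict(lambda: [0, 0])         # (region, angle) -> [clicks, cards]
for region, angle, click in stack:
    rates[(region, angle)][0] += click
    rates[(region, angle)][1] += 1

for key in sorted(rates):
    clicks, total = rates[key]
    print(key, round(clicks / total, 3))    # same setting -> same rate in both regions
```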
Hardy called his new formalism the "causaloid" framework, where the causaloid is the mathematical object used to calculate the probabilities of outcomes of any measurement in any region. He introduced the general framework in a dense 68-page paper in 2005, which showed how to formulate quantum theory in the framework (essentially by reducing its general probability expressions to the specific case of interacting quantum bits).
Hardy thought it should be possible to formulate general relativity in the causaloid framework too, but he couldn't quite see how to proceed. If he could manage that, then, he wrote in another paper, the framework might be used to construct a theory of quantum gravity.
A few years later, in Pavia, Italy, the quantum information theorist Giulio Chiribella and three colleagues were mulling over a different question: What kinds of computations are possible? They had in mind the canonical work of the theoretical computer scientist Alonzo Church. Church developed a set of formal rules for building functions, mathematical machines that take an input and yield an output. A striking feature of Church's rulebook is that the input of a function can be another function.
The four Italian physicists asked themselves: What kinds of functions of functions might be possible in general, beyond what computers were currently capable of? They came up with a procedure that involves two functions, A and B, that get assembled into a new function. This new function, what they called the quantum switch, is a superposition of two options. In one branch of the superposition, the function's input passes through A, then B. In the other, it passes through B, then A. They hoped that the quantum switch "could be the basis of a new model of computation, inspired by the one of Church," Chiribella told me.
At first, the revolution sputtered. Physicists couldnt decide whether the quantum switch was deep or trivial, or if it was realizable or merely hypothetical. Their paper took four years to get published.
By the time it finally came out in 2013, researchers were starting to see how they might build quantum switches.
They might, for instance, shoot a photon toward an optical device called a beam splitter. According to quantum mechanics, the photon has a 50-50 chance of being transmitted or reflected, and so it does both.
The transmitted version of the photon hurtles toward an optical device that rotates the polarization direction of the light in some well-defined way. The photon next encounters a similar device that rotates it a different way. Lets call these devices A and B, respectively.
Meanwhile, the reflected version of the photon encounters B first, then A. The end result of the polarization in this case is different.
We can think of these two possibilities, A before B or B before A, as indefinite causal order. In the first branch, A causally influences B in the sense that if A hadn't occurred, B's input and output would be totally different. Likewise, in the second branch, B causally influences A in that the latter process couldn't have happened otherwise.
After these alternative causal events have occurred, another beam splitter reunites the two versions of the photon. Measuring its polarization (and that of many other photons) yields a statistical spread of outcomes.
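A toy numerical version of this optical setup (an illustrative sketch, not the published experiments: A is taken as a polarization rotator and B as a wave plate, chosen only because they do not commute):

```python
import numpy as np

theta, phi = 0.3, 0.8
A = np.array([[np.cos(theta), -np.sin(theta)],      # polarization rotator
              [np.sin(theta),  np.cos(theta)]])
B = np.diag([1, np.exp(1j * phi)])                   # wave plate (phase retarder)

ket0, ket1 = np.array([1, 0]), np.array([0, 1])
plus = (ket0 + ket1) / np.sqrt(2)                    # path state after the beam splitter
psi = ket0                                           # input polarization

# One path sees A then B, the other B then A:
W = np.kron(np.outer(ket0, ket0), B @ A) + np.kron(np.outer(ket1, ket1), A @ B)
out = (W @ np.kron(plus, psi)).reshape(2, 2)

print(out[0], out[1])                # the two branches differ: A and B don't commute
print(np.allclose(out[0], out[1]))   # False
```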
Brukner and two collaborators devised ways to quantitatively test whether these photons are really experiencing an indefinite causal order. In 2012, the researchers calculated a ceiling on how statistically correlated the polarization results can be with the rotations performed at A and B if the rotations occurred in a fixed causal order. If the value exceeds this causal inequality, then causal influences must go in both directions; causal order must have been indefinite.
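For concreteness, the best-known bound of this kind (from Oreshkov, Costa and Brukner's 2012 work; the exact inequality tested in a given experiment may differ) caps the success probability of a two-party guessing game under any fixed causal order, while indefinite causal structures can exceed it:

```latex
p_{\mathrm{succ}}^{\text{causal}} \;\le\; \frac{3}{4},
\qquad
p_{\mathrm{succ}}^{\text{indefinite}} \;=\; \frac{2+\sqrt{2}}{4} \;\approx\; 0.854 .
```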
"The idea of the causal inequality was really cool, and a lot of people decided to jump in the field," said Rubino, who jumped in herself in 2015. She and her colleagues produced a landmark demonstration of the quantum switch in 2017 that worked roughly like the one above. Using a simpler test devised by Brukner and company, they confirmed that causal order was indefinite.
Attention turned to what could be done with the indefiniteness. Chiribella and co-authors argued that far more information could be transmitted over noisy channels when sent through the channels in an indefinite order. Experimentalists at the University of Queensland and elsewhere have since demonstrated this communication advantage.
In "the most beautiful experiment" done so far, according to Rubino, Jian-Wei Pan at the University of Science and Technology of China in Hefei demonstrated in 2019 that two parties can compare long strings of bits exponentially more efficiently when transmitting bits in both directions at once rather than in a fixed causal order, an advantage proposed by Brukner and co-authors in 2016. A different group in Hefei reported in January that, whereas engines normally need a hot and cold reservoir to work, with a quantum switch they could extract heat from reservoirs of equal temperature, a surprising use suggested a year ago by Oxford theorists.
It's not immediately clear how to extend this experimental work to investigate quantum gravity. All the papers about the quantum switch nod at the link between quantum gravity and indefinite causality. But superpositions of massive objects, which stretch the space-time metric in multiple ways at once, collapse so quickly that no one has thought of how to detect the resulting fuzziness of causal relationships. So instead researchers turn to thought experiments.
You'll recall Alice and Bob. Imagine they're stationed in separate laboratory spaceships near Earth. Bizarrely (but not impossibly), Earth is in a quantum superposition of two different places. You don't need a whole planet to be in superposition for gravity to create causal indefiniteness: Even a single atom, when it's in a superposition of two places, defines the metric in two ways simultaneously. But when you're talking about what's measurable in principle, you might as well go big.
In one branch of the superposition, Earth is nearer to Alice's lab, and so her clock ticks slower. In the other branch, Earth is nearer to Bob, so his clock ticks slower. When Alice and Bob communicate, causal order gets all switched up.
In a key paper in 2019, Magdalena Zych, Brukner and collaborators proved that this situation would allow Alice and Bob to achieve indefinite causal order.
Read the original:
Quantum Mischief Rewrites the Laws of Cause and Effect - Quanta Magazine
Posted in Quantum Physics
Comments Off on Quantum Mischief Rewrites the Laws of Cause and Effect – Quanta Magazine
Welcome To The Future: Navigating The Rich, Intertwined Quantum Software Ecosystem – Forbes
Posted: at 12:19 pm
Paul Lipman is an experienced cybersecurity CEO. He's passionate about the intersection of quantum computing and cybersecurity.
As a software CEO, I've witnessed the transformative impact of advanced technologies like machine learning. Quantum computing is poised to have a similar impact in the coming years, as I've previously opined. While fault-tolerant quantum computers are still several years away, a well-funded, vibrant quantum software industry is rapidly emerging to enable near-term devices to deliver value.
Quantum computers utilize quantum effects such as superposition and entanglement to solve classes of problems that are intractable for classical computers. Quantum advantage, in which a quantum computer solves a useful problem significantly faster than a classical computer, has yet to be achieved, and it's unlikely to be for at least the next few years. The level of investment in quantum computing, however, is a testament to the profound impact this technology will have once that milestone is reached. Early movers like JPMorgan Chase, BMW and Airbus are building quantum teams and making significant investments now to climb the quantum learning curve and be ready the moment the technology matures to the point where it will disrupt their industries.
Developing quantum computing software is hard, arguably significantly more so than developing software for classical computers. For example, the concept of phase kickback is fundamental to many quantum algorithms. It requires a deep understanding of linear algebra, plus a combination of physics and algorithmic intuition. Furthermore, quantum applications attempt to solve complex world-changing problems, which inherently require sophisticated solutions. As a result, the pool of quantum software talent is extremely limited.
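Phase kickback itself is compact enough to see in a few lines. The sketch below (plain NumPy, not tied to any vendor platform) applies a controlled-U to a target prepared in an eigenstate of U; U's phase ends up on the control qubit:

```python
import numpy as np

phi = 0.7
U = np.diag([np.exp(1j * phi), 1])       # |0> is an eigenstate of U with phase e^{i phi}

ket0, ket1 = np.array([1, 0]), np.array([0, 1])
plus = (ket0 + ket1) / np.sqrt(2)        # control qubit in superposition

# controlled-U = |0><0| (x) I + |1><1| (x) U
CU = np.kron(np.outer(ket0, ket0), np.eye(2)) + np.kron(np.outer(ket1, ket1), U)

state = CU @ np.kron(plus, ket0)         # target starts in the eigenstate |0>
control = state.reshape(2, 2)[:, 0]      # target is unchanged; read off the control
print(control)                           # ~ [0.707, 0.707*e^{i*0.7}]: phase kicked back
```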
Quantum Computing Platforms
Each of the major hardware vendors has developed its own platform: IBM's Qiskit is arguably the furthest along, with a rich community, compelling road map and application modules. Other offerings include Amazon Braket, Google Cirq and Microsoft Azure Quantum. These platforms are all open source, and vendors are enabling their offerings to interoperate with competitors' devices, lifting all boats and helping each vendor maximize its reach and potential as the various hardware modalities mature. Quantum computing will largely be utilized as a cloud-based service: QCaaS. The value of QCaaS will be accelerated by developments that enable quantum applications and workloads to operate in a device-agnostic fashion that utilizes the unique advantages of each platform. Early cross-device services, such as Zapata's Orquestra and Riverlane's Deltaflow.OS, are promising.
Quantum System Controls
The main challenge in scaling today's quantum devices is qubit noise and errors. Software companies such as Q-CTRL and Quantum Benchmark are developing solutions to algorithmically mitigate these effects. Again, given the cost and complexity of quantum devices, it's expected that QCaaS will dominate commercial usage. Like conventional cloud computing, a range of services will evolve to ensure secure usage and protect users' data and code. A notable early example is Agnostiq.
Quantum Finance And Quantum Machine Learning
Many aspects of modern finance, such as complex securities pricing, portfolio optimization and forecasting, rely on algorithms that are amenable to potential quadratic or exponential speedups on quantum computers. Companies such as Multiverse Computing are developing quantum applications for the finance industry. Last year, they published compelling results from a joint study with BBVA. Standard Chartered Bank has announced a research project to explore quantum applications, including machine learning.
Machine learning and quantum computing are two of the most buzzworthy topics in computing. The emerging field of quantum machine learning (QML) unites them, incorporating a parameterized quantum circuit into a larger classical ML model to speed up learning and improve its efficacy by leveraging unique quantum computational benefits. QML can also be used to enhance and optimize quantum algorithms. Xanadu's PennyLane and Google's TensorFlow Quantum are two of the early leading packages in this field.
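As a flavor of what such a parameterized circuit looks like, here is a minimal PennyLane example (a sketch: real QML models embed larger circuits and train them against data):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(theta):
    # A tiny trainable ansatz: single-qubit rotations plus one entangler.
    qml.RY(theta[0], wires=0)
    qml.RY(theta[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

theta = np.array([0.1, 0.2], requires_grad=True)
print(circuit(theta))             # expectation value of the circuit
print(qml.grad(circuit)(theta))   # gradients flow through the quantum node
```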
Quantum Chemistry
Physicist Richard Feynman famously said, "Nature isn't classical ... and if you want to make a simulation of nature, you'd better make it quantum mechanical." One of the most exciting applications of quantum computing is the simulation of chemical reactions, which are governed by the laws of quantum mechanics. Modeling anything but the simplest of molecules is intractable for classical computers. Algorithms such as VQE (the variational quantum eigensolver) enable the simulation of chemical reactions on a quantum computer, which may ultimately enable us to identify new materials and more efficient chemical processes. For example, the Haber-Bosch process used to manufacture fertilizer accounts for over 1% of the world's carbon dioxide emissions and energy usage. If quantum computing can deliver even small improvements to this process, the benefit would be enormous.
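The principle behind VQE is the textbook variational bound: the energy expectation of any trial state prepared by a parameterized circuit upper-bounds the true ground-state energy, so classically optimizing the circuit parameters approaches it from above:

```latex
E(\theta) \;=\; \langle \psi(\theta) \,|\, \hat{H} \,|\, \psi(\theta) \rangle \;\ge\; E_0 .
```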
Beyond The Valley
The classical software industry is concentrated in Silicon Valley. However, quantum software is far more globally distributed, tapping into academic centers of excellence and large-scale government funding. Cambridge (UK) is home to Cambridge Quantum Computing and Riverlane, which between them have raised almost $100 million. Other well-funded start-ups include Qubit Pharmaceuticals (France), Multiverse Computing (Spain), Q-CTRL (Australia), 1QBit (Canada) and Classiq (Israel). The industry will benefit tremendously from this scale and diversity.
Path To Commercial Success
Quantum computing is experiencing a virtuous cycle. Continued progress in improving qubit counts, fidelity and applications is driving interest from early commercial and government adopters who want to get ahead of the learning curve and their competitors as the technology matures. It's also driving substantial increases in venture investment. Governments, which view quantum as a strategic national priority, are following suit with multibillion-dollar funding programs.
Software startups are raising large funding rounds, driven by a land grab for limited talent, the need to build deep, defensible IP portfolios, a desire to position themselves as leading players in the emerging quantum software space and the likely long path to breakeven. One could argue that investment is far ahead of current commercial demand; however, the potentially transformative impact of quantum computing is so profound that investors are willing to place substantial bets today for the promise of outsize returns tomorrow.
Quantum computing promises to revolutionize many industries. The rich evolving ecosystem of quantum software providers will enable early movers to quickly climb the learning curve, differentiate from their competition and achieve exponential benefits to their business.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
Read more here:
Welcome To The Future: Navigating The Rich, Intertwined Quantum Software Ecosystem - Forbes
Posted in Quantum Physics
Comments Off on Welcome To The Future: Navigating The Rich, Intertwined Quantum Software Ecosystem – Forbes
Physicists have measured gravity on the smallest scale ever – New Scientist News
Posted: at 12:19 pm
By Leah Crane
Gravity is measured between two gold masses that are brought close to each other
Tobias Westphal, University of Vienna
Physicists have measured the gravitational field of a smaller object than ever before, a gold sphere with a mass of about 90 milligrams. This could help us understand how gravity fits together with quantum mechanics on the smallest scales.
We have long known that our understanding of gravity is missing something: it doesn't explain how dark energy accelerates the expansion of the universe, nor does it fit with quantum mechanics, which describes how objects behave on very small scales. One way to try to fit the pieces together is to observe how small objects interact with gravity.
Markus Aspelmeyer at the University of Vienna in Austria and his colleagues have taken this to the smallest extreme yet. They used a specialised horizontal pendulum to measure the gravitational field of a tiny gold sphere with a radius of about 1 millimetre.
They wiggled the gold sphere back and forth by 1.6 millimetres while it was near a similar gold sphere attached to the pendulum. The gravity of the first sphere moved the second one by just a few nanometres, which then swung the pendulum.
Measuring how much the pendulum moved allowed them to calculate the gravitational field of the first gold sphere, the least massive object whose gravity has ever been measured.
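A back-of-envelope Newtonian estimate shows the scale involved (the masses are from the article; the centre-to-centre separation is an assumed value for illustration):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m1 = m2 = 90e-6        # kg: two ~90 mg gold spheres
r = 2.5e-3             # m: assumed centre-to-centre separation

F = G * m1 * m2 / r**2
print(f"F ~ {F:.1e} N")   # ~9e-14 N, i.e. tens of femtonewtons
```

Forces that small are why the shielding and timing precautions described next were necessary.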
To measure these tiny gravitational effects, their experiment had to be extraordinarily sensitive. The researchers shielded it from electromagnetic forces using a Faraday cage between the gold spheres, and they performed the experiment in the middle of the night, during the least seismically active time of year (around Christmas), in a vacuum so that gas molecules bouncing off one another wouldn't affect the results.
"We even detected the first finisher at the Vienna marathon, which ends 2 kilometres outside our lab. That's how sensitive the experiment is," says Aspelmeyer. To test the most fundamental properties of gravity, it will need to be even more sensitive; the researchers are already working on that, including a proposed experimental set-up in which the spheres and pendulum levitate.
"It turns out that when you do experiments that test gravity on very small scales with very small masses, you can, in theory, probe both dark energy and quantum physics," says Aspelmeyer. "This experiment is a door-opener." Someday, we may even be able to directly measure the gravitational forces at work in a quantum system in an attempt to unify gravity and the quantum world, he says.
Journal reference: Nature, DOI: 10.1038/s41586-021-03250-7
Sign up to Lost in Space-Time, a free monthly newsletter on the weirdness of reality
More on these topics:
See the rest here:
Physicists have measured gravity on the smallest scale ever - New Scientist News
Posted in Quantum Physics
Comments Off on Physicists have measured gravity on the smallest scale ever – New Scientist News
Mindblowing Approach to Electromobility: Is nanoFlowcell Overtaking the Future? – autoevolution
Posted: March 7, 2021 at 1:10 pm
Twenty-five years have passed since Nunzio La Vecchia completed his private studies in quantum mechanics and quantum physics and founded Juno Technologies, the ancestor of nanoFlowcell Holdings. A lot happened between 1996 and 2016 on the projects nanoFlowcell initiated, though rather discreetly. Long story short, let's jump to the Quantino electric prototype, first shown to the public at the 2015 Geneva Motor Show.
The Quantino does away with traditional battery storage, and its electric propulsion system allegedly delivers a range of over 1,000 km (620 miles). The nanoFlowcell technology at the energetic heart of the Quantino (which can be regarded as a kind of cell and generator in one) is based on a special membrane.
The material it is made of allows the transfer of electrically charged particles without the substances on either side actually mixing, and that flow of charge is the electric current. In chemistry terms, the membrane supports a "redox" reaction.
The system supplies 48 V to four electric motors that together develop some 136 HP (4 x 25 kW). The Quantino's top speed is 200 kph (124 mph), and its range should reach about 1,000 km - which sounds like a "goodnight" song even for the most fuel-efficient turbodiesels.
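The headline numbers are easy to sanity-check (illustrative arithmetic only; 735.5 W per metric horsepower):

```python
P = 4 * 25_000           # W: four motors at 25 kW each
print(P / 735.5)         # ~135.9 metric HP, matching the quoted 136 HP
print(P / 48)            # ~2083 A drawn at 48 V for full power
```

That multi-thousand-amp figure is the striking consequence of the low-voltage design.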
The Quantino is 3.91 m long and seats four, figures that place it in the European B-segment. The manufacturer claims the Quantino was conceived as an "electric car for everyone."
As the potential of the electrolytic liquid inside the cell drops, two tanks (one holding positively and one negatively charged liquid) supply fresh electrolyte, fed by gravity. The spent, neutralized liquids are ejected into two other tanks.
So far, test drives of the Quantino have apparently covered about five hundred thousand kilometers (310,000 miles), which is like going 12.5 times around the world. Many parts of the car had to be fixed or replaced over time, yet the nanoFlowcell unit remained the same.
"We are so convinced of the nanoFlowcell flow cell's reliability that we guarantee a minimum of 50,000 operating hours. The nanoFlowcell unit will probably survive any other component in the car. It is predestined for long-distance runners such as buses or trucks, but also for a fail-safe stationary energy supply," says La Vecchia.
Not long ago, nanoFlowcell started looking for a 500,000-square-meter plot of industrial land suitable for a new pilot facility and the QUANT-City innovation centre. As Nunzio La Vecchia declared, "We have achieved a laboratory breakthrough in our bi-ION research that has prepared the way for mass production of the electrolytes, which is a key prerequisite for the market success of our flow cell technology. Supported by investors, we will now build a pilot facility replicating the entire value-creation cycle of our technology. It will serve as a blueprint for further innovation centres for our nanoFlowcell technology to be established worldwide."
Excerpt from:
Mindblowing Approach to Electromobility: Is nanoFlowcell Overtaking the Future? - autoevolution
Posted in Quantum Physics
Comments Off on Mindblowing Approach to Electromobility: Is nanoFlowcell Overtaking the Future? – autoevolution
Physicists Just Found 4 New Subatomic Particles That May Test The Laws of Nature – ScienceAlert
Posted: at 1:10 pm
This month is a time to celebrate. CERN has just announced the discovery of four brand new particles at the Large Hadron Collider (LHC) in Geneva.
This means that the LHC has now found a total of 59 new particles, in addition to the Nobel prize-winning Higgs boson, since it started colliding protons (particles that, along with neutrons, make up the atomic nucleus) in 2009.
Excitingly, while some of these new particles were expected based on our established theories, some were altogether more surprising.
The LHC's goal is to explore the structure of matter at the shortest distances and highest energies ever probed in the lab, testing our current best theory of nature: the Standard Model of Particle Physics. And the LHC has delivered the goods: it enabled scientists to discover the Higgs boson, the last missing piece of the model. That said, the theory is still far from being fully understood.
One of its most troublesome features is its description of the strong force, which holds the atomic nucleus together. The nucleus is made up of protons and neutrons, which are in turn each composed of three tiny particles called quarks (there are six different kinds of quarks: up, down, charm, strange, top and bottom).
If we switched the strong force off for a second, all matter would immediately disintegrate into a soup of loose quarks, a state that existed for a fleeting instant at the beginning of the universe.
Don't get us wrong: the theory of the strong interaction, pretentiously called "quantum chromodynamics", is on very solid footing. It describes how quarks interact through the strong force by exchanging particles called gluons. You can think of gluons as analogues of the more familiar photon, the particle of light and carrier of the electromagnetic force.
However, the way gluons interact with quarks makes the strong force behave very differently from electromagnetism. While the electromagnetic force gets weaker as you pull two charged particles apart, the strong force actually gets stronger as you pull two quarks apart.
As a result, quarks are forever locked up inside particles called hadrons, particles made of two or more quarks, a category that includes protons and neutrons. Unless, of course, you smash them open at incredible speeds, as we are doing at CERN.
To complicate matters further, all the particles in the standard model have antiparticles, which are nearly identical to themselves but with the opposite charge (or other quantum property). If you pull a quark out of a proton, the force will eventually be strong enough to create a quark-antiquark pair, with the newly created quark going into the proton.
You end up with a proton and a brand new "meson", a particle made of a quark and an antiquark. This may sound weird but according to quantum mechanics, which rules the universe on the smallest of scales, particles can pop out of empty space.
This has been shown repeatedly by experiments: we have never seen a lone quark. An unpleasant feature of the theory of the strong force is that calculations of what would be a simple process in electromagnetism can end up being impossibly complicated. We therefore cannot (yet) prove theoretically that quarks can't exist on their own.
Worse still, we can't even calculate which combinations of quarks would be viable in nature and which would not.
Illustration of a tetraquark. (CERN)
When quarks were first discovered, scientists realized that several combinations should be possible in theory. These included pairs of quarks and antiquarks (mesons); three quarks (baryons); three antiquarks (antibaryons); two quarks and two antiquarks (tetraquarks); and four quarks and one antiquark (pentaquarks), as long as the number of quarks minus antiquarks in each combination was a multiple of three.
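The counting rule is simple enough to enumerate (a quick sketch; names for the small allowed combinations follow the list above):

```python
from itertools import product

names = {(1, 1): "meson", (3, 0): "baryon", (0, 3): "antibaryon",
         (2, 2): "tetraquark", (4, 1): "pentaquark", (1, 4): "antipentaquark"}

# A combination of q quarks and a antiquarks is allowed when (q - a) % 3 == 0.
for q, a in product(range(6), repeat=2):
    if 0 < q + a <= 5 and (q - a) % 3 == 0:
        print(q, a, names.get((q, a), "allowed"))
```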
For a long time, only baryons and mesons were seen in experiments. But in 2003, the Belle experiment in Japan discovered a particle that didn't fit in anywhere. It turned out to be the first of a long series of tetraquarks.
In 2015, the LHCb experiment at the LHC discovered two pentaquarks.
The four new particles we've discovered recently are all tetraquarks with a charm quark pair and two other quarks. All these objects are particles in the same way as the proton and the neutron are particles. But they are not fundamental particles: quarks and electrons are the true building blocks of matter.
Is a pentaquark tightly (above) or weakly bound (see image below)? (CERN)
The LHC has now discovered 59 new hadrons. These include the tetraquarks most recently discovered, but also new mesons and baryons. All these new particles contain heavy quarks such as "charm" and "bottom".
These hadrons are interesting to study. They tell us what nature considers acceptable as a bound combination of quarks, even if only for very short times.
They also tell us what nature does not like. For example, why do all tetra- and pentaquarks contain a charm-quark pair (with just one exception)? And why are there no corresponding particles with strange-quark pairs? There is currently no explanation.
Is a pentaquark a molecule? A meson (left) interacting with a proton (right). (CERN)
Another mystery is how these particles are bound together by the strong force. One school of theorists considers them to be compact objects, like the proton or the neutron.
Others claim they are akin to "molecules" formed by two loosely bound hadrons. Each newly found hadron allows experiments to measure its mass and other properties, which tell us something about how the strong force behaves. This helps bridge the gap between experiment and theory. The more hadrons we can find, the better we can tune the models to the experimental facts.
These models are crucial to achieve the ultimate goal of the LHC: find physics beyond the standard model. Despite its successes, the standard model is certainly not the last word in the understanding of particles. It is for instance inconsistent with cosmological models describing the formation of the universe.
The LHC is searching for new fundamental particles that could explain these discrepancies. These particles could be visible at the LHC, but hidden in the background of particle interactions. Or they could show up as small quantum mechanical effects in known processes.
In either case, a better understanding of the strong force is needed to find them. With each new hadron, we improve our knowledge of nature's laws, leading us to a better description of the most fundamental properties of matter.
Patrick Koppenburg, Research Fellow in Particle Physics, Dutch National Institute for Subatomic Physics and Harry Cliff, Particle physicist, University of Cambridge.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Read more:
Physicists Just Found 4 New Subatomic Particles That May Test The Laws of Nature - ScienceAlert
Posted in Quantum Physics
Comments Off on Physicists Just Found 4 New Subatomic Particles That May Test The Laws of Nature – ScienceAlert
New History of the Physics Department by Raj Gupta and Paul Sharrah Published – University of Arkansas Newswire
Posted: March 3, 2021 at 1:57 am
Cover design by UA Printing Services
A Centennial History of the Physics Department, University of Arkansas.
A new history of the Department of Physics, titled Acoustics to Quantum Materials: A Centennial History of the Department of Physics, University of Arkansas, and authored by Rajendra Gupta and Paul C. Sharrah, has been published.
The Department of Physics was born during the 1907-08 academic year, when the first full-time physics teacher was appointed and a syllabus for a physics major was defined for the first time. The department celebrated its centennial in April 2008. For 35 years, from 1872 to 1907, physics was taught by teachers whose primary discipline was not physics but, for example, chemistry, applied mathematics, mechanic arts and engineering, or even biology and geology. While the primary emphasis of this book is on the hundred years from 1907 to 2007, for completeness the previous 35 years are covered in two prologues. The period 2008 to 2018 is summarized in an epilogue. The history includes the perspective of the authors, both emeritus professors of physics, who were eyewitnesses to the events unfolding in the department over a combined period of 76 years.
The book traces the evolution of the department from a one-person department with no physics majors in 1907, to the 1920s, when it expanded to three faculty and graduated its first major; to the 1930s, when Professor Lloyd Ham established the first research laboratory in physics; to the 1940s, when the department's ambitious vision of starting a credible research program was interrupted by the Second World War and it had to teach an estimated 3,500 army trainees, but that was followed by a blossoming of its physics majors program; to the 1950s, when a credible research program did start and the department's Ph.D. program was approved; to the 1960s, when post-Sputnik government support for research helped the department expand its research efforts; to the evolution of the department's research in many diverse areas, including atomic and molecular physics, quantum optics, biophysics and condensed matter physics.
The first research laboratory in physics, established by Professor Ham, was in acoustics. Today, the largest research effort is in the area of quantum materials, which explains the title of the book.
While the department had a modest beginning, starting with just one teacher and no majors, today it can claim its rightful place among the noteworthy physics departments at U.S. public institutions.
An electronic copy of the book can be downloaded from the university repository: scholarworks.uark.edu/physpub/23.
See original here:
Posted in Quantum Physics
Comments Off on New History of the Physics Department by Raj Gupta and Paul Sharrah Published – University of Arkansas Newswire
New research indicates the whole universe could be a giant neural network – The Next Web
Posted: at 1:57 am
The core idea is deceptively simple: every observable phenomenon in the entire universe can be modeled by a neural network. And that means, by extension, the universe itself may be a neural network.
Vitaly Vanchurin, a professor of physics at the University of Minnesota Duluth, published an incredible paper last August entitled "The World as a Neural Network" on the arXiv pre-print server. It managed to slide past our notice until today, when Futurism's Victor Tangermann published an interview with Vanchurin discussing the paper.
The big idea
According to the paper:
We discuss a possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: trainable variables (e.g. bias vector or weight matrix) and hidden variables (e.g. state vector of neurons).
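In the plainest terms (an illustrative sketch, not Vanchurin's actual equations), those two kinds of degrees of freedom look like this in a toy network:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(4, 4))    # trainable variable: weight matrix
b = rng.normal(size=4)         # trainable variable: bias vector
x = rng.normal(size=4)         # hidden variable: state vector of the neurons

for _ in range(3):             # the dynamics: repeated activation updates
    x = np.tanh(W @ x + b)
print(x)
```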
At its most basic, Vanchurin's work here attempts to explain away the gap between quantum and classical physics. We know that quantum physics does a great job of explaining what's going on in the universe at very small scales. When we're, for example, dealing with individual photons, we can dabble with quantum mechanics at an observable, repeatable, measurable scale.
But when we start to pan out, we're forced to use classical physics to describe what's happening, because we sort of lose the thread when we make the transition from observable quantum phenomena to classical observations.
The argument
The root problem with sussing out a theory of everything (in this case, one that defines the very nature of the universe itself) is that it usually ends up replacing one proxy-for-god with another. Where theorists have posited everything from a divine creator to the idea that we're all living in a computer simulation, the two most enduring explanations for our universe are based on distinct interpretations of quantum mechanics. These are called the "many worlds" and "hidden variables" interpretations, and they're the ones Vanchurin attempts to reconcile with his world-as-a-neural-network theory.
To this end, Vanchurin concludes:
In this paper we discussed a possibility that the entire universe on its most fundamental level is a neural network. This is a very bold claim. We are not just saying that the artificial neural networks can be useful for analyzing physical systems or for discovering physical laws, we are saying that this is how the world around us actually works. With this respect it could be considered as a proposal for the theory of everything, and as such it should be easy to prove it wrong. All that is needed is to find a physical phenomenon which cannot be described by neural networks. Unfortunately (or fortunately) it is easier said than done.
Quick take: Vanchurin specifically says he's not adding anything to the many worlds interpretation, but that's where the most interesting philosophical implications lie (in this author's humble opinion).
If Vanchurin's work pans out in peer review, or at least leads to a greater scientific fixation on the idea of the universe as a fully functioning neural network, then we'll have found a thread to pull on that could put us on the path to a successful theory of everything.
If we're all nodes in a neural network, what's the network's purpose? Is the universe one giant, closed network, or is it a single layer in a grander network? Or perhaps we're just one of trillions of other universes connected to the same network. When we train our neural networks, we run thousands or millions of cycles until the AI is properly trained. Are we just one of an innumerable number of training cycles for some larger-than-universal machine's greater purpose?
You can read the whole paper here on arXiv.
Published March 2, 2021 19:18 UTC
Continue reading here:
New research indicates the whole universe could be a giant neural network - The Next Web
Posted in Quantum Physics
Comments Off on New research indicates the whole universe could be a giant neural network – The Next Web
Roivant Grows Computational Drug Discovery Engine with Acquisition of Silicon Therapeutics – Business Wire
Posted: at 1:57 am
NEW YORK & BOSTON & BASEL, Switzerland--(BUSINESS WIRE)--Roivant Sciences today announced it has entered into a definitive agreement to acquire Silicon Therapeutics for $450 million in Roivant equity, with additional potential regulatory and commercial milestone payments.
Silicon Therapeutics has built a proprietary industry-leading computational physics platform for the in silico design and optimization of small molecule drugs for challenging disease targets. The platform includes custom methods based on quantum mechanics, molecular dynamics and statistical thermodynamics to overcome critical bottlenecks in drug discovery projects, such as predicting binding energies and conformational behavior of molecules.
Silicon Therapeutics' computational platform is powered by a proprietary supercomputing cluster and custom hardware enabling accurate all-atom simulations at biologically meaningful timescales. This computational platform is tightly integrated with experimental laboratories equipped for biophysics, medicinal chemistry and biology in order to facilitate the rapid progression of drug candidates by augmenting simulations with biophysical data. The company has used these capabilities to discover multiple drug candidates.
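For readers unfamiliar with the jargon, "predicting binding energies" in platforms like this typically rests on free-energy estimates computed from sampled simulations. Below is a minimal sketch of one classic estimator, the Zwanzig (free-energy perturbation) formula; it is an illustrative NumPy toy under stated assumptions, not Silicon Therapeutics' code, and the sampled energy differences are stand-ins for force-field output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Zwanzig / free-energy-perturbation estimator:
#   dF = -kT * ln( < exp(-(U1 - U0) / kT) >_0 )
# where the average runs over configurations sampled in state 0.
kT = 0.593  # kcal/mol at roughly 298 K

# Stand-in for per-frame energy differences U1 - U0 that a real
# platform would evaluate with a force field along an MD trajectory.
dU = rng.normal(loc=1.0, scale=0.5, size=10_000)

dF = -kT * np.log(np.mean(np.exp(-dU / kT)))
print(f"estimated dF = {dF:.3f} kcal/mol")
```

The practical difficulty is sampling enough relevant conformations for the exponential average to converge, which is one reason the release stresses all-atom simulations at biologically meaningful timescales on dedicated hardware.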
The acquisition of Silicon Therapeutics bolsters and complements Roivant's targeted protein degradation (TPD) platform. That platform will be powered by VantAI's advanced machine learning models trained on proprietary degrader-specific experimental data and by Silicon Therapeutics' proprietary computational physics capabilities, which help address many of the modality-specific challenges of degrader design and optimization. Integrating Silicon Therapeutics and VantAI will enable Roivant to distinctively capture the power of both computational physics and machine learning-based approaches to drug design; for instance, by incorporating proprietary computational physics simulations as training data for VantAI's degrader-specific deep learning models.
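The division of labor described here, expensive physics simulations generating labels that a cheaper learned model is trained on, can be sketched in a few lines. The example below is hypothetical throughout (invented feature and variable names, and a plain ridge regressor standing in for VantAI's deep learning models):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pipeline: physics simulations label a few hundred
# compounds with a binding energy; a cheap surrogate model learns to
# predict that label from fast-to-compute molecular descriptors.
n, d = 500, 16
descriptors = rng.normal(size=(n, d))            # per-compound features
true_w = rng.normal(size=d)
sim_energy = descriptors @ true_w + rng.normal(scale=0.1, size=n)
# ^ stand-in for binding energies from free-energy simulations

# Fit a ridge-regularized linear surrogate in closed form.
lam = 1e-2
A = descriptors.T @ descriptors + lam * np.eye(d)
w = np.linalg.solve(A, descriptors.T @ sim_energy)

# Score unseen candidates without running new simulations.
candidates = rng.normal(size=(5, d))
print(candidates @ w)  # predicted binding energies
```

The rationale for such a split is throughput: once trained, the surrogate scores candidate compounds far faster than running a new simulation for each one.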
The combination of Silicon Therapeutics and VantAI also gives Roivant distinctive advantages in designing other types of novel small molecule drugs against difficult targets, such as allosteric inhibitors, molecular glues and high-affinity ligands.
Silicon Therapeutics' drug discovery efforts are led by Drs. Woody Sherman, Huafeng Xu, and Chris Winter, who will join Roivant's drug discovery leadership.
Dr. Sherman is a recognized leader in computational chemistry and biomolecular simulations who spent 12 years as a senior scientific executive at Schrödinger, where he served as vice president and global head of applications science. Dr. Sherman is an authority in the emerging field of physics-driven drug design who has developed novel methods for free energy simulations, conformational modulation, virtual screening, improved force fields, lead optimization and precision selectivity design.
Dr. Huafeng Xu is a pioneer in novel molecular dynamics methods who spent 12 years at D. E. Shaw Research, where he led the development of methods and software for free energy calculations that are now widely used in the pharmaceutical industry, including the Anton chip and the Desmond software.
Dr. Chris Winter is an accomplished drug discovery biologist who has delivered 11 targeted cancer therapies into clinical development. Before joining Silicon Therapeutics, Dr. Winter served as Sanofi Oncology's head of discovery biology. He joined Sanofi from Blueprint Medicines, where he served as head of biology. Prior to Blueprint, Dr. Winter held senior research positions at Merck Research Laboratories and Exelixis.
"We are delighted to integrate Silicon Therapeutics into Roivant as we continue to expand our capabilities in computationally-powered drug discovery," said Matt Gline, chief executive officer of Roivant Sciences. "We intend to leverage our established development apparatus as we rapidly advance promising compounds from our drug discovery engine into clinical studies."
"Silicon Therapeutics was founded with a vision of transforming the pharmaceutical industry through use of technology," said Lanny Sun, co-founder and chief executive officer of Silicon Therapeutics. "By joining forces with Roivant, we can significantly accelerate making this vision a reality. Roivant has an impressive track record in clinical execution and building and deploying technology platforms to power pharmaceutical research, development and commercialization."
"The combination of Silicon Therapeutics' integrated approach, platform and highly capable team with Roivant's technologies and commitment to transforming the pharmaceutical industry represents a new and exciting paradigm in drug discovery and development," said Roger Pomerantz, M.D., F.A.C.P., chairman of the board of directors of Silicon Therapeutics.
The acquisition is subject to customary closing conditions including receipt of requisite regulatory approvals.
About Roivant Sciences
Roivant's mission is to improve the delivery of healthcare to patients by treating every inefficiency as an opportunity. Roivant develops transformative medicines faster by building technologies and developing talent in creative ways, leveraging the Roivant platform to launch "Vants": nimble and focused biopharmaceutical and health technology companies.
For more information, please visit http://www.roivant.com.
About Silicon Therapeutics
Silicon Therapeutics is a fully integrated drug design and development company focused on small molecule therapeutics. Silicon Therapeutics' proprietary physics-driven drug design platform combines quantum physics, statistical thermodynamics, molecular simulations, a dedicated HPC supercomputing cluster, purpose-built software, in-house laboratories and clinical development capabilities. The platform was built from the ground up to address difficult targets using physics-based simulations and experiments to pioneer a new path for drug design with the prime goal of delivering novel medicines to improve the lives of patients.
Silicon Therapeutics is currently the only company that owns the entire spectrum of proprietary physics-driven drug discovery, from chip-to-clinic. The company's lead program is a highly differentiated small molecule Stimulator of Interferon Genes (STING) agonist for the treatment of cancer, which entered the clinic in November 2020. The company's headquarters are located in Boston. To learn more about Silicon Therapeutics, please visit our website at http://www.silicontx.com or follow us on LinkedIn, Twitter and YouTube.
Read more here:
Posted in Quantum Physics
Comments Off on Roivant Grows Computational Drug Discovery Engine with Acquisition of Silicon Therapeutics – Business Wire