The Prometheus League
Breaking News and Updates
Category Archives: Quantum Physics
Could There Be More Than One Dimension of Time? – Interesting Engineering
Posted: September 4, 2021 at 6:04 am
It has long been proposed that when our universe came into being, three dimensions of space and one dimension of time sprang forth from the big bang: the event that made everything we see around us possible - from the hydrogen and helium that fuse to make stars and help planets coalesce, to black holes and galaxies, and even our own existence.
However, the traditional belief that the universe manifested just three dimensions of space and one of time has been the subject of some debate. In fact, throwing a couple of extra dimensions into the mix actually solves several of the trickier problems we're still working to make sense of within the framework of the standard model of physics.
So could our universe really have more than one dimension of time?
Put away your hat, make sure your restraints are secure, and hold on tight: This is going to be an interesting (and pretty complex) ride through physics.
Einstein is responsible for much of what we know about how the universe works on a macroscale. His theories of general and special relativity play a big role in shaping our understanding of physics as a whole, but string theory and quantum physics, which are very important to the discussion we're about to have, were still in their infancy when Einstein died. Since then, both have become fully fledged fields of study, but certain aspects of each cannot be reconciled with Einstein's work. Before we get into all that fun stuff: what is a dimension, and how do we know dimensions exist?
Let's start with the three dimensions we deal with in our everyday lives; they can be summarized as length, width, and height. A straight line would be considered one-dimensional: it merely has length, but no thickness, and it can extend forever in both directions. Two-dimensional objects - for example, something flat like a circle or a square - have width and height, but no depth or "thickness" to them. Anything that has length, width, and height is considered three-dimensional. Time, as we know, is the somewhat more elusive fourth dimension... hence why the fabric of the universe is formally called spacetime instead of just plain ol' space.
This is where the discussion of string theory comes into play.
Traditional physics asserts that the universe comprises three spatial dimensions and one temporal dimension. The universe itself is mostly empty by volume, and only around 5 percent of its contents is thought to be "normal matter" (think protons, electrons, neutrons, quarks, etc.); roughly 27 percent is made up of something called dark matter, while the remaining 68 percent is attributable to a mysterious unknown force believed to be causing the universe's expansion to accelerate, known as dark energy.
String theory, in the simplest possible terms, tells us to imagine that the very fabric of the universe and everything in it is composed not of point-like particles, but of incomprehensibly tiny strings - much smaller than even the smallest atom.
These strings vibrate at their own special frequencies, making it conceivable that what we perceive to be point-like particles are actually not particles at all, but strings so tiny they are just 10^-33 centimeters long. Written the long way, that would be a decimal point followed by 32 zeroes and then a 1. Some have theorized that the length of a string would have the same ratio to the diameter of a proton as the proton has to the diameter of the solar system.
There are two different types of these strings: one is open and the other is closed - though both are too small to be observed by current technology. As the name suggests, open strings are like wavy lines that don't touch ends, whereas the opposite is true for closed strings - they form loops and do not have open ends.
However, in order for string theory to work, the mathematics dictates that additional dimensions of space and time must exist. Should we find proof of these other dimensions - the theory's different versions require between 10 and 26 dimensions in total - it would not only change our very understanding of quantum physics, but it would bring us one step closer to a cohesive and credible "theory of everything": one in agreement with both quantum physics and macro-scale physics, such as the forces of nature and gravity. That is no easy feat.
As per usual, a lot of these theoretical arguments circle back around to Einstein's work. Setting aside Einstein's belief that time might simply be a really elaborate illusion, the answer to whether there could be additional dimensions of time is a firm maybe. In fact, through studying how the fundamental forces of nature and the laws of physics are affected by time, some astronomers believe that throwing in at least one extra dimension of time would solve one of the biggest remaining cosmological bugaboos.
You see, we still don't know exactly what gravity is, or how it fully affects matter seen and unseen. Our best guess, once again an Einsteinian theory, says that gravity is the force created by the warping of spacetime. The larger the object is, the more it warps the spacetime around it, and the stronger its gravitational pull becomes. That would be all well and good if we didn't have to marry our theories of gravity with quantum theory, but we do, and therein lies the problem.
Our current understanding of gravity is simply not consistent with other elements of quantum mechanics. The remaining forces of nature - the electromagnetic force and the strong and weak nuclear forces - all fit into the framework of the micro-universe, but gravity does not. It is hoped that string theory can help solve that mystery. At least one physicist argues that time isn't merely one-dimensional.
Itzhak Bars, a theoretical physicist at USC College, told New Scientist: "There isn't just one dimension of time, there are two. One whole dimension has until now gone entirely unnoticed by us."
He also described how extra dimensions of space could exist in plain sight, saying: "Extra space dimensions aren't easy to imagine - in everyday life, nobody ever notices more than three. Any move you make can be described as the sum of movements in three directions: up-down, back and forth, or sideways. Similarly, any location can be described by three numbers (on Earth: latitude, longitude, and altitude), corresponding to space's three dimensions. Other dimensions could exist, however, if they were curled up in little balls, too tiny to notice. If you moved through one of those dimensions, you'd get back to where you started so fast you'd never realize that you had moved."
"An extra dimension of space could really be there, it's just so small that we don't see it."
Have you made it this far? Congratulations. The conclusion is worth wading through all these complicated ideas to reach.
Physics mostly argues that time must be a dimension, but there are certainly physicists who believe time is just a human construct. Others argue there must be more dimensions of time than previously believed. What would it mean for physics if that were true?
Well, for Bars, it would mean, "The green light to the idea of time travel. If time is one-dimensional, like a straight line, the route linking the past, present, and future is clearly defined. Adding another dimension transforms time into a two-dimensional plane, like a flat sheet of paper. On such a plane, the path between the past and future would loop back on itself, allowing you to travel back and forwards in time. That would permit all kinds of absurd situations, such as the famous grandfather paradox. In this scenario, you could go back and kill your grandfather before your mother was a twinkle in his eye, thereby preventing your own birth."
Two-dimensional time gives every appearance of being a non-starter. Yet in 1995, when Bars found hints in M-theory (a theory that unifies all consistent versions of superstring theory) that an extra time dimension was possible, he was determined to take a closer look. When he did, Bars found that a key mathematical structure common to all 11 of the posited dimensions in M-theory (10 dimensions of space and 1 of time) remained intact when he added an extra dimension. On one condition, says Bars: the extra dimension had to be time-like.
What do you think about string theory, quantum gravity, and extra dimensions of time?
See original here:
Could There Be More Than One Dimension of Time? - Interesting Engineering
Posted in Quantum Physics
Would Doctor Strange think of using a quantum crystal to reveal the secrets of dark matter? – SYFY WIRE
Posted: at 6:04 am
Doctor Strange already has the Time Stone, but if he lived in our universe, he would probably be after a quantum crystal. So maybe it can't transport you through space and time or be worn in a necklace that looks like a mesmerizing cosmic eye.
What would make a quantum crystal desirable even in the Marvel universe is its hypersensitivity: it can pick up on electromagnetic signals so faint that it might one day detect hypothetical dark matter particles called axions. That means it could prove the existence of (still hypothetical) dark matter. Magic? More like quantum mechanics.
There was just one issue with transforming an ordinary beryllium crystal into something borderline paranormal. Quantum noise was in the way. Researcher Diego Barbarena and his colleagues, led by atomic physicist Ana Maria Rey of JILA, realized that quantum entanglement was the only way out.
"Creation of quantum entanglement between our system and our probe allowed us to avoid the effects of the noise on our readout, and hence we end up with just the signal," he told SYFY WIRE. "Entanglement allows us to avoid some of the sources of noise present on our system."
Quantum entanglement of two particles means that their states remain correlated no matter how far apart they are. This is the same idea behind Hawking radiation, in which one entangled particle escapes a black hole while the other falls in; it is thought that the escaping particle may be able to tell us what actually happens inside a black hole. Entanglement also provides a way around the limits set by the Heisenberg uncertainty principle, which says that the more precisely you observe one property of a particle, the less you can know about a complementary property.
Creating a quantum crystal involved using a system of electrodes and magnetic fields to trap beryllium ions, overcoming their usual tendency to repel each other. Without that repulsion, the atoms arranged themselves into a thin, flat crystal. The motions of the beryllium ions were entangled with their spins. Because the beryllium atoms were now able to move as a whole when they felt a signal, the entire crystal would vibrate.
So what makes this crystal so hypersensitive, which would be highly desirable for a superhero who is constantly breaking the laws of physics?
"The crystal consists of many ions, which like to move in an oscillatory fashion with a certain natural frequency," Barbarena said. "If you hit it with something that oscillates at that same frequency, the effect on the motion is going to be greater than if you hit it with something oscillating at a lower or higher frequency."
Another reason this quantum crystal can pick up on such faint signals is its number of ions. The crystal's 150 ions make it seem as if its response is being measured 150 times over. Because the thing being measured is the motion of the crystal in response to an electromagnetic signal, and a signal affects the ion spin entangled with it, the researchers were looking for signals that oscillated at the same frequency as the ions. This is the resonance condition: when a signal and a detector are oscillating at the same frequency.
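That resonant response is the textbook behavior of a driven harmonic oscillator. As a rough illustration - a minimal Python sketch with made-up numbers, not the experiment's actual parameters - the steady-state amplitude peaks sharply when the drive frequency matches the natural frequency:

```python
import numpy as np

# Steady-state amplitude of a damped, driven harmonic oscillator:
#   A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (gamma*w)^2)
w0 = 1.0        # natural frequency of the collective mode (arbitrary units)
gamma = 0.01    # damping rate; a narrow linewidth means a sharp resonance
drive = 1.0     # drive strength per unit mass (F0/m)

for w in [0.5, 0.9, 0.99, 1.0, 1.01, 1.1, 2.0]:
    A = drive / np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)
    print(f"drive frequency {w:4.2f} -> response amplitude {A:8.2f}")

# On resonance (w = w0), the response is roughly 1/gamma times larger than
# far off resonance - which is why a sharp mechanical mode makes such a
# sensitive detector.
```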
"By turning on that magnetic field, and assuming axions are present, an electric field signal would be automatically generated," said Barbarena.
At least hypothetically, detecting dark matter this way would require axions to morph into photons when they run into a magnetic field. Axions are believed to clump together into an invisible mass rather than just float around as individual particles, which is why scientists think entire globs of dark matter exist. But dark matter is supposedly extremely light, so it helps to have something that can pick up the faintest signals. This crystal is already ten times more sensitive than previous detectors, and if axions did turn into photons, it would probably find them.
You can probably imagine how useful this would be for Doctor Strange trying to sense enemies on the prowl, so long as they gave off some sort of electromagnetic signal.
More here:
Posted in Quantum Physics
Large-Scale Simulations Of The Brain May Need To Wait For Quantum Computers – Forbes
Posted: at 6:04 am
Will quantum computer simulations crack open our understanding of the biological brain?
Looking back at the history of computers, it's hard to overstate the rate at which computing power has scaled in the course of a single human lifetime. And yet, existing classical computers have fundamental limits. If quantum computers are successfully built and eventually come fully online, they will be able to tackle certain classes of problems that elude classical computers. And they may be the computational tool needed to fully understand and simulate the brain.
As of this writing, the fastest supercomputer in the world is Japan's Fugaku supercomputer, developed jointly by Riken and Fujitsu. It can perform 442 petaFLOPS: 442 quadrillion floating-point operations per second.
Let's break that number down to get as intuitive a grasp as possible of what it means.
A floating-point number is a way to express, or write down, a real number - real in the mathematical sense - with a fixed amount of precision. Real numbers are all the continuous numbers from the number line: 5, -23, 7/8, and numbers like pi (3.1415926...) that go on forever are all real numbers. The problem is that a computer, which is digital, has a hard time internally representing continuous numbers. One way around this is to specify a limited number of digits, and then specify how big or small the actual number is by some power of a base. For example, the number 234 can be written as 2.34 × 10^2, because 2.34 × 100 equals 234. Floating-point numbers specify a fixed number of significant digits the computer must store in its memory; this fixes the accuracy of the number. That matters because any mathematical operation (e.g., addition, subtraction, multiplication, or division) on the fixed-accuracy version of a real number generates small errors in the results, which propagate (and can grow) throughout other calculations. But as long as the errors remain small, it's okay.
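A minimal Python illustration of that trade-off, using ordinary double-precision floats (nothing here is specific to supercomputers):

```python
# 0.1 has no exact binary representation, so each addition below carries
# a tiny rounding error that accumulates across repeated operations.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)             # 0.9999999999999999, not exactly 1.0
print(total == 1.0)      # False
print(abs(total - 1.0))  # ~1.1e-16: tiny, and harmless as long as it stays small
```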
A floating-point operation, then, is any arithmetic operation between two floating-point numbers (abbreviated as a FLOP). Computer scientists and engineers use the number of FLOP per second - FLOPS - as a benchmark to compare the speed and computing power of different computers.
One petaFLOP is equivalent to 1,000,000,000,000,000 - or one quadrillion - mathematical operations. A supercomputer with a computing speed of one petaFLOPS is therefore performing one quadrillion operations per second! The Fugaku supercomputer is 442 times faster than that.
For many types of important scientific and technological problems, however, even the fastest supercomputer isn't fast enough. In fact, it never will be. This is because for certain classes of problems, the number of possible combinations that need to be checked grows so fast, relative to the number of things being arranged, that it becomes essentially impossible to compute and check them all.
Here's a version of a classic example. Say you have a group of people with differing political views, and you want to seat them around a table in a way that maximizes constructive dialogue while minimizing potential conflict. The rules you decide to use don't matter here, just that some set of rules exists. For example, maybe you always want to seat a moderate between a conservative and a liberal to act as a bit of a buffer.
This is what scientists and engineers call an optimization problem. How many possible combinations of seating arrangements are there? Well, if you only have two people, there are only two possible arrangements: one individual on each side of the table, and then the reverse, where the two individuals swap seats. But if you have five people, the number of possible arrangements jumps to 120. Ten people? Now you're looking at 3,628,800 different arrangements. And that's just for ten people - or, more generally, any ten objects. With 100 objects, the number of arrangements is so huge that it's a number with 158 digits (roughly 9 × 10^157). By comparison, there are only about 10^21 stars in the observable universe.
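A few lines of Python reproduce these counts, and show why brute-force checking fails even at Fugaku's speed (generously assuming one arrangement checked per floating-point operation):

```python
import math

# Factorial growth of seating arrangements, as in the text:
for n in [2, 5, 10, 100]:
    print(n, "objects ->", math.factorial(n), "arrangements")
# prints 2, 120, 3628800, and a 158-digit number, respectively

# Checking 100! arrangements at Fugaku's 442 petaFLOPS (4.42e17 ops/second):
seconds = math.factorial(100) / 4.42e17
print(f"about {seconds:.1e} seconds")  # ~2.1e140 s; the universe is ~4e17 s old
```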
Imagine now if you were trying to do a biophysics simulation of a protein - in order to develop a new drug, say - with millions or billions of individual molecules interacting with each other. The number of possible combinations that would need to be computed and checked far exceeds the capability of any computer that exists today. Because of how they're designed, even the fastest supercomputers are forced to check each combination sequentially, one after another. No matter how fast a classical computer is or can be, given the literally more-than-astronomical number of combinations, many of these problems would take a practical eternity to solve. It just becomes impossible.
Relatedly, the other problem classical computers face is that it's impossible to build one with sufficient memory to store each of the combinations, even if they could all be computed.
The details of how a quantum computer and quantum computing algorithms work are well beyond the scope or intent of this article, but we can briefly introduce one of the key ideas in order to understand how they can overcome the combinatorial limitations of classical computers.
Classical computers represent information - all information - as numbers, and all numbers can be represented as binary combinations of 1s and 0s. Each 1 or 0 is a bit of information, the fundamental unit of classical information. Put another way, information is represented by combinations of two possible states. For example, the number 24 in binary notation is 11000; the number 13 is 1101. You can do all arithmetic in binary as well. This is convenient because, physically, at the very heart of classical computers is the transistor, which is just an on-off electrical switch: when it's on it encodes a 1, and when it's off it encodes a 0. Computers do all their math by combining billions of tiny transistors that switch back and forth very quickly as needed. Yet, as fast as this can occur, each switch still takes a finite amount of time, and all calculations need to be done in an appropriately ordered sequence. If the number of necessary calculations becomes big enough, as with the combinatorial problems discussed above, you run into an unfeasible computational wall.
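Those encodings are easy to verify directly; a trivial Python check:

```python
print(format(24, 'b'))                  # 11000
print(format(13, 'b'))                  # 1101
print(int('11000', 2), int('1101', 2))  # 24 13
print(format(24 + 13, 'b'))             # 100101 (i.e., 37): binary arithmetic
                                        # is just ordinary arithmetic
```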
Quantum computers are fundamentally different. They overcome the classical limitations by representing information internally not just as one of two discrete states, but as a continuous, probabilistic mixing of states. This allows quantum bits, or qubits, to represent many more possible states at once, and thus many more possible combinations and arrangements of objects at once. Put another way, the state space and computational space a quantum computer has access to is much larger than that of a classical computer. And because of the wave nature of quantum mechanics and superposition (concepts we will not explore here), the internal mixing and probabilistic representation of states eventually converges to one dominant solution that the computer outputs. You can't actually observe that internal mixing, but you can observe the final computed output. In essence, as the number of qubits in a quantum computer increases, you can do exponentially more calculations in parallel.
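One way to make that larger state space concrete: simulating an n-qubit state on a classical machine means storing 2^n complex amplitudes. A minimal sketch of the bookkeeping, assuming 16 bytes per double-precision complex amplitude:

```python
# Memory needed to hold an n-qubit state vector on a classical machine:
# 2^n complex amplitudes at 16 bytes each (complex128).
for n in [1, 10, 30, 50]:
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n:2d} qubits -> {amplitudes:.2e} amplitudes, {gigabytes:.2e} GB")

# By around 50 qubits, merely storing the state strains even the largest
# supercomputers - the exponential wall in a nutshell.
```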
The key concept here is not that quantum computers will necessarily be able to solve new and exotic classes of problems that classical computers can't - although computer scientists have discovered a theoretical class of problems that only quantum computers can solve - but rather that they will be able to solve classes of problems that are, and always will be, beyond the practical reach of classical computers.
And this isn't to say that quantum computers will replace classical computers. That is not likely to happen anytime in the foreseeable future. For most classes of computational problems, classical computers will still work just fine and will probably continue to be the tool of choice. But for certain classes of problems, quantum computers will far exceed anything possible today.
Well, it depends on the scale at which the dynamics of the brain are being simulated. To be sure, the field of computational neuroscience has, over many decades, successfully carried out computer simulations of the brain and brain activity. But it's important to understand the scale at which any given simulation is done.
The brain is exceedingly structurally and functionally hierarchical - from genes to molecules, cells, networks of cells, and networks of brain regions. Any simulation of the brain needs to begin with an appropriate mathematical model: a set of equations that captures the chosen scale being modeled and then specifies a set of rules to simulate on a computer. It's like a map of a city. The mapmaker needs to decide on the scale of the map - how much detail to include and how much to ignore. Why? Because the structural and computational complexity of the brain is so vast that it's impossible, on existing classical computers, to carry out simulations that cut across the many scales with any significant amount of detail.
Even though a wide range of mathematical models of the molecular and cell biology and physiology exists across this huge structural and computational landscape, it is impossible to simulate with any accuracy because of the sheer size of the combinatorial space this landscape presents. It is the same class of problem as seating people with different political views around a table - just on a much larger scale.
Once again, it partly depends on how you choose to look at it. There is an exquisite amount of detail and structure to the brain across many scales of organization. (Here's a more in-depth article on this topic.)
But if you just consider the number of cells that make up the brain, and the number of connections between them, as a proxy for the computational complexity - the combinatorial space - of the brain, then it is staggeringly large. In fact, it defies any intuitive grasp.
The brain is a massive network of densely interconnected cells consisting of about 171 billion brain cells: 86 billion neurons, the main class of brain cell involved in information processing, and another 85 billion non-neuronal cells. There are approximately 10 quadrillion connections between neurons - that is a 1 followed by 16 zeros. And of the 85 billion non-neuronal cells in the brain, one major type, called astrocytes (a kind of glial cell), has the ability both to listen in on and to modulate neuronal signaling and information processing. Astrocytes form a massive network unto themselves, while also cross-talking with the network of neurons. So the brain actually has two distinct networks of cells, each carrying out different physiological and communication functions, but at the same time overlapping and interacting with each other.
The computational size of the human brain in numbers.
On top of all that structure, there are billions upon billions of discrete electrical impulses, called action potentials, that act as messages between connected neurons. Astrocytes, unlike neurons, don't use electrical signals; they rely on a different form of biochemical signaling to communicate with each other and with neurons. So there is an entirely separate, molecularly based information-signaling mechanism at play in the brain.
Somehow, in ways neuroscientists still do not fully understand, the interactions of all these electrical and chemical signals carry out all the computations that produce everything the brain is capable of.
Now pause for a moment and think about the uncountable number of dynamic, ever-changing combinations of states the brain can take on given this incredible complexity. Yet it is this combinatorial space - the computations produced by trillions of signals and billions of cells in a hierarchy of networks - that results in everything your brain is capable of doing, learning, experiencing, and perceiving.
So any computer simulation of the brain is ultimately going to be very limited. At least on a classical computer.
How big and complete are the biggest simulations of the brain done to date? And how much impact have they had on scientists' understanding of the brain? The answer critically depends on what's being simulated - in other words, at what scale (or scales) and with how much detail, given the myriad combinatorial processes. There certainly continue to be impressive attempts from research groups around the world, but the number of cells and amount of brain being simulated, the level of detail, and the length of time being simulated all remain rather limited. This is why headlines and claims touting ground-breaking large-scale simulations of the brain can be misleading, sometimes resulting in controversy and backlash.
The challenges of doing large multi-scale simulations of the brain are significant. So, in the end, the answer to both questions - how big and complete are the biggest brain simulations done to date, and how much impact have they had on scientists' understanding of the brain - is: not much.
First, by their very nature, given a sufficient number of qubits, quantum computers will excel at solving and optimizing very large combinatorial problems. It's an inherent consequence of the physics of quantum mechanics and the design of the computers.
Second, given the sheer size and computational complexity of the human brain, any attempt at a large multi-scale simulation with sufficient detail will have to contend with the combinatorial space of the problem.
Third, how a potential quantum computer neural simulation is set up might be able to take advantage of the physics the brain is subject to. Despite its computational power, the brain is still a physical object, and so physical constraints could be used to design and guide simulation rules (quantum computing algorithms) that are inherently combinatorial and parallelizable, thereby taking advantage of what quantum computers do best.
For example, local rules, such as the computational rules of individual neurons, can be used to calculate aspects of the emergent dynamics of networks of neurons in a decentralized way. Each neuron does its own thing and contributes to the larger whole - in this case, the functions of the whole brain itself - all acting at the same time, and without any awareness of what it is contributing to.
In the end, the goal will be to understand the emergent functions of the brain that give rise to cognitive properties. For example, large scale quantum computer simulations might discover latent (hidden) properties and states that are only observable at the whole brain scale, but not computable without a sufficient level of detail and simulation from the scales below it.
If these simulations and research efforts are successful, one can only speculate about what as-yet-unknown brain algorithms remain to be discovered and understood. It's possible that such future discoveries will have a significant impact on related topics such as artificial quantum neural networks, or on specially designed hardware that may someday challenge the boundaries of existing computational systems. For example, just published yesterday, an international team of scientists and engineers announced a computational hardware device composed of a molecular-chemical network capable of energy-efficient, rapidly reconfigurable states, somewhat similar to the reconfigurable nature of biological neurons.
One final comment regarding quantum computers and the brain: This discussion has focused on the potential use of future quantum computers to carry out simulations of the brain that are not currently possible. While some authors and researchers have proposed that neurons themselves might be tiny quantum computers, that is completely different and unrelated to the material here.
It may be that quantum computers will usher in a new era for neuroscience and the understanding of the brain. It may even be the only real way forward. But as of now, actually building workable quantum computers with sufficient stable qubits that outperform classical computers at even modest tasks remains a work in progress. While a handful of commercial efforts exist and have claimed various degrees of success, many difficult hardware and technological challenges remain. Some experts argue that quantum computers may in the end never be built due to technical reasons. But there is much research across the world both in academic labs and in industry attempting to overcome these engineering challenges. Neuroscientists will just have to be patient a bit longer.
See the original post here:
Large-Scale Simulations Of The Brain May Need To Wait For Quantum Computers - Forbes
Posted in Quantum Physics
Quantum crystal could reveal the identity of dark matter – Livescience.com
Posted: at 6:04 am
Using a quirk of quantum mechanics, researchers have created a beryllium crystal capable of detecting incredibly weak electromagnetic fields. The work could one day be used to detect hypothetical dark matter particles called axions.
The researchers created their quantum crystal by trapping 150 charged beryllium particles, or ions, using a system of electrodes and magnetic fields that helped overcome their natural repulsion for each other, Ana Maria Rey, an atomic physicist at JILA, a joint institute between the National Institute of Standards and Technology and the University of Colorado Boulder, told Live Science.
When Rey and her colleagues trapped the ions with their system of fields and electrodes, the atoms self-assembled into a flat sheet twice as thick as a human hair. This organized collective resembled a crystal that would vibrate when disturbed by some outside force.
"When you excite the atoms, they don't move individually," Rey said. "They move as a whole."
When that beryllium "crystal" encountered an electromagnetic field, it moved in response, and that movement could be translated into a measurement of the field strength.
But measurements of any quantum mechanical system are subject to limits set by the Heisenberg uncertainty principle, which states that certain properties of a particle, such as its position and momentum, can't simultaneously be known with high precision.
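In its most familiar form, for position and momentum, the principle reads:

$$\Delta x \, \Delta p \geq \frac{\hbar}{2},$$

where $\Delta x$ and $\Delta p$ are the uncertainties (standard deviations) in position and momentum, and $\hbar$ is the reduced Planck constant. Any measurement scheme has to live within this bound; entanglement doesn't violate it, but it can redistribute where the noise shows up.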
The team figured out a way to get around this limit with entanglement, where quantum particles' attributes are inherently linked together.
"By using entanglement, we can sense things that aren't possible otherwise," Rey said.
In this case, she and her colleagues entangled the motions of the beryllium ions with their spins. Quantum particles resemble tiny tops, and spin describes the direction - say, up or down - that those tops are pointing.
When the crystal vibrated, it would move a certain amount. But because of the uncertainty principle, any measurement of that displacement, or the amount the ions moved, would be subject to precision limits and contain a lot of what's known as quantum noise, Rey said.
To measure the displacement, "we need a displacement larger than the quantum noise," she said.
Entanglement between the ions' motions and their spins spreads this noise out, reducing it and allowing the researchers to measure ultra-tiny fluctuations in the crystal. They tested the system by sending a weak electromagnetic wave through it and seeing it vibrate. The work is described Aug. 6 in the journal Science.
The crystal is already 10 times more sensitive at detecting teensy electromagnetic signals than previous quantum sensors. But the team thinks that with more beryllium ions, they could create an even more sensitive detector capable of searching for axions.
Axions are a proposed ultralight dark matter particle with a millionth or a billionth the mass of an electron. Some models of the axion suggest that it may be able to sometimes convert into a photon, in which case it would no longer be dark and would produce a weak electromagnetic field. Were any axions to fly through a lab containing this beryllium crystal, the crystal might pick up their presence.
"I think it's a beautiful result and an impressive experiment," Daniel Carney, a theoretical physicist at Lawrence Berkeley National Laboratory in Berkeley, California, who was not involved in the research, told Live Science.
Along with helping in the hunt for dark matter, Carney believes the work could find many applications, such as looking for stray electromagnetic fields from wires in a lab or searching for defects in a material.
Originally published on Live Science.
View original post here:
Quantum crystal could reveal the identity of dark matter - Livescience.com
Posted in Quantum Physics
Ask Ethan: What Impact Could Magnetic Monopoles Have On The Universe? – Forbes
Posted: at 6:04 am
Electromagnetic fields as they would be generated by positive and negative electric charges, both at rest and in motion (top), as well as those that would theoretically be created by magnetic monopoles (bottom), were they to exist.
Out of all of the known particles - both fundamental and composite - a whole slew of properties emerges. Each individual quantum in the Universe can have a mass, or it can be massless. It can have a color charge, meaning it couples to the strong force, or it can be chargeless. It can have a weak hypercharge and/or weak isospin, or it can be completely decoupled from the weak interactions. It can have an electric charge, or it can be electrically neutral. It can have a spin, an intrinsic angular momentum, or it can be spinless. And if you have both an electric charge and some form of angular momentum, you'll also have a magnetic moment: a magnetic property that behaves as a dipole, with a north end and a south end.
But there are no fundamental entities that have a unique magnetic charge, like a north pole or south pole by itself. This idea, of a magnetic monopole, has been around for a long time as a purely theoretical construct, but there are reasons to take it seriously as a physical presence in our Universe. Patreon supporter Jim Nance writes in because he wants to know why:
You've talked in the past about how we know the universe didn't get arbitrarily hot because we don't see relics like magnetic monopoles. You say that with a lot of confidence, which makes me wonder: given that no one has ever seen a magnetic monopole or any of the other relics, why are we confident that they exist?
It's a deep question that demands an in-depth answer. Let's start at the beginning, going all the way back to the 19th century.
When you move a magnet into (or out of) a loop or coil of wire, it causes the field to change around the conductor, which causes a force on charged particles and induces their motion, creating a current. The phenomena are very different if the magnet is stationary and the coil is moved, but the currents generated are the same. This was the jumping-off point for the principle of relativity.
A little bit was known about electricity and magnetism at the start of the 1800s. It was generally recognized that there was such a thing as electric charge, that it came in two types, where like charges repelled and opposite charges attracted, and that electric charges in motion created currents: what we know as electricity today. We also knew about permanent magnets, where one side acted like a north pole and the other side like a south pole. However, if you broke a permanent magnet in two, no matter how small you chopped it up, you'd never wind up with a north pole or a south pole by itself; magnetic charges only came paired up in a dipole configuration.
Throughout the 1800s, a number of discoveries took place that helped us make sense of the electromagnetic Universe. We learned about induction: how moving electric charges actually generate magnetic fields, and how changing magnetic fields, in turn, induce electric currents. We learned about electromagnetic radiation, and how accelerating electric charges can emit light of various wavelengths. And when we put all of our knowledge together, we learned that the Universe wasn't symmetric between electric and magnetic fields and charges: Maxwell's equations only contain electric charges and currents. There are no fundamental magnetic charges or currents, and the only magnetic properties we observe come about as being induced by electric charges and currents.
It's possible to write down a variety of equations, like Maxwell's equations, that describe the Universe. We can write them down in a variety of ways, but only by comparing their predictions with physical observations can we draw any conclusion about their validity. It's why the version of Maxwell's equations with magnetic monopoles (right) doesn't correspond to reality, while the one without (left) does.
Mathematically - or, if you prefer, from a theoretical physics perspective - it's very easy to modify Maxwell's equations to include magnetic charges and currents: you simply add in the ability for objects to possess a fundamental magnetic charge, an individual north or south pole inherent to the object itself. When you introduce those extra terms, Maxwell's equations become completely symmetric. All of a sudden, induction works the other way as well: moving magnetic charges would generate electric fields, and a changing electric field would induce a magnetic current, causing magnetic charges to move and accelerate within a material that can carry a magnetic current.
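In one common SI convention (a sketch only - sign and unit conventions for the monopole terms vary by author), the symmetrized equations read, with $\rho_m$ and $\vec{J}_m$ denoting magnetic charge and current densities:

$$\nabla \cdot \vec{E} = \frac{\rho_e}{\varepsilon_0}, \qquad \nabla \cdot \vec{B} = \mu_0 \rho_m,$$

$$\nabla \times \vec{E} = -\mu_0 \vec{J}_m - \frac{\partial \vec{B}}{\partial t}, \qquad \nabla \times \vec{B} = \mu_0 \vec{J}_e + \mu_0 \varepsilon_0 \frac{\partial \vec{E}}{\partial t}.$$

Setting $\rho_m = \vec{J}_m = 0$ recovers the familiar textbook equations.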
All of this was simply fanciful consideration for a long time, until we started to recognize the roles that symmetries play in physics and the quantum nature of the Universe. It's eminently possible that electromagnetism, at some higher energy, was symmetric between electric and magnetic components, and that we live in a low-energy, broken-symmetry version of that world. Although Pierre Curie, in 1894, was one of the first to point out that magnetic charges could exist, it was Paul Dirac, in 1931, who showed something remarkable: if you had even one magnetic charge, anywhere in the Universe, then quantum mechanics implies that electric charge must be quantized everywhere.
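Dirac's conclusion can be written compactly. In Gaussian units (one common convention; SI forms differ by factors of $\mu_0$), the quantization condition for any electric charge $q_e$ and any magnetic charge $q_m$ is

$$q_e \, q_m = \frac{n \hbar c}{2}, \qquad n \in \mathbb{Z},$$

so the existence of even a single monopole of charge $q_m$ forces every electric charge to come in integer multiples of $\hbar c / (2 q_m)$.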
The difference between a Lie algebra based on the E(8) group (left) and the Standard Model (right). The Lie algebra that defines the Standard Model is mathematically a 12-dimensional entity; the E(8) group is fundamentally a 248-dimensional entity. There is a lot that has to go away to get back the Standard Model from String Theories as we know them.
This is fascinating, because not only are electric charges observed to be quantized, but they're quantized in fractional amounts when it comes to quarks. In physics, one of the most powerful hints that new discoveries might be around the corner is the discovery of a mechanism that could explain why the Universe has the properties we observe it to have.
However, none of that provides any evidence that magnetic monopoles actually do exist; it simply suggests that they might. On the theoretical side, quantum mechanics was soon superseded by quantum field theory, where the fields themselves are also quantized. To describe electromagnetism, a gauge group known as U(1) was introduced, and it is still used at present. In gauge theory, the fundamental charges associated with electromagnetism will be quantized only if the gauge group, U(1), is compact - but if the U(1) gauge group is compact, we get magnetic monopoles anyway.
Again, there might turn out to be a different reason why electric charges have to be quantized, but it seemed - at least with Dirac's reasoning and what we know about the Standard Model - that there's no reason why magnetic monopoles shouldn't exist.
This diagram displays the structure of the Standard Model (in a way that displays the key relationships and patterns more completely, and less misleadingly, than in the more familiar image based on a 4x4 square of particles). In particular, this diagram depicts all of the particles in the Standard Model (including their letter names, masses, spins, handedness, charges, and interactions with the gauge bosons: i.e., with the strong and electroweak forces). It also depicts the role of the Higgs boson, and the structure of electroweak symmetry breaking, indicating how the Higgs vacuum expectation value breaks electroweak symmetry, and how the properties of the remaining particles change as a consequence.
For many decades, even after numerous mathematical advances, the idea of magnetic monopoles remained only a curiosity hanging around in the back of theorists' minds, without any substantial progress being made. But in 1974, a few years after we recognized the full structure of the Standard Model - which, in group theory, is described by SU(3) × SU(2) × U(1) - physicists started to entertain the idea of unification. While, at low energies, SU(2) describes the weak interaction and U(1) describes the electromagnetic interaction, they actually unify at energies of around ~100 GeV: the electroweak scale. At those energies, the combined group SU(2) × U(1) describes the electroweak interactions, and those two forces unify.
Is it possible, then, that all of the fundamental forces unify into some larger structure at high energies? They might, and thus the idea of Grand Unified Theories began to come about. Larger gauge groups, like SU(5), SO(10), SU(6), and even exceptional groups, began to be considered. Almost immediately, however, a number of unsettling but exciting consequences began to emerge. These Grand Unified Theories all predicted that the proton would be fundamentally unstable and would decay; that new, super-heavy particles would exist; and that, as shown in 1974 by both Gerard 't Hooft and Alexander Polyakov, they would lead to the existence of magnetic monopoles.
The concept of a magnetic monopole, emitting magnetic field lines the same way an isolated electric charge would emit electric field lines. Unlike magnetic dipoles, there's only a single, isolated source, and it would be an isolated north or south pole with no counterpart to balance it out.
Now, we have no proof that the ideas of grand unification are relevant for our Universe, but again, it's possible that they are. Whenever we consider a theoretical idea, one of the things we look for are pathologies: reasons that the scenario we're interested in would break the Universe in some way or another. Originally, when 't Hooft-Polyakov monopoles were proposed, one such pathology was discovered: the fact that magnetic monopoles would do something called overclosing the Universe.
In the early Universe, things are hot and energetic enough that any particle-antiparticle pair you can create with enough energy, via Einstein's E = mc^2, will get created. When you have a broken symmetry, you can either give a non-zero rest mass to a previously massless particle, or you can spontaneously rip copious numbers of particles (or particle-antiparticle pairs) out of the vacuum when the symmetry breaks. An example of the first case is what happens when the Higgs symmetry breaks; the second case could occur, for example, when the Peccei-Quinn symmetry breaks, pulling axions out of the quantum vacuum.
In either case, this could lead to something devastating.
If the Universe had just a slightly higher matter density (red), it would be closed and have recollapsed already; if it had just a slightly lower density (and negative curvature), it would have expanded much faster and become much larger. The Big Bang, on its own, offers no explanation as to why the initial expansion rate at the moment of the Universe's birth balances the total energy density so perfectly, leaving no room for spatial curvature at all and a perfectly flat Universe. Our Universe appears perfectly spatially flat, with the initial total energy density and the initial expansion rate balancing one another to at least some 20+ significant digits. We can be certain that the energy density didn't spontaneously increase by large amounts in the early Universe by the fact that it hasn't recollapsed.
Normally, the Universe expands and cools, with the overall energy density being closely related to the rate of expansion at any point in time. If you either take a large number of previously massless particles and give them a non-zero mass, or suddenly and spontaneously add a large number of massive particles to the Universe, you rapidly increase the energy density. With more energy present, the expansion rate and the energy density are suddenly no longer in balance; there's too much stuff in the Universe.
This causes the expansion rate not only to drop but, in the case of monopole production, to plummet all the way to zero - after which the Universe begins to contract. In short order, this leads to a recollapse of the Universe, ending in a Big Crunch. This is called overclosing the Universe, and it cannot be an accurate description of our reality; we're still here, and things haven't recollapsed. This puzzle was known as the monopole problem, and it was one of the three main motivations for cosmic inflation.
Just as inflation stretches the Universe, whatever its geometry was previously, to a state indistinguishable from flat (solving the flatness problem), and imparts the same properties everywhere to all locations within our observable Universe (solving the horizon problem), so long as the Universe never heats back up to above the grand unification scale after inflation ends, it can solve the monopole problem, too.
If the Universe inflated, then what we perceive as our visible Universe today arose from a past state that was all causally connected to the same small initial region. Inflation stretched that region to give our Universe the same properties everywhere (top), made its geometry appear indistinguishable from flat (middle), and removed any pre-existing relics by inflating them away (bottom). So long as the Universe never heats back up to high enough temperatures to produce magnetic monopoles anew, we will be safe from overclosure.
This was understood way back in 1980, and the combined interest in 't Hooft-Polyakov monopoles, grand unified theories, and the earliest models of cosmic inflation led some people to embark on a remarkable undertaking: to try to experimentally detect magnetic monopoles. In 1981, experimental physicist Blas Cabrera built a cryogenic experiment involving a coil of wire, explicitly designed to search for magnetic monopoles.
By building a coil with eight loops in it, he reasoned that if a magnetic monopole ever passed through the coil, he'd see a specific signal due to the electric induction that would occur. Just as passing one end of a permanent magnet into (or out of) a coil of wire induces a current, passing a magnetic monopole through that coil of wire should induce not only an electric current, but an electric current corresponding to exactly 8 times the theoretical value of the magnetic monopole's charge, owing to the 8 loops in his experimental setup. (If a dipole were to pass through instead, there would be a signal of +8 followed shortly after by a signal of -8, allowing the two scenarios to be differentiated.)
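The expected signal size follows from simple flux counting. Assuming the monopole carries the minimal Dirac magnetic charge, it changes the flux through a single loop it passes through by $h/e$ - two superconducting flux quanta - so the total flux linkage of an $N$-turn coil changes by

$$\Delta \Phi = N \cdot \frac{h}{e} = 2N\,\Phi_0, \qquad \Phi_0 = \frac{h}{2e}.$$

For Cabrera's 8-turn coil, that means a single step of $8\,(h/e)$, while a passing dipole nets zero: the +8 followed by -8 signature described above.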
On February 14, 1982, no one was in the office monitoring the experiment. The next day, Cabrera came back, and was shocked at what he observed. The experiment had recorded a single signal: one corresponding almost exactly to the signal a magnetic monopole ought to produce.
In 1982, an experiment running under the leadership of Blas Cabrera, one with eight turns of wire, detected a flux change of eight magnetons: indications of a magnetic monopole. Unfortunately, no one was present at the time of detection, and no one has ever reproduced this result or found a second monopole. Still, if string theory and this new result are correct, magnetic monopoles, being not forbidden by any law, must exist at some level.
This set off a tremendous interest in the endeavor. Did it mean inflation was wrong, and we really did have a Universe with magnetic monopoles? Did it mean that inflation was correct, and the one (at most) monopole that should remain in our Universe happened to pass through Cabrera's detector? Or did it mean that this was the ultimate in experimental errors: a glitch, a prank, or something else that we couldn't explain, but was spurious?
A number of copycat experiments ensued, many of which were larger, ran for longer times, and had greater numbers of loops in their coils, but no one else ever saw anything that resembled a magnetic monopole. On February 14, 1983, Stephen Weinberg wrote a Valentine's Day poem to Cabrera, which read:
Roses are red,
Violets are blue,
It's time for monopole
Number TWO!
But despite all the experiments we've ever run, including some that continue to the present day, there have been no other signs of magnetic monopoles. Cabrera himself went on to lead numerous other experiments, but we may never know what truly happened on that day in 1982. All we know is that, without the ability to confirm and reproduce that result, we cannot claim to have direct evidence for the existence of magnetic monopoles.
These are the modern constraints available, from a variety of experiments largely driven from neutrino astrophysics, that place the tightest bounds on the existence and abundance of magnetic monopoles in the Universe. The current bound is many orders of magnitude below the expected abundance if Cabrera's 1982 detection was normal, rather than an outlier.
There's so much that we don't know about the Universe, including what happens at energies far in excess of what we can observe in the collisions that take place at the Large Hadron Collider. We don't know whether, at some high energy scale, the Universe can actually produce magnetic monopoles; we simply know that at the energies we can probe, we haven't seen them. We don't know whether grand unification is a property of our Universe in its earliest stages, but we do know this much: whatever occurred early on, it didn't overclose the Universe, and it didn't fill our Universe with leftover, high-energy relics from a hot, dense state.
Does our Universe, at some level, admit the existence of magnetic monopoles? That's not a question we can presently answer. What we can state with confidence, however, is far more limited.
It's been nearly 40 years since the one experimental clue hinting at the possible existence of magnetic monopoles simply dropped into our lap. Until a second clue comes along, all we'll be able to do is tighten our constraints on where these hypothetical monopoles aren't allowed to be hiding.
Send in your Ask Ethan questions to startswithabang at gmail dot com!
Read the original:
Ask Ethan: What Impact Could Magnetic Monopoles Have On The Universe? - Forbes
Posted in Quantum Physics
Making OLED Displays In The Home Lab – Hackaday
Posted: at 6:04 am
Just a general observation: when your project's BOM includes ytterbium metal, chances are pretty good that it's something interesting. We'd say that making your own OLED displays at home definitely falls into that category.
Of course, the making of organic light-emitting diodes requires more than just a rare-earth metal - not least, the kind of experience in the field that [Jeroen Vleggaar] brings to this project. Having worked on OLEDs at Philips for years, [Jeroen] is well positioned to tackle the complex process, involving things like physical vapor deposition and the organic chemistry of coordinated quinolines. And that's not to mention the quantum physics of it all, which is nicely summarized in the first ten minutes or so of the video below. From there it's all about making a couple of OLED displays using photolithography and the aforementioned PVD to build up a sandwich of Alq3, an electroluminescent organic compound, on a substrate of ITO (indium tin oxide) glass. We especially appreciate the use of a resin 3D printer to create the photoresist masks, as well as the details on the PVD process.
The displays themselves look fantastic - at least for a while. The organic segments begin to oxidize rapidly from pinholes in the material; a cleanroom would fix that, but this was just a demonstration, after all. As a bonus, the blue-green glow of [Jeroen]'s displays reminds us strongly of the replica Apollo DSKY display that [Ben Krasnow] built a while back.
Read more:
Posted in Quantum Physics
Comments Off on Making OLED Displays In The Home Lab – Hackaday
Putting a new theory of many-particle quantum systems to the test | Penn State University – Penn State News
Posted: September 2, 2021 at 2:17 pm
UNIVERSITY PARK, Pa. – New experiments using trapped one-dimensional gases (atoms cooled to the coldest temperatures in the universe and confined so that they can only move along a line) fit the predictions of the recently developed theory of generalized hydrodynamics. Quantum mechanics is necessary to describe the novel properties of these gases. Achieving a better understanding of how such systems with many particles evolve in time is a frontier of quantum physics. The result could greatly simplify the study of quantum systems that have been excited out of equilibrium. Besides its fundamental importance, it could eventually inform the development of quantum-based technologies, which include quantum computers and simulators, quantum communication, and quantum sensors. A paper describing the experiments by a team led by Penn State physicists appears Sept. 2 in the journal Science.
Even within classical physics, where the additional complexities of quantum mechanics can be ignored, it is impossible to simulate the motion of all the atoms in a moving fluid. To approximate these systems of particles, physicists use hydrodynamics descriptions.
"The basic idea behind hydrodynamics is to forget about the atoms and consider the fluid as a continuum," said Marcos Rigol, professor of physics at Penn State and one of the leaders of the research team. "To simulate the fluid, one ends up writing coupled equations that result from imposing a few constraints, such as the conservation of mass and energy. These are the same types of equations solved, for example, to simulate how air flows when you open windows to improve ventilation in a room."
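To make that concrete, here is a minimal sketch (ours, not the team's code) of the simplest such conservation law, the 1D continuity equation ∂ρ/∂t + ∂(ρv)/∂x = 0, stepped forward with a first-order upwind scheme; the grid size, velocity field, and initial density profile are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch: advect a 1D density field under the continuity equation
#   d(rho)/dt + d(rho * v)/dx = 0
# using a first-order upwind finite-difference scheme.
# Grid, velocity, and initial profile are illustrative choices only.

nx, dx, dt = 200, 0.01, 0.001    # grid points, spacing, time step
v = 0.5                          # constant flow velocity (v*dt/dx < 1 for stability)
x = np.arange(nx) * dx
rho = np.exp(-((x - 1.0) / 0.1) ** 2)  # initial Gaussian density bump

for _ in range(500):
    flux = rho * v
    # upwind difference for v > 0: use the cell behind
    rho[1:] -= dt / dx * (flux[1:] - flux[:-1])
    rho[0] = rho[-1] = 0.0       # simple absorbing boundaries

print(f"mass remaining on grid: {rho.sum() * dx:.4f}")
```

Real hydrodynamics couples several such equations (mass, momentum, energy); the generalized theory discussed below adds many more conserved quantities on top.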
Matter becomes more complicated if quantum mechanics is involved, as is the case when one wants to simulate quantum many-body systems that are out of equilibrium.
"Quantum many-body systems, which are composed of many interacting particles such as atoms, are at the heart of atomic, nuclear, and particle physics," said David Weiss, distinguished professor of physics at Penn State and one of the leaders of the research team. "It used to be that, except in extreme limits, you couldn't do a calculation to describe out-of-equilibrium quantum many-body systems. That recently changed."
The change was motivated by the development of a theoretical framework known as generalized hydrodynamics.
"The problem with those quantum many-body systems in one dimension is that they have so many constraints on their motion that regular hydrodynamics descriptions cannot be used," said Rigol. "Generalized hydrodynamics was developed to keep track of all those constraints."
Until now, generalized hydrodynamics had only been tested experimentally under conditions where the interactions among particles were weak.
"We set out to test the theory further, by looking at the dynamics of one-dimensional gases with a wide range of interaction strengths," said Weiss. "The experiments are extremely well controlled, so the results can be precisely compared to the predictions of this theory."
The research team uses one-dimensional gases of interacting atoms that are initially confined, in equilibrium, in a very shallow trap. They then very suddenly increase the depth of the trap by a factor of 100, which forces the particles to collapse toward the center of the trap, causing their collective properties to change. Throughout the collapse, the team precisely measures those properties, which they can then compare to the predictions of generalized hydrodynamics.
"Our measurements matched the predictions of the theory across dozens of trap oscillations," said Weiss. "There currently aren't other ways to study out-of-equilibrium quantum systems for long periods of time with reasonable accuracy, especially with a lot of particles. Generalized hydrodynamics allows us to do this for some systems like the one we tested, but how generally applicable it is remains to be determined."
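For intuition about why the quench sets the gas oscillating at all, here is a toy classical sketch (ours, and emphatically not a generalized-hydrodynamics calculation): for a harmonic trap, depth scales like the square of the trap frequency, so a 100-fold deepening raises the frequency roughly tenfold, and an equilibrated cloud released into the deeper trap "breathes" at twice the new frequency. All parameters are illustrative.

```python
import numpy as np

# Toy classical picture of the quench (not generalized hydrodynamics):
# non-interacting particles equilibrated in a shallow 1D harmonic trap
# whose depth is suddenly increased 100-fold. Depth ~ omega^2, so the
# trap frequency jumps by a factor of 10 and the cloud width oscillates.
rng = np.random.default_rng(0)

n = 10_000
omega0, omega1 = 1.0, 10.0               # trap frequency before/after quench
x = rng.normal(0, 1.0 / omega0, n)       # positions equilibrated in shallow trap
p = rng.normal(0, 1.0, n)                # momenta (unit mass and temperature)

dt, steps = 0.001, 2000
widths = []
for _ in range(steps):
    # symplectic Euler integration in the deep trap
    p -= omega1**2 * x * dt
    x += p * dt
    widths.append(x.std())

print(f"cloud width oscillates between {min(widths):.3f} and {max(widths):.3f}")
```

The real experiment is quantum and strongly interacting, which is exactly why the constraint-laden generalized-hydrodynamics description is needed in place of this toy.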
In addition to Weiss and Rigol, the research team includes Neel Malvania, Yicheng Zhang, and Yuan Le at Penn State; and Jerome Dubail at Université de Lorraine in France. The research was funded by the U.S. National Science Foundation and the U.S. Army Research Office.
More here:
Posted in Quantum Physics
Comments Off on Putting a new theory of many-particle quantum systems to the test | Penn State University – Penn State News
New vortex beams of atoms and molecules are the first of their kind – Science News
Posted: at 2:17 pm
Like soft serve ice cream, beams of atoms and molecules now come with a swirl.
Scientists already knew how to dish up spiraling beams of light or electrons, known as vortex beams (SN: 1/14/11). Now, the first vortex beams of atoms and molecules are on the menu, researchers report in the Sept. 3 Science.
Vortex beams made of light or electrons have shown promise for making special types of microscope images and for transmitting information using quantum physics (SN: 8/5/15). But vortex beams of larger particles such as atoms or molecules are so new that the possible applications aren't yet clear, says physicist Sonja Franke-Arnold of the University of Glasgow in Scotland, who was not involved with the research. "It's maybe too early to really know what we can do with it."
In quantum physics, particles are described by a wave function, a wavelike pattern that allows scientists to calculate the probability of finding a particle in a particular place (SN: 6/8/11). But vortex beams' waves don't slosh up and down like ripples on water. Instead, the beams' particles have wave functions that move in a corkscrewing motion as a beam travels through space. That means the beam carries a rotational oomph known as orbital angular momentum. "This is something really very strange, very nonintuitive," says physicist Edvardas Narevicius of the Weizmann Institute of Science in Rehovot, Israel.
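In symbols (our illustration, not drawn from the article), a beam with orbital angular momentum ℓℏ per particle carries a transverse phase factor exp(iℓφ). The short sketch below builds a Laguerre-Gauss-like mode, chosen purely for illustration, and shows why that corkscrew phase forces the doughnut shape seen on the detector.

```python
import numpy as np

# Sketch of a vortex-beam wave function psi ~ r^|l| * exp(-r^2/w^2) * exp(i*l*phi).
# The exp(i*l*phi) factor is the "corkscrew" phase; because the phase is
# undefined on the axis, the amplitude must vanish there, so the intensity
# forms the hallmark doughnut ring.

l = 2                                    # orbital angular momentum quantum number
w = 1.0                                  # beam waist (arbitrary units)
y, x = np.mgrid[-3:3:256j, -3:3:256j]
r, phi = np.hypot(x, y), np.arctan2(y, x)

psi = r**abs(l) * np.exp(-(r / w) ** 2) * np.exp(1j * l * phi)
intensity = np.abs(psi) ** 2

print(f"on-axis intensity: {intensity[128, 128]:.3e}")   # essentially zero
print(f"ring maximum:      {intensity.max():.3e}")       # the doughnut peak
```

Beams with different ℓ produce distinct rings, which is exactly the signature the experimenters read off their detector, as described next.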
Narevicius and colleagues created the new beams by passing helium atoms through a grid of specially shaped slit patterns, each just 600 nanometers wide. The team detected a hallmark of vortex beams: a row of doughnut-shaped rings imprinted on a detector by the atoms, in which each doughnut corresponds to a beam with a different orbital angular momentum.
Another set of doughnuts revealed the presence of vortex beams of helium excimers, molecules created when a helium atom in an excited, or energized, state pairs up with another helium atom.
Next, scientists might investigate what happens when vortex beams of molecules or atoms collide with light, electrons or other atoms or molecules. Such collisions are well understood for normal particle beams, but not for those with orbital angular momentum. Similar vortex beams made with protons might also serve as a method for probing the subatomic particles' mysterious innards (SN: 4/18/17).
"In physics, most important things are achieved when we are revisiting known phenomena with a fresh perspective," says physicist Ivan Madan of EPFL, the Swiss Federal Institute of Technology in Lausanne, who was not involved with the research. "And, for sure, this experiment allows us to do that."
Here is the original post:
New vortex beams of atoms and molecules are the first of their kind - Science News
Posted in Quantum Physics
Comments Off on New vortex beams of atoms and molecules are the first of their kind – Science News
Physics – 3D Collimation of Matter Waves – Physics
Posted: at 2:17 pm
August 30, 2021 • Physics 14, 119
An innovative matter-wave lens exploiting atomic interactions is able to slow the expansion of a Bose-Einstein condensate in three dimensions, thus reaching unprecedented ultralow temperatures.
At ultralow temperatures, dilute atomic gases manifest their full quantum nature as matter waves in the form of Bose-Einstein condensates (BECs). Through the interference of matter waves in an interferometer, researchers can probe gravitational effects at microscopic scales and thereby test gravity at the quantum level. But improving the precision of these tests requires lowering the temperature of the BECs even further. Ernst Rasel from Leibniz University Hannover in Germany and colleagues have realized BECs at the lowest temperature so far (38 pK) by collimating the atoms in 3D with a new time-domain lens system based on atomic interactions [1].
The team prepared BEC matter waves with over one hundred thousand atoms and recorded their time evolution via absorption imaging during 2 s of free fall in a 110-m-high tower. Without any lensing applied, the BEC expanded through random thermal motion and became too dilute to be detected after 160 ms. In contrast, when the team collimated the atoms with their lens, the expansion slowed, and the BEC was visible throughout its fall. Moreover, the authors extrapolated their results and found that their innovative collimation technique can generate slowly expanding BECs that should remain detectable even after 17 s, which could be useful in future tests of gravity in space-based experiments.
BEC matter waves are a magnificent tool with which to explore the interface between quantum theory and general relativity, the underlying theories of the microcosmos and the macrocosmos, respectively. When a BEC is placed in an interferometer, its interference pattern will partly depend on gravitational effects due to the mass of the atoms. Detecting these effects could allow for fundamental tests, such as the verification of the Einstein equivalence principle with quantum objects. These tests require letting the BEC freely evolve for long times, which poses a problem, as the atoms tend to fly apart because of the internal kinetic energy (or temperature) of the system. Reducing this energy would extend the expansion time before the BEC becomes too dilute and improve the precision of matter-wave interferometry.
A powerful way to reduce a BEC's internal kinetic energy is to exploit a matter-wave lens to focus the BEC atoms at infinity. Standard matter-wave lenses that are based on magnetic, optical, or electrostatic forces have indeed been used to reduce the BEC's internal kinetic energy. Those tools can reach effective temperatures of about 50 pK but, unfortunately, only in two dimensions [2]. A magnetic lens, for example, has a cylindrical geometry that can bend the trajectory of atoms inward along the two radial directions, but it lacks this refractive power along the axial direction.
In their experiments, Rasel and colleagues achieve an unprecedentedly low temperature of 38 pK by exploiting an innovative matter-wave lens system in the time domain. Such a system can focus the BEC wave at infinity in all three spatial dimensions by cleverly combining a magnetic lens with a collective-mode excitation (or shape vibration) of the BEC [3].
The team first generated a BEC of approximately one hundred thousand rubidium atoms within a cylindrically shaped magnetic trap produced on a microchip [4]. To excite the collective-mode oscillation, the researchers quickly reduced the trap's magnetic bias field along one direction, while increasing the trapping strength in the other two directions. Because of the atomic interactions, the BEC responded by lengthening along its axis and slimming around its waist (Fig. 1). If allowed to continue this oscillation, the BEC would return to its original shape, but the researchers instead released the BEC at the time of maximum slimming. This was the key step for achieving 3D collimation, as it minimized the expansion along the axial direction. To slow the expansion around the BEC's waist, the team applied a magnetic lens that collimated the atomic motion in the other two dimensions.
The experiments were performed at the Bremen drop tower in Germany, which provides an exceptional microgravity environment with residual accelerations of the order of 10⁻⁶ g [5]. The researchers released the BEC at the top of the tower and measured its size via absorption imaging at different points during the free fall. From the data, they surmised that the expansion velocities were of the order of 60 μm/s. In simulations, the team extended the free-fall time and showed that the BEC should remain detectable for up to 17 s.
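As a back-of-envelope check (ours, not a calculation from the paper, assuming rubidium-87 and simple equipartition), an effective temperature T corresponds to a thermal expansion velocity v ≈ √(k_B T / m), and 38 pK indeed lands at the reported tens of micrometers per second:

```python
import math

# Back-of-envelope check: relate the quoted effective temperature to a
# thermal expansion velocity via (1/2) m v^2 ~ (1/2) k_B T, i.e.
# v ~ sqrt(k_B * T / m), assuming rubidium-87 atoms.

k_B = 1.380649e-23               # Boltzmann constant, J/K
m_Rb87 = 86.909 * 1.66054e-27    # rubidium-87 mass, kg

T = 38e-12                       # 38 pK
v = math.sqrt(k_B * T / m_Rb87)
print(f"v ~ {v * 1e6:.0f} um/s")  # ~60 um/s, matching the reported scale
```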
By tuning both the oscillation time at the condensate's release and the strength of the magnetic lens's potential, this new lensing method offers the possibility to engineer and control BEC shape and expansion for fundamental physics tests as well as for quantum sensing technologies. Indeed, the ability to generate slowly expanding BECs for tens of seconds can enable high-precision gravitational-wave detection [6], measurements of the gravitational constant [7] and the tidal force of gravity [8], as well as the search for ultralight dark matter [9] and a stringent quantum verification of Einstein's equivalence principle, both in drop towers and in space [10].
Furthermore, the 3D matter-wave lens system introduced by Rasel and co-workers provides a new and exciting perspective on the quantum advantage hidden behind the presence of interatomic interactions, often viewed as a drawback in matter-wave optics with long expansion times. Indeed, such interactions can be exploited as a powerful metrological tool in the development of matter-wave quantum sensors, enabling not only high-coherence properties but also highly nonclassical correlations.
Vincenzo Tamma is currently the Founding Director of the Quantum Science and Technology Hub and a reader in physics at the University of Portsmouth, UK, after being a group leader at the Institute of Quantum Physics at Ulm University, Germany. His Ph.D. research at the University of Maryland, Baltimore County and at the University of Bari Aldo Moro, Italy, was recognized with the Giampietro Puppi Award for the best Italian Ph.D. thesis in physics and astrophysics in 2007–2009. His research aims for a deeper understanding of the fundamental physics at the interface of quantum mechanics, quantum information, complexity theory, atomic physics, and general relativity, as well as at boosting the real-world implementation of quantum-enhanced technologies for computing and sensing applications.
Christian Deppner, Waldemar Herr, Merle Cornelius, Peter Stromberger, Tammo Sternke, Christoph Grzeschik, Alexander Grote, Jan Rudolph, Sven Herrmann, Markus Krutzik, André Wenzlawski, Robin Corgier, Eric Charron, David Guéry-Odelin, Naceur Gaaloul, Claus Lämmerzahl, Achim Peters, Patrick Windpassinger, and Ernst M. Rasel
Phys. Rev. Lett. 127, 100401 (2021)
Published August 30, 2021
Read the original here:
Posted in Quantum Physics
Comments Off on Physics – 3D Collimation of Matter Waves – Physics
New Quantum Algorithm Directly Calculates the Energy Difference of Atoms and Molecules – SciTechDaily
Posted: at 2:17 pm
Left: The phase difference between |0⟩|Ψ⟩ and exp(−iEt)|1⟩|Ψ⟩ affords the total energy E. The curved arrow in purple indicates the phase evolution of |Ψ⟩ in time. Right: The phase difference between exp(−iE₀t)|0⟩|Ψ₀⟩ and exp(−iE₁t)|1⟩|Ψ₁⟩ affords the energy difference E₁ − E₀ directly. The curved arrows in blue and in purple indicate the phase evolution of |Ψ₀⟩ and that of |Ψ₁⟩, respectively. Credit: K. Sugisaki, K. Sato and T. Takui
Osaka City University creates a general quantum algorithm, executable on quantum computers, which calculates molecular energy differences without computing the relevant total energies.
As newly reported in the journal Physical Chemistry Chemical Physics, researchers from the Graduate School of Science at Osaka City University have developed a quantum algorithm that can characterize the electronic states of atomic or molecular systems by directly calculating the energy difference between the relevant states. Implemented as a Bayesian phase difference estimation, the algorithm breaks from convention by not taking the difference of total energies calculated from the pre- and post-phase evolution, but by following the evolution of the energy difference itself.
"Almost all chemistry problems discuss the energy difference, not the total energy of the molecule itself," says research lead and Specially-Appointed Lecturer Kenji Sugisaki. "Also, molecules with heavy atoms that appear in the lower part of the periodic table have large total energies, but the size of the energy differences discussed in chemistry, such as electronic excitation and ionization energies, does not depend much on the size of the molecule." This idea led Sugisaki and his team to implement a quantum algorithm that directly calculates energy differences instead of total energies, pointing toward a future where scalable, practical quantum computers enable us to carry out actual chemical research and materials development.
Currently, quantum computers are capable of performing full configuration interaction (full-CI) calculations, which afford optimal molecular energies, with a quantum algorithm called quantum phase estimation (QPE); note that the full-CI calculation for sizable molecular systems is intractable on any supercomputer. QPE relies on the fact that a wave function |Ψ⟩, the mathematical description of the quantum state of a microscopic system (here, the solution of the Schrödinger equation for a system such as an atom or molecule), changes its phase over time depending on its total energy. In the conventional QPE, the quantum superposition state (|0⟩|Ψ⟩ + |1⟩|Ψ⟩)/√2 is prepared, and a controlled time-evolution operator makes |Ψ⟩ evolve in time only when the first qubit is in the |1⟩ state. Thus, the |1⟩ branch acquires the quantum phase of the post-evolution state, whereas the |0⟩ branch retains that of the pre-evolution state. The phase difference between the pre- and post-evolution states gives the total energy of the system.
The researchers at Osaka City University generalized the conventional QPE to the direct calculation of the difference in total energy between two relevant quantum states. In the newly implemented quantum algorithm, termed Bayesian phase difference estimation (BPDE), the superposition of two wave functions, (|0⟩|Ψ₀⟩ + |1⟩|Ψ₁⟩)/√2, where |Ψ₀⟩ and |Ψ₁⟩ denote the wave functions of the two states, is prepared, and the difference in phase between |Ψ₀⟩ and |Ψ₁⟩ after the time evolution of the superposition directly gives the difference in total energy between the two states. "We emphasize that the algorithm follows the evolution of the energy difference over time, which is less prone to noise than individually calculating the total energy of an atom or molecule. Thus, the algorithm suits chemistry problems that demand precise energies," states research supervisor and Professor Emeritus Takeji Takui.
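To see the core idea numerically, here is a minimal sketch (ours, with made-up eigenenergies and ħ = 1; not the authors' implementation): the two branches accumulate phases exp(−iE₀t) and exp(−iE₁t), and interfering them exposes E₁ − E₀ directly, without ever evaluating E₀ or E₁ separately.

```python
import numpy as np

# Minimal numerical sketch of the phase-difference idea behind BPDE
# (illustrative energies, hbar = 1; not the authors' code). The branches
# |0>|Psi0> and |1>|Psi1> pick up phases exp(-i*E0*t) and exp(-i*E1*t);
# their interference depends only on the difference E1 - E0.

E0, E1 = 1.7, 2.3        # made-up eigenenergies (arbitrary units)
t = np.linspace(0.0, 5.0, 200)

amp0 = np.exp(-1j * E0 * t) / np.sqrt(2)   # phase of the |0>|Psi0> branch
amp1 = np.exp(-1j * E1 * t) / np.sqrt(2)   # phase of the |1>|Psi1> branch

# Relative phase accumulated between the branches: -(E1 - E0) * t.
rel_phase = np.unwrap(np.angle(amp1 * np.conj(amp0)))

# Fit the slope of phase vs time to estimate the energy gap directly.
gap = -np.polyfit(t, rel_phase, 1)[0]
print(f"estimated E1 - E0 = {gap:.4f} (exact: {E1 - E0})")
```

In the actual algorithm the phase is read out from qubit measurements and refined by Bayesian inference rather than a least-squares fit, but the quantity being tracked is the same relative phase.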
Previously, this research group developed a quantum algorithm that directly calculates the energy difference between electronic states (spin states) with different spin quantum numbers (K. Sugisaki, K. Toyota, K. Sato, D. Shiomi, T. Takui, Chem. Sci. 2021, 12, 2121–2132). That algorithm, however, requires more qubits than the conventional QPE and cannot be applied to energy-difference calculations between electronic states with equal spin quantum numbers, which are important for the spectral assignment of UV-visible absorption spectra. The BPDE algorithm developed in this study overcomes these issues, making it a highly versatile quantum algorithm.
Reference: "A Bayesian phase difference estimation: a general quantum algorithm for the direct calculation of energy gaps," Physical Chemistry Chemical Physics, 2 September 2021.
Other contributors include Kazuo Toyota, Kazunobu Sato and Daisuke Shiomi, all of whom are affiliated with the Department of Chemistry and Molecular Materials Science in Osaka City University's Graduate School of Science. Sugisaki is also affiliated with the Japan Science and Technology Agency's PRESTO project "Quantum Software." Takui is also a University Research Administrator in the Research Support Department/University Research Administrator Center of Osaka City University.
Continued here:
Posted in Quantum Physics
Comments Off on New Quantum Algorithm Directly Calculates the Energy Difference of Atoms and Molecules – SciTechDaily