
Category Archives: Quantum Physics

What if the universe had no beginning? – Livescience.com

Posted: October 11, 2021 at 9:59 am

In the beginning, there was... well, maybe there was no beginning. Perhaps our universe has always existed, and a new theory of quantum gravity reveals how that could work.

"Reality has so many things that most people would associate with sci-fi or even fantasy," said Bruno Bento, a physicist who studies the nature of time at the University of Liverpool in the U.K.

In his work, he employed a new theory of quantum gravity, called causal set theory, in which space and time are broken down into discrete chunks of space-time. At some level, there's a fundamental unit of space-time, according to this theory.

Bento and his collaborators used this causal-set approach to explore the beginning of the universe. They found that it's possible the universe had no beginning: that it has always existed into the infinite past and only recently evolved into what we call the Big Bang.


Quantum gravity is perhaps the most frustrating problem facing modern physics. We have two extraordinarily effective theories of the universe: quantum physics and general relativity. Quantum physics has produced a successful description of three of the four fundamental forces of nature (electromagnetism, the weak force and the strong force) down to microscopic scales. General relativity, on the other hand, is the most powerful and complete description of gravity ever devised.

But for all its strengths, general relativity is incomplete. In at least two specific places in the universe, the math of general relativity simply breaks down, failing to produce reliable results: in the centers of black holes and at the beginning of the universe. These regions are called "singularities," which are spots in space-time where our current laws of physics crumble, and they are mathematical warning signs that the theory of general relativity is tripping over itself. Within both of these singularities, gravity becomes incredibly strong at very tiny length scales.


As such, to solve the mysteries of the singularities, physicists need a microscopic description of strong gravity, also called a quantum theory of gravity. There are lots of contenders out there, including string theory and loop quantum gravity.

And there's another approach that completely rewrites our understanding of space and time.

In all current theories of physics, space and time are continuous. They form a smooth fabric that underlies all of reality. In such a continuous space-time, two points can be arbitrarily close to each other in space, and two events can occur arbitrarily close together in time.

"Reality has so many things that most people would associate with sci-fi or even fantasy."

But another approach, called causal set theory, reimagines space-time as a series of discrete chunks, or space-time "atoms." This theory would place strict limits on how close events can be in space and time, since they can't be any closer than the size of the "atom."


For instance, if you're looking at your screen reading this, everything seems smooth and continuous. But if you were to look at the same screen through a magnifying glass, you might see the pixels that divide up the space, and you'd find that it's impossible to bring two images on your screen closer than a single pixel.

This theory of physics excited Bento. "I was thrilled to find this theory, which not only tries to go as fundamental as possible (being an approach to quantum gravity and actually rethinking the notion of space-time itself) but which also gives a central role to time and what it physically means for time to pass, how physical your past really is and whether the future exists already or not," Bento told Live Science.

Causal set theory has important implications for the nature of time.

"A huge part of the causal set philosophy is that the passage of time is something physical, that it should not be attributed to some emergent sort of illusion or to something that happens inside our brains that makes us think time passes; this passing is, in itself, a manifestation of the physical theory," Bento said. "So, in causal set theory, a causal set will grow one 'atom' at a time and get bigger and bigger."

The causal set approach neatly removes the problem of the Big Bang singularity because, in the theory, singularities can't exist. It's impossible for matter to compress down to infinitely tiny points; nothing can get smaller than the size of a space-time atom.

So without a Big Bang singularity, what does the beginning of our universe look like? That's where Bento and his collaborator, Stav Zalel, a graduate student at Imperial College London, picked up the thread, exploring what causal set theory has to say about the initial moments of the universe. Their work appears in a paper published Sept. 24 to the preprint database arXiv. (The paper has yet to be published in a peer-reviewed scientific journal.)

The paper examined "whether a beginning must exist in the causal set approach," Bento said. "In the original causal set formulation and dynamics, classically speaking, a causal set grows from nothing into the universe we see today. In our work instead, there would be no Big Bang as a beginning, as the causal set would be infinite to the past, and so there's always something before."

Their work implies that the universe may have had no beginning: that it has simply always existed. What we perceive as the Big Bang may have been just a particular moment in the evolution of this always-existing causal set, not a true beginning.

There's still a lot of work to be done, however. It's not clear yet if this no-beginning causal approach can allow for physical theories that we can work with to describe the complex evolution of the universe during the Big Bang.

"One can still ask whetherthis [causal set approach] can be interpreted in a 'reasonable' way, or what such dynamics physically means in a broader sense, but we showed that a framework is indeed possible," Bento said. "So at least mathematically, this can be done."

In other words, it's a beginning.

Originally published on Live Science.



The neuroscience of advanced scientific concepts | npj Science of Learning – Nature.com

Posted: at 9:59 am

This study identified the content of the neural representations in the minds of physicists considering some of the classical and post-classical physics concepts that characterize their understanding of the universe. In this discussion, we focus on the representations of post-classical concepts, which are the most recent and most abstract and have not been previously studied psychologically. The neural representations of both the post-classical and classical concepts were underpinned by four underlying neurosemantic dimensions, such that these two types of concepts were located at opposite ends of the dimensions. The neural representations of classical concepts tended to be underpinned by underlying dimensions of measurability of magnitude, association with a mathematical formulation, having a concrete, non-speculative basis, and in some cases, periodicity. By contrast, the post-classical concepts were located at the other ends of these dimensions, stated initially here in terms of what they are not (e.g. they are not periodic and not concrete). Below we discuss what they are.

The main new finding is the underlying neural dimension of representation pertaining to the concepts' presence (in the case of the classical concepts) or absence (in the case of the post-classical concepts) of a concrete, non-speculative basis. The semantic characterization of this new dimension is supported by two sources of converging evidence. First, the brain imaging measurement of each concept's location on this underlying dimension (i.e., the concept's factor scores) converged with the behavioral ratings of the concepts' degree of association with this dimension (as we have interpreted it) by an independent group of physicists. (This type of convergence occurred for the other three dimensions as well.) Second, the two types of concepts have very distinguishable neural signatures: a classifier can very accurately distinguish the mean of the post-classical concepts' signatures from the mean of the classical concepts' signatures within each participant, with a grand mean accuracy of 0.93, p < 0.001.
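To make the classification result concrete, here is a minimal leave-one-out, nearest-mean classifier in Python. Everything about the data below is simulated (20 concepts, 200 voxels, the noise level); the paper's actual classifier and preprocessing are not specified in this excerpt, so this is only a sketch of the general within-participant logic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 200

# Simulated neural signatures: 10 classical and 10 post-classical concepts,
# drawn around two different mean activation patterns (toy data, not fMRI).
mu_classical = rng.normal(size=n_voxels)
mu_post = rng.normal(size=n_voxels)
X = np.vstack([mu_classical + 0.8 * rng.normal(size=(10, n_voxels)),
               mu_post + 0.8 * rng.normal(size=(10, n_voxels))])
y = np.array([0] * 10 + [1] * 10)

# Leave-one-out: classify each held-out signature by its distance to the
# mean signature of each class, computed without the held-out item.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    m0 = X[mask & (y == 0)].mean(axis=0)
    m1 = X[mask & (y == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0))
    correct += pred == y[i]

print("leave-one-out accuracy:", correct / len(y))
```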

As physicists ventured into conceptually new territory in the 20th century and developed new post-classical concepts, their brains organized the new concepts with respect to a new dimension that had not played a role in the representation of classical concepts.

To describe what mental processes might characterize the post-classical end of this new dimension, it is useful to consider what attributes of the post-classical concepts could have led to their being neurally organized as they are, and what cognitive and neural processes might operate on these attributes. As previously mentioned, post-classical concepts tend to be immeasurable, and they are less likely to be strongly associated with a mathematical formulation or with periodicity; both attributes are often absent from post-classical concepts.

More informative than the absent attributes are four types of cognitive processes evoked by the post-classical concepts: (1) Reasoning about intangibles, taking into account their separation from direct experience and their lack of direct observability; (2) Assessing consilience with other, firmer knowledge; (3) Causal reasoning about relations that are not apparent or observable; and (4) Knowledge management of a large knowledge organization consisting of a multi-level structure of other concepts.

In addition to enabling the decoding of the content of the participants' thoughts (whether they were thinking of dark matter or tachyon, for example), the brain activation patterns are also informative about the concomitant psychological processes that operate on the concepts; in particular, the four processes listed above are postulated to co-occur specifically with the post-classical concepts. The occurrence of these processes was inferred from the locations of the voxel clusters associated with (having high loadings on) the classical/post-classical factor, specifically the factor locations where the activation levels increased for the post-classical concepts. (These voxel clusters are shown in Fig. 4, and their centroids are included in Table 2.) Inferring a psychological process based on previous studies that observed activation in that location is called reverse inference. This can be an uncertain inferential method because many different processes or tasks can evoke activation at the same location. What distinguishes the current study are several sources of independent converging evidence, in conjunction with the brain locations associated with a factor (and not simply observed activation), indicating a particular process.

(Fig. 4 caption) The factor clusters are encircled and numbered for ease of reference in the text, and their centroids are included in Table 2. These locations correspond to the four classes of processes evoked by the post-classical concepts.

First, a statistically reliable decoding model predicted the activation levels for each concept in the factor locations, based on independent ratings of the concepts with respect to the postulated dimension/factor. The activation levels of the voxels in the factor locations were systematically modulated by the stimulus set, with the post-classical concepts (a specific subset of the stimuli) eliciting the highest activation levels in these locations, resulting in the highest factor scores for this factor. Thus, these brain locations were associated with an activation-modulating factor, not with a stimulus or a task. Second, the processes are consistent with the properties participants reported to have associated with the post-classical concepts. These properties provide converging evidence for these four types of processes occurring. For example, the concept of multiverse evoked properties related to assessing consilience, such as "a hypothetical way to explain away constants." Another example is that tachyons and quasars were attributed with properties related to reasoning about intangibles, such as "quasi-stellar objects." Third, the processes attributed to the factor locations were based not simply on an occasional previous finding, but on a large-scale meta-analysis (the Neurosynth database, Yarkoni et al.10) using the association-based test feature. The association between the location and the process was based on the cluster centroid locations; particularly relevant citations are included in the factor descriptions. Each of the four processes is described in more detail below.

The nature of many of the post-classical concepts entails the consideration of alternative possible worlds. The post-classical factor location in the right temporal area (shown in cluster 5 in Fig. 4) has been associated with hypothetical or speculative reasoning in previous studies. In a hypothetical reasoning task, the left supramarginal factor location (shown in cluster 8) was activated during the generation of novel labels for abstract objects11. Additionally, the right temporal factor location (shown in cluster 5) was activated during the assessment of confidence in probabilistic judgments12.

Another facet of post-classical concepts is that they require the unknown or non-observable to be brought into consilience with what is already known. The right middle frontal cluster (shown in cluster 2) has been shown to be part of a network for integrating evidence that disconfirms a belief13. This consilience process resembles the comprehension of an unfolding narrative, where a new segment of the narrative must be brought into coherence with the parts that preceded it. When readers of a narrative judge the coherence of a new segment of text, the dorsomedial prefrontal cortex location (shown in cluster 6) is activated14. This location is associated with a post-classical factor location, as shown in Fig. 4. Thus understanding the coherence of an unfolding narrative text might involve some of the same psychological and neural consilience-seeking processes as thinking about concepts like multiverse.

Thinking about many of the post-classical concepts requires the generation of novel types of causal inferences to link two events. In particular, the inherent role of the temporal relations in specifying causality between events is especially complex with respect to post-classical concepts. The temporal ordering itself of events is frame-dependent in some situations, despite causality being absolutely preserved, leading to counter-intuitive (though not counter-factual) conclusions. For example, in relativity theory the concept of simultaneity entails two spatially separated events that may occur at the same time for a particular observer but which may not be simultaneous for a second observer, and even the temporal ordering of the events may not be fixed for the second observer. Because the temporal order of events is not absolute, causal reasoning in post-classical terms must eschew inferencing on this basis, but must instead rely on new rules (laws) that lead to consilience with observations that indeed can be directly perceived.

Another example, this one from quantum physics, concerns a particle such as an electron that may be conceived to pass through a small aperture at some speed. Its subsequent momentum becomes indeterminate in such a way that the arrival location of the particle at a distant detector can only be described in probabilistic terms, according to new rules (laws) that are very definite but not intuitive. The perfectly calculable non-local wave function of the particle-like object is said to collapse upon arrival in the standard Copenhagen interpretation of quantum physics. Increasingly elaborate probing of physical systems with one or several particles, interacting alone or in groups with their environment, has for decades elucidated and validated the non-intuitive new rules about limits and alternatives to classical causality in the quantum world. The fact that new rules regarding causal reasoning are needed in such situations was described as the heart of quantum mechanics and as containing the only mystery by Richard Feynman15.

Generating causal inferences to interconnect a sequence of events in a narrative text evokes activation in right temporal and right frontal locations (shown in clusters 3 and 4), which are post-classical factor locations16,17,18, as shown in Fig. 4. Causal reasoning accompanying perceptual events also activates a right middle frontal location (shown in cluster 3) and a right superior parietal location (shown in cluster 1)19. Notably, the right parietal activation is the homolog of a left parietal cluster associated with causal visualization1 found in undergraduates' physics conceptualizations, suggesting that post-classical concepts may recruit right hemisphere homologs of regions evoked by classical concepts. Additionally, a factor location in the left supramarginal gyrus (shown in cluster 8) is activated in causal assessment tasks, such as determining whether the causality of a social event was person-based (being a hard worker) or situation-based (danger)20.

Although we have treated a post-classical concept such as multiverse as a single concept, it is far more complex than velocity. Multiverse entails the consideration of the uncertainty of its existence, the consilience of its probability of existence with measurements of matter in the universe, and the consideration of scientific evidence relevant to a multiverse. Thinking about large, multi-concept units of knowledge, such as the schema for executing a complex multi-step procedure, evokes activation in medial frontal regions (shown in cluster 6)21,22. Reading and comprehending the description of such procedures (read, think about, answer questions, listen to, etc.) requires the reader to cognitively organize diverse types of information in a common knowledge structure. Readers who were trained to self-explain expository biological texts activated an anterior prefrontal cortex region (shown in cluster 7 in Fig. 4) during the construction of text models and strategic processing of internal representations23.

This underlying cognitive function of knowledge management associated with the post-classical dimension may generate and utilize a structure to manage a complex array of varied information that is essential to the concept. This type of function has been referred to as a Managerial Knowledge Unit22. As applied to a post-classical concept such as a tachyon, this knowledge management function would contain links to information to evaluate the possibility of the existence of tachyons, hypothetical particles that would travel faster than light-speed in vacuum. The concept invokes a structured network of simpler concepts (mass, velocity, light, etc.) that compose it. This constitutes a knowledge unit larger than a single concept.

Although the discussion has so far focused on the most novel dimension (the classical vs. post-classical), all four dimensions together compose the neural representation of each concept, which indicates where on each dimension a given concept is located (assessed by the concept's factor scores). The bar graphs of Fig. 5 show how the concepts at the extremes of the dimensions can appear at either extreme on several dimensions. These four dimensions are:

the classical vs. post-classical dimension, as described above, which is characterized by contrasting the intangible but consilient nature of post-classical concepts versus the quantifiable, visualizable, otherwise observable nature of classical concepts.

the measurability of a magnitude associated with a concept, that is, the degree to which it has some well-defined extent in space, time, or material properties versus the absence of this property.

the periodicity or oscillation which describes how many systems behave over time versus the absence of periodicity as an important element.

the degree to which a concept is associated with a mathematical formulation that formalizes the rules and principles of the behavior of matter and energy versus being less specified by such formalizations.

A concept may have a high factor score for more than one factor; for example, potential energy appears as measurable, mathematical, and on the classical end of the post-classical dimension. In contrast, multiverse appears as non-measurable, non-periodic, and post-classical.
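Factor scores like these are typically obtained by fitting a factor-analysis model to a concept-by-voxel activation matrix. The sketch below uses scikit-learn on simulated data; the matrix sizes and noise level are invented, and the study's actual pipeline may differ:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_concepts, n_voxels, n_factors = 28, 300, 4

# Toy data: activations generated from 4 latent dimensions (standing in for
# measurability, mathematical formulation, periodicity, post-classicality).
true_scores = rng.normal(size=(n_concepts, n_factors))
loadings = rng.normal(size=(n_factors, n_voxels))
activations = true_scores @ loadings + 0.5 * rng.normal(size=(n_concepts, n_voxels))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
scores = fa.fit_transform(activations)  # each concept's location on each dimension

print(scores.shape)  # (28, 4): one factor score per concept per dimension
```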

The locations of the clusters of voxels with high loadings on each of the factors are shown in Fig. 6.

Colors differentiate the factors and greater color transparency indicates greater depth. Sample concepts from the two ends of the dimensions are listed. The post-classical factor locations include those whose activations were high for post-classical concepts (their locations are shown in Fig. 4) as well as those locations whose activations were high for classical concepts.

Classical concepts with high factor scores on the measurability factor, such as frequency, wavelength, acceleration, and torque, are all concepts that are often measured, using devices such as oscilloscopes and torque wrenches, whereas post-classical concepts such as duality and dark matter have an uncertainty of boundedness and no defined magnitude, resulting in factor scores at the other end of the dimension. This factor is associated with parietal and precuneus clusters that are often found to be activated when people have to assess or compare magnitudes of various types of objects or numbers24,25,26, a superior frontal cluster that exhibits higher activation when people are comparing the magnitudes of fractions as opposed to decimals27, and an occipital-parietal cluster (dorsolateral extrastriate V3A) that activates when estimating the arrival time of a moving object28. Additional brain locations associated with this factor include left supramarginal and inferior parietal regions that are activated during the processing of numerical magnitudes26, and left intraparietal sulcus and superior parietal regions activated during the processing of spatial information29. This factor was not observed in a previous study that included only classical concepts, presumably because it would not have differentiated among those concepts1.

The mathematical formulation factor is salient for concepts that are clearly associated with a mathematical formalization. The three concepts that are most strongly associated with this factor, commutator, Lagrangian, and Hamiltonian, are mathematical functions or operators. Cluster locations that are associated with this factor include: parietal regions that tend to activate in tasks involving mathematical representations30,31 and right frontal regions related to difficult mental calculations32,33. The parietal regions associated with the factor, which extend into the precuneus, activate in arithmetic tasks34. While most if not all physics concepts entail some degree of mathematical formulation, post-classical concepts such as quasar, while being measurable, are typically not associated with an algebraic formulation.

The periodicity factor is salient for many of the classical concepts, particularly those related to waves: wave function, light, radio waves, and gamma rays. This factor is associated with right hemisphere clusters and a left inferior frontal cluster, locations that resemble those of a similarly described factor in a neurosemantic analysis of physics concepts in college students1. This factor was also associated with a right hemisphere cluster in the inferior frontal gyrus and bilateral precuneus.

For all four underlying semantic dimensions, the brain activation-based orderings of the physics concepts with respect to their dimensions were correlated with the ratings of those concepts along those dimensions by independent physics faculty. This correlation makes it possible for a linear regression model to predict the activation pattern that will be evoked by future concepts in physicists' brains. When a new physics concept becomes commonplace (such as a new particle category, say, magnetic monopoles), it should be possible to predict the brain activation that will be the neural signature of the magnetic monopole concept, based on how that concept is rated along the four underlying dimensions.
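A hedged sketch of that prediction step: fit a linear regression from the four dimension ratings to the voxel activation pattern, then predict the signature of an unseen concept from its ratings alone. All numbers below (28 concepts, 300 voxels, the rating vector for the new concept) are hypothetical placeholders:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_concepts, n_voxels = 28, 300

# Hypothetical training data: each concept's ratings on the four dimensions,
# and the activation pattern it evokes.
ratings = rng.normal(size=(n_concepts, 4))
activations = (ratings @ rng.normal(size=(4, n_voxels))
               + 0.3 * rng.normal(size=(n_concepts, n_voxels)))

model = LinearRegression().fit(ratings, activations)

# Predict the signature of a yet-unseen concept (say, "magnetic monopole")
# from a hypothetical rating vector on the four dimensions.
new_ratings = np.array([[1.2, 0.5, -0.8, 2.0]])
predicted_signature = model.predict(new_ratings)
print(predicted_signature.shape)  # (1, 300)
```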

The neurosemantic conceptual space defined by the four underlying dimensions includes regions that are currently sparsely populated by existing concepts, but these regions may well be the site of some yet-to-be theorized concepts. It is also possible that as future concepts are developed, additional dimensions of neural representation may emerge, expanding the conceptual space that underpins the concepts in the current study.



5 sci-fi concepts that are possible (in theory) – Livescience.com

Posted: at 9:59 am

Science fiction novels and movies are packed with far-out ideas, most often as the springboard for an action-packed adventure rather than a serious attempt to predict future trends in science or technology. Some of the most common tropes, such as accelerating a spacecraft to fantastic speeds in a matter of seconds without crushing the occupants, are just plain impossible according to the laws of physics as we understand them. Yet those very same laws appear to permit other seemingly far-fetched sci-fi concepts, from wormholes to parallel universes. Here's a rundown of some of the sci-fi ideas that could really be done, in theory at least.

The idea of a wormhole (a shortcut through space that allows almost instantaneous travel between distant parts of the universe) sounds like it was created as a fictional story-driver. But under its more formal name of an Einstein-Rosen bridge, the concept existed as a serious theoretical idea long before sci-fi writers got hold of it. It comes out of Albert Einstein's theory of general relativity, which views gravity as a distortion of space-time caused by massive objects. In collaboration with physicist Nathan Rosen, Einstein theorized in 1935 that points of extremely strong gravity, such as black holes, could be directly connected with each other. And so the idea of wormholes was born.

The forces around a black hole would destroy anyone who came close to it, so the idea of actually traveling through a wormhole wasn't given serious consideration until the 1980s, when astrophysicist Carl Sagan decided he was going to write a sci-fi novel. According to the BBC, Sagan encouraged fellow physicist Kip Thorne to come up with a feasible way to travel interstellar distances in a flash. Thorne duly devised a way (possible in theory, but highly improbable in practice) that humans might achieve interstellar travel by traversing a wormhole unscathed. The result found its way into Sagan's novel "Contact" (Simon and Schuster: 1985), which was subsequently adapted into a film with Jodie Foster in the lead role.

While it's highly unlikely that wormholes will ever become the simple and convenient methods of transportation portrayed in movies, scientists have now come up with a more viable way to construct a wormhole than Thorne's original suggestion. It's also possible that, if wormholes already exist in the universe, they could be located using the new generation of gravitational-wave detectors.

An essential prerequisite for most space-based adventure stories is the ability to get from A to B much faster than we can today. Wormholes aside, there are multiple stumbling blocks to achieving this with a conventional spaceship. There's the enormous amount of fuel required, the crushing effects of acceleration, and the fact that the universe has a strictly imposed speed limit. This is the speed at which light travels: precisely one light-year per year, which in a cosmic context isn't very fast at all. Proxima Centauri, the second-closest star to Earth, is 4.2 light-years from the sun, while the center of the galaxy is a whopping 27,000 light-years away.
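To see why that speed limit bites, a quick back-of-envelope calculation helps: travel time in Earth's frame is just distance divided by speed (shipboard time dilation is ignored in this sketch):

```python
# Travel times, in Earth-frame years, for the two distances quoted above,
# at several fractions of the speed of light.
destinations = {"Proxima Centauri": 4.2, "galactic center": 27_000}  # light-years

for name, dist_ly in destinations.items():
    for frac in (0.01, 0.1, 1.0):
        print(f"{name} at {frac:.0%} of c: {dist_ly / frac:,.0f} years")
```

Even at an outlandish 10% of light speed, Proxima Centauri is a 42-year trip, and the galactic center is effectively out of reach.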

Fortunately, there's a loophole in the cosmic speed limit: It only dictates the maximum speed we can travel through space. As Einstein explained, space itself can be distorted, so perhaps it's possible to manipulate the space around a ship in such a way as to subvert the speed limit. The spaceship would still travel through the surrounding space at less than the speed of light, but the space itself would be moving faster than that.

This was what the writers of "Star Trek" had in mind when they came up with the concept of a "warp drive" in the 1960s. But to them it was just a plausible-sounding phrase, not real physics. It wasn't until 1994 that theoretician Miguel Alcubierre found a solution to Einstein's equations that produced a real warp drive effect, Live Science's sister site Space.com reported, contracting space in front of a spaceship and expanding it to the rear. To start with, Alcubierre's solution was no less contrived than Thorne's traversable wormhole, but scientists are attempting to refine it in the hope that it might one day be practical.

The concept of a time machine is one of the great sci-fi plot devices, allowing characters to go back and change the course of history, for better or worse. But this inevitably raises logical paradoxes. In "Back to the Future," for example, would Doc have built his time machine if he hadn't been visited by the future Marty using that very same machine? It's because of paradoxes like these that many people assume time travel must be impossible in the real world, and yet, according to the laws of physics, it really can occur.

Just like with wormholes and space warps, the physics that tells us it's possible to travel back in time comes from Einstein's theory of general relativity. This treats space and time as part of the same "space-time" continuum, with the two being inextricably linked. Just as we talk about distorting space with a wormhole or warp drive, time can be distorted as well. Sometimes it can get so distorted that it folds back on itself, in what scientists refer to as a "closed timelike curve," though it could just as accurately be called a time machine.

A conceptual design for such a time machine was published in 1974 by physicist Frank Tipler, according to physicist David Lewis Anderson, who describes the research on the Anderson Institute, a private research lab. Called a Tipler cylinder, it has to be big (at least 60 miles, or 97 kilometers, long, according to Humble) and extremely dense, with a total mass comparable to that of the sun. To get it to function as a time machine, the cylinder has to rotate fast enough to distort space-time to the point where time folds back on itself. It may not sound as simple as installing a flux capacitor in a DeLorean, but it does have the advantage that it really would work, on paper at least.

The archetypal sci-fi example of teleportation is the "Star Trek" transporter, which, as the name suggests, is portrayed simply as a convenient way to transport personnel from one location to another. But teleportation is quite unlike any other form of transport: Instead of the traveler moving through space from the starting point to the destination, teleportation results in an exact duplicate being created at the destination while the original is destroyed. Viewed in these terms (and at the level of subatomic particles rather than human beings), teleportation is indeed possible, according to IBM.

The real-world process is called quantum teleportation. This process copies the precise quantum state of one particle, such as a photon, to another that may be hundreds of miles away. Quantum teleportation destroys the quantum state of the first photon, so it does indeed look as though the photon has been magically transported from one place to another. The trick is based on what Einstein referred to as "spooky action at a distance," but is more formally known as quantum entanglement. If the photon that is to be "teleported" is brought into contact with one of a pair of entangled photons, and a measurement of the resulting state is sent to the receiving end (where the other entangled photon is), then the latter photon can be switched into the same state as the teleported photon.
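The protocol can be followed end to end in a small state-vector simulation. The NumPy sketch below is a toy model of one-qubit teleportation (it simulates idealized qubits, not the photonic experiments described above): qubit 0 holds the state to teleport, qubits 1 and 2 form the entangled pair, and two classical measurement bits determine the correction applied at the receiving end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit gates and projectors.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
P1 = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|

def op(g0, g1, g2):
    """Three-qubit operator; qubit 0 is the leftmost tensor factor."""
    return np.kron(np.kron(g0, g1), g2)

# Random normalized state to teleport on qubit 0; qubits 1, 2 start in |00>.
v = rng.normal(size=2) + 1j * rng.normal(size=2)
v = v / np.linalg.norm(v)
psi = np.kron(v, np.array([1, 0, 0, 0], dtype=complex))

# Entangle qubits 1 and 2 into a Bell pair: H on qubit 1, then CNOT 1->2.
psi = op(I, H, I) @ psi
psi = (op(I, P0, I) + op(I, P1, X)) @ psi

# Sender: CNOT 0->1, then H on qubit 0.
psi = (op(P0, I, I) + op(P1, X, I)) @ psi
psi = op(H, I, I) @ psi

# Measure qubits 0 and 1; outcome index m encodes the two bits (m0, m1).
probs = np.array([np.linalg.norm(psi[2*m:2*m+2])**2 for m in range(4)])
m = rng.choice(4, p=probs)
m0, m1 = divmod(m, 2)
bob = psi[2*m:2*m+2] / np.sqrt(probs[m])  # receiving qubit after projection

# Receiver applies corrections conditioned on the two classical bits.
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print(np.allclose(bob, v))  # True: the original state reappears at the receiver
```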

It's a complicated process even for a single photon, and there's no way it could be scaled up to the kind of instant-transportation system seen in "Star Trek." Even so, quantum teleportation does have important applications in the real world, such as for hack-proof communications and super-fast quantum computing.

The universe is everything our telescopes reveal to us: all the billions of galaxies expanding outward from the Big Bang. But is that all there is? Theory says maybe not: There might be a whole multiverse of universes out there. The idea of "parallel universes" is another familiar sci-fi theme, but when they're depicted on screen they typically differ from our own universe only in minor details. But the reality may be much weirder than that, with the basic parameters of physics in a parallel universe (such as the strength of gravity or nuclear forces) differing from our own. A classic portrayal of a genuinely different universe of this kind, and the creatures living in it, is Isaac Asimov's novel "The Gods Themselves" (Doubleday: 1972).

The key to the modern understanding of parallel universes is the concept of "eternal inflation." This pictures the infinite fabric of space in a state of perpetual, incredibly rapid expansion. Every now and then a localized spot in this space (a self-contained Big Bang) drops out of the general expansion and begins to grow at a more sedate pace, allowing material objects like stars and galaxies to form inside it. According to this theory, our universe is one such region, but there may be countless others.

As in Asimov's story, these parallel universes could have completely different physical parameters from our own. At one time scientists believed that only universes with virtually the same parameters as ours would be capable of supporting life, but recent studies suggest the situation may not be as restrictive as this, Live Science previously reported. So there's hope for Asimov's aliens yet, though perhaps not for making contact with them, as happens in the novel. Nevertheless, the traces of other universes might be detectable to us by other means. It's even been suggested that the mysterious "cold spot" in the cosmic microwave background is the scar from a collision with a parallel universe, as Ivan Baldry, a professor of astrophysics at Liverpool John Moores University in the U.K., wrote in The Conversation.

Originally published on Live Science.



The 3 types of energy stored within every atom – Big Think

Posted: at 9:59 am

The humble atom is the fundamental building block of all normal matter.

Hydrogen, where single electrons orbit individual protons, composes ~90% of all atoms.

Quantum mechanically, electrons only occupy specific energy levels.

Atomic and molecular transitions between those levels absorb and/or release energy.

Energetic transitions have many causes: photon absorption, molecular collisions, atomic bond breaking/forming, etc.

Chemical energy powers most human endeavors: through coal, oil, gas, wind, hydroelectric, and solar power.

The most energy-efficient chemical reactions convert merely ~0.000001% of their mass into energy.

However, atomic nuclei offer superior options.

The nucleus contains 99.95% of an atom's mass, and the bonds between its protons and neutrons involve significantly greater energies.

Nuclear fission, for example, converts ~0.09% of the fissionable mass into pure energy.

Fusing hydrogen into helium achieves even greater efficiencies.

For every four protons that fuse into helium-4, ~0.7% of the initial mass is converted into energy.

Nuclear power universally outstrips electron transitions for energy efficiency.

Still, the atom's greatest source of energy is rest mass, extractable via Einstein's E = mc².

Matter-antimatter annihilation is 100% efficient, converting mass completely into energy.

Practically unlimited energy is locked within every atom; the key is to safely and reliably extract it.
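Plugging those efficiencies into E = mc² shows how steep the hierarchy is; the short calculation below uses the approximate percentages quoted above for one kilogram of fuel:

```python
# Energy released per kilogram of fuel: E = efficiency * m * c^2,
# using the approximate conversion efficiencies quoted above.
c = 2.998e8  # speed of light, m/s

efficiencies = {
    "chemical reaction": 1e-8,  # ~0.000001% of mass
    "nuclear fission":   9e-4,  # ~0.09%
    "hydrogen fusion":   7e-3,  # ~0.7%
    "matter-antimatter": 1.0,   # 100%
}

for process, eff in efficiencies.items():
    energy = eff * 1.0 * c**2  # joules per kilogram
    print(f"{process:>18}: {energy:.1e} J/kg")
```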

Mostly Mute Monday tells an astronomical story in images, visuals, and no more than 200 words. Talk less; smile more.



What's behind rising violent crime? Progressive prosecutors' non-enforcement of the law | TheHill – The Hill

Posted: at 9:59 am

Mutual combatants.

I was a prosecutor for nearly 20 years and taught criminal law. But I must admit, I'd missed that one and thus had to be, um, edified by the office of Kim Foxx, state's attorney for Cook County, Ill., and thus responsible for enforcing the law on Chicago's murder-ravaged streets.

There was a gang firefight in Chicago recently. Well, okay, there's always a gang firefight in Chicago. This one seemed run-of-the-mill at first. There is internecine strife, I'm sure you'll be stunned to hear, in the Four Corner Hustlers street gang. As a result, rival factions shot it out. Not in the dark of night, but in mid-morning, as though a brisk daylight gunfight were as routine as a jog along the river (provided, of course, that you wear your flak jacket and matching N95 mask). When the dust settled, one young man was dead and two were wounded.

Windy City police rounded up five of the gangbangers and brought them to prosecutors, expecting that each would be charged with commensurately serious felonies, including first-degree murder. The cops, however, were stunned when Foxx's office released the suspects without any charges.

Why? Did the shooting not really happen? Oh, it happened all right. But there's apparently a new law non-enforcement standard in Chicago: No indictments, even in a brutal crossfire between rival criminals, because those criminals are, yes, mutual combatants.

They're in gangs, get it? And if you're in a gang, this is what you sign up for. Next case.

You want to know why violent crime is surging in the nation's urban centers? Why, as the latest FBI statistics indicate, murder was up an astonishing 30 percent year-over-year in 2020, a record increase? Look no further than the Progressive Prosecutors Project (as I branded it in a March 2020 Commentary essay). This is the radical left's enterprise to reform the criminal justice system by pretending that we don't have criminals, or, to get so very nuanced about it, by assigning blame for all crime to our systemically racist society.

No, it is not the sociopaths committing the violence, the thinking goes. See, when you really consider it, it's each of us, right?

Imagine if today's prosecutors didn't rationalize this way. Let's say they weren't hypnotized under the spell of disparate-impact analysts, who insist that racism, not crime, explains America's prison population, which, not coincidentally, has plummeted as felonies have surged.

Well, then they'd have to wrestle with what, and who, is responsible for the bloodshed. Progressive prosecutors would have to come to grips with the stubborn fact they and their media cheerleaders strain to avoid: patterns of offending.

By the metric of percentage composition of the overall population, young Black males account for a disproportionate amount of the incarcerated population because, as a demographic class, they commit a disproportionate amount of the crime. They account for an overwhelming number of the gang arrests in cities such as Chicago because they are shooting at each other; that means they also account for more of the victims. And shooting each other is something they are certain to do more of, if progressive prosecutors keep coming up with creative ways to resist charging them.

These novelties include declining to invoke the anti-gang sentencing enhancement provisions. Though state legislatures enact these laws, prosecutors are effectively and imperiously repealing them because they disproportionately punish African Americans (as if defendants were being prosecuted for being African American rather than for committing murder and mayhem).

Now, evidently, the do-not-prosecute trick bag also includes the flat-out refusal to prosecute on a "mutual combatant" theory: the lunatic notion that killing and being killed is what gang members volunteer for, so who are we prosecutors to intrude?

Here's an interesting point: If arrests and prosecutions are explained by racism rather than criminal behavior, why rely on the statistics breaking down the races of prison inmates? Why not just say that police departments (even though many are run and heavily staffed by minority officers) are systematically racist, and therefore must be blinded by bias in making arrests?

Because progressive prosecutors know that is not how things work.

In most cases, police do not witness crimes or theorize about who the suspects are. Crimes have victims, and victims file reports identifying the perps. That's what police act on. And that's how we know, although progressive prosecutors are preoccupied by their perception of racism against the criminals (which may reflect their own ingrained bias), that it is African American communities that disproportionately bear the brunt of violent crime. Prosecutors like Kim Foxx exacerbate this tragedy when they invent reasons not to address it.

If you want to know why crime is up, you need to understand why it went down, dramatically, after the high-crime generation from the 1970s into the '90s.

Prosecutors and police back then grasped that crime rates were a function of expectations about the rule of law. When prosecutors set the tone by acting against quality-of-life crimes, it signaled to more serious criminals that the community's laws would be enforced. When serious crimes were committed, police were not told the cases would be dismissed; they were encouraged to conduct interrogations and follow-up investigations that improved law enforcement's intelligence data bank. That intelligence was carefully and continually studied so that police could be deployed in the places where crime trends were emerging. Order does not need to be re-established if you take pains not to lose it in the first place.

This is not quantum physics. Progressive prosecutors' dereliction of their duty invites more crime. Professional criminals are recidivists, and if they are repeatedly returned to the streets, rather than prosecuted and imprisoned, they commit lots more crime. The only way to stop it is to stop it. That means enforcing the law, even (or especially, I should say) against the "mutual combatants."

Former federal prosecutor Andrew C. McCarthy is a senior fellow at National Review Institute, a contributing editor at National Review, a Fox News contributor and the author of several books, including "Willful Blindness: A Memoir of the Jihad." Follow him on Twitter @AndrewCMcCarthy.



Life Is Simple Review: A Blade to Shave Away Error – The Wall Street Journal

Posted: October 9, 2021 at 7:30 am

If a friend tells you "I've seen a UFO!" what would you think? It might have been an alien spacecraft, or perhaps the friend was mistaken. The first possibility requires numerous unproven assumptions about extraterrestrial life; the second is consistent with what we know about human fallibility. The 14th-century Franciscan friar William of Occam was never troubled by flying saucers, but he did see the importance of eliminating unnecessary assumptions: the principle known as Occam's Razor. It forms the central theme of Johnjoe McFadden's "Life Is Simple," a tour through two millennia of scientific discovery.

Mr. McFadden is professor of molecular genetics at the University of Surrey in England. His interest in Occam was sparked by a daily commute that took him past the village of Ockham, where William was born, probably around 1287. Little is known about William's early life, but by his thirties his writings on philosophy and theology were widely read, and highly controversial.



Ruling electrons and vibrations in a crystal with polarized light – EurekAlert

Posted: at 7:29 am

[Image credit: Dr. Kazutaka Nakamura, Tokyo Institute of Technology]

The quantum behavior of atomic vibrations excited in a crystal using light pulses has much to do with the polarization of the pulses, say materials scientists from Tokyo Tech. The findings from their latest study offer a new control parameter for the manipulation of coherently excited vibrations in solid materials at the quantum level.

To the naked eye, solids may appear perfectly still, but in reality, their constituent atoms and molecules are anything but. They rotate and vibrate, respectively defining the so-called rotational and vibrational energy states of the system. As these atoms and molecules obey the rules of quantum physics, their rotation and vibration are, in fact, discretized, with a discrete quantum imagined as the smallest unit of such motion. For instance, the quantum of atomic vibration is a particle called phonon.

Atomic vibrations, and therefore phonons, can be generated in a solid by shining light on it. A common way to do this is by using ultrashort light pulses (pulses that are tens to hundreds of femtoseconds long) to excite and manipulate phonons, a technique known as coherent control. While the phonons are usually controlled by changing the relative phase between consecutive optical pulses, studies have revealed that light polarization can also influence the behavior of these optical phonons.

Dr. Kazutaka Nakamura's team at Tokyo Institute of Technology (Tokyo Tech) explored the coherent control of longitudinal optical (LO) phonons (i.e., phonons corresponding to longitudinal vibrations excited by light) on the surface of a GaAs (gallium arsenide) single crystal and observed quantum interference for both electrons and phonons for parallel polarization, while observing only phonon interference for mutually perpendicular polarization. "We developed a quantum mechanical model with classical light fields for the coherent control of the LO phonon amplitude and applied this to GaAs and diamond crystals. However, we did not study the effects of polarization correlation between the light pulses in sufficient detail," says Dr. Nakamura, Associate Professor at Tokyo Tech.

Accordingly, his team focused on this aspect in a new study published in Physical Review B. They modeled the generation of LO phonons in GaAs with two relative phase-locked pulses using a simplified band model and Raman scattering, the phenomenon underlying the phonon generation, and calculated the phonon amplitudes for different polarization conditions.

Their model predicted both electron and phonon interference for parallel-polarized pulses as expected, with no dependence on crystal orientation or the intensity ratio for allowed and forbidden Raman scattering. For perpendicularly polarized pulses, the model only predicted phonon interference at an angle of 45° from the [100] crystal direction. However, when one of the pulses was directed along [100], electron interference was excited by allowed Raman scattering.
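The essence of two-pulse coherent control can be caricatured in a few lines: each phase-locked pulse launches a phonon amplitude, and the second pulse adds a relative phase Ωτ set by the inter-pulse delay, so the phonon intensity oscillates between constructive and destructive values. The sketch below is a schematic illustration, not the paper's quantum mechanical model, and the ~8.8 THz GaAs LO phonon frequency is an approximate literature value:

```python
import numpy as np

omega = 2 * np.pi * 8.8e12           # GaAs LO phonon angular frequency (~8.8 THz)
delays = np.linspace(0, 400e-15, 9)  # inter-pulse delays from 0 to 400 fs

# Two impulsive excitations: total amplitude ~ 1 + exp(i * omega * tau).
amplitude = 1 + np.exp(1j * omega * delays)
intensity = np.abs(amplitude) ** 2   # 4 when constructive, 0 when destructive

for tau, inten in zip(delays, intensity):
    print(f"delay {tau * 1e15:5.1f} fs -> relative phonon intensity {inten:4.2f}")
```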

With such insights, the team looks forward to better coherent control of optical phonons in crystals. "Our study demonstrates that polarization plays quite an important role in the excitation and detection of coherent phonons and would be especially relevant for materials with asymmetric interaction modes, such as bismuth, which has more than two optical phonon modes and electronic states. Our findings are thus extendable to other materials," comments Nakamura.

Indeed, light has its ways of getting both materials and material scientists excited!


About Tokyo Institute of Technology

Tokyo Tech stands at the forefront of research and higher education as the leading university for science and technology in Japan. Tokyo Tech researchers excel in fields ranging from materials science to biology, computer science, and physics. Founded in 1881, Tokyo Tech hosts over 10,000 undergraduate and graduate students per year, who develop into scientific leaders and some of the most sought-after engineers in industry. Embodying the Japanese philosophy of monotsukuri, meaning technical ingenuity and innovation, the Tokyo Tech community strives to contribute to society through high-impact research.

https://www.titech.ac.jp/english/

The study, "Theory for coherent control of longitudinal optical phonons in GaAs using polarized optical pulses with relative phase locking," was published in Physical Review B on October 1, 2021.




A key part of the Big Bang remains troublingly elusive – Popular Science

Posted: at 7:29 am

It all started with a bang. During an unimaginably brief fraction of a second, the embryonic universe ballooned in size with unimaginable swiftness. In a flash, dimples of imperfection stretched into cosmic scars and locked in the universe we experience today, a milieu filled with galaxies, stars, planets, and humans.

The circumstantial evidence for this origin story, known as inflation, is overwhelming. It has inspired a generation of cosmologists to write papers, teach classes, and publish textbooks about the sundry ways inflation could have played out. And yet, a smoking gun remains elusive: Ancient ripples in spacetime should have left a particular imprint on the sky, but searches have repeatedly come up short.

A group of astronomers known as the BICEP/Keck collaboration leads the hunt for these primordial gravitational waves. On Monday the researchers released their latest results, the culmination of years of painstaking labor in one of the harshest places on Earth. Once more, they found no sign of their quarry. If an inflating universe reverberated with gravitational waves, as most cosmologists still fully expect it did, it must have done so in a rather subtle way.

"The simplest flavors, we are right now ruling out," says Clem Pryke, an astrophysicist at the University of Minnesota and member of the BICEP/Keck collaboration. "[This result] is killing previously very popular theories of inflation."


The oldest light in the universe has been streaming through space for more than 13 billion years, ever since the cosmos cooled enough to become transparent. Astronomers have precisely mapped this Cosmic Microwave Background (CMB) and used it to learn that the universe was, and has remained, strikingly uniform overall. The CMB indicates that when the universe was just 380,000 years old, it had nearly the same density of matter everywhere. And today, astronomers see galaxies in every direction.

But the CMB is ever so slightly clumpy, and it clumps in a special way. Dense and thin spots come in all sizes, from very small to very large. Today we see a related pattern, from single galaxies to giant mega clusters of them.

How did the universe get this way? Inflation winds back the clock even further, attempting to explain how lumps of all sizes developed during the cosmos's first 0.00000000000000000000000000000000001 of a second. During this period, the minuscule universe seethed with energy, and quantum theory ruled the day. In the quantum realm, nothing holds perfectly steady. Subatomic jitters continuously introduced tiny flaws into the inflating universe, tweaking the density of substances that would eventually become light, matter, dark matter, and more. The growing universe continuously stretched these blemishes, even as newer, smaller fluctuations kept appearing, resulting in blips of all sizes. Eventually, the CMB recorded the final product. "Inflation naturally produces lumpiness of exactly that type," Pryke says.

Or so the story goes. Inflation has become the leading theory of the birth of the cosmos because it explains exactly what astronomers see when they study the large-scale patterns formed by matter, dark matter, and more.

But one pattern has eluded them. The fabric of spacetime itself cannot hold perfectly still at the quantum scale, and inflation should have stretched those initial tremors into proper waves just as it did with matter and everything else. These primordial gravitational waves would have left faint fingerprints in the CMB, specific whorls in the light known as B-mode polarization. Astronomers have the capability to directly detect these whorls today, if the pattern is prominent enough, but have yet to find any.

Frustratingly, the universe is awash in materials that shine in a similarly whirly way. The BICEP team triumphantly announced the discovery of primordial gravitational waves in 2014, for instance, only to later learn that they had picked up the dim heat glow of dust grains streaming along the magnetic fields that fill the space between stars in the Milky Way.

The BICEP/Keck collaboration has now spent years refining their methods and building a series of telescopes at the south pole, where the crisp and arid air offers a crystal-clear view of the cosmos. Their newest results blend data from the last three generations of their Antarctic telescopes with other experiments.

For more than a decade, they have increased the number of sensors from dozens to thousands. And crucially, they have expanded the set of colors in which they observe, from one wavelength to three. Any B-mode swirls in the CMB, which fills the entire universe, should show up evenly in all wavelengths. Polarization that comes through more strongly at different wavelengths, however, can be blamed on local dust.

The key measure of how much inflation rattled the universe goes by the name tensor-to-scalar ratio, or r to those in the field. This single number describes how forcefully spacetime rippled compared to other fluctuations. An r of zero would imply that inflation didn't rock the fabric of the cosmos at all, suggesting that cosmology textbooks might need to rip out their first chapter.

BICEP/Keck observations have successively lowered the ceiling for primordial gravitational waves, showing that r should be smaller than 0.09 in 2016 and less than 0.07 in 2018. With the latest results, published in Physical Review Letters, the collaboration states with 95 percent confidence that r should be less than 0.036, a value that makes one commonly studied class of inflationary models impossible.

The shrinking limit for gravitational waves has obliged theorists to crouch lower and lower, but plenty of riffs on the general theme of inflation still fit comfortably below the new BICEP/Keck roof. The situation is getting cozy though, and if the limit falls below 0.01, many inflation researchers will start to sweat.

"It's pretty hard to get a value less than that in any basic textbook model of inflation," theorist Marc Kamionkowski of Johns Hopkins University told Physics Today in 2019.

[Related: Wait a second: What came before the big bang?]

The nearly one hundred members of the BICEP/Keck collaboration plan to reach that level of precision in a matter of years. They are currently building a new array of four telescopes, which should allow for a measurement of r up to three times as precise. By the end of the decade, a mega-collaboration between BICEP/Keck and other CMB teams, known as CMB-S4, should get a few times more sensitive still, limiting r to roughly 0.001.

Many cosmologists hope that primordial gravitational waves will show up in one of these ever-sharper images of the CMB, proving that theorists really do have a handle on the universe's initial bang. If not, the theory may languish in limbo a bit longer. It would take an r ten times lower still to cull most straggler inflationary models, and that would require experimentalists like Pryke to once again come up with even better ways to measure the nearly imperceptible ripples theorists forecast.

"From an experimental point of view, it just seems unobtainable," he says. "But when I got into the business 20 years ago, measuring B modes at all seemed ridiculous."

Originally posted here:

A key part of the Big Bang remains troublingly elusive - Popular Science

Posted in Quantum Physics | Comments Off on A key part of the Big Bang remains troublingly elusive – Popular Science

Observer effect (physics) – Wikipedia

Posted: October 7, 2021 at 3:50 pm


In physics, the observer effect is the disturbance of an observed system by the act of observation.[1][2] This is often the result of instruments that, by necessity, alter the state of what they measure in some manner. A common example is checking the pressure in an automobile tire; this is difficult to do without letting out some of the air, thus changing the pressure. Similarly, it is not possible to see any object without light hitting the object, and causing it to reflect that light. While the effects of observation are often negligible, the object still experiences a change. This effect can be found in many domains of physics, but can usually be reduced to insignificance by using different instruments or observation techniques.
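A back-of-the-envelope version of the tire example, with invented numbers: at fixed volume and temperature, the ideal-gas law (pV = NkT) makes absolute pressure proportional to the amount of gas, so whatever fraction of gas the gauge bleeds off lowers the pressure by the same fraction.

```python
# Toy tire-pressure measurement: the gauge removes a small fraction of
# the gas, so the reading describes a slightly changed tire.
p_abs = 320.0      # kPa, assumed absolute pressure before the measurement
bleed = 0.002      # assumed fraction of the gas lost to the gauge

# At fixed volume and temperature, p is proportional to the amount of gas.
p_after = p_abs * (1.0 - bleed)
print(f"before: {p_abs:.2f} kPa, after: {p_after:.2f} kPa")
```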

An especially unusual version of the observer effect occurs in quantum mechanics, as best demonstrated by the double-slit experiment. Physicists have found that observation of quantum phenomena can actually change the measured results of this experiment. Despite the "observer effect" in the double-slit experiment being caused by the presence of an electronic detector, the experiment's results have unfortunately been misinterpreted by some to suggest that a conscious mind can directly affect reality.[3] The need for the "observer" to be conscious is not supported by scientific research, and has been pointed out as a misconception rooted in a poor understanding of the quantum wave function and the quantum measurement process.[4][5][6]

An electron is detected upon interaction with a photon; this interaction will inevitably alter the velocity and momentum of that electron. It is possible for other, less direct means of measurement to affect the electron. It is also necessary to distinguish clearly between the measured value of a quantity and the value resulting from the measurement process. In particular, a measurement of momentum is non-repeatable in short intervals of time. A formula (one-dimensional for simplicity) relating the involved quantities, due to Niels Bohr (1928), is given by

Δp_x Δt ∼ ℏ / (v′_x − v_x),

where

Δp_x is the uncertainty in the measured value of the momentum,
Δt is the duration of the measurement,
v_x and v′_x are the velocities of the particle before and after the measurement, and
ℏ is the reduced Planck constant.

The measured momentum of the electron is then related to v_x, whereas its momentum after the measurement is related to v′_x. This is a best-case scenario.[7]
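For a feel for the magnitudes, here is a quick numerical reading of that relation. The momentum accuracy and measurement duration are arbitrary illustrative values, not numbers from the cited source.

```python
# Numerical illustration of Bohr's relation dp * dt ~ hbar / (v' - v):
# the more accurate and the faster a momentum measurement, the larger
# the unavoidable change in the electron's velocity.
hbar = 1.054571817e-34      # J*s, reduced Planck constant
m_e  = 9.1093837015e-31     # kg, electron mass

dp = 1e-27                  # assumed momentum accuracy, kg*m/s
dt = 1e-9                   # assumed duration of the measurement, s

dv = hbar / (dp * dt)       # minimum velocity change v' - v, m/s
print(f"velocity disturbance ~ {dv:.3e} m/s")
print(f"momentum disturbance ~ {m_e * dv:.3e} kg*m/s")
```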

In electronics, ammeters and voltmeters are usually wired in series or in parallel with the circuit, and so by their very presence affect the current or the voltage they are measuring by presenting an additional real or complex load to the circuit, thus changing the transfer function and behavior of the circuit itself. Even a more passive device such as a current clamp, which measures the wire current without coming into physical contact with the wire, affects the current through the circuit being measured, because the clamp and the conductor are coupled by mutual inductance.
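A concrete instance with invented component values: a voltmeter of finite input resistance placed across the lower leg of a resistive divider lowers the very voltage it reports.

```python
# Loading effect of a real voltmeter: measuring across R2 in a divider
# places the meter's input resistance Rm in parallel with R2, changing
# the voltage being measured. All component values are made up.
def divider(v_in, r1, r2):
    return v_in * r2 / (r1 + r2)

def parallel(a, b):
    return a * b / (a + b)

v_in, r1, r2 = 10.0, 100e3, 100e3   # 10 V source, 100k/100k divider
rm = 1e6                            # 1 Mohm meter input resistance

true_v = divider(v_in, r1, r2)                 # undisturbed voltage
meas_v = divider(v_in, r1, parallel(r2, rm))   # what the meter reads
print(f"true: {true_v:.3f} V, measured: {meas_v:.3f} V")
```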

In thermodynamics, a standard mercury-in-glass thermometer must absorb or give up some thermal energy to record a temperature, and therefore changes the temperature of the body which it is measuring.
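The size of that disturbance follows from an idealized heat balance, neglecting losses to the surroundings:

```latex
% A thermometer (heat capacity C_t, initial temperature T_t) brought into
% contact with a body (C_b, T_b) reads the common final temperature T_f,
% not the undisturbed T_b:
T_f = \frac{C_b T_b + C_t T_t}{C_b + C_t}
    = T_b - \frac{C_t}{C_b + C_t}\,(T_b - T_t)
```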

The theoretical foundation of the concept of measurement in quantum mechanics is a contentious issue deeply connected to the many interpretations of quantum mechanics. A key focus point is that of wave function collapse, for which several popular interpretations assert that measurement causes a discontinuous change into an eigenstate of the operator associated with the quantity that was measured, a change which is not time-reversible.

More explicitly, the superposition principle (ψ = Σₙ aₙψₙ) of quantum physics dictates that for a wave function ψ, a measurement will yield one of the m possible eigenvalues fₙ, n = 1, 2, ..., m, of the operator F̂, leaving the system in the corresponding eigenfunction ψₙ from the space of eigenfunctions of F̂.

Once one has measured the system, one knows its current state; and this prevents it from being in one of its other statesit has apparently decohered from them without prospects of future strong quantum interference.[8][9][10] This means that the type of measurement one performs on the system affects the end-state of the system.
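A minimal numerical sketch of such a projective measurement, for a generic three-level toy state; the basis and amplitudes are arbitrary choices for illustration.

```python
# Projective measurement in the eigenbasis of an observable F:
# the state psi = sum_n a_n psi_n collapses to one eigenfunction psi_n
# with probability |a_n|^2 (the Born rule).
import numpy as np

rng = np.random.default_rng()

psi = np.array([1.0, 1.0j, -0.5])       # unnormalized amplitudes a_n
psi = psi / np.linalg.norm(psi)         # normalize the state

probs = np.abs(psi) ** 2                # Born-rule probabilities |a_n|^2
n = rng.choice(len(psi), p=probs)       # measurement outcome index

post = np.zeros_like(psi)               # collapsed state: eigenstate
post[n] = 1.0                           # psi_n (up to phase)
print(f"outcome index {n}, probabilities {np.round(probs, 3)}")
```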

An experimentally studied situation related to this is the quantum Zeno effect, in which a quantum state that would decay if left alone does not decay under continuous observation. The dynamics of a quantum system under continuous observation are described by a quantum stochastic master equation known as the Belavkin equation.[11][12][13] Further studies have shown that even observing the results after the photon is produced leads to collapse of the wave function and the loading of a back-history, as shown by the delayed-choice quantum eraser.[14]
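The repeated-projective-measurement limit of the Zeno effect is easy to demonstrate numerically. The sketch below uses a toy two-level system whose "decay" is a coherent Rabi oscillation; it is not a solution of the Belavkin equation, and the frequency and interval are arbitrary.

```python
# Toy quantum Zeno effect: a two-level system coherently "decays" from
# state |0> with Rabi frequency omega. Measuring it N times during a
# fixed interval T resets the evolution each time; as N grows, the
# survival probability approaches 1 and the decay freezes.
import numpy as np

omega, T = 1.0, np.pi    # with no measurement, |0> fully decays at time T

for n_meas in [1, 2, 10, 100, 1000]:
    tau = T / n_meas                              # time between measurements
    p_step = np.cos(omega * tau / 2) ** 2         # survival per interval
    print(f"N = {n_meas:5d}: survival = {p_step ** n_meas:.4f}")
```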

When discussing the wave function ψ which describes the state of a system in quantum mechanics, one should be cautious of a common misconception that assumes that the wave function ψ amounts to the same thing as the physical object it describes. This flawed concept must then require existence of an external mechanism, such as a measuring instrument, that lies outside the principles governing the time evolution of the wave function ψ, in order to account for the so-called "collapse of the wave function" after a measurement has been performed. But the wave function ψ is not a physical object like, for example, an atom, which has an observable mass, charge and spin, as well as internal degrees of freedom. Instead, ψ is an abstract mathematical function that contains all the statistical information that an observer can obtain from measurements of a given system. In this case, there is no real mystery in that this mathematical form of the wave function ψ must change abruptly after a measurement has been performed.

A consequence of Bell's theorem is that measurement on one of two entangled particles can appear to have a nonlocal effect on the other particle. Additional problems related to decoherence arise when the observer is modeled as a quantum system, as well.

The uncertainty principle has been frequently confused with the observer effect, evidently even by its originator, Werner Heisenberg.[15] The uncertainty principle in its standard form describes how precisely we may measure the position and momentum of a particle at the same time: if we increase the precision in measuring one quantity, we are forced to lose precision in measuring the other.[16] An alternative version of the uncertainty principle,[17] more in the spirit of an observer effect,[18] fully accounts for the disturbance the observer has on a system and the error incurred, although this is not how the term "uncertainty principle" is most commonly used in practice.
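For contrast, the two statements can be written side by side. The first is the standard Kennard/Robertson preparation uncertainty; the second is Ozawa's error-disturbance inequality, the best-known formulation in the spirit of the "alternative version" mentioned above (that it is exactly the one cited here is an assumption).

```latex
% Preparation uncertainty: a bound on the spreads of position and
% momentum in any quantum state; it says nothing about the disturbance
% caused by a measurement:
\sigma_x \, \sigma_p \ \ge\ \frac{\hbar}{2}

% Error-disturbance relation in Ozawa's formulation, where
% \varepsilon_x is the error of a position measurement and \eta_p the
% disturbance it imparts to the momentum:
\varepsilon_x \eta_p \;+\; \varepsilon_x \sigma_p \;+\; \sigma_x \eta_p
\ \ge\ \frac{\hbar}{2}
```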

More here:

Observer effect (physics) - Wikipedia

Posted in Quantum Physics | Comments Off on Observer effect (physics) – Wikipedia

Physics – Simulating Quantum Particles on a Lattice

Posted: at 3:50 pm

September 2, 2021 • Physics 14, s111

A new quantum simulator uses microwave photons in a superconducting cavity to simulate particles on a lattice similar to those found in superconductors or atomic nuclei.

Image credit: Jimmy S. C. Hung/University of Waterloo

A quantum simulator is a limited-use quantum computer: a machine that can be programmed to replicate the behavior of a specific quantum system that is too complex to simulate using classical methods. Because of their comparative simplicity, many researchers believe that quantum simulators could deliver useful applications sooner than universal quantum computers will. With this goal in mind, Christopher Wilson of the University of Waterloo in Canada and colleagues have used a chip-based superconducting cavity to build a quantum simulator that can simulate quantum particles on a lattice [1]. Such particle-lattice systems can be used as models for the behavior of high-temperature superconductors or the particles inside an atomic nucleus.

The superconducting cavity demonstrated by Wilson and colleagues holds microwave radiation of specific frequencies, or modes, which are determined by the cavity's size. The researchers change the effective size of the cavity by delaying the propagation of photons at one end by a variable interval. When the cavity contains multiple microwave photons, tuning its effective length causes the various cavity modes to interact with each other.

The team used this setup to create a so-called bosonic Creutz ladder, a simple model of particles moving on a lattice of four nodes. In their implementation of the model, Wilson and colleagues engineered the cavity-mode interactions so that each mode of the cavity corresponded to a node on the lattice. They also showed that the quantum simulator can be programmed in situ by introducing microwaves of different frequencies into the cavity. The technique can be scaled up to simulate more complex quantum systems by placing multiple superconducting cavities on the chip.
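As a rough illustration of what such a four-node model looks like on paper, here is a generic single-particle Creutz-ladder Hamiltonian diagonalized numerically. The hopping amplitudes and the phase are arbitrary choices, not the parameters realized in the Waterloo cavity, where each mode plays the role of one lattice node.

```python
# Illustrative single-particle Creutz-ladder Hamiltonian on four sites
# (two rungs x two legs), diagonalized with numpy.
import numpy as np

t, phi = 1.0, np.pi / 2          # assumed hopping amplitude and leg phase
H = np.zeros((4, 4), dtype=complex)

# Site order: 0,1 = top leg (left, right); 2,3 = bottom leg (left, right).
H[0, 1] = t * np.exp(1j * phi)   # top-leg hop carries phase +phi
H[2, 3] = t * np.exp(-1j * phi)  # bottom-leg hop carries phase -phi
H[0, 2] = H[1, 3] = t            # vertical rungs
H[0, 3] = H[2, 1] = t            # diagonals
H = H + H.conj().T               # Hermitian completion

energies = np.linalg.eigvalsh(H)
print("single-particle energies:", np.round(energies, 3))
```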

Sophia Chen

Sophia Chen is a freelance science writer based in Columbus, Ohio.

[1] Jimmy S.C. Hung, J.H. Busnaina, C.W. Sandbo Chang, A.M. Vadiraj, I. Nsanzineza, E. Solano, H. Alaeian, E. Rico, and C.M. Wilson, Phys. Rev. Lett. 127, 100503 (2021). Published September 2, 2021.


The rest is here:

Physics - Simulating Quantum Particles on a Lattice

Posted in Quantum Physics | Comments Off on Physics – Simulating Quantum Particles on a Lattice
