
Category Archives: Quantum Physics

Physicists at CERN Just Discovered a Brand New Particle – Interesting Engineering

Posted: July 29, 2021 at 8:40 pm

In quantum physics, one breakthrough can quickly lead to several more.

This could happen in the wake of a brand new particle recently discovered by scientists working on the LHCb experiment at the Large Hadron Collider. The particle, called Tcc+ and dubbed a tetraquark, was announced at the European Physical Society Conference on High Energy Physics (EPS-HEP). It is an exotic hadron composed of two quarks and two antiquarks.

Crucially, this exotic matter particle lives longer than any other exotic hadron ever discovered, and, in another first, it contains two heavy quarks and two light antiquarks.

All matter is composed of fundamental building blocks, called quarks, which can bind together to form hadrons. These include baryons, such as the neutron and proton of conventional atomic theory, which contain three quarks, and mesons, which come into being as quark-antiquark pairs. In the last several years, numerous "exotic" hadrons, particles dubbed as such because they possess four or five quarks (instead of the two or three that is more usual), were discovered. But the recent study has revealed the existence of an especially distinguished exotic hadron, or super-exotic hadron, if you can believe it.

This exceptionally unusual hadron contains two charm quarks, in addition to an up and a down antiquark. In recent years, multiple tetraquarks have been discovered, one of which had two charm quarks and two charm antiquarks. But the newly discovered one has two charm quarks without the two charm antiquarks that previously discovered hadrons had. Called "open charm", or "double open charm", these particles differ from hadrons with an equal balance of quarks and antiquarks, whose charm content cancels out (like a zero-sum game). But in the case of the new "super" exotic hadron (the "super" is not official), the charm number adds up to two, according to a Phys.org report.

But there's more to this Tcc+ super-exotic hadron than charm. It is also the first particle discovered from a category of tetraquarks containing a pair of heavy quarks and a pair of light antiquarks. Particles in this class decay by transforming into a pair of mesons, each formed from one of the heavy quarks and one of the light antiquarks. Some theoretical predictions put the mass of tetraquarks of this kind near the sum of the masses of those two mesons. In other words, the masses are very close, which leaves little energy to drive the decay. That extends the lifetime of the particle compared with other exotic hadrons, which is why Tcc+ is the longest-lived exotic hadron ever discovered.
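To put numbers on "the masses are very close", here is a rough Python sketch. The meson and tetraquark masses below are approximate values of the kind listed in particle-data tables, and the choice of the D0 and D*+ as the meson pair is an illustrative assumption; none of the figures are taken from the LHCb announcement itself.

# Why a mass sitting at the two-meson threshold stretches the lifetime.
# Masses in MeV/c^2; approximate values, for illustration only.
M_D0 = 1864.8          # lightest charm meson (charm quark + up antiquark)
M_DSTAR_PLUS = 2010.3  # excited charm meson (charm quark + down antiquark)
M_TCC = 3874.8         # reported Tcc+ mass, roughly

threshold = M_D0 + M_DSTAR_PLUS
energy_release = M_TCC - threshold   # energy available to drive the decay

print(f"two-meson threshold: {threshold:.1f} MeV")
print(f"energy release:      {energy_release:.1f} MeV")
# A release this close to zero (slightly negative with these rounded inputs)
# means the decay barely has any phase space, so the particle survives far
# longer than typical exotic hadrons, whose releases are tens or hundreds
# of MeV.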

Quantum theory is famously difficult to parse, but this discovery will open the door to the discovery of even more novel particles of this class: heavier ones, in which one or both of the charm quarks are replaced with bottom quarks. The theorized particle with two bottom quarks should have a mass smaller than the sum of the masses of any pair of B mesons, which, in simpler terms, means decay will be extremely difficult: lacking the ability to decay via the strong interaction, such particles would have lifetimes several orders of magnitude longer than that of any exotic hadron observed before. Finally, the new measurement pins down the mass of the Tcc+ with exceptional precision and enables further studies of the particle's quantum numbers. With these, physicists will finally be able to observe effects at the quantum level that no one has successfully studied before.

Go here to read the rest:

Physicists at CERN Just Discovered a Brand New Particle - Interesting Engineering


Ohio State joins national initiative to accelerate innovation in quantum technology – The Ohio State University News

Posted: at 8:40 pm

The Ohio State University has joined the Chicago Quantum Exchange, a growing intellectual hub for the research and development of quantum technology.

The exchange, based at the University of Chicago's Pritzker School of Molecular Engineering, announced the addition today of Ohio State and the Weizmann Institute of Science as partners, referring to both as "world-leading research institutions at the forefront of quantum information science and engineering."

"Quantum information technology presents unique opportunities for students and researchers to engage in curiosity-driven and cutting-edge work that solves the problems people face in their everyday lives," said Ohio State President Kristina M. Johnson. "As a result of this partnership with CQE, Ohio State faculty and students will have the opportunity to learn alongside brilliant collaborators and make a real-world and far-reaching impact."

Ohio State is the Chicago Quantum Exchange's first regional partner, strengthening the organization's connections throughout the Midwest and the nation. As the lead member institution in the multi-institutional quantum education initiative QuSTEAM, the university is dedicated to preparing a quantum-ready workforce that can meet the existing and growing demand across the communications, optics, computing and materials industries.

The exchange is composed of a community of researchers aiming to accelerate discovery and innovation in quantum technology and develop new ways of understanding the laws of quantum mechanics, the theory that governs nature at its smallest scales. Anchored by the University of Chicago, Argonne National Laboratory, Fermi National Accelerator Laboratory and the University of Illinois at Urbana-Champaign, CQE also includes the University of Wisconsin-Madison and Northwestern University as well as a range of industry partners.

"Having partners across the world, and across the Midwest, broadens our perspectives as we continue to grow our community from the heart of U.S. quantum research in Chicago," said David Awschalom, the Liew Family Professor in Molecular Engineering and Physics at the University of Chicago and director of the Chicago Quantum Exchange. "We look forward to collaborating with Ohio State and the Weizmann Institute to advance quantum science and technology and develop a strong, diverse quantum workforce."

In addition to advancing research in multiple quantum and physics areas as well as such disciplines as nanomechanics and physical chemistry, the exchange seeks to attract talent, funding and industry to the Chicago area to become the source for tomorrow's leading quantum engineers.

"Working with leaders at Ohio State University and the Weizmann Institute has reinforced for us the deep value of global collaboration on quantum science and technology," said Juan de Pablo, vice president for national laboratories, science strategy, innovation and global initiatives at the University of Chicago. "Quantum information science is poised to make a profound impact on research, technology and business growth around the globe, and we are excited to continue advancing that work with some of the world's great research organizations."

View post:

Ohio State joins national initiative to accelerate innovation in quantum technology - The Ohio State University News


Charm Meson Particle | What Is Antimatter? – Popular Mechanics

Posted: at 8:40 pm

A quirky type of subatomic particle known as the charm meson has the seemingly magical ability to switch states between matter and antimatter (and back again), according to the team of over 1,000 physicists who were involved in documenting the phenomenon for the first time.

Oxford researchers, using data from the second run of the Large Hadron Collider (LHC), a particle accelerator at the Switzerland-based European Organization for Nuclear Research (known internationally as CERN), made the determination by taking extremely precise measurements of the masses of two particles: the charm meson in both its particle and antiparticle states.

Yes, this breakthrough in quantum physics is as heady as it sounds. A charm meson particle, after all, can exist in a state where it is both itself and its evil twin (the antiparticle version) at once. This state is known as "quantum superposition," and it's at the heart of the famous Schrödinger's Cat thought experiment.

As a result of this situation, the charm meson exists as two distinct particles with two distinct masses. But the difference between the two is infinitesimally small: on the order of 1 x 10^-38 grams, according to the scientists' research, described in a new paper published last month on the arXiv preprint server (that means the work hasn't been peer-reviewed yet). They've recently submitted the work for publication in the journal Physical Review Letters.

While the findings are basically the definition of minuscule, the ramifications are anything but; the physicists say the charm meson particle's ability to exist as both itself and its alter-ego could shake up our assumptions about the very nature of reality.

To understand what's going on here, we first have to unpack the meson particle. These are extremely short-lived subatomic particles with a balanced number of quarks and antiquarks. In case you skipped that lecture in quantum physics, quarks are particles that combine together to form "hadrons," some of which are protons and neutrons, the basic components of atomic nuclei.


There are six "flavors" of quark: up, down, charm, strange, top, and bottom. Each also has an antiparticle, called an antiquark. Quarks and antiquarks vary because they have different properties, like electrical charge of equal magnitude but opposite sign.

Back to mesons: They're almost the size of neutrons or protons, but are extremely unstable. So, they're uncommon in nature itself, but physicists are interested in studying them in artificial environments (like in the LHC) because they want to better understand quarks. That's because, along with leptons, quarks make up all known matter.

A charm meson can travel as a mixture of its particle and antiparticle states (a phenomenon appropriately called "mixing"). Physicists have known that for over a decade, but the new research shows for the first time that charm mesons can actually oscillate back and forth between the two states.

Antiquarks are the opposite of quarks and are considered a type of antimatter. These particles can cancel out normal matter, which is kind of a problem if you want the universe to, well, exist. The various kinds of antimatter are almost all named using the anti- prefix, like quark versus antiquark. More specifically, a charm meson typically has a charm quark and an up antiquark, and its anti- partner has a charm antiquark and an up quark.

It's important to note the charm meson is not the only particle that can oscillate between matter and antimatter states. Physicists have previously observed three other particles in the Standard Model (the theory that explains particle physics) doing so. That includes strange mesons, beauty mesons, and strange-beauty mesons.

Why was the charm meson a holdout for so long? The charm meson oscillates incredibly slowly, meaning physicists had to take measurements at an extremely fine degree of detail. In fact, most charm mesons will fully decay before a complete oscillation can even take place, like an aging person with a very slow-growing tumor.


The large-scale undertaking that produced the charm meson data is called the Large Hadron Collider beauty experiment. It seeks to examine why we live in a world full of matter, but seemingly no antimatter, according to CERN.

Using a vast amount of data from the charm mesons generated at the LHC, the scientists measured particles to a difference of 1 x 10^-38 grams. With that unbelievably fine-toothed comb, they were able to observe the superposition oscillation of the charm mesons.
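For a sense of scale, that 1 x 10^-38 grams can be converted into an energy splitting and an oscillation time with a few lines of arithmetic. The Python sketch below does exactly that; the charm-meson lifetime of roughly 4 x 10^-13 s used at the end is an assumed textbook figure, not a number from the Oxford analysis.

import math

delta_m_kg = 1e-38 * 1e-3       # 1e-38 grams expressed in kilograms
c = 2.998e8                     # speed of light, m/s
hbar = 1.055e-34                # reduced Planck constant, J*s
joules_per_ev = 1.602e-19

delta_e = delta_m_kg * c**2     # E = m*c^2
print(f"energy splitting:    {delta_e / joules_per_ev:.1e} eV")   # ~5.6e-6 eV

# The splitting sets how fast the particle/antiparticle superposition
# oscillates: angular frequency omega = delta_E / hbar.
omega = delta_e / hbar
period = 2 * math.pi / omega
print(f"oscillation period:  {period:.1e} s")                     # ~7e-10 s

# A charm meson typically decays after roughly 4e-13 s (assumed here), more
# than a thousand times sooner than one full oscillation, which is why the
# effect is so hard to catch.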

How did scientists measure this incredibly tiny difference in mass? In short, the LHC regularly produces mesons of all kinds as part of its scientists' work.

"Charm mesons are produced at the LHC in proton-proton collisions, and normally they only travel a few millimeters before they decay into other particles," according to a University of Oxford press release. "By comparing the charm mesons that tend to travel further versus those that decay sooner, the team identified differences in mass as the main factor that drives whether a charm meson turns into an anti-charm meson or not."


Now that scientists have finally observed charm meson oscillation, they're ready to open up a whole new can of worms in their experimentation, hoping to unearth the mysteries of the oscillation process itself.

That path of study could lead to a new understanding about how our world began in the first place. Per the Standard Model of particle physics, the Big Bang should have produced matter and antimatter in equal parts. Thankfully, that didn't happen, because if it had, all of the antimatter particles would have collided with the matter particles, destroying everything.

Clearly, physicists say, there is an imbalance in matter and antimatter collisions in our world, and the answer to that mystery could lie in the incomprehensibly small oscillations of particles like the charm meson. Now, scientists want to understand if the rate of transition from particle to antiparticle is the same as the rate of transition from antiparticle to particle.

Depending on what they find, our very conceptions of how we exist (why we live in a world full of matter rather than antimatter, and how we got here) could change forever.


Read more:

Charm Meson Particle | What Is Antimatter? - Popular Mechanics


Quantum Cash and the End of Counterfeiting – IEEE Spectrum

Posted: at 8:40 pm


Since the invention of paper money, counterfeiters have churned out fake bills. Some of their handiwork, created with high-tech inks, papers, and printing presses, is so good that it's very difficult to distinguish from the real thing. National banks combat the counterfeiters with difficult-to-copy watermarks, holograms, and other sophisticated measures. But to give money the ultimate protection, some quantum physicists are turning to the weird quirks that govern nature's fundamental particles.

At the moment, the idea of quantum money is very much on the drawing board. That hasn't stopped researchers from pondering what encryption schemes they might apply for it, or from wondering how the technologies used to create quantum states could be shrunk down to the point of fitting it in your wallet, says Scott Aaronson, an MIT computer scientist who works on quantum money. "This is science fiction, but it's science fiction that doesn't violate any of the known laws of physics."

The laws that govern subatomic particles differ dramatically from those governing everyday experience. The relevant quantum law here is the no-cloning theorem, which says it is impossible to copy a quantum particle's state exactly. That's because reproducing a particle's state involves making measurements, and the measurements change the particle's overall properties. In certain cases, where you already know something about the state in question, quantum mechanics does allow you to measure one attribute of a particle. But in doing so you've made it impossible to measure the particle's other attributes.
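The no-cloning theorem follows directly from the linearity of quantum mechanics. The short argument below is the standard textbook one, included here for completeness rather than taken from the article. Suppose a copying operation $U$ existed with $U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle$ for every state $|\psi\rangle$. Applying it to the two basis states gives $U|0\rangle|0\rangle = |0\rangle|0\rangle$ and $U|1\rangle|0\rangle = |1\rangle|1\rangle$. Linearity then forces, for $|+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$,

$$U|+\rangle|0\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle|0\rangle + |1\rangle|1\rangle\right),$$

whereas a faithful copy would be

$$|+\rangle|+\rangle = \tfrac{1}{2}\left(|0\rangle|0\rangle + |0\rangle|1\rangle + |1\rangle|0\rangle + |1\rangle|1\rangle\right).$$

The two results differ, so no such $U$ can exist: an unknown quantum state cannot be duplicated.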

This rule implies that if you use money that is somehow linked to a quantum particle, you could, in principle, make it impossible to copy: It would be counterfeit-proof.

The visionary physicist Stephen Wiesner came up with the idea of quantum money in 1969. He suggested that banks somehow insert a hundred or so photons, the quantum particles of light, into each banknote. He didn't have any clear idea of how to do that, nor do physicists today, but never mind. It's still an intriguing notion, because the issuing bank could then create a kind of minuscule secret watermark by polarizing the photons in a special way.

To validate the note later, the bank would check just one attribute of each photon (for example, its vertical or horizontal polarization), leaving all other attributes unmeasured. The bank could then verify the note's authenticity by checking its records for how the photons were set originally for this particular bill, which the bank could look up using the bill's printed serial number.

Thanks to the no-cloning theorem, a counterfeiter couldn't measure all the attributes of each photon to produce a copy. Nor could he just measure the one attribute that mattered for each photon, because only the bank would know which attributes those were.
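To see how heavily the odds favor the bank, here is a small toy simulation in Python of Wiesner's idea. The two bases stand in for the two polarization settings, the helper names (mint_note, counterfeit, bank_accepts) are invented for this sketch, and the 100 photons follow the article's "hundred or so"; nothing here describes a real banking protocol.

import random

N_PHOTONS = 100  # the article's "hundred or so" photons per note

def mint_note():
    # The bank secretly records, for each photon, which basis it used
    # (0 = rectilinear, 1 = diagonal) and which of two states it prepared.
    bases = [random.randint(0, 1) for _ in range(N_PHOTONS)]
    bits = [random.randint(0, 1) for _ in range(N_PHOTONS)]
    return bases, bits

def measure(prep_basis, prep_bit, meas_basis):
    # Measuring in the preparation basis returns the prepared bit;
    # measuring in the conjugate basis returns a coin flip.
    return prep_bit if meas_basis == prep_basis else random.randint(0, 1)

def counterfeit(bases, bits):
    # A forger does not know the bank's bases, so he guesses one per photon,
    # measures, and prepares his copy from whatever outcomes he gets.
    fake_bases = [random.randint(0, 1) for _ in range(N_PHOTONS)]
    fake_bits = [measure(b, v, g) for b, v, g in zip(bases, bits, fake_bases)]
    return fake_bases, fake_bits

def bank_accepts(bank_bases, bank_bits, note_bases, note_bits):
    # Verification: the bank measures each photon in its recorded basis and
    # demands that every outcome match its records.
    return all(measure(nb, nv, b) == v for b, v, nb, nv
               in zip(bank_bases, bank_bits, note_bases, note_bits))

random.seed(1)
record = mint_note()
print(bank_accepts(*record, *record))        # True: the genuine note passes
forgeries_passed = sum(bank_accepts(*record, *counterfeit(*record))
                       for _ in range(1000))
print(forgeries_passed, "of 1000 forged notes passed")   # essentially always 0

Each photon of a copied note passes the bank's check with probability of about three quarters, so a 100-photon forgery survives with probability (3/4)^100, which is around 10^-13.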

But beyond the daunting engineering challenge of storing photons, or any other quantum particles, there's another basic problem with this scheme: It's a private encryption. Only the issuing bank could validate the notes. "The ideal is quantum money that anyone can verify," Aaronson says, just the way every store clerk in the United States can hold a $20 bill up to the light to look for the embedded plastic strip.

That would require some form of public encryption, and every such scheme researchers have created so far is potentially crackable. But it's still worth exploring how that might work. Verification between two people would involve some kind of black box: a machine that checks the status of a piece of quantum money and spits out only the answer "valid" or "invalid." Most of the proposed public-verification schemes are built on some sort of mathematical relationship between a bank note's quantum states and its serial number, so the verification machine would use an algorithm to check the math. This verifier, and the algorithm it follows, must be designed so that even if they were to fall into the hands of a counterfeiter, he couldn't use them to create fakes.

As fast as quantum money researchers have proposed encryption schemes, their colleagues have cracked them, but it's clear that everyone's having a great deal of fun. Most recently, Aaronson and his MIT collaborator Paul Christiano put forth a proposal [PDF] in which each banknote's serial number is linked to a large number of quantum particles, which are bound together using a quantum trick known as entanglement.

All of this is pie in the sky, of course, until engineers can create physical systems capable of retaining quantum states within money, and that will perhaps be the biggest challenge of all. Running a quantum economy would require people to hold information encoded in the polarization of photons or the spin of electrons, say, for as long as they required cash to sit in their pockets. But quantum states are notoriously fragile: They decohere and lose their quantum properties after frustratingly short intervals of time. "You'd have to prevent it from decohering in your wallet," Aaronson says.

For many researchers, that makes quantum money even more remote than useful quantum computers. "At present, it's hard to imagine having practical quantum money before having a large-scale quantum computer," says Michele Mosca of the Institute for Quantum Computing at the University of Waterloo, in Canada. And these superfast computers must also overcome the decoherence problem before they become feasible.

If engineers ever do succeed in building practical quantum computers, ones that can send information through fiber-optic networks in the form of encoded photons, quantum money might really have its day. On this quantum Internet, financial transactions would not only be secure, they would be so ephemeral that once the photons had been measured, there would be no trace of their existence. In today's age of digital cash, we have already relieved ourselves of the age-old burden of carrying around heavy metal coins or even wads of banknotes. With quantum money, our pockets and purses might finally be truly empty.

Michael Brooks, a British science journalist, holds a Ph.D. in quantum physics from the University of Sussex, which prepared him well to tackle the article "Quantum Cash and the End of Counterfeiting." He says he found the topic of quantum money "absolutely fascinating," and adds, "I just hope I get to use some in my lifetime." He is the author, most recently, of Free Radicals: The Secret Anarchy of Science (Profile Books, 2011).

Here is the original post:

Quantum Cash and the End of Counterfeiting - IEEE Spectrum


In search of nature's laws: Steven Weinberg died on July 23rd – The Economist

Posted: at 8:40 pm

Jul 31st 2021

AS HE LIKED to tell it, there were three epiphanies in Steven Weinberg's life. The first came in a wooden box. It was a chemistry set, passed on by a cousin who was tired of it. As he played with the chemicals in it, and found that each reacted differently because of atoms, a vast thought struck him: if he learned about atoms, he would know how the whole world worked.


The second epiphany came when, as a teenager, he paid a routine visit to his local library in New York. On the table was a book called Heat, open to a page of equations. Among them was the elegant, unknown swirl of an integral sign. It showed that with a mathematical formula, and a magic symbol, science could express something as rudimentary as the glow of a candle flame. His third awakening, when he was in his 20s and already a professor of physics, was the discovery that a mathematical theory could be applied to the whole dazzling array of stars and planets, dark space beyond them and, he concluded, everything.

All regularities in nature followed from a few simple laws. Not all were known yet; but they would be. In the end he was sure they would combine into a set of equations simple enough to put on a T-shirt, like Einstein's E=mc². It was just a matter of continually querying and searching. In the strange circumstance of finding himself conscious and intelligent on a rare patch of ordinary matter that was able to sustain life, doggedly asking questions was the least he could do.

His signal achievement was to discover, in the 1960s, a new level of simplicity in the universe. There were then four known universal forces: gravity and electromagnetism, both of which operate at large scales, and the strong and weak nuclear forces, both of which are appreciable only at small scales. Electromagnetism was explained by a quantum field theory; similar theories for the nuclear forces were eagerly being sought.

In quantum field theories, forces are mediated by particles called bosons; the boson involved in electromagnetism is the photon, the basic particle of light. He and others showed that a theory of the weak force required three bosons: the W+ and the W-, which carried electric charges, and the Z0, which did not. The W particles were at play in the observable universe; they were responsible for some sorts of radioactive decay. The Z was notional until, in 1973, researchers at CERN, Europe's great particle-physics lab, observed neutral currents between the particles they were knocking together. These had never been seen before, and could be explained only by the Z. In 1979 the Nobel prize duly followed.

In his understated way, he called his contribution "very satisfactory". It was not just that the weak force and the electromagnetic force could be explained by similar tools. At high energies they were basically the same thing.

That triumph of unification increased his curiosity about the only point where such high energies were known to have existed: the Big Bang. In his book The First Three Minutes, in 1977, he described the immediate aftermath, to the point where the hyper-hot cosmic soup had cooled enough for atomic nuclei to form. He saw early on how deeply particle physics and cosmology were intertwined, and became fascinated by the idea of a universe dominated by unobservable dark energy and dark matter in which ordinary matter (the stars and the planets and us) was merely a small contamination. He longed for CERN's Large Hadron Collider to find evidence of dark matter. It caused him lasting frustration that Congress in 1993 had cancelled the Superconducting Super Collider, which was to have been even bigger.

Whatever was found, he was sure it would fit into the simple scheme of natures laws. Quantum mechanics, however, troubled him. He worried that its determinism implied that the world was endlessly splitting, generating myriad parallel histories and universes in which the constants in nature would have different values. Goodbye to a unified theory of everything, if that were so.

Such a unified law would have given him satisfaction but, he knew, no comfort. Nature's laws were impersonal, cold and devoid of purpose. Certainly there was no God-directed plan. As he wrote at the end of The First Three Minutes, the more the universe seemed comprehensible, the more it seemed pointless. No saying of his became more famous, but the next paragraph softened it: humans gave the universe their own point and purpose by the way they lived, by loving each other and by creating art.

He set the example by marrying Louise, his college sweetheart, devouring opera and theatre, revelling in the quirky liberalism of Austin, where he taught at the University of Texas for almost four decades, and looking for theories in physics that would carry the same sense of inevitability he found so beautiful in chamber music, or in poetry. He still thought of human existence as accidental and tragic, fundamentally. But from his own little island of warmth and love, art and science, he managed a wry smile.

What angered him most was the persistence of religion. It had not only obstructed and undermined science in the age of Galileo and Copernicus; it had also survived Darwin, whose theory of evolution had shocked it more sharply than anything physics did. And it was still there, an alternative theory of the world that corroded free inquiry. For even if the laws of nature could be reduced to one, scientists would still ask: Why? Why this theory, not another? Why in this universe, and not another?

There was, he reflected, no end to the chain of whys. So he did not stop asking or wondering. He liked to review and grade his predecessors, from the ancient Greeks onwards, chastising them for failing to use the data they had, but also sympathising with their lack of machines advanced enough to prove their ideas. The human tragedy was never to understand why things were as they were. Yet, for all that, he could echo Ptolemy: "I know that I am mortal and the creature of a day, but when I search out the massed wheeling circles of the stars, my feet no longer touch the Earth; I take my fill of ambrosia, the food of the gods."

This article appeared in the Obituary section of the print edition under the headline "Nature's laws"

More here:

In search of nature's laws Steven Weinberg died on July 23rd - The Economist


July: Superconductivity in cuprates | News and features – University of Bristol

Posted: at 8:40 pm

Researchers from the University of Bristol's School of Physics used some of Europe's strongest continuous magnetic fields to uncover evidence of exotic charge carriers in the metallic state of copper-oxide high-temperature superconductors.

Their results have been published this week in Nature. In a related publication in SciPost Physics last week, the team postulated that it is these exotic charge carriers that form the superconducting pairs, in marked contrast with expectations from conventional theory.

Conventional superconductivity

Superconductivity is a fascinating phenomenon in which, below a so-called critical temperature, a material loses all its resistance to electrical currents. In certain materials, at low temperatures, all electrons are entangled in a single, macroscopic quantum state, meaning that they no longer behave as individual particles but as a collective resulting in superconductivity. The general theory for this collective electron behaviour has been known for a long time, but one family of materials, the cuprates, refuses to conform to the paradigm. They also possess the highest ambient-pressure superconducting transition temperatures known to exist. It was long thought that for these materials the mechanism that glues together the electrons must be special, but recently the attention has shifted and now physicists investigate the non-superconducting states of cuprates, hoping to find clues to the origin of high-temperature superconductivity and its distinction from normal superconductors.

High-temperature superconductivity

Most superconductors, when heated to exceed their critical temperature, change into ordinary metals. The quantum entanglement that causes the collective behaviour of the electrons fades away, and the electrons start to behave like an ordinary gas of charged particles.

Cuprates are special, however. Firstly, as mentioned above, because their critical temperature is considerably higher than that of other superconductors. Secondly, they have very special measurable properties even in their metallic phase. In 2009, physicist Prof Nigel Hussey and collaborators observed experimentally that the electrons in these materials form a new type of structure, different from that in ordinary metals, thereby establishing a new paradigm that scientists now call the strange metal. Specifically, the resistivity at low temperatures was found to be proportional to temperature, not at a singular point in the temperature versus doping phase diagram (as expected for a metal close to a magnetic quantum critical point) but over an extended range of doping. This extended criticality became a defining feature of the strange metal phase from which superconductivity emerges in the cuprates.

Magnetoresistance in a strange metal

In the first of these new reports, EPSRC Doctoral Prize Fellow Jakes Ayres and PhD student Maarten Berben (based at HFML-FELIX in Nijmegen, the Netherlands) studied the magnetoresistance (the change in resistivity in a magnetic field) and discovered something unexpected. In contrast to the response of usual metals, the magnetoresistance followed a peculiar form in which magnetic field and temperature appear in quadrature. Such behaviour had only been observed previously at a singular quantum critical point, but here, as with the zero-field resistivity, the quadrature form of the magnetoresistance was observed over an extended range of doping. Moreover, the strength of the magnetoresistance was found to be two orders of magnitude larger than expected from conventional orbital motion and insensitive to the level of disorder in the material as well as to the direction of the magnetic field relative to the electrical current. These features in the data, coupled with the quadrature scaling, implied that the origin of this unusual magnetoresistance was not the coherent orbital motion of conventional metallic carriers, but rather a non-orbital, incoherent motion from a different type of carrier whose energy was being dissipated at the maximal rate allowed by quantum mechanics.
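Two phrases here have compact mathematical content: "in quadrature" means field and temperature enter through a single Pythagorean combination, and the "maximal rate allowed by quantum mechanics" refers to a (Planckian) scattering time of order hbar divided by k_B T. The Python sketch below evaluates both; the prefactors and the chosen temperatures and fields are arbitrary illustrative numbers, not values extracted in the papers.

import math

K_B = 1.381e-23      # Boltzmann constant, J/K
MU_B = 9.274e-24     # Bohr magneton, J/T
HBAR = 1.055e-34     # reduced Planck constant, J*s
a, b = 1.0, 1.0      # dimensionless prefactors, arbitrary here

def quadrature_scale(temperature_k, field_t):
    # "In quadrature": the resistivity tracks sqrt((a*k_B*T)^2 + (b*mu_B*B)^2).
    return math.hypot(a * K_B * temperature_k, b * MU_B * field_t)

# Two limits follow directly from that form:
print(quadrature_scale(50, 0) / quadrature_scale(25, 0))     # ~2: T-linear at zero field
print(quadrature_scale(0.1, 60) / quadrature_scale(0.1, 30)) # ~2: B-linear at low temperature

def planckian_time_s(temperature_k):
    # "Maximal dissipation": scattering time of order hbar / (k_B * T).
    return HBAR / (K_B * temperature_k)

print(f"Planckian scattering time at 100 K: {planckian_time_s(100):.1e} s")  # ~8e-14 s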

From maximal to minimal dissipation

Prof Hussey said: "Taking into account earlier Hall effect measurements, we had compelling evidence for two distinct carrier types in cuprates - one conventional, the other strange. The key question then was which type was responsible for high-temperature superconductivity? Our team, led by Matija Čulo and Caitlin Duffy, then compared the evolution of the density of conventional carriers in the normal state and the pair density in the superconducting state and came to a fascinating conclusion: that the superconducting state in cuprates is in fact composed of those exotic carriers that undergo such maximal dissipation in the metallic state. This is a far cry from the original theory of superconductivity and suggests that an entirely new paradigm is needed, one in which the strange metal takes centre stage."

Paper:

'Incoherent transport across the strange-metal regime of overdoped cuprates' in Nature by Nigel Hussey et al.

Go here to see the original:

July: Superconductivity in cuprates | News and features - University of Bristol


The Quest for the Spin Transistor – IEEE Spectrum

Posted: at 8:40 pm

From the earliest batteries through vacuum tubes, solid state, and integrated circuits, electronics has staved off stagnation. Engineers and scientists have remade it repeatedly, vaulting it over one hurdle after another to keep alive a record of innovation unmatched in industrial history.

It is a spectacular and diverse account through which runs a common theme. When a galvanic pile twitches a frog's leg, when a triode amplifies a signal, or when a microprocessor stores a bit in a random access memory, the same agent is at work: the movement of electric charge. Engineers are far from exhausting the possibilities of this magnificent mechanism. But even if a dead end is not yet visible, the foreseeable hurdles are high enough to set some searching for the physics that will carry electronics on to its next stage. In so doing, it could help up the ante in the semiconductor stakes, ushering in such marvels as nonvolatile memories with enormous capacity, ultrafast logic devices that can change function on the fly, and maybe even processors powerful enough to begin to rival biological brains.

A growing band of experimenters think they have seen the future of electronics, and it is spin. This fundamental yet elusive property of electrons and other subatomic particles underlies permanent magnetism, and is often regarded as a strange form of nano-world angular momentum.

Microelectronics researchers have been investigating spin for at least 20 years. Indeed, their discoveries revolutionized hard-disk drives, which since 1998 have used a spin-based phenomenon to cram more bits than ever on to their disks. Within three years, Motorola Inc. and IBM Corp. are expected to take the next step, introducing the first commercial semiconductor chips to exploit spin--a new form of random access memory called M (for magnetic) RAM. Fast, rugged, and nonvolatile, MRAMs are expected to carve out a niche from the US $10.6-billion-a-year flash memory market. If engineers can bring the costs down enough, MRAMs may eventually start digging into the $35 billion RAM market as well.

The sultans of spin say memory will be just the beginning. They have set their sights on logic, emboldened by experimental results over the past two or three years that have shown the budding technologies of spin to be surprisingly compatible with the materials and methods of plain old charge-based semiconductor electronics. In February 2000, the Defense Advance Research Projects Agency announced a $15-million-a-year, five-year program to focus on new kinds of semiconductor materials and devices that exploit spin. It was the same Arlington, Va., agency's largesse of $60 million or so over the past five years that helped move MRAMs from the blackboard to the verge of commercial production.

Subatomic spookiness

Now proponents envision an entirely new form of electronics, called spintronics. It would be based on devices that used the spin of electrons to control the movement of charge. Farther down the road (maybe a lot farther), researchers might even succeed in making devices that used spin itself to store and process data, without any need to move charge at all. Spintronics would use much less power than conventional electronics, because the energy needed to change a spin is a minute fraction of what is needed to push charge around.

Other advantages of spintronics include nonvolatility: spins don't change when the power is turned off. And the peculiar nature of spin--and the quantum theory that describes it--points to other weird, wonderful possibilities, such as: logic gates whose function--AND, OR, NOR, and so on--could be changed a billion times a second; electronic devices that would work directly with beams of polarized light as well as voltages; and memory elements that could be in two different states at the same time. "It offers completely different types of functionality" from today's electronics, said David D. Awschalom, who leads the Center for Spintronics and Quantum Computation at the University of California at Santa Barbara. "The most exciting possibilities are the ones we're not thinking about."

Much of the research is still preliminary, Awschalom cautions. A lot of experiments are still performed at cryogenic temperatures. And no one has even managed to demonstrate a useful semiconductor transistor or transistor-like device based on spin, let alone a complex logic circuit. Nevertheless, researchers at dozens of organizations are racing to make spin-based transistors and logic, and encouraging results from groups led by Awschalom and others have given ground for a sense that major breakthroughs are imminent.

"A year and a half ago, when I was giving a talk [and] said something about magnetic logic, before I went on with the rest of my talk I'd preface my statement with, '...and now, let's return to the planet Earth,'" said Samuel D. Bader, a group leader in the materials science division at Argonne National Laboratory, in Illinois. "I can drop that line now," he added.

Quantum mechanical mystery

Spin remains an unplumbed mystery. "It has a reputation of not being really fathomable," said Jeff M. Byers, a leading spin theorist at the Naval Research Laboratory (NRL), in Washington, D.C. "And it's somewhat deserved."

Physicists know that spin is the root cause of magnetism, and that, like charge or mass, it is an intrinsic property of the two great classes of subatomic particles: fermions, such as electrons, protons, and neutrons; and bosons, including photons, pions, and more. What distinguishes them, by the way, is that a boson's spin is measurable as an integer number (0, 1, 2...) of units, whereas fermions have a spin of 1/2, 3/2, 5/2.... units.

Much of spin's elusiveness stems from the fact that it goes right to the heart of quantum theory, the foundation of modern physics. Devised in the early decades of the 20th century, quantum theory is an elaborate conceptual framework, based on the notion that the exchange of energy at the subatomic level is constrained to certain levels, or quantities--in a word, quantized.

Paul Dirac, an electrical engineering graduate of Bristol University, in England, turned Cambridge mathematician, postulated the existence of spin in the late 1920s. In work that won him a Nobel prize, he reconciled equations for energy and momentum from quantum theory with those of Einstein's special theory of relativity.

Spin is hard to grasp because it lacks an exact analog in the macroscopic world we inhabit. It is named after its closest real-world counterpart: the angular momentum of a spinning body. But whereas the ordinary angular momentum of a whirling planet, say, or curve ball vanishes the moment the object stops spinning and hence is extrinsic, spin is a kind of intrinsic angular momentum that a particle cannot gain or lose.

"Imagine an electronics technology founded on such a bizarre property of the universe," said Byers.

Of course, the analogy between angular momentum and spin only goes so far. Particle spin does not arise out of rotation as we know it, nor does the electron have physical dimensions, such as a radius. So the idea of the electron having angular momentum in the classical meaning of the term doesn't make sense. Confused? "Welcome to the club," Byers said, with a laugh.

The smallest magnets

Fortunately, a deep grasp of spin is not necessary to understand the promise of the recent advances. The usual imperfect analogies that somehow manage to render the quantum world meaningful for mortal minds turn out to be rather useful--as is spin's role in magnetism, a macroscopic manifestation of spin.

Start with the fact that spin is the characteristic that makes the electron a tiny magnet, complete with north and south poles. The orientation of the tiny magnet's north-south axis depends on the particle's axis of spin. In the atoms of an ordinary material, some of these spin axes point "up" (with respect to, say, an ambient magnetic field) and an equal number point "down." The particle's spin is associated with a magnetic moment, which may be thought of as the handle that lets a magnetic field torque the electron's axis of spin. Thus in an ordinary material, the up moments cancel the down ones, so no surplus moment piles up that could hold a picture to a refrigerator.

For that, you need a ferromagnetic material, such as iron, nickel, or cobalt. These have tiny regions called domains in which an excess of electrons have spins with axes pointing either up or down--at least, until heat destroys their magnetism, above the metal's Curie temperature. The many domains are ordinarily randomly scattered and evenly divided between majority-up and majority-down. But an externally applied magnetic field will move the walls between the domains and line up all the domains in the direction of the field, so that they point in the same direction. The result is a permanent magnet.

Ferromagnetic materials are central to many spintronics devices. Use a voltage to push a current of electrons through a ferromagnetic material, and it acts like a spin polarizer, aligning the spin axes of the transiting electrons so that they are up or down. One of the most basic and important spintronic devices, the magnetic tunnel junction, is just two layers of ferromagnetic material separated by an extremely thin, nonconductive barrier [see figure, "How a Magnetic Tunnel Junction Works" ]. The device was first demonstrated by the French physicist M. Jullière in the mid-1970s.


How a Magnetic Tunnel Junction Works: One of the most fundamental spintronic devices, the magnetic tunnel junction, is just two layers of ferromagnetic material [light blue] separated by a nonmagnetic barrier [darker blue]. In the top illustration, when the spin orientations [white arrows] of the electrons in the two ferromagnetic layers are the same, a voltage is quite likely to pressure the electrons to tunnel through the barrier, resulting in high current flow. But flipping the spins in one of the two layers [yellow arrows, bottom illustration], so that the two layers have oppositely aligned spins, restricts the flow of current.

It works like this: suppose the spins of the electrons in the ferromagnetic layers on either side of the barrier are oriented in the same direction. Then applying a voltage across the three-layer device is quite likely to cause electrons to tunnel through the thin barrier, resulting in high current flow. But flipping the spins in one of the two ferromagnetic layers, so that the two layers have opposite alignment, restricts the flow of current through the barrier [bottom]. Tunnel junctions are the basis of the MRAMs developed by IBM and Motorola, one per memory cell.
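Jullière's own analysis gives a compact formula for how large that resistance change can be, in terms of the spin polarizations of the two ferromagnetic layers. The sketch below implements it; the 50 percent polarization plugged in is a made-up illustrative figure, not a measured value for any particular junction.

# Julliere's model for a magnetic tunnel junction:
#   conductance (parallel)      ~ 1 + P1*P2
#   conductance (antiparallel)  ~ 1 - P1*P2
#   tunnel magnetoresistance TMR = (R_antiparallel - R_parallel) / R_parallel
#                                = 2*P1*P2 / (1 - P1*P2)

def tunnel_magnetoresistance(p1: float, p2: float) -> float:
    return 2 * p1 * p2 / (1 - p1 * p2)

# With illustrative 50%-polarized electrodes, flipping the layers from
# parallel to antiparallel raises the junction resistance by about two
# thirds, a difference easily read out as the "0" and "1" of an MRAM cell.
print(f"TMR = {tunnel_magnetoresistance(0.5, 0.5):.0%}")   # ~67%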

Any memory device can also be used to build logic circuits, in theory at least, and spin devices such as tunnel junctions are no exception. The idea has been explored by Mark Johnson, a leading spin researcher at the Naval Research Laboratory, and others. Lately, work in this area has shifted to a newly formed program at Honeywell Inc., Minneapolis, Minn. The challenges to the devices' use for programmable logic are formidable. To quote William Black, principal engineer at the Rocket Chips subsidiary of Xilinx, a leading maker of programmable logic in San Jose, Calif., "The basic device doesn't have gain and the switching threshold typically is not very well controlled." To call that "the biggest technical impediment," as he does, sounds like an understatement.

Relativistic transistors

Already on the drawing board are spin-based devices that would act something like conventional transistors--and that might even produce gain. There are several competing ideas. The most enduring one is known as the spin field-effect transistor (FET). A more recent proposal puts a new spin, so to speak, on an almost mythical device physicists have pursued for decades: the resonant tunneling transistor.

In an ordinary FET, a metal gate controls the flow of current from a source to a drain through the underlying semiconductor. A voltage applied to the gate sets up an electric field, and that field in turn varies the amount of current that can flow between source and drain. More voltage produces more current.

In 1990 Supriyo Datta and Biswajit A. Das, then both at Purdue University, in West Lafayette, Ind., proposed a spin FET in a seminal article published in the journal Applied Physics Letters. The two theorized about an FET in which the source and drain were both ferromagnetic metals, with the same alignment of electron spins. Electrons would be injected into the source, which would align the spins so that their axes were oriented the same way as those in the source and drain. These spin-polarized electrons would shoot through the source and travel at 1 percent or so of the speed of light toward the drain.

This speed is important, because electrons moving at so-called relativistic speeds are subject to certain significant effects. One is that an applied electric field acts as though it were a magnetic field. So a voltage applied to the gate would torque the spin-polarized electrons racing from source to drain and flip their direction of spin. Thus electron spins would become polarized in the opposite direction to the drain, and could not enter it so easily. The current going from the source to the drain would plummet.

Note that the application of the voltage would cut off current, rather than turn it on, as in a conventional FET. Otherwise, the basic operation would be rather similar--but with a couple of advantages. To turn the current on or off would require only the flipping of spins, which takes very little energy. Also, the polarization of the source and drain could be flipped independently, offering intriguing possibilities unlike anything that can be done with a conventional FET. For example, Johnson patented the idea of using an external circuit to flip the polarization of the drain, turning the single-transistor device into a nonvolatile memory cell.
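The Datta-Das proposal boils down to one relation: the spin of a ballistic electron precesses by an angle of roughly 2 m* alpha L / hbar^2 between source and drain, where alpha is the gate-tunable spin-orbit (Rashba) parameter, L the channel length, and m* the effective mass. The Python sketch below evaluates that relation with illustrative numbers of the general size quoted for InGaAs-like channels; the specific values are assumptions, not figures from the article.

import math

HBAR = 1.055e-34   # reduced Planck constant, J*s
M_E = 9.109e-31    # free-electron mass, kg
EV = 1.602e-19     # joules per electron-volt

def precession_angle(alpha_ev_m, channel_length_m, effective_mass_ratio):
    # Spin precession accumulated while crossing the channel (radians):
    # delta_theta = 2 * m_eff * alpha * L / hbar^2
    m_eff = effective_mass_ratio * M_E
    alpha = alpha_ev_m * EV            # convert eV*m to J*m
    return 2 * m_eff * alpha * channel_length_m / HBAR**2

# Assumed numbers: m* = 0.05 of the free-electron mass, a 1-micrometre
# source-drain gap, and a gate that tunes alpha between 0.5e-11 and 1e-11 eV*m.
for alpha in (0.5e-11, 1.0e-11):
    theta = precession_angle(alpha, 1e-6, 0.05)
    print(f"alpha = {alpha:.1e} eV*m -> precession of {theta / math.pi:.1f} pi")

# Doubling alpha doubles the precession angle, so the gate voltage alone can
# rotate "aligned with the drain" into "anti-aligned", choking off the current
# without having to push the electrons over any barrier.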

A recent German breakthrough will "revolutionize" a major spintronics subfield, one expert declared

Alas, 11 years after the paper by Datta and Das, no one has managed to make a working spin FET. Major efforts have been led by top researchers, such as Johnson at the NRL, Michael Flatté at the University of Iowa, Michael L. Roukes at the California Institute of Technology, Hideo Ohno of Tohoku University in Japan, Laurens W. Molenkamp, then at the University of Aachen in Germany, and Anthony Bland at the University of Cambridge in England. The main problem has been maintaining the polarization of the spins: the ferromagnetic source does in fact align the spins of electrons injected into it, but the polarization does not survive as the electrons shoot out of the source and into the semiconductor between the source and drain.

Recent work in Berlin, Germany, may change all that. In a result published last July in Physical Review Letters, Klaus H. Ploog and his colleagues at the Paul Drude Institute disclosed that they had used a film of iron, grown on gallium arsenide, to polarize spins of electrons injected into the GaAs. Not only was the experiment carried out at room temperature, but the efficiency of the injection, at 2 percent, was high in comparison with similar experiments. The work was "extremely important," said the Naval Research Laboratory's Johnson. "It will revolutionize this subfield. A year from now many spin-FET researchers will be working with iron."

The other kind of proposed spin transistor would exploit a quantum phenomenon called resonant tunneling. The device would be an extension of the resonant tunneling diode. At the heart of this device is an infinitesimal region, known as a quantum well, in which electrons can be confined. However, at a specific, resonant voltage that corresponds to the quantum energy of the well, the electrons tend to slip--the technical term is "tunnel"--freely through the barriers enclosing the well.

Generally, the spin state of the electron is irrelevant to the tunneling, because the up and down electrons have the same amount of energy. But by various means, researchers can design a device in which the spin-up and spin-down energy levels are different, so that there are two different tunneling pathways. The two tunnels would be accessed with different voltages; each voltage would correspond to one or the other of the two spin states. At one voltage, a certain level of spin-down current would flow. At some other voltage, a different level of spin-up current would go through the quantum well's barriers.
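A toy transmission model makes the two-pathway picture concrete: each spin-split level acts as a narrow window in energy (modeled here with a Lorentzian, or Breit-Wigner, line shape), so each bias voltage selects one spin. All energies and widths below are invented purely for illustration.

def transmission(energy, resonance, width=1.0):
    # Breit-Wigner (Lorentzian) line shape for tunneling through one level.
    return width**2 / ((energy - resonance)**2 + width**2)

# Invented spin-split levels, in arbitrary energy units:
E_UP, E_DOWN = 10.0, 14.0

for bias in (10.0, 14.0):
    print(f"bias {bias}: spin-up {transmission(bias, E_UP):.2f}, "
          f"spin-down {transmission(bias, E_DOWN):.2f}")
# At a bias of 10 almost only spin-up electrons get through; at 14 almost only
# spin-down electrons do. Two voltages, two spin-selected currents.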

One way of splitting the energy levels is to make the two barriers of different materials, so that the potential energy that confines the electrons within the quantum well is different on either side of the well. That difference in the confining potentials translates, for a moving electron, into two regions within the quantum well, which have magnetic fields that are different from each other. Those asymmetric fields in turn give rise to the different resonant energy levels for the up and down spin states. A device based on these principles is the goal of a team led by Thomas McGill at the California Institute of Technology, with members at HRL Laboratories LLC, Jet Propulsion Laboratory, Los Alamos National Laboratory, and the University of Iowa.

Another method of splitting the energy levels is to simply put them in a magnetic field. This approach is being taken by a collaborative effort of nine institutions, led by Bruce D. McCombe at the University at Buffalo, New York.

Neither team has managed to build a working device, but the promise of such a device has kept interest high. A specific voltage would produce a certain current of, say, spin-up electrons. Using a tiny current to flip the spins would enable a larger current of spin-down electrons to flow at the same voltage. Thus a small current could, in theory anyway, be amplified.

Ray of hope

As these researchers refine the resonant and ballistic devices, they are looking over their shoulders at colleagues who are forging a whole new class of experimental device. This surging competition is based on devices that create or detect spin-polarized electrons in semiconductors, rather than in ferromagnetic metals. In these experiments, researchers use lasers to get around the difficulties of injecting polarized spin into semiconductors. By shining beams of polarized laser light onto ordinary semiconductors, such as gallium arsenide and zinc selenide, they create pools of spin-polarized electrons.

Some observers lament the dependence on laser beams. They find it hard to imagine how the devices could ever be miniaturized to the extent necessary to compete with conventional electronics, let alone work smoothly with them on the same integrated circuit. Also, in some semiconductors, such as GaAs, the spin polarization persists only at cryogenic temperatures.

In an early experiment, Michael Oestreich, then at Philipps University in Marburg, Germany, showed that electric fields could push pools of spin-polarized electrons through nonmagnetic semiconductors such as GaAs. The experiment was reported in the September 1998 Applied Physics Letters.

Then over the past three years, a breathtaking series of findings has turned the field into a thriving subdiscipline. Several key results were achieved in Awschalom's laboratory at Santa Barbara. He and his co-workers demonstrated that pools of spin-coherent electrons could retain their polarization for an unexpectedly long time--hundreds of nanoseconds. Working separately, Awschalom, Oestreich, and others also created pools of spin-polarized electrons and moved them across semiconductor boundaries without the electrons' losing their polarization.

If not for these capabilities, spin would have no future in electronics. Recall that a practical device will be operated by altering its orientation of spin. That means that the spin coherence has to last, at a minimum, longer than it takes to alter the orientation of that spin polarization. Also, spintronic devices, like conventional ones, will be built with multiple layers of semiconductors, so moving spin-polarized pools across junctions between layers without losing the coherence will be essential.

Awschalom and his co-workers used a pulsed, polarized laser to establish pools of spin-coherent electrons. The underlying physics revolves around the so-called selection rules. These are quantum-theoretical laws describing whether or not an electron can change energy levels by absorbing or emitting a photon of light. According to those selection rules, light that is circularly polarized will excite only electrons of one spin orientation or the other. Conversely, when spin-coherent electrons combine with holes, the result is photons of circularly polarized light.

Puzzling precession

In his most recent work, Awschalom and his graduate student, Irina Malajovich, collaborated with Nitin Samarth of Pennsylvania State University in University Park and his graduate student, Joseph Berry. As he has in the past, Awschalom performed the experiment on pools of electrons that were not only spin polarized but were also precessing. Precession occurs when a pool of spin-polarized electrons is put in a magnetic field: the field causes their spin axes to rotate in a wobbly way around that field. The frequency and direction of rotation depend on the strength of the magnetic field and on characteristics of the material in which the precession is taking place.

The Santa Barbara-Penn State team used circularly polarized light pulses to create a pool of spin-coherent electrons in GaAs. They applied a magnetic field to make the electrons precess, and then used a voltage to drag the precessing electrons across a junction into another semiconductor, ZnSe. The researchers found that if they used a low voltage to drag the electrons into the ZnSe, the electrons took on the precession characteristics of the ZnSe as soon as they got past the junction. However, if they used a higher voltage, the electrons kept on precessing, as though they were still in the GaAs [see illustration, "Precessional Mystery" ].
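The material dependence at work here is usually summarized by the Larmor relation: the precession angular frequency is g times the Bohr magneton times the field, divided by hbar, with an electron g-factor that differs from one semiconductor to the next. The sketch below uses approximate literature g-factors for GaAs and ZnSe and an assumed 0.25-tesla field, purely to illustrate why "precessing like GaAs" and "precessing like ZnSe" are distinguishable.

import math

MU_B = 9.274e-24   # Bohr magneton, J/T
HBAR = 1.055e-34   # reduced Planck constant, J*s

def larmor_frequency_ghz(g_factor, field_tesla):
    # Larmor precession: omega = g * mu_B * B / hbar, returned in GHz.
    omega = abs(g_factor) * MU_B * field_tesla / HBAR
    return omega / (2 * math.pi) / 1e9

# Approximate electron g-factors (literature ballpark, not from the article)
# and an assumed lab-scale field.
G_GAAS, G_ZNSE = -0.44, 1.1
FIELD = 0.25   # tesla

print(f"GaAs: {larmor_frequency_ghz(G_GAAS, FIELD):.1f} GHz")
print(f"ZnSe: {larmor_frequency_ghz(G_ZNSE, FIELD):.1f} GHz")
# Different hosts impose different precession rates (and, through the sign of
# g, different senses of rotation), so the team could tell from the observed
# precession which material the dragged-in electrons "thought" they were in.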


Precessional Mystery: Given the right circumstances, electrons will synchronously "precess," or whirl about an axis that is itself moving. The angle and rate of this wobbly spin depend in part on the material in which it occurs. Thus, if a voltage pushes an electron out of gallium arsenide [light blue] into zinc selenide [yellow], the electron's precession characteristics change [top]. However, if a higher voltage pushes the electron sharply enough into the ZnSe, the precession characteristics do not change but remain those of GaAs for a while [bottom]. Some researchers believe they will be able to exploit this variability in future devices.

"You can tune the whole behavior of the current, depending on the electric field," Awschalom said in an interview. "That's what was so surprising to us." The group reported its results in the 14 June issue of Nature, prompting theorists around the world to wear out their pencils trying to explain the findings.

Other results from the collaboration were even more intriguing. The Santa Barbara and Penn State researchers performed a similar experiment, except with p-type GaAs and n-type ZnSe. N-type materials rely on electrons to carry current; p-type, on holes. Because the materials were of two different charge-carrier types, an electric field formed around their junction. That field, the experimenters found, was strong enough to pull a pool of spin-coherent electrons from the GaAs immediately into the ZnSe, where the coherence persisted for hundreds of nanoseconds.

The result was encouraging for two reasons. As Awschalom put it, "It showed that you can build n-type and p-type materials and spin can get through the interfaces between them just fine." Equally important, it demonstrated that the spin can be moved from one kind of semiconductor into another without the need for external electric fields, which wouldn't be practical in a commercial device.

"The next big opportunity is to make a spin transistor," Awschalom added. "These results show, in principle, that there is no obvious reason why it won't work well."

Such a device is at least several years away. But even if researchers were on the verge of getting a spin transistor to work in the laboratory, more breakthroughs would be necessary before the device could be practical. For example, the fact that the device would need pulses of circularly polarized laser light would seem an inconvenience, although Awschalom sees a bright side. The gist of his vision is that the photons would be used for communications among chips, the magnetic elements for memory, and the spin-based devices for fast, low-power logic.

It's far-fetched now, but no more so than the idea of 1GB DRAMs would have seemed in the days when triodes ruled.

Hot off the presses is Semiconductor Spintronics and Quantum Computation, edited by David D. Awschalom, Nitin Samarth, and Daniel Loss. The 250-page book was released last October by Springer Verlag, Berlin/Heidelberg; ISBN: 3540421769.

The November/December issue of American Scientist, published by the scientific research society Sigma Xi, included an eight-page overview titled "Spintronics" by Sankar Das Sarma. See Vol. 89, pp. 516-523.

Honeywell Inc.'s Romney R. Katti and Theodore Zhu described the company's magnetic RAM technology in "Attractive Magnetic Memories," IEEE Circuits & Devices, Vol. 17, March 2001, pp. 26-34.

Continue reading here:

The Quest for the Spin Transistor - IEEE Spectrum

Posted in Quantum Physics | Comments Off on The Quest for the Spin Transistor – IEEE Spectrum

Everything we know about soil science might be wrong – The Counter

Posted: at 8:40 pm

More:

Everything we know about soil science might be wrong - The Counter

Posted in Quantum Physics | Comments Off on Everything we know about soil science might be wrong – The Counter

Robert Noyce and the Tunnel Diode – IEEE Spectrum

Posted: at 8:40 pm

"I have in my notebooks from [1956] a complete description of the tunnel diode," the speaker told the audience at a symposium on innovation at the MIT Club of New York, in New York City, in December 1976. It was quite a revelation, because the speaker wasn't Leo Esaki, who had won the 1973 Nobel Prize in physics for inventing the tunnel diode in the late 1950s. It was Robert N. Noyce, cofounder of Intel Corp., Santa Clara, Calif.; inventor of the first practical integrated circuit; and a man who, as far as anyone knew before that speech, had no connection to the most storied electronic device never to be manufactured in large numbers.

Engineers coveted the tunnel diode for its extremely fast switching times (tens of picoseconds) at a time when transistors loped along at milliseconds. But it never found commercial success, though it was occasionally used as a very fast switch. As a two-terminal device, the diode could not readily be designed for amplification, unlike a three-terminal transistor, whose circuit applications were then growing astronomically. Nevertheless, the tunnel diode was a seminal invention. It provided the first physical evidence that the phenomenon of tunneling, a key postulate of quantum mechanics, was more than an intriguing theory.

Quantum mechanics, the foundation of modern physics, is an elaborate conceptual framework that predicts the behavior of matter and radiation at the atomic level. One of its most fundamental notions is that the exchange of energy at the subatomic level is constrained to certain levels, or quantities; in a word, quantized.

Many of the core concepts and phenomena of quantum mechanics are almost completely counterintuitive. For example, consider a piece of semiconductor joined to an insulator. From the point of view of classical physics theory, the electrons in the semiconductor are like rubber balls, and the insulator is like a low garden wall. An electron would have no chance of getting over the barrier unless its energy were higher than the barrier's. But according to quantum mechanics, the phenomenon of tunneling ensures that for certain conditions an electron with less energy than the barrier's will not bounce off the wall but will instead tunnel right through it.
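
Just how improbable that passage is depends exponentially on the barrier's width and height. As a rough guide (a textbook WKB estimate, not a relation given in the article), the transmission probability is

\[
  T \;\approx\; e^{-2\kappa W}, \qquad
  \kappa = \frac{\sqrt{2\,m^{*}\,(U_0 - E)}}{\hbar},
\]

where $U_0 - E$ is how far the electron's energy lies below the top of the barrier, $W$ is the barrier width, and $m^{*}$ is the effective mass. The exponential makes tunneling hopeless across macroscopic barriers yet quite likely across barriers only a few nanometers thick.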

Ever since the late 1920s, physicists had debated about whether tunneling really occurred in solids. The tunnel diode offered the first compelling experimental evidence that it did.

When Esaki, then a 49-year-old semiconductor research scientist at IBM Corp., won his Nobel Prize in 1973, neither he nor the Nobel committee had any idea about Noyce's work. Esaki had made a tunnel diode and measured its current versus voltage behavior 16 years earlier, when he was working at the company now called Sony Corp. in his native Japan. The Nobel committee, in fact, dated Esaki's discovery from 1957, roughly contemporaneous with Noyce's recollected work in the same field. Stig Lundqvist of the Swedish Royal Academy of Sciences used the electrons-as-balls-against-the-wall analogy in his speech presenting the 1973 Nobel Prize in physics to Esaki; Ivar Giaever and Brian David Josephson shared the award for discovering different aspects of the tunneling phenomenon in solids.

In Good Company: The eight engineers and scientists, including Robert N. Noyce (right) and Gordon E. Moore (standing second from left), who cofounded Fairchild Semiconductor in 1957, are pictured here on the firm's production floor in its early years. Photo: Intel Corp.

Almost every important discovery since the start of the industrial age has a contested history. Heinrich Göbel, from a town near Hanover in Germany, filed suit in 1893 claiming that he, not Thomas Edison, had invented the light bulb years earlier in New York City. Something similar has occurred for the airplane, telephone, rotor encryption machine, television, integrated circuit, and microprocessor, to name but a few. Such counterclaims often have merit: invention and research are often group activities, and discoveries regularly appear in different places at almost precisely the same time. And sometimes such claims come from experimenting hacks eager for a measure of recognition for themselves.

Noyce was no hack, obviously; his integrated circuit nestles at the heart of essentially every piece of modern electronics. In fact, the invention of the IC was recognized as a Nobel-level achievement in 2000, when the prize for physics was awarded to Jack S. Kilby, credited by U.S. courts as the coinventor of the IC. Unfortunately for Noyce, he missed his chance to join the pantheon of laureates when he died in 1990; the prizes are not awarded posthumously.

Nor was Noyce pursuing glory when he mentioned his work in his talk at that symposium in 1976. In fact, immediately after claiming to have the invention in his notebooks, Noyce said, "The work had been done elsewhere [by Leo Esaki] and was published shortly thereafter." He had mentioned it in the first place only because he thought the way his boss had handled Noyce's tunnel diode efforts in 1956 might be instructive in how not to motivate people.

Noyce's boss at that time was William B. Shockley, the brilliant, mercurial, ambitious, autocratic, and eccentric physicist. He was the sort of man who thought nothing of publicly subjecting his employees to lie-detector tests. As the young Noyce had his insight about the tunnel diode, Shockley himself was only weeks away from his own Nobel Prize in physics, awarded for his 1947 invention, along with two colleagues, John Bardeen and Walter Brattain, of the transistor.

Shockley had started Shockley Semiconductor Laboratory in 1955 with the self-proclaimed goal of making a million dollars and seeing his name in The Wall Street Journal. Noyce headed the transistor group at Shockley Lab. With a Ph.D. in physical electronics and two years in a transistor research lab at Philco Corp., he was the most experienced semiconductor researcher among Shockley's several dozen employees.

On 14 August 1956, Noyce noted an idea for a negative-resistance diode in his lab notebook. With most diodes, current increases with increased voltage: the more voltage applied to the device, the more current passes through it.

In a diode, the current under a forward voltage, or bias, is relatively large, while little current results when the bias is reversed. For a semiconductor diode, such behavior is obtained by adding impurity atoms. Esaki's semiconductor was germanium. He used two types of impurities. So-called donor atoms have more electrons in their outer orbits than germanium atoms do. The excess electrons become free electrons, available for conduction. A semiconductor with an excess of electrons is called n-type.

Similarly, if the germanium is doped with impurity atoms that hold fewer electrons in their outer orbits than germanium, the impurity atoms will take away, or accept, electrons from the semiconductor atoms, leaving behind deficiencies of electrons, known as holes. A semiconductor with an excess of holes, each one considered to have a positive charge, is called a p-type semiconductor.
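
As a rough quantitative sketch (these are standard textbook relations, not figures quoted in the article), nearly every donor in an n-type sample contributes one free electron, and the hole population follows from the mass-action law:

\[
  n \;\approx\; N_D, \qquad p \;\approx\; \frac{n_i^{2}}{N_D},
\]

with the mirror-image relations $p \approx N_A$ and $n \approx n_i^{2}/N_A$ on the p side. Here $N_D$ and $N_A$ are the donor and acceptor concentrations and $n_i$ is the intrinsic carrier concentration of the host crystal, roughly $2 \times 10^{13}\,\mathrm{cm^{-3}}$ for germanium at room temperature.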

Germanium can be doped to create p- and n-type sections that butt against each other and form what is called a p-n junction. In a p-n junction, a potential difference normally builds up across a narrow region near where the p-type and n-type semiconductors come into contact. This built-in potential sets up a barrier against the passage of holes into the n-type material and the passage of electrons into the p-type. Applying an external bias across the diode changes the barrier's height. A forward bias, obtained by connecting a battery's positive terminal to the p side and its negative terminal to the n side, lowers the barrier, allowing electrons to flow easily from the n side to the p side. Reverse the polarity, and the height of the barrier rises, prohibiting the flow of electrons.
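
For a sense of how doping sets the size of that barrier, the standard textbook expression (again, not a formula from the article) is

\[
  V_{bi} \;=\; \frac{kT}{q}\,\ln\!\left(\frac{N_A N_D}{n_i^{2}}\right),
\]

so heavier doping raises the built-in potential; in the degenerate limit relevant to a tunnel diode, $V_{bi}$ approaches the band-gap voltage $E_g/q$ and can even exceed it once the Fermi levels move into the bands.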

Noyce, however, made a startling prediction. First he proposed the existence of a semiconductor whose regions of opposite polarity were each doped with roughly a thousand times more impurities than was usual at the time. When a forward bias increasing from zero was applied to such a heavily doped diode (which Noyce called degenerate), he predicted that current would initially increase at a greater rate than for a normal diode. This phenomenon would occur because the high impurity density would, in effect, make it possible for the balls (electrons) to tunnel through the wall (the junction's potential barrier). At some point, increasing the voltage further would decrease the tunneling current, but at still higher voltages, the current would increase because of the nontunneling diode current.
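
That predicted shape (a tunneling current that rises to a peak, falls off, and is eventually overtaken by the ordinary diode current) is often captured with a simple empirical model. The sketch below is only an illustration with assumed parameter values; it is not Noyce's or Esaki's calculation.

import numpy as np

# Assumed, illustrative parameters (not taken from Noyce's notebook or Esaki's paper)
I_P, V_P = 1.0e-3, 0.065   # peak tunneling current (A) and peak voltage (V)
I_S, V_T = 1.0e-9, 0.026   # diode saturation current (A) and thermal voltage (V)

def tunnel_diode_current(v):
    """Common empirical approximation of a tunnel diode I-V curve:
    a tunneling term that peaks at V_P and then decays, plus the ordinary
    exponential diode term that takes over at higher bias."""
    i_tunnel = I_P * (v / V_P) * np.exp(1.0 - v / V_P)
    i_diode = I_S * (np.exp(v / V_T) - 1.0)
    return i_tunnel + i_diode

v = np.linspace(0.0, 0.6, 301)
i = tunnel_diode_current(v)

# Negative resistance is the stretch where current falls as voltage rises (dI/dV < 0)
didv = np.gradient(i, v)
neg = v[didv < 0]
print(f"negative-resistance region spans roughly {neg[0]:.2f} V to {neg[-1]:.2f} V")

Plotting current against voltage with these assumed numbers reproduces the familiar peak-and-valley curve the article goes on to describe.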

Noyce discussed his ideas with his friend Gordon E. Moore, a chemist who had joined Shockley Lab a day before Noyce. He then brought his notebook to Shockley, fully expecting him to be impressed. Instead, the boss showed no interest in the idea, Noyce said. The lab was not equipped to do anything profitable with Noyce's thoughts, and besides, Shockley was a fiercely competitive man who resented his employees pursuing ideas that he had not personally placed on their research agendas. Disappointed, Noyce closed his lab book and went on to other projects more in line with Shockley's wishes.

Until now, no one other than Moore and Shockley had seen Robert Noyce's 1956 description of a tunnel diode. But Noyce copied his work and saved it. How he managed to copy these pages is unclear (photocopy technology was in its infancy in the late 1950s, and Noyce never made note of going back to his Shockley notebooks later in life), but that the pages are legitimate is indisputable. Leslie Berlin, one of this article's authors, found them in January 2001 tucked in one of Noyce's Fairchild notebooks stored in Santa Clara, Calif., at a company that prefers not to be identified.

Berlin compared these copied pages to the only surviving notebook from Shockley Lab: the book belonging to William Shockley housed in the Special Collections of Stanford University, in California. The pages on which Noyce's ideas are written are clearly from the same type of lab book that Shockley issued to his staff, and the handwriting is undoubtedly Noyce's. This, along with the date of Noyce's work (which correlates with his 1976 comments about it) and Moore's recollections of the event, further validates their authenticity.

A quick comparison of Noyce's notebook pages with Esaki's seminal paper, "New Phenomenon in Narrow Germanium p-n Junctions," published in Physical Review in January 1958 (and received by that journal in October 1957), shows striking parallels. Both men used an energy-band diagram that represents the electron and hole energies on the y (vertical) axis versus their position in the p-n junction on the x (horizontal) axis [see sidebar, "The Noyce Diode," two pages from Noyce's notebook].

Noyce's energy-level diagram, which is now called an energy-band diagram [on the left-hand page], shows where the electrons and holes are located. It also illustrates the conditions necessary for tunneling current. The upper solid line in the diagram represents the bottom of the semiconductor's conduction band; in this band electrons can move freely as a result of the donor atoms. The lower solid line represents the top of the valence band, where acceptor impurities allow holes to move freely. The separation between the conduction and valence bands is the energy gap, or Eg, and is the range in energy where no electrons or holes are permitted. For this reason, Eg is sometimes called the forbidden gap.

In Noyce's diagram, the Fermi energy, or Ef, represents the energy boundary for most of the holes in the p-type semiconductor and most of the free electrons in the n-type. For a highly doped, or degenerate, semiconductor, Ef falls below the edge of the valence band on the p side and rises above the edge of the conduction band on the n side. Electrons sink so they fill the lowest energy levels in the conduction band, while holes float and fill the highest levels of the valence band. Therefore, it is the holes between the top of the valence band and Ef and the free electrons between Ef and the bottom of the conduction band that are significant for tunneling.

The region between the p and the n sides where the valence and conduction band edges bend is called the depletion region; this is where the potential barrier exists. This region narrows for large donor and acceptor concentrations and would be less than 10 nanometers for a tunnel diode.
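
The narrowing follows from the standard depletion approximation; as a rough sketch (a textbook relation, not one given in the article),

\[
  W \;=\; \sqrt{\frac{2\,\varepsilon_s\,(V_{bi}-V)}{q}\left(\frac{1}{N_A}+\frac{1}{N_D}\right)},
\]

so pushing $N_A$ and $N_D$ to degenerate levels, on the order of $10^{19}$ to $10^{20}\,\mathrm{cm^{-3}}$, shrinks $W$ to a few nanometers, thin enough for band-to-band tunneling to become appreciable.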

Note that without an applied bias, the holes on the p side are at a higher energy than the electrons on the n side. For tunneling to occur, there must be holes at the same energy as the free electrons. But a forward bias (a positive voltage connected to the p side) raises Ef and the conduction-band electrons on the n side with respect to Ef on the p side by the amount of the bias voltage. Now there are free electrons at the same energy as the holes, and the electrons can tunnel through the potential barrier to holes on the p side, resulting in a current. As the forward bias is increased, more free electrons and holes are at the same energy and the tunneling current increases.

Both Noyce and Esaki recognized that as the bias increased further, Ef on the n side would be raised further with respect to Ef on the p side and the concentration of free electrons at the same energy as holes would diminish and result in a reduced tunneling current, as shown in Noyce's current (I) vs. voltage (V) plot. At a larger bias, the normal diode current would flow at the voltage Eg in Noyce's plot.

This plot [on the right-hand page] is very similar to the measured I-V plot that Esaki shows. This phenomenon of decreasing current with increasing voltage is negative resistance, a characteristic that has been exploited to build oscillators.

But there was one important difference between Noyce's and Esaki's work. Noyce only predicted that the drop in current (the evidence of tunneling) would occur. Esaki, who actually built a device to demonstrate his ideas, showed that it would. This difference is crucial: many good ideas die en route from the mind to the lab bench.

Noyce's failure to implement his brilliant idea was almost certainly a direct result of Shockley's discouraging comments to him in 1956. Noyce was an experimentalist at heart. (He admired "people who did things," his friend Maurice Newstein explained to one of the authors in 2003.) Noyce would later prove a bit of an iconoclast as well, joining a covert effort to build silicon transistors at Shockley whenever the boss, who had decided the lab should focus its attention on an obscure device he had invented called a four-layer diode, was away. But in August 1956, Noyce was 29 years old and not yet six months into his job at Shockley Lab. If his boss told him to drop an idea, at that point he would have done it.

Noyce no longer worked for Shockley when he read Leo Esaki's Physical Review article in 1958. In September 1957, he, Moore, and six other Shockley employees (more than half the senior technical staff) had left their temperamental boss to start their own transistor company, Fairchild Semiconductor. Shockley's business venture, meanwhile, withered (he joined the Stanford faculty in 1963), and he suffered the additional indignity of watching his protégés' new company achieve phenomenal success. In less than a decade, Fairchild, under Noyce's leadership, grew to employ 11 000 people and generate more than US $12 million in profits.

The publication of Esaki's article caused quite a sensation in the electronics community. At an international physics conference in 1958, the audience for Esaki's presentation was overflowing. Interestingly, Esaki credits William Shockley, who explicitly mentioned Esaki's work in his keynote address earlier at the conference, with the large attendance at his presentation.

We can never know why Shockley changed his mind about the importance of the diode, but there are several possible explanations. Shockley was infamous for his swings of opinion: one person who worked for him said he was regularly jerking the company back and forth. Similar remarks from other colleagues indicate that Shockley may have changed his mind in this case. Moreover, the grudge he bore against the eight Fairchild founders was still fresh in 1958.

After reading Esaki's article, Noyce brought his copy of Physical Review to Moore and laid it on his desk. Noyce could not mask the irritation he felt with William Shockley and, even more, with himself for not pursuing his ideas after Shockley dismissed them. "If I had gone one step further," he told Moore, who would go on to found Intel with him in 1968, "I would have done it."

Leslie Berlin is a visiting scholar in the Program in the History and Philosophy of Science and Technology at Stanford University, in California. Her biography of Robert Noyce, The Man Behind the Microchip: Robert Noyce and the Invention of Silicon Valley, will be published by Oxford University Press on 1 June.

H. Craig Casey Jr. is Professor Emeritus at Duke University, in Durham, N.C. He is a life fellow of the IEEE and a past president of the Electron Devices Society.

Leo Esaki's lecture "Long Journey Into Tunneling," which he gave when awarded the Nobel Prize in physics in 1973, is available at http://nobelprize.org/physics/laureates/1973/esaki-lecture.html.

An interactive Web site that graphically illustrates the current mechanisms in a tunnel diode is at http://www.shef.ac.uk/eee/teach/resources/diode/tunnel.html.

For more on William Shockley, see Crystal Fire: The Birth of the Information Age, by Michael Riordan and Lillian Hoddeson (W.W. Norton, 1997).

Read the rest here:

Robert Noyce and the Tunnel Diode - IEEE Spectrum

Posted in Quantum Physics | Comments Off on Robert Noyce and the Tunnel Diode – IEEE Spectrum

Why information is central to physics and the universe itself – Big Think

Posted: at 8:40 pm

The following is an adapted excerpt from the book The Extended Mind. It is reprinted with permission of the author.

If you'd like to make smarter choices and sounder decisions (and who doesn't?), you might want to take advantage of a resource you already have close at hand: your interoception. Interoception is, simply stated, an awareness of the inner state of the body. Just as we have sensors that take in information from the outside world (retinas, cochleas, taste buds, olfactory bulbs), we have sensors inside our bodies that send our brains a constant flow of data from within. These sensations are generated in places all over the body (in our internal organs, in our muscles, even in our bones) and then travel via multiple pathways to a structure in the brain called the insula. Such internal reports are merged with several other streams of information (our active thoughts and memories, sensory inputs gathered from the external world) and integrated into a single snapshot of our present condition: a sense of "how I feel" in the moment, as well as a sense of the actions we must take to maintain a state of internal balance.

To understand the role interoception can play in smart decision-making, it's important to know that the world is full of far more information than our conscious minds can process. However, we are also able to collect and store the volumes of information we encounter on a non-conscious basis. As we proceed through each day, we are continuously apprehending and storing regularities in our experience, tagging them for future reference. Through this information-gathering and pattern-identifying process, we come to know things, but we're typically not able to articulate the content of such knowledge or to ascertain just how we came to know it. This trove of data remains mostly under the surface of consciousness, and that's usually a good thing. Its submerged status preserves our limited stores of attention and working memory for other uses.

A study led by cognitive scientist Pawel Lewicki demonstrates this process in microcosm. Participants in Lewicki's experiment were directed to watch a computer screen on which a cross-shaped target would appear, then disappear, then reappear in a new location; periodically they were asked to predict where the target would show up next. Over the course of several hours of exposure to the target's movements, the participants' predictions grew more and more accurate. They had figured out the pattern behind the target's peregrinations. But they could not put this knowledge into words, even when the experimenters offered them money to do so. The subjects were not able to describe "anything even close to the real nature" of the pattern, Lewicki observes. The movements of the target operated according to a pattern too complex for the conscious mind to accommodate, but the capacious realm that lies below consciousness was more than roomy enough to contain it.

"Nonconscious information acquisition," as Lewicki calls it, along with the ensuing application of such information, is happening in our lives all the time. As we navigate a new situation, we're scrolling through our mental archive of stored patterns from the past, checking for ones that apply to our current circumstances. We're not aware that these searches are under way; as Lewicki observes, "The human cognitive system is not equipped to handle such tasks on the consciously controlled level." He adds, "Our conscious thinking needs to rely on notes and flowcharts and lists of 'if-then' statements or on computers to do the same job which our non-consciously operating processing algorithms can do without external help, and instantly."

But if our knowledge of these patterns is not conscious, how then can we make use of it? The answer is that, when a potentially relevant pattern is detected, it's our interoceptive faculty that tips us off: with a shiver or a sigh, a quickening of the breath or a tensing of the muscles. The body is rung like a bell to alert us to this useful and otherwise inaccessible information. Though we typically think of the brain as telling the body what to do, just as much does the body guide the brain with an array of subtle nudges and prods. (One psychologist has called this guide our "somatic rudder.") Researchers have even captured the body in mid-nudge, as it alerts its inhabitant to the appearance of a pattern that she may not have known she was looking for.

Such interoceptive prodding was visible during a gambling game that formed the basis of an experiment led by neuroscientist Antonio Damasio, a professor at the University of Southern California. In the game, presented on a computer screen, players were given a starting purse of two thousand "dollars" and were shown four decks of digital cards. Their task, they were told, was to turn the cards in the decks face-up, choosing which decks to draw from such that they would lose the least amount of money and win the most. As they started clicking to turn over cards, players began encountering rewards (bonuses of $50 here, $100 there) and also penalties, in which small or large amounts of money were taken away. What the experimenters had arranged, but the players were not told, was that decks A and B were "bad" (they held lots of large penalties in store) and decks C and D were "good," bestowing more rewards than penalties over time.
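
For concreteness, the deck structure can be mocked up in a few lines of code. The payoff numbers below are assumptions chosen only to match the qualitative description (bad decks with larger rewards but punishing penalties, good decks that come out ahead over time); they are not the values used in Damasio's experiment.

import random

# Assumed payoff schedules, for illustration only (not Damasio's actual values)
DECKS = {
    "A": {"reward": 100, "penalty": -1250, "penalty_prob": 0.10},  # "bad"
    "B": {"reward": 100, "penalty": -1250, "penalty_prob": 0.10},  # "bad"
    "C": {"reward": 50,  "penalty": -250,  "penalty_prob": 0.10},  # "good"
    "D": {"reward": 50,  "penalty": -250,  "penalty_prob": 0.10},  # "good"
}

def draw(deck_name, rng=random):
    """Turn over one card: always a reward, sometimes a penalty as well."""
    deck = DECKS[deck_name]
    payoff = deck["reward"]
    if rng.random() < deck["penalty_prob"]:
        payoff += deck["penalty"]
    return payoff

def play(strategy, n_cards=100, purse=2000, rng=random):
    """Play n_cards draws; strategy maps the turn number to a deck name."""
    for turn in range(n_cards):
        purse += draw(strategy(turn), rng)
    return purse

random.seed(0)
unaware = play(lambda turn: random.choice("ABCD"))   # samples all decks blindly
body_wise = play(lambda turn: random.choice("CD"))   # sticks to the good decks
print(f"random picks: ${unaware}   avoiding the bad decks: ${body_wise}")

With these assumed numbers, the bad decks lose about as much per card as the good decks gain, so a player who gradually shifts toward C and D, as Damasio's participants did, steadily pulls ahead.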

As they played the game, the participants' state of physiological arousal was monitored via electrodes attached to their fingers; these electrodes kept track of their level of "skin conductance." When our nervous systems are stimulated by an awareness of potential threat, we start to perspire in a barely perceptible way. This slight sheen of sweat momentarily turns our skin into a better conductor of electricity. Researchers can thus use skin conductance as a measure of nervous system arousal. Looking over the data collected by the skin sensors, Damasio and his colleagues noticed something interesting: after the participants had been playing for a short while, their skin conductance began to spike when they contemplated clicking on the bad decks of cards. Even more striking, the players started avoiding the bad decks, gravitating increasingly to the good decks. As in the Lewicki study, subjects got better at the task over time, losing less and winning more.

Yet interviews with the participants showed that they had no awareness of why they had begun choosing some decks over others until late in the game, long after their skin conductance had started flaring. By card 10 (about forty-five seconds into the game), measures of skin conductance showed that their bodies were wise to the way the game was rigged. But even ten turns later, on card 20, "all indicated that they did not have a clue about what was going on," the researchers noted. It took until card 50 was turned, and several minutes had elapsed, for all the participants to express a conscious hunch that decks A and B were riskier. Their bodies figured it out long before their brains did. Subsequent studies supplied an additional, and crucial, finding: players who were more interoceptively aware were more apt to make smart choices within the game. For them, the body's wise counsel came through loud and clear.

Damasio's fast-paced game shows us something important. The body not only grants us access to information that is more complex than what our conscious minds can accommodate. It also marshals this information at a pace that is far quicker than our conscious minds can handle. The benefits of the body's intervention extend well beyond winning a card game; the real world, after all, is full of dynamic and uncertain situations, in which there is no time to ponder all the pros and cons. When we rely on the conscious mind alone, we lose; but when we listen to the body, we gain a winning edge.

Annie Murphy Paul is a science writer who covers research on learning and cognition. She is the author of The Extended Mind: The Power of Thinking Outside the Brain, from which this article is adapted.

Excerpt from:

Why information is central to physics and the universe itself - Big Think

Posted in Quantum Physics | Comments Off on Why information is central to physics and the universe itself – Big Think

Page 72«..1020..71727374..8090..»