Daily Archives: August 30, 2022

PERMA FIX ENVIRONMENTAL SERVICES INC : Entry into a Material Definitive Agreement, Creation of a Direct Financial Obligation or an Obligation under an…

Posted: August 30, 2022 at 11:33 pm

Item 1.01. Entry into a Material Definitive Agreement.

On August 29, 2022, Perma-Fix Environmental Services, Inc. (the "Company") entered into an amendment to its Loan Agreement (as defined below) with PNC Bank, National Association ("PNC" or "lender"), as discussed under Item 2.03 below, which is incorporated herein by reference.

Item 2.03. Creation of a Direct Financial Obligation or an Obligation Under an Off-Balance Sheet Arrangement of a Registrant.

The Company and PNC entered into a Fifth Amendment ("Fifth Amendment") to its Second Amended and Restated Revolving Credit, Term Loan and Security Agreement, as amended ("Loan Agreement") on August 29, 2022, to set forth certain revisions to the Loan Agreement. The new revisions to the Loan Agreement are included in the revised Loan Agreement attached hereto as Exhibit 4.2 and referenced in the Fifth Amendment as Annex A ("Revised Loan Agreement"). The new revisions in the Revised Loan Agreement include, among other revisions, (i) removing and replacing the London Interbank Offered Rate ("LIBOR")-based interest rate benchmark provisions with provisions based on the Secured Overnight Financing Rate ("SOFR"). If the Company selects the SOFR benchmark provisions, interest due on the revolving credit will be the "Term SOFR Rate" (as defined in Exhibit 4.2 hereto and referenced as Annex A in the Fifth Amendment) plus 3.00% plus a SOFR Adjustment applicable for an interest period selected by the Company, and interest due on the term loan and capital line will be the Term SOFR Rate plus 3.50% plus a SOFR Adjustment for an interest period selected by the Company. Pursuant to the Revised Loan Agreement, SOFR Adjustment rates of 0.10% and 0.15% will be applicable for a one-month interest period and a three-month interest period, respectively; and (ii) adding certain additional anti-terrorism provisions to the covenants contained in the Loan Agreement.
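As a quick arithmetic sketch of the benchmark terms described above: the all-in rate is the Term SOFR Rate plus the applicable margin plus the SOFR Adjustment. The margins (3.00% revolver; 3.50% term loan and capital line) and adjustments (0.10% one-month; 0.15% three-month) come from the filing, but the Term SOFR Rate value below is purely illustrative.

```python
# Sketch: all-in rate = Term SOFR Rate + margin + SOFR Adjustment.
# The 4.25% Term SOFR value is illustrative only, not from the filing.

def all_in_rate(term_sofr: float, margin: float, sofr_adjustment: float) -> float:
    """Return the all-in annual interest rate as a decimal fraction."""
    return term_sofr + margin + sofr_adjustment

ILLUSTRATIVE_TERM_SOFR = 0.0425

# Revolving credit, one-month interest period: 3.00% margin, 0.10% adjustment
revolver_1m = all_in_rate(ILLUSTRATIVE_TERM_SOFR, 0.0300, 0.0010)
# Term loan / capital line, three-month period: 3.50% margin, 0.15% adjustment
term_loan_3m = all_in_rate(ILLUSTRATIVE_TERM_SOFR, 0.0350, 0.0015)

print(f"revolver (1m): {revolver_1m:.2%}")    # revolver (1m): 7.35%
print(f"term loan (3m): {term_loan_3m:.2%}")  # term loan (3m): 7.90%
```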

Item 9.01. Financial Statements and Exhibits

(d) Exhibits.

Edgar Online, source Glimpses


Scientists harness powers of Webb and Hubble in stunning galactic image – Mashable

Posted: at 11:26 pm

Stare into the core of the Phantom Galaxy.

New images from humanity's most powerful space telescopes, the legendary Hubble telescope and its successor the James Webb Space Telescope, reveal unprecedented detail in this magnificent distant spiral galaxy. It's 32 million light-years away.

The over 30-year-old Hubble telescope views light we can see (visible light), while the Webb telescope views a type of light with longer wavelengths (called "infrared light") that isn't visible to us. Together, these instruments gather bounties of data that reveal new insights about what lies in the distant cosmos.

The middle image below shows the combined views of the Hubble and Webb telescopes. What you can see:

The areas of bright pink in the reddish spirals are active star-forming regions

The bright blue dots are other stars

The core of the galaxy glows cyan and green. These are older stars clustered around the galactic center.

At center is a view of the Phantom Galaxy with combined data of the Hubble and Webb telescopes. Credit: ESA / Webb / NASA / CSA / J. Lee and the PHANGS-JWST Team / Acknowledgement: J. Schmidt

In the Webb image by itself (the top image of this story or the right-side image in the comparison above), it's easy to see the many stars (shown in blue) amassed in the galaxy's core. A lack of gas at the heart of the Phantom Galaxy makes this view exceptionally clear.

Hubble continues to capture dazzling views of distant stars and galaxies. Meanwhile, Webb, stationed 1 million miles away from Earth, is expected to reveal new insights about the universe. Here's how Webb will achieve unparalleled things:

Giant mirror: Webb's mirror, which captures light, is over 21 feet across. That's over two and a half times larger than the Hubble Space Telescope's mirror. Capturing more light allows Webb to see more distant, ancient objects. The telescope will peer at stars and galaxies that formed over 13 billion years ago, just a few hundred million years after the Big Bang.
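A back-of-envelope check on the mirror comparison above, assuming the commonly cited primary-mirror diameters (Hubble about 2.4 m, Webb about 6.5 m, roughly 21 feet); these diameters are assumptions, not figures from the article:

```python
# Rough comparison of light-gathering power; the diameters are commonly
# cited figures (an assumption here). Collecting area scales as diameter squared.
hubble_diameter_m = 2.4
webb_diameter_m = 6.5

diameter_ratio = webb_diameter_m / hubble_diameter_m
area_ratio = diameter_ratio ** 2  # more area -> more light -> fainter, older objects

print(f"Webb is {diameter_ratio:.1f}x wider and gathers {area_ratio:.1f}x more light")
# Webb is 2.7x wider and gathers 7.3x more light
```

The extra factor of roughly seven in collecting area, not just the wider diameter, is what lets Webb pick out fainter, more ancient objects.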

"We're going to see the very first stars and galaxies that ever formed," Jean Creighton, an astronomer and the director of the Manfred Olson Planetarium at the University of Wisconsin-Milwaukee, told Mashable last year.

Infrared view: Webb is primarily an infrared telescope, meaning it views light in the infrared spectrum. This allows us to see far more of the universe. Infrared has longer wavelengths than visible light, so the light waves more efficiently slip through cosmic clouds; the light doesn't as often collide with and get scattered by these densely-packed particles. Ultimately, Webb's infrared eyesight can penetrate places Hubble can't.

"It lifts the veil," said Creighton.

Peering into distant exoplanets: The Webb telescope carries specialized equipment, called spectrometers, that will revolutionize our understanding of these far-off worlds. The instruments can decipher what molecules (such as water, carbon dioxide, and methane) exist in the atmospheres of distant exoplanets, be they gas giants or smaller rocky worlds. Webb will look at exoplanets in the Milky Way galaxy. Who knows what we'll find?

"We might learn things we never thought about," Mercedes López-Morales, an exoplanet researcher and astrophysicist at the Center for Astrophysics-Harvard & Smithsonian, told Mashable in 2021.


Pros and Cons of Genetic Engineering – HRF

Posted: at 11:25 pm

Manipulation of genes in natural organisms, such as plants, animals, and even humans, is considered genetic engineering. This is done using a variety of techniques, such as molecular cloning. These processes can cause dramatic changes in the natural makeup and characteristics of an organism. There are benefits and risks associated with genetic engineering, just like most other scientific practices.

Genetic engineering offers benefits such as:

1. Better Flavor, Growth Rate and Nutrition
Crops like potatoes, soybeans and tomatoes are now sometimes genetically engineered in order to improve size, crop yield, and nutritional value. These genetically engineered crops also possess the ability to grow in lands that would normally not be suitable for cultivation.

2. Pest-Resistant Crops and Extended Shelf Life
Engineered seeds can resist pests and have a better chance of surviving harsh weather. Biotechnology can also be used to increase the shelf life of many foods.

3. Genetic Alteration to Supply New Foods
Genetic engineering can also be used to produce completely new substances, such as proteins or other nutrients, in food. This may increase the benefits these foods have for medical uses.

4. Modification of Human DNA
Genes responsible for unique and desirable qualities in human DNA can be identified and introduced into the genes of another person. This changes the structural elements of a person's DNA. The effects of this are not known.

The following are the issues that genetic engineering can trigger:

1. May Hamper Nutritional Value
Genetic engineering of food also includes the introduction of genes into root crops. These crops might supersede natural weeds and could be dangerous for natural plants. Unwanted genetic mutations could lead to increased allergenicity in a crop. Some people believe that this science can reduce the nutrients contained in crops even while their appearance and taste are enhanced.

2. May Introduce Risky Pathogens
Horizontal gene transfer could give rise to new pathogens. While engineering increases plants' immunity to disease, the resistance genes can be transferred to harmful pathogens.

3. May Result in Genetic Problems
Gene therapy in humans can lead to side effects. While relieving one problem, the treatment may cause the onset of another. Because a single cell influences many characteristics, isolating the cells responsible for a single trait is complicated.

4. Unfavorable to Genetic Diversity
Genetic engineering can reduce the diversity among individuals, and cloning might be unfavorable to individualism. Furthermore, such processes might not be affordable for the poor, putting gene therapy out of reach for the average person.

Genetic engineering might work excellently, but it is, after all, a process that manipulates nature, altering something that was not originally created by humans. What can you say about this?


Methods and Mechanisms for Genetic Manipulation of Plants, Animals, and …

Posted: at 11:25 pm

Techniques Other than Genetic Engineering

Simple Selection

The easiest method of plant genetic modification (see Operational Definitions in Chapter 1), used by our nomadic ancestors and continuing today, is simple selection. That is, a genetically heterogeneous population of plants is inspected, and superior individuals (plants with the most desired traits, such as improved palatability and yield) are selected for continued propagation. The others are eaten or discarded. The seeds from the superior plants are sown to produce a new generation of plants, all or most of which will carry and express the desired traits. Over a period of several years, these plants or their seeds are saved and replanted, which increases the population of superior plants and shifts the genetic population so that it is dominated by the superior genotype. This very old method of breeding has been enhanced with modern technology.

An example of modern methods of simple selection is marker-assisted selection, which uses molecular analysis to detect plants likely to express desired features, such as disease resistance to one or more specific pathogens in a population. Successfully applying marker-assisted selection allows a faster, more efficient mechanism for identifying candidate individuals that may have superior traits.

Superior traits are those considered beneficial to humans, as well as to domesticated animals that consume a plant-based diet; they are not necessarily beneficial to the plant in an ecological or evolutionary context. Often traits considered beneficial to breeders are detrimental to the plant from the standpoint of environmental fitness. For example, the reduction of unpalatable chemicals in a plant makes it more appealing to human consumers but may also attract more feeding by insects and other pests, making it less likely to survive in an unmanaged environment. As a result, cultivated crop varieties rarely establish populations in the wild when they escape from the farm. Conversely, some traits that enhance a plant's resistance to disease may also be harmful to humans.

Crossing occurs when a plant breeder takes pollen from one plant and brushes it onto the pistil of a sexually compatible plant, producing a hybrid that carries genes from both parents. When the hybrid progeny reaches flowering maturity, it also may be used as a parent.

Plant breeders usually want to combine the useful features of two plants. For example, they might add a disease-resistance gene from one plant to another that is high-yielding but disease-susceptible, while leaving behind any undesirable genetic traits of the disease-resistant plant, such as poor fertility and seed yield, susceptibility to insects or other diseases, or the production of antinutritional metabolites.

Because of the random nature of recombining genes and traits in crossed plants, breeders usually have to make hundreds or thousands of hybrid progeny to create and identify those few that possess useful features with a minimum of undesirable features. For example, the majority of progeny may show the desired disease resistance, but unwanted genetic features of the disease-resistant parent may also be present in some. Crossing is still the mainstay of modern plant breeding, but many other techniques have been added to the breeders' tool kit.

Interspecies crossing can take place through various means. Closely related species, such as cultivated oat (Avena sativa) and its weedy relative wild oat (Avena fatua), may cross-pollinate for exchange of genetic information, although this is not generally the case. Genes from one species also can naturally integrate into the genomes of more distant relatives under certain conditions. Some food plants can carry genes that originate in different species, transferred both by nature and by human intervention. For example, common wheat varieties carry genes from rye. A common potato, Solanum tuberosum, can cross with relatives of other species, such as S. acaule (Kozukue et al., 1999) or S. chacoense (Sanford et al., 1998; Zimnoch-Guzowska et al., 2000).

Chromosome engineering is the term given to cytogenetic manipulations that do not involve recombinant DNA (rDNA), in which portions of chromosomes from near or distant species are recombined through a natural process called chromosomal translocation. Sears (1956, 1981) pioneered the human exploitation of this process, which proved valuable for transferring traits that were otherwise unattainable, such as pest or disease resistance, into crop species. However, because transferring large segments of chromosomes also transferred a number of neutral or detrimental genes, the utility of this technique was limited.

Recent refinements allow plant breeders to restrict the transferred genetic material, focusing more on the gene of interest (Lukaszewski, 2004). As a result, chromosome engineering is becoming more competitive with rDNA technology in its ability to transfer relatively small pieces of DNA. Several crop species, such as corn, soybean, rice, barley, and potato, have been improved using chromosome engineering (Gupta and Tsuchiya, 1991).

Sometimes human technical intervention is required to complete an interspecies gene transfer. Some plants will cross-pollinate and the resulting fertilized hybrid embryo develops but is unable to mature and sprout. Modern plant breeders work around this problem by pollinating naturally and then removing the plant embryo before it stops growing, placing it in a tissue-culture environment where it can complete its development. Such embryo rescue is not considered genetic engineering, and it is not commonly used to derive new varieties directly, but it is used instead as an intermediary step in transferring genes from distant, sexually incompatible relatives through intermediate, partially compatible relatives of both the donor and recipient species.

Recent advances in tissue-culture technologies have provided new opportunities for recombining genes from different plant sources. In somatic hybridization, a process also known as cell fusion, cells growing in a culture medium are stripped of their protective walls, usually using pectinase, cellulase, and hemicellulase enzymes. These stripped cells, called protoplasts, are pooled from different sources and, through the use of varied techniques such as electrical shock, are fused with one another.

When two protoplasts fuse, the resulting somatic hybrid contains the genetic material from both plant sources. This method overcomes physical barriers to pollen-based hybridization, but not basic chromosomal incompatibilities. If the somatic hybrid is compatible and healthy, it may grow a new cell wall, begin mitotic divisions, and ultimately grow into a hybrid plant that carries genetic features of both parents. While protoplast fusions are easily accomplished, as almost all plants (and animals) have cells suitable for this process, relatively few are capable of regenerating a whole organism, and fewer still are capable of sexual reproduction. This non-genetic engineering technique is not common in plant breeding as the resulting range of successful, fertile hybrids has not extended much beyond what is possible using other conventional technologies.

Somaclonal variation is the name given to spontaneous mutations that occur when plant cells are grown in vitro. For many years plants regenerated from tissue culture sometimes had novel features. It was not until the 1980s that two Australian scientists thought this phenomenon might provide a new source of genetic variability, and that some of the variant plants might carry attributes of value to plant breeders (Larkin and Scowcroft, 1981).

Through the 1980s plant breeders around the world grew plants in vitro and scored regenerants for potentially valuable variants in a range of different crops. New varieties of several crops, such as flax, were developed and commercially released (Rowland et al., 2002). Molecular analyses of these new varieties were not required by regulators at that time, nor were they conducted by developers to ascertain the nature of the underlying genetic changes driving the variant features. Somaclonal variation is still used by some breeders, particularly in developing countries, but this non-genetic engineering technique has largely been supplanted by more predictable genetic engineering technologies.

Mutation breeding involves exposing plants or seeds to mutagenic agents (e.g., ionizing radiation) or chemical mutagens (e.g., ethyl methanesulfonate) to induce random changes in the DNA sequence. The breeder can adjust the dose of the mutagen so that it is enough to result in some mutations, but not enough to be lethal. Typically a large number of plants or seeds are mutagenized, grown to reproductive maturity, and progeny are derived. The progeny are assessed for phenotypic expression of potentially valuable new traits.

As with somaclonal variation, the vast majority of mutations resulting from this technique are deleterious, and only chance determines if any genetic changes useful to humans will appear. Other than through varying the dosage, there is no means to control the effects of the mutagen or to target particular genes or traits. The mutagenic effects appear to be random throughout the genome and, even if a useful mutation occurs in a particular plant, deleterious mutations also will likely occur. Once a useful mutation is identified, breeders work to reduce the deleterious mutations or other undesirable features of the mutated plant. Nevertheless, crops derived from mutation breeding still are likely to carry DNA alterations beyond the specific mutation that provided the superior trait.

Induced-mutation crops in most countries (including the United States) are not regulated for food or environmental safety, and breeders generally do not conduct molecular genetic analyses on such crops to characterize the mutations or determine their extent. Consequently, it is almost certain that mutations other than those resulting in identified useful traits also occur and may not be obvious, remaining uncharacterized with unknown effects.

Worldwide, more than 2,300 different crop varieties have been developed using induced mutagenesis (FAO/IAEA, 2001), and about half of these have been developed during the past 15 years. In the United States, crop varieties ranging from wheat to grapefruit have been mutated since the technique was first used in the 1920s. There are no records of the molecular characterizations of these mutant crops and, in most cases, no records to retrace their subsequent use.

Several commercial crop varieties have been developed using cell selection, including varieties of soybeans (Sebastian and Chaleff, 1987), canola (Swanson et al., 1988), and flax (Rowland et al., 1989). This process involves isolating a population of cells from a so-called elite plant with superior agricultural characteristics. The cells are then excised and grown in culture. Initially the population is genetically homogeneous, but changes can occur spontaneously (as in somaclonal variation) or be induced using mutagenic agents. Cells with a desired phenotypic variation may be selected and regenerated into a whole plant. For example, adding a suitable amount of the appropriate herbicide to the culture medium may identify cells expressing a novel variant phenotype of herbicide resistance. In theory, all of the normal, susceptible cells will succumb to the herbicide, but a newly resistant cell will survive and perhaps even continue to grow. An herbicide-resistant cell and its derived progeny cell line thus can be selected and regenerated into a whole plant, which is then tested to ensure that the phenotypic trait is stable and results from a heritable genetic alteration. In practice, many factors influence the success of the selection procedure, and the desired trait must have a biochemical basis that lends itself to selection in vitro and at a cellular level.

Breeders cannot select for increased yield in cell cultures because the cellular mechanism for this trait is not known. The advantage of cell selection over conventional breeding is the ability to inexpensively screen large numbers of cells in a petri dish in a short time instead of breeding a similar number of plants in an expensive, large field trial conducted over an entire growing season.

Like somaclonal variation, cell selection has largely been superseded by recombinant technologies because of their greater precision, higher rates of success, and fewer undocumented mutations.

As noted in Chapter 1, this report defines genetic engineering specifically as one type of genetic modification that involves an intended targeted change in a plant or animal gene sequence to effect a specific result through the use of rDNA technology. A variety of genetic engineering techniques are described in the following text.

Agrobacterium tumefaciens is a naturally occurring soil microbe best known for causing crown gall disease on susceptible plant species. It is an unusual pathogen because when it infects a host, it transfers a portion of its own DNA into the plant cell. The transferred DNA is stably integrated into the plant DNA, and the plant then reads and expresses the transferred genes as if they were its own. The transferred genes direct the production of several substances that mediate the development of a crown gall.

Among these substances is one or more unusual nonprotein amino acids, called opines. Opines are translocated throughout the plant, so food developed from crown gall-infected plants will carry these opines. In the early 1980s strains of Agrobacterium were developed that lacked the disease-causing genes but maintained the ability to attach to susceptible plant cells and transfer DNA.

By substituting the DNA of interest for the crown gall disease-causing DNA, scientists derived new strains of Agrobacterium that deliver and stably integrate specific new genetic material into the cells of target plant species. If the transformed cell then is regenerated into a whole fertile plant, all cells in the progeny also carry and may express the inserted genes. Agrobacterium is a naturally occurring genetic engineering agent and is responsible for the majority of GE plants in commercial production.

Klein and colleagues (1987) discovered that naked DNA could be delivered to plant cells by shooting them with microscopic pellets to which DNA had been adhered. This is a crude but effective physical method of DNA delivery, especially in species such as corn, rice, and other cereal grains, which Agrobacterium does not naturally transform. Many GE plants in commercial production were initially transformed using microprojectile delivery.

In electroporation, plant protoplasts take up macromolecules from their surrounding fluid, facilitated by an electrical impulse. Cells growing in a culture medium are stripped of their protective walls, resulting in protoplasts. Supplying known DNA to the protoplast culture medium and then applying the electrical pulse temporarily destabilizes the cell membrane, allowing the DNA to enter the cell. Transformed cells can then regenerate their cell walls and grow to whole, fertile transgenic plants. Electroporation is limited by the poor efficiency of most plant species to regenerate from protoplasts.

DNA can be injected directly into anchored cells. Some proportion of these cells will survive and integrate the injected DNA. However, the process is labor intensive and inefficient compared with other methods.

The genes of most plant and some animal (e.g., insects and fish) species carry transposons, which are short, naturally occurring pieces of DNA with the ability to move from one location to another in the genome. Barbara McClintock first described such transposable elements in corn plants during the 1950s (Cold Spring Harbor Laboratory, 1951). Transposons have been investigated extensively in research laboratories, especially to study mutagenesis and the mechanics of DNA recombination. However, they have not yet been harnessed to deliver novel genetic information to improve commercial crops.

Genetic features can be added to plants and animals without inserting them into the recipient organism's native genome. DNA of interest may be delivered to a plant cell, expressing a new protein (and thereby a new trait) without becoming integrated into the host-cell DNA. For example, virus strains may be modified to carry genetic material into a plant cell, replicate, and thrive without integrating into the host genome. Without integration, however, new genetic material may be lost during meiosis, so that seed progeny may not carry or express the new trait.

Many food plants are perennials or are propagated by vegetative means, such as grafting or from cuttings. In these cases the virus and new genes would be maintained in subsequent, nonsexually generated populations. Technically such plants are not products of rDNA because there is no recombination or insertion of introduced DNA into the host genome. Although these plants are not GE, they do carry new DNA and new traits. No such products are known to be currently on the market in the United States or elsewhere. (See McHughen [2000] for further information on genetic mechanisms used in plant improvement.)


Quantum physics is on the cusp of an astonishing revolution in low-energy technology Professor Brian Gerardot – The Scotsman

Posted: at 11:25 pm

Heterostructures are different layers of atoms stacked on top of each other to form a single structure. They were first proposed in 1959 by the physicist Richard Feynman, who famously asked: "What would the properties of materials be if we could arrange atoms just the way we want them?"

Over the following decades, researchers developed the ability to engineer the arrangement of atoms through which particles such as electrons (particles of charge) or photons (particles of light) travel.

This allowed scientists to probe, understand, and eventually control the quantum mechanical properties of the particles (the behaviour of matter and light), creating a toolkit for the technological development of electronics and photonics.

Today, heterostructures are everywhere; they enable technologies such as transistors in computers, solar cells, LED lighting, and lasers. Even the internet would not be possible without use of heterostructures.

Until now, our use of heterostructures has been limited to taking advantage of isolated, individual particles, where their interactions are negligible.

However, if scientists could understand and take control of the interactions between particles within heterostructures, unimagined new technologies would become possible.

Like dancers in a ballet, interacting particles can coordinate their movements in surprising ways. Strongly interacting electrons can: dance together in their place to generate strong magnets; completely stop their journey through a crystal as if frozen to create insulators; or pair up to zoom through a crystal without any resistance to create a superconductor.

Unfortunately, the precise steps in the choreography of interacting particles are tricky to control, and in many cases not even well understood, which prevents their implementation in technologies.

However, an unexpected recent discovery has renewed optimism that this difficult problem can now be tackled.

If two sheets of carbon atoms, called graphene, are placed on top of each other with a relative twist of precisely 1.1 degrees (the so-called magic angle), an abundance of correlated electron states miraculously appears.

Graphene, the wonder material found in graphite pencil lead, is completely non-magnetic and does not host strongly correlated states. However, when two layers are stacked at the magic angle, it can be switched from insulating to magnetic to superconducting with the use of a tiny battery.

The discovery of these astonishing features is now driving a revolution in our ability to produce, study, and take advantage of heterostructures.

Through these ventures into strongly correlated quantum materials, a whole new generation of low-energy technologies and tools, beyond anything we can currently imagine, becomes ever more likely.

Brian Gerardot is professor at the Institute of Photonics and Quantum Science at Heriot-Watt University, a current chair in emerging technologies at the Royal Academy of Engineering, and a fellow of the Royal Society of Edinburgh. This article expresses his own views. The RSE is Scotland's national academy, bringing great minds together to contribute to the social, cultural and economic well-being of Scotland. Find out more at rse.org.uk and @RoyalSocEd.


Protons Contain a Particle That’s Heavier Than the Proton Itself – Popular Mechanics

Posted: at 11:25 pm

Protons are particles that exist in the nucleus of all atoms, with their number defining the elements themselves. Protons, however, are not fundamental particles. Rather, they are composite particles made up of smaller subatomic particles, namely two up quarks and one down quark bound together by force-carrying particles (bosons) called gluons.

This structure isn't certain, however, and quantum physics suggests that, along with these three quarks, other particles should be popping into and out of existence at all times, affecting the mass of the proton. This includes other quarks and even quark-antiquark pairs.

Indeed, the deeper scientists have probed the structure of the proton with high-energy particle collisions, the more complicated the situation has become. As a result, for around four decades, physicists have speculated that protons may host a heavier form of quark than up and down quarks called intrinsic charm quarks, but confirmation of this has been elusive.

Now, by exploiting a high-precision determination of the quark-gluon content of the proton and by examining 35 years' worth of particle physics data, researchers have discovered evidence that the proton does contain intrinsic charm quarks.

What makes this result more extraordinary is that this flavor of quark is one-and-a-half times more massive than the proton itself. Yet when it is a component of the proton, the charm quark still only accounts for around half of the composite particle's mass.

This counter-intuitive setup is a consequence of the weirdness of quantum mechanics, the physics that governs the subatomic world. This requires thinking of the structure of a particle and what can be found within it as probabilistic in nature.

"There are six kinds of quarks in nature; three are lighter than the proton [up, down, and strange quarks] and three are heavier [charm, bottom, and top quarks]," Stefano Forte, NNPDF Collaboration team leader and professor of theoretical physics at Milan University, tells the Nature Briefing podcast. "One would think that only the lighter quarks are inside the proton, but actually, the laws of quantum physics allow also for the heavier quarks to be inside the proton."

Forte, the lead author of a paper published earlier this month in the journal Nature describing the research, and his team set out to discover if the lightest of these heavier quarks, the charm quark, is present in the proton.

When the Large Hadron Collider (LHC) and other particle accelerators smash protons against each other (and other particles, like electrons) at high energies, what emerges is a shower of particles. This can be used to reconstruct the composition of the original particle and the particles that comprised it, collectively known as partons.

Each of these partons carries away a portion of the overall momentum of the system (its momentum distribution), with this share of momentum known as the momentum fraction.

Forte and colleagues fed 35 years of data from particle accelerators, including the world's largest and most powerful machine of this kind, the LHC, to a computer algorithm that pieces proton structure back together by looking for a best fit for its structure at high energies. From here, the team calculated the structure for the proton when it is at rest.

This resulted in the first evidence that protons do indeed sometimes have charm quarks. These are labeled "intrinsic" because they are part of the proton for a long time and are still present when the proton is at rest, meaning they don't merely emerge from a high-energy interaction with another particle.

"You have a chance, which is small but not negligible, of finding a charm quark in the proton, and when you do find one, it so happens that that charm quark is typically carrying about half of the proton mass," Forte says on the podcast. "This is quantum physics, so everything is probabilistic."

Romona Vogt is a high-energy physicist at Lawrence Livermore National Laboratory (LLNL) in California, who wrote a News and Views piece for Nature to accompany the new research paper.

She explains to Popular Mechanics how charm quarks could be connected to proton structure and how the intrinsic charm quark scenario differs from the standard scenario, which sees protons composed of just two up quarks and one down quark joined by gluons.

"Charm quarks come in quark-antiquark pairs in both the standard scenario and the intrinsic charm one," Vogt says. In the standard scenario, a gluon radiates this pairing during a high-energy interaction. Because of the charm quark's mass, it is too heavy to be part of the sea of light up, down, and strange quarks.

This means the charm quark doesn't play a large role when physicists calculate the standard parton momentum distribution functions until the momentum reaches a threshold above the charm quark's mass.

"That's very different from the intrinsic charm scenario, where the charm distribution carries a large fraction of the proton momentum," Vogt adds. That is because in the intrinsic charm quark scenario, the quark-antiquark pair is attached to more than one of the up and down quarks in the proton that they travel with. That's why the charm quarks appear at large momentum fractions.

In this scenario, the proton is more or less empty, or has a small-size configuration, because in the minimal model of intrinsic charm the proton is just up, up, and down quarks plus charm quark pairs, with no other quarks at low momentum fractions.

Vogt suggests that the NNPDF Collaboration's results could lead other researchers to ask if other quarks could play a role in the composition of protons.

One question these findings might raise is whether or not there are other intrinsic quark scenarios, like intrinsic bottom and intrinsic strangeness, she says.

See original here:

Protons Contain a Particle That's Heavier Than the Proton Itself - Popular Mechanics


3 research universities to collaborate with industry, government to develop quantum technologies: News at IU: Indiana University – IU Newsroom

Posted: at 11:24 pm

BLOOMINGTON, Ind. -- Quantum science and engineering can save energy, speed up computation, enhance national security and defense, and drive innovation in health care. With a grant from the National Science Foundation, researchers from Indiana University (both Bloomington and IUPUI campuses), Purdue University and the University of Notre Dame will develop industry- and government-relevant quantum technologies as part of the Center for Quantum Technologies. Purdue will serve as the lead site.

"The Center for Quantum Technologies is based on the collaboration between world experts whose collective mission is to deliver frontier research addressing the quantum technological challenges facing industry and government agencies," said Gerardo Ortiz, Indiana University site director, scientific director of the IU Quantum Science and Engineering Center and professor of physics. "It represents a unique opportunity for the state of Indiana to become a national and international leader in technologies that can shape our future."

"This newly formed center is unique in many aspects," said Ricardo Decca, professor and chair of the Department of Physics at IUPUI. "It brings together experts in many scientific disciplines -- computer science, physics, chemistry, materials science -- from three universities and four campuses and companies developing the next generation of quantum-based information and sensing systems. The future seems very bright."

Given the wide applicability of quantum technologies, the new Center for Quantum Technologies will team with member organizations from a variety of industries, including computing, defense, chemical, pharmaceutical, manufacturing and materials. The center's researchers will develop foundational knowledge into industry-friendly quantum devices, systems and algorithms with enhanced functionality and performance.

"Over the coming decades, quantum science will revolutionize technologies ranging from the design of drugs, materials and energy harvesting systems, to computing, data security, and supply chain logistics," IU Vice President for Research Fred Cate said. "Through the CQT, Indiana will be at the forefront of transferring new quantum algorithms and technologies to industry. We are also looking forward to educating the quantum workforce for the future through the corporate partnerships that are integral to the funding model of the CQT."

Committed industry and government partners include Accenture, the Air Force Research Laboratory, BASF, Cummins, D-Wave, Eli Lilly, Entanglement Inc., General Atomics, Hewlett Packard Enterprise, IBM Quantum, Intel, Northrop Grumman, NSWC Crane, Quantum Computing Inc., Qrypt and Skywater Technology.

Additionally, the Center for Quantum Technologies will train future quantum scientists and engineers to fill the need for a robust quantum workforce. Students engaged with the center will take on many of the responsibilities of principal investigators, including drafting proposals, presenting research updates to members, and planning meetings and workshops.

The center is funded for an initial five years through the NSF's Industry-University Cooperative Research Centers program, which generates breakthrough research by enabling close and sustained engagement between industry innovators, world-class academic teams and government agencies. The IUCRC program is unique in that members fund and guide the direction of research through active involvement and mentoring.

Other academic collaborators include Sabre Kais, center director and distinguished professor of chemical physics at Purdue; Peter Kogge, the University of Notre Dame site director and the Ted H. McCourtney Professor of Computer Science and Engineering; and David Stewart, Center for Quantum Technologies industry liaison officer and managing director of the Purdue Quantum Science and Engineering Institute.

More here:

3 research universities to collaborate with industry, government to develop quantum technologies: News at IU: Indiana University - IU Newsroom


The big difference between physics and mathematics – Big Think

Posted: at 11:24 pm

To an outsider, physics and mathematics might appear to be almost identical disciplines. Particularly at the frontiers of theoretical physics, where a very deep knowledge of extraordinarily advanced mathematics is required to grasp even cutting-edge physics from a century ago (curved four-dimensional spacetimes and probabilistic wavefunctions among them), it's clear that predictive mathematical models are at the core of science. Since physics is at the fundamental core of the entire scientific endeavor, it's very clear that there's a close relationship between mathematics and all of science.

Yes, mathematics has been incredibly successful at describing the Universe that we inhabit. And yes, many mathematical advances have led to the exploration of new physical possibilities that have relied on those very advances to provide a mathematical foundation. But there's an extraordinary difference between physics and mathematics that one of the simplest questions we can ask will illustrate: what is the square root of 4?

I bet you think you know the answer, and in all honesty, you probably do: it's 2, right?

I can't blame you for that answer, and it's not exactly wrong. But there's much more to the story, as you're about to find out.

A ball in mid-bounce has its past and future trajectories determined by the laws of physics, but time will only flow into the future for us. While Newtons laws of motion are the same whether you run the clock forward or backward in time, not all of the rules of physics behave identically if you run the clock forward or backward, indicating a violation of time-reversal (T) symmetry where it occurs.

Take a look at the above time-lapse image of a bouncing ball. One look at this tells you a simple, straightforward story.


This is, quite reasonably, the story you'd tell yourself of what's going on.

But why, may I ask, would you tell yourself that story rather than the opposite: that the ball begins on the right side, moving leftward, and that it gains energy, height, and speed after each successive bounce on the floor?

In Newtonian (or Einsteinian) mechanics, a system will evolve over time according to completely deterministic equations, which should mean that if you can know the initial conditions (like positions and momenta) for everything in your system, you should be able to evolve it, with no errors, arbitrarily forward in time. In practice, due to the inability to know the initial conditions to truly arbitrary precisions, this is not true.

The only answer you'd likely be able to give, and you may find it dissatisfying even as you give it, is your experience with the actual world. Basketballs, when they bounce, lose a percentage of their initial (kinetic) energy upon striking the floor; you'd have to have a specially prepared system designed to kick the ball to higher (kinetic) energies to successfully engineer the alternate possibility. It's your knowledge of physical reality, and your assumption that what you're observing is aligned with your experiences, that lead you to that conclusion.

Similarly, look at the diagram, above, that shows three stars all orbiting around a central mass: a supermassive black hole. If this were a movie, instead of a diagram, you could imagine that all three stars are moving clockwise, that two move clockwise while one moves counterclockwise, that one moves clockwise and two move counterclockwise, or that all three move counterclockwise.

But now, ask yourself this: how would you know whether the movie were running forward in time or backward in time? In the case of gravity, just as in the case of electromagnetism or the strong nuclear force, you'd have no way of knowing. For these forces, the laws of physics are time symmetric: the same forward in time as they are backward in time.

Individual protons and neutrons may be colorless entities, but the quarks within them are colored. Gluons can not only be exchanged between the individual quarks within a proton or neutron, but in combinations between protons and neutrons, leading to nuclear binding. However, every single exchange must obey the full suite of quantum rules, and these strong-force interactions are time-reversal symmetric: you cannot tell whether the animated movie here is shown moving forward or backward in time.

Time is an interesting consideration in physics, because while the mathematics offers a set of possible solutions for how a system will evolve, the physical constraint that we have (time possesses an arrow, and always progresses forward, never backward) ensures that only one solution describes our physical reality: the solution that evolves the system forward in time. Similarly, if we ask the opposite question of "What was the system doing in the lead-up until the present moment?" the same constraint, that time only moves forward, enables us to choose the mathematical solution that describes how the system was behaving at some prior time.

Consider what this means, then: even given the laws that describe a system, and the conditions that the system possesses at any particular moment, the mathematics is capable of offering multiple different solutions to any problem that we can pose. If we look at a runner, and ask, "When will the runner's left foot strike the ground?" we're going to find multiple mathematical solutions, corresponding to the many times their left foot struck the ground in the past, as well as many times their left foot will strike the ground in the future. Mathematics gives you the set of possible solutions, but it doesn't tell you which one is the right one.

Having your camera anticipate the motion of objects through time is just one practical application of the idea of time-as-a-dimension. For any set of conditions that will be recorded throughout time, it's plausible to predict when a certain set of conditions will arise, and find multiple possible solutions in the past and future.

But physics does. Physics can allow you to find the correct, physically relevant solution, whereas mathematics can only give you the set of possible outcomes. When you find a ball in mid-flight and want to know its trajectory, you have to turn to the mathematical formulation of the physical laws that govern the system to determine what happens next.

You write down the set of equations that describe the ball's motion, you manipulate and solve them, and then you plug in the specific values that describe the conditions of your particular system. When you work the mathematics that describe that system to its logical conclusion, that exercise will give you (at least) two possible solutions as to precisely when-and-where it will hit the ground in the future.

One of those solutions does, indeed, correspond to the solution you're looking for. It will tell you, at a particular point in the future, when the projectile will first strike the ground, and what its positions will be in all three spatial dimensions when that occurs.

But there will be another solution that corresponds to a negative time: a time in the past where the projectile would also have struck the ground. (You can also find the 3D spatial position of where that projectile would be at that time, if you like.) Both solutions have equal mathematical validity, but only one is physically relevant.
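The two solutions described above fall straight out of the quadratic formula. A minimal sketch, using assumed illustrative launch values (the height `h0` and upward speed `v0` are hypothetical, not from the article) and ignoring air resistance:

```python
import math

# A ball launched from height h0 with upward speed v0 follows
# h(t) = h0 + v0*t - 0.5*g*t^2. Setting h(t) = 0 ("when does it hit
# the ground?") gives a quadratic with TWO mathematically valid roots.
g = 9.81           # gravitational acceleration, m/s^2
h0, v0 = 2.0, 5.0  # assumed initial height (m) and upward speed (m/s)

# Quadratic formula applied to 0.5*g*t^2 - v0*t - h0 = 0
disc = math.sqrt(v0**2 + 2 * g * h0)
t_future = (v0 + disc) / g  # positive root: the physically relevant impact
t_past = (v0 - disc) / g    # negative root: a "past" impact, math-only

print(f"mathematical solutions: t = {t_past:.3f} s and t = {t_future:.3f} s")
print(f"physically relevant answer: t = {t_future:.3f} s")
```

Both roots satisfy the equation equally well; discarding the negative one is a physical choice, not a mathematical one.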

This image shows the parabolic trail left by a rocket after launch. If you would simply calculate the trajectory of this object, assuming no further engine firings after launch, you'd get multiple solutions for where/when it would land. One solution is correct, corresponding to the future; the other solution is mathematically correct but physically incorrect, corresponding to a time in the past.

That's not a deficiency in mathematics; that's a feature of physics, and of science in general. Mathematics tells you the set of possible outcomes. But the scientific fact that we live in a physical reality (and in that reality, wherever and whenever we make a measurement, we observe only one outcome) teaches us that there are additional constraints beyond what mere mathematics provides. Mathematics tells you what outcomes are possible; physics (and science in general) is what you use to pick out which outcome is (or was, or will be) relevant for the specific problem you're trying to address.

In biology, we can know the genetic makeup of two parent organisms, and can predict the probability with which their offspring will inherit a certain combination of genes. But if these two organisms combine their genetic material to actually make an offspring organism, only one set of combinations will be realized. Furthermore, the only way to determine which genes actually were inherited by the child of the two parents would be to make the critical observations and measurements: you have to gather the data and determine the outcome. Despite the myriad of mathematical possibilities, only one outcome actually occurs.

An Irish immigrant (center) waiting next to an Italian immigrant and her children at Ellis Island, circa 1920. The woman's children each possess 50% of her DNA, but specifically which 50% is present in each child's genetic makeup varies not only from child to child, but must be observed and measured, explicitly, to correctly determine which of all the possible outcomes actually occurred.

The more complicated your system, the more difficult it becomes to predict the outcome. For a room filled with large numbers of molecules, asking "What fate will befall any one of these molecules?" becomes a practically impossible task, as the number of possible outcomes after only a small amount of time passes becomes greater than the number of atoms in the entire Universe.

Some systems are inherently chaotic, where minuscule, practically immeasurable differences in the initial conditions of a system lead to vastly different potential outcomes.
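This sensitivity is easy to demonstrate with the logistic map, a standard toy chaotic system (chosen here for illustration; it is not one the article discusses). Perturbing the starting value by just one part in a billion eventually produces a completely different orbit:

```python
# Logistic map x -> r*x*(1 - x) in its chaotic regime (r = 4).
def logistic_orbit(x0, steps, r=4.0):
    """Iterate the logistic map 'steps' times from x0, returning the orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.200000000, 50)
b = logistic_orbit(0.200000001, 50)  # perturbed by one part in a billion

# Early on the orbits track each other; within a few dozen iterations
# they diverge to an order-unity separation.
print("step 5 difference: ", abs(a[5] - b[5]))
print("largest difference:", max(abs(x - y) for x, y in zip(a, b)))
```

The practically immeasurable initial difference is amplified roughly exponentially with each iteration, which is why long-range prediction fails even though every step is perfectly deterministic.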

Other systems are inherently indeterminate until they're measured, which is one of the most counterintuitive aspects of quantum mechanics. Sometimes, the act of performing a measurement to literally determine the quantum state of your system winds up changing the state of the system itself.

In all of these cases, mathematics offers a set of possible outcomes whose probabilities can be determined and calculated in advance, but only by performing the critical measurement can you actually determine which one outcome has actually occurred.

Trajectories of a particle in a box (also called an infinite square well) in classical mechanics (A) and quantum mechanics (B-F). In (A), the particle moves at constant velocity, bouncing back and forth. In (B-F), wavefunction solutions to the time-dependent Schrödinger equation are shown for the same geometry and potential. The horizontal axis is position, the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. These stationary (B, C, D) and non-stationary (E, F) states only yield probabilities for the particle, rather than definitive answers for where it will be at a particular time.

This takes us all the way back to the initial question: what is the square root of 4?

Chances are, you read that question, and the number 2 immediately popped into your head. But that's not the only possible answer; it could have been -2 just as easily. After all, (-2)² equals 4 just as surely as 2² equals 4; they're both admissible solutions.

If I had gone further and asked, "What is the fourth root (the square root of the square root) of 16?" you could have then gone and given me four possible solutions. Each of the following numbers, 2, -2, 2i, and -2i, when raised to the fourth power, will yield the number 16 as the mathematical answer.
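The four fourth roots of 16 (2, -2, 2i, and -2i) can be verified in a few lines of standard-library Python, writing them as 2·e^(ikπ/2) for k = 0, 1, 2, 3:

```python
import cmath

# The four complex fourth roots of 16, i.e. the solutions of x**4 == 16:
# 2 * e^{i*k*pi/2} for k = 0..3 gives 2, 2i, -2, and -2i.
roots = [2 * cmath.exp(1j * k * cmath.pi / 2) for k in range(4)]

for r in roots:
    # Raising each root to the fourth power recovers 16
    # (up to floating-point rounding in the imaginary part).
    assert cmath.isclose(r ** 4, 16, abs_tol=1e-9)
    print(r)
```

All four pass the check with equal mathematical validity; nothing in the arithmetic singles one out.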

This graph shows the curve y² = x. Note that there are two possible solutions on the y-axis for every positive value of x. Two of those solutions correspond to x = 4: y = 2 and y = -2. Both solutions are, mathematically, equally valid. But there's only one physical Universe that we inhabit, and each physical problem must be considered individually to determine which of these solutions is physically relevant.

But in the context of a physical problem, there will only be one of these many possible solutions that actually reflects the reality we inhabit. The only way to determine which one is correct is either to go out and measure reality and pick out the physically relevant solution, or to know enough about your system and apply the relevant physical conditions so that you're not simply calculating the mathematical possibilities, but that you're capable of choosing the physically relevant solution and rejecting the non-physical ones.

Sometimes, that means we have multiple admissible solutions at once that are all plausible for explaining an observed phenomenon. Only by obtaining more and better data, data that rules out certain possibilities while remaining consistent with others, can we determine which of the possible solutions actually remain viable. This approach, inherent to the process of doing science, helps us make successively better and better approximations to our inhabited reality, allowing us to tease out what is true about our Universe amidst the possibilities of what could have been true in the absence of that critical data.

NASA's Curiosity Mars Rover detected fluctuations in the methane concentration of Mars's atmosphere seasonally and at specific locations on the surface. This can be explained via either geochemical or biological processes; the evidence is not sufficient to decide at present. However, future missions, such as Mars Sample Return, may enable us to determine whether fossilized, dormant, or active life exists on Mars. Right now, we can only narrow down the physical possibilities; more information is required to determine which pathway accurately reflects our physical reality.

The biggest difference between physics and mathematics is simply that mathematics is a framework that, when applied wisely, can accurately describe certain properties about a physical system in a self-consistent fashion. However, mathematics is limited in what it can achieve: it can only give you a set of possible outcomes (sometimes weighted by probability, and sometimes not weighted at all) for what could occur or could have occurred in reality.

Physics is much more than mathematics, however, as no matter when we look at the Universe or how we look at it, there will be only one observed outcome that has actually occurred. Mathematics shows us the full set of all possible outcomes, but it's the application of physical constraints that allows us to actually determine what is true, real, or what actual outcomes have occurred in our reality.

If you can remember that the square root of 4 isn't always 2, but is sometimes -2 instead, you can remember the difference between physics and mathematics. The latter can tell you all the possible outcomes that could occur, but what elevates something to the realm of science, rather than pure mathematics, is its connection to our physical reality. The answer to the square root of 4 will always be either 2 or -2, and the other solution will be rejected by a means that mathematics alone can never fully determine: on physical grounds alone.

Read more from the original source:

The big difference between physics and mathematics - Big Think


NSF grant brings state-of-the-art materials research equipment to the UAB Department of Physics – University of Alabama at Birmingham

Posted: at 11:24 pm

The grant will enhance research capabilities at UAB by facilitating acquisition of a Physical Properties Measurement System.

The National Science Foundation has awarded a Major Research Instrumentation grant of $419,614 to Wenli Bi, Ph.D., assistant professor in the Department of Physics in the University of Alabama at Birmingham College of Arts and Sciences.

The grant, led by Bi, is titled "MRI: Acquisition of a Quantum Design Physical Properties Measurement System for Materials Research and Education."

The MRI grant supports the acquisition of a Physical Properties Measurement System from Quantum Design, which is a state-of-the-art, highly automated and multifunctional system capable of measuring a multitude of material properties at cryogenic temperature, high magnetic field and high pressure.

"The PPMS will greatly expand the materials research capability at UAB Physics by directly benefiting seven research groups in our department," Bi said. "It will enable integration of all three extreme sample environments: high pressure up to 1 million atmospheres, low temperature down to 1.9 K and high magnetic field up to 9 Tesla for materials research."

Bi believes the grant will open new avenues for intra- and interdepartmental collaborations with the acquisition of PPMS. Additionally, she hopes it will foster education of diverse graduate and undergraduate students, and train high school students and STEM teachers through research, education and outreach activities. It will also help develop new hands-on modules for lab courses at UAB to demonstrate basic physics principles.


The grant will develop and use exhibits on advanced characterization of materials properties and applications for outreach at McWane Science Center to promote science literacy in the general public and inspire K-12 students to pursue STEM areas.

"The addition of this new research capability will foster new collaborations with other diverse physics and materials research groups in Central Alabama," Bi said.

UAB faculty Yogesh Vohra, Ph.D., professor, university scholar and associate dean in the Department of Physics; Mary Ellen Zvanut, Ph.D., professor and graduate program director; Sergey Mirov, Ph.D., university professor; and Renato Camata, Ph.D., associate professor and undergraduate program director, are the co-principal investigators of this project.

Read the original here:

NSF grant brings state-of-the-art materials research equipment to the UAB Department of Physics - University of Alabama at Birmingham


Evansville’s ties to the first detonation of the A-bomb in 1945 – Courier & Press

Posted: at 11:24 pm

It's not hyperbole to suggest that there are two worlds: one before and one after the detonation of the atomic bomb.

Interestingly, there are two Southern Indiana connections to J. Robert Oppenheimer, leader of the Manhattan Project.

Joseph Fabian Mattingly, the uncle of Evansville baseball legend Don, was present July 16, 1945, as "the gadget" was successfully tested in Alamogordo, New Mexico. The U.S. dropped the A-bomb on Hiroshima on Aug. 6 and Nagasaki on Aug. 9, and Japan surrendered shortly thereafter, ending World War II.

"It was very bright," Joseph Fabian Mattingly told the Evansville Courier in 1995. "When it lit up the sky, the colors were beautiful: violet and purple. It was a pretty sight. We were on a mountainside about 17 miles out.

"It was bright as hell, and it was quiet. Eerie. There was no sound for a minute and a half. Then, whoom! A thunderous reverberation from the mountains occurred again and again. The light was like looking at the sun. There was a cloud layer about 17,000 feet and it looked like there was somebody at the end of the clouds shaking them like a bedsheet, vibrating up and down."

Local news:Evansville's total debt on three Downtown developments is $142 million

That was Mattingly's recollection, at age 86, of seeing the detonation in the New Mexico desert. Randy Mattingly said his uncle, who died at 91 in 2000, made for quite a conversation piece at family gatherings when he was growing up.

"Initially, I was young enough that it didn't register to me," Randy told the Courier & Press. "The A-bomb didn't really register to me. He showed us the goggles (he wore during the detonation) at our grandfather's house."

Although those goggles (welders glasses) might bring in quite a price at an auction, Randy isn't sure where they are.

Melba Newell Phillips, a female trailblazer from Hazleton in Gibson County, Indiana, worked with J. Robert Oppenheimer years before the A-bomb exploded.

Phillips, who died in 2004 at age 97, studied under and collaborated with Oppenheimer. She was part of "a heroic age of physics, a time when scientists were just beginning to study quantum theory and other areas of physics that would bring the world into the atomic age," according to "American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer," a Pulitzer Prize-winning biography by Kai Bird and Martin J. Sherwin. It is the basis for an upcoming biographical film, "Oppenheimer," scheduled to be released in July 2023.

Barely 16, Phillips graduated from Union High School in rural Pike County in 1923. She began her undergraduate work at Oakland City University and worked with Oppenheimer at the University of California at Berkeley in the early 1930s. "During the Red Scare of 1952, she stood up to the congressional bullies of Senator Joseph McCarthy, but lost her job at Brooklyn College in the process," said Oakland City University social sciences professor and area historian Randy Mills.

Still, she persevered. In fact, the American Association of Physics Teachers in 1983 recognized her commitment to education by creating the Melba Newell Phillips Award, a national honor given yearly to the individual who is judged to have made an exceptional contribution to physics education.

In 1943, while working at the U.S. Weather Bureau in Evansville, Joseph Mattingly received a call from Dr. Philemon Edwards Church, who was assigned to the Manhattan Project to study/predict weather patterns and turbulence for the project, according to the July 2006 Mattingly Family Newsletter.

Church invited Mattingly, a 1927 Memorial High School graduate, to take part in his studies at the University of Chicago. He was given special leave where his position with the Weather Bureau was protected for the duration of the war. Mattingly also received, over objection from local military authorities, a special military deferment personally from Gen. Leslie Groves, Military Chief of the Manhattan Project.

After training in Chicago, he was sent to Hanford, Washington, assigned to Hanford Engineering Works, a division of E.I. DuPont. DuPont had erected the first full-size nuclear reactor at this site and would produce plutonium for the atomic bomb. Few of the 20,000 workers at Hanford, including Mattingly, knew what was going on or what the Hanford site mission entailed. One mile from the reactor, they built a tower several hundred feet tall that his team used to make continuous observations of barometric pressure, temperature, humidity, and cloud cover in an attempt to track the radioactive smoke from the production facility. Geiger counters were placed all over the area.


Every morning Mattingly boarded a Piper Cub and was taken up to 2,000 feet to track smoke from the stacks. The Hanford Site covered 600 square miles, and the smoke was supposed to diffuse before it got off the reservation. No one knew what was really going on other than that it was a war project involving something called "the gadget."

The Hanford area was later considered one of the most contaminated places in the world. Mattingly said at least one person died of cancer, and it was in Hanford that his wife, Adeline, became ill with Parkinson's disease.

"But there's no way to know if radiation had anything to do with it," Mattingly told the Evansville Courier.

In July of 1945, Mattingly was sent to Alamogordo. "Uncle Fabian," as the family newsletter called him, was on hand to witness the most powerful development of the century. Following are a few of the notes from his notebook made on the date of the detonation: "White hot 1 mile." The second drawing shows a mushroom with the note, "Golden glowing one-half mile." The third drawing shows a larger cloud and the note, "Violet brilliant color." Other notes from his address book: "Base precaution C, burn from ultraviolet rays, (2) prone on face, (3) eye protection, (4) evacuation, in case of disaster." "One half hour after blast, stratified layers aloft, no longer distinguishable from Albuquerque road." "B-29 at 24,000 feet reported light bump at altitude above shot."

When Mattingly returned to Hanford, he was the only one of the 20,000 workers who knew what "the gadget" was and what it could do. He didn't know how it was going to be used until Aug. 6, 1945, when the story broke that the bomb Little Boy had been dropped over the city of Hiroshima; three days later the bomb Fat Man was dropped over the city of Nagasaki.

Unlike the Trinity Site in New Mexico, the Hanford reactor site is one of the most polluted sites in the world. In their rush, they just didn't know what the consequences were for the environment. The government is spending $1 billion per year on a cleanup that will continue for several more years.

In 1947, Mattingly returned to the University of Washington in a sub-faculty position in the newly formed Department of Meteorology and Climatology. He returned to the U.S. Weather Bureau in Evansville in 1949. He built his house in the summer of 1950 on St. George Road and lived there the rest of his life next door to his sister Catherine Hess.

After the U.S. dropped atomic bombs on Japan, Phillips joined other scientists organized to prevent future nuclear wars. She took a great hit to her career during the Cold War for standing up to McCarthyism. Colleagues and students noted her intellectual honesty, self-criticism, and style, and called her "a role model for principle and perseverance" in "Melba Phillips: Leader in Science and Conscience."

As she moved up the academic ranks, Phillips pursued graduate research under Oppenheimer and earned her doctorate in 1933. Within a few years she was known throughout the physics world because of her contribution to the field via the Oppenheimer-Phillips effect, according to "Women in Physics."

The 1935 Oppenheimer-Phillips effect explained what was at the time unexpected behavior of accelerated deuterons (nuclei of deuterium, or heavy hydrogen atoms) in reactions with other nuclei, according to a University of Chicago press release. When Oppenheimer died in 1967, his New York Times obituary noted his and Phillips' discovery as "a basic contribution to quantum theory."

Phillips was subsequently fired from her university positions under a law that required the termination of any New York City employee who invoked the Fifth Amendment.

Bonner explained, "McCarran was a specialist at putting people in the position in which they had to invoke the Fifth Amendment. It was a deliberate expression of the McCarthyism of the time."

In a 1977 interview, Phillips briefly discussed the incident (although she was reluctant because she was trying to keep the interviewer focused on her scientific accomplishments). She stated: "I was fired from Brooklyn College for failure to cooperate with the McCarran Committee, and I think that ought to go into the record . . . city colleges were particularly vulnerable, and the administration was particularly McCarthyite."

Phillips stated that she wasn't particularly political. Her objection to cooperating had been a matter of principle.

In 1987, Brooklyn College publicly apologized for firing Phillips, and in 1997 created the aforementioned scholarship in her name. Phillips died on Nov. 8, 2004, in Petersburg, Indiana.

The New York Times referred to Phillips in her obituary as "a pioneer in science education" and noted that at a time when there were few women working as scientists, Dr. Phillips was a leader among her peers.

Her accomplishments helped pave the way for other women in the sciences.

In the 1977 interview, Phillips also addressed the problems women face in aspiring to science careers, stating: "We're not going to solve them, but, as I've been saying all the time, if we make enough effort, we'll make progress; and I think progress has been made. We sometimes slip back, but we never quite slip all the way back; or we never slip back to the same place. There's a great deal of truth in saying that progress is not steady no matter how inevitable."

Contact Gordon Engelhardt by email at gordon.engelhardt@courierpress.com or on Twitter @EngGordon.

Evansville's ties to the first detonation of the A-bomb in 1945 - Courier & Press