The Prometheus League
Breaking News and Updates
Daily Archives: February 11, 2022
Is Afterlife Possible? Scientist Reveals the Physics Behind Death – News18
Posted: February 11, 2022 at 7:02 am
The human brain is a mysterious organ whose capabilities far exceed its appearance, and that deceptiveness extends to the human sense of self. Although what we are made of is just a collection of atoms and molecules, the sheer improbability of consciousness, and consciousness this advanced at that, fosters the belief that humans are much more than flesh and bones.
And this is how the concept of the soul arises. Religious texts and teachings frequently bring the soul into the discussion. What some perceive as the soul boils down to the consciousness that makes us who we are. The soul is believed to exist beyond the laws of life and death: it is postulated that our soul existed before we did and will persist after we are gone. However, this concept becomes feeble when examined through a scientific lens.
Sean M. Carroll, a physicist specialising in cosmology, gravity, and quantum mechanics, shared his thoughts on this supposed never-ending journey of the soul in a blog post. Carroll carefully analysed the strands of the claim that life does not end with the decomposition of the body but continues beyond it.
The questions that challenge this belief revolve around the fundamental laws of physics that govern how atoms interact with their surroundings. Carroll highlights that, for life after death to be true, the basic structure of the physics of atoms and electrons would have to be demolished and a new model built in its place. Believing in life after death, to put it mildly, requires physics beyond the Standard Model. Most importantly, we need some way for that new physics to interact with the atoms that we do have.
Most people perceive the soul as a blob of energy. What Carroll examines is the interaction of this energy with the world that we witness and with the building blocks of it that we do not see. Either well-tested physics, such as the Dirac equation, Lorentz invariance, the Hamiltonian formulation of quantum mechanics, and gauge invariance, would have to be proven void, or the concept of the soul loses trustworthy ground as a justification for life after death.
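To make the stakes concrete: the electrons in our bodies obey, to extraordinary precision, the Dirac equation, which in natural units reads

$$\left(i\gamma^\mu \partial_\mu - m\right)\psi = 0.$$

Any "soul physics" would have to enter as an extra interaction term coupling to the electron field $\psi$, and precision experiments leave essentially no room for such a term. (This is an illustrative way of stating the argument, not a quotation from Carroll's post.)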
While discussions such as these do stimulate the thought process, they also sway us away from more reality-centric questions about human beings and the consciousness that gives them an identity. So, what do you think about the existence of an immaterial, immortal soul and life after we die?
Posted in Quantum Physics
No, we haven’t finally found evidence for a parallel Universe – Big Think
Posted: at 7:02 am
For some of us, the idea of parallel Universes sparks our wildest dreams. If there are other Universes where certain events had different outcomes, where just one crucial decision went a different way, perhaps there could be some way to access them. Perhaps particles, fields, or even people could be transported from one Universe to another, enabling us to live in a Universe that's better, in some ways, than our own. These ideas have a foothold not only in science fiction, but in theoretical physics as well, from the infinity of possible outcomes from quantum mechanics to ideas related to the Multiverse.
But do these ideas have anything to do with observable, measurable reality? Recently, a claim has surfaced asserting that we've found evidence for parallel Universes from the ANtarctic Impulsive Transient Antenna: ANITA. It's true: the experiment found evidence for cosmic ray particles that's quite difficult to explain using only conventional physics. But to leap to the most fantastical, outlandish, revolutionary explanation is absolutely premature. In all fields of science, we have to take tremendous care not to fool ourselves. We must endeavor to knock down any new, wild hypotheses, and instead make sure that known laws of nature can't conceivably explain what we're seeing.
Science is about being appropriately skeptical, and when we take that approach, we see that the evidence for a parallel Universe all but evaporates.
From a physics point of view, parallel Universes are one of those intriguing ideas that capture our imaginations and compel us to consider their existence, but at the same time, the idea is very difficult to test. Parallel Universes first arose in the context of quantum physics, which is notorious for having unpredictable outcomes even if you know everything possible about how you set up your system. If you take a single electron and shoot it through a double slit, you can only know the probabilities of where it will land; you cannot predict exactly where it will show up.
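This unpredictability can be illustrated with a toy calculation: each slit contributes a complex amplitude at a screen position, the detection probability is the squared magnitude of their sum, and individual landing positions can only be sampled from that distribution, never predicted. A minimal sketch (all parameters here are illustrative, not tied to any real experiment):

```python
import numpy as np

# Toy double-slit model: two equal-amplitude contributions with a
# relative phase set by the path-length difference to position x.
wavelength = 1.0   # arbitrary units (assumed)
d = 5.0            # slit separation (assumed)
L = 100.0          # distance to the screen (assumed)

def intensity(x):
    # Relative phase between the two slits at screen position x.
    phase = 2 * np.pi * d * x / (wavelength * L)
    psi = 1.0 + np.exp(1j * phase)   # sum of the two amplitudes
    return np.abs(psi) ** 2

xs = np.linspace(-10, 10, 2001)
p = intensity(xs)
p /= p.sum()       # normalize into a probability distribution

# We can only sample where a single electron lands, never predict it:
rng = np.random.default_rng(0)
hits = rng.choice(xs, size=5, p=p)
print(hits)
```

The interference fringes live in `p` (maximum constructive interference at the center, zeros where the phase difference is an odd multiple of pi); each sampled electron lands at a single unpredictable spot drawn from that pattern.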
One remarkable idea known as the many-worlds interpretation of quantum mechanics postulates that all the outcomes that can possibly occur actually do happen, but only one outcome can happen in each Universe. It takes an infinite number of parallel Universes to account for all the possibilities, but this interpretation is just as valid as any other. There are no experiments or observations that rule it out.
A second place where parallel Universes arise in physics is from the idea of the Multiverse. Our observable Universe began 13.8 billion years ago with the hot Big Bang, but the Big Bang itself wasn't the very beginning. A very different phase of the Universe occurred previously to set up and give rise to the Big Bang: cosmological inflation. When and where inflation ends, a Big Bang occurs.
But inflation doesn't end everywhere at once, and the places where inflation doesn't end continue to inflate, giving rise to more space and more potential Big Bangs. Once inflation begins, in fact, it's virtually impossible to stop inflation from occurring in perpetuity at least somewhere. As time goes on, more Big Bangs, all disconnected from one another, occur, giving rise to an uncountably large number of independent Universes: a Multiverse.
The big problem for both of these ideas is that there's no way to test or constrain the prediction of these parallel Universes. After all, if we're stuck in our own Universe, how can we ever hope to access another one? We have our own laws of physics, but they come with a whole host of quantities that are always conserved.
Particles don't simply appear, disappear, or transform; they can only interact with other quanta of matter and energy, and the outcomes of those interactions are similarly governed by the laws of physics.
In all the experiments we've ever performed, all the observations we've ever recorded, and all the measurements ever made, we've never yet discovered an interaction that demands the existence of something beyond our own, isolated Universe to explain.
But according to the various reports regarding the ANITA experiment's unexpected findings, you may have read that scientists in Antarctica have discovered evidence for the existence of parallel Universes. If this were true, it would be absolutely revolutionary. It's a grandiose claim that would show us that the Universe as we currently conceive of it is inadequate, and there's much more out there to learn about and discover than we ever thought possible.
Not only would these other Universes be out there, but matter and energy from them would have the capability to cross over and interact with matter and energy in our own Universe. Perhaps, if this claim were correct, some of our wildest science fiction dreams would be possible. Perhaps you could travel to a Universe:
So what was the remarkable evidence that demonstrates the existence of a parallel Universe? What observation or measurement was made that brought us to this remarkable and unexpected conclusion?
The ANITA (ANtarctic Impulsive Transient Antenna) experiment, a balloon-borne experiment that's sensitive to radio waves, detected radio waves of a particular set of energies and directions coming from beneath the Antarctic ice.
This is good; it's what the experiment was designed to do! In both theory and in practice, we have all sorts of cosmic particles traveling through space, including the ghostly neutrino. While many of the neutrinos that pass through us come from the Sun, stars, or the Big Bang, some of them come from colossally energetic astrophysical sources like pulsars, black holes, supernovae, or even mysterious, unidentified objects.
These neutrinos also come in a variety of energies, with the most energetic ones (unsurprisingly) being the rarest and, to many physicists, the most interesting. Neutrinos are mostly invisible to normal matter (you'd have to pass a typical astrophysical neutrino through about a light-year's worth of lead to have a 50/50 shot of stopping one), so they can realistically come from any direction.
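The "light-year of lead" figure can be sanity-checked with a back-of-the-envelope attenuation estimate. For a mean free path of 1/(n·sigma), the survival probability after depth L is exp(-L/lambda), which drops to 50% at L = lambda·ln 2. A sketch in Python, assuming a cross-section of roughly 1e-43 cm^2 per nucleon, a ballpark value for MeV-scale (e.g. solar) neutrinos; cross-sections grow steeply with energy, which is exactly why ultra-high-energy neutrinos should not sail through the Earth:

```python
import math

# Assumed numbers (order-of-magnitude only):
sigma = 1e-43            # interaction cross-section, cm^2 per nucleon
rho = 11.35              # density of lead, g/cm^3
N_A = 6.022e23           # nucleons per gram of lead (~Avogadro's number)

n = rho * N_A                        # nucleon number density, cm^-3
mean_free_path = 1.0 / (n * sigma)   # cm

# Depth at which survival probability exp(-L/lambda) falls to 50%:
half_depth = mean_free_path * math.log(2)

light_year_cm = 9.461e17
print(half_depth / light_year_cm)    # comes out around a light-year
```

With these assumed inputs the 50/50 depth lands near one light-year of lead, consistent with the figure quoted above; a different assumed cross-section shifts the answer proportionally.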
However, most of the high-energy neutrinos that we see aren't produced far away, but are produced when other cosmic particles (also of extremely high energies) strike the upper atmosphere, producing cascades of particles that also result in neutrinos. Some of these neutrinos will pass almost completely through the Earth, only interacting with the final layers of Earth's crust (or ice), where they can produce a signal that our detectors are sensitive to.
The rare events that ANITA saw were consistent with a neutrino coming up through the Earth and producing radio waves, but at energies so high that passing through the Earth unimpeded should not be possible. So now we have to put our skeptical goggles on and ask some important questions about how seriously we should take these observations.
Scientifically, this means that:
So where, in all of this, do the parallel Universes come in?
Because there were only three explanations for what ANITA saw.
Some very good science ruled out the first option (back in January of 2020), which means it's almost certainly the second option. The third? Well, if our Universe cannot violate CPT, maybe this comes from a parallel Universe where CPT is reversed: an explanation that's as unlikely as it is poorly reasoned.
In fact, back in April of 2020, physicist Ian Shoemaker came up with a spectacular but mundane explanation for what ANITA saw: ultra-high-energy cosmic rays could have simply reflected off of certain types of ice at or near the Antarctic surface, creating the illusion, from the perspective of ANITA, that these particles traveled through the Earth. Remember: in science, we must always rule out all the conventional explanations that don't involve new physics before we resort to a game-breaking explanation. Over the past decade, a number of remarkable claims have been made that have disintegrated upon further investigation. Neutrinos don't travel faster than light; we haven't found dark matter or sterile neutrinos; cold fusion isn't real; the impossible reactionless engine was a failure.
There's a remarkable story here that's all about good science. An experiment (ANITA) saw something unexpected, and published their results. A much better experiment (IceCube) followed it up, and ruled out their leading interpretation. It strongly suggested something was amiss with the first experiment, and completely mundane explanations involving no new physics at all could wholly account for the full suite of what we've seen. As always, conducting more science will help us uncover what's truly occurring. For now, based on the scientific evidence we have, parallel Universes will have to remain a dream solely confined to the realm of speculation and science fiction.
Posted in Quantum Physics
Schrödinger Theatre to be Named 'Physics Lecture Theatre' – The University Times
Posted: at 7:02 am
Mairead Maguire, Deputy Editor
The Schrödinger lecture theatre will now be called Physics Lecture Theatre, after staff and students in the School of Physics called for its renaming in light of revelations about the physicist's abuse of young women and girls.
The annual Schrödinger lecture series will be changed to the "What is Life?" lecture series, with discussions to continue in the coming months about what this will entail.
In an email statement to The University Times, Trinity Media Relations Officer Catherine O'Mahony said: "This is a complex situation and the School of Physics has approached it in a careful and considered manner."
"The current approach continues to honour the indisputable scientific contribution of Erwin Schrödinger, while acknowledging disturbing information, much of it from Schrödinger's own diaries, which is now also known."
The theatre was originally named Physics Lecture Theatre, but the name was changed in the 1990s to celebrate Schrödinger and his contributions to quantum theory.
The School of Physics executive recommended the change to the Provost last month.
Suggestions for a new name included the Walton theatre for Ernest Walton, who is known for splitting the atom. Another possibility is that it be renamed after a female physicist.
In an email to physics students and staff at the time, Head of School Prof Jonathan Coleman said: "There was a diversity of opinion on how the School and College should react, but it was clear that a large majority of both staff and students now favour changing the name of the lecture theatre in the Fitzgerald Building that has borne his name since the 1990s."
A petition lobbying for the change has raised over 200 signatures.
In a survey conducted by third-year theoretical physics class representative Ruaidhrí Campion, almost two-thirds of undergraduate physics students said they wanted the lecture theatre to be renamed.
Coleman added: "Naming it for another person could well be seen as a tainted honour after it had previously been named for such a controversial person as Schrödinger."
Posted in Quantum Physics
Judge in warning to people who view child abuse images as he jails dad who downloaded hundreds of vile pictures – The Star
Posted: at 7:01 am
Kyle Willis remained silent as Judge Peter Kelson QC sent him to prison for 10 months, during a hearing held at Sheffield Crown Court on February 10.
Despite initially claiming that the 394 child abuse images found on his electronic devices were as a result of hacking, Willis later pleaded guilty to three counts of making indecent images of children.
As he jailed Willis, Judge Kelson said: "I need to demonstrate to people who may be of the same mindset that this is what happens to those who make indecent images. You will be found out, you will be caught and you will be brought to justice."
"You, and people like you, do need to realise that the children that you watch being raped are real victims. This is not a victimless crime," he added.
Prosecutor Andrew Bailey told the court that Willis' home was raided by police in February 2019, when officers seized his mobile telephone and a safe which contained two additional phones.
In addition to the indecent images of children found on Willis' devices, forensic examination also revealed that the search terms he used to find the illegal content included "three year-old wh***" and "seven-year-old a** friend".
The 24-year-old was found to have peer-to-peer software on his phone, such as YouTorrent and Tor, the latter of which was described by Judge Kelson as "the gateway to the dark web".
Police also discovered apps with the ability to encrypt, and permanently delete, items on his devices, something Judge Kelson said he regarded as an aggravating factor.
Laura Marshall, defending, said Willis had already lost a lot as a result of his offending, including his home and his five-year relationship, and was no longer allowed contact with his young child.
She added that Willis, of North Road in East Dene, Rotherham, had previously been someone who was well regarded in his community, but that this had changed after the details of his crimes became public knowledge.
"He understands why people find what he did abhorrent," Ms Marshall said.
Willis was also made the subject of a 10-year sexual harm prevention order and was put on the sex offenders register for the same period.
Posted in Victimless Crimes
Childless people over 50 are honestly reflecting on whether they made the right decision – Upworthy
Posted: at 7:00 am
People who decide not to have children are often unfairly judged by those who chose a different life path. People with children can be especially judgmental toward women who've decided to opt out of motherhood.
"You will regret it!" is one of the most common phrases lobbed at those who choose to remain childless. Why do people think they'll have such awful regrets? Because, they often say, they'll wind up lonely and sad when they're older.
They also say that life without children is without purpose and that when the childless get older they'll have no one to take care of them. One of the most patronizing critiques thrown at childless women is that they will never feel complete unless they have a child.
However, a lot of these critiques say more about the person doling them out than the person who decides to remain childless. Maybe, just maybe, their life is fulfilling enough without having to reproduce. Maybe, just maybe, they can have a life full of purpose without caring for any offspring.
Maybe the question should be: What's lacking in your life that you need a child to feel complete?
Studies show that some people regret being childless when they get older, but they're in the minority. An Australian researcher found that a quarter of child-free women came to regret the decision once they were past child-bearing age and began contemplating old age alone.
People revealed the reasons they've decided to be childless in an article by The Upshot. The top answers were the desire for more leisure time, the need to find a partner and the inability to afford child care. A big reason that many women decide not to have children is that motherhood feels like more of a choice these days, instead of a foregone conclusion as it was in previous decades.
Reddit user u/ADreamyNightOwl asked a serious question about being childless on the AskReddit subforum and received a lot of honest answers. They asked: "People over 50 that chose to be childfree, do you regret your decision? Why or why not?"
The people who responded are overwhelmingly happy with their decision not to have children. A surprising number said they felt positive about their decision because they thought they'd be a lousy parent. Others said they were happy to have been able to enjoy more free time than their friends and family members who had kids.
Here are some of the best responses to the Askreddit question.
"I explain it to people like this - you know that feeling you get where you just can't wait to teach your kid how to play baseball? Or whatever it is you want to share with them? I don't have that. It's basically a lack of parental instinct. Having children was never something I aspired to. My SO is the same way.
"Don't get me wrong, I have nothing against children. And I get really angry at people who harm them or mistreat them. I just never wanted my own." IBeTrippin
"Nope. It was never something I wanted. No regrets." BornaCrone
"I have mixed feelings. I don't care much for children and I think it would have been disastrous for us to have them. I was also able to retire at 52. Pretty sure that wouldn't have happened with kids. So yeah, absolutely the right decision. But I love my family and I do wonder what it would be like to have my own, to teach my child the things I know and not to be without someone who cares about me at the time of my death.
"But again, absolutely the right decision and at 55 I'm very happy NOT to have them. This is reinforced every time I'm exposed to other people's kids." ProfessorOzone
"My wife worked at a nursing home for years. Imagine seeing for years that over 95% of old people never have family visit. Till they die and people want a piece of the pie. This when I learned that the whole 'well who is gonna visit you or take care of you when you're older' line is complete bullshit. We decided to not have kids ever after that. Made great friends and saw the world. No regrets." joevilla1369
"I don't necessarily regret not having them, but I regret the fact that I wasn't in a healthy enough relationship where I felt I COULD have children. I regret not being stronger to leave the abuse earlier, if I had been stronger, I think maybe I could have had the choice at least. So yeah... I have regrets." MaerakiStudioMe
"No. I knew what I was getting into when I agreed to marry my husband. He had two sons from his first marriage and a vasectomy. He was worried because I was so young (comparatively, he's 10 years older). I did think it over seriously and concluded that a life with him compared to a life without him but (perhaps!) with a baby I didn't even have yet was what I wanted. It worked out for us, we've been together for 26 years. As a bonus I have 9 grandchildren. All the fun without the work of the raising!" Zublor
"Not one bit. I have never believed that I would be a good parent. I have a short temper, and while I don't think I would have been physically abusive, my words and tone of voice would be harsh in a very similar way to my own father's. I wasn't happy growing up with that kind of parent and I wouldn't want to subject any child to that kind of parenting." Videoman7189
"No and I found a partner who feels the same. We are the cool aunt and uncle." laudinum
"54 yrs.old. I've lived the past 30 years alone. Presently my dog and I are chillin' in a nice hotel on a spur of the moment vacation. I'd maybe be a grandfather by now?! I can't imagine what it would be like to have family. I picture a life lived more "normally" sometimes. All sunshine and roses, white picket fence, etc. but I realize real life isn't like that. No I don't regret being childfree or wifefree for that matter. My life can be boring at times but then I look back at all the drama that comes with relationships and think I've dodged a bullet. I spent 20 years trying to find a wife to start a family. Then I realized the clock had run out, so fuck it, all the money I'd saved for my future family would be spent on myself. Hmmmmm...what do I want to buy myself for Christmas?" Hermits_Truth
"Nope. I never had the urge to change diapers or lose sleep, free time and most of my earnings. Other people's kids are great. Mostly because they are other people's. When people ask 'Who will take care of you when you're old' I tell them that when I'm 75 I will adopt a 40-year-old." fwubglubbel
"I'm 55 (F) and never wanted children. I just don't much like them, and 20+ years of motherhood sounded (and still sounds) like a prison sentence. Maternal af when it comes to cats and dogs, but small humans? No chance.
"And I'm very happy to be childless. Cannot imagine my life any other way." GrowlKitty
"Dual income no kids = great lifestyle!" EggOntheRun
"Over 50 and child free. My only regret is that my wife would have been a great mother, and sometimes I feel like I deprived her of that, even though we both agreed we didn't want kids. Sometimes I wonder if I pushed her into that decision. She works with the elderly every day and sees a lot of lonely folks so it gets to her sometimes. I was always afraid I'd screw up the parenting thing, so I was never really interested in the idea. I'm a loner by nature though." Johnny-Virgil
Posted in Childfree
Raised By Wolves Season 2 Cast's Real-Life Partners and Ages – Daily Research Plot
Posted: at 7:00 am
Raised By Wolves Season 2 Updates: The American sci-fi TV series Raised by Wolves is a post-apocalyptic drama set on a distant planet, Kepler-22b, where two androids, Father and Mother, are raising human children. The series premiered on September 3, 2020, on HBO, and the second season arrived on February 3, 2022.
Created by Aaron Guzikowski, the series features a group of talented and brilliant cast members, six of whom joined in the second season. Now let us see what the actors and actresses starring in the popular drama are like in real life.
Mother is an android built by an atheist scientist and sent to Kepler-22b to raise a human community there with her partner Father, another android. At the end of season one, she gave birth to a mysterious serpent-like creature, which she wanted to destroy but ended up setting free in the tropical zone of the planet.
The 35-year-old Danish actress, Amanda Collin, has also starred in Danish films like The Exception and The Horrible Woman. She has kept her personal life out of the spotlight, so not much has been disclosed about it. Collin is married and the mother of one daughter.
Paul is the son of the real Marcus and Sue, whose identities are stolen by Caleb and Mary to take over the Mithraic ship. Mother and Father kidnap Paul along with four other children from the Ark of Heaven. Paul carries a pet rat that Marcus gave him. Eventually, he too begins to hear a voice, which he believes to be Sol's, and it tells him the real identity of Sue. He shoots her after that.
Felix is a 14-year-old American actor known for performing in Game of Thrones and The Ghost.
Mother and Father's only surviving son, Campion, was born from embryos taken from Earth. Though the androids raise him as an atheist, he eventually gains faith and empathy. He has natural leadership qualities and develops both a friendship and a rivalry with Paul.
The 17-year-old Australian actor also starred in Doctor Doctor, an Australian TV show.
Father was created alongside Mother to raise the human colony on Kepler-22b. He was once reprogrammed by the invading Mithraic, and Mother and Father were sometimes seen to fight over their mission regarding the children.
The 29-year-old British actor starred in the Jamestown series. He is still unmarried, and not much is known about his relationship status.
Hunter is the oldest boy kidnapped by Mother and Father. He is a headstrong boy who believes his status to be higher than the rest because of his high-ranking Mithraic official father. Hunter is also very technically proficient, and he is able to reprogram Father, which eventually helps all the kids escape Marcus and rejoin Mother.
This 23-year-old Afro-British actor has acted in the miniseries The Long Song and the film Break.
Marcus is the later identity of Caleb, who goes on the mission to rescue the human children whom Mother kidnapped for the human community on Kepler-22b.
Caleb and his wife Mary had an android perform surgery on their faces and set off on the rescue mission to the planet under the identity of a Mithraic couple. Later, he begins to hear voices that he assumes to be those of the Mithraic god Sol, which eventually drive him mad.
Travis Fimmel is an Australian actor and a former model. He previously acted in History's Vikings. This 42-year-old actor is presently unmarried.
Caleb's wife Mary takes the identity of Sue, Paul's mother, to take over the Mithraic ship. She treats Paul as her own son. Once on Kepler-22b, she is confronted with the change in Marcus, who now thinks of himself as the prophet of Sol. Sue eventually joins Mother, Father, and the human children. After Sue's real identity is revealed, Paul shoots her.
The Irish actress previously appeared in Deceit and recently starred in Guy Ritchie's Wrath of Man. The 29-year-old actress won an IFTA Film and Drama Award for acting in The Virtues. Algar is currently dating Lorne MacFadyen, who is also an actor.
The other children kidnapped by Mother and Father are Tempest (Jordan Loughran), Holly (Aasiya Shah) and Vita (Ivy Wong).
Peter Christoffersen (Cleaver)- A leader of the atheists.
Jennifer Saayeng (Nerva)- A tough atheist woman.
James Harkness (Tamerlane)- An atheist soldier.
Selina Jones (Grandmother)- A god-like android built years ago.
Kim Engelbrecht (Decima)- A highly educated scientist and weapons developer from Earth who was formerly an atheist but later begins to believe in Sol.
Morgan Santo (Vrille)- A humanoid android built to look and behave like her human counterpart, a child who died by suicide. She treats Decima as her mother.
Raised By Wolves Season 2 Casts Real Life Partners and Age - Daily Research Plot
Thermodynamics of evolution and the origin of life – pnas.org
Significance
We employ the conceptual apparatus of thermodynamics to develop a phenomenological theory of evolution and of the origin of life that incorporates both equilibrium and nonequilibrium evolutionary processes within a mathematical framework of the theory of learning. The threefold correspondence is traced between the fundamental quantities of thermodynamics, the theory of learning, and the theory of evolution. Under this theory, major transitions in evolution, including the origin of life, represent specific types of physical phase transitions.
We outline a phenomenological theory of evolution and origin of life by combining the formalism of classical thermodynamics with a statistical description of learning. The maximum entropy principle constrained by the requirement for minimization of the loss function is employed to derive a canonical ensemble of organisms (population), the corresponding partition function (macroscopic counterpart of fitness), and free energy (macroscopic counterpart of additive fitness). We further define the biological counterparts of temperature (evolutionary temperature) as the measure of stochasticity of the evolutionary process and of chemical potential (evolutionary potential) as the amount of evolutionary work required to add a new trainable variable (such as an additional gene) to the evolving system. We then develop a phenomenological approach to the description of evolution, which involves modeling the grand potential as a function of the evolutionary temperature and evolutionary potential. We demonstrate how this phenomenological approach can be used to study the ideal mutation model of evolution and its generalizations. Finally, we show that, within this thermodynamics framework, major transitions in evolution, such as the transition from an ensemble of molecules to an ensemble of organisms, that is, the origin of life, can be modeled as a special case of bona fide physical phase transitions that are associated with the emergence of a new type of grand canonical ensemble and the corresponding new level of description.
Classical thermodynamics is probably the best example of the efficiency of a purely phenomenological approach for the study of an enormously broad range of physical and chemical phenomena (1, 2). According to Einstein, "It is the only physical theory of universal content, which I am convinced, that within the framework of applicability of its basic concepts will never be overthrown" (3). Indeed, the basic laws of thermodynamics were established at a time when the atomistic theory of matter was only in its infancy and even the existence of atoms had not yet been demonstrated unequivocally. Nevertheless, these laws have remained untouched by all subsequent developments in physics, with the important qualifier "within the framework of applicability of its basic concepts." This framework of applicability is known as the thermodynamic limit, the limit of a large number of particles when fluctuations are assumed to be small (4). Moreover, the concept of entropy that is central to thermodynamics was further generalized to become the cornerstone of information theory [Shannon's entropy (5)] and is currently considered one of the most important concepts in all of science, reaching far beyond physics (6). The conventional presentation of thermodynamics starts with the analysis of thermal machines. However, a more recently promoted and apparently much deeper approach is based on the understanding of entropy as a measure of our knowledge (or, more accurately, our ignorance) of a system (6–8). In a sense, there is no entropy other than information entropy, and the loss of information resulting from summation over a subset of the degrees of freedom is the only necessary condition to derive the Gibbs distribution and hence all the laws of thermodynamics (9, 10).
It is therefore no surprise that many attempts have been made to apply concepts of thermodynamics to problems of biology, especially to population genetics and the theory of evolution. The basic idea is straightforward: evolving populations of organisms fall within the domain of applicability of thermodynamics inasmuch as a population consists of a number of organisms sufficiently large for the predictable collective effects to dominate over the unpredictable life histories of individuals (where organisms are analogous to particles and their individual histories are analogous to thermal fluctuations). Ludwig Boltzmann prophetically espoused the connection between entropy and biological evolution: "If you ask me about my innermost conviction whether our century will be called the century of iron or the century of steam or electricity, I answer without hesitation: it will be called the century of the mechanical view of nature, the century of Darwin" (11). More specifically, the link between thermodynamics and the evolution of biological populations was clearly formulated for the first time by Ronald Fisher, the principal founder of theoretical population genetics (12). Subsequently, extended efforts aimed at establishing a detailed mapping between the principal quantities analyzed by thermodynamics, such as entropy, temperature, and free energy, and those central to population genetics, such as effective population size and fitness, were undertaken by Sella and Hirsh (13) and elaborated by Barton and Coe (14). The parallel is indeed clear: the smaller the population size, the stronger the effects of random processes (genetic drift), which in physics is naturally associated with a temperature increase. It should be noted, however, that this crucial observation was predicated on a specific model of independent, relatively rare mutations (the low mutation limit) in a constant environment, the so-called ideal mutation model.
Among other attempts to conceptualize the relationships between thermodynamics and evolution, of special note is the work of Frank (15, 16) on applications of the maximum entropy principle, according to which the distribution of any quantity in a large ensemble of entities tends to the highest entropy distribution subject to the relevant constraints (6). The nature of such constraints, as we shall see, is the major object of inquiry in the study of evolution from the perspective of thermodynamics.
The notable parallels notwithstanding, the conceptual framework of classical thermodynamics is insufficient for an adequate description of evolving systems capable of learning. In such systems, the entropy increase caused by physical and chemical processes in the environment, under the second law of thermodynamics, competes with the entropy decrease engendered by learning, under the second law of learning (17). Indeed, learning, by definition, decreases the uncertainty in knowledge and thus should result in entropy decrease. In the accompanying paper, we describe deep, multifaceted connections between learning and evolution and outline a theory of evolution as learning (18). In particular, this theory incorporates a theoretical description of major transitions in evolution (MTE) (19, 20) and multilevel selection (21–24), two fundamental evolutionary phenomena that so far have not been fully incorporated into the theory of evolution.
Here, we make the next logical step toward a formal description of biological evolution as learning. Our main goal is to develop a macroscopic, phenomenological description of evolution in the spirit of classical thermodynamics, under the assumption that not only the number of degrees of freedom but also the number of the learning subsystems (organisms or populations) is large. These conditions correspond to the thermodynamic limit in statistical mechanics, where the statistical description is accurate.
The paper is organized as follows. In Maximum Entropy Principle Applied to Learning and Evolution we apply the maximum entropy principle to derive a canonical ensemble of organisms and to define relevant macroscopic quantities, such as the partition function and free energy. In Thermodynamics of Learning we discuss the first and second laws of learning and their relations to the first and second laws of thermodynamics, in the context of biological evolution. In Phenomenology of Evolution we develop a phenomenological approach to evolution and define relevant thermodynamic potentials (such as average loss function, free energy, and grand potential) and thermodynamic parameters (such as evolutionary temperature and evolutionary potential). In Ideal Mutation Model we apply this phenomenological approach to analyze the evolutionary dynamics of the ideal mutation model previously analyzed by Sella and Hirsh (13). In Ideal Gas of Organisms we demonstrate how the phenomenological description can be generalized to study more complex systems in the context of the ideal gas model. In Major Transitions in Evolution and the Origin of Life we apply the phenomenological description to model MTE, and in particular the origin of life as a phase transition from an ideal gas of molecules to an ideal gas of organisms. Finally, in Discussion, we summarize the main facets of our phenomenological theory of evolution and discuss its general implications.
To build the vocabulary of evolutionary thermodynamics (Table 1), we proceed step by step. The natural first concept to introduce is entropy, S, which is universally applicable beyond physics, thanks to the information representation of entropy (5). The relevance of entropy in general, and the maximum entropy principle in particular (6), for problems of population dynamics and evolution has been addressed previously, in particular by Frank (25, 26), and we adopt this principle as our starting point. The maximum entropy principle states that the probability distribution in a large ensemble of variables must be such that the Shannon (or Boltzmann) entropy is maximized subject to the relevant constraints. This principle is applicable to an extremely broad variety of processes, but as shown below is insufficient for an adequate description of learning and evolutionary dynamics and should be combined with the opposite principle of minimization of entropy due to the learning process, or the second law of learning (see Thermodynamics of Learning and ref. 17). Our presentation in this section could appear oversimplified, but we find this approach essential to formulate as explicitly and as generally as possible all the basic assumptions underlying thermodynamics of learning and evolution.
Table 1. Corresponding quantities in thermodynamics, machine learning, and evolutionary biology
The crucial step in treating evolution as learning is the separation of variables into trainable and nontrainable ones (18). The trainable variables are subject to evolution by natural selection and, therefore, should be related, directly or indirectly, to the replication processes, whereas nontrainable variables initially characterize the environment, which determines the criteria of selection. As an obvious example, chemical and physical parameters of the substances that serve as food for organisms are nontrainable variables, whereas the biochemical characteristics of proteins involved in the consumption and utilization of food molecules as building blocks and an energy source are trainable variables.
Consider an arbitrary learning system described by trainable variables q and nontrainable variables x, such that the nontrainable variables undergo stochastic dynamics and the trainable variables undergo learning dynamics. In the limit when the nontrainable variables x have already equilibrated but the trainable variables q are still in the process of learning, the conditional probability distribution p(x|q) over the nontrainable variables x can be obtained from the maximum entropy principle, whereby the Shannon (or Boltzmann) entropy

$$S = -\int d^N x \, p(x|q) \log p(x|q) \tag{2.1}$$

is maximized subject to the appropriate constraints on the system, such as the average loss

$$\int d^N x \, H(x,q)\, p(x|q) = U(q) \tag{2.2}$$
and the normalization condition

$$\int d^N x \, p(x|q) = 1. \tag{2.3}$$
Its simplicity notwithstanding, condition [2.2] is crucial. This condition means, first, that learning can be mathematically described as the minimization of some function U(q) of the trainable variables only and, second, that this function can be represented as the average of some function H(x,q) of both the trainable, q, and nontrainable, x, variables over the space of the latter. Eq. 2.2 is not an assumption but rather follows from the interpretation of the function p(x|q) as the probability density over the nontrainable variables x for a given set of trainable ones, q. This condition is quite general and can be used to study, for example, the selection of the shapes of crystals (such as snowflakes), in which case H(x,q) represents the Hamiltonian density.
In the context of biology, U(q) is expected to be a monotonically decreasing function of Malthusian fitness φ(q), that is, the reproduction rate (assuming a constant environment); a specific choice of this function will be motivated below [2.9]. However, this connection cannot be taken as a definition of the loss function. In a learning process, the loss function can be any measure of ignorance, that is, of the inability of an organism to recognize the relevant features of the environment and to predict its behavior. According to Sir Francis Bacon's famous motto scientia potentia est, better knowledge, and hence an improved ability to predict the environment, increases the chances of an organism's survival and reproduction. However, derivation of Malthusian fitness from the properties of the learning system requires a detailed microscopic theory such as that outlined in the accompanying paper (18). Here, we instead look at the theory from a macroscopic perspective by developing a phenomenological description of evolution.
We postulate that the system under consideration obeys the maximum entropy principle but is also learning, or evolving, by minimizing the average loss function U(q) [2.2]. The corresponding maximum entropy distribution can be calculated using the method of Lagrange multipliers, that is, by solving the following variational problem:

$$\frac{\delta}{\delta p(x|q)}\left[S - \beta\left(\int d^N y\, H(y,q)\, p(y|q) - U\right) - \lambda\left(\int d^N y\, p(y|q) - 1\right)\right] = 0, \tag{2.4}$$

where β and λ are the Lagrange multipliers which impose, respectively, the constraints [2.2] and [2.3]. The solution of [2.4] is the Boltzmann (or Gibbs) distribution

$$-\log p(x|q) - 1 - \lambda - \beta H(x,q) = 0 \;\Rightarrow\; p(x|q) = \exp(-\beta H(x,q) - 1 - \lambda) = \frac{\exp(-\beta H(x,q))}{Z(\beta,q)}, \tag{2.5}$$
where

$$Z(\beta,q) = \exp(1+\lambda) = \int d^N x \exp(-\beta H(x,q)) = \int d^N x\, \phi(x,q) \tag{2.6}$$
is the partition function (Z stands for the German Zustandssumme, "sum over states").
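The maximum entropy construction of Eqs. 2.4–2.6 can be checked numerically on a toy discrete system. The sketch below (all names and values are illustrative, not taken from the paper) builds the Gibbs distribution for a three-state "environment" and verifies that any other distribution with the same normalization and the same average loss has lower entropy:

```python
import numpy as np

# Toy discrete "environment": three nontrainable states with loss values H.
# H and beta are illustrative choices, not quantities from the paper.
H = np.array([0.0, 1.0, 3.0])
beta = 0.7  # inverse evolutionary temperature, beta = 1/T

# Gibbs distribution of Eq. 2.5 and partition function of Eq. 2.6
Z = np.exp(-beta * H).sum()
p = np.exp(-beta * H) / Z

def entropy(w):
    return -(w * np.log(w)).sum()

# Build a perturbation orthogonal to both constraints (normalization and
# average loss), so the perturbed distribution satisfies the same constraints.
v = np.cross(np.ones(3), H)   # orthogonal to the all-ones vector and to H
v /= np.abs(v).max()
q = p + 1e-3 * v              # same normalization, same average loss as p

# The Gibbs distribution p has strictly higher entropy than the perturbed q.
print(entropy(p) > entropy(q))
```

The perturbation direction is chosen in the null space of the two linear constraints, so the comparison isolates the maximum entropy property itself.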
Formally, the partition function Z(β,q) is simply a normalization constant in Eq. 2.5, but its dependence on β and q contains a wealth of information about the learning system and its environment. For example, if the partition function is known, then the average loss can be easily calculated by simple differentiation:

$$U(q) = \frac{\int d^N x\, H(x,q) \exp(-\beta H(x,q))}{\int d^N x \exp(-\beta H(x,q))} = -\frac{\partial}{\partial \beta} \log Z(\beta,q) = \frac{\partial}{\partial \beta}\big(\beta F(\beta,q)\big), \tag{2.7}$$

where the biological equivalent of free energy is defined as

$$F \equiv -T \log Z = -\frac{1}{\beta} \log Z = U - TS \tag{2.8}$$
and the biological equivalent of temperature is T = 1/β. Evolutionary temperature is yet another key term in our vocabulary (Table 1), after entropy; it emerges as the inverse of the Lagrange multiplier β that imposes the constraint on the average loss function [2.2], that is, it defines the extent of stochasticity of the process of evolution. Roughly, free energy F is the macroscopic counterpart of the loss function H, or additive fitness (18), whereas, as shown below, the partition function Z is the macroscopic counterpart of Malthusian fitness,

$$\phi(x,q) = \exp(-\beta H(x,q)). \tag{2.9}$$
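Relations [2.7] and [2.8] are easy to verify numerically. In the sketch below (the discrete loss spectrum and β are illustrative values), the average loss computed directly agrees with the derivative of log Z, and F = U − TS holds:

```python
import numpy as np

# Toy discrete loss spectrum (illustrative values, not from the paper).
H = np.array([0.0, 0.5, 2.0, 4.0])

def logZ(beta):
    # log of the partition function, Eq. 2.6, for a discrete state space
    return np.log(np.exp(-beta * H).sum())

beta = 1.3
T = 1.0 / beta
p = np.exp(-beta * H - logZ(beta))        # Gibbs distribution, Eq. 2.5

U_direct = p @ H                          # average loss, Eq. 2.2
# Eq. 2.7: U = -d(log Z)/d(beta), here by central finite difference
U_from_Z = -(logZ(beta + 1e-6) - logZ(beta - 1e-6)) / 2e-6
F = -T * logZ(beta)                       # free energy, Eq. 2.8
S = -(p * np.log(p)).sum()                # entropy, Eq. 2.1

print(np.isclose(U_direct, U_from_Z, atol=1e-5))  # True
print(np.isclose(F, U_direct - T * S))            # True
```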
The relation between the loss function and fitness is discussed in the accompanying paper (18) and in Ideal Mutation Model. In biological terms, Z represents macroscopic fitness or the sum over all possible fitness values for a given organism, that is, over all genome sequences that are compatible with survival in a given environment, whereas F represents the adaptation potential of the organism.
In the rest of this analysis, we follow the previously developed approach to the thermodynamics of learning (17). Here, the key difference from conventional thermodynamics is that learning decreases the uncertainty in our knowledge of the training dataset (or of the environment, in the case of biological systems) and therefore results in entropy decrease. Close to the learning equilibrium, this decrease exactly compensates for the thermodynamic entropy increase, and such dynamics is formally described by a time-reversible Schrödinger-like equation (17, 27). An important consequence is that, whereas in conventional thermodynamics the equilibrium corresponds to the minimum of the thermodynamic potential over all variables, in a learning equilibrium the free energy F(q) can be either minimized or maximized with respect to the trainable variables q. If, for a particular trainable variable, the entropy decrease due to learning is negligible, then the free energy is minimized, as in conventional thermodynamics, but if the entropy decrease dominates the dynamics, then the free energy is maximized. Using the terminology introduced in the accompanying paper (18), we will call the variables of the first type neutral, q(n), and those of the second type adaptable, or active, variables, q(a). There is also a third type of variables, the (effectively) constant, or core, variables q(c), that is, those that have already been well trained. The term neutral means that changing the values of these variables does not affect the essential properties of the system, such as its loss function or fitness, corresponding to the regime of neutral evolution. The adaptable variables comprise the bulk of the material for evolution. The core variables are most important for optimization (that is, for survival) and thus are quickly trained to their optimal values and remain more or less constant during the further process of learning (evolution).
The equilibrium state corresponds to a saddle point on the free energy landscape (viewed as a function of the trainable variables q), in agreement with both the first law of thermodynamics and the first law of learning (17): the change in loss/energy equals the heat added to the learning/thermodynamic system minus the work done by the system,

$$dU = T\, dS - Q\, dq, \tag{3.1}$$
where T is temperature, S is entropy, and Q is the learning/generalized force conjugate to the trainable/external variables q.
In the context of evolution, the first term in Eq. 3.1 represents the stochastic aspects of the dynamics, whereas the second term represents adaptation (learning, work). If the state of the entire learning system is such that the learning dynamics is subdominant to the stochastic dynamics, then the total entropy will increase (as is the case in regular, closed physical systems, under the second law of thermodynamics), but if learning dominates, then entropy will decrease as is the case in learning systems, under the second law of learning (17): The total entropy of a thermodynamic system does not decrease and remains constant in the thermodynamic equilibrium, but the total entropy of a learning system does not increase and remains constant in the learning equilibrium.
If the stochastic entropy production and the decrease in entropy due to learning cancel each other out, then the overall entropy of the system remains constant and the system is in a state of learning equilibrium (see refs. 17, 27, and 28 for discussions of different aspects of the equilibrium states). This second law, when applied to biological processes, specifies and formalizes Schrödinger's idea of life as a negentropic phenomenon (29). Indeed, learning equilibrium is the fundamental stationary state of biological systems. It should be emphasized that the evolving systems we examine here are open within the context of classical thermodynamics, but they turn into closed systems that reach equilibrium when the thermodynamics of learning is incorporated into the model.
On longer time scales, when q(c) remains fixed but all other variables (i.e., q(a), q(n), and x) have equilibrated, the adaptable variables q(a) can transform into neutral ones q(n), and, vice versa, neutral variables can become adaptable ones (18). In terms of statistical mechanics, such transformations can be described by generalizing the canonical ensemble with a fixed number of particles (that is, in our context, a fixed number of variables relevant for training) to a grand canonical ensemble where the number of variables can fluctuate (2). For neural networks, such fluctuations correspond to recruiting additional neurons from the environment or excluding neurons from the learning process. On a phenomenological level, these transformations can be described as finite shifts in the loss function, U → U − μ. In conventional thermodynamics, when dealing with ensembles of particles, μ is known as the chemical potential, but in the context of biological evolution we shall refer to μ as the evolutionary potential, another key term in our vocabulary of evolutionary thermodynamics and learning (Table 1). In thermodynamics, the chemical potential describes how much energy is required to move a particle from one phase to another (for example, moving one water molecule from the liquid to the gaseous phase during evaporation). Analogously, the evolutionary potential corresponds to the amount of evolutionary work (expressed, for example, as the number of mutations), or the magnitude of the change in the loss function, that is associated with the addition or removal of a single adaptable variable to or from the learning dynamics, that is, how much work it takes to make a nonadaptable variable adaptable, or vice versa.
The concept of evolutionary potential, μ, has multiple important connotations in evolutionary biology. Indeed, it is recognized that networks of nearly neutral mutations and, more broadly, the nonfunctional genetic material (junk DNA) that dominates the genomes of complex eukaryotes represent a reservoir of potential adaptations (30–33), making the evolutionary cost of adding a new adaptable variable low, which corresponds to small μ. Genomes of prokaryotes are far more tightly constrained by purifying selection and thus contain little junk DNA (34, 35); put another way, the evolutionary potential associated with such neutral genetic sequences is high in prokaryotes. However, this comparative evolutionary rigidity of prokaryote genomes is compensated by the high rate of gene replacement (36), with vast pools of diverse DNA sequences (open pangenomes) available for the acquisition of new genes, some of which can contribute to adaptation (37, 38). The cost of gene acquisition varies greatly among genes of different functional classes, as captured in the genome plasticity parameter of genome evolution that effectively corresponds to the evolutionary potential introduced here (39). For many classes of genes in prokaryotes, the evolutionary potential is relatively low, such that gene replacement represents the principal route of evolution in these life forms. In viruses, especially those with RNA and single-stranded DNA genomes, the evolutionary potential associated with gene acquisition is prohibitively high, but this is compensated by high mutation rates (40, 41), that is, by the low evolutionary potential associated with extensive nearly neutral mutational networks, making these networks the main source of adaptation.
Treating the learning system as a grand canonical ensemble, Eq. 3.1, which represents the first law of learning, can be rewritten as

$$dU = T\, dS + \mu\, dK, \tag{3.2}$$

where K is the number of adaptable variables. Eq. 3.2 is more macroscopic than [3.1] in the sense that not only the nontrainable variables but also the adaptable and neutral trainable variables are now described in terms of phenomenological, thermodynamic quantities. Roughly, the average loss associated with a single nontrainable or a single adaptable variable can be identified, respectively, with T and μ, and the total number of nontrainable and adaptable variables with, respectively, S and K. This correspondence stems from the fact that S and K are extensive variables, whereas T and μ are intensive ones, as in conventional thermodynamics.
To describe phase transitions, we have to consider the system moving from one learning equilibrium (that is, a saddle point on the free energy landscape) to another. In terms of the microscopic dynamics, such phase transitions can involve either transitions from not fully trained adaptable variables q(a) to fully trained ones q(c) or transitions between different learning equilibria described by different values of q(c). In biological terms, the latter variety of transitions corresponds to MTE, which involve emergence of new classes of slowly changing, near constant variables (18), whereas the former variety of smaller-scale transitions corresponds to the fixation of beneficial mutations of all kinds, including capture of new genes (42), that is, adaptive evolution. In Major Transitions in Evolution and the Origin of Life we present a phenomenological description of MTE, in particular the very first one, the origin of life, which involved the transition from an ensemble of molecules to an ensemble of organisms. First, however, we describe how such ensembles can be modeled phenomenologically.
Consider an ensemble of organisms that differ from each other only by the values of the adaptable variables q(a), whereas the effectively constant variables q(c) are the same for all organisms. The latter correspond to the set of core, essential genes that are responsible for the housekeeping functions of the organisms (43). Then, the ensemble can represent either a Bayesian (subjective) probability distribution over the degrees of freedom of a single organism or a frequentist (objective) probability distribution over the entire population of organisms; the connections between these two approaches are addressed in detail in the classic work of Jaynes (6). In the limit of an infinite number of organisms, the two interpretations are indistinguishable, but in the context of actual biological evolution the total number of organisms is only exponentially large,

$$N_e \sim \exp(bK), \tag{4.1}$$
and is linked to the number of adaptable variables K ~ log(Ne)/b in a population of the given size Ne. Eq. 4.1 indicates that the effective number of variables (genes or sites in the genome) that are available for adaptation in a given population depends on the effective population size. In larger populations that are mostly immune to the effects of random genetic drift, more sites (genes) can be involved in adaptive evolution. In addition to the effective population size Ne, the number of adaptable variables depends on the coefficient b, which can be thought of as a measure of stochasticity caused by factors independent of the population size. The smaller b, the more genes can be involved in adaptation. In the biological context, this implies that the entire adaptive potential of the population is determined by mutations in a small fraction of the genome, which is indeed realistic. It has been shown that in prokaryotes the effective population size estimated from the ratio of the rates of nonsynonymous vs. synonymous mutations (dN/dS), indeed, positively correlates with the number of genes in the genome and, presumably, with the number of genes that are available for adaptation (44–46).
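The scaling of Eq. 4.1 can be illustrated with a short sketch (the stochasticity coefficient b is an arbitrary hypothetical value): the number of adaptable variables K grows only logarithmically with the effective population size.

```python
import math

# Illustrative sketch of Eq. 4.1, Ne ~ exp(b*K), inverted as K ~ log(Ne)/b.
b = 0.05   # hypothetical stochasticity coefficient, chosen for illustration

def adaptable_variables(Ne, b):
    # number of adaptable variables supported by a population of size Ne
    return math.log(Ne) / b

for Ne in (1e4, 1e6, 1e8):
    print(int(adaptable_variables(Ne, b)))
# K roughly doubles when Ne grows from 1e4 to 1e8: adaptive capacity
# is very insensitive to population size in this model.
```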
To study the state of learning equilibrium for a grand canonical ensemble of organisms, it is convenient to express the average loss function as

$$U(S,K) = T(S,K)\, S + \mu(S,K)\, K, \tag{4.2}$$

where the conjugate variables are, respectively, the evolutionary temperature

$$T \equiv \frac{\partial U}{\partial S} \tag{4.3}$$
and the evolutionary potential

$$\mu \equiv \frac{\partial U}{\partial K}. \tag{4.4}$$
Once again, evolutionary temperature is a measure of disorder, that is, of stochasticity in the evolutionary process, whereas evolutionary potential is a measure of adaptability. For a given phenomenological expression of the loss function [4.2], all other thermodynamic potentials, such as the free energy F(T,K) and the grand potential Ω(T,μ), can be obtained by switching to conjugate variables using Legendre transformations, i.e., S → T, K → μ.
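The Legendre transformation S → T mentioned here can be sketched numerically. The toy loss U(S) = S² below is purely illustrative (not the paper's loss function); the sketch passes from U(S) to F(T) = min over S of [U(S) − TS], the conventional branch on which free energy is minimized:

```python
import numpy as np

# Toy convex loss as a function of entropy (illustrative choice).
S_grid = np.linspace(0.1, 5.0, 2001)
U = S_grid ** 2

def free_energy(T):
    # Legendre transform: F(T) = min_S [U(S) - T*S],
    # realized at the S where dU/dS = T, matching Eq. 4.3.
    return (U - T * S_grid).min()

# For U = S^2 the transform is exact: F(T) = -T^2/4, attained at S = T/2.
T = 1.8
print(np.isclose(free_energy(T), -T**2 / 4, atol=1e-4))  # True
```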
The difference between the grand canonical ensembles in physics and in evolutionary biology should be emphasized. In physics, the grand canonical ensemble is constructed by constraining the average number of particles (2). In contrast, for the evolutionary grand canonical ensemble the constraint is imposed not on the number of organisms Ne per se but rather on the number of adaptable variables in organisms of a given species, K ~ log(Ne), which depends on the effective population size. This key statement implies that, in our approach, the primary agency of evolution (adaptation, selection, or learning) is identified with individual genes rather than with genomes and organisms (47). Only a relatively small number of genes represent adaptable variables, that is, are subject to selection at any given time, in accordance with the classical results of population genetics (48). However, as discussed in the accompanying paper (18), our theoretical framework extends to multiple levels of biological organization and is centered on the concept of multilevel selection, such that higher-level units of selection are identified with ensembles of genes or whole genomes (organisms). Then, organisms can be treated as trainable variables (units of selection) and populations as statistical ensembles. The change in the constraint from Ne to K ~ log(Ne) is similar to changing an ensemble with annealed disorder to one with quenched disorder in statistical physics (49). Indeed, in the case of annealed (thermal) disorder, we sum up (average) the partition function over the disorder, whereas for quenched disorder we average the logarithm of the partition function, that is, the free energy.
In this section we demonstrate how the phenomenological approach developed in the previous sections can be applied to model biological evolution in the thermodynamic limit, that is, when both the number of organisms, Ne, and the number of active degrees of freedom, K ~ log(Ne), are sufficiently large. In such a limit, the average loss function contains all the relevant information on the learning system in equilibrium; it can be derived from a theoretical model, such as the one developed in the accompanying paper (18) using the mathematical framework of neural networks, or from a phenomenological model (such as the one developed in the previous section), or reconstructed from observations or numerical simulations. In this section, we adopt a phenomenological approach to model the average loss function of a population of noninteracting organisms (that is, selection only affects individuals), and in the following section we construct a more general phenomenological model, which will also be relevant for the analysis of MTE in Major Transitions in Evolution and the Origin of Life.
Consider a population of organisms described by their genotypes q1, …, qNe. There are rare mutations (on long time scales) from one genotype to another that are either quickly fixed or eliminated from the population (on shorter time scales), but the total number of organisms Ne remains fixed. In addition, we assume that the system is observed for a long period of time so that it has reached a learning equilibrium (that is, an evolutionarily stable configuration). In this simple model, all organisms share the same q(c), whereas all other variables have already equilibrated, but their effect on the loss function depends on the type of the variable, that is, q(a) vs. q(n) vs. x. In particular, the trainable variables of individual organisms, qn, evolve in such a way that entropy is minimized on short time scales, due to the fixation of beneficial mutations, but maximized on long time scales, due to equilibration, that is, exploration of the entire nearly neutral mutational network (50, 51). Thus, the same variables evolve toward maximizing free energy on short time scales but toward minimizing free energy on longer time scales. This evolutionary trajectory is similar to the phenomenon of broken ergodicity in condensed matter systems, where the short-time and ensemble (or long-time) averages can differ. The prototypical nonergodic systems in physics are (spin) glasses (52–54). The glass-like character of evolutionary phenomena was qualitatively examined previously (55, 56). Nonergodicity unavoidably involves frustrations that emerge from competing interactions (57), and such frustrations are thought to be a major driving force of biological evolution (55). In terms of the model we discuss here, the most fundamental frustration that appears to be central to evolution is caused by the competing trends of conventional thermodynamic entropy growth and entropy decrease due to learning.
The fixation of mutations on short time scales implies that over most of the duration of evolution all organisms have the same genotype q_1 = … = q_{N_e} = q (with some neutral variance), whereas the equilibration on the longer time scales implies that the marginal distribution of genotypes is given by the maximum entropy principle, as discussed in Maximum Entropy Principle Applied to Learning and Evolution, that is,

p(q) ∝ ∏_{n=1}^{N_e} ∫ d^N x_n exp(−β ∑_{n=1}^{N_e} H(x_n, q)) = exp(−β F(q) N_e),   [5.1]

where integration is taken over the states of the environment x_n for all organisms n = 1, …, N_e. This distribution was previously considered in the context of population dynamics (13), where N_e was interpreted as the inverse temperature parameter. However, as pointed out in Maximum Entropy Principle Applied to Learning and Evolution, in our framework the inverse temperature is the Lagrange multiplier β, which imposes a constraint on the average loss function [2.2]. Moreover, in the context of the models considered by Sella and Hirsh (13), the distribution can also be expressed as

p(q) ∝ Z(q)^{N_e},   [5.2]
where the partition function Z(q) = exp(−β F(q)) is the macroscopic counterpart of the fitness of an individual organism in environment state x, defined through the loss function H(x, q) (see Eq. 2.6). Eq. 5.2 implies that evolutionary temperature has to be identified with the multiplication constant in [2.8], T = 1. Thus, this is the ideal mutation model, which allows us to establish a precise relation between the loss function and Malthusian fitness. Importantly, this relation holds only for the situation of multiple, noninteracting mutations (i.e., without epistasis).
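The factorization leading from [5.1] to [5.2] can be checked numerically for a toy system of noninteracting organisms; this is a minimal sketch in which the environment integral is replaced by a finite sum, and the sizes and loss table are hypothetical.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all sizes hypothetical): 4 genotypes, 6 environment states,
# a random loss table H[x, q], inverse temperature beta, population size Ne.
n_q, n_x, beta, Ne = 4, 6, 1.3, 5
H = rng.uniform(0.0, 2.0, size=(n_x, n_q))

# Eq. 5.1: brute-force sum over the joint environment states (x_1, ..., x_Ne).
p_51 = np.zeros(n_q)
for q in range(n_q):
    for xs in itertools.product(range(n_x), repeat=Ne):
        p_51[q] += np.exp(-beta * sum(H[x, q] for x in xs))
p_51 /= p_51.sum()

# Eq. 5.2: the same distribution from the single-organism partition function.
Z = np.exp(-beta * H).sum(axis=0)      # Z(q) = sum_x exp(-beta * H(x, q))
p_52 = Z**Ne / (Z**Ne).sum()

assert np.allclose(p_51, p_52)         # p(q) is proportional to Z(q)**Ne
```

Because the organisms share the same genotype q and do not interact, the N_e-fold sum over environment states factorizes exactly, which is all that [5.2] asserts.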
The model of Sella and Hirsh (13) is actually the Kimura model of fixation of mutations in a finite population (58), which is based on the effect of mutations on Malthusian fitness (in the absence of epistasis). In population genetics, this model plays a role analogous to that of the ideal gas model in statistical physics and thermodynamics (59). The ideal gas model ignores interactions between molecules in the gas, and the population genetics model similarly ignores epistasis, that is, interaction between mutations. This model necessitates that the loss function be identified with the minus logarithm of Malthusian fitness (otherwise, the connection between these two quantities would be arbitrary, with the only restriction that one of them should be a monotonically decreasing function of the other). However, the identification of N_e with the inverse temperature (13) does not seem to be justified equally well. For the given environment, the probability of the state [5.1] depends only on the product of N_e and β, that is, on the parameter of the Gibbs distribution. This parameter is proportional to N_e, so that we could, in principle, choose the proportionality coefficient to be equal to 1 (or, more precisely, 1, 2, or 4, depending on genome ploidy and the type of reproduction), but only assuming that the properties of the environment are fixed. However, in the interest of generality, we keep the population size and the evolutionary temperature separate, interpreting T as an overall measure of the level of stochasticity in the evolutionary process, including effects independent of the population size.
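The correspondence between the ideal mutation model and an equilibrium Gibbs distribution can be illustrated with a minimal origin-fixation sketch (the fitness values below are hypothetical): mutations arise one at a time and fix with the Moran-model probability, and the exact stationary distribution of the resulting Markov chain is a power of fitness, a discrete analog of p(q) ∝ Z(q)^{N_e} in [5.2].

```python
import numpy as np

# Origin-fixation sketch of the Kimura / Sella-Hirsh setup: rare mutations
# arise one at a time and fix with the Moran-model probability
# u(r) = (1 - 1/r) / (1 - r**-Ne), where r = w_new / w_old.
Ne = 20
w = np.array([1.00, 1.05, 0.95, 1.10])   # hypothetical fitness per genotype
n_q = len(w)

def u_fix(r, Ne):
    """Moran fixation probability of a single mutant with fitness ratio r."""
    return (1 - 1 / r) / (1 - r**-Ne)

# Markov chain over genotypes: uniform mutation proposals, Moran fixation.
P = np.zeros((n_q, n_q))
for i in range(n_q):
    for j in range(n_q):
        if i != j:
            P[i, j] = u_fix(w[j] / w[i], Ne) / (n_q - 1)
    P[i, i] = 1 - P[i].sum()

# Exact stationary distribution (left eigenvector of P for eigenvalue 1).
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Detailed balance of the fixation probabilities gives pi(q) ~ w(q)**(Ne - 1),
# the discrete counterpart of p(q) ~ Z(q)**Ne.
pred = w**(Ne - 1) / np.sum(w**(Ne - 1))
assert np.allclose(pi, pred)
```

The check works because the fixation probabilities satisfy u(r)/u(1/r) = r^{N_e − 1}, making the chain reversible with respect to w^{N_e − 1}; epistasis would break this simple product form, as noted above.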
The interpretation of evolutionary temperature merits further explanation. The smaller the population size, the more important are evolutionary fluctuations, that is, genetic drift (60). In statistical physics, the amplitude of fluctuations increases with the temperature (2). Therefore, when the correspondence between evolutionary population genetics and thermodynamics is explored, it appears natural to identify effective population size with the inverse temperature (13, 14), which is justified inasmuch as sources of noise independent of the population size, such as changes in the environment, are disregarded. In statistical physics, the probability of a system's leaving a local optimum at a given temperature depends exponentially on the number of particles in the system, as compellingly illustrated by the phenomenon of superparamagnetism (61). For a small enough ferromagnetic particle, the total magnetic moment overcomes the anisotropy barrier and oscillates between the spin-up and spin-down directions, whereas in the thermodynamic limit these oscillations are forbidden, which results in spontaneously broken symmetry (2). Thus, the probability of drift from one optimum to another depends exponentially on the number of particles, and the identification of the latter with the effective population size appears natural. However, from a more general standpoint, effective population size is not the only parameter determining the probability of fluctuations, which is also affected by environmental factors. In particular, stochasticity increases dramatically under harsh conditions, due to stress-induced mutagenesis (62–64). Therefore, it appears preferable to represent evolutionary temperature as a general measure of evolutionary stochasticity, to which effective population size is only one of the important contributors, with other contributions coming from the mutation rate and the stochasticity of the environment.
Extremely high evolutionary temperature caused by any combination of these factors can lead to a distinct type of phase transition, in which complexity is destroyed, for example, due to mutational meltdown of a population (error catastrophe) (65, 66).
Importantly, this simple model allows us to make concrete predictions for a fixed-size population, where beneficial mutations are rare and quickly proceed to fixation. If such a system evolves from one equilibrium state [at temperature T_1 = β_1^{−1}, with the fitness distribution Z^{(1)}(q)] to another equilibrium state [at temperature T_2 = β_2^{−1} and fitness distribution Z^{(2)}(q)], then, according to [2.8], the ratios

log Z^{(1)}(q) / log Z^{(2)}(q) = β_1 F(q) / (β_2 F(q)) = β_1/β_2 = T_2/T_1   [5.3]

are independent of q, that is, are the same for all organisms in the ensemble, regardless of their fitness (again, under the key assumption of no epistasis). Then, Eq. 5.3 can be used to measure ratios between different evolutionary temperatures and thus to define a temperature scale. Moreover, the equilibrium distribution [5.1] together with [4.1] enables us to express the average loss function

U(K) = ⟨H(x, q)⟩ N_e ∝ ⟨H(x, q)⟩ exp(bK),   [5.4]
where ⟨H(x, q)⟩ is the average loss function across individual organisms. According to Eq. 5.4, the average loss scales exponentially with the number of adaptable degrees of freedom K, but the dependence on entropy is not yet explicit.
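The temperature-scale construction of Eq. 5.3 can be demonstrated with synthetic data; the losses F(q) and the two temperatures below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical additive losses F(q) for six genotypes, observed at two
# evolutionary temperatures T1 and T2 (beta = 1/T).
F = rng.uniform(0.5, 2.0, size=6)
T1, T2 = 0.8, 1.6
Z1 = np.exp(-F / T1)                 # fitness distribution at temperature T1
Z2 = np.exp(-F / T2)                 # fitness distribution at temperature T2

# Eq. 5.3: the ratio log Z1 / log Z2 is the same for every genotype q ...
ratios = np.log(Z1) / np.log(Z2)
assert np.allclose(ratios, ratios[0])

# ... and equals T2 / T1, which is what defines a temperature scale.
assert np.isclose(ratios[0], T2 / T1)
```

In an empirical setting the same ratio would be estimated from observed fitness distributions, and its constancy across genotypes is itself a test of the no-epistasis assumption.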
For phenomenological modeling of evolution it is essential to keep track not only of different organisms but also of the entropy of the environment. On the microscopic level, the overall average loss function is an integral over all nontrainable variables of all organisms, but on a more macroscopic level it can be viewed as a phenomenological function U(S, K), the microscopic details of which are irrelevant. In principle, it should be possible to reconstruct the average loss function directly from experiment or simulation, but for the purpose of illustration we first consider an analytical expression

U(S, K) = ⟨H(x, q)⟩ N_e = a S^n exp(bK/S),   [6.1]
where, in addition to the exponential dependence on K, as in [5.4], we also specify the power law dependence on S. In particular, we assume that ⟨H(x, q)⟩ ∝ S^n, where n > 0 is a free parameter, that is, the loss function is greater in an environment with a higher entropy. This factor models the effect of the environment on the loss function of individual organisms. In biological terms, this means that diverse and complex environments promote adaptive evolution. In addition, the coefficient b in [5.4] is replaced with b/S in [6.1], to model the effect of the environment on the effective number of active trainable variables. We have to emphasize that the model [6.1] is discussed here for the sole purpose of illustrating the way our formalism works. A realistic model can be built only through bioinformatic analysis of specific biological systems, which requires a major effort.
Thus, if a population of N_e organisms is capable of learning the amount of information S about the environment, then the total number of adaptable trainable variables K required for such learning scales linearly with S and logarithmically with N_e,

K = b^{−1} S log(N_e).   [6.2]
The logarithmic dependence on N_e is already present in [4.1] and in [5.4], but the dependence on S is an addition introduced in the phenomenological model [6.1]. Under this model, the number of adaptable variables K is proportional to the entropy of the environment. Assuming K is also proportional to the total number of genes in the genome, the dependencies in Eq. 6.2 are at least qualitatively supported by comparative analysis of microbial genomes. Indeed, bacteria that inhabit high-entropy environments, such as soil, typically possess more genes than those that live in low-entropy environments, for example, sea water (67). Furthermore, the number of genes in bacterial genomes increases with the estimated effective population size (44–46), which also can be interpreted as taking advantage of diverse, high-entropy environments.
Given a phenomenological expression for the average loss function [6.1], the corresponding grand potential is given by the Legendre transformation,

Ω(T, μ) = (1 − n) S(T, μ) μ/b,   [6.3]

where entropy should be expressed as a function of evolutionary temperature and evolutionary potential,

T = (μ/b) (n − log μ + log(ab) + (n − 1) log S).   [6.4]
By solving [6.4] for S and plugging into [6.3], we obtain the grand potential

Ω(T, μ) = a(1 − n) (μ/(eab))^{n/(n−1)} exp(bT/(μ(n − 1))).   [6.5]
We shall refer to the ensemble described by [6.5] as an ideal gas of organisms.
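The passage from the average loss [6.1] to the grand potential [6.5] can be verified numerically: pick a state (S, K), obtain the conjugate variables T and μ by differentiation, and compare the Legendre transformation U − TS − μK with the closed form. The constants a, b, n below are hypothetical.

```python
import numpy as np

# Phenomenological average loss of Eq. 6.1 with hypothetical constants.
a, b, n = 1.5, 0.7, 3.0

def U(S, K):
    return a * S**n * np.exp(b * K / S)

# Pick a state (S, K) and read off the conjugate variables numerically:
# T = dU/dS (evolutionary temperature), mu = dU/dK (evolutionary potential).
S, K = 2.0, 1.3
eps = 1e-6
T  = (U(S + eps, K) - U(S - eps, K)) / (2 * eps)
mu = (U(S, K + eps) - U(S, K - eps)) / (2 * eps)

# Grand potential as the Legendre transformation of U ...
Omega_legendre = U(S, K) - T * S - mu * K

# ... compared with the closed form,
# Omega(T, mu) = a (1 - n) (mu / (e a b))**(n/(n-1)) exp(b T / (mu (n - 1))).
Omega_closed = (a * (1 - n) * (mu / (np.e * a * b))**(n / (n - 1))
                * np.exp(b * T / (mu * (n - 1))))

assert np.isclose(Omega_legendre, Omega_closed, rtol=1e-4)
```

A short calculation shows why this works: since μ = (b/S)U and T = (n − bK/S)U/S for the loss [6.1], the combination U − TS − μK collapses to (1 − n)U, which is the closed form above expressed in (T, μ).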
In principle, the grand potential can also be reconstructed phenomenologically, directly from numerical simulations or observations of time series of the population size N_e(t) and fitness distribution Z(q, t). Given such data, evolutionary temperature T can be calculated using [5.3], and the distributions p_T(K) of the number of adaptable variables K ∝ log N_e can be estimated for a given temperature T. Then, the grand potential is reconstructed from the cumulants κ_n(T) of the distributions p_T(K):

Ω(T, μ) = −T ∑_{n=1}^{∞} (κ_n(T)/n!) (μ/T)^n   [6.6]

and the average loss function U(S, K) is obtained by Legendre transformation from variables (T, μ) to (S, K). Obviously, the phenomenological reconstruction of the thermodynamic potentials Ω(T, μ) and U(S, K) is feasible only if the evolving learning system can be observed over a long period of time, during which the system visits different equilibrium states at different temperatures. More realistically, the observation can be limited to either a fixed temperature T or a fixed number of adaptable variables K, and then the thermodynamic potentials would be reconstructed in the respective variables only, that is, in K and μ or in T and S.
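The cumulant reconstruction of Eq. 6.6 can be illustrated with a distribution whose cumulants are known exactly; here p_T(K) is taken to be Poisson purely for illustration (every cumulant equals the mean λ), so the series can be compared against the cumulant generating function it resums.

```python
import math
import numpy as np

# Hypothetical temperature, evolutionary potential, and Poisson mean.
T, mu, lam = 1.2, 0.4, 3.0

# Partial sum of Eq. 6.6: Omega(T, mu) = -T * sum_n kappa_n(T)/n! (mu/T)**n,
# with every Poisson cumulant kappa_n = lam.
Omega_series = -T * sum(lam / math.factorial(n) * (mu / T)**n
                        for n in range(1, 30))

# Direct evaluation of -T * log <exp(mu K / T)> over p_T(K) = Poisson(lam),
# truncated at K = 60 (the Poisson tail there is negligible).
K = np.arange(0, 61)
log_pK = -lam + K * np.log(lam) - np.array([math.lgamma(k + 1) for k in K])
Omega_direct = -T * np.log(np.sum(np.exp(log_pK + mu * K / T)))

assert np.isclose(Omega_series, Omega_direct)
```

In practice the cumulants κ_n(T) would be estimated from the empirical distributions p_T(K), and the truncation order of the series would be limited by the sampling noise of the higher cumulants.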
In this section we discuss the MTE, starting from the very first such transition, the origin of life. Under the definition of life adopted by NASA, natural selection is the quintessential trait of life. Here we assume that selection emerges from learning, which appears to be a far more general feature of the processes that occur on all scales in the universe (18, 68). Indeed, any statistical ensemble of molecules is governed by some optimization principle, which is equivalent to the standard requirement of minimization of the properly chosen potential in thermodynamics. Evolving populations of organisms similarly face an optimization problem, but at face value the nature of the optimized potential is completely different. So what, if anything, is in common between thermodynamic free energy and Malthusian fitness? Here we give a specific answer to this question: The unifying feature is that, at any stage of the evolution or learning dynamics, the loss function is optimized. Thus, as also discussed in the accompanying paper (18), the origin of life is not equal to the origin of learning or selection. Instead, we associate the origin of life with a phase transition that gave rise to a distinct, highly efficient form of learning or a learning algorithm known as natural selection. Neither the nature of the statistical ensemble of molecules that preceded this phase transition nor that of the statistical ensemble of organisms that emerged from the phase transition [referred to as the Last Universal Cellular Ancestor, LUCA (69, 70)] is well understood, but at the phenomenological level we can try to determine which statistical ensembles yield the most biologically plausible results.
The origin of life can be identified with a phase transition from an ideal gas of molecules, which is often considered in the analysis of physical systems, to the ideal gas of organisms discussed in the previous section. Then, during such a transition, the grand canonical ensemble of subsystems changes from being constrained by a fixed average number of subsystems (or molecules),

⟨N_e⟩ = N̄_e,   [7.1]

to being constrained by a fixed average number of adaptable variables associated with the subsystems (or organisms),

⟨K⟩ = K̄.   [7.2]
Immediately before and immediately after the phase transition, we are dealing with the very same system, but the ensembles are described in terms of different sets of thermodynamic variables. Formally, it is possible to describe an organism by the coordinates of all atoms of which it is composed, but this is not a particularly useful language (71). Atoms (and molecules) behave in a collective manner, that is, coherently, and therefore the appropriate language to describe their behavior is the language of collective variables, similar to, for example, the dual boson approach to many-body systems (72).
According to [4.1], the total number of organisms (population size) and the number of adaptable variables are related, K ∝ log(N_e), but the choice of the constraint, [7.1] vs. [7.2], determines the choice of the statistical ensemble, which describes the state of the system. In particular, an ensemble of molecules can be described by the grand potential Ω_p(T′, M), where T′ is the physical temperature and M is the chemical potential, and an ensemble of biological subsystems can be described by the grand potential Ω_b(T, μ), where, as before, T is the evolutionary temperature and μ is the evolutionary potential. Assuming that both ensembles can coexist at some critical temperatures T_0′ and T_0, the evolutionary phase transition will occur when

Ω_p(T_0′, M_0) = Ω_b(T_0, μ_0).   [7.3]
This condition is highly nontrivial because it implies that, at the phase transition, the physical and biological potentials provide fundamentally different (or dual) descriptions of the exact same system, and all of the biological and physical quantities have different (or dual) interpretations. For example, the loss function is to be interpreted as energy in the physical description but as additive fitness [2.9] in the biological description.
An ideal gas of molecules is described by the grand potential

Ω_p(T′, M) ∝ −T′ exp(M/T′)   [7.4]

and an ideal gas of organisms is described by the grand potential [6.5],

Ω_b(T, μ) ∝ −μ^c exp(bT/μ),   [7.5]

where c = n/(n − 1) and the remaining constants of [6.5] are absorbed into b and the proportionality factor.
At higher temperatures, it is more efficient for the individual subsystems to remain independent of each other, whereas at lower temperatures collective behavior becomes advantageous, and the phase transition can take place. Consider, for example, critical parameters related by

T_0′ = exp(bT_0/μ_0)   [7.6]

and

M_0 = cT_0′ log μ_0.   [7.7]
Plugging [7.6] and [7.7] into [7.4] gives

Ω_p(T_0′, M_0) ∝ −T_0′ exp(M_0/T_0′) = −exp(bT_0/μ_0) exp(c log μ_0) = −μ_0^c exp(bT_0/μ_0) ∝ Ω_b(T_0, μ_0),   [7.8]

which is in agreement with [7.3]. The relations [7.6] and [7.7] were used here to illustrate the conditions under which the phase transition might occur, but it is also interesting to examine whether these relations actually make sense qualitatively. Eq. 7.6 implies that the energy/loss associated with learning dynamics, T_0, is logarithmically smaller than the energy/loss associated with stochastic dynamics, T_0′, but depends linearly on the energy/loss required to add a new adaptable variable to the learning system, that is, the evolutionary potential μ_0. This dependency makes sense because the learning dynamics is far more stringently constrained than the stochastic dynamics, and its efficiency critically depends on the ability to engage new adaptable degrees of freedom. Eq. 7.7 implies that the energy/loss, M_0, that is required to incorporate an additional nontrainable variable into the evolving system is logarithmically smaller than μ_0 but depends linearly on the energy/loss, T_0′, associated with stochastic dynamics. This also makes sense because it is much easier to engage nontrainable degrees of freedom, and, furthermore, the capacity of the system to do so depends on the physical temperature.
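The duality condition can be checked numerically with hypothetical values of the biological parameters, assuming the critical parameters are related as in [7.6] and [7.7], that is, T_0′ = exp(bT_0/μ_0) and M_0 = cT_0′ log μ_0.

```python
import numpy as np

# Hypothetical biological parameters and critical evolutionary state.
b, c = 0.9, 1.5
T0, mu0 = 1.1, 2.0                        # evolutionary temperature and potential

# Eqs. 7.6 and 7.7: critical physical temperature and chemical potential.
T0p = np.exp(b * T0 / mu0)                # T0' = exp(b T0 / mu0)
M0  = c * T0p * np.log(mu0)               # M0  = c T0' log(mu0)

# Grand potentials, up to a common constant factor:
Omega_p = -T0p * np.exp(M0 / T0p)         # ideal gas of molecules, Eq. 7.4
Omega_b = -mu0**c * np.exp(b * T0 / mu0)  # ideal gas of organisms, Eq. 7.5

# Eq. 7.8: the two dual descriptions agree at the transition point, Eq. 7.3.
assert np.isclose(Omega_p, Omega_b)
```

The agreement is exact by construction; the nontrivial physical content lies in whether a real system's parameters ever satisfy relations of the form [7.6] and [7.7], which is discussed below.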
It appears that, for the origin-of-life phase transition to occur, the learning system has to satisfy at least three conditions. The first one is the existence of effectively constant degrees of freedom, q^(c), which are the same in all subsystems. This condition is satisfied, for example, for an ensemble of molecules, the stability of which is a prerequisite of the evolutionary phase transition, but it does not guarantee that the transition occurs. The second condition is the existence of adaptable or active variables, q^(a), that are shared by all subsystems but whose values can vary. These are the variables that undergo learning evolution and, according to the second law of learning, adjust their values to minimize entropy. Finally, for learning and evolution to be efficient, the third condition is the existence of neutral variables, q^(n), which can become adaptable variables as learning progresses. In the language of statistical ensembles, this is equivalent to switching from a canonical ensemble with a fixed number of adaptable variables to a grand canonical ensemble where the number of adaptable variables can vary.
There are clear biological connotations of these three conditions. In the accompanying paper (18), we identify the origin of life with the advent of natural selection, which requires genomes that serve as instructions for the reproduction of organisms. Genes comprising the genomes are shared by the organisms in a population or community, forming an expandable pangenome that can acquire new genes, some of which contribute to adaptation (37). In each prokaryote genome, about 10% of the genes are rapidly replaced, with the implication that they represent neutral variables that are subject to no or weak purifying selection and comprise the genetic reservoir for adaptation, whereby they turn into adaptable variables, that is, genes subject to substantial selection (36). The essential role of gene sharing via horizontal gene transfer at the earliest stages in the evolution of life is thought to be a major factor underlying the universality of the translation system and the genetic code across all life forms (73). Strictly speaking, the transition from an ensemble of molecules to an ensemble of organisms could correspond to the emergence of protocells that lacked genomes but nevertheless displayed collective behavior and were subject to a primitive form of selection for persistence (18). The origin of genomes would then be a later event that kicked off natural selection. However, under the phenomenological approach adopted here, Eq. 7.3 covers both these stages.
The subsequent MTE, such as the origin of eukaryotic cells as a result of symbiosis between archaea and bacteria, the origin of multicellularity, or the origin of sociality, in principle follow the same scheme: One has to switch between two alternative (or dual) descriptions of the same system, that is, the grand potentials in the dual descriptions should be equal at the MTE point, similar to Eq. 7.3. Here we have only illustrated how the phase transition associated with the origin of life could be modeled phenomenologically, but essentially the same phenomenological approach should generally apply to the other MTE.
Since its emergence in the Big Bang about 13.8 billion y ago, our universe has been evolving in the overall direction of increasing entropy, according to the second law of thermodynamics. Locally, however, numerous structures emerge that are characterized by readily discernible (even if not necessarily easily described formally) order and complexity. The dynamics of such structures was addressed by nonequilibrium thermodynamics (74) but traditionally has not been described as a process involving learning or selection, although some attempts in this direction have been made (75, 76). However, when learning is conceived of as a universal process, under the world as a neural network concept (68), there is no reason not to consider all evolutionary processes in the universe within the framework of the theory of learning. Under this perspective, all systems that evolve complexity, from atoms to molecules to organisms to galaxies, learn how to predict changes in their environment with increasing accuracy, and those that succeed in such prediction are selected for their stability, ability to persist and, in some cases, to propagate. During this dynamics, learning systems that evolve multiple levels of trainable variables that substantially differ in their rates of change outcompete those without such scale separation. More specifically, as argued in the accompanying paper, scale separation is considered to be a prerequisite for the origin of life (18).
Here we combine thermodynamics of learning (17) with the theory of evolution as learning (18), in an attempt to construct a formal framework for a phenomenological description of evolution. In doing so, we continue along the lines of the previous efforts on establishing the correspondence between thermodynamics and evolution (13, 14). However, we take a more consistent statistical approach, starting from the maximum entropy principle and introducing the principal concepts of thermodynamics and learning, which find natural counterparts in evolutionary population genetics and, we believe, are indispensable for understanding evolution. The key idea of our theoretical construction is the interplay between the entropy increase in the environment dictated by the second law of thermodynamics and the entropy decrease in evolving systems (such as organisms or populations) dictated by the second law of learning (17). Thus, the evolving biological systems are open from the viewpoint of classical thermodynamics but are closed and reach equilibrium within the extended approach that includes thermodynamics of learning.
Under the statistical description of evolution, Malthusian fitness is naturally defined as the negative exponent of the average loss function, establishing the direct connection between the processes of evolution and learning. Further, evolutionary temperature is defined as the inverse of the Lagrange multiplier that constrains the average loss function. This interpretation of evolutionary temperature is related to that given by Sella and Hirsh (13), where evolutionary temperature was represented by the inverse of the effective population size, but is more general, reflecting the degree of stochasticity in the evolutionary process, which depends not only on the effective population size but also on other factors, in particular, the interaction of organisms with the environment. It should be emphasized that here we adhere to a phenomenological thermodynamics approach, under which the details of replicator dynamics are irrelevant, in contrast, for example, to the approach of Sella and Hirsh (13).
Within our theoretical framework, adaptive evolution involves primarily organisms learning to predict their environment, and accordingly, the entropy of the environment with respect to the organism is one of the key determinants of evolution. For illustration, we consider a specific phenomenological model, in which the rate of adaptive evolution reflected in the value of the loss function depends exponentially on the number of adaptable variables and also shows a power law dependence on the entropy of the environment. The number of adaptable variables, or in biological terms the number of genes or sites that are available for positive selection in a given evolving population at a given time, is itself proportional to the entropy of the environment and to the log of the effective population size. Thus, high-entropy environments promote adaptation, and then success breeds success, that is, adaptation is most effective in large populations. These predictions of the phenomenological theory are at least qualitatively compatible with the available data and are quantitatively testable as well.
Modern evolutionary theory includes an elaborate mathematical description of microevolution (12, 77), but, to our knowledge, there is no coherent theoretical representation of MTE. Here we address this problem directly and propose a theoretical framework for MTE analysis, in which the MTE are treated as phase transitions, in the technical, physical sense. Specifically, a transition is the point where two distinct grand potentials, those characterizing units at different levels, such as molecules vs. cells (organisms) in the case of the origin of life, become equal or dual. Put another way, the transition is from an ensemble of entities at a lower level of organization (for example, molecules) to an ensemble of higher-level entities (for example, organisms). At the new level of organization, the lower-level units display collective behavior and the corresponding phenomenological description applies. This formalism entails the existence of a critical (biological) temperature for the transition: The evolving systems have to be sufficiently robust and resistant to fluctuations for the transition to occur. Notably, this theory implies the existence of two distinct types of phase transitions in evolution: Apart from MTE, each event of an adaptive mutation fixation also is a bona fide transition, albeit on a much smaller scale. Of note, the origin of life has been previously described as a first-order phase transition, albeit within the framework of a specific model of replicator evolution (78). Furthermore, the transition associated with the origin of life corresponds to the transition from infrabiological entities to biological ones, the first organisms, as formulated by Szathmáry (79) following Gánti's chemoton concept. According to Gánti, life is characterized by the union of three essential features: membrane compartmentalization, autocatalytic metabolic network, and informational replicator (80, 81).
The pretransition, infrabiological (protocellular) systems only encompass the first two features, and the emergence of the informational replicators precipitates the transition, at which point all quantities describing the system have a dual meaning according to Eq. 7.3 [see also the accompanying paper (18)].
The phenomenological theory of evolution outlined here is highly abstract and requires extensive further elaboration, specification, and, most importantly, validation with empirical data; we indicate several specific predictions for which such validation appears to be straightforward. Nevertheless, even in this general form the theory achieves the crucial goal of merging learning and thermodynamics into a single, coherent framework for modeling biological evolution. Therefore, we hope that this work will stimulate the development of new directions in the study of the origin of life and other MTE. By incorporating biological evolution into the framework of learning processes, this theory implies that the emergence of complexity commensurate with life is an inherent feature of learning that occurs throughout the history of our universe. Thus, although the origin of life is likely to be rare due to the multiple constraints on the learning/evolutionary processes leading to such an event (including the requirement for the essential chemicals, concentration mechanisms, and more), it might not be an extremely improbable, lucky accident but rather a manifestation of a general evolutionary trend in a universe modeled as a learning system (18, 68).
There are no data underlying this work.
V.V. is grateful to Dr. Karl Friston and E.V.K. is grateful to Dr. Purificación López-García for helpful discussions. V.V. was supported in part by the Foundational Questions Institute and the Oak Ridge Institute for Science and Education. Y.I.W. and E.V.K. are supported by the Intramural Research Program of the NIH.
Author contributions: V.V., Y.I.W., E.V.K., and M.I.K. designed research; V.V. and M.I.K. performed research; and V.V. and E.V.K. wrote the paper.
Reviewers: S.F., University of California, Irvine; and E.S., Parmenides Foundation.
The authors declare no competing interest.
Thermodynamics of evolution and the origin of life - pnas.org
Why covering anti-evolution laws has me worried about the future of vaccines – Ars Technica
Prior to the pandemic, the opposition to vaccines was apolitical. The true believers were a small population and confined to the fringes of both major parties, with no significant representation in the political mainstream. But over the past year, political opposition to vaccine mandates has solidified, with a steady stream of bills introduced attempting to block various ways of encouraging or requiring COVID vaccinations.
This naturally led vaccine proponents to ask why these same lawmakers weren't up in arms in the many decades that schools, the military, and other organizations required vaccines against things like the measles and polio. After all, pointing out logical inconsistencies like that makes for a powerful argument, right?
Be careful what you wish for. Vaccine mandate opponents have started trying to eliminate their logical inconsistency. Unfortunately, they're doing it by trying to get rid of all mandates.
The fact that this issue has become politicized and turned state legislatures into battlegrounds has a disturbing air of familiarity to it. For over a decade, I've been tracking similar efforts in state legislatures to hamper the teaching of evolution, and there are some clear parallels between the two. If the fight over vaccines ends up going down the same route, we could be in for decades of attempts to pass similar laws and a few very dangerous losses.
To understand the parallels, you have to understand the history of evolution education in the US. Most of it is remarkably simple. In 1968, the Supreme Court issued Epperson v. Arkansas, ruling that prohibitions on the teaching of evolution were religiously motivated and thus unconstitutional. Two decades later, laws requiring that evolution be "balanced" with instruction in creationism (labelled "creation science" for this purpose) were declared unconstitutional for similar reasons. A further attempt to rebrand creationism and avoid this scrutiny was so thoroughly demolished at the District Court level that nobody bothered to appeal it to the Supreme Court.
Given all that precedent, you'd think that evolution education would be a thoroughly settled issue. If only that were true.
Instead, each year sees a small collection of bills introduced in state legislatures that attempt to undermine public education in biology. These tend to arise from two different sources. One is what you might call ignorant true believers. These are people who sincerely believe that evidence supports their sectarian religious views and are either unaware of Supreme Court precedents or believe that the Supremes would see things their way if given just another chance.
On their own, the true believers aren't very threatening. The bills they introduce are often comically unconstitutional and tend to die in committee. The problem is that these legislators and the people who elect them are all in the same political party.
That party has plenty of people in it who aren't true believers. They know that trying to smuggle creationism into schools is unconstitutional and that there's nothing traditionally Republican about trying to do an end run around the Constitution. But they recognize that the true believers are a major constituency of their party, and they want to signal to that constituency that they share values. So they engage in vice signaling, supporting things they know are wrong but will signal shared values.
In some cases, this includes disturbing levels of support for the clearly bonkers bills filed by the true believers. But in more insidious cases, the vice signaling can involve supporting bills that are carefully crafted to enable creationists without blatantly violating the Constitution. Two such bills, which claim to champion "academic freedom" while singling out evolution as in need of critical thinking, have become law in Louisiana and Tennessee.
Prior to the pandemic, another group of true believers, the people who really think that vaccines are dangerous, was a tiny minority with no real home in either of the major political parties. But Republican opposition to vaccine mandates has now given anti-vaxxers a home. There, they've merged with another set of true believers: those who think that their personal freedom isn't balanced by a responsibility to respect the freedom and safety of others.
With all of these true believers in one party, the vice signaling has started. Florida Gov. Ron DeSantis has been vaccinated and has spoken of the value of vaccinations a number of times. Yet he's tried to enforce laws that interfere with private businesses that wish to require vaccines, an effort that initial rulings have found to be unconstitutional. He's also appointed a surgeon general who refuses to say whether he's vaccinated and spent two minutes dodging a question about whether vaccines are effective before acknowledging that they are.
But the problems aren't limited to Florida. Missouri's top health official was compelled to resign even though he opposed vaccine mandates. He ran afoul of state legislators simply for saying he'd like to see more citizens vaccinated.
The list of states with bills targeting COVID vaccine mandates is long: Mississippi, Oklahoma, Iowa, South Carolina, Alabama, and more. And then there's the bill circulating in Georgia mentioned at the top, which signals that this politicization isn't limiting itself to the COVID vaccines. A number of other states appear to be pondering related efforts that target vaccines generally.
See the article here:
Why covering anti-evolution laws has me worried about the future of vaccines - Ars Technica
Posted in Evolution
Comments Off on Why covering anti-evolution laws has me worried about the future of vaccines – Ars Technica
Evolution is More Important than Environment for Water Uptake – Eos
Posted: at 6:59 am
Editors' Highlights are summaries of recent papers by AGU's journal editors.
Source: Geophysical Research Letters
Understanding of root water uptake, which supports plants' release of water into the atmosphere (known as transpiration), has remained elusive due to the difficulty of studying root systems. The problem is important, however, since climate change is expected to increase transpiration while reducing precipitation in many regions, causing a shortage of plant-available water.
A common assumption among ecologists and hydrologists is that root water uptake and access to groundwater are dictated either by gross plant similarities (for example, needleleaf vs. broadleaf, or temperate vs. tropical species) or by environment (for example, landscape position, which determines proximity to groundwater).
Knighton et al. [2021] used a global analysis of root traits and signatures of water in plant tissues to conclude that the evolutionary proximity of species determines root water uptake strategies. While there is so far little data from highly diverse tropical forests, this research suggests that the wealth of information on species' evolutionary proximities can be used to map root water uptake strategies for as-yet-unstudied species.
Citation: Knighton, J., Fricke, E., Evaristo, J., de Boer, H. J., & Wassen, M. J. [2021]. Phylogenetic underpinning of groundwater use by trees. Geophysical Research Letters, 48, e2021GL093858. https://doi.org/10.1029/2021GL093858
Valeriy Ivanov, Editor, Geophysical Research Letters
View original post here:
Evolution is More Important than Environment for Water Uptake - Eos
How to Evolve Mantyke in Pokémon Legends: Arceus – Attack of the Fanboy
Posted: at 6:59 am
There are plenty of Pokémon for players to get their hands on in Pokémon Legends: Arceus. Mantyke is one of these Pokémon, a Water/Flying-type that eventually evolves into Mantine. While Mantine can be found in the wild in the same area where its pre-evolution is found too, a certain Pokédex research requirement means completionists will still need to evolve Mantyke eventually.
Mantyke can be found in the Cobalt Coastlands, alongside most other fish Pokémon in the game. Unlike those other Pokémon, Mantyke's evolution requirements are a bit strange. There's no need to level it up, and no items are required. Instead, players need a Mantyke and Remoraid in their party at the same time. There are no further requirements; fulfill that task, and you can evolve your Mantyke just like that.
Remoraid can also be found in the Cobalt Coastlands. Since evolution in this game is performed manually, the only preparation players need to do is to catch a Mantyke and Remoraid. If your party is full by the time you catch them, simply go to the Pastures and swap out two of your party members for the time being. Despite it being required for the evolution, you won't lose your Remoraid after evolving Mantyke. It will just stay in the party, making it easy to evolve multiple Mantykes if you caught more than one.
To fully complete Mantyke's Pokédex entry, players must evolve it at least once. No further evolutions must be done to complete the task. Even so, evolving some more Mantykes will lead to a higher number of Mantines caught, another research task for players to complete. Keeping a spare Remoraid around can be helpful for completing that task, especially if players stumble across a Mass Outbreak of Mantykes. While at the Cobalt Coastlands, also consider catching and evolving some other notable Pokémon, like a Finneon or a Qwilfish.
Pokémon Legends: Arceus is an exclusive title for the Nintendo Switch.
Go here to read the rest:
How to Evolve Mantyke in Pokémon Legends: Arceus - Attack of the Fanboy