The Prometheus League
Breaking News and Updates
Category Archives: Evolution
How to create growth in an age of digital evolution – Fast Company
Posted: February 17, 2022 at 7:34 am
There were 7.83 billion people on the planet at the beginning of 2021, with 66.6% of them using mobile phones and more than 53.6% of them using social media. Digital advertising revenue was expected to account for 57% of all advertising in the United States last year. All of which is to say, it's no longer a question of whether or when a business goes digital, but how well it embraces the opportunities digital offers for growth and gaining a competitive edge.
In terms of that competition, through the third quarter of 2021 there were nearly 1.4 million applications in the U.S. for new businesses, more than in the same period of any previous year. Fueled by unemployment, pandemic concerns, and the numerous factors contributing to the Great Resignation, more people are striking out on their own, potentially adding to an already crowded digital landscape.
So, how can this cohort of new entrepreneurs position themselves for success, and how can non-digital natives shift their legacy mindset, and possibly their approach, to not only survive but thrive despite the uncertainty of what's now our normal?
Let's examine a fictional case. Carla ran a healthcare organization for years, but she hadn't recognized the level of healthcare inequity for minority populations until the pandemic. She left her job to start an advocacy and resources firm, but isn't sure how much to invest in her digital presence given access issues for her target audience, the changing nature of outreach technology, and her limited resources.
For innovators, early adopters, and founders like Carla, moving forward requires a leap of faith. Disruption starts from something known, which means ideas about the way things should work may linger. You have to be willing to separate the principles of what you're trying to achieve from the method used to achieve it.
Throughout my career, I've worked with technologies that hadn't been tested before. We made predictions that weren't always right, but they always inspired creative ideas about how to use that technology to reach more people and make their lives better.
The important thing to remember is flexibility. In my industry, out-of-home advertising is the oldest form of advertising, yet it straddles the traditional and the technological to great effect. Keep your goals and your audience in mind so you're pragmatic about how to incorporate something new into something existing. Allowing yourself to adapt will allow your business to advance.
Besides, what's the worst that could happen? You choose a path and have to pivot later. We've all shown we can do that. Or, you ignore digital innovation until it threatens your ability to meet customer expectations, dampening any chance of meaningful growth.
Growth for the sake of growth is rarely a smooth or sustainable approach. You didn't go into business to frustrate customers and drive them away, so their needs should anchor your strategy for how technology supports your growth efforts. Confront and address any gaps between what you want, what the customer needs, and what it takes to get there.
In Carla's case, she eventually envisions providing consultations and education via extended reality (XR) but doesn't have the provider or partner base to make that a useful experience for customers right now. What she can use are solutions like QR codes on transit stops and community center posters that drive her customers from the physical world of commuting and errands online to her website and social channels.
Not every company can be a unicorn and not everyone is slated for a meteoric rise to enterprise status. But organizations that reflect on what makes sense and practice a little due diligence, even in the middle of a sprint, can mitigate some of the headache and heartache of adopting new technology. For instance, you can:
Use data to confirm and adapt to customer needs to innovate more effectively on your product or service.
Partner with an expert and/or other leaders in your network on how best to build out your digital presence.
Ensure a convenient, hybrid, human experience for customers, allowing them to connect with you and share with others whether theyre online or offline.
There are additional considerations if you're transitioning from a legacy system, namely communication and buy-in. Everyone needs to understand what you're trying to achieve and why, and that the initiative is supported from the C-suite down.
Being strategic, whether you're a startup or an established brand, means not joining the next trend for the sake of change. It also means not assuming the next thing is necessarily the best thing for your customers and where your organization wants to go. It's still possible to explore the options and not become overly excited, or alternately so overwhelmed by the nature and pace of innovation that you give up.
The reality of change and driving growth through change can be daunting. It might help, then, to think of your progress in different terms. Instead of trying to run a race across an uneven and ever-changing landscape, view your progress as a ladder.
With each rung comes solid footing, propelling you upward or providing a safe space to pause. The view forward is just you, proceeding toward the top at your own pace. You may choose to climb slowly or you may choose to run, but the key is that you get to the top without breaking something or risking a fall back down.
Technology rarely leads to setbacks if you approach change with an open mind and a solid plan. The transition from print to radio to television to the Internet meant following people where they wanted to go and reaching more of them in the process. The simple truth is, you can't grow if customers can't find you. But you want them to find you open to new ideas and feedback, leveraging technology that's intuitive and appropriate for their needs, and ready to bring them along as you climb to reach your goals.
Anna Bager is President & CEO of OAAA, the national trade association that represents the out of home (OOH) advertising industry.
Worldwide High-Density Interconnect Industry to 2026 – Evolution of New Innovative Technologies Presents Opportunities – ResearchAndMarkets.com -…
Posted: at 7:34 am
DUBLIN--(BUSINESS WIRE)--The "Global High-Density Interconnect Market (2021-2026) by Application Type, Product Type, End-User Type, Geography, Competitive Analysis and the Impact of Covid-19 with Ansoff Analysis" report has been added to ResearchAndMarkets.com's offering.
The Global High-Density Interconnect Market is estimated to be USD 12.1 Bn in 2021 and is expected to reach USD 21.86 Bn by 2026, growing at a CAGR of 12.56%.
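As a quick sanity check, the projected figures are internally consistent: compounding the 2021 estimate at the quoted CAGR for five years reproduces the 2026 projection. A minimal Python check (variable names are ours, not the report's):

```python
# Verify that the quoted CAGR connects the 2021 and 2026 market sizes.
start_usd_bn = 12.1   # estimated 2021 market size, USD billions
end_usd_bn = 21.86    # projected 2026 market size, USD billions
years = 5             # 2021 -> 2026

cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.2%}")  # prints ~12.56%, matching the report
```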
The Global High-Density Interconnect Market is driven by the increasing adoption of advanced electronics, safety measures, and safety systems across the automotive, medical, consumer electronics, aerospace, and defense industries. Key factors driving the market are the growing demand for smart consumer electronics such as smartphones, gaming consoles, and wearable devices. HDI is used in many devices because it provides high reliability and interconnect density; its small size and light weight are further driving the market's growth. On the other hand, complex manufacturing processes and high construction costs are obstacles to the market's growth.
Furthermore, rapid technological change and the growing variety of functionality in electronic devices pose significant challenges for the market. Moreover, increasing demand for connected devices, growing use of HDI in automobiles, and evolving new technologies such as 5G and IoT create opportunities for the market to grow.
The Global High-Density Interconnect Market is segmented based on Application Type, Product Type, End-User Type, and Geography.
Competitive Quadrant
The report includes a Competitive Quadrant, a proprietary tool to analyze and evaluate the position of companies based on their Industry Position score and Market Performance score. The tool uses various factors for categorizing the players into four categories. Some of these factors considered for analysis are financial performance over the last 3 years, growth strategies, innovation score, new product launches, investments, growth in market share, etc.
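The report's scoring formula is proprietary, but the two-axis quadrant logic it describes can be sketched as a simple classification. In the toy sketch below, the 0.5 threshold and the quadrant labels are illustrative assumptions, not the report's methodology:

```python
def quadrant(industry_position: float, market_performance: float,
             threshold: float = 0.5) -> str:
    """Classify a company by two normalized scores in [0, 1].

    Threshold and labels are invented for illustration; the report's
    proprietary tool weighs factors such as financial performance,
    growth strategies, innovation score, and market-share growth.
    """
    high_position = industry_position >= threshold
    high_performance = market_performance >= threshold
    if high_position and high_performance:
        return "Leader"
    if high_position:
        return "Established"
    if high_performance:
        return "Challenger"
    return "Niche"

print(quadrant(0.8, 0.7))  # -> Leader
```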
For more information about this report visit https://www.researchandmarkets.com/r/fm7i64
Cisco : What’s the Next Evolution of IoT? – marketscreener.com
Posted: at 7:34 am
Perhaps you saw this in the news: Telstra recently signed a $100 million deal to provide connectivity for Intellihub's smart meters that will add 4.1 million SIMs to Telstra's IoT network, and it was all built on Cisco IoT Control Center. Just like Telstra, service providers around the globe are seeing big business opportunities in the massive explosion of low complexity, cellular-enabled devices such as utility meters, asset tracking tags, medical devices, and agricultural sensors.
With low bandwidth consumption, low levels of complexity, and deterministically predictable usage patterns, these devices are often connected over wide areas with 3GPP LPWA (NB-IoT, LTE-M) networks. LPWAN delivers reliable, cost-effective IoT connectivity for low-cost devices supporting a broad range of low-power use cases and business models, from a water meter that sends a burst of data once a week to a city parking meter that handles transactions at all hours of the day and night.
And the business opportunity is massive. Investment in digitalization is just starting and has the potential for continued growth over many years as more countries, industries, and enterprises automate human-focused activities and leverage wireless technology at scale to power this digital transformation. Just consider recent Gartner Research findings that show 3GPP LPWAN will reach 1.2 billion worldwide connections by 2025. Energy and utilities, agriculture, government, smart cities, and healthcare will account for 90 percent of NB-IoT and LTE-M connections over that time.
But adding this extremely low-cost mass IoT segment to your service offerings puts downward pressure on profitability and requires rethinking how you deliver connectivity across all segments. Will your current value chain and economic model be able to support the next wave of IoT growth? Increasing your relevance in the value chain and reconsidering the economics of managing multiple Connectivity Management Platforms (CMPs) are keys to thriving in this growing market.
Zero Touch Deployment adds relevance to increase value
Let's talk about relevance in the value chain. Today the machine builder/smart meter manufacturer understands their connected products must be factory tested, shipped to the destination, installed, configured, and managed both from a connectivity and an ongoing monitoring and analytics/operational perspective. Each step in this multi-stage process adds cost and time, and the IoT connectivity provider is often last in line when it comes to getting their share of the wallet. Without being relevant by delivering more value, IoT providers are limited to competing for revenue per device, per month. One way to achieve relevance that leads to more wallet share is to play a bigger role in the end-to-end enablement of the devices and services. Part of the vision of the digitization dream is to reduce human involvement and the resulting cost. Zero Touch Deployment (ZTD), which leverages multiple, well-defined technologies, is an excellent option for driving that relevance for service providers.
Let's revisit the workflow using ZTD. The device is manufactured, tested, shipped with the standard factory configuration, installed in a desired location with peripherals connected, and powered on. The device powers up and pulls down its pre-defined set of parameters from the cloud, and now is automatically configured, security policies are enabled, it's visible to the enterprise, and data is being collected by the cloud-based analytics. This is done leveraging capabilities that align with the IoT connectivity provider skillsets. Will you get an enterprise's attention based on this new workflow? Absolutely! By enabling end-to-end process automation that delivers cost savings and faster time to deployment, while eliminating manual errors that lead to operational costs and service downtime, ZTD increases the value you provide and improves your relevance, leading to greater wallet share.
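The ZTD bootstrap sequence described above can be summarized in a short sketch. Everything here, from the configuration payload to the helper names, is hypothetical; it illustrates the workflow, not Cisco IoT Control Center's actual API:

```python
# Hypothetical sketch of a Zero Touch Deployment (ZTD) bootstrap sequence.
# Payload shape and function names are invented for illustration.

def fetch_cloud_config(device_id: str) -> dict:
    """Stand-in for pulling the device's pre-defined parameters from the cloud."""
    return {
        "network": {"apn": "lpwa.example", "tech": "LTE-M"},
        "security": {"policy": "default-deny"},
        "analytics": {"collector": "telemetry.example"},
    }

def zero_touch_bootstrap(device_id: str) -> None:
    """Run the steps a device performs on first power-up, with no human touch."""
    config = fetch_cloud_config(device_id)            # pull pre-defined parameters
    print("network configured:", config["network"])   # connectivity comes up
    print("security enabled:", config["security"])    # policies enforced
    print("registered:", device_id)                   # visible to the enterprise
    print("telemetry started:", config["analytics"])  # cloud analytics collecting

zero_touch_bootstrap("meter-0001")
```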
Low-complexity deployments don't mean low risk
There are other ways to increase relevance. It's a mistake to think that low-complexity deployment at scale means simplified operations or reduced security risk. For example, let's assume a stationary smart meter starts roaming across a wider geographic region. Clearly the device has been compromised. But as perhaps just one out of thousands or millions of devices, determining that a single device is exhibiting anomalous behavior is practically impossible to do manually. However, machine learning can determine and detect out-of-the-ordinary patterns and help you proactively identify problem devices. So low-cost deployment at scale must go hand in hand with the necessary guard rails to quickly mitigate risk. If the IoT connectivity provider can utilize machine learning capabilities that offer real-time monitoring, rapid detection, alerting, and automated remediation of such issues, this presents another opportunity to increase value and relevance with the enterprise.
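As a toy illustration of the stationary-meter example, even a crude spread-of-positions check can flag a "roaming" device; a production platform would apply trained models across many behavioral features. All thresholds and data below are invented:

```python
# Toy anomaly check: flag a nominally stationary device whose reported
# positions drift beyond expected GPS jitter. Values are illustrative only.
from statistics import pstdev

def is_roaming(lats: list[float], lons: list[float],
               max_jitter_deg: float = 0.01) -> bool:
    """Return True if the position spread exceeds the assumed jitter bound."""
    return pstdev(lats) > max_jitter_deg or pstdev(lons) > max_jitter_deg

stationary = ([41.881, 41.882, 41.881], [-87.623, -87.624, -87.623])
roaming = ([41.881, 42.350, 43.070], [-87.623, -88.100, -89.400])
print(is_roaming(*stationary))  # False: jitter only
print(is_roaming(*roaming))     # True: the "meter" is moving
```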
What about costs?
Now that we've discussed how increased relevance leads to increased revenue, I'll take a closer look at the other side of the profitability equation: Operational Expenditures (OpEx). Let's start by considering how different types of IoT devices are handled by the connectivity provider. The CMP usually determines the experience that's delivered to the enterprise. It is very common for an IoT connectivity provider to use multiple CMP platforms, and the CMP a device is placed on depends on the price negotiated for the device and the complexity of the required management capabilities. High-value devices usually end up on platforms with rich feature sets including security, while low-complexity devices may be targeted for lower feature-set platforms.
With this ecosystem of multiple CMPs, visibility, which is key to successful and cost-effective operations, is split across the platforms. Multiple CMPs increase complexity and can require significant additional resources in development and operations. In a cost-constrained business, consolidation of platforms should be considered a serious part of any strategy to successfully capture the low-cost mass IoT market and be profitable moving forward. Ultimately a one-to-one relationship, 'One Enterprise' to 'One Platform', delivers the most comprehensive and cost-effective business outcomes.
Learn more
To effectively harness the power that IoT offers - to become both relevant and profitable as a service provider - it's important to consider new ways of thinking about your deployments. We've talked about a few ideas here like ZTD and consolidating your CMPs, but we've got plenty of other ideas that will benefit you on your digitalization journey. Come visit the IoT Control Center team at Mobile World Congress to learn more about where we see this exciting new technology advancing and learn how your organization can add relevance and obtain more wallet share. We look forward to seeing you there!
Huskie Tools Introduces the Next Evolution of Battery Powered Cutting & Compression Lineman Tools – the SLC Series – Yahoo Finance
Posted: at 7:34 am
Chicago based Tool Innovator Launches its Most Advanced Lineman Utility Industry Tools
ADDISON, Ill., Feb. 16, 2022 /PRNewswire/ -- Huskie Tools, LLC (HUSKIETOOLS.COM), located just outside of Chicago, Illinois, has built its 45-year reputation on designing and manufacturing the highest-quality, safest battery-powered hydraulic utility linemen tools in the industry. Today, Huskie Tools announces the introduction of the new streamlined SLC lineup, the next evolution in precision cutting and crimping tools.
SLC BY HUSKIE TOOLS: AN EVOLUTION IN BATTERY-POWERED HYDRAULIC CUTTING & COMPRESSION TOOLS FOR UTILITY LINEMEN
"At Huskie Tools, it is our mission to build and deliver the industry's most innovative tools that meet the demands of linemen. We know they work in tough conditions overhead through storms and underground in the dark. We took those difficulties into account when developing the new SLC line. Compact and ergonomic with an industry-only selectable 360° LED light ring, the new compact SLC tools will weather the storms with maximum reliability, durability and safety," stated Nicholas Skrobot, CEO, Huskie Tools.
YOUR JOB IS TOUGH; YOUR TOOL CHOICE IS EASY
Huskie Tools' focus for over 45 years has been developing and building the best-performing, most durable, dependable, and safest line of cutting and compression tools for linemen in the utility industry. "We have the experience, knowledge, and resources to engineer the best and safest hydraulic tools in the industry. I'm proud to be part of such a talented product team launching the SLC line. This line has incredible new features such as the 18V brushless motor to provide longer life, an electronic switch for maximum reliability, durability and ease of use, plus an industry first -- our 360° LED light ring for working in dark areas and EVS jaws with improved performance and longer life. This is the reason we are the #1 choice of experienced linemen," explained Dan Voss, Product Development Director and 30-year Huskie Tools veteran.
FIRST TO INTRODUCE BATTERY-POWERED TOOLS
Huskie Tools introduced the industry's first line of battery-powered hydraulic cutting and compression tools in 1989, and today offers over 190 battery-powered tools on three leading battery platforms: Makita, Huskie Tools, and Milwaukee. "We also have a full line of manual tools, presses, dies, and pole pullers -- no one in the industry has a deeper, wider utility industry product solution offering than Huskie Tools. We know what it's like to use our tools in a driving rainstorm with thousands of families depending on you to restore their power. Or to have to twist and contort your body just to access an underground area that needs repair. That's why we continue to offer battery-operated hydraulic tools, such as SLC, that are among the fastest, and most ergonomically designed and balanced on the market today," stated Charlie Kelly, VP, Sales.
ABOUT HUSKIE TOOLS, LLC
Born in Chicago over 45 years ago, Huskie Tools is a full-service company and proven leader in providing utility industry linemen a complete range of product solutions including battery-powered cutting and compression tools, presses, pumps, dies, manual tools and the TiiGER pole puller for all distribution, substation, transmission, overhead and underground utility projects.
FOR MORE INFORMATION: HUSKIETOOLS.COM
Dan Voss, Engineering, Product Development Director, 329955@email4pr.com, 630-485-2253
Greg Holmes, Brand Manager, 329955@email4pr.com, 630-485-2270
*NOTE: Huskie Tools is not affiliated with, authorized, or endorsed by Milwaukee Electric Tool Corporation. Milwaukee listed batteries and chargers are being re-sold by Huskie Tools. Milwaukee M18 REDLITHIUM are registered trademarks of Milwaukee Electric Tool Corporation. Huskie Tools is not affiliated with, authorized, or endorsed by Makita Corporation. Makita listed batteries and chargers are being re-sold by Huskie Tools. Makita and LXT are registered trademarks of Makita Corporation.
SOURCE Huskie Tools
Can a planet have a mind of its own? – University of Rochester
Posted: at 7:34 am
February 16, 2022
The collective activity of life, all of the microbes, plants, and animals, has changed planet Earth.
Take, for example, plants: plants invented a way of undergoing photosynthesis to enhance their own survival, but in so doing, released oxygen that changed the entire function of our planet. This is just one example of individual lifeforms performing their own tasks, but collectively having an impact on a planetary scale.
If the collective activity of life, known as the biosphere, can change the world, could the collective activity of cognition, and action based on this cognition, also change a planet? Once the biosphere evolved, Earth took on a life of its own. If a planet with life has a life of its own, can it also have a mind of its own?
These are questions posed by Adam Frank, the Helen F. and Fred H. Gowen Professor of Physics and Astronomy at the University of Rochester, and his colleagues David Grinspoon at the Planetary Science Institute and Sara Walker at Arizona State University, in a paper published in the International Journal of Astrobiology. Their self-described thought experiment combines current scientific understanding about the Earth with broader questions about how life alters a planet. In the paper, the researchers discuss what they call "planetary intelligence," the idea of cognitive activity operating on a planetary scale, to raise new ideas about the ways in which humans might tackle global issues such as climate change.
As Frank says, "If we ever hope to survive as a species, we must use our intelligence for the greater good of the planet."
Frank, Grinspoon, and Walker draw from ideas such as the Gaia hypothesis, which proposes that the biosphere interacts strongly with the non-living geological systems of air, water, and land to maintain Earth's habitable state, to explain that even a non-technologically capable species can display planetary intelligence. The key is that the collective activity of life creates a system that is self-maintaining.
For example, Frank says, many recent studies have shown how the roots of the trees in a forest connect via underground networks of fungi known as mycorrhizal networks. If one part of the forest needs nutrients, the other parts send the stressed portions the nutrients they need to survive, via the mycorrhizal network. In this way, the forest maintains its own viability.
Right now, our civilization is what the researchers call an "immature technosphere," a conglomeration of human-generated systems and technology that directly affects the planet but is not self-maintaining. For instance, the majority of our energy usage involves consuming fossil fuels that degrade Earth's oceans and atmosphere. The technology and energy we consume to survive are destroying our home planet, which will, in turn, destroy our species.
To survive as a species, then, we need to collectively work in the best interest of the planet.
But, Frank says, "we don't yet have the ability to communally respond in the best interests of the planet. There is intelligence on Earth, but there isn't planetary intelligence."
The researchers posit four stages of Earth's past and possible future to illustrate how planetary intelligence might play a role in humanity's long-term future. They also show how these stages of evolution driven by planetary intelligence may be a feature of any planet in the galaxy that evolves life and a sustainable technological civilization.
"Planets evolve through immature and mature stages, and planetary intelligence is indicative of when you get to a mature planet," Frank says. "The million-dollar question is figuring out what planetary intelligence looks like and means for us in practice because we don't know how to move to a mature technosphere yet."
Although we don't yet know specifically how planetary intelligence might manifest itself, the researchers note that a mature technosphere involves integrating technological systems with Earth through a network of feedback loops that make up a complex system.
Put simply, a complex system is anything built from smaller parts that interact in such a fashion that the overall behavior of the system is entirely dependent on the interaction. That is, the whole is more than the sum of its parts. Examples of complex systems include forests, the Internet, financial markets, and the human brain.
By its very nature, a complex system has entirely new properties that emerge when individual pieces are interacting. It is difficult to discern the personality of a human being, for instance, solely by examining the neurons in her brain.
That means it is difficult to predict exactly what properties might emerge when individuals form a planetary intelligence. However, a complex system like planetary intelligence will, according to the researchers, have two defining characteristics: it will have emergent behavior and will need to be self-maintaining.
"The biosphere figured out how to host life by itself billions of years ago by creating systems for moving around nitrogen and transporting carbon," Frank says. "Now we have to figure out how to have the same kind of self-maintaining characteristics with the technosphere."
Despite some efforts, including global bans on certain chemicals that harm the environment and a move toward using more solar energy, "we don't have planetary intelligence or a mature technosphere yet," he says. "But the whole purpose of this research is to point out where we should be headed."
Raising these questions, Frank says, will not only provide information about the past, present, and future survival of life on Earth but will also help in the search for life and civilizations outside our solar system. Frank, for instance, is the principal investigator on a NASA grant to search for technosignatures of civilizations on planets orbiting distant stars.
"We're saying the only technological civilizations we may ever see, the ones we should expect to see, are the ones that didn't kill themselves, meaning they must have reached the stage of a true planetary intelligence," he says. "That's the power of this line of inquiry: it unites what we need to know to survive the climate crisis with what might happen on any planet where life and intelligence evolve."
The Evolution of Chief Justice John Roberts – Justia Verdict
Posted: at 7:34 am
The two most influential liberal Justices of the U.S. Supreme Court, Chief Justice Earl Warren and Justice William Brennan, were Republicans appointed by President Dwight D. Eisenhower, who reportedly called them his two worst mistakes. The story of Ike's statement may be apocryphal, but the phenomenon of Republican appointees disappointing their erstwhile sponsors is real. Nixon appointee Harry Blackmun, Ford's John Paul Stevens, Reagan's Sandra Day O'Connor and Anthony Kennedy, and George H.W. Bush's David Souter all proved less reliably conservative than advertised.
Then the phenomenon apparently stopped. Shortly after his 1991 confirmation, Justice Clarence Thomas famously quipped, "I ain't evolving." He wasn't and he hasn't. With one exception, neither have any of the Justices appointed by Republican Presidents in the years since. Why not?
In a 2007 article in the Harvard Law & Policy Review, I hypothesized that as Supreme Court decisions became more politically salient to their constituents, Republican Presidents got better at screening out potential evolvers by nominating people they knew to be reliable conservatives because the nominees were familiar to the Republican legal establishment based on service in the executive branch of the federal government. Looking at the twelve Republican appointees from the Nixon administration onward, I observed that the six Justices who had not previously served in the executive branch of the federal government evolved, that is, proved to be liberal or moderate, whereas the six who had served in the federal executive did not evolve. Noting that it was far too early in their tenure to draw any conclusions about Chief Justice John Roberts and Justice Samuel Alito, I nonetheless predicted that based on their prior experience, they would both remain reliable conservatives.
I was right about Alito but wrong about Roberts.
To be sure, no one would mistake John Roberts for a liberal. He joined the leading decisions finding gun rights in the Constitution, dissented from the Court's decision establishing marriage equality, wrote the Court's opinion rejecting judicial review of political gerrymandering, and has, more broadly, steered the Court in roughly the same direction as his conservative predecessor, Chief Justice William Rehnquist (for whom Roberts clerked as a young lawyer).
Yet lately Chief Justice Roberts has been as likely to join his Democratic-appointed colleagues in high-profile cases as he is to join his fellow Republicans. In the summer of 2020 and thereafter, Roberts joined the Democratic appointees in rejecting challenges to public health regulations, even bringing along Justice Brett Kavanaugh to create a 5-4 majority for upholding the Biden administration's vaccine mandate for workers in federally funded health-care facilities last month. Despite previously dissenting from the Court's abortion rights rulings, Roberts cast the fifth and decisive vote to strike down a Louisiana abortion restriction in a 2020 case, concluding that the challenged law was indistinguishable from a Texas law the Court had only recently invalidated, even though Roberts had dissented in the Texas case.
While Roberts has sometimes played a pivotal role, since Justice Ruth Bader Ginsburg's death and her replacement by the very conservative Justice Amy Coney Barrett, the Chief's evolution has had little impact on the outcomes of cases, as there are now five Justices to his right. Thus, although Roberts joined his Democratic colleagues in voting to allow lawsuits against the Texas attorney general to block the Lone Star State's notorious SB8, which replaces public enforcement of an abortion ban with large private bounties, they were outvoted by the other Republican appointees, who permitted only a very narrow challenge and then sent the case back to the conservative U.S. Court of Appeals for the Fifth Circuit despite the Chief's statement that the district court should have been permitted to resolve the case for the abortion providers quickly. Instead, the appeals court has slow-walked the litigation while abortion after six weeks remains essentially illegal in Texas.
Perhaps most dramatically, last week the Chief Justice joined the Democratic appointees in dissenting from the majority's decision to block a lower court ruling that had invalidated Alabama's racially biased electoral map. Roberts, the author of the notorious 2013 ruling in Shelby County v. Holder, which invalidated a key provision of the Voting Rights Act, thought that this time the Court had gone too far.
Does the surprising evolution of Chief Justice Roberts matter? Perhaps not. The five Justices to his right seem intent on rolling back abortion rights, promoting gun rights, weakening the separation of church and state, and invalidating all race-based affirmative action. Roberts might even join them in some of these projects.
Still, if Roberts is becoming a moderate or a liberal, that could make a difference. It is much harder to find two unexpected votes for a liberal outcome, as one must on a 6-3 Court, than to find just one. Kavanaugh's vote in the vaccine mandate case, like Justice Neil Gorsuch's 2020 majority opinion finding that a 1964 civil rights law forbids discrimination based on sexual orientation or gender identity (which Roberts but not the other Republican appointees joined), illustrates that shifting the Chief from a presumptively conservative vote to a potential moderate or liberal vote changes the dynamic on the Court.
Whatever the ultimate impact of the Chief's evolution, we might wonder what is causing it.
To begin, we might identify a backlash effect. Even as a judge on the Eighth Circuit, Harry Blackmun was substantially more liberal than President Nixon realized, but it was not until after he wrote the majority opinion in Roe v. Wade that Blackmun, who was vilified by the right for it, became reliably liberal. Being attacked by the right played a role in what Linda Greenhouse's elegant biography aptly called Harry Blackmun's Supreme Court Journey.
(So far as I am aware) Roberts has not had to endure the picketing, hate mail, death threats, or assassination attempt that were aimed at Blackmun, but for nearly a decade he has been cursed by Republicans as an apostate for joining with his Democratic colleagues in 2012 in upholding the Affordable Care Act. That experience may well have had a moderating effect on him, especially given his commitment to a view of the Court as above politics. Roberts may have been genuinely taken aback by the suggestion that as the appointee of a Republican president, he owed the party his vote in opposition to the signature legislative achievement of a Democratic president.
As a nominee appearing before the Senate, John Roberts likened the judicial role to that of an umpire calling balls and strikes. In one sense, that is simply the kind of formalistic cant that all Supreme Court nominees feel compelled to recite. Even though everyone knows that presidents select nominees based on their values and views, to win confirmation, nominees must swear fealty to a disembodied law, as though its application in contested cases did not call upon value judgments.
For Roberts, however, the commitment to at least the appearance of an impartial judiciary is not mere confirmation fibbing, but foundational to his self-conception. Writing in The Atlantic in 2019 to review Joan Biskupic's insightful biography of Roberts, Michael O'Donnell described a war within Roberts between, on one hand, his love for the Supreme Court and the federal judiciary as institutions, and, on the other hand, the conservative commitments Roberts formed during his youth and strengthened during the Reagan administration. The backlash against the Obamacare decision led Roberts to realize that he could not always be both an institutionalist and a conservative ideologue. Roberts chose institutionalism.
We might also understand the Roberts journey as a sign of the times. Blackmun used to complain that pundits who described his evolution were wrong. He did not move left, he said; the Court moved right, and thus he only appeared to move by contrast. As Greenhouse shows, that is not entirely accurate. Neither would such an account be entirely accurate with respect to Roberts. But it would contain more than a kernel of truth.
In the 2020 Louisiana abortion case, Roberts seemed genuinely puzzled that his fellow conservatives could claim to be applying rather than overruling the recent ruling involving an identical Texas statute. In the SB8 litigation, he took much the same view: so long as the abortion right remains on the books, states should not be rewarded for circumventing it. And that was his position again last week in the Alabama Voting Rights Act case: maybe the Court should re-examine its precedents, but until it does, a lower court shouldn't be reversed for applying them faithfully.
In these and other cases, Roberts has hardly been staking out a strongly liberal or progressive position. Rather, he is simply insisting on what have hitherto been principles that liberals, moderates, and conservatives all agreed upon: apply the law on the books while it remains there. Or, more boldly, don't lie about the law.
Seen in this light, the evolution of John Roberts does look a fair bit like a man standing still while the landscape moves past him (and to the right). It also makes Roberts look a fair bit like another prominent Republican, Mike Pence. Despite sterling conservative credentials and four years spent demeaning himself as Donald Trump's Vice President, when push came to shove there was a line Pence would not cross, and it was much the same line that Roberts has been unwilling to cross. Neither man would brazenly lie about the law to further partisan ends.
It takes nothing away from the personal courage and integrity of John Roberts or Mike Pence to observe that the remarkable fact is not that they have stood up for previously uncontroversial principles but that so many of their fellow Republicans, including elite conservative lawyers who surely know better, have not.
With apologies for the ableist metaphor, on a Court of the blind, the one-eyed man is Chief.
EVOLUTION KIDS TENNIS ON BOUNX IS NOW OPEN – PRNewswire
Posted: at 7:34 am
The Bounx and Evolve9 platform collaboration is now available to tennis academies around the world.
At the Evolution Kids Tennis Coaches Conference, Feb. 8-10, Mike Barrell, founder and CEO of Evolve9, and Julian R. Ellison, founder and CEO of BounxSport, announced the new product launch to tennis academies across the globe to build more excitement, energy, and engagement for Under 10s tennis.
"We have been working closely with Mike and his Evolution Kids Tennis program for the past 12 months and have seen how his systematic approach to coaching U10 kids combined with the unique gamification and feedback functionality built inside Bounx, creates an advanced but extremely simple system for tennis academies and clubs to deploy," Ellison said. "We have taken the best of coaching insights, academy operations, and video game engagement methods to create a unique and gamified academy management experience right on a real tennis court."
"Sometimes technology can go too far and lose sight of what matters most for coaches and kids," Barrell said. "Kids must enjoy their time on court and want to come back to play more. Coaches must use the time they have on court as efficiently as possible and provide a lasting memory to players. And most importantly, academies need to create better and more sustainable business models so they can keep inspiring better tennis.
Bounx is the first sport tech that truly understands the union of the three core elements that power successful academies."
The Bounx and Evolve9 collaboration was piloted at three locations on three continents and is ready to be deployed at academies around the world.
SOURCE BounxSport
Thermodynamics of evolution and the origin of life – pnas.org
Posted: February 11, 2022 at 6:59 am
Significance
We employ the conceptual apparatus of thermodynamics to develop a phenomenological theory of evolution and of the origin of life that incorporates both equilibrium and nonequilibrium evolutionary processes within a mathematical framework of the theory of learning. The threefold correspondence is traced between the fundamental quantities of thermodynamics, the theory of learning, and the theory of evolution. Under this theory, major transitions in evolution, including the origin of life, represent specific types of physical phase transitions.
We outline a phenomenological theory of evolution and origin of life by combining the formalism of classical thermodynamics with a statistical description of learning. The maximum entropy principle constrained by the requirement for minimization of the loss function is employed to derive a canonical ensemble of organisms (population), the corresponding partition function (macroscopic counterpart of fitness), and free energy (macroscopic counterpart of additive fitness). We further define the biological counterparts of temperature (evolutionary temperature) as the measure of stochasticity of the evolutionary process and of chemical potential (evolutionary potential) as the amount of evolutionary work required to add a new trainable variable (such as an additional gene) to the evolving system. We then develop a phenomenological approach to the description of evolution, which involves modeling the grand potential as a function of the evolutionary temperature and evolutionary potential. We demonstrate how this phenomenological approach can be used to study the ideal mutation model of evolution and its generalizations. Finally, we show that, within this thermodynamics framework, major transitions in evolution, such as the transition from an ensemble of molecules to an ensemble of organisms, that is, the origin of life, can be modeled as a special case of bona fide physical phase transitions that are associated with the emergence of a new type of grand canonical ensemble and the corresponding new level of description.
Classical thermodynamics is probably the best example of the efficiency of a purely phenomenological approach for the study of an enormously broad range of physical and chemical phenomena (1, 2). According to Einstein, "It is the only physical theory of universal content, which I am convinced, that within the framework of applicability of its basic concepts will never be overthrown" (3). Indeed, the basic laws of thermodynamics were established at a time when the atomistic theory of matter was only in its infancy, and even the existence of atoms had not yet been demonstrated unequivocally. Nevertheless, these laws remained untouched by all subsequent developments in physics, with the important qualifier "within the framework of applicability of its basic concepts." This framework of applicability is known as the thermodynamic limit, the limit of a large number of particles when fluctuations are assumed to be small (4). Moreover, the concept of entropy that is central to thermodynamics was further generalized to become the cornerstone of information theory [Shannon's entropy (5)] and is currently considered to be one of the most important concepts in all of science, reaching far beyond physics (6). The conventional presentation of thermodynamics starts with the analysis of thermal machines. However, a more recently promoted and apparently much deeper approach is based on the understanding of entropy as a measure of our knowledge (or, more accurately, our ignorance) of a system (6-8). In a sense, there is no entropy other than information entropy, and the loss of information resulting from summation over a subset of the degrees of freedom is the only necessary condition to derive the Gibbs distribution and hence all the laws of thermodynamics (9, 10).
It is therefore no surprise that many attempts have been made to apply concepts of thermodynamics to problems of biology, especially to population genetics and the theory of evolution. The basic idea is straightforward: evolving populations of organisms fall within the domain of applicability of thermodynamics inasmuch as a population consists of a number of organisms sufficiently large for the predictable collective effects to dominate over the unpredictable life histories of individuals (where organisms are analogous to particles and their individual histories are analogous to thermal fluctuations). Ludwig Boltzmann prophetically espoused the connection between entropy and biological evolution: "If you ask me about my innermost conviction whether our century will be called the century of iron or the century of steam or electricity, I answer without hesitation: it will be called the century of the mechanical view of nature, the century of Darwin" (11). More specifically, the link between thermodynamics and evolution of biological populations was clearly formulated for the first time by Ronald Fisher, the principal founder of theoretical population genetics (12). Subsequently, extended efforts aimed at establishing detailed mapping between the principal quantities analyzed by thermodynamics, such as entropy, temperature, and free energy, and those central to population genetics, such as effective population size and fitness, were undertaken by Sella and Hirsh (13) and elaborated by Barton and Coe (14). The parallel is indeed clear: the smaller the population size, the stronger are the effects of random processes (genetic drift), which in physics associates naturally with temperature increase. It should be noted, however, that this crucial observation was predicated on the specific model of independent, relatively rare mutations (low mutation limit) in a constant environment, the so-called "ideal mutation" model. Among other attempts to conceptualize the relationships between thermodynamics and evolution, of special note is the work of Frank (15, 16) on applications of the maximum entropy principle, according to which the distribution of any quantity in a large ensemble of entities tends to the highest entropy distribution subject to the relevant constraints (6). The nature of such constraints, as we shall see, is the major object of inquiry in the study of evolution from the perspective of thermodynamics.
The notable parallels notwithstanding, the conceptual framework of classical thermodynamics is insufficient for an adequate description of evolving systems capable of learning. In such systems, the entropy increase caused by physical and chemical processes in the environment, under the second law of thermodynamics, competes with the entropy decrease engendered by learning, under the second law of learning (17). Indeed, learning, by definition, decreases the uncertainty in knowledge and thus should result in entropy decrease. In the accompanying paper, we describe deep, multifaceted connections between learning and evolution and outline a theory of evolution as learning (18). In particular, this theory incorporates a theoretical description of major transitions in evolution (MTE) (19, 20) and multilevel selection (21-24), two fundamental evolutionary phenomena that so far have not been fully incorporated into the theory of evolution.
Here, we make the next logical step toward a formal description of biological evolution as learning. Our main goal is to develop a macroscopic, phenomenological description of evolution in the spirit of classical thermodynamics, under the assumption that not only the number of degrees of freedom but also the number of the learning subsystems (organisms or populations) is large. These conditions correspond to the thermodynamic limit in statistical mechanics, where the statistical description is accurate.
The paper is organized as follows. In Maximum Entropy Principle Applied to Learning and Evolution we apply the maximum entropy principle to derive a canonical ensemble of organisms and to define relevant macroscopic quantities, such as partition function and free energy. In Thermodynamics of Learning we discuss the first and second laws of learning and their relations to the first and second laws of thermodynamics, in the context of biological evolution. In Phenomenology of Evolution we develop a phenomenological approach to evolution and define relevant thermodynamic potentials (such as average loss function, free energy, and grand potential) and thermodynamic parameters (such as evolutionary temperature and evolutionary potential). In Ideal Mutation Model we apply this phenomenological approach to analyze evolutionary dynamics of the ideal mutation model previously analyzed by Sella and Hirsh (13). In Ideal Gas of Organisms we demonstrate how the phenomenological description can be generalized to study more complex systems in the context of the ideal gas model. In Major Transitions in Evolution and the Origin of Life we apply the phenomenological description to model MTE, and in particular the origin of life as a phase transition from an ideal gas of molecules to an ideal gas of organisms. Finally, in Discussion, we summarize the main facets of our phenomenological theory of evolution and discuss its general implications.
To build the vocabulary of evolutionary thermodynamics (Table 1), we proceed step by step. The natural first concept to introduce is entropy, S, which is universally applicable beyond physics, thanks to the information representation of entropy (5). The relevance of entropy in general, and the maximum entropy principle in particular (6), for problems of population dynamics and evolution has been addressed previously, in particular by Frank (25, 26), and we adopt this principle as our starting point. The maximum entropy principle states that the probability distribution in a large ensemble of variables must be such that the Shannon (or Boltzmann) entropy is maximized subject to the relevant constraints. This principle is applicable to an extremely broad variety of processes, but as shown below is insufficient for an adequate description of learning and evolutionary dynamics and should be combined with the opposite principle of minimization of entropy due to the learning process, or the second law of learning (see Thermodynamics of Learning and ref. 17). Our presentation in this section could appear oversimplified, but we find this approach essential to formulate as explicitly and as generally as possible all the basic assumptions underlying thermodynamics of learning and evolution.
Table 1. Corresponding quantities in thermodynamics, machine learning, and evolutionary biology
The crucial step in treating evolution as learning is the separation of variables into trainable and nontrainable ones (18). The trainable variables are subject to evolution by natural selection and, therefore, should be related, directly or indirectly, to the replication processes, whereas nontrainable variables initially characterize the environment, which determines the criteria of selection. As an obvious example, chemical and physical parameters of the substances that serve as food for organisms are nontrainable variables, whereas the biochemical characteristics of the proteins involved in the consumption and utilization of the food molecules as building blocks and energy source are trainable variables.
Consider an arbitrary learning system described by trainable variables q and nontrainable variables x, such that nontrainable variables undergo stochastic dynamics and trainable variables undergo learning dynamics. In the limit when the nontrainable variables x have already equilibrated, but the trainable variables q are still in the process of learning, the conditional probability distribution p(x|q) over nontrainable variables x can be obtained from the maximum entropy principle whereby the Shannon (or Boltzmann) entropy

$S = -\int d^{N}x\, p(x|q) \log p(x|q)$ [2.1]

is maximized subject to the appropriate constraints on the system, such as the average loss

$\int d^{N}x\, H(x,q)\, p(x|q) = U(q)$ [2.2]
and the normalization condition

$\int d^{N}x\, p(x|q) = 1.$ [2.3]
Its simplicity notwithstanding, the condition [2.2] is crucial. This condition means, first, that learning can be mathematically described as minimization of some function U(q) of trainable variables only, and second that this function can be represented as the average of some function H(x,q) of both trainable, q, and nontrainable, x, variables over the space of the latter. Eq. 2.2 is not an assumption but rather follows from the interpretation of the function p(x|q) as the probability density over nontrainable variables, x, for a given set of trainable ones, q. This condition is quite general and can be used to study, for example, selection of the shapes of crystals (such as snowflakes), in which case H(x,q) represents Hamiltonian density.
In the context of biology, U(q) is expected to be a monotonically increasing function of Malthusian fitness R(q), that is, reproduction rate (assuming a constant environment); a specific choice of this function will be motivated below [2.9]. However, this connection cannot be taken as a definition of the loss function. In a learning process, the loss function can be any measure of ignorance, that is, inability of an organism to recognize the relevant features of the environment and to predict its behavior. According to Sir Francis Bacon's famous motto scientia potentia est, better knowledge and hence improved ability to predict the environment increases chances of an organism's survival and reproduction. However, derivation of the Malthusian fitness from the properties of the learning system requires a detailed microscopic theory such as that outlined in the accompanying paper (18). Here, we look instead at the theory from a macroscopic perspective by developing a phenomenological description of evolution.
We postulate that a system under consideration obeys the maximum entropy principle but is also learning or evolving by minimizing the average loss function U(q) [2.2]. The corresponding maximum entropy distribution can be calculated using the method of Lagrange multipliers, that is, by solving the following variational problem:

$\frac{\delta}{\delta p(x|q)}\left[S - \beta\left(\int d^{N}y\, H(y,q)\, p(y|q) - U\right) - \gamma\left(\int d^{N}y\, p(y|q) - 1\right)\right] = 0,$ [2.4]

where $\beta$ and $\gamma$ are the Lagrange multipliers which impose, respectively, the constraints [2.2] and [2.3]. The solution of [2.4] is the Boltzmann (or Gibbs) distribution

$-\log p(x|q) - 1 - \gamma - \beta H(x,q) = 0 \;\Rightarrow\; p(x|q) = \exp(-\beta H(x,q) - 1 - \gamma) = \frac{\exp(-\beta H(x,q))}{Z(\beta,q)},$ [2.5]
where

$$Z(\beta,q) \equiv \exp(1+\gamma) = \int d^N x\, \exp(-\beta H(x,q)) = \int d^N x\; \varphi(x,q)^{\beta} \tag{2.6}$$
is the partition function (Z stands for German Zustandssumme, sum over states).
Formally, the partition function Z(β,q) is simply a normalization constant in Eq. 2.5, but its dependence on β and q contains a wealth of information about the learning system and its environment. For example, if the partition function is known, then the average loss can be easily calculated by simple differentiation:

$$U(q) = \frac{\int d^N x\, H(x,q)\,\exp(-\beta H(x,q))}{\int d^N x\, \exp(-\beta H(x,q))} = -\frac{\partial}{\partial \beta}\log Z(\beta,q) = \frac{\partial}{\partial \beta}\bigl(\beta F(\beta,q)\bigr), \tag{2.7}$$

where the biological equivalent of free energy is defined as

$$F \equiv -T\log Z = -\beta^{-1}\log Z = U - TS \tag{2.8}$$
and the biological equivalent of temperature is $T = \beta^{-1}$. Evolutionary temperature is yet another key term in our vocabulary (Table 1), after entropy; it emerges as the inverse of the Lagrange multiplier β that imposes the constraint on the average loss function [2.2] and thus defines the extent of stochasticity of the process of evolution. Roughly, free energy F is the macroscopic counterpart of the loss function H or additive fitness (18), whereas, as shown below, the partition function Z is the macroscopic counterpart of Malthusian fitness,

$$\varphi(x,q) \equiv \exp(-H(x,q)). \tag{2.9}$$
The relation between the loss function and fitness is discussed in the accompanying paper (18) and in Ideal Mutation Model. In biological terms, Z represents macroscopic fitness or the sum over all possible fitness values for a given organism, that is, over all genome sequences that are compatible with survival in a given environment, whereas F represents the adaptation potential of the organism.
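To make this machinery concrete, here is a minimal numerical sketch (illustrative only; the one-dimensional loss landscape H(x), the grid, and the value of β are arbitrary choices, not taken from the analysis above). It builds the Boltzmann distribution [2.5] on a discretized x and checks that U = −∂log Z/∂β [2.7] and F = U − TS [2.8] hold.

```python
import numpy as np

# Hypothetical 1D loss landscape H(x) over a grid of nontrainable states x
x = np.linspace(-3.0, 3.0, 601)
dx = x[1] - x[0]
H = 0.5 * x**2 + 0.1 * x**4          # arbitrary choice for illustration

def log_Z(beta):
    # Z(beta) = integral of exp(-beta * H(x)) dx, Eq. 2.6 (Riemann sum)
    return np.log(np.sum(np.exp(-beta * H)) * dx)

beta = 1.7
p = np.exp(-beta * H)
p /= np.sum(p) * dx                   # Boltzmann distribution, Eq. 2.5

U_direct = np.sum(H * p) * dx         # average loss, Eq. 2.2
eps = 1e-5                            # finite-difference step for d/d(beta)
U_from_Z = -(log_Z(beta + eps) - log_Z(beta - eps)) / (2 * eps)  # Eq. 2.7

S = -np.sum(p * np.log(p)) * dx       # entropy, Eq. 2.1
T = 1.0 / beta                        # evolutionary temperature
F = -T * log_Z(beta)                  # free energy, Eq. 2.8

print(U_direct, U_from_Z)             # the two estimates of U agree
print(F, U_direct - T * S)            # and F = U - T*S, Eq. 2.8
```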
In the rest of this analysis, we follow the previously developed approach to the thermodynamics of learning (17). Here, the key difference from conventional thermodynamics is that learning decreases the uncertainty in our knowledge of the training dataset (or of the environment, in the case of biological systems) and therefore results in an entropy decrease. Close to the learning equilibrium, this decrease exactly compensates for the thermodynamic entropy increase, and such dynamics is formally described by a time-reversible Schrödinger-like equation (17, 27). An important consequence is that, whereas in conventional thermodynamics the equilibrium corresponds to the minimum of the thermodynamic potential over all variables, in a learning equilibrium the free energy F(q) can either be minimized or maximized with respect to the trainable variables q. If, for a particular trainable variable, the entropy decrease due to learning is negligible, then the free energy is minimized, as in conventional thermodynamics, but if the entropy decrease dominates the dynamics, then the free energy is maximized. Using the terminology introduced in the accompanying paper (18), we will call the variables of the first type neutral, q(n), and those of the second type adaptable or active variables, q(a). There is also a third type of variables that are (effectively) constant or core variables, q(c), that is, those that have already been well-trained. The term neutral means that changing the values of these variables does not affect the essential properties of the system, such as its loss function or fitness, corresponding to the regime of neutral evolution. The adaptable variables comprise the bulk of the material for evolution. The core variables are most important for optimization (that is, for survival) and thus are quickly trained to their optimal values and remain more or less constant during the further process of learning (evolution). The equilibrium state corresponds to a saddle point on the free energy landscape (viewed as a function of trainable variables q), in agreement with both the first law of thermodynamics and the first law of learning (17): The change in loss/energy equals the heat added to the learning/thermodynamic system minus the work done by the system,

$$dU = T\,dS - Q\,dq, \tag{3.1}$$
where T is temperature, S is entropy, and Q is the learning/generalized force for the trainable/external variables q.
In the context of evolution, the first term in Eq. 3.1 represents the stochastic aspects of the dynamics, whereas the second term represents adaptation (learning, work). If the state of the entire learning system is such that the learning dynamics is subdominant to the stochastic dynamics, then the total entropy will increase (as is the case in regular, closed physical systems, under the second law of thermodynamics), but if learning dominates, then entropy will decrease, as is the case in learning systems, under the second law of learning (17): The total entropy of a thermodynamic system does not decrease and remains constant in the thermodynamic equilibrium, but the total entropy of a learning system does not increase and remains constant in the learning equilibrium.
If the stochastic entropy production and the decrease in entropy due to learning cancel each other out, then the overall entropy of the system remains constant and the system is in the state of learning equilibrium (see refs. 17, 27, and 28 for discussion of different aspects of the equilibrium states). This second law, when applied to biological processes, specifies and formalizes Schrödinger's idea of life as a negentropic phenomenon (29). Indeed, learning equilibrium is the fundamental stationary state of biological systems. It should be emphasized that the evolving systems we examine here are open within the context of classical thermodynamics, but they turn into closed systems that reach equilibrium when thermodynamics of learning is incorporated into the model.
On longer time scales, when q(c) remains fixed but all other variables (i.e., q(a), q(n), and x) have equilibrated, the adaptable variables q(a) can transform into neutral ones q(n), and, vice versa, neutral variables can become adaptable ones (18). In terms of statistical mechanics, such transformations can be described by generalizing the canonical ensemble with a fixed number of particles (that is, in our context, a fixed number of variables relevant for training) to a grand canonical ensemble where the number of variables can fluctuate (2). For neural networks, such fluctuations correspond to recruiting additional neurons from the environment or excluding neurons from the learning process. On a phenomenological level, these transformations can be described as finite shifts in the loss function, U → U − μ. In conventional thermodynamics, when dealing with ensembles of particles, μ is known as the chemical potential, but in the context of biological evolution we shall refer to μ as the evolutionary potential, another key term in our vocabulary of evolutionary thermodynamics and learning (Table 1). In thermodynamics, the chemical potential describes how much energy is required to move a particle from one phase to another (for example, moving one water molecule from the liquid to the gaseous phase during evaporation). Analogously, the evolutionary potential corresponds to the amount of evolutionary work (expressed, for example, as the number of mutations), that is, the magnitude of the change in the loss function, associated with the addition or removal of a single adaptable variable to or from the learning dynamics; in other words, how much work it takes to make a nonadaptable variable adaptable, or vice versa.
The concept of evolutionary potential, μ, has multiple important connotations in evolutionary biology. Indeed, it is recognized that networks of nearly neutral mutations and, more broadly, nonfunctional genetic material (junk DNA) that dominates the genomes of complex eukaryotes represent a reservoir of potential adaptations (30–33), making the evolutionary cost of adding a new adaptable variable low, which corresponds to small μ. Genomes of prokaryotes are far more tightly constrained by purifying selection and thus contain little junk DNA (34, 35); put another way, the evolutionary potential associated with such neutral genetic sequences is high in prokaryotes. However, this comparative evolutionary rigidity of prokaryote genomes is compensated by the high rate of gene replacement (36), with vast pools of diverse DNA sequences (open pangenomes) available for the acquisition of new genes, some of which can contribute to adaptation (37, 38). The cost of gene acquisition varies greatly among genes of different functional classes, as captured in the genome plasticity parameter of genome evolution that effectively corresponds to the evolutionary potential introduced here (39). For many classes of genes in prokaryotes, the evolutionary potential is relatively low, such that gene replacement represents the principal route of evolution in these life forms. In viruses, especially those with RNA and single-stranded DNA genomes, the evolutionary potential associated with gene acquisition is prohibitively high, but this is compensated by high mutation rates (40, 41), that is, low evolutionary potential associated with extensive nearly neutral mutational networks, making these networks the main source of adaptation.
Treating the learning system as a grand canonical ensemble, Eq. 3.1, which represents the first law of learning, can be rewritten as

$$dU = T\,dS + \mu\,dK, \tag{3.2}$$

where K is the number of adaptable variables. Eq. 3.2 is more macroscopic than [3.1] in the sense that not only nontrainable variables but also adaptable and neutral trainable variables are now described in terms of phenomenological, thermodynamic quantities. Roughly, the average loss associated with a single nontrainable or a single adaptable variable can be identified, respectively, with T and μ, and the total numbers of nontrainable and adaptable variables with, respectively, S and K. This correspondence stems from the fact that S and K are extensive variables, whereas T and μ are intensive ones, as in conventional thermodynamics.
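The grand canonical bookkeeping behind Eq. 3.2 can be illustrated with a toy simulation (a sketch, not part of the original analysis: the linear loss U(K) = uK and all parameter values below are assumptions). The number of adaptable variables K fluctuates under a Metropolis rule with weight exp(−(U(K) − μK)/T), and the resulting average ⟨K⟩ matches the closed-form answer for this toy choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy parameters: each adaptable variable carries loss u; adding one
# is paid for by the evolutionary potential mu; T sets the stochasticity
u, mu, T = 1.0, 0.5, 1.0

def weight(K):
    # Grand canonical weight exp(-(U(K) - mu*K)/T) with toy loss U(K) = u*K
    return np.exp(-K * (u - mu) / T)

K, samples = 0, []
for _ in range(200_000):
    K_new = K + rng.choice((-1, 1))          # propose adding/removing a variable
    if K_new >= 0 and rng.random() < min(1.0, weight(K_new) / weight(K)):
        K = K_new                            # accept per Metropolis rule
    samples.append(K)

r = np.exp(-(u - mu) / T)                    # P(K) is geometric with ratio r here
print(np.mean(samples[10_000:]))             # simulated <K>
print(r / (1 - r))                           # exact <K> for the toy model
```

Raising μ toward u makes new adaptable variables cheap and ⟨K⟩ grows, mirroring the junk-DNA discussion above.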
To describe phase transitions, we have to consider the system moving from one learning equilibrium (that is, a saddle point on the free energy landscape) to another. In terms of the microscopic dynamics, such phase transitions can involve either transitions from not fully trained adaptable variables q(a) to fully trained ones q(c) or transitions between different learning equilibria described by different values of q(c). In biological terms, the latter variety of transitions corresponds to MTE, which involve emergence of new classes of slowly changing, near constant variables (18), whereas the former variety of smaller-scale transitions corresponds to the fixation of beneficial mutations of all kinds, including capture of new genes (42), that is, adaptive evolution. In Major Transitions in Evolution and the Origin of Life we present a phenomenological description of MTE, in particular the very first one, the origin of life, which involved the transition from an ensemble of molecules to an ensemble of organisms. First, however, we describe how such ensembles can be modeled phenomenologically.
Consider an ensemble of organisms that differ from each other only by the values of the adaptable variables q(a), whereas the effectively constant variables q(c) are the same for all organisms. The latter correspond to the set of core, essential genes that are responsible for the housekeeping functions of the organisms (43). Then, the ensemble can either represent a Bayesian (subjective) probability distribution over degrees of freedom of a single organism or a frequentist (objective) probability distribution over the entire population of organisms; the connections between these two approaches are addressed in detail in the classic work of Jaynes (6). In the limit of an infinite number of organisms, the two interpretations are indistinguishable, but in the context of actual biological evolution the total number of organisms is only exponentially large,

$$N_e \propto \exp(bK), \tag{4.1}$$
and is linked to the number of adaptable variables, $K \approx \log(N_e)/b$, in a population of the given size N_e. Eq. 4.1 indicates that the effective number of variables (genes or sites in the genome) that are available for adaptation in a given population depends on the effective population size. In larger populations that are mostly immune to the effect of random genetic drift, more sites (genes) can be involved in adaptive evolution. In addition to the effective population size N_e, the number of adaptable variables depends on the coefficient b, which can be thought of as a measure of stochasticity caused by factors independent of the population size. The smaller b, the more genes can be involved in adaptation. In the biological context, this implies that the entire adaptive potential of the population is determined by mutations in a small fraction of the genome, which is indeed realistic. It has been shown that in prokaryotes the effective population size estimated from the ratio of the rates of nonsynonymous vs. synonymous mutations (dN/dS) indeed positively correlates with the number of genes in the genome and, presumably, with the number of genes that are available for adaptation (44–46).
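For a rough sense of the scale implied by Eq. 4.1 (illustrative numbers of our choosing, not estimates from the paper):

```python
import math

b = 2.0                        # hypothetical stochasticity coefficient
for Ne in (1e6, 1e9, 1e12):    # effective population sizes
    K = math.log(Ne) / b       # Eq. 4.1: K ~ log(Ne)/b
    print(f"Ne = {Ne:.0e}: K ~ {K:.1f}")
```

Even across six orders of magnitude in N_e, the number of simultaneously adaptable variables changes only from about 7 to about 14, which is the point of the logarithmic dependence.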
To study the state of learning equilibrium for a grand canonical ensemble of organisms, it is convenient to express the average loss function as

$$U(S,K) = T(S,K)\,S + \mu(S,K)\,K, \tag{4.2}$$

where the conjugate variables are, respectively, the evolutionary temperature

$$T \equiv \frac{\partial U}{\partial S} \tag{4.3}$$
and the evolutionary potential

$$\mu \equiv \frac{\partial U}{\partial K}. \tag{4.4}$$
Once again, evolutionary temperature is a measure of disorder, that is, of stochasticity in the evolutionary process, whereas evolutionary potential is a measure of adaptability. For a given phenomenological expression of the loss function [4.2], all other thermodynamic potentials, such as the free energy F(T,K) and the grand potential Ω(T,μ), can be obtained by switching to conjugate variables using Legendre transformations, i.e., S → T, K → μ.
The difference between the grand canonical ensembles in physics and in evolutionary biology should be emphasized. In physics, the grand canonical ensemble is constructed by constraining the average number of particles (2). In contrast, for the evolutionary grand canonical ensemble the constraint is imposed not on the number of organisms N_e per se but rather on the number of adaptable variables in organisms of a given species, K ∝ log(N_e), which depends on the effective population size. This key statement implies that, in our approach, the primary agency of evolution (adaptation, selection, or learning) is identified with individual genes rather than with genomes and organisms (47). Only a relatively small number of genes represent adaptable variables, that is, are subject to selection at any given time, in accordance with the classical results of population genetics (48). However, as discussed in the accompanying paper (18), our theoretical framework extends to multiple levels of biological organization and is centered around the concept of multilevel selection, such that higher-level units of selection are identified with ensembles of genes or whole genomes (organisms). Then, organisms can be treated as trainable variables (units of selection) and populations as statistical ensembles. The change in the constraint from N_e to K ∝ log(N_e) is similar to changing an ensemble with annealed disorder to one with quenched disorder in statistical physics (49). Indeed, in the case of annealed (thermal) disorder, we average the partition function itself over the disorder, whereas for quenched disorder we average the logarithm of the partition function, that is, the free energy.
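The annealed-vs-quenched distinction invoked above can be checked numerically (a sketch under assumed conditions; the disorder distribution is a hypothetical two-level toy): averaging Z over the disorder and then taking the logarithm generally differs from averaging log Z directly, by Jensen's inequality.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1.0

# Hypothetical disorder: two-level systems with energies (0, E), with E
# drawn afresh for each disorder realization
E = rng.exponential(scale=2.0, size=100_000)
Z = 1.0 + np.exp(-E / T)              # partition function per realization

annealed = -T * np.log(Z.mean())      # average Z first, then take the log
quenched = -T * np.log(Z).mean()      # average log Z (the free energy) directly
print(annealed, quenched)             # annealed <= quenched, by Jensen
```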
In this section we demonstrate how the phenomenological approach developed in the previous sections can be applied to model biological evolution in the thermodynamic limit, that is, when both the number of organisms, N_e, and the number of active degrees of freedom, K ∝ log(N_e), are sufficiently large. In such a limit, the average loss function contains all the relevant information on the learning system in equilibrium, which can be derived from a theoretical model, such as the one developed in the accompanying paper (18) using the mathematical framework of neural networks, or a phenomenological model (such as the one developed in the previous section), or reconstructed from observations or numerical simulations. In this section, we adopt a phenomenological approach to model the average loss function of a population of noninteracting organisms (that is, selection only affects individuals), and in the following section we construct a more general phenomenological model, which will also be relevant for the analysis of MTE in Major Transitions in Evolution and the Origin of Life.
Consider a population of organisms described by their genotypes q_1, …, q_{N_e}. There are rare mutations (on long time scales) from one genotype to another that are either quickly fixed or eliminated from the population (on shorter time scales), but the total number of organisms N_e remains fixed. In addition, we assume that the system is observed for a long period of time so that it has reached a learning equilibrium (that is, an evolutionarily stable configuration). In this simple model, all organisms share the same q(c), whereas all other variables have already equilibrated, but their effect on the loss function depends on the type of the variable, that is, q(a) vs. q(n) vs. x. In particular, the trainable variables of individual organisms, the q_n's, evolve in such a way that entropy is minimized on short time scales, due to the fixation of beneficial mutations, but maximized on long time scales, due to equilibration, that is, exploration of the entire nearly neutral mutational network (50, 51). Thus, the same variables evolve toward maximizing free energy on short time scales but toward minimizing free energy on longer time scales. This evolutionary trajectory is similar to the phenomenon of broken ergodicity in condensed matter systems, where short-time and ensemble (or long-time) averages can differ. The prototypical nonergodic systems in physics are (spin) glasses (52–54). The glass-like character of evolutionary phenomena was qualitatively examined previously (55, 56). Nonergodicity unavoidably involves frustrations that emerge from competing interactions (57), and such frustrations are thought to be a major driving force of biological evolution (55). In terms of the model we discuss here, the most fundamental frustration that appears to be central to evolution is caused by the competing trends of conventional thermodynamic entropy growth and the entropy decrease due to learning.
The fixation of mutations on short time scales implies that over most of the duration of evolution all organisms have the same genotype, q_1 = … = q_{N_e} = q (with some neutral variance), whereas the equilibration on the longer time scales implies that the marginal distribution of genotypes is given by the maximum entropy principle, as discussed in Maximum Entropy Principle Applied to Learning and Evolution, that is,

$$p(q) \propto \prod_{n=1}^{N_e} \int d^N x_n\, \exp\left(-\beta \sum_{n=1}^{N_e} H(x_n,q)\right) = \exp\bigl(-\beta F(q)\, N_e\bigr), \tag{5.1}$$

where the integration is taken over the states of the environment x_n for all organisms n = 1, …, N_e. This distribution was previously considered in the context of population dynamics (13), where N_e was interpreted as the inverse temperature parameter. However, as pointed out in Maximum Entropy Principle Applied to Learning and Evolution, in our framework the inverse temperature β is the Lagrange multiplier which imposes a constraint on the average loss function [2.2]. Moreover, in the context of the models considered by Sella and Hirsh (13), the distribution can also be expressed as

$$p(q) \propto Z(q)^{N_e}, \tag{5.2}$$
where the partition function $Z(q) = \exp(-\beta F(q))$ is the macroscopic counterpart of the fitness φ(x,q) (see Eq. 2.6). Eq. 5.2 implies that the evolutionary temperature has to be identified with the multiplication constant in [2.8], $T = \beta^{-1}$. Thus, this is the ideal mutation model, which allows us to establish a precise relation between the loss function and Malthusian fitness. Importantly, this relation holds only for the situation of multiple, noninteracting mutations (i.e., without epistasis).
The model of Sella and Hirsh (13) is actually the Kimura model of fixation of mutations in a finite population (58), which is based on the effect of mutations on Malthusian fitness (in the absence of epistasis). In population genetics, this model plays a role analogous to that of the ideal gas model in statistical physics and thermodynamics (59). The ideal gas model ignores interactions between molecules in the gas, and the population genetics model similarly ignores epistasis, that is, interaction between mutations. This model necessitates that the loss function is identified with minus the logarithm of Malthusian fitness (otherwise, the connection between these two quantities would be arbitrary, with the only restriction that one of them should be a monotonically decreasing function of the other). However, identification of N_e with the inverse temperature (13) does not seem to be justified equally well. For a given environment, the probability of the state [5.1] depends only on the product of N_e and β, that is, on the parameter of the Gibbs distribution. This parameter is proportional to N_e, so we could, in principle, choose the proportionality coefficient to be equal to 1 (or, more precisely, 1, 2, or 4, depending on genome ploidy and the type of reproduction), but only assuming that the properties of the environment are fixed. However, in the interest of generality, we keep the population size and the evolutionary temperature separate, interpreting the latter as an overall measure of the level of stochasticity in the evolutionary process, including effects independent of the population size.
This key point merits further explanation. The smaller the population size, the more important are evolutionary fluctuations, that is, genetic drift (60). In statistical physics, the amplitude of fluctuations increases with the temperature (2). Therefore, when the correspondence between evolutionary population genetics and thermodynamics is explored, it appears natural to identify effective population size with the inverse temperature (13, 14), which is justified inasmuch as sources of noise independent of the population size, such as changes in the environment, are disregarded. In statistical physics, the probability of a system's leaving a local optimum at a given temperature depends exponentially on the number of particles in the system, as compellingly illustrated by the phenomenon of superparamagnetism (61). For a small enough ferromagnetic particle, the total magnetic moment overcomes the anisotropy barrier and oscillates between the spin-up and spin-down directions, whereas in the thermodynamic limit these oscillations are forbidden, which results in spontaneously broken symmetry (2). Thus, the probability of drift from one optimum to another depends exponentially on the number of particles, and the identification of the latter with the effective population size appears natural. However, from a more general standpoint, effective population size is not the only parameter determining the probability of fluctuations, which is also affected by environmental factors. In particular, stochasticity increases dramatically under harsh conditions, due to stress-induced mutagenesis (62–64). Therefore, it appears preferable to represent evolutionary temperature as a general measure of evolutionary stochasticity, to which effective population size is only one of the important contributors, with other contributions coming from the mutation rate and the stochasticity of the environment. An extremely high evolutionary temperature caused by any combination of these factors can lead to a distinct type of phase transition, in which complexity is destroyed, for example due to the mutational meltdown of a population (error catastrophe) (65, 66).
Importantly, this simple model allows us to make concrete predictions for a fixed-size population, where beneficial mutations are rare and quickly proceed to fixation. If such a system evolves from one equilibrium state [at temperature $T_1 = \beta_1^{-1}$, with the fitness distribution $Z^{(1)}(q)$] to another equilibrium state [at temperature $T_2 = \beta_2^{-1}$ and fitness distribution $Z^{(2)}(q)$], then, according to [2.8], the ratios

$$\frac{\log Z^{(1)}(q)}{\log Z^{(2)}(q)} = \frac{\beta_1 F(q)}{\beta_2 F(q)} = \frac{\beta_1}{\beta_2} = \frac{T_2}{T_1} \tag{5.3}$$

are independent of q, that is, are the same for all organisms in the ensemble, regardless of their fitness (again, under the key assumption of no epistasis). Then, Eq. 5.3 can be used to measure ratios between different evolutionary temperatures and thus to define a temperature scale. Moreover, the equilibrium distribution [5.1] together with [4.1] enables us to express the average loss function

$$U(K) = \langle H(x,q)\rangle\, N_e \propto \langle H(x,q)\rangle \exp(bK), \tag{5.4}$$
where $\langle H(x,q)\rangle$ is the average loss function across individual organisms. According to Eq. 5.4, the average loss U(S,K) scales exponentially with the number of adaptable degrees of freedom K, but the dependency on entropy is not yet explicit.
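A small numerical sketch of the ideal mutation model (illustrative only; the per-genotype free energies F(q), the population size, and the two temperatures are made-up values) shows both Eq. 5.2 and the q-independence of the ratio in Eq. 5.3:

```python
import numpy as np

F = np.array([1.0, 1.3, 2.1, 3.0])   # hypothetical free energies F(q) per genotype
Ne = 50                              # hypothetical effective population size

beta1, beta2 = 2.0, 0.5              # two evolutionary inverse temperatures
Z1 = np.exp(-beta1 * F)              # Z(q) = exp(-beta * F(q)), as in Eq. 2.8
Z2 = np.exp(-beta2 * F)

print(np.log(Z1) / np.log(Z2))       # Eq. 5.3: beta1/beta2 = T2/T1, same for all q

p = Z1 ** Ne                         # equilibrium distribution, Eq. 5.2
p /= p.sum()
print(p)                             # weight concentrates on the lowest-loss genotype
```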
For phenomenological modeling of evolution it is essential to keep track not only of different organisms but also of the entropy of the environment. On the microscopic level, the overall average loss function is an integral over all nontrainable variables of all organisms, but on a more macroscopic level it can be viewed as a phenomenological function U(S,K), the microscopic details of which are irrelevant. In principle, it should be possible to reconstruct the average loss function directly from experiment or simulation, but for the purpose of illustration we first consider an analytical expression

$$U(S,K) = \langle H(x,q)\rangle\, N_e = a S^n \exp\!\left(\frac{bK}{S}\right), \tag{6.1}$$
where, in addition to the exponential dependence on K, as in [5.4], we also specify a power-law dependence on S. In particular, we assume that $\langle H(x,q)\rangle \propto S^n$, where n > 0 is a free parameter, that is, the loss function is greater in an environment with higher entropy. This factor models the effect of the environment on the loss function of individual organisms. In biological terms, this means that diverse and complex environments promote adaptive evolution. In addition, the coefficient b in [5.4] is replaced with b/S in [6.1], to model the effect of the environment on the effective number of active trainable variables. We have to emphasize that the model [6.1] is discussed here for the sole purpose of illustrating the way our formalism works. A realistic model can be built only through bioinformatic analysis of specific biological systems, which requires a major effort.
Thus, if a population of N_e organisms is capable of learning the amount of information S about the environment, then the total number of adaptable trainable variables K required for such learning scales linearly with S and logarithmically with N_e:

$$K = b^{-1} S \log(N_e). \tag{6.2}$$
The logarithmic dependence on N_e is already present in [4.1] and in [5.4], but the dependence on S is an addition introduced in the phenomenological model [6.1]. Under this model, the number of adaptable variables K is proportional to the entropy of the environment. Assuming K is also proportional to the total number of genes in the genome, the dependencies in Eq. 6.2 are at least qualitatively supported by comparative analysis of microbial genomes. Indeed, bacteria that inhabit high-entropy environments, such as soil, typically possess more genes than those that live in low-entropy environments, for example, seawater (67). Furthermore, the number of genes in bacterial genomes increases with the estimated effective population size (44–46), which also can be interpreted as taking advantage of diverse, high-entropy environments.
Given the phenomenological expression for the average loss function [6.1], the corresponding grand potential is given by the Legendre transformation

$$\Omega(T,\mu) = (1-n)\,\frac{\mu\, S(T,\mu)}{b}, \tag{6.3}$$

where the entropy should be expressed as a function of the evolutionary temperature and the evolutionary potential,

$$T = \frac{\mu}{b}\left(n + \log\!\left(\frac{ab}{\mu}\right) + (n-1)\log S\right). \tag{6.4}$$
By solving [6.4] for S and plugging the result into [6.3], we obtain the grand potential

$$\Omega(T,\mu) = (1-n)\, a^{-\frac{1}{n-1}} \left(\frac{\mu}{eb}\right)^{\frac{n}{n-1}} \exp\!\left(\frac{bT}{\mu(n-1)}\right). \tag{6.5}$$
We shall refer to the ensemble described by [6.5] as an ideal gas of organisms.
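The Legendre-transform steps between [6.1] and [6.3]–[6.5] are compressed in the text; the following derivation, using only the definitions [4.3] and [4.4], fills them in.

```latex
% Starting from U(S,K) = a S^n e^{bK/S} [6.1]:
\mu \equiv \frac{\partial U}{\partial K} = \frac{b}{S}\,U
  \quad\Rightarrow\quad U = \frac{\mu S}{b},
\qquad
T \equiv \frac{\partial U}{\partial S}
  = \frac{U}{S}\left(n - \frac{bK}{S}\right).

% Hence, using (b/S)U = \mu (so that (bK/S)U = \mu K):
\Omega = U - TS - \mu K
       = U - nU + \frac{bK}{S}\,U - \mu K
       = (1-n)\,U = (1-n)\,\frac{\mu S}{b}. \tag{6.3}

% Solving \mu = a b S^{n-1} e^{bK/S} for bK/S and substituting into T
% reproduces [6.4]; solving [6.4] for S and inserting into [6.3] gives [6.5].
```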
In principle, the grand potential can also be reconstructed phenomenologically, directly from numerical simulations or observations of time series of the population size N_e(t) and fitness distribution Z(q,t). Given such data, the evolutionary temperature T can be calculated using [5.3], and the distributions p_T(K) of the number of adaptable variables K ∝ log N_e can be estimated for a given temperature T. Then, the grand potential is reconstructed from the cumulants κ_n(T) of the distributions p_T(K),

$$\Omega(T,\mu) = -T \sum_{n=1}^{\infty} \frac{\kappa_n(T)}{n!} \left(\frac{\mu}{T}\right)^n, \tag{6.6}$$

and the average loss function U(S,K) is obtained by Legendre transformation from the variables (T,μ) to (S,K). Obviously, the phenomenological reconstruction of the thermodynamic potentials Ω(T,μ) and U(S,K) is feasible only if the evolving learning system can be observed over a long period of time, during which the system visits different equilibrium states at different temperatures. More realistically, the observation can be limited to either a fixed temperature T or a fixed number of adaptable variables K, and then the thermodynamic potentials would be reconstructed in the respective variables only, that is, in K and μ or in T and S.
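The reconstruction just described can be sketched in a few lines (a toy version: the observed K values are replaced by synthetic Poisson samples, for which all cumulants are equal, so the full series [6.6] can be summed in closed form as a cross-check; T, μ, and the Poisson mean are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
T, mu = 1.0, 0.1                     # assumed temperature and potential

# Synthetic stand-in for observed K ~ log(Ne) values at fixed temperature T
lam = 5.0
K = rng.poisson(lam=lam, size=100_000).astype(float)

# First three cumulants of p_T(K): mean, variance, third central moment
k1 = K.mean()
k2 = K.var()
k3 = ((K - k1) ** 3).mean()

# Eq. 6.6 truncated at n = 3
Omega = -T * (k1 * (mu / T) + k2 * (mu / T) ** 2 / 2 + k3 * (mu / T) ** 3 / 6)
print(Omega)

# For a Poisson p_T(K), every cumulant equals lam, so the full series sums to
# -T * lam * (exp(mu/T) - 1); the truncated estimate should sit close to this
print(-T * lam * (np.exp(mu / T) - 1))
```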
In this section we discuss the MTE, starting from the very first such transition, the origin of life. Under the definition of life adopted by NASA, natural selection is the quintessential trait of life. Here we assume that selection emerges from learning, which appears to be a far more general feature of the processes that occur on all scales in the universe (18, 68). Indeed, any statistical ensemble of molecules is governed by some optimization principle, which is equivalent to the standard requirement of minimization of the properly chosen potential in thermodynamics. Evolving populations of organisms similarly face an optimization problem, but at face value the nature of the optimized potential is completely different. So what, if anything, do thermodynamic free energy and Malthusian fitness have in common? Here we give a specific answer to this question: The unifying feature is that, at any stage of the evolution or learning dynamics, the loss function is optimized. Thus, as also discussed in the accompanying paper (18), the origin of life is not equivalent to the origin of learning or selection. Instead, we associate the origin of life with a phase transition that gave rise to a distinct, highly efficient form of learning, or a learning algorithm, known as natural selection. Neither the nature of the statistical ensemble of molecules that preceded this phase transition nor that of the statistical ensemble of organisms that emerged from it [referred to as the Last Universal Cellular Ancestor, LUCA (69, 70)] is well understood, but at the phenomenological level we can try to determine which statistical ensembles yield the most biologically plausible results.
The origin of life can be identified with a phase transition from an ideal gas of molecules, which is often considered in the analysis of physical systems, to the ideal gas of organisms discussed in the previous section. During such a transition, the grand canonical ensemble of subsystems changes from being constrained by a fixed average number of subsystems (or molecules),

$$\langle N_e \rangle = \bar{N}_e, \tag{7.1}$$

to being constrained by a fixed average number of adaptable variables associated with the subsystems (or organisms),

$$\langle K \rangle = \bar{K}. \tag{7.2}$$
Immediately before and immediately after the phase transition, we are dealing with the very same system, but the ensembles are described in terms of different sets of thermodynamic variables. Formally, it is possible to describe an organism by the coordinates of all the atoms of which it is composed, but this is not a particularly useful language (71). Atoms (and molecules) behave in a collective manner, that is, coherently, and therefore the appropriate language to describe their behavior is the language of collective variables, similar to, for example, the dual boson approach to many-body systems (72).
According to [4.1], the total number of organisms (population size) and the number of adaptable variables are related, K ∝ log(N_e), but the choice of the constraint, [7.1] vs. [7.2], determines the choice of the statistical ensemble which describes the state of the system. In particular, an ensemble of molecules can be described by the grand potential $\Omega_p(T, \mu_M)$, where T is the physical temperature and $\mu_M$ is the chemical potential, whereas an ensemble of biological subsystems can be described by the grand potential $\Omega_b(\tilde{T}, \mu)$, where $\tilde{T}$ is the evolutionary temperature (denoted T in the preceding sections; the tilde distinguishes it here from the physical temperature) and μ is the evolutionary potential. Assuming that both ensembles can coexist at some critical temperatures $T_0$ and $\tilde{T}_0$, the evolutionary phase transition will occur when

$$\Omega_p(T_0, \mu_{M0}) = \Omega_b(\tilde{T}_0, \mu_0). \tag{7.3}$$
This condition is highly nontrivial because it implies that, at the phase transition, the physical and biological potentials provide fundamentally different (or dual) descriptions of the exact same system, and all of the biological and physical quantities have different (or dual) interpretations. For example, the loss function is to be interpreted as energy in the physical description but as additive fitness in the biological description [2.9].
An ideal gas of molecules is described by the grand potential

$$\Omega_p(T, \mu_M) \propto -T \exp\!\left(\frac{\mu_M}{T}\right), \tag{7.4}$$

and an ideal gas of organisms is described by the grand potential [6.5],

$$\Omega_b(\tilde{T}, \mu) \propto -\mu^c \exp\!\left(\frac{b\tilde{T}}{\mu}\right). \tag{7.5}$$
At higher temperatures, it is more efficient for the individual subsystems to remain independent of each other, $\Omega_p < \Omega_b$, whereas at lower temperatures collective behavior becomes advantageous, $\Omega_b < \Omega_p$. The critical parameters at which the transition [7.3] takes place can be related as

$$T_0 = \exp\!\left(\frac{b\tilde{T}_0}{\mu_0}\right) \tag{7.6}$$

and

$$\mu_{M0} = c\, T_0 \log \mu_0. \tag{7.7}$$
Plugging [7.6] and [7.7] into [7.4] gives

$$\Omega_p(T_0, \mu_{M0}) \propto -T_0 \exp\!\left(\frac{\mu_{M0}}{T_0}\right) = -\exp\!\left(\frac{b\tilde{T}_0}{\mu_0}\right)\exp(c\log\mu_0) = -\exp\!\left(\frac{b\tilde{T}_0}{\mu_0}\right)\mu_0^{\,c} \propto \Omega_b(\tilde{T}_0, \mu_0), \tag{7.8}$$

which is in agreement with [7.3]. The relations [7.6] and [7.7] were used here to illustrate the conditions under which the phase transition might occur, but it is also interesting to examine whether these relations actually make sense qualitatively. Eq. 7.6 implies that the energy/loss associated with the learning dynamics, $\tilde{T}_0$, is logarithmically smaller than the energy/loss associated with the stochastic dynamics, $T_0$, but depends linearly on the energy/loss required to add a new adaptable variable to the learning system, that is, on the evolutionary potential $\mu_0$. This dependency makes sense because the learning dynamics is far more stringently constrained than the stochastic dynamics, and its efficiency critically depends on the ability to engage new adaptable degrees of freedom. Eq. 7.7 implies that the energy/loss, $\mu_{M0}$, that is required to incorporate an additional nontrainable variable into the evolving system is logarithmically smaller than $\mu_0$ but depends linearly on the energy/loss, $T_0$, associated with the stochastic dynamics. This also makes sense because it is much easier to engage nontrainable degrees of freedom, and furthermore the capacity of the system to do so depends on the physical temperature.
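The coexistence condition can be checked numerically; the sketch below is illustrative only: the parameter values are arbitrary, the proportionality prefactors in [7.4] and [7.5] are set to 1, and [7.6] and [7.7] are taken in the reconstructed form given above.

```python
import numpy as np

b, c = 1.0, 2.0                      # assumed model constants
T_evo, mu = 0.8, 0.4                 # assumed evolutionary temperature and potential

T0 = np.exp(b * T_evo / mu)          # critical physical temperature, Eq. 7.6
muM0 = c * T0 * np.log(mu)           # critical chemical potential, Eq. 7.7

Omega_p = -T0 * np.exp(muM0 / T0)            # ideal gas of molecules, Eq. 7.4
Omega_b = -mu**c * np.exp(b * T_evo / mu)    # ideal gas of organisms, Eq. 7.5

print(Omega_p, Omega_b)              # the two grand potentials coincide, Eq. 7.3
```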
It appears that for the origin of life phase transition to occur the learning system has to satisfy at least three conditions. The first one is the existence of effectively constant degrees of freedom, q(c), which are the same in all subsystems. This condition is satisfied, for example, for an ensemble of molecules, the stability of which is a prerequisite of the evolutionary phase transitions, but it does not guarantee that the transition occurs. The second condition is the existence of adaptable or active variables, q(a), that are shared by all subsystems, but their values can vary. These are the variables that undergo learning evolution and, according to the second law of learning, adjust their values to minimize entropy. Finally, for learning and evolution to be efficient, the third condition is the existence of neutral variables, q(n), which can become adaptable variables as learning progresses. In the language of statistical ensembles, this is equivalent to switching from a canonical ensemble with a fixed number of adaptable variables to a grand canonical ensemble where the number of adaptable variables can vary.
There are clear biological connotations of these three conditions. In the accompanying paper (18) we identify the origin of life with the advent of natural selection, which requires genomes that serve as instructions for the reproduction of organisms. The genes comprising the genomes are shared by the organisms in a population or community, forming an expandable pangenome that can acquire new genes, some of which contribute to adaptation (37). In each prokaryote genome, about 10% of the genes are rapidly replaced, with the implication that they represent neutral variables that are subject to no or weak purifying selection and comprise the genetic reservoir for adaptation, whereby they turn into adaptable variables, that is, genes subject to substantial selection (36). The essential role of gene sharing via horizontal gene transfer at the earliest stages in the evolution of life is thought to be a major factor underlying the universality of the translation system and the genetic code across all life forms (73). Strictly speaking, the transition from an ensemble of molecules to an ensemble of organisms could correspond to the emergence of protocells that lacked genomes but nevertheless displayed collective behavior and were subject to a primitive form of selection for persistence (18). The origin of genomes would then be a later event that kicked off natural selection. However, under the phenomenological approach adopted here, Eq. 7.3 covers both these stages.
The subsequent MTE, such as the origin of eukaryotic cells as a result of symbiosis between archaea and bacteria, the origin of multicellularity, or the origin of sociality, in principle follow the same scheme: One has to switch between two alternative (or dual) descriptions of the same system, that is, the grand potentials in the dual descriptions should be equal at the MTE point, similar to Eq. 7.3. Here we only illustrated how the phase transition associated with the origin of life could be modeled phenomenologically and argued that essentially the same phenomenological approach would generally apply to the other MTEs.
Since its emergence in the Big Bang about 13.8 billion y ago, our universe has been evolving in the overall direction of increasing entropy, according to the second law of thermodynamics. Locally, however, numerous structures emerge that are characterized by readily discernible (even if not necessarily easily described formally) order and complexity. The dynamics of such structures was addressed by nonequilibrium thermodynamics (74) but traditionally has not been described as a process involving learning or selection, although some attempts in this direction have been made (75, 76). However, when learning is conceived of as a universal process, under the world as a neural network concept (68), there is no reason not to consider all evolutionary processes in the universe within the framework of the theory of learning. Under this perspective, all systems that evolve complexity, from atoms to molecules to organisms to galaxies, learn how to predict changes in their environment with increasing accuracy, and those that succeed in such prediction are selected for their stability, ability to persist and, in some cases, to propagate. During this dynamics, learning systems that evolve multiple levels of trainable variables that substantially differ in their rates of change outcompete those without such scale separation. More specifically, as argued in the accompanying paper, scale separation is considered to be a prerequisite for the origin of life (18).
Here we combine the thermodynamics of learning (17) with the theory of evolution as learning (18) in an attempt to construct a formal framework for a phenomenological description of evolution. In doing so, we continue along the lines of previous efforts to establish the correspondence between thermodynamics and evolution (13, 14). However, we take a more consistent statistical approach, starting from the maximum entropy principle and introducing the principal concepts of thermodynamics and learning, which find natural counterparts in evolutionary population genetics and, we believe, are indispensable for understanding evolution. The key idea of our theoretical construction is the interplay between the entropy increase in the environment dictated by the second law of thermodynamics and the entropy decrease in evolving systems (such as organisms or populations) dictated by the second law of learning (17). Thus, evolving biological systems are open from the viewpoint of classical thermodynamics but are closed and reach equilibrium within the extended approach that includes the thermodynamics of learning.
Under the statistical description of evolution, Malthusian fitness is naturally defined as the exponential of the negative average loss function, establishing the direct connection between the processes of evolution and learning. Further, evolutionary temperature is defined as the inverse of the Lagrange multiplier that constrains the average loss function. This interpretation of evolutionary temperature is related to that given by Sella and Hirsh (13), where evolutionary temperature was represented by the inverse of the effective population size, but is more general, reflecting the degree of stochasticity in the evolutionary process, which depends not only on the effective population size but also on other factors, in particular the interaction of organisms with the environment. It should be emphasized that here we adhere to a phenomenological thermodynamics approach, under which the details of replicator dynamics are irrelevant, in contrast, for example, to the approach of Sella and Hirsh (13).
Within our theoretical framework, adaptive evolution involves primarily organisms learning to predict their environment, and accordingly, the entropy of the environment with respect to the organism is one of the key determinants of evolution. For illustration, we consider a specific phenomenological model, in which the rate of adaptive evolution reflected in the value of the loss function depends exponentially on the number of adaptable variables and also shows a power law dependence on the entropy of the environment. The number of adaptable variables, or in biological terms the number of genes or sites that are available for positive selection in a given evolving population at a given time, is itself proportional to the entropy of the environment and to the log of the effective population size. Thus, high-entropy environments promote adaptation, and then success breeds success, that is, adaptation is most effective in large populations. These predictions of the phenomenological theory are at least qualitatively compatible with the available data and are quantitatively testable as well.
Modern evolutionary theory includes an elaborate mathematical description of microevolution (12, 77), but, to our knowledge, there is no coherent theoretical representation of the MTE. Here we address this problem directly and propose a theoretical framework for MTE analysis, in which the MTE are treated as phase transitions in the technical, physical sense. Specifically, a transition is the point where two distinct grand potentials, those characterizing units at different levels, such as molecules vs. cells (organisms) in the case of the origin of life, become equal or dual. Put another way, the transition is from an ensemble of entities at a lower level of organization (for example, molecules) to an ensemble of higher-level entities (for example, organisms). At the new level of organization, the lower-level units display collective behavior, and the corresponding phenomenological description applies. This formalism entails the existence of a critical (biological) temperature for the transition: The evolving systems have to be sufficiently robust and resistant to fluctuations for the transition to occur. Notably, this theory implies the existence of two distinct types of phase transitions in evolution: Apart from the MTE, each event of adaptive mutation fixation also is a bona fide transition, albeit on a much smaller scale. Of note, the origin of life has been previously described as a first-order phase transition, albeit within the framework of a specific model of replicator evolution (78). Furthermore, the transition associated with the origin of life corresponds to the transition from infrabiological entities to biological ones, the first organisms, as formulated by Szathmáry (79) following Gánti's chemoton concept. According to Gánti, life is characterized by the union of three essential features: membrane compartmentalization, an autocatalytic metabolic network, and an informational replicator (80, 81). The pretransition, infrabiological (protocellular) systems encompass only the first two features, and the emergence of the informational replicators precipitates the transition, at which point all quantities describing the system acquire a dual meaning according to Eq. 7.3 [see also the accompanying paper (18)].
The phenomenological theory of evolution outlined here is highly abstract and requires extensive further elaboration, specification, and, most importantly, validation with empirical data; we indicate several specific predictions for which such validation appears to be straightforward. Nevertheless, even in this general form the theory achieves the crucial goal of merging learning and thermodynamics into a single, coherent framework for modeling biological evolution. Therefore, we hope that this work will stimulate the development of new directions in the study of the origin of life and other MTE. By incorporating biological evolution into the framework of learning processes, this theory implies that the emergence of complexity commensurate with life is an inherent feature of learning that occurs throughout the history of our universe. Thus, although the origin of life is likely to be rare due to the multiple constraints on the learning/evolutionary processes leading to such an event (including the requirement for the essential chemicals, concentration mechanisms, and more), it might not be an extremely improbable, lucky accident but rather a manifestation of a general evolutionary trend in a universe modeled as a learning system (18, 68).
There are no data underlying this work.
V.V. is grateful to Dr. Karl Friston and E.V.K. is grateful to Dr. Purificación López-García for helpful discussions. V.V. was supported in part by the Foundational Questions Institute and the Oak Ridge Institute for Science and Education. Y.I.W. and E.V.K. are supported by the Intramural Research Program of the NIH.
Author contributions: V.V., Y.I.W., E.V.K., and M.I.K. designed research; V.V. and M.I.K. performed research; and V.V. and E.V.K. wrote the paper.
Reviewers: S.F., University of California, Irvine; and E.S., Parmenides Foundation.
The authors declare no competing interest.
Follow this link:
Thermodynamics of evolution and the origin of life - pnas.org
Evolution is More Important than Environment for Water Uptake – Eos
Posted: at 6:59 am
Editors' Highlights are summaries of recent papers by AGU's journal editors.
Source: Geophysical Research Letters
Understanding of root water uptake, which supports the release of water into the atmosphere by plants (known as transpiration), has remained elusive owing to the difficulty of studying root systems. The problem is important, however, because climate change is expected to increase transpiration while reducing precipitation in many regions, causing a shortage of plant-available water.
A common assumption used by ecologists and hydrologists is that root water uptake and access to groundwater are dictated either by gross plant similarities (for example, needleleaf vs. broadleaf, or temperate vs. tropical species) or by their environment (for example, landscape position, which determines proximity to groundwater).
Knighton et al. [2021] used a global analysis of root traits and signatures of water in plant tissues to conclude that the evolutionary proximity of species determines root water uptake strategies. While there is so far little data from highly diverse tropical forests, this research suggests that the wealth of information on species' evolutionary proximities can be used to map root water uptake strategies for as-yet-unstudied species.
Citation: Knighton, J., Fricke, E., Evaristo, J., de Boer, H. J., & Wassen, M. J. [2021]. Phylogenetic underpinning of groundwater use by trees. Geophysical Research Letters, 48, e2021GL093858. https://doi.org/10.1029/2021GL093858
Valeriy Ivanov, Editor, Geophysical Research Letters
View original post here:
Evolution is More Important than Environment for Water Uptake - Eos
Why covering anti-evolution laws has me worried about the future of vaccines – Ars Technica
Posted: at 6:59 am
Prior to the pandemic, the opposition to vaccines was apolitical. The true believers were a small population and confined to the fringes of both major parties, with no significant representation in the political mainstream. But over the past year, political opposition to vaccine mandates has solidified, with a steady stream of bills introduced attempting to block various ways of encouraging or requiring COVID vaccinations.
This naturally led vaccine proponents to ask why these same lawmakers weren't up in arms during the many decades that schools, the military, and other organizations required vaccines against diseases like measles and polio. After all, pointing out logical inconsistencies like that makes for a powerful argument, right?
Be careful what you wish for. Vaccine mandate opponents have started trying to eliminate their logical inconsistency. Unfortunately, they're doing it by trying to get rid of all mandates.
The fact that this issue has become politicized and turned state legislatures into battlegrounds has a disturbing air of familiarity to it. For over a decade, I've been tracking similar efforts in state legislatures to hamper the teaching of evolution, and there are some clear parallels between the two. If the fight over vaccines ends up going down the same route, we could be in for decades of attempts to pass similar laws and a few very dangerous losses.
To understand the parallels, you have to understand the history of evolution education in the US. Most of it is remarkably simple. In 1968, the Supreme Court issued Epperson v. Arkansas, ruling that prohibitions on the teaching of evolution were religiously motivated and thus unconstitutional. Two decades later, laws requiring that evolution be "balanced" with instruction in creationism (labelled "creation science" for this purpose) were declared unconstitutional for similar reasons. A further attempt to rebrand creationism and avoid this scrutiny was so thoroughly demolished at the District Court level that nobody bothered to appeal it to the Supreme Court.
Given all that precedent, you'd think that evolution education would be a thoroughly settled issue. If only that were true.
Instead, each year sees a small collection of bills introduced in state legislatures that attempt to undermine public education in biology. These tend to arise from two different sources. One is what you might call ignorant true believers. These are people who sincerely believe that evidence supports their sectarian religious views and are either unaware of Supreme Court precedents or believe that the Supremes would see things their way if given just another chance.
On their own, the true believers aren't very threatening. The bills they introduce are often comically unconstitutional and tend to die in committee. The problem is that these legislators and the people who elect them are all in the same political party.
That party has plenty of people in it who aren't true believers. They know that trying to smuggle creationism into schools is unconstitutional and that there's nothing traditionally Republican about trying to do an end run around the Constitution. But they recognize that the true believers are a major constituency of their party, and they want to signal to that constituency that they share values. So they engage in vice signaling, supporting things they know are wrong but will signal shared values.
In some cases, this includes disturbing levels of support for the clearly bonkers bills filed by the true believers. But in more insidious cases, the vice signaling can involve supporting bills that are carefully crafted to enable creationists without blatantly violating the Constitution. Two such bills, which claim to champion "academic freedom" while singling out evolution as in need of critical thinking, have become law in Louisiana and Tennessee.
Prior to the pandemic, another group of true believers (the people who really think that vaccines are dangerous) was a tiny minority with no real home in either of the major political parties. But Republican opposition to vaccine mandates has now given anti-vaxxers a home. There, they've merged with another set of true believers: those who think that their personal freedom isn't balanced by a responsibility to respect the freedom and safety of others.
With all of these true believers in one party, the vice signaling has started. Florida Gov. Ron DeSantis has been vaccinated and has spoken of the value of vaccinations a number of times. Yet he's tried to enforce laws that interfere with private businesses that wish to require vaccines, an effort that initial rulings have found to be unconstitutional. He's also appointed a surgeon general who refuses to say whether he's vaccinated and spent two minutes dodging a question about whether vaccines are effective before acknowledging that they are.
But the problems aren't limited to Florida. Missouri's top health official was compelled to resign even though he opposed vaccine mandates. He ran afoul of state legislators simply for saying he'd like to see more citizens vaccinated.
The list of states with bills targeting COVID vaccine mandates is long: Mississippi, Oklahoma, Iowa, South Carolina, Alabama, and more. And then there's the bill circulating in Georgia we mentioned at the top, which signals that this politicization isn't limiting itself to the COVID vaccines. A number of other states appear to be pondering related efforts that target vaccines generally.
See the article here:
Why covering anti-evolution laws has me worried about the future of vaccines - Ars Technica