
Category Archives: Evolution

Kayzo Fosters the Next Evolution of Electronic-Rock Music With Third Album, "New Breed" – EDM.com

Posted: June 26, 2022 at 10:23 pm

The release of NEW BREED marks the decisive beginning of a new chapter for Kayzo, one of the biggest electronic dance music producer success stories of the 2010s.

With his latest album, Kayzo explores the intersection of rock and electronic music more thoroughly, and more thoughtfully, than ever before.

Longtime fans will fondly remember records such as his remix of "Last Resort" alongside Papa Roach as breakthrough moments in his meteoric career. But while such tracks combining dubstep production with rock sonics were fresh at the time, a new spark was needed to illuminate this creative pathway ahead.

And with NEW BREED, Kayzo handily delivers. If there's one central accomplishment Kayzo can claim on the album, it's blending many diverse rock influences with biting electronic production to create a cohesive experience.

Listeners got a taste of this with his recent single "POSER" (with conner), a pop-punk-inspired anthem wherein the electronic influence feels strategically augmentative, oftentimes subtle and seamlessly ingrained in the DNA of the production. Kayzo's creative approach continues with "BOTTLE OF RAIN," a gloomy rock offering with a lo-fi flair, and the energizing "WASTE AWAY," with yearning vocals from Kala.

For fans of Kayzo's earlier works, he's by no means forgotten how to blow the doors off. The album's titular track is a rollercoaster ride traversing multiple bass music genres before ending with a stampeding mix of hardstyle and dubstep at its coda. Meanwhile, PhaseOne and Aaron Pauley join Kayzo for "MEET YOU IN THE SOUND," taking no prisoners in a cathartic melodic dubstep concoction.

For a risk-taking blend of Kayzo old and new, "LOVE ME HATE ME" finds a seemingly perfect balance. The track additionally stands as a highlight, as he's joined by longtime partner CRAY in their first collaborative offering.

Listen to Kayzo's NEW BREED in its entirety below and stream the album here.

Facebook: facebook.com/kayzomusic
Twitter: twitter.com/KayzoMusic
Instagram: instagram.com/kayzomusic
Spotify: spoti.fi/3kiAtQO

Continue reading here:

Kayzo Fosters the Next Evolution of Electronic-Rock Music With Third Album, "New Breed" - EDM.com

Posted in Evolution | Comments Off on Kayzo Fosters the Next Evolution of Electronic-Rock Music With Third Album, "New Breed" – EDM.com

Ancient Egyptian Weapons: The Evolution of Warfare – Ancient Origins

Posted: at 10:23 pm

Anyone interested in the history of warfare or weaponry should make sure to look at ancient Egyptian weapons and how the Egyptian armies utilized their technological superiority. Whilst most famous for its architectural wonders, ancient Egypt once wielded the most terrifying fighting force the world had ever seen. The history of the rise and eventual fall of Egypt's military is ultimately all about ancient Egyptian weapons!

The Egyptian weapons of the Early Dynastic Period (3150 BC-2613 BC) were as simple as one might expect. Military weapons consisted of basic daggers, spears, and maces for melee combat along with primitive bows for long-range combat.

The spears were rudimentary and very similar to those used by predynastic Egyptian hunters. The only real advancement was the introduction of copper spearheads, which offered better penetration than the traditional flint tip. However, metallurgy in this early period was expensive, and it is unclear how widespread the use of copper spear tips was among average foot soldiers.

Troops carried a dagger as a secondary weapon. The dagger normally had a copper blade and was used either at very close range or to finish off wounded enemies. The blades were too brittle to be reliable in one-to-one combat. The second they hit bone or another blade, they were likely to chip or even snap, which was less than ideal.

The mace was another secondary weapon issued to some foot soldiers. The heads were either made of hardwood or pear-shaped stone. The mace could be used to smash through enemy shields or dispatch wounded enemies with one quick blow to the skull.

For longer ranges, the Egyptians began to use archers during this period. However, these rudimentary single-arched bows weren't much use. They were difficult to draw, had an embarrassingly short range, and were inaccurate to boot. These problems were compounded by the fact that archers were drawn from the low-class peasantry, who mostly had no experience in bow hunting.

The early foot soldiers of Egypt, in the Middle Kingdom, had simple ancient Egyptian weapons: a shield, a spear and probably a dagger but not much more! Wooden figures of the Egyptian army of the 11th Dynasty found in the tomb of Mesehti. (Udimu / CC BY-SA 3.0 )

Things began to change for the Egyptian army with the rise of Mentuhotep II of Thebes. In early Egypt, the empire was made up of different regions all led by individual nomarchs (regional leaders with their own armies) who answered to the central government. If the central government called, you and your army came running.

These small armies mostly consisted of poorly trained and poorly equipped peasant conscripts. The thinking of the time was: why waste expensive equipment and training on peasant cannon fodder?

Mentuhotep was not a fan of this system though. In 2050 BC he defeated the central government's ruling party at Herakleopolis and in doing so united the country. Egypt was now solely under the control of the Thebans.

With one united army the Egyptians could focus on military development. Previously the smaller militias had been made up of disposable peasants. As such the approach to weapons had been cheap and not very cheerful. But a proper military needed proper soldiers wielding proper weaponry.

Soldiers were now commonly armed with a dagger, sword, spear, and shield. The dagger and sword hadn't evolved much. They were both crude copper blades riveted to a handle. These rivets were a major structural weak point. The longer-bladed swords were especially prone to snapping when attempting to block an incoming blow.

Archers still carried the same single-arched bows as before. These came with all the same weaknesses. However, the Egyptian army became much better at utilizing these less-than-ideal weapons. Archers were now better trained and organized.

If troops were lucky, they were equipped with the all-new slicing ax. This was a long wooden shaft with a crescent copper blade attached at the end via a notch. The slicing ax was a two-handed weapon with excellent range. As the name implies, the ax was swung in a slicing motion (much like a scythe).

The weapon was brutally effective. Its heavy weight and momentum meant that it was next to impossible to guard against an incoming swing. A sword simply wasn't strong enough and would snap, while the ax could also brute-force its way through the rudimentary wooden shields of the time.

Finally, as foot soldiers were now an investment and less disposable, protective gear became more popular and troops were given simple leather armor. The protection was minimal but guarded against glancing blows or an errant long-range arrow. When it comes to armor, something is better than nothing.

Egyptian duckbill-shaped axe blade in the Syro-Palestinian style; this axe head technology was probably introduced by the Hyksos (1981-1550 BC). (Metropolitan Museum of Art / CC0)

Towards the end of the Middle Kingdom, the central government became increasingly complacent and weak. They took their eyes off the ball and allowed the Hyksos, a dangerous military culture who spoke a Western Semitic language and were likely Canaanites, to infiltrate their lands.

While the central Egyptian government was distracted by petty infighting, the Hyksos managed to take over lower Egypt around the city of Avaris. The Hyksos quickly established themselves and began inflicting their will on the area both politically and militarily.

The Hyksos originated from western Asia, and they were far more technologically advanced than the Egyptians. They had horse-drawn war chariots, composite bows, and more advanced weapon designs. And their melee weapons weren't so prone to snapping. For every weapon the Egyptians could field, the Hyksos already had a better version.

The Hyksos became the bogeyman of the Second Intermediate Period of Egypt (1782-1570 BC). This period is often described as the Hyksos Invasion. Propaganda from the New Kingdom of Egypt and from Manetho's history (as preserved by Josephus) makes the Hyksos sound like bloodthirsty monsters who swept across Egypt, destroying everything in their path. However, there is no archaeological evidence for this level of destruction.

During the Intermediate Period, the Hyksos held Egypt's lower (north) ports and the Nubians had much of upper (south) Egypt. Only Thebes was still ruled by Egyptians at this point. It was time to fight back.

Under Ahmose I of Thebes the Egyptians took what they learned from the Hyksos and used it against them. Ahmose I defeated the Hyksos and ejected them from lower Egypt. He then went south and did the same to the Nubians. Once again Egypt was unified, and the age of the New Kingdom had begun.

The New Kingdom was an age of unparalleled military expansion for Egypt. Never again would they be the victims of foreign invasion. As Egypt's borders expanded and it met new enemies, the Egyptians continued the rapid technological advancement of their forces.

Their armies would soon become virtually unbeatable.

The spear remained largely unchanged. It was still essentially a long stick with a sharp point at the end, but the point was now made from bronze, which was much better at holding an edge. The spear was still inexpensive but effective, and remained the primary weapon of most Egyptian troops.

The key change was the introduction of spear and shield tactics. Spearmen were equipped with shields made of wooden boards bound with animal glue and hides. The shields were basic but effective. Spearmen could hold up behind their shields and block enemy attacks before striking back with devastating medium-range spear blows.

The javelin was an evolution of the simple spear. Soldiers would carry a quiver full of javelins. These weapons were dual purpose. They could either be used as short-range spears or launched at enemy chariots and troops. Importantly, javelins were equipped with diamond-shaped metal blades that were armor-piercing.

The javelin was light, well-balanced, and easy to throw accurately. Unlike arrows, it was also reusable. As Egyptian troops advanced, they could reclaim thrown javelins.

The Egyptian Battle Ax of Baki from circa 1504-1447 BC in the New Kingdom period. Bronze or copper alloy, wood (with modern restoration), and modern rawhide. (Metropolitan Museum of Art / CC0)

Up until this period, Egypt had been using the aforementioned slicing ax. Against unarmored foes, this was still the go-to weapon, and hard to improve upon. But what if your enemy was armored?

The Egyptians soon encountered the Hittite and Syrian armies. These armies' troops wore reinforced leather jerkins which were adept at repelling slicing weapons. The Egyptians once again adapted, and the new battle-ax was born. It had a narrow, straight-edged blade designed to punch through armor with minimal resistance.

Around this time the Egyptians also discovered that utility in a weapon could be invaluable. During one pitched battle against a Canaanite city, half the Egyptian army used their axes to dig beneath the city's defenses whilst the other half used their axes to level the city's surrounding forests.

In the new and improved battle ax, the Egyptians had invented an early form of entrenching tool that is still widely used by armies today.

Before the Hyksos invasion, Egyptian swords had been brittle and easy to break. The Hyksos introduced critical advances in bronze casting technology. Now the Egyptians could cast swords as one solid piece: blade and hilt all in one. With no rivets serving as weak points the swords had greatly increased durability.

This increased durability meant that swords became much more widely used. There were two common designs: a short, dagger-shaped blade for close-range stabbing, and a longer blade designed for slashing at slightly longer ranges.

The famous Khopesh, which combines the advantages of an ax with a short sword. (Louvre Museum / CC BY-SA 2.0 FR)

What happens when you combine an ax with a short sword? You get the brutal-looking Khopesh. A moon-curved blade with the sharp edge on the outside, the Khopesh was simply terrifying to behold.

The Khopesh is another design that the Egyptians pilfered from the Hyksos. It was predominantly a secondary weapon, used to dispatch wounded enemy soldiers with one gruesome strike. Due to its vicious-looking curve, the Khopesh became a weapon of terror. Pharaohs were often depicted in paintings wielding the Khopesh to put down entire enemy armies.

Though this is not an ancient Egyptian composite bow, it is identical to what King Tutankhamen's military wielded and a huge improvement in range over the earlier simple hunting bows. The bow depicted here is Turkish, from 1719-20 AD, and made from horn, wood, pigment, sinew, lacquer, gold, silver, ivory, iron, feather, and silk. (TheMet / Public domain)

Whilst all the weapons so far had seen major improvements, none was as much of a game changer as the composite bow. Another gift of the Hyksos invasion, the composite bow completely changed how the Egyptian army approached combat.

The composite bow was long with a recurved shape. The bow was made by combining layers of birch wood, animal horn, cattle tendons, and sinews that were all glued together. This layered construction method, combined with the shape, meant the bow was much more powerful than previous designs.

A skilled archer could easily reach 250-300 meters (820-984 feet) and could fire each arrow in less than two seconds. This gave the composite bow a rate of fire and effective range comparable to some modern firearms. Unsurprisingly, it was devastatingly effective on the battlefield.

A platoon of 50 archers outfitted with these bows could inflict heavy losses on an enemy long before it had a chance to fight back, destroying morale. The only downside? The bows were incredibly expensive to make and maintain. Rather than gold, Egyptian armies would often demand new composite bows as tribute. It is said that after defeating the Libyans, Ramses III demanded over six hundred composite bows in tribute.

Egyptian war chariots were also adapted from Hyksos designs, but the Egyptian improvements were significant. This stone panel was found at the Great Temple of Ramses II in Abu Simbel, south Egypt. (Warren LeMay / CC0)

The most impressive and deadly new weapon of the New Kingdom was the chariot. The Hyksos had introduced the Egyptians to the idea of a lightweight chariot used in battle, but the Egyptians perfected it.

A war chariot was manned by two warriors. One would drive the horses and focus on short-range defense while an archer in the back focused on long-range attack. The chariot was lightweight but laden with weapons: quivers of arrows and javelins were attached to the sides, along with khopeshes and battle axes.

One chariot on its own was terrifying, but the Egyptian army would use formations of more than 100 chariots to cut through enemy lines and decimate their flanks. The chariot was essentially a mobile weapons platform zipping around the battlefield at crazy speeds.

The war chariot was no glass cannon though. The charioteers and their horses were encased in scale armor. They wore coats of bronze scales which protected them from long-range attacks whilst cranking up the intimidation factor even higher.

The combination of these heavily armored and armed war chariots and the fielding of the composite bow made the Egyptian army one of the most advanced and unbeatable in the world. They also made it one of the most expensive.

If the Egyptian army had become so impressive, there's only one question left. What went wrong?

Just like every empire that came before and every empire that has come since, the New Kingdom eventually entered a decline and began to crumble under its own weight. A grand army needs grand leadership, and that became increasingly rare. Egypt soon found itself without the resources or leadership required to wield such an impressive army.

Looking at the ancient Egyptian army and its weapons provides an important lesson in humility. Without effective leadership, a large army is little more than a weight around a country's neck. It doesn't matter how massive or technologically advanced your army is; no one stays top dog forever.

Top image: The history of Egypt is very much the history of ancient Egyptian weapons and how they evolved. Here Egyptian pharaoh Ramesses II charges his war chariot into battle against the Nubians in south Egypt. Source: Ahmed88z / CC BY-SA 4.0

By Robbie Mitchell

Read the original here:

Ancient Egyptian Weapons: The Evolution of Warfare - Ancient Origins

Posted in Evolution | Comments Off on Ancient Egyptian Weapons: The Evolution of Warfare – Ancient Origins

Evolution Health Group’s blulava agency announces expanded technology offerings, digital services, and new team members – PR Newswire

Posted: at 10:23 pm

"blulava is leveraging its heritage in medical communications and infusing advanced analytics and creative strategies to provide clients with incredibly sophisticated solutions for healthcare brands," says Managing Partner Carolyn Vogelesang Harts."This vision, combined with visionary leaders like Mr. Napolitano and Ms. Mickelberg, positions blulava to compete with the largest of the networked agencies."

blulava has expanded its 360|Connex platform with the inclusion of 360|Dialogue, a proprietary and integrated social analytics solution to measure HCP engagement and message penetration. In addition, blulava has launched 360|Collaboration, which provides a compliant platform for KOLs to solicit input, collaborate, and provide messaging feedback on clinical presentations in real time. These two platforms add greater breadth to the already industry-leading speaker bureau and KOL management solutions within the 360|Connex platform.

Jeffrey Freedman, blulava's Executive Vice President, rounds out blulava's unique vision by saying, "blulava is a creative force armed with data-science, cutting-edge technology, and laser-focused vision. We're here to build lasting human connections that transform each customer's journey, and to create meaningful impact for your business."

blulava was founded in 2019 and is headquartered in Pearl River, NY. More information about blulava offerings and services can be found at http://www.blulava.com

About Evolution Health Group, LLC

Learn how you can partner with Evolution Health Group and blulava to reimagine your brand. Please contact:

Carolyn Vogelesang Harts
Managing Partner
Evolution Health Group
1 Blue Hill Plaza, 8th Floor
Pearl River, NY 10965
[emailprotected]

SOURCE Evolution Health Group, LLC

View original post here:

Evolution Health Group's blulava agency announces expanded technology offerings, digital services, and new team members - PR Newswire

Posted in Evolution | Comments Off on Evolution Health Group’s blulava agency announces expanded technology offerings, digital services, and new team members – PR Newswire

Host control and the evolution of cooperation in host microbiomes – Nature.com

Posted: June 22, 2022 at 12:38 pm

Theory: the barriers to cooperation within the microbiome

We focus on a host and its symbiotic microbes, where both sides of the relationship can evolve to invest in traits that provide a fitness benefit to the other (Fig. 1a, Methods, Table 1). For example, microbes could invest in production of a vitamin that benefits the host, or simply evolve to be benign, e.g., a strain that competes with pathogens and refrains from breaching the epithelial barrier, even though this restraint reduces its available nutrients. Hosts, meanwhile, might direct carbon towards the symbionts, such as through the provision of glycosylated mucins.

a Cartoon of the model: Both hosts and microbiota can invest in cooperation. Hosts can also invest in host control that preferentially benefits more cooperative symbionts. Microbes migrate into the system at rate M from a fixed environmental pool of largely uncooperative microbes between host generations, and at rate m each symbiont generation within host generations (Methods, Table 1). b Example dynamics from the model. Cooperation evolves when the benefits of cooperation are high, symbiont relatedness is high (i.e. within-species diversity is low) and the microbiome is short-lived (the ratio of symbiont to host generations is 1). Increasing the number of symbiont generations within a single host generation (generation ratio) increases symbiont competition within the host, and cooperation with the host collapses (unless stated, parameters are x=y=2, R=0.5, f=0.02, g=0.1, m=1×10^-6, M=0.05). c Effect of relatedness and benefit-to-cost ratio on the evolution of cooperation. Cooperation is only stable at high relatedness, high benefit-to-cost ratio and low generation ratio. Increasing the generation ratio leads to the collapse of cooperation across a wide parameter space.

Each host generation, microbes colonise new hosts from two sources. A proportion M comes from an environmental pool, which has not coevolved with the host and, therefore, has a low baseline level of cooperation. The rest of the microbes (1-M) come from the hosts of the previous generation, based upon their frequency there. If symbionts help their host, this will increase its fitness, and this effect can feed back as a benefit that increases the frequency of the symbiont's genotype in the next host generation (a between-host effect in the terminology of social evolution23,32). Intuitively, so long as the benefits are high and the costs are low, one might predict that cooperation will evolve under these circumstances. If the symbionts, for example, evolve some level of investment in the host, this can incentivise investment by the host in return, which in turn can favour further investment by the symbionts. However, there is a potential problem with this argument. The benefit of helping a host can be countered by competition between symbionts. This effect arises because genotypes that invest their energy in cooperation are expected to, all else being equal, have less energy for survival and reproduction than non-cooperative genotypes in the same host (a within-host effect).

Many microbiomes are relatively open and diverse, which means a focal strain will experience competition from diverse microbial genotypes10. The question of how genetic diversity among social partners influences cooperation is central to evolutionary biology23,33,34, and is captured by relatedness, R (Methods)35. Distinct from phylogenetic relatedness, this term in microbes captures the extent to which the genotype of a focal cell predicts the genotypes of all cells in the species under study36. In a simple case, with one strain, the focal cell genotype will predict all cell genotypes and R=1, while for ten randomly selected strains, the genotype of any one cell will only predict one in ten of the cells' genotypes and R=0.1.
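
One simple way to formalise this relatedness measure, consistent with the two examples above (the paper's Methods define R within the full model, so this is only an illustration), is the probability that two randomly sampled cells in the niche share a genotype:

```python
# Illustrative sketch of relatedness R as used in the text above:
# the probability that a randomly chosen cell shares the genotype of a
# focal cell, which reduces to the sum of squared strain frequencies.

def relatedness(strain_frequencies):
    """R from a list of strain abundances within one niche."""
    total = sum(strain_frequencies)
    freqs = [f / total for f in strain_frequencies]
    return sum(f * f for f in freqs)

print(relatedness([1.0]))       # 1.0: a single clonal strain, R=1
print(relatedness([0.1] * 10))  # 0.1: ten equally common strains, R=0.1
```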

Why is this measure important? Consider when cooperation first emerges as a new symbiont genotype, such that the allele for cooperation is rare. When R=1, if one cell cooperates with the host, all cells will, as they are genetically identical, and all will share in the benefits, meaning that cooperation may readily evolve. By contrast, if R=0.1, if one cell cooperates with the host, only one in ten cells will cooperate, and yet all will again benefit from the cooperation. The effect is that the other 9/10 cells all get the benefit of cooperation without themselves paying the cost. The cooperative genotype, therefore, is likely to be outcompeted by these other strains. In this case, natural selection may favour symbionts that do not invest in cooperation, but receive any benefits from the cooperation of other symbionts in the microbiota. Over time, this can drive down the cooperation provided by the microbiota so far that the host no longer benefits from investing in the microbiota, and so cooperation is lost on both sides of the relationship.

We can see this effect as we decrease relatedness in the model, equivalent to increasing the number of different strains competing within the host, with a decrease in the region where cooperation is favoured (Fig. 1). Another key factor is the benefit-to-cost ratio: how much a recipient gains from cooperation relative to the costs of being cooperative. As relatedness is reduced, cooperation only evolves for a relatively high benefit-to-cost ratio (Fig. 1). Relatedness in the model captures the effects of competition between strains, i.e., strains within the same niche in a host. However, a system like the human microbiome contains many such niches and many species that fill them. Here, a requirement for a high benefit-to-cost ratio may present a significant barrier to cooperation. With many species in a host, each symbiont strain is relatively rare and, all else being equal, less able to provide strong benefits for the host. This effect suggests that, in addition to the impact of low relatedness and competition within a given niche (Fig. 1), between-species diversity may also limit the evolution of cooperation in microbiomes.

A standard model of cooperation between species, therefore, suggests that systems like the human microbiome may have limited scope for cooperative evolution. However, missing from such models is the potential for there to be many symbiont generations per host generation. For example, one human generation can take ~30 years, in contrast to symbiotic bacteria estimated to replicate on a timescale of hours37. This means that competition between strains is prolonged and chronic. Introducing this prolonged competition into the model (Methods) causes further problems for the evolution of cooperation (Fig. 1). Cooperating symbionts perform particularly poorly under these conditions, because their investment in the host makes them grow more slowly than symbionts that do not cooperate. The effect is to further decrease the likelihood of symbiont cooperation (i.e., at high generation ratios in Fig. 1, Supplementary Fig. 1). This, in turn, disincentivises the host from investing in the symbionts, which leads to a collapse of cooperation between host and microbiota.
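
To see why the generation ratio matters so much, consider a toy replicator calculation (our own illustration, with assumed numbers, not the authors' model code): a cooperator paying a small growth cost is eroded a little every symbiont generation, and the erosion compounds over a long-lived host.

```python
# Why a high symbiont-to-host generation ratio erodes cooperation:
# within one host, a cooperator paying growth cost c loses a little
# ground to cheats every symbiont generation.

def cooperator_frequency(p0, cost, generations):
    """Frequency of a cooperating genotype after within-host selection."""
    p = p0
    for _ in range(generations):
        w_coop, w_cheat = 1.0 - cost, 1.0      # assumed fitnesses
        p = p * w_coop / (p * w_coop + (1.0 - p) * w_cheat)
    return p

for g in (1, 10, 100, 1000):  # generation ratio: symbiont gens per host gen
    print(g, round(cooperator_frequency(p0=0.5, cost=0.01, generations=g), 4))
# Even a 1% cost drives cooperators toward extinction at high ratios.
```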

This prediction is robust to changes in parameters and modelling assumptions. High generation ratios lead to the collapse of cooperation across broad parameter sweeps of both relatedness and the cost-to-benefit ratio of cooperation (Fig. 1c). The shape of the relationship between the investment in cooperation and its benefit can be important in some contexts38,39. We compared a range of functional forms relating symbiont cooperation to host benefit, and found consistently that cooperation collapses at high generation ratios (Supplementary Fig. 1). Increasing symbiont immigration from the environment (M) to very high levels does generate cooperation. However, this only occurs because we assume a baseline level of cooperation in these immigrants, and this forcing effect on cooperation is again not robust to high generation ratios (Supplementary Fig. 2).

Where does the human microbiome fit within these parameter sweeps? The available estimates for average symbiont relatedness are relatively high40 but, critically, the generation ratio is extremely high due to human life span being so long relative to that of microbes. These parameters again, therefore, lead to the prediction that cooperation will collapse due to competition within hosts (Supplementary Fig. 3a).

Our findings fit well with another recent model of host-microbiota evolution, which also concluded that the conditions for cooperation were very limited in systems like the mammalian microbiota26. However, we have so far overlooked the expectation that a host is under strong selection to promote symbiont cooperation10,11,30. Hosts can promote cooperation in a variety of ways, including selective feeding, influencing adhesion to the mucosa, and, of course, via the immune system28,29,30. Animal immune systems, for example, use toll-like receptors (TLRs) to detect conserved microbial features known as microbial-associated molecular patterns (MAMPs), such as lipopolysaccharide and flagella. The presence of MAMPs can drive inflammation or other responses that target and suppress microbes41. Many of these mechanisms are of course already well known to counter specific pathogens42,43,44. Here, we are interested in their role more broadly in the evolution of a cooperative microbiota.

Our model predicts that allowing host control mechanisms to evolve will often rescue the evolution of cooperation (Fig. 2, Supplementary Fig. 1)25. This prediction fits with a growing body of theory and data in social evolution supporting the importance of control (or enforcement) mechanisms for the evolution of cooperation, including a model of the plant microbiome27,45. When is host control most important for the evolution of cooperation? At low generation ratios, we find that control will only evolve under conditions where relatedness is relatively low. This result fits with classic evolutionary theory46 and occurs because host control is less effective and useful when relatedness is high. At higher generation ratios, the effects of relatedness are weakened by extended competition and evolution within the symbionts, and host control evolves across the whole range of relatedness (Fig. 2c).

a Schematic of the model: Both hosts and microbiota can invest in cooperation and, in addition, hosts can invest in control mechanisms that favour more cooperative symbionts over less cooperative ones. Host control also negatively affects all symbionts at cost (f), and hosts pay a direct cost for control (g). b Within-host evolution of symbiont cooperation (shown here for the first host generation, as an illustration). Increasing symbiont generations per host generation (generation ratio) promotes symbiont cooperation when there is host control, but hinders cooperation when there is not. c Effect of relatedness and benefit-to-cost ratio on the evolution of cooperation. Cooperation evolves across broad parameter ranges with host control, where increasing the symbiont-to-host generation ratio only increases the range of conditions where cooperation is stable. The regions where cooperation evolves for hosts and symbionts overlap perfectly, and so we show only a single plot for cooperation. d Cooperation collapses when symbionts can evolve cooperation independently of the trait that is the target of host control. Mutualism is stable while the link between the trait and cooperation is fixed (original model), but when symbionts are allowed to evolve the trait-cooperation link, cooperation and control are quickly lost. Reinstating the relationship again renders host control effective and restores cooperation. Unless stated, parameters are: x=y=2, f=0.02, g=0.1, m=1×10^-6, M=0.05.

At high generation ratios, host control also becomes more effective, because the selection imposed by hosts now acts across many symbiont generations and has a greater impact on genotype frequencies (Fig. 2b). Interestingly, this implies that the same property that can undermine cooperation in the microbiota of long-lived hosts (Fig. 1b, c) can help to rescue cooperation if there is host control (Fig. 2, Supplementary Fig. 1). Consistent with this, when we again use parameters motivated by the human microbiome, our model predicts that host control can robustly rescue cooperation (Supplementary Fig. 3b). We also provide parameter sweeps of the costs of host control (Supplementary Fig. 4), the strength of host control (Supplementary Fig. 5), and symbiont immigration rates from the environment (Supplementary Fig. 6). As expected, higher costs of control result in hosts investing less in control at equilibrium. Nevertheless, across all parameter sweeps, the evolution of host control is widely predicted whenever there are a high number of symbiont generations per host generation. The same conclusion is reached when we consider the range of alternative relationships between symbiont cooperation and the benefit to the host (Supplementary Fig. 1).
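
The rescue effect can be seen by adding a single assumed term to the toy calculation above: a per-generation host-control bias that preferentially benefits cooperators. The functional form is our own simplification, not the paper's.

```python
# Extend the earlier sketch with host control of strength `control` that
# biases fitness toward cooperators each symbiont generation. The long
# within-host timescale that previously eroded cooperation now amplifies it.

def cooperator_frequency_with_control(p0, cost, control, generations):
    p = p0
    for _ in range(generations):
        w_coop = (1.0 - cost) * (1.0 + control)  # control favours cooperators
        w_cheat = 1.0
        p = p * w_coop / (p * w_coop + (1.0 - p) * w_cheat)
    return p

for g in (1, 10, 100, 1000):
    print(g, round(cooperator_frequency_with_control(0.5, 0.01, 0.03, g), 4))
# A modest per-generation bias (0.03) now fixes cooperation at high
# generation ratios instead of eliminating it.
```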

An exception to these conclusions occurs when there is no immigration of environmental symbionts, because here host control can collapse. This effect is well known from previous models of enforcement25,47,48. Without immigration, host control drives all symbiont genotypes to be cooperative. This lack of symbiont variability means host control no longer has a benefit and is lost, and with it, cooperation. In reality, there are many sources of symbiont variability, whether immigration or mutation, which means that host control is expected to be evolutionarily stable25. For example, in addition to general immigration of environmental genotypes (M in our model), an important source of such variability is the potential for pathogens. To account for this possibility, we developed an individual-based version of our model where we can follow a subset of immigrating genotypes that are especially costly for the host. As expected, including the potential for pathogens only increases natural selection for host control (Supplementary Fig. 7b). This result underlines the potential for host control mechanisms, and indeed cooperation in the microbiome, to be shaped by pathogens that represent a particularly high risk to a host.

A final consideration is the potential for members of the microbiota to escape from mechanisms of host control. Specifically, natural selection is expected to favour symbionts that reduce their investment in cooperation, while keeping whatever trait the host targets to exert its control. We, therefore, asked what happens if symbiont evolution can alter the link between the trait under host control and their cooperation. Figure 2d shows the impacts of this change on evolutionary dynamics. When symbionts are constrained, cooperation and control both rapidly evolve. Indeed, host investment in control is greatest early on because this is when it is most needed to select cooperative symbionts. As symbiont cooperation increases, and symbiont variability decreases, host investment in control drops, but to a stable level, which is set by the costs of control (above, Supplementary Fig. 4).

This all changes when we remove the constraint on symbiont evolution. Now, symbionts rapidly evolve to maintain the trait under host control while reducing investment in cooperation. Host control becomes ineffective because it cannot select for the more cooperative symbionts, and it is no longer favoured by natural selection, leading to the collapse of cooperation (Fig. 2d). Another prediction of the model, therefore, is that cooperation rests upon the evolution of control mechanisms that cannot easily be escaped via counter evolution in the symbionts. This prediction is similar to the idea that the immune system needs to find conserved targets for pathogen recognition44, but here we are considering host control over the microbiota as a whole. As for our earlier results, parameter sweeps confirm that this prediction is robust to changes in relatedness and cost-to-benefit ratios (Supplementary Fig. 8).

Our model predicts that host control mechanisms have been central to the evolution and maintenance of cooperation within diverse long-lived microbiomes, such as the human microbiome. The potential for host control is clear from the wide variety of mechanisms that can influence the microbiota, including the innate and adaptive immune systems of animals10. However, it is not known whether these mechanisms have been generally important for the evolution of host-associated microbiomes. A challenge for such a broad assessment is that the microbial traits associated with cooperation will typically differ among different host and symbiont species. We, therefore, sought a microbial trait that (i) is widely found and easily identified in genomic data, (ii) influences whether symbionts benefit or harm the host, and (iii) is subject to strong host control. These criteria led us to bacterial flagella.

Many bacteria possess flagella, which are used to swim and move between microenvironments. Flagella can confer strong benefits to bacteria in a host. Swimming has been shown to help bacteria persist in the mammalian gut49 and, similarly, to escape peristalsis and ejection from the zebrafish gut50. For many pathogens, flagella are also essential for reaching the epithelial layer51,52,53. Due to this latter effect, flagella are important for cooperation and whether bacteria are likely to be beneficial to a host. Specifically, possession of flagella is often associated with harm to the host, as a mechanism that allows bacteria to breach the epithelial barrier50,51,52,53,54. In E. coli, for example, only some strains appear to express flagella in the host, and these strains are associated with inflammation and disease54. Consistent with the importance for the host, the key structural component of bacterial flagella (flagellin) is amongst the most immunogenic of all microbial factors55, with a dedicated receptor in vertebrates (TLR5)56. Mice that lack this receptor have an increase in detectable flagellin in their microbiome57. Conversely, inducing the production of anti-flagellin IgA in mice decreases flagellin levels and limits the encroachment of the microbiota at the epithelial barrier58. Importantly, these experimental studies suggest that host control can limit flagellated bacteria and help in maintaining a cooperative relationship by preventing epithelial encroachment56. However, they leave open the question of how important these processes have been for the evolution of host microbiomes.

We therefore sought evidence, across animals, that host control mechanisms have served to suppress flagellated bacteria in spite of the documented benefits of swimming in the host50,51,52,53. We estimated both the frequency of flagellated species and the rate of flagella loss in environmental and host-associated bacteria using a database of 3833 sequenced bacterial strains (1262 host-associated and 2571 environmental)59 (see Materials and Methods) (Fig. 3a). Using the software BayesTraits, we assessed transitions between flagellated/non-flagellated and host/environmental bacteria, and fit the data to a simple model where the two traits are independent, and a complex model where the rate of change in flagella status was dependent on host-association status and vice versa (Fig. 3b). Comparing the likelihood of both models, we can robustly reject the simple model in favour of a complex model where the two traits are dependent (Log Bayes Factor (LogBF)=47.24). We tested for implicit biases in the dataset by performing 100 replicates with random label switching, which produced no significant results (LogBF=-42.73).
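
The model-comparison arithmetic behind these LogBF values is simple, even though the marginal likelihoods themselves come from long BayesTraits runs. A sketch, with invented log marginal likelihoods chosen only to reproduce the reported 47.24 (BayesTraits conventionally reports twice the difference in log marginal likelihoods):

```python
# Log Bayes factor comparing a dependent (complex) model of trait
# evolution against an independent (simple) one. The two log marginal
# likelihoods below are hypothetical placeholders, not values from the
# paper; in practice they come from BayesTraits runs.

def log_bayes_factor(log_ml_dependent: float, log_ml_independent: float) -> float:
    return 2.0 * (log_ml_dependent - log_ml_independent)

log_bf = log_bayes_factor(-5000.00, -5023.62)
print(log_bf)  # 47.24; values above ~10 are conventionally 'very strong'
```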

a 16S phylogeny for strains in the PATRIC representative dataset. We only show Firmicutes here as an example because the full phylogeny is too large to show effectively. Host association was determined using metadata from the PATRIC and BacDive databases. Flagella status was determined by identifying conserved motifs of flagellin genes. b Transitions between the four states in the data set, with the posterior distributions of the transition rates calculated using BayesTraits111. c Posterior distribution of flagella loss rates for host-associated and environmental bacteria. Our model provides evidence for a significant difference in the rate of flagella loss between host-associated bacteria and environmental bacteria. Source data are provided as a Source Data file.

The supported model contains a number of transitions between states that could influence a link between flagella status and host status. To confirm that host association is driving the evolution of flagella loss, we examined the key transition rate from flagellated to non-flagellated bacteria. This analysis revealed that the data support a model where host association is predictive of flagella loss rate (LogBF>2). Moreover, in line with the predicted effect of host control, flagella loss rates are higher in host-associated bacteria than in environmental strains (Fig. 3c).

The use of flagella by bacteria is associated with breaches of the epithelial barrier and inflammation50,51,52,53,54, and limiting flagella has the potential to improve the cooperativity of the microbiota58. However, in this case, cooperation is the absence of a trait, rather than the presence of a trait that provides benefits to the host, which is a more typical example in the literature. We, therefore, sought a second independent test of the importance of host control, involving a beneficial microbial trait. In the mammalian gut, anaerobic bacteria produce short-chain fatty acids, including butyrate, which is considered central to the host-microbiota relationship. Butyrate is a major source of nutrition for the colonic epithelium and is monitored by the immune system (Fig. 4a). Butyrate binds to G-protein coupled receptors in host cells, which influences the levels of regulatory T cells and lowers intestinal inflammation60,61. In addition, butyrate is made by obligate anaerobes, and so the maintenance of an anaerobic gut by a mammalian host62 is a second mechanism likely to favour butyrate production.

a Cartoon of butyrate biology: the short-chain fatty acid is produced by members of the mammalian microbiome and is a key energy source for the host colonocytes. The anaerobic environment of the gut is favourable to butyrate-producing bacteria and is reinforced by metabolism of butyrate by colonocytes, which lowers the oxygen potential in the gut. In addition, butyrate can reduce inflammation via effects on regulatory T cells by binding to G-protein coupled receptors (GPCR)60,61. b Evolutionary loss rate of a pyruvate-to-butyrate operon based upon the genomes of the PATRIC database (Methods). c Posterior distribution for butyrate loss rates for symbionts associated with vertebrate hosts against environmental or invertebrate-associated hosts. Source data are provided as a Source Data file.

If host control is important, the prediction is that butyrate production will be better maintained (lost less often) in the mammalian microbiome relative to other microbiomes. To test this, we searched the same dataset as above for operons associated with butyrate production63, to study the loss rate of butyrate production across bacteria that live in different hosts and environments. Butyrate production may also be important for host physiology in vertebrates other than mammals64, and so we first compared loss rates in all vertebrate microbiotas (including mammals) versus all other microbiotas (Fig. 4). We also performed the more stringent test of mammal microbiotas versus all others. In both cases, the data support a model where host association and butyrate production are non-independent (LogBF=58.37 for the vertebrate analysis, LogBF=45.77 for the mammal analysis). Moreover, the loss rate is lower where we predict, i.e., lower in vertebrate microbiotas than all others (LogBF=36.17) (Fig. 4) and lower in mammalian microbiotas than all others (LogBF=33.42).

The data for both flagella and butyrate metabolism, therefore, are consistent with the prediction that host control, including immunological responses to bacterial traits, has influenced microbiome evolution and cooperation. Importantly, both tests could have refuted our hypothesis, and yet both were consistent with our modelling predictions and the published experimental work showing that the immune system can modulate bacterial traits in the microbiome57,58. However, both tests are also very broad, spanning a wide range of hosts (all animals) and symbionts (all bacteria). As a result, we cannot exclude the possibility that other factors are important in the patterns we observe. We, therefore, sought additional tests of our modelling predictions.

The flagella data set provided such an opportunity. Flagella are targeted by the invertebrate and vertebrate immune systems, but vertebrates show an elaboration of anti-flagella mechanisms. With vertebrates came the evolution of TLR5: a dedicated anti-flagellin receptor that mounts both innate and adaptive immune responses56, where the latter responses are absent in invertebrates that lack an adaptive immune system. The evolution of vertebrates is also associated with longer life, and so a higher number of symbiont generations per host generation. Our model predicts that both of these effects, stronger host control and increased symbiont generations in a host, will promote flagella loss (Fig. 2, Supplementary Fig. 5). We compared patterns of flagella loss evolution in vertebrate symbionts relative to invertebrates, but this analysis lacked power using our original dataset (PATRIC59). While the trends looked encouraging, there were too few invertebrate species to resolve patterns. We were then fortunate that a new, larger dataset was published: the Genomes of Earth's Microbiomes, a collection of genomes assembled from metagenomic sequences from environmental samples and from a variety of hosts65.

We first used this new data set of 13757 taxa to confirm our original flagella analyses (shown in Fig. 3)65. This replicated the results of the PATRIC dataset, both in the association of the flagella and host-association traits (LogBF=33.61) and in even stronger evidence of a difference in the rate of flagella loss between host-associated and environmental bacteria (LogBF=15.02). We next compared patterns in vertebrate- vs invertebrate-associated bacteria (3333 taxa in total). As predicted, we found a significantly higher flagella loss rate in vertebrate symbionts than invertebrate symbionts (LogBF=6.14) (Supplementary Fig. 9). This analysis, therefore, is again supportive of the predicted role of host control mechanisms in microbiome evolution.

Whenever a host is able to drive bacteria to lose their flagella, this is likely to be an effective way to promote cooperation because it will limit their ability to reach host tissue50,51,52,53,54. However, there is the possibility that symbionts might evade the immune system without losing their flagella, via modifications that prevent the flagella from being detected. Our models predict the need for constraints on such counter evolution in symbionts for host control, and cooperation, to be stable (Fig. 2d). We, therefore, explored the potential for counter evolution within the microbiome, as a final test of our modelling predictions. Here, we turned to the key mediator of flagella recognition in vertebrates, TLR5, which binds to flagellin, the main structural component of flagella. Consistent with ongoing host evolution, previous work found evidence that TLR5 is under positive natural selection66,67,68,69,70. For example, there is evidence that a core set of sites in TLR5 are under positive selection across all mammals69, with further residues that are positively selected within particular lineages or species66,68,69. Furthermore, differences in TLR5 are associated with host-specific phenotypes, with different host species responding to flagellins of different bacterial species with varying sensitivity71,72,73.

We looked for evidence that TLR5 evolution has driven comparable changes in the D1 domain of flagellin, which is the key region for TLR5 binding74. We studied the flagellin genes of six symbionts that are typically not pathogenic (Butyrivibrio fibrisolvens, Citrobacter freundii, Clostridium butyricum, Enterobacter cloacae, Escherichia coli, Roseburia intestinalis) and six major pathogens (Burkholderia pseudomallei, Helicobacter pylori, Proteus mirabilis, Pseudomonas aeruginosa, S. typhimurium, and Vibrio cholerae), all found in the human gastrointestinal tract. We included pathogens as we reasoned that evidence of counter evolution is most likely to be found there, and indeed might exclusively occur there, given the evolutionary pressures that hosts exert on pathogens75,76.

We examined flagellins in 1761 strains across our 12 species. In all 11 species that are expected to be recognised by TLR5, the four key residues shown to be important for TLR5 binding (by alanine-scanning mutagenesis74) were extremely highly conserved. Specifically, at these four residues, there was only one change from the consensus sequences (E115 to K115) in one E. coli strain out of a total of 1535 strains across the 11 species, which suggests little or no evolutionary escape from TLR5 recognition (Fig. 5a). Across species, one of the four residues (I112 in E. coli) is variable, but only between two similar hydrophobic amino acids (leucine and isoleucine) that are both known to allow TLR5 binding74. The exception that helps prove the rule is H. pylori flagellin, which is not recognised by TLR5 and differs from the other species at three of the four key residues77.

a Alignment of the domain of flagellin which TLR5 recognises in symbionts and pathogens. Red bars indicate residues predicted to be in the interface between flagellin and TLR574. Red residues have been identified as important for TLR5 binding by alanine-scanning mutagenesis74. As a member of the ε-Proteobacteria, Helicobacter pylori has managed to escape TLR5 recognition and maintain motility by a series of compensatory mutations74. b Schematic of flagellin alignments for the 12 species tested. Numbers indicate the total number of sequences in the alignment (and the number of unique sequences). Red domains indicate the TLR5 binding region as shown in the above alignment; yellow domains are a second site that also interacts with TLR5 (a C-terminal region that also forms part of the D1 domain when the protein folds). Episodic positive selection was determined as any site with an LRT>2 and p<0.05 (calculated by MEME), and pervasive positive selection as ω>1 and p<0.05 (calculated by FEL); both are represented by +. Lines indicate pervasive negative selection at residues predicted by FEL to have ω<1 and p<0.05. For C. freundii, E. cloacae and E. coli, variable domains made aligning the full flagellin sequence inaccurate, therefore we focused only on the N-terminal D1 domain, which is the primary binding site for TLR5.

Moreover, in contrast to host evolution in TLR5, we found few examples of positive selection in the TLR5 binding site for two measures of natural selection, across both the commensals and the pathogens (Fig. 5). The first measure (FEL)78 assesses pervasive selection, i.e., natural selection that is consistent and relatively constant at a given site within the gene of interest. Here, the majority of sites identified were under strong pervasive negative (purifying) selection, which acts to limit evolutionary change. Moreover, all cases of positive selection were outside of the TLR5-binding D1 domain. The second measure (MEME) evaluates evidence for episodic site-specific selection, where some alleles experience strong selection while others may not experience any79. This measure identified cases of positive selection across the species, which confirms there is statistical power to detect these sites. However, only three residues were in the D1 domain (two in E. cloacae and one in R. intestinalis), and then always on the very edge of the domain. In summary, we find that the key residues for TLR5 binding are highly conserved, and there is very limited evidence for positive selection in the D1 domain.
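
The classification rules quoted in the figure legend above reduce to a few comparisons per codon site. A minimal sketch (the per-site fields are our own simplification; real FEL and MEME output from HyPhy carries many more statistics):

```python
# Classify one codon site using the thresholds stated in the legend:
# MEME: episodic positive selection if LRT > 2 and p < 0.05.
# FEL:  pervasive positive selection if omega > 1 and p < 0.05,
#       pervasive negative (purifying) selection if omega < 1 and p < 0.05.

def classify_site(omega: float, fel_p: float, meme_lrt: float, meme_p: float) -> str:
    if meme_lrt > 2 and meme_p < 0.05:
        return "episodic positive selection"
    if fel_p < 0.05:
        return ("pervasive positive selection" if omega > 1
                else "pervasive negative (purifying) selection")
    return "no significant selection"

print(classify_site(omega=0.2, fel_p=0.001, meme_lrt=0.4, meme_p=0.60))  # purifying
print(classify_site(omega=1.8, fel_p=0.030, meme_lrt=5.1, meme_p=0.01))  # episodic
```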

The data suggest distinct evolutionary patterns in the host and the microbiota. While host TLR5 appears free to evolve and tune its response to different bacterial flagella, the target of TLR5 in bacteria appears constrained. What drives this constraint? Part of it may be TLR5 itself, if this limits the sequences that bacteria use to those that are not highly immunogenic. However, a key cause is clearly structural. There is a highly conserved molecular interaction between the D1 and D0 domains of flagellin, which is critical to the polymerisation that builds the flagella. The importance of this region for flagella functioning was shown by detailed studies that mutated all residues in the D1 domain80,81. The great majority of residues are required for normal motility, suggesting that bacteria cannot easily change the D1 domain without affecting flagella functioning.

Our modelling predicts that for host control to be evolutionarily stable, it must target constrained bacterial traits that have limited potential for counter evolution, because otherwise bacteria are predicted to evolve to evade control (Fig. 2d). In support of this prediction, we find little evidence for functional evolutionary change in the region of flagellin that is targeted by TLR5. As discussed above for the case of H. pylori, the only flagellins where escape from TLR5 detection is documented are those of the α- and ε-Proteobacteria. These groups have a heavily altered TLR5 recognition region that does not elicit a TLR5-mediated immune response77,82. Importantly, to swim, these strains have also accumulated a series of compensatory mutations that maintain flagella polymerisation and function77. This exception, therefore, is again consistent with there being a significant functional barrier to changes in the D1 region.

Link:

Host control and the evolution of cooperation in host microbiomes - Nature.com

Posted in Evolution | Comments Off on Host control and the evolution of cooperation in host microbiomes – Nature.com

The Evolution of Web Analytics – CMSWire

Posted: at 12:38 pm

Before the decade is out, the data analytics market is expected to be worth $550 billion. Today, millions of businesses around the world rely on Google Analytics (or comparable software) to better understand customer wishes and optimize their web experiences. Though web analytics is almost as old as the internet itself, the field has transformed dramatically since inception.

Let's take a look at the history of web analytics, how Google took over and where it's going next.

Three years after the internet was born, the first analytics solutions appeared. Hit counters, or simple code that can display the number of page views, came first. They were simple to use without any IT experience.
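
A 1990s-style hit counter amounts to a few lines of code. The sketch below is illustrative only (file-based storage, no concurrency handling), but it captures everything these early tools did: count page loads and display the tally.

```python
# Toy hit counter in the spirit of the early web: increment a stored
# count on every page view and render it back to the visitor.

from pathlib import Path

COUNTER_FILE = Path("hits.txt")  # assumed storage location

def record_hit() -> int:
    hits = int(COUNTER_FILE.read_text()) if COUNTER_FILE.exists() else 0
    hits += 1
    COUNTER_FILE.write_text(str(hits))
    return hits

print(f"You are visitor number {record_hit()}")
```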

Slightly more complex at this time was log analysis, which could interpret server logs and help identify sources of web traffic. As websites grew more complex, so too did server logs. Caching, or temporarily storing a file in the system to avoid multiple HTTP requests, didn't show up on the log, leaving gaps in the data.
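
Log analysis worked by parsing the entries the web server was already writing. A minimal sketch, assuming the Apache "combined" log format (real analyzers of the era did far more):

```python
# Count traffic sources by extracting the referrer field from
# combined-format access log lines.

import re
from collections import Counter

LOG_LINE = re.compile(
    r'"(?:GET|POST) (?P<path>\S+)[^"]*" \d{3} \S+ "(?P<referrer>[^"]*)"'
)

def traffic_sources(log_lines):
    sources = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m:
            sources[m.group("referrer") or "-"] += 1
    return sources

sample = ['1.2.3.4 - - [01/Jan/1996:10:00:00 +0000] '
          '"GET /index.html HTTP/1.0" 200 512 '
          '"http://example.com/links.html" "Mozilla/2.0"']
print(traffic_sources(sample).most_common(5))
```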

This was a problem until JavaScript came along, enabling tag-based tracking, which kept track of far more than just hits. Thanks to tag-based tracking, analytics moved into the domain of marketing. Marketers began to create targeted advertising, optimize their website copy, and more.
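
Server-side, a tag-based tracker boils down to an endpoint that receives a small beacon request fired by the page's embedded script. The sketch below is a generic illustration of that design, not any vendor's actual implementation:

```python
# Minimal collection endpoint for a tag-based tracker: the page's embedded
# tag fires a request whose query string carries context (page, referrer,
# screen size, ...) that a raw server log never captured.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class TagHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. /collect?page=/pricing&referrer=google.com&screen=1920x1080
        params = parse_qs(urlparse(self.path).query)
        print({k: v[0] for k, v in params.items()})  # stand-in for storage
        self.send_response(204)  # no body needed; the request is the data
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), TagHandler).serve_forever()
```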

Related Article: Google's Move Away From Universal Analytics: What It Means for Digital Marketers

Around the turn of the century, it could take up to 24 hours for large companies to process their website data. That is, until Urchin came along and did it in as short a time as 15 minutes. Urchin quickly expanded its client base and offerings until Google bought them in 2005. And so Google Analytics was born.

Google Analytics was built as a hosted analytics solution that is heavily focused on quantitative data. The service ties in directly with Google's other web marketing offerings and provides in-depth, tag-based data. Its farthest-reaching effort has been Universal Analytics, which was introduced in 2012. Universal Analytics lived up to its name by allowing for the tracking of users across multiple devices and platforms through the assignment of user IDs. Through this software, offline behavior monitoring, demographics, and (as of 2016) machine learning provided consumer insight in incredible detail, at the cost of user privacy.

Related Article: Google Is Forcing the Switch to GA4 and Many Brands Aren't Happy

Partially in response to Universal Analytics' far reach, some governments passed new online privacy laws. The most well-known example is the European Union's General Data Protection Regulation (GDPR), which went into effect in 2018.

To better comply with the new rules, Google released Google Analytics 4 (GA4) in 2020. GA4 only uses first-party cookies, and its Consent Mode adjusts the types of data collected based on user permissions. However, GA4 is still able to offer detailed consumer insight by reverting (in a way) to a previous mode of tracking: hit collection. GA4 considers every event a hit, collecting data that stretches far beyond page views.
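As a rough illustration of this everything-is-an-event model and of consent-based trimming, here is a hypothetical Python sketch. It is not the real GA4 API; all names and fields are invented for the example.

```python
# Illustrative sketch only, not the real GA4 API: every interaction is
# an event ("hit"), and a consent flag controls how much is recorded.
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str                       # e.g. "page_view", "scroll", "purchase"
    params: dict = field(default_factory=dict)

def collect(event: Event, analytics_consent: bool) -> dict:
    """Trim the recorded payload when the user has not consented."""
    if analytics_consent:
        return {"event": event.name, "params": event.params}
    # Consent denied: keep only an anonymous, cookieless ping.
    return {"event": event.name, "params": {"anonymized": True}}

e = Event("page_view", {"page": "/pricing", "client_id": "abc123"})
print(collect(e, analytics_consent=True))
print(collect(e, analytics_consent=False))
```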

GA4 also introduces upgrades unrelated to privacy, such as collecting data in the same way for both web and mobile. Other new features include real-time reports, cross-platform reporting and the ability to exclude users based on certain behaviors. Universal Analytics shuts down in July 2023 in favor of GA4.

See more here:

The Evolution of Web Analytics - CMSWire

Posted in Evolution | Comments Off on The Evolution of Web Analytics – CMSWire

Drivers of adaptive evolution during chronic SARS-CoV-2 infections – Nature.com

Posted: at 12:38 pm

Diverse evolutionary patterns in chronic infections

We begin by defining criteria for a chronic infection. In clinical settings, a chronic infection is often defined as one with both prolonged shedding of viral RNA and evidence of infectious virus, either through virus isolation in tissue culture or via detection of subgenomic RNA. However, when surveying various studies reporting chronic infection, we noted a lack of standardization, with different studies defining chronic infections somewhat inconsistently. Hence, we expanded our focus to include patients displaying high-viral-load (VL) shedding for 20 or more days while mining the literature for all such cases that were accompanied by longitudinal whole-genome sequencing of the virus (Methods). The criterion of 20 days was based on a meta-analysis of the duration of viral shedding (defined as a positive nasopharyngeal polymerase chain reaction (PCR) test) across thousands of patients diagnosed until June 2020, which revealed that the mean duration of upper respiratory tract shedding was around 17 days, with a 95% confidence interval ranging from 15.5 days to 18 days16. Of note, shedding of replication-competent virus lasted markedly less than 20 days. Moreover, estimates of viral shedding are different in some of the more recently detected SARS-CoV-2 variants, such as Delta and Omicron17,18, yet, as described below, our analysis focused on variants that were found in earlier stages of the pandemic.

Our search yielded a total of 21 case reports, all of which reported patients who were diagnosed during 2020 or early 2021, and all of which reported patients who were infected with viruses belonging to lineages that pre-dated the Alpha variant (Supplementary Table 2). In addition, six patients adhering to the above criteria were identified in TASMC, and all available samples were sequenced (Methods). Five TASMC patients suffered from hematologic cancers. The sixth patient suffered from an autoimmune disorder and was treated with a high dose of steroids. The six TASMC patients were all diagnosed in late 2020 or early 2021, with four patients infected with a virus from pre-Alpha lineages and two patients infected with a virus from the Alpha lineage (Supplementary Table 2).

Of the 27 chronically infected patients (mean age (s.d.) 55 (21.3) years; 17/27 male), we inferred that all were immunocompromised due to one or more of the following: hematologic cancer (that inherently tends to lead to immunosuppression), direct anti-B cell treatment, high-dosage steroid treatment or very low CD4+ T cell counts (due to AIDS). We observed very different evolutionary outcomes across the range of patients examined, from considerable evolution and antibody evasion observed in some patients to relatively static evolution in others (Table 1 and Supplementary Tables 1 and 2).

We searched for patterns of evolution across all 27 patients with chronic infection and compared this pattern to the pattern observed under (1) mostly neutral evolution, in the first approximately 9 months of viral circulation19,20 (data were obtained from a sample of ~3,500 sequences generated by NextStrain https://nextstrain.org/21 (Methods)) and under (2) presumed positive selection, which occurred in the lineages leading to the five currently defined VOCs (Alpha, Beta, Gamma, Delta and Omicron) (data on lineage-defining mutations (LDMs) of VOCs were obtained from https://covariants.org (Fig. 1a and Supplementary Table 4)). In each scenario, we searched for bins, that is, consecutive regions of 500 bases, enriched for mutations (P<0.05, binomial test, after correction for multiple testing; Methods).
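A minimal sketch of this bin-enrichment test, assuming a uniform null in which each substitution lands in a given 500-base bin with probability bin_size/genome_length, and with a simple Bonferroni correction standing in for the paper's multiple-testing procedure; this is an illustration, not the authors' exact pipeline.

```python
# Sketch: flag 500-nt bins carrying more substitutions than expected
# under a uniform null, with a Bonferroni correction (requires scipy).
from scipy.stats import binomtest

GENOME_LEN = 29_903   # Wuhan-Hu-1 reference length
BIN_SIZE = 500

def enriched_bins(positions, alpha=0.05):
    """positions: 1-based genomic coordinates of observed substitutions."""
    if not positions:
        return []
    n_bins = GENOME_LEN // BIN_SIZE + 1
    counts = [0] * n_bins
    for pos in positions:
        counts[(pos - 1) // BIN_SIZE] += 1
    p_null = BIN_SIZE / GENOME_LEN        # uniform null per substitution
    hits = []
    for i, c in enumerate(counts):
        p = binomtest(c, len(positions), p_null, alternative="greater").pvalue
        if p * n_bins < alpha:            # Bonferroni correction
            hits.append((i * BIN_SIZE + 1, (i + 1) * BIN_SIZE, c, p * n_bins))
    return hits

# e.g. enriched_bins([21765, 21991, 22194, 23012, 23063, 28881])
```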

a, Comparison of substitutions observed in chronic infections to VOC LDMs and to substitutions dominated by genetic drift during globally dispersed acute infections. Shown are the number of substitutions observed along the SARS-CoV-2 genome, in bins of 500 nucleotides. The upper panel displays substitutions observed at any timepoint of the 27 chronic infections. The middle panel displays LDMs of the five currently recognized VOCs. The lower panel displays substitutions observed globally during the first 9 months of the pandemic, mostly before the emergence of VOCs. Asterisks mark bins enriched for more substitutions using a one-tailed binomial test, after correction for multiple testing (P<0.05; Methods and Supplementary Table 8). The genomic positions are based on the Wuhan-Hu-1 reference genome (GenBank ID NC_045512), and the banner on the top shows a breakdown of ORF1a/b into individual proteins and domains of the S protein (see main text). b, A network of co-occurring substitutions across patients with chronic SARS-CoV-2 infection. Each colored circle represents a locus, and a black asterisk and dot represent a significant enrichment under a one-tailed Fisher's exact test with P<0.05 and P<0.1, respectively, after correction for multiple testing. Blue asterisks represent enrichment of co-occurring substitutions in globally observed sequences using a one-tailed χ2 test, with P<0.05 and P<0.1, respectively, after correction for multiple testing (Methods).

During the first 9 months of virus circulation, we noted that 61% of substitutions were non-synonymous, which is roughly what one would expect in the absence of both positive and purifying selection, in line with reports suggesting incomplete purifying selection during the early stages of SARS-CoV-2 spread22. During this time, we observed a relatively uniform distribution of substitutions across most of the genome, with some enrichment in ORF3a, ORF7a, ORF8 and N. This enrichment was previously reported and may be due to more relaxed purifying selection in these regions or higher mutation rates19; adaptive evolution at these regions also cannot be ruled out.

In general, the patterns obtained in chronic infections and in the LDMs of VOCs were very similar. The average proportion of non-synonymous substitutions in chronic infections and LDMs of VOCs was 78% and 82%, respectively, which was much higher than that observed during the first stage of the pandemic and generally suggestive of positive selection. On the other hand, we see less similarity between mutations in chronic infections and mutations that fix after a VOC has emerged (Supplementary Fig. 1), with a much lower proportion of non-synonymous substitutions in the latter (on average, 61%). A likely explanation for this observation is that after a VOC spreads in the population, selection is more limited due to the very tight transmission bottleneck9,10,11,12.
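For readers keeping track of the synonymous/non-synonymous distinction used throughout this comparison, a substitution is non-synonymous when it changes the encoded amino acid. A minimal sketch, assuming Biopython and using a hypothetical codon with an E484K-style change for illustration:

```python
# Sketch: classify a within-codon substitution as synonymous or
# non-synonymous by translating before and after (requires Biopython).
from Bio.Seq import Seq

def is_nonsynonymous(codon: str, pos_in_codon: int, alt_base: str) -> bool:
    mutated = codon[:pos_in_codon] + alt_base + codon[pos_in_codon + 1:]
    return str(Seq(codon).translate()) != str(Seq(mutated).translate())

print(is_nonsynonymous("GAA", 0, "A"))  # Glu -> Lys (E484K-style): True
print(is_nonsynonymous("GAA", 2, "G"))  # Glu -> Glu (synonymous): False
```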

The most striking similarity between chronic infections and VOC LDMs was observed along the S protein and, in particular, at the regions that correspond to the N-terminal domain (NTD) (genomic nucleotides 21,598–22,472) and the receptor-binding domain (RBD) (genomic nucleotides 22,517–23,183). Several mutations at the RBD have been shown to enhance affinity to the ACE2 receptor and allow for better replication23,24, whereas other mutations, both at RBD and NTD, are known to enhance antibody evasion25,26,27. The most commonly observed substitutions in chronic infections were in the S protein: E484K/Q and various deletions in the region spanning the NTD supersite, particularly amino acids 140–145, all shown previously to confer antibody evasion28. Chronic infections shared the enrichment of ORF3a/ORF7a/ORF8 mutations with the neutral set but lacked an enrichment across most of the N protein. Overall, it seems that mutations in chronic infections are predictive of LDMs of VOCs, as was noted previously2.

When focusing on the differences between VOCs and viruses in chronic infections, several intriguing differences emerged. First, four VOCs bear a three-amino-acid deletion in the nsp6 protein (ORF1a:3,675–3,677), an event not observed in our set of chronic infections. Next, in VOCs, there is an enrichment in the region of the S encompassing the S1/S2 boundary (positions 23,500–24,000 in Fig. 1a). This enrichment is primarily driven by S:P681H/R, a highly recurrent globally occurring mutation29, surprisingly never observed in our chronic infection set. A recent study analyzed recurrent mutations, with recurrence indicative of positive selection, and tested which of the recurrent mutations led to clade expansion, that is, were associated with onwards transmission30. Some recurrent mutations led to more dense clades, suggesting that they were especially successful in driving transmission, whereas others did not lead to considerable onwards transmission, suggesting that they were less successful. Notably, we observed that successful recurrent mutations were almost never present in our chronic set, whereas less successful recurrent mutations (S:E484K/Q and S:Δ144) were the most abundant (Table 2). Overall, these results suggest that there may be a tradeoff between antibody evasion and transmissibility. This tradeoff, if it exists, might not play a role in chronic infections but would affect the ability of a variant created in a chronic infection to be transmitted onwards. Thus, only under specific conditions would a transmissible variant emerge in chronic infections. Four of five VOCs independently acquired a mutation at or near the S1/S2 boundary (S:P681H/R or H655Y), suggesting that this may be a factor driving transmissibility. We note that Beta is an exception with no such mutations, yet this variant also displayed limited global transmission.

We went on to examine co-occurring substitutions, defined as pairs of substitutions that appeared in two or more patients. We used Fisher's exact test to assess whether pairs of substitutions occurred together more often than expected from their individual frequencies (Methods) as a measure of possible epistasis. Intriguingly, four pairs of substitutions across four different proteins emerged as significantly enriched and formed a network of interactions: T30I in envelope, H125Y in the membrane glycoprotein, S13I in the S protein and T3058I in ORF1a (Fig. 1b). This finding was intriguing on multiple fronts. First, envelope and membrane glycoprotein have generally remained very conserved throughout the entire pandemic, and, specifically, the two replacements found are at highly conserved sites (Supplementary Table 1). However, despite their rarity, we found that some of the pairs of mutations also tend to significantly co-occur in globally dispersed sequences (blue asterisks in Fig. 1b). The replacements in S and ORF1a, on the other hand, have been observed only a small number of times in the global phylogeny. Notably, the first three of these proteins all form part of the virion structure itself; however, the functional meaning of this remains unclear. Other pairs of mutations found to co-occur were the three most common S antibody evasion mutations, yet these co-occurrences were not statistically significant. Larger cohorts of patients and further data will be required to determine the implications of these findings.
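A sketch of this co-occurrence test on hypothetical presence/absence data, with one 2×2 table per pair of substitutions (patients carrying both, one or neither); the paper's multiple-testing correction is omitted here, and the carrier sets below are invented for illustration.

```python
# Sketch of the co-occurrence test on hypothetical presence/absence data
# (requires scipy). The 2x2 table counts patients carrying both
# substitutions, only one, or neither.
from scipy.stats import fisher_exact

def cooccurrence_p(carriers_a, carriers_b, all_patients):
    both = len(carriers_a & carriers_b)
    only_a = len(carriers_a - carriers_b)
    only_b = len(carriers_b - carriers_a)
    neither = len(all_patients - carriers_a - carriers_b)
    _, p = fisher_exact([[both, only_a], [only_b, neither]],
                        alternative="greater")   # one-tailed
    return p

patients = set(range(27))
e_t30i = {1, 4, 9, 20}     # hypothetical carriers of envelope T30I
m_h125y = {1, 4, 9, 12}    # hypothetical carriers of membrane H125Y
print(cooccurrence_p(e_t30i, m_h125y, patients))
```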

We noted very wide variation in the backgrounds of and treatments given to different patients, both for their underlying condition and for Coronavirus Disease 2019 (COVID-19). When examining medical background, the patients could be roughly classified into one of the following categories: hematologic cancers, HIV/AIDS, organ transplantation and autoimmune disorders (Table 1). Patients in the latter two categories were often treated with steroids. Some, but not all, of the patients with hematologic cancer, as well as some others, were treated with antibodies targeting B cells, presumably causing profound B cell depletion. In line with this, most of the patients with confirmed B cell depletion showed negative serology for SARS-CoV-2 at one or more timepoints (Supplementary Table 1). Some patients were treated with antibody-based treatment (ABT) against SARS-CoV-2, whereas others were not; and, in some ABT-treated patients, antibody evasion mutations were detected, whereas, in others, they were not. Finally, we found that, in some ABT-treated patients in whom antibody evasion mutations were detected, these mutations had fixed before the treatment. The course of VL across time, coupled with ABT, is illustrated for some patients in Fig. 2b. Thus, for example, patient P5 and the patient described by Choi et al.8 are shown to fix antibody evasion mutations just before ABT.

a, Results of a random forest classifier used to explain an outcome of antibody evasion. The effect of each feature on model outcome is shown: mean SHAP absolute values (left) and individual SHAP values for each feature, ordered based on contribution (right). The color range corresponds to the values of each feature, from red (high value) to blue (low value). b, Illustration of individuals who experienced viral rebound and mutations associated with antibody evasion. Ct values are used here as an inverse proxy for VL and are presented according to the day of infection (denoted as number of days after the first positive PCR test), with the dashed red horizontal line and shaded area representing a negative or borderline result, respectively. Blue dots represent samples that were sequenced. Only amino acid replacements in the S protein are shown, with predicted antibody evasion mutations shown in bold (Supplementary Table 1). Positive samples from BAL, ETA or sputum are indicated in brown. Antibody-based anti-COVID-19 treatments are represented by dashed vertical lines on the day of administration. ALL, acute lymphoblastic leukemia; APS, antiphospholipid syndrome; BAL, bronchoalveolar lavage; CLL, chronic lymphocytic leukemia; ETA, endotracheal aspirates; P, patient.

We noted that many patients (four of the six patients sequenced herein and several others in the total set of 27 patients) displayed an intriguing cycling pattern of VL (reflected by cycle threshold (Ct) values), with very high Ct values reaching negative or borderline-negative results at one or more stages of the infection, followed by rebound of the virus (Fig. 2b). In the four above-mentioned patients, this rebound was accompanied by clinical evidence of disease, which is highly suggestive of active viral replication. Several different hypotheses could explain this pattern. First, the virus may have cleared, followed by re-infection with another variant. Because re-infection can be ruled out using sequencing, such cases were excluded from our analysis (Methods). Second, the virus may cycle between different niches, such as the upper and lower airways. Its re-emergence in the upper airways (nasopharynx) may be due to selective forces or genetic drift. When considering selective forces, viral rebound may occur due to the near clearance of the virus, driven either by ABT or by the endogenous immune system and followed by the emergence of a fitter variant with antibody evasion properties.

We fit a random forest classifier to assess the effect of different clinical and demographic features on an outcome of antibody evasion (Methods and Supplementary Tables 2 and 3). We treated each sequencing timepoint as a sample and used age, sex, B cell depletion, steroid treatment, days-since-infection, ABT and viral rebound as explanatory variables. We then trained a classifier while considering the structure of the data, composed of samples belonging to the same patient (Methods). After training, we generated SHapley Additive exPlanations (SHAP) values31,32 that quantified the effect of each feature on the classifier's outcome. We found that the feature with the strongest association with antibody evasion was viral rebound, followed by days-since-infection and age (Fig. 2a). Other features had a relatively minor effect, and similar results were obtained with other classifiers (Supplementary Figs. 2 and 3). Regarding the effect of age, we note that young individuals are a minority in this dataset and rarely present an antibody evasion mutation; thus, the small sample size may be responsible for the small effect observed with this feature. All in all, these results suggest that ABT is not necessary for driving antibody evasion, in line with the fact that evasion is sometimes observed before (for example, E484K in P5; Fig. 2b) or in the absence of ABT (for example, ref. 33). If so, what may be driving immune escape in some patients is actually the weakened immune system of the patient, although ABT and its waning may also play a role in some patients. To summarize, viral rebound may serve as an indicator for the emergence of a mutant with properties of antibody evasion (Fig. 2b), and monitoring for viral rebound in patients with chronic disease is critical.
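A condensed sketch of this analysis, assuming a hypothetical per-sample feature table; the authors' handling of the grouped per-patient structure and their exact hyperparameters are not reproduced here.

```python
# Condensed sketch: random forest plus SHAP feature attribution, on a
# hypothetical per-sample table ("chronic_samples.csv"); assumes the
# categorical columns are already numerically encoded, and omits the
# paper's per-patient grouping during training.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("chronic_samples.csv")   # hypothetical file
features = ["age", "sex", "b_cell_depletion", "steroids",
            "days_since_infection", "abt", "viral_rebound"]
X, y = df[features], df["antibody_evasion"]

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)
# Older shap versions return one array per class for classifiers;
# index 1 is the positive ("evasion") class.
shap.summary_plot(shap_values[1], X)
```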

Next, we went on to examine patterns of variation over time across the different patients. In many of the case reports, the authors noted the emergence and disappearance (and sometimes re-emergence) of particular substitutions (Fig. 3). For example, in patient B reported by Perez-Lago et al.34, the mutation S:A1078V is present at a low frequency on day 81, rises to fixation on day 100 and then drops and disappears from day 107 onwards (Fig. 3). When re-analyzing the data, we noted that this pattern of dynamic polymorphisms across time was observed in most patients (Supplementary Table 2). From an evolutionary point of view, it is quite unlikely for one or more substitutions to disappear from a given population, and, because we observe this at very different loci across all patients, we consider it unlikely that this pattern is entirely due to recurrent sequencing problems or to biases of the viral polymerase. We and others have previously noted sequencing errors that occur predominantly when VL is low, when errors that occur during reverse transcription or early PCR cycles are carried over to higher frequencies10,11,35. However, this phenomenon most often leads to errors in intra-host variants segregating at relatively low frequency and is less common at the consensus sequence level, which is defined here as mutations present at a frequency of 80% or higher. We, thus, conclude that the existence of dynamic polymorphisms likely reflects subpopulations of the virus that co-exist in a patient's body, as further discussed below.

Each series of boxed lines represents a patient, and each line represents a sequenced timepoint with time-since-infection on the right. The different open reading frames are color-coded. For each patient, only mutations relative to the first timepoint sequenced that appeared at a frequency ranging from 20% to 100% are shown. Most samples were nasopharyngeal, except those marked by asterisks, which were obtained from endotracheal aspirates.

Visit link:

Drivers of adaptive evolution during chronic SARS-CoV-2 infections - Nature.com

Posted in Evolution | Comments Off on Drivers of adaptive evolution during chronic SARS-CoV-2 infections – Nature.com

Evolution to acquire Nolimit City for up to €340m – iGaming Business

Posted: at 12:38 pm

Live dealer giant Evolution has agreed to acquire slot developer Nolimit City for up to €340m, as it continues to expand its presence in the slots sector.

Evolution will pay an initial consideration of €200m, with a further €140m dependent on future performance in 2023, 2024 and 2025. The full amount will be paid in cash from existing reserves.

Nolimit City will be the fourth brand in Evolution's slots portfolio. In 2020, the business took its first step into the sector when it acquired NetEnt, which itself owns Red Tiger. Last year, Evolution then acquired Big Time Gaming, the supplier best known for the Megaways mechanic.

As with the acquisition of Big Time Gaming, Evolution boasted of Nolimit City's efficient profit margins. The business expects revenue of €30m in 2022, while its earnings before interest, tax, depreciation and amortisation are expected to be around €23m.

"The acquisition is in line with Evolution's strategy of being the world's number one provider of online casino games, supplying its customers with the best gaming content," the Evolution board said.

Evolution chairman Jens von Bahr said he was particularly impressed with the level of innovation that Nolimit City had brought to the space.

"With the addition of Nolimit City to the Evolution family we extend our portfolio of truly innovative and cutting-edge games from the top brands and game makers in the industry," Von Bahr said. "We have followed Jonas [Tegman], Emil [Svärd] and their team for a few years and been impressed as they have established a completely new style of slot games. I am proud that yet another of the very best minds in our industry has chosen to join the Evolution network."

Nolimit City co-founder Jonas Tegman said the two businesses are fully aligned on strategy.

"I cannot think of a better match between two companies than between Nolimit City and Evolution; we are fully aligned in terms of people, product, technology and how to get the job done," he said. "The slot vertical is under massive change, and we can't wait to take on the challenge of global expansion together with Evolution, helping out with navigating towards the best slot product offering in the market."

In its recent financial reports, Evolution reported slow earnings growth for its existing slots portfolio. While its core live-dealer products experienced growth of 44.3% in Q1, revenue from its RNG division was up by 19.3%. However, this RNG growth was almost entirely inorganic, due to the acquisition of Big Time Gaming. On a like-for-like basis, RNG revenue was up by only 1.8%.

Completion of the acquisition is conditional upon regulatory approvals and is expected in Q3 of 2022.

Go here to see the original:

Evolution to acquire Nolimit City for up to €340m - iGaming Business

Posted in Evolution | Comments Off on Evolution to acquire Nolimit City for up to €340m – iGaming Business

Hitching a Ride Through History: The Evolution of Kansas City Public Transportation – Flatland

Posted: at 12:38 pm

Share this story

Published 4 hours ago

It may be hard to imagine today. But Kansas City was once home to the third largest cable car system in the United States behind only San Francisco and Chicago.

Once known for its expansive transit system, before becoming extraordinarily dependent on the automobile, Kansas City has a deep and ever-evolving history of public transportation and ridership.

The history of public transit can be simplified into several overlapping eras, according to local historian and retired Metropolitan Community College professor Bill Worley. The first four eras include the mule and horse car era, the cable car era, the streetcar era and the introduction of buses.

The story, however, ends with a back-to-the-future twist in the fifth era: the reintroduction of streetcars on Main Street in 2016.

The first streetcars in Kansas City were pulled by mules and horses starting in 1869. Locally, mules were more common, Worley said. The cable car era began in the early 1880s in San Francisco, and Kansas City adopted cable cars later that same decade.

From 1895 to 1912, ridership increased from 31 million fare-paying riders to nearly 120 million riders, according to data compiled in the book A Splendid Ride: The Streetcars of Kansas City, 1870-1957 by Monroe Dodd.

Kansas City became home to one of the most extensive cable car systems in the country, not seeing the last of the cable cars until 1912. After that, streetcars were electrified.

Ridership numbers broken down by type of transportation began in 1922, with 1922 to 1928 data documented in the 1928 Report on the Street Railway Situation published by the Kansas City Public Service Co.

In 1922, only railway revenue passengers were counted, with reports documenting over 136 million revenue passengers, the peak of the railway. The number of railway revenue passengers would never be as large again.

Even overall ridership of public transit would not get that high again until 1942 during World War II when it reached nearly 153 million revenue passengers.

The arrival of the motor bus in 1924 introduced a new form of public transportation to Kansas City. In 1925, the first year motor bus data was documented, bus ridership made up only 1% of total revenue transit passengers.

In 1926, the Kansas City Public Service Co. took over running the city's public transportation, consolidating ownership of public transportation. The company began replacing streetcars with buses where it would save money, but the change didn't happen overnight.

From 1926 to 1937, data from Kansas City Public Service Co. reports showed a gradual migration from streetcar users to motor bus users. By 1937, the number of motor bus revenue passengers made up 20% of all ridership.

The introduction of motor buses was accompanied by automobiles, too, and Kansas City was no stranger to the Model T.

"Many think of the Model T and think of Detroit, or the nearby suburb of Dearborn where the primary factory was built," Worley said.

"And that's true, but they also produced it in 1908 in Kansas City," Worley said. "Kansas City was the other Ford manufacturing plant, the only other Ford manufacturing in the entire United States at that time."

Unlike the motor bus, automobiles were privately owned and not controlled by bigger companies, like the Kansas City Public Service Co., so only wealthy people could afford cars in the beginning.

The automobile takeover was a slow process, Worley said, since not many people could afford them early on. Initially, they didnt pose an immediate threat to public transit.

Still, it was estimated that the share of people using mass transit to enter downtown dwindled from 80% to 50% by 1939, according to A Splendid Ride.

In 1937, the trolley bus was introduced, a hybrid between the streetcar and the motor bus. The first year for which trolley ridership data was available was 1940, with reports of 9,724,027 trolley bus revenue passengers carried, making up 15% of total riders for the year.

Reports for 1938 and 1939 were not available, but it still only took three years for trolley bus ridership to reach 15%. In comparison, the motor bus did not reach at least 15% until its 11th year of service.

Overall ridership of public transportation was beginning to make a comeback following dips in ridership during the Great Depression.

Streetcar ridership rebounded slightly in the 1940s with World War II. Gasoline was rationed, so the electrified streetcars were unaffected. Meanwhile, both cars and buses required gasoline to run.

The comeback still wasn't enough to save the streetcars. A variety of factors, including the growing prevalence of automobiles, construction of new highways and the migration of people to the suburbs, ultimately brought an end to the streetcar.

In August 1955, the board members of the Kansas City Public Service Co. unanimously voted to sell all 144 street cars, according to an article from the Kansas City Times from Aug. 13, 1955.

The company was negotiating sales for streetcars to be used in a European city with a rail system of the same gauge as in Kansas City. All of the trolley wires and other equipment would also be sold.

At the time of the article, there were only six streetcar lines still in existence: Country Club, 31st Street, Troost Avenue, 12th Street, Dodson and the Rockhill.

While the decision would save money overall, not everyone agreed with the switch. Another article from the Kansas City Times reported on the City Council meeting when the streetcar decision was introduced. With about 40 people attending, seven spoke against the decision.

One resident who spoke against the decision was Joe Gregg, a University of Kansas City student who brought a petition with 150 signatures all against banning streetcars.

The reasons cited included streetcars being more comfortable, quieter and safer. Another resident in opposition, Robert B. Langworthy, said that of the people who spoke for the proposal, none spoke as a transit rider.

A petition against the removal of the final streetcars was even up to over 11,000 signatures by April 1957, according to coverage from the Kansas City Times. In the same year, the population was 854,000.

In 1957, the last streetcars in Kansas City were removed. Buses became the primary public transit option in the Kansas City area.

"A quarter of a century ago, Kansas City had 800 street cars. For the rest of this week it will have 41. Next Sunday, all will be gone," read an article from the Kansas City Star in June 1957. "This is the week to say good-by to Kansas City's street cars."

In 1965, the Kansas City Area Transportation Authority took over transit operations and is still in charge today. "From there, automobiles take over," Worley said.

In 1970, total ridership had decreased by over 64% from 1955. From there, data continued to trend down into the turn of the century.

The Kansas City public transportation system, in many ways, looks both different and similar to the way it did when ridership was at its highest.

The new streetcar debuted in May 2016 with nearly 1.4 million unlinked passenger trips for the partial year. Trips peaked in 2019, reaching over 2.2 million unlinked passenger trips.

Without paid fares, ridership is measured using automatic passenger counters placed above streetcar and bus doors, which count passengers each time they board a vehicle, regardless of how many vehicles they take. These boardings are measured as unlinked passenger trips, which the Federal Transit Administration also uses as the national standard for measuring transportation use.
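As a small illustration of the metric, assuming hypothetical boarding events (real counters do not know rider identities, so linked journeys generally cannot be derived without fare data):

```python
# Hypothetical boarding events; an automatic passenger counter registers
# each boarding, so a journey with a transfer is counted more than once.
boardings = [
    ("rider_1", "streetcar"),
    ("rider_1", "bus_47"),      # same journey, second boarding
    ("rider_2", "streetcar"),
]

unlinked_trips = len(boardings)                           # FTA metric: 3
linked_journeys = len({rider for rider, _ in boardings})  # journeys: 2
print(unlinked_trips, linked_journeys)
```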

Free fares are a new part of public transportation, with the new streetcar running from Union Station to River Market being permanently free.

The COVID-19 pandemic reduced streetcar ridership, but the streetcar never stopped running.

Now, streetcar ridership is running at about 75% of pre-pandemic rates, according to Donna Mandelbaum, communications and marketing director for the Kansas City Streetcar Authority.

As of May 2022, ridership for the year totaled 529,752 unlinked passenger trips.

Buses also continued to run during the pandemic, even beginning a free-fare program in 2020 that had originally been planned to phase in gradually over several years. The program, according to Dick Jarrold, senior vice president of RideKC, helped keep both operators and passengers safe by limiting contact.

Cindy Baker, interim vice president of marketing and communications for the Kansas City Area Transportation Authority, said they plan to continue the free fare service through at least 2023.

"You know, we hope to go beyond that, but we do have to look at a variety of revenue sources and partnerships to continue the program," Baker said.

Bus driver shortages have led to the sort of staffing issues many businesses have seen. Baker said that, pre-pandemic, there used to be an extra board, or 12 to 15 backup bus drivers. Now, the authority has none.

"We are always taking applicants and really looking for folks to apply, go through training and get on there actually helping us deliver the service," Jarrold said. "Recruiting has been ramped up significantly."

As for overall transit ridership, trends have continued to decrease over the past few decades, even before the pandemic. The most recent decrease began in 2013 and 2014. Ridership went from 16,166,950 passenger trips in 2012 to 12,044,179 in 2019, getting as low as 9,139,474 in 2021 during the COVID-19 pandemic.

Kansas City has become less dense as a community, which is one cause of the decrease in ridership, according to Jarrold. Many communities in the country follow the trend, too.

"We are, as many communities are, autocentric, and invest significantly in roads, highways, etc.," Jarrold said. "So that, over the long haul, has had an impact on transit and transit ridership."

For now, Kansas Citys public transportation and ridership continue to evolve even through a pandemic.

"The streetcar, I think, has changed a lot of people's ideas about transit and what that means for the future," Jarrold said.

Annie Jennemann is a Dow Jones data journalism reporting intern. She is a graduate student at the University of Missouri.

Link:

Hitching a Ride Through History: The Evolution of Kansas City Public Transportation - Flatland

Posted in Evolution | Comments Off on Hitching a Ride Through History: The Evolution of Kansas City Public Transportation – Flatland

An evolution in the industry: Top trends for Space 2.0 – Military Embedded Systems

Posted: at 12:38 pm

June 22, 2022

Space 2.0 represents a major shift in the development of defense and aerospace applications: With artificial intelligence (AI) applications moving aboard, systems must support higher processing and throughput capabilities. On-orbit processing requires an adaptive architecture so that systems can process, analyze, and reconfigure themselves to optimize performance and responsiveness. This, in turn, is driving innovation in organic packaging and reliability. Finally, to build these complex systems, engineers need greater design agility to accelerate development, maintain lower costs, and achieve faster time-to-launch.

There has never been a more exciting time to design for space. Developing and launching systems into space is no longer solely within the reach of governments. The innovation, agility, and vision of private enterprise are ushering in a whole new era: Space 2.0. The shape of space is expanding far beyond traditional defense and aerospace to an expansive range of practical and profitable applications.

Consider SpaceX's low-Earth-orbit constellation of satellites to provide broadband connectivity. Because these satellites require less fuel to get into orbit and are less expensive to launch, they can deliver value while having a shorter expected lifecycle of just four or five years. In this time, technology will have advanced and the next generation of satellites will be ready to replace them.

Emerging trends

The tremendous interest in low-Earth-orbit constellations goes well beyond simply connecting the world's seven billion-plus people. There are countless applications possible with this technology. Using a traditional satellite, it can take up to a month to process an image. In contrast, a constellation of smaller craft can provide real-time imaging that can be used immediately to help firefighters on the ground, detect and track objects like planes using hyperspectral cameras and synthetic aperture radar, or transform how users navigate the planet, just to name a few examples.

Low-Earth-orbit satellites also can mean shorter missions, which reduces risk and costs. Using satellites in this way means a possible increase in the overall pace of innovation in space, moving to newer process nodes and packaging technologies much earlier. When the payload can be updated every five years instead of every 10 to 20 years, this enables mission specialists to do more with less each successive generation.

Among the major trends is the rise of on-orbit processing, which requires more compute and input/output (I/O) slots; this, in turn, is driving the move toward organic BGA [ball-grid array] packaging and away from legacy technologies like ceramic column-grid-attach solutions. Also seen: a sharp increase in development agility, resulting in faster evaluation, prototyping, and the launch of new technology. (Figure 1.)

[Figure 1| On-orbit processing requires more compute and I/O, which means a shift in packaging requirements.]

Challenges of designing for space

Operating in space presents some of the most challenging barriers to design: First, the environment is extreme and unforgiving, and systems must be ruggedized and designed for no single point of failure. Downtime for maintenance is not an option in space. In addition, designers must deal with challenges such as:

[Figure 2| The challenges of designing for space are detailed.]

Machine learning in space

The foundation of addressing these design challenges is to offload processing from the ground station and bring it on board. Rather than sending data and images to Earth for processing, and introducing all the latency associated with this, satellites will process data themselves and send information about what that data means instead. This requires satellites to support AI capabilities in orbit, including object detection and image classification, to start.

A key part of making on-orbit processing viable is understanding that AI is an ever-changing field of research and that machine learning (ML) models require continual optimization. First, ML models can adapt over time to become faster and more accurate. Second, the algorithms themselves change as new breakthroughs are made. Thus, space-based systems need a flexible and adaptive architecture that can change models and algorithms on the fly.

Because ML is involved, programmable software is not enough. ML is compute-intensive and requires hardware acceleration to provide real-time responsiveness. When the algorithms change, the hardware needed to accelerate them changes as well. Thus, an adaptive platform requires a combination of configurable software and hardware that can update in concert with each other. In short, to support on-orbit processing, systems need to be able to process, analyze, and reconfigure from the architecture up through to the application code.
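As a loose illustration of the software half of that update loop, here is a hypothetical sketch of hot-swapping an uplinked model by version number; the file layout and naming are invented, and a real system would also reconfigure the hardware acceleration (for example, an FPGA bitstream), which is not shown.

```python
# Hypothetical sketch: swap in the newest uplinked model at runtime.
# Assumes at least one "model_v<N>.json" file exists in the directory;
# hardware reconfiguration (e.g., loading a matching FPGA bitstream)
# is out of scope here.
import json
import pathlib

class AdaptiveInference:
    def __init__(self, model_dir: str = "models"):
        self.dir = pathlib.Path(model_dir)
        self.model = self._load_latest()

    def _load_latest(self) -> dict:
        def version(p: pathlib.Path) -> int:
            return int(p.stem.split("_v")[-1])   # model_v12.json -> 12
        latest = max(self.dir.glob("model_v*.json"), key=version)
        return json.loads(latest.read_text())

    def refresh(self) -> None:
        """Call after the ground station uplinks a new model file."""
        self.model = self._load_latest()
```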

Moving toward organic packaging

Delivering reliable system components that will operate through the long mission lives needed and the extreme environments found in space requires a completely different level of design, manufacturing, and testing. Quality control must work with design teams from the very start to achieve the levels of reliability required by the government.

For example, Six Sigma, an established and reliable leader in the defense and aerospace industry for over 30 years, is the sole supplier of solder column attachment to ceramic-grid-array packages, primarily used in space applications. While the government has actively sought out a second source, the processes and expertise required to provide the world-class reliability offered by Six Sigma are so rigorous that, to date, no other supplier has been able to achieve certification.

As the industry moves toward new process nodes like 7 nm technology, the dies are too large for legacy space-grade packaging and techniques like solder column attachment. Simply put, the processing requirements for on-orbit AI won't fit anymore. There's also the significant increase in I/O to consider.

As a result, the industry is beginning to move away from legacy packaging and toward organic packaging and flip-chip packaging for space-grade products. In addition to being able to support the larger die sizes and I/O needed, organic packaging's reliability has been proven in the commercial market, and it has a much wider ecosystem of support.

Of course, there are still challenges to overcome: Space development will not shift overnight. It takes years to qualify space-grade products, and the many legacy ecosystems in place will continue to need support. However, the defense sector is interested in having access to the latest technology, and the players understand that innovation means change.

Continued innovation in space-based design and systems

The defense and aerospace industries as well as any company considering space-based applications need technology that can provide the necessary performance, adaptability, and reliability for Space 2.0 applications. New technology alone is not enough, however. As systems become more complex, the difficulty in integrating components becomes more challenging. Even evaluating a simple ML platform can take weeks when developers must integrate components from multiple vendors themselves.

It's critical to understand the demanding requirements developers face while building reliable systems for space.

True innovation and on-orbit reconfigurability will be possible with:

Space 2.0 promises an exciting future. The ability of the private sector to launch its own systems brings new vision to the industry. On-orbit processing will extend the capabilities of space-based systems into viable commercial applications that improve quality of life around the world. True unlimited on-orbit reconfigurability provides the software and hardware flexibility these systems need to implement and accelerate real-time AI capabilities. The move to organic packaging will enable the industry to onboard the processing and I/O required for next-generation systems. OEMs will enjoy the many benefits of design agility as it becomes easier to evaluate, design, adapt, and reuse space-based IP.

Inderjit Singh is the Senior Director of the Assembly & Packaging Engineering Group at AMD. He has been in this role for the last 11 years as part of the Adaptive and Embedded Computing Group (formerly Xilinx). He has more than 31 years of assembly, manufacturing, package development, design, reliability, and chip-to-package interaction experience. He holds a bachelor of applied science degree, majoring in applied physics, from University Science Malaysia.

Minal Sawant is the Director for Aerospace & Defense Products at AMD. As part of the Adaptive and Embedded Computing Group (formerly Xilinx), she is responsible for driving the business strategy for AMD A&D solutions and drives enablement of new-generation platforms and architectures. Minal has supported defense, aerospace, and high-reliability markets for over 20 years. Minal holds a master's degree in electrical engineering from the University of Oklahoma.

AMD https://www.amd.com/en

Go here to read the rest:

An evolution in the industry: Top trends for Space 2.0 - Military Embedded Systems

Posted in Evolution | Comments Off on An evolution in the industry: Top trends for Space 2.0 – Military Embedded Systems

ZENVIA announces evolution of its business structure – Yahoo Finance

Posted: at 12:38 pm

Enhanced business structure to provide more flexibility and autonomy, promoting value generation and strengthening customer relationships

SÃO PAULO, June 22, 2022 /PRNewswire/ -- Zenvia Inc. ("ZENVIA" or "Company") (NASDAQ: ZENV), the leading cloud-based CX communications platform in Latin America empowering companies to transform their existing communications with end customers along their life cycle, announced today that it will be implementing changes in its corporate structure following the accelerated growth of its SaaS business.

The Company has focused on strengthening its three existing business lines: SaaS, CPaaS, and Consulting. To fully capture the potential created with the successful integration of D1 Smarkio and ongoing integration and optimization of Movidesk and SenseData, ZENVIA is tactically reorganizing its structure to allow for more autonomy when it comes to revenue generation activities by having teams exclusively dedicated to each business line.

As a result, Raphael Godoy, our former CMO, will become SaaS Chief Revenue Officer (CRO). Cristiano Franco will be CPaaS CRO, and Luca Bazuro will become the Consulting business line CRO. The Consulting business, resulting from the D1 Smarkio integration, will now support the adoption of Zenvia's SaaS and CPaaS products by the corporate market.

Raphael, Luca, and Cristiano will report to Cassio Bobsin, ZENVIA's founder and CEO. Rogério Perez, former CX Services Executive Director, will now report directly to Raphael Godoy.

Shay Chor, current Investor Relations Officer, will become ZENVIA's Chief Financial Officer (CFO), combining the financial, legal, and investor relations departments, and will report directly to Cassio Bobsin. Mariana Cambiaghi will remain as Finance Executive Director, responsible for accounting, financial controls & processes, and treasury. Mrs. Cambiaghi will report directly to Shay Chor, as will Laura Hirata, who remains the General Counsel.

Additional ZENVIA executive team members reporting directly to Cassio Bobsin include Gabriela Vargas, who will become Chief Marketing Officer (CMO), overseeing Strategy, Business Excellence, and Institutional Marketing; Lilian Lima, Chief Technology Officer (CTO), responsible for IT and Products; and Katiuscia Teixeira, Chief Human Resources Officer (CHRO), who will lead the People and Culture areas.

"This new organizational chart follows a natural and much needed evolution of our company's structure following the tremendous growth recorded in value-added solutions. We have strengthened our strategy as a customer experience platform and will now have a business structure dedicated to each respective target market. This strategic decision will unlock the true potential of what we have accomplished in the past two years, and will ignite us to continue creating unique experiences for the end consumer," said Cassio Bobsin, ZENVIA's CEO.

"I am honored to have the opportunity to further support ZENVIA's transformation and am confident that we will maintain the track record of delivering excellent results as we have done since the IPO, despite the challenging macro environment. We believe that this organizational change will better position us for a strong business expansion, especially in the SaaS area, and that our teams' size and structures are adequate to support this growth," says Shay Chor, ZENVIA's CFO.

About ZENVIA

ZENVIA is driven by the purpose of empowering companies to create unique experiences for customer communications through its unified end-to-end platform. ZENVIA empowers companies to transform their existing customer communications from non-scalable, physical, and impersonal interactions into highly scalable, digital-first and hyper-contextualized experiences across the customer journey. ZENVIA's unified end-to-end CX communications platform provides a combination of (i) SaaS focused on campaigns, sales teams, customer service and engagement, (ii) tools, such as software application programming interfaces, or APIs, chatbots, single customer view, journey designer, documents composer and authentication, and (iii) channels, such as SMS, Voice, WhatsApp, Instagram and Webchat. Its comprehensive platform assists customers across multiple use cases, including marketing campaigns, customer acquisition, customer onboarding, warnings, customer services, fraud control, cross-selling and customer retention, among others. ZENVIA's shares are traded on Nasdaq under the ticker ZENV.

Forward-Looking Statements

This press release contains "forward-looking statements" within the meaning of the "safe harbor" provisions of the Private Securities Litigation Reform Act of 1995. These forward-looking statements are made as of the date they were first issued and were based on current expectations, estimates, forecasts and projections as well as the beliefs and assumptions of management. Words such as "expect," "anticipate," "should," "believe," "hope," "target," "project," "goals," "estimate," "potential," "predict," "may," "will," "might," "could," "intend," variations of these terms or the negative of these terms and similar expressions are intended to identify these statements. Forward-looking statements are subject to a number of risks and uncertainties, many of which involve factors or circumstances that are beyond Zenvia's control.

Zenvia's actual results could differ materially from those stated or implied in forward-looking statements due to several factors, including but not limited to: our ability to innovate and respond to technological advances, changing market needs and customer demands, our ability to successfully acquire new businesses as customers, acquire customers in new industry verticals and appropriately manage international expansion, substantial and increasing competition in our market, compliance with applicable regulatory and legislative developments and regulations, the dependence of our business on our relationship with certain service providers, among other factors.

View original content:https://www.prnewswire.com/news-releases/zenvia-announces-evolution-of-its-business-structure-301572692.html

SOURCE Zenvia

View original post here:

ZENVIA announces evolution of its business structure - Yahoo Finance

Posted in Evolution | Comments Off on ZENVIA announces evolution of its business structure – Yahoo Finance
