If you flew your spaceship through a wormhole, could you make it out alive? Maybe… – SYFY WIRE

Can you already hear Morgan Freeman's sonorous voice, as if this were another episode of Through the Wormhole?

Astrophysicists have figured out a way to traverse a (hypothetical) wormhole, defying the usual thinking that wormholes (if they exist) would either take longer to get through than the rest of space or be microscopic. These wormholes just have to warp the rules of physics, which is totally fine, since they would exist in the realm of quantum physics. Freaky things could happen when you go quantum. If wormholes do exist, some of them might be large enough for a spacecraft not only to fit through, but to get from this part of the universe to anywhere else in one piece.

"Larger wormholes are possible with a special type of dark sector, a type of matter that interacts only gravitationally with our own matter. The usual dark matter is an example. However, the one we assumed involves a dark sector that consists of an extra-dimensional geometry," Princeton astrophysicist Juan Maldacena and grad student Alexey Milekhin told SYFY WIRE. They recently performed a new study that reads like a scientific dissection of what exactly happened to John Crichton's spaceship when it zoomed through a wormhole in Farscape.

"This type of larger wormhole is based on the realization that a five-dimensional spacetime could be describing physics at lower energies than the ones we usually explore, but that it would have escaped detection because it couples with our matter only through gravity," Maldacena and Milekhin said. "In fact, its physics is similar to adding many strongly interacting massless fields to the known physics, and for this reason it can give rise to the required negative energy."

While the existence of wormholes has never been proven, you could defend theories that they exist deep in the quantum realm. The problem is, even if they do exist, they are thought to be infinitesimal. Hypothetical wormholes would also take so long to cross that you'd basically be a space fossil by the time you reached the other end. Maldacena and Milekhin have found a theoretical way for a wormhole that could get you across the universe in seconds and manage not to crush your spacecraft. At least it would seem like seconds to you. To everyone else on Earth, it could be ten thousand years. Scary thought.

"Usually when people discuss wormholes, they have in mind 'short' wormholes: the ones for which the travel time would be almost instantaneous even for a distant observer. We think that such wormholes are inconsistent with the basic principles of relativity," the scientists said. "The ones we considered are 'long': for a distant observer, the path along normal space-time is shorter than through the wormhole. There is a time-dilation factor because the extreme gravity makes travel time very short for the traveller. For an outsider, the time it takes is much longer, so we have consistency with the principles of relativity, which forbid travel faster than the speed of light."
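The bookkeeping behind this "seconds for you, millennia for Earth" effect can be sketched with the standard special-relativistic time-dilation formula. This is a rough illustration only: the actual wormhole calculation involves the full five-dimensional geometry in the paper, and the numbers below are purely hypothetical.

```python
import math

def gamma_from_speed(beta: float) -> float:
    """Lorentz factor for a speed given as a fraction of c (0 <= beta < 1)."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def dilated_time(proper_time: float, gamma: float) -> float:
    """Elapsed time for a distant observer, given the traveler's own
    (proper) time and a time-dilation factor gamma."""
    return proper_time * gamma

# Purely illustrative: a trip that feels like 1 second to the traveler but
# spans ~10,000 years (~3.15e11 seconds) for a distant observer implies a
# dilation factor on the order of 3e11.
print(dilated_time(1.0, 3.15e11))  # seconds elapsed outside
print(gamma_from_speed(0.6))       # modest dilation (~1.25) at 60% of c
```

Note the asymmetry the scientists emphasize: the traveler's clock runs slow relative to the outside, so nothing ever outruns light as measured by a distant observer.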

For traversable wormholes to exist, the vacuum of space would have to be cold and flat to actually allow for what they theorize. Space is already cold. Just pretend that it's flat for the sake of imagining Maldacena and Milekhin's brainchild of a wormhole.

"These wormholes are big, the gravitational forces will be rather small. So, if they were in empty flat space, they would not be hazardous. We chose their size to be big enough so that they would be safe from large gravitational forces," they said.

Negative energy would also have to exist in a traversable wormhole, something classical physics forbids. In quantum physics, the concept of this exotic energy was explained by Stephen Hawking as a deficit of energy: two pieces of matter have less energy close together than far apart, because energy must be expended to separate them against the gravitational force pulling them back together. Fermions, which include subatomic particles such as electrons, protons, and neutrons (except that here they would need to be massless), would enter one end of the wormhole and travel in circles. They would come out exactly where they went in, which suggests that this modification of the vacuum energy can make it negative.

"Early theorized wormholes were not traversable; an observer going through a wormhole encounters a singularity before reaching the other side, which is related to the fact that positive energy tends to attract matter and light," the scientists said. "This is why spacetime shrinks at the singularity of a black hole. Negative energy prevents this. The main problem is that the particular type of negative energy that is needed is not possible in classical physics, and in quantum physics it is only possible in some limited amounts and for special circumstances."

Say you make it to a gaping wormhole ready to take you... nobody knows where. What would it feel like to travel through it? Probably not unlike Space Mountain, if you ask Maldacena and Milekhin. In their study, they described these wormholes as "the ultimate roller coaster."

The only thing a spaceship pilot would need to do, unlike Farscape's Crichton, who totally lost control, is get the ship in sync with the tidal forces of the wormhole so as to be in the right position to take off. These are the forces that push and pull an object toward or away from another depending on the difference in the objects' gravitational pull, and that gravity would power the spaceship through. This is why it would basically end up flying itself. But there are still obstacles.

"The problem is that every object which enters the wormhole will be accelerated to very high energies," the scientists said. "It means that a wormhole must be kept extremely clean to be safe for human travel. In particular, even the pervasive cosmic microwave radiation, which has very low energy, would be boosted to high energies and become dangerous for the wormhole traveler."

So maybe this will never happen. Wormholes may never actually be proven to exist. Even if they never are, it's wild to think about the way quantum physics could allow for a wormhole that you could coast right through.

Managing Complexity in the New Era of HPC – insideHPC

By Bill Wagner, CEO Bright Computing

Until recently, High Performance Computing (HPC) was a fairly mature and predictable area of information technology. It was characterized by a narrow category of applications used by a largely fixed set of industries running on predominantly Intel-based on-premise systems. But over the last few years, all of that has begun to change. New technologies, cloud, edge, and a broadening set of commercial use cases in the areas of data analytics and machine learning have set in motion a tsunami of change for HPC. This is no longer a tool for rocket scientists and the research elite. HPC is quickly becoming a strategic necessity for all industries that want to gain a competitive advantage in their markets, or at least keep pace with their industry peers in order to survive.

While HPC has given commercial users a powerful set of new tools that drive innovation, it has also introduced a variety of challenges to those organizations, including increased infrastructure costs, complexities associated with new technologies, and a lack of HPC know-how to take advantage of it. The challenges introduced by this new era of HPC have given rise to new implications for how companies execute their HPC strategies, with most embarking on a steep and risky learning curve to the detriment of their IT staff and budget.

On the technology side, options have never been more prevalent, with a wide range of choices in hardware, software, and even consumption models. New processing elements (Intel, AMD, ARM, GPUs, FPGAs, IPUs), containers (Kubernetes, Docker, and Singularity), and cloud options (hybrid and multi-cloud) have disrupted the HPC industry, challenging organizations to pick infrastructure solutions (both hardware and software) that can tackle their diversifying workloads while working together seamlessly.

In the past, HPC clusters were built with a fairly static mindset. The notion of combining x86 and ARM architectures in the same cluster was not even a consideration. Furthermore, extending your HPC cluster to the public cloud for additional capacity was something you planned to do down the road. Hosting containerized machine learning and data analytics applications on your HPC cluster harmoniously alongside traditional MPI-based modeling and simulation applications was on the wish list. Offering end users bare metal, VMs, and containers on the same cluster was unheard of, and deploying edge compute as an integral part of your core HPC infrastructure fell under the category of maybe someday. However, in today's new world of HPC, IT managers and infrastructure architects are feeling the pressure to make all these things happen right now. The availability of new, highly specialized hardware and software is both enticing and intimidating. If organizations don't take advantage of all that HPC offers, someone else will, and losing the race for competitive advantage can deal a devastating blow to businesses vying for market share.

In the days of traditional HPC, you built a static cluster and focused your energy on keeping it up and running for its lifespan. As such, research institutions and commercial HPC practitioners alike could get by with custom scripts that integrated a collection of open-source tools to manage their clusters. Integrating tools for server provisioning, monitoring, alerts, and change management is difficult, labor-intensive, and an ongoing maintenance burden, but possible nonetheless for organizations with the human resources and skill to do so. In the emerging new era of HPC, clusters are far from static and far more complex as a result. The need to leverage new types of processors and accelerators and servers from different manufacturers, to integrate with the cloud, to extend to the edge, to host machine learning and data analytics applications, and to offer end users VMs and containers alongside bare-metal servers raises the bar exponentially for organizations contemplating a do-it-yourself approach to building a cluster management solution.

Now more than ever before, there is an increasing need for a professional, supported cluster management tool that spans hardware, software, and consumption models for the new era of HPC. Bright Cluster Manager is a perfect example of a commercial tool with the features and built-in know-how to build and manage heterogeneous high-performance Linux clusters for HPC, machine learning, and analytics with ease. Bright Cluster Manager automatically builds your cluster from bare metal (setting up networking, user directories, security, DNS, and more), sits across an organization's HPC resources (whether on-premise, in the cloud, or at the edge), and manages them across workloads. Bright can also react to increasing demand for different types of applications and instantly reassign resources within the cluster to service high-priority workloads based on the policies you set. Intersect360 states, "Fundamentally, Bright Computing helps address the big question in HPC: how to match diverse resources to diverse workloads in a way that is both efficient today and future-proof for tomorrow." [1]

Bright Computing highlights the transition that one organization made from a home-grown approach to Bright Cluster Manager. The Louisiana Optical Network Infrastructure (LONI), a premier HPC and high-capacity middle-mile fiber-optic network provider for education and research entities in Louisiana, made the switch from its do-it-yourself HPC management setup to Bright Cluster Manager software to gain consistency, ease of use, and the ability to easily extend resources to the cloud.

"LONI had previously used a homegrown cluster management system that presented a myriad of challenges, including lack of a graphical user interface (GUI), daunting complexity for new employees, and proneness to out-of-sync changes and configurations," said LONI Executive Director Lonnie Leger. "Likewise, the do-it-yourself infrastructure we had placed constraints on end users due to a lack of knowledge continuity concerning cluster health, performance, and capability. By leveraging a commercial solution such as Bright Cluster Manager, we now have an enterprise-grade cluster management solution that embodies the required skills and expertise needed to effectively manage our HPC environment."

This decision to move from in-house, piecemeal open source to a fully supported commercial cluster management solution was born out of necessity for LONI. With a desire to diversify their services, they had quickly outgrown their DIY setup and HPC expertise. While expansion wasn't impossible, it became a daunting task as internal personnel and HPC expertise were limited. This example is but one of many in the new world of HPC. As more organizations try to navigate the challenge of managing the interdependency between hardware and software, dealing with hardware problems, isolating performance degradations, and keeping up with a constant demand for changes, the need for commercially supported cluster management solutions has become more important than ever.

All of the change taking place in HPC, which both breaks and broadens how we think about it, makes it necessary to remind ourselves what HPC really is. Intersect360 Research defines HPC as "the use of servers, clusters, and supercomputers, plus associated software tools, components, storage, and services, for scientific, engineering, or analytical tasks that are particularly intensive in computation, memory usage, or data management." [2] This definition is important because it recognizes that HPC can be much broader than it has been traditionally, and with that broadening comes a whole new level of complexity. The harsh reality is that as organizations embrace a broader definition of HPC to propel their business, they must come to terms with the complexity that must be overcome to manifest it.

With Bright Cluster Manager software, complexity is automated away and replaced with flexibility. Bright builds and pre-tests a turnkey high-performance cluster from a wizard based on your specifications; instruments the cluster with health checks and monitoring; provides detailed insight into resource utilization; dynamically assigns resources to service end-user workloads based on demand; extends your cluster to the public cloud for additional resources if desired; extends to the edge for centralized management of remote resources; supports mixed hardware environments; offers bare metal, VMs, or containers from the same cluster; and provides command-line, GUI, and API-based access to all functionality.

As stated by Intersect360 Research: "Data science and machine learning? Intel or AMD? GPUs or FPGAs? Docker or Kubernetes? Cloud, on-premise, or edge? AWS or Azure? Bright Cluster Manager lets users decide individually how to incorporate all of these transitions (some or all, mix and match, now or later) in a single HPC cluster environment. With so many independent trends continuing to push HPC forward, Bright Computing is aiming to be the company that helps users pull them all together." [3]

For more information about Bright Computing solutions for HPC, visit http://www.brightcomputing.com or email us at info@brightcomputing.com

[1] Intersect360 Research Paper: Bright Computing: Managing Multiple Paths to Innovation

[2] Intersect360 Research Paper: Bright Computing: Managing Multiple Paths to Innovation

[3] Intersect360 Research Paper: Bright Computing: Managing Multiple Paths to Innovation

Bright Computing is the leading provider of platform-independent commercial cluster management software. Bright Cluster Manager, Bright Cluster Manager for Data Science, and Bright OpenStack automate the process of installing, provisioning, configuring, managing, and monitoring clusters for HPC, data analytics, machine learning, and OpenStack environments.

Waymo Just Started Testing Its Driverless Trucks in Texas – Singularity Hub

It's been almost four years since Uber shipped 50,000 cans of beer across Colorado in a self-driving truck, the first-ever commercial shipment completed using self-driving technology. Now competitor Waymo is launching a much larger driverless trucking experiment.

With a new hub in Dallas, Waymo's heavy-duty trucks took to the Texas roads this week to start the company's road testing of its driverless fleet, which consists of 13 Peterbilt 18-wheelers complete with cameras, lidar, and on-board computers.

The trucks won't be running completely autonomously; they'll always have a safety driver on board, ready to take over at any moment. The company plans to hire local truckers for these jobs.

The trucks won't be carrying commercial goods yet, either, but they'll be loaded up with weights to mimic a commercial load. Waymo hasn't yet said how long the testing phase will last, or when it thinks its trucks will start operating fully autonomously.

From the sound of it, it's not likely to be soon, nor sudden; a company spokesperson told Trucks.com, "We will likely see fully driverless trucks begin to hit the road within the coming years, but it's not going to be a 'flip the switch' moment; achieving fully driverless happens gradually, guided by a safe and responsible approach."

Waymo was planning to roll out its truck testing in the spring, but was delayed by the pandemic. The company started using its fleet of autonomous Chrysler Pacifica minivans to map Texas and New Mexico roads in January. They chose these states because of their expansive and high-quality highway systems, good weather, and large trucking industry. Waymo competitor TuSimple also has a hub in Dallas and is currently conducting testing on Texas roads (and in July announced plans for a cross-country network of driverless trucks).

Waymo started out as the Google Self-Driving Car Project in 2009. Though it's still held by Alphabet, the company just had its first outside funding round in March of this year, raising a whopping $2.5 billion. Interestingly, CEO John Krafcik insists Waymo is not, in fact, a self-driving car company, instead focusing on its aim to build "the world's most experienced driver" (though the fact that that driver is not a person necessarily implies it's a computer). In practice, this highly experienced driver will be a package of both hardware and software that could be installed in cars and trucks.

Though some fear that the advent of self-driving trucks could put thousands of people out of a job, proponents of the technology make the opposite argument, citing a shortage of drivers that's causing truckers to be overworked.

Industry insiders envision self-driving tech acting more as a copilot than a replacement; for example, when they know they're about to be on a highway for a good long stretch, drivers could switch into fully autonomous mode and take a nap, look at their phones, and so on. Besides getting the rest they need, they'd also save time and reach their destinations faster.

The transition, as mentioned above, will be gradual; so though we don't know exactly when, we can be fairly certain that at some point in the not-too-distant future, driverless trucks will be transporting a lot more than beer, and not just in Texas.

Image Credit: Waymo

Baltimore Writer's Club: Q&A with JHU prof and author Andrew H. Miller – Baltimore Fishbowl

It's hard to imagine a more opportune time to contemplate the lives we're not leading. After five months of quarantine, however, finding the motivation to do so might be even harder. Where to begin? How to proceed? Luckily, Johns Hopkins English professor Andrew H. Miller has written the perfect guidebook to accompany us on this journey.

On Not Being Someone Else: Tales of Our Unled Lives, Miller's third book and first for a general audience, is a thought-provoking blend of criticism and memoir as well as a page-turning introduction to literature and philosophy. "Thoughts of contingency are viral," he writes:

Each of us no doubt could make a list: if my parents hadn't moved . . . when I was young; if I had gone to a different college; if I hadn't taken that one class with that one teacher; if [that particular relationship hadn't ended]; . . . if I had taken another job . . . What would my life be like? What would I be like?

Humans, Miller points out, are highly susceptible to the pathogenic tendency to ruminate. As a result, we go about our days haloed by evaporating, airborne unrealities, the specters of the people [we] might have become if we'd made different choices, followed different paths. Why, he asks, do we try to figure out who we are by focusing, fixating, on who we're not? What makes us so convinced that understanding the present requires looking back on a past that never existed in the first place? And why are certain fantasies about who we might have been so pertinacious?

Unled lives, Miller argues, are a largely modern preoccupation, a by-product of the post-Industrial drive to capitalize on resources and opportunities. Alas, how burdensome we've found this! Modern culture abounds with examples of the mental torment wrought by what Miller calls our singularity, the inescapable fact that each of us is limited to being a single self among many possible selves. He examines three classic, twentieth-century variations on this theme: Robert Frost's "The Road Not Taken" (1916), Henry James's "The Jolly Corner" (1908), and Frank Capra's It's a Wonderful Life (1946). As he goes on to show, even those Victorian novels most celebrated for their realism (the triple-decker tomes of Anthony Trollope and George Eliot, for instance) devote countless pages to describing what did not happen to their fictional protagonists. He notices a similar phenomenon on screen: Consider the number of films that depend on a character's sense of being misrecognized, taken as someone else, living the life of someone else, even while remaining him or herself.

As a work of criticism, On Not Being Someone Else is, fittingly, singular. Persuasive but never prescriptive, Miller distinguishes himself through his willingness to explore the affective dimensions of personal narrative. Urging us to examine our strivings, failings, and longings, On Not Being Someone Else at once invites us to reflect and encourages us to be ourselves. Even if we're just sitting at home, imagining the summer vacations we didn't take, and wondering when, perhaps whether, we'll ever be able to resume our wanderings.

BFB: In 2015, in the middle of working on this project, you went through a major life change. After more than two decades at Indiana University in Bloomington, you moved to Baltimore and started teaching at JHU. I'm curious . . . Did the experience of moving shape your thinking on the subject of life's crossroads?

Andrew H. Miller: That's a good question. I hadn't thought about the book and the move as being conjoined in a meaningful way, but you're right to point out that both were happening at the same time. And journeys play a big role in this book about paths untaken. But I'd begun the book long before the move; the most important way that the move and the book were connected wasn't in the content. Coming to Johns Hopkins, where faculty are given more time to focus on their research and writing, allowed me to finish the book. That's the straightforward, logistical answer. It was a very difficult move: My wife [Mary Favret, also an English professor at Hopkins] and I were very attached to our colleagues at IU, and our son was starting his sophomore year of high school. Earlier in our careers we'd had an opportunity to go to another school. That time we chose to stay at Indiana University; this time we moved.

BFB: What you're describing reminds me of the one traveler, two roads scenario in Frost's "The Road Not Taken." Is it fair to say that at some level you were reenacting that pivotal moment of decision-making, only this time you took the other path?

Andrew H. Miller: That's interesting . . . On some level, yes, but it's important to keep in mind that conceiving of life in this way, as a forking path, is not without limitations. In this case, I think comparing the two decisions risks making the whole situation appear more deliberate than it was. We never had any regrets about staying at IU; it was not at all as though we felt we had made a bad choice; on the contrary, actually. Time passed. Then the chance to come to JHU presented itself, and we realized that circumstances had changed: we were in our fifties; our children were getting ready to leave home; suddenly, the idea of having everything around us be new seemed like good fortune. So here we are.

BFB: The book itself represents another kind of departure. Whereas your two previous works were scholarly monographs, here you address an audience of general readers in a somewhat unconventional format. What prompted those shifts?

Andrew H. Miller: I knew pretty early on that I wanted to write for a wider audience. In part, that desire was born of my perceiving how others responded to the topic. I could tell this was a subject that resonated deeply and broadly. This made me think about what I myself respond to in criticism. Several of the writers I find most engaging (Roland Barthes, James Wood, Maggie Nelson) use short, essayistic modes to great effect. I decided to try my hand at it.

BFB: In the preface, you outline some of the reasons unlived lives are hard to write about. For me, some of the most arresting moments in On Not Being Someone Else were the glimpses of your own writing process and its attendant struggles. Beyond the difficulties inherent in the topic, what did you find most challenging about this book?

Andrew H. Miller: Part of the appeal of this project was that it forced me to confront a new set of demands. I wanted to write about my academic work and my personal experience, and I wanted to do so in my own natural voice, in a conversational way. At the same time, I felt obligated not to betray my discipline. I also felt obligated not to betray my audience. Sometimes these imperatives were in conflict, and I had to struggle to find what seemed like the right balance.

Let me clarify what I mean. One of the main things that I wanted to do was leave readers with some work to do, something to keep thinking about after they close the book. I also wanted to give them plenty of space to disagree with my ideas and to arrive at their own interpretations. In that way, the book has the potential to become theirs, each individual reader's, I mean, and to remain alive in their thoughts . . .

That vitality was important to me, and at a certain point I could see that in order to foster it, I had to hold back. I had to remind myself that the point was not to be exhaustive, not to have the last word. And this meant resisting some conventions of academic discourse, and restraining my own impulse to split philosophical hairs.

BFB: What you're saying reminds me of something you wrote in the introduction: "only if we acknowledge what an author has not done can we appreciate what he has." That's in reference to Henry James's uncanny knack for letting the reader see simultaneously the achieved work of art and its unrealized possibilities. Was that one of your goals?

Andrew H. Miller: Well . . . no and yes! On one hand, no, I didn't consciously set out to write something with that sort of duality. On the other, I can appreciate that I was drawn to unled lives in part because it's such a paradoxical topic; it has that duality built in, and that appeals to me. So, in that spirit, yes, one of my goals was to show how thinking about the lives you haven't led can, by a kind of counter-motion, lead you to think about the life you're leading. I find consolation in that, and I hope my readers do, too.

Andrew H. Miller will discuss On Not Being Someone Else: Tales of Our Unled Lives with William Egginton in a virtual event on Monday, August 31, at 6:30 p.m. Click here to register. This Zoom event is part of the Humanities in the Village series sponsored by The Ivy Bookshop and Bird in Hand.

Jennie Hann received her PhD in English from Johns Hopkins. The recipient of an Emerging Critics fellowship from the National Book Critics Circle, she's writing a biography of the poet Mark Strand.

Jompame raises almost 400 thousand pesos to help a Dominican boy who fished at curfew for dinner – Dominican Today

Jompame, the online collection platform for social assistance, has raised almost 400 thousand pesos before noon today to help feed Alexander de León, 13, who was stopped by several policemen at night, at the beginning of curfew, as he returned from fishing for his dinner.

By 11:27 am, 306 people had donated RD$375,590.75. The goal, according to the platform's page, is RD$400,000, enough to cover one year of food for the boy.

The initiative in favor of the minor, who was stopped by the four policemen near Los Negros beach in Azua while carrying a sack of crabs, was launched alongside an Instagram video in which the child explains that, despite the financial limitations of the home he shares with his father and grandmother, everything that arrives is shared.

He says he is proud of his father, for whom he asks for help.

The four policemen who stopped him made a financial contribution that night and took him to his home.

Alexander's mother passed away when he was three years old.

Jompame was founded by Katherine Motyka, the second Dominican woman to be accepted at Singularity University at NASA.

Katherine graduated first in her class in Industrial Engineering and earned a scholarship to study Materials and Manufacturing Science at Jönköping University in Sweden. It was only after completing her studies that she discovered the world of entrepreneurship, winning the competitive Startup Weekend Santo Domingo competition twice within a very short time.

Top Theoretical Cosmologist Claims That The Term Big Bang is Misleading – Webby Feed

We hear so much about the Big Bang in astronomy and astrophysics nowadays that we all have at least a rough idea of what the theory claims. All matter, time, and space were once crammed into a singularity smaller than the tip of a needle. For apparently no reason, the singularity violently erupted, sending all its material into an everlasting expansion.

Most astronomers rely on the Big Bang theory as an explanation for how the Universe was born. Einstein's general relativity, the permanent expansion of space, and the cosmic microwave background radiation are all considered solid proof that the theory is correct. But there are still some shortcomings, whether scientists admit it or not. What existed before the Big Bang? Where did the singularity get that much matter? How were the laws of physics born? These are only three of many questions that don't seem to have a compelling answer.

Phillip James Edwin Peebles (better known as Jim Peebles) is one of the world's leading theoretical cosmologists and a Nobel Laureate. During an exclusive e-mail interview for The Business Standard, Peebles spoke about the Big Bang and other aspects of cosmology. Although the great scientist doesn't deny the validity of the famous theory, he says the following:

"Incidentally, the name 'big bang' is misleading because it connotes an event in spacetime. The theory describes the close-to-homogeneous evolution of the universe from very large densities and temperatures. But the name seems fixed, so I use it."

He continued by saying:

"And I might add that there are ideas about what the universe was doing before it was expanding, as inflation, but we have little in the way of tests."

Jim Peebles has made major theoretical contributions to topics such as dark matter, the cosmic microwave background, primordial nucleosynthesis, and structure formation. The scientist was also awarded half of the Nobel Prize in Physics in 2019 for theoretical discoveries in cosmology.


John Boyega Says He'd Love To Play Red Hood Since He's Too Old For Static Shock – Fortress of Solitude

After declaring that he's moved on from Star Wars, British actor John Boyega seems to have his sights set on a role in a DC movie.

The actor made the comments on Twitter after a fan suggested that he should play Static, a character who gains electricity-based abilities after being exposed to a chemical explosion. He later becomes a member of the Teen Titans, a move made possible after Milestone Comics closed and Static was incorporated into the DC Universe.

As announced at DC FanDome, a Static Shock movie is now officially in the works, with director Reginald Hudlin stating that the project is being developed as a theatrical feature at Warner Bros.

Boyega, who often interacts with fans on social media, responded to the Tweet saying he's too old and would love to see a newcomer play the part instead.

Several fans then joined the conversation to suggest other DC roles for the former Star Wars actor.

When one person limited his casting options to John Stewart's Green Lantern, Boyega said, "Lmaooooo too funny. I can't be Red Hood? Damn."

The Red Hood is an alias used by various DC characters. But the most widely known person to assume the identity in the comics' main continuity is Jason Todd, a former Robin who is killed by the Joker and then resurrected with a very different personality.

While the character hasn't made an appearance on the big screen yet, the DC show Titans is set to explore the Red Hood in its third season. The story will see Jason Todd (Curran Walters) ditch his Robin costume in favour of the Red Hood in order to take revenge on his old team after a big falling out.

While Boyega rose to fame as Finn in the Star Wars sequel trilogy, he has also had starring roles in Detroit and Pacific Rim: Uprising. He's also given stellar performances in the Steve McQueen series Small Axe (on BBC) and is set to star in a new feature film called Naked Singularity.

With so much versatility in his repertoire, John Boyega would be a great addition to any superhero movie he signs on for.

You can check out Boyega's exchange with fans on the original Twitter post below.


Singularity Viewer

Singularity Viewer is an exciting client for Second Life and OpenSim, which strives to combine the beloved look and feel of Viewer 1.23 with the latest and greatest available technology, and to stay compatible with future Second Life changes and features. Singularity is an open-source project powered entirely by volunteer force and willpower!

Singularity 1.8.9: Animesh, Bento, BoM, VMM and Experiences!! (posted Apr 1, 2020, 6:09 PM by Liru Frs; updated May 7, 2020, 1:56 AM)

This release is a colossal leap forward, and it comes with full support for Animesh, Bento, Bakes on Mesh, Viewer Managed Marketplace, and HTTP asset fetching!

Since LL has turned off support for the old method of fetching assets, Liru, Shyotl, and two new additions to the team, Bittenbythedark and Router Gray, have worked really hard to restore OS X support to offer alongside 1.8.9 in an emergency Mac release. Our emergency Mac release has support for everything mentioned here, and everything from previous releases, but certain random things may be buggy.

Graphics

We realize this release has been a long wait for those of you not on the alpha or test builds, and we're working on infrastructure to deliver speedier updates in the future. In closing, the team would like to thank everyone credited above; Asriazh, Beware, Cheesy, Gooz, Kitty Barnett, MyBrains, Nomade Zhao, Sappa, Tazy, Testicular Slingshot, Torric, Yoshiko; everyone who tested the betas, alphas and test builds; Stashed.io for reducing our Windows build time; the Alchemy Viewer Team for sharing the infrastructure; everyone who supports us and you, for sticking around through this giant wall of text. Now get out there and enjoy the new release!

Yes, we have alpha builds; yes, they support bento. The link has been in the sidebar for so long.

Also, I apologize for not having posted this sooner. We plan to have more builds out soon.

There are plenty of features and fixes in these builds as well, and we'd love to have your feedback.

Our alpha builds are what become our release builds, so please don't be scared off by the title; they're usually more stable than our last release! (And if they're not, you'd better tell us before they become our release!)

Anyways, Happy New Year from the Singularity Team!

This past year has been a tumultuous one for our team: one of our developers passed away, another left to pursue other interests, and we were hampered in our ability to update and test the viewer by a lack of infrastructure and by hardware issues.

It was not all bad: we recently gained a new developer, miKa-Pyon, most of our hardware issues were resolved, and we're working on improvements with renewed vigor.

Nevertheless, Singularity is constantly evolving, and as we move on to new technologies, we cannot retain support for older platforms. Our toolchain has been updated: we now use modern programming language features which require a recent GCC or Microsoft Visual Studio 2015. Unfortunately, as part of this required toolchain update, some older platforms have become too burdensome, if not impossible, to support. On the upside, thanks to these newer language features we can write better code and get better performance out of the viewer. Not only can we use modern C++, we are also able to share code and prebuilt libraries with our sister project, Alchemy Viewer; this is highly beneficial, as the development workload is now halved between our two projects. Due to these updates, those compiling our project will find that we now make use of autobuild.

While making 32-bit Linux builds is very high on our agenda, it requires a large effort and did not make this deadline. We are currently in need of a Mac developer to help us get Singularity for macOS back on track and building again.

Now, on to the other changes (There are a lot of them):

Skin Changes:

Gemini now included as part of standard release. The default skin is still Dark, but this skin is a nice dark-themed alternative.

We've recreated the skins package.

It now has Dark Green skin by SLB Wirefly.

Skin files have been cleaned up, resulting in a substantially smaller on-disk size. (Shyotl)

Translation Changes:

As usual, the French and Spanish translations were heavily updated as we evolved (Nomade Zhao, Damian Zhaoying)

The German translation is now being upkept by miKa!

Grid Compatibility Changes:

Latest inventory protocol (AISv3) support has been merged in to maintain future compatibility with the SecondLife grid. (Shyotl)

QtWebkit browser has been replaced with a Chromium variant.

SSL library has been updated and includes TLS 1.2 support.

SLVoice (vivox) has been updated to latest version

Serverside baking (Sunshine) implementation has been updated. (Shyotl)

Avatar render info is now reported to the sim.

General New Features:

Added Mouselook IFF feature. Displays name under crosshair matching coloration of avatar on minimap (Input Prefs) (Alchemy)

Mouselook can now show position and health, if damage is on. (Input Prefs) (Alchemy)

Added the Region Tracker which allows monitoring of multiple regions in a single floater (Alchemy)

Hover height slider added to Quick Settings Panel popup (bottom right corner)

/hover command has been added. Supports values -2.0 through 2.0.

Folder links now support drag-and-drop operations, as well as pasting.

Teleport and Look options have been added to Area Search.

Antispam supports filtering receipt of landmarks. (Adv. Chat->Antispam)

You can now edit descriptions and names of multiple objects selected in bulk. (Liru Frs)

A Cloud Setting option has been added to the Windlight Floater (Alchemy)

Edit Linked Parts display of impact has been improved.

You can now pan in and out with alt-shift + pgup/pgdn/e/c (ctrl-shift + pgup/pgdn/e/c in Linux)

You can now hide your own lookat beacon (System->Security & Privacy) (Alchemy)

Local Gesture Preview Feature:

Adds a dropdown option to gesture preview button to preview locally.

Chat done in gestures will be local, along with sounds and animations being played locally

General Interface Changes:

Drop Targets, like the ones in the autoresponse preferences now have a clear button. (Liru Frs)

Double-Click Autopilot is now offered in System Prefs->General.

Made the snapshot floater shorter by changing that radio group to a combo box~

Search All tab now behaves similarly to modern Web search, as the original Search All page is no longer maintained by Linden Lab and no longer behaves properly.

Available Toolbar Button Changes:

Added Quit, Region Tracker

Autoreplace button now comes with a toggle to turn it on and off

Menu changes:

Help->Grid Status

Fake Away, Busy, and Away are now in World->Status

Option to sit on away added to World->Status (Alchemy)

World->Status->Autoresponse

Singularity->Resync Animations (alternative to /resync command)

New list right-click menu options:

Multiple avatars can be selected to invite to group.

Share, which lets you easily send an inventory item directly from your inventory to someone else or to a group of people.

Chat History (opens your chat log, if you have one with the selected person)

Radar is now more functional and optimized (Liru Frs, Mika Pyon)

You can now see avatar distance in alerts about their range (right click the radar, Alerts->"Include distance in alerts")

Fixed the rare spam of enter/leave messages

No more rate control; updates happen instantly as we get the messages.

Title now updates more appropriately

No longer spams chat on teleport (Sim Federal of Alchemy)

Improvements to friends list:

Search should now work right

Online Count should be more accurate.

When changing friend rights, only the checkboxes are locked, no longer the entire list.

Avatar Profile Changes:

Chat UI Changes:

IM Tabs can now be configured to show names in different formats (Adv. Chat->Chat UI)

Autoresponse Changes:

New UI setup in preferences

Autoresponse to muted changes

Option to send autoresponse only if away

Can now change autoresponse settings from World->Status->autoresponse

Add the option to block conferences from nonfriends exclusively (Communication prefs)

We fixed a long-standing issue where chat from some objects would not get linked because we didn't see them in the world for whatever reason.

Clicking a [Friend] is Online notification will now open an IM with them (Liru Frs)

RLVa updates (Liru Frs)

Don't filter parts of words out just because they match a name under restraint

Escape potentially dirty strings before using them as regex in replace_all_regex

@shownametags support

Radar no longer hides when @shownames restricted, it just hides names

Radar will alert when @shownames restricted, but not when @shownametags restricted

Radar will not offer menu when @shownames or @shownametags restricted, and the IM and Profile buttons will disable.

When @shownames restricted, allow offering calling cards now.

Support RLV 2.9 features: @camunlock, @camavdist, @camzoommax, @camzoommin, @camdistmax, @camdistmin (Liru Frs)

IFF respects RLVa

@adjustheight now supports hover height instead of being deprecated (Kitty Barnett)

Performance, Stability, and Maintenance:

FMOD Ex has been updated to FMOD Studio. (Shyotl, Drake)

Build infrastructure has been migrated to autobuild. (Shyotl, Drake)


Gravitational singularity – Wikipedia

A gravitational singularity, spacetime singularity or simply singularity is a location in spacetime where the gravitational field of a celestial body is predicted to become infinite by general relativity in a way that does not depend on the coordinate system. The quantities used to measure gravitational field strength are the scalar invariant curvatures of spacetime, which include a measure of the density of matter. Since such quantities become infinite at the singularity, the laws of normal spacetime break down.[1][2]

Gravitational singularities are mainly considered in the context of general relativity, where density apparently becomes infinite at the center of a black hole, and within astrophysics and cosmology as the earliest state of the universe during the Big Bang. Physicists are undecided whether the prediction of singularities means that they actually exist (or existed at the start of the Big Bang), or that current knowledge is insufficient to describe what happens at such extreme densities.

General relativity predicts that any object collapsing beyond a certain point (for stars this is the Schwarzschild radius) would form a black hole, inside which a singularity (covered by an event horizon) would be formed.[3] The Penrose–Hawking singularity theorems define a singularity to have geodesics that cannot be extended in a smooth manner.[4] The termination of such a geodesic is considered to be the singularity.
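As a rough numerical illustration of the collapse threshold mentioned above, here is a minimal sketch (not from the article; the helper name and constant values are my own, using the standard formula r_s = 2GM/c²):

```python
# Minimal sketch: the Schwarzschild radius r_s = 2GM/c^2 is the size
# below which a given mass must be compressed to form a black hole.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius in meters below which mass_kg collapses into a black hole."""
    return 2 * G * mass_kg / C**2

# The Sun would have to be squeezed inside roughly 3 km.
print(f"{schwarzschild_radius(M_SUN):.0f} m")  # ≈ 2954 m
```

Note the radius scales linearly with mass, so a supermassive black hole of a billion solar masses has a horizon of a few billion kilometers.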

The initial state of the universe, at the beginning of the Big Bang, is also predicted by modern theories to have been a singularity.[5] In this case the universe did not collapse into a black hole, because currently-known calculations and density limits for gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. Neither general relativity nor quantum mechanics can currently describe the earliest moments of the Big Bang,[6] but in general, quantum mechanics does not permit particles to inhabit a space smaller than their wavelengths.[7]

Many theories in physics have mathematical singularities of one kind or another. Equations for these physical theories predict that some quantity becomes infinite or increases without limit. This is generally a sign of a missing piece in the theory, as in the ultraviolet catastrophe, renormalization, and the instability of a hydrogen atom predicted by the Larmor formula.

Some theories, such as the theory of loop quantum gravity, suggest that singularities may not exist.[8] This is also true for such classical unified field theories as the Einstein–Maxwell–Dirac equations. The idea can be stated in the form that due to quantum gravity effects, there is a minimum distance beyond which the force of gravity no longer continues to increase as the distance between the masses becomes shorter, or alternatively that interpenetrating particle waves mask gravitational effects that would be felt at a distance.

There are different types of singularities, each with physical features relevant to the theories from which they originally emerged, such as the different shapes of the singularities: conical and curved. They have also been hypothesized to occur without event horizons, the structures that delineate one spacetime section from another and beyond which events cannot propagate; these are called naked singularities.

A conical singularity occurs when there is a point where the limit of every diffeomorphism-invariant quantity is finite, in which case spacetime is not smooth at the point of the limit itself. Thus, spacetime looks like a cone around this point, where the singularity is located at the tip of the cone. The metric can be finite everywhere if a suitable coordinate system is used.

Examples of such conical singularities include a cosmic string and a Schwarzschild black hole.[9]

Solutions to the equations of general relativity or another theory of gravity (such as supergravity) often result in encountering points where the metric blows up to infinity. However, many of these points are completely regular, and the infinities are merely a result of using an inappropriate coordinate system at this point. In order to test whether there is a singularity at a certain point, one must check whether at this point diffeomorphism invariant quantities (i.e. scalars) become infinite. Such quantities are the same in every coordinate system, so these infinities will not "go away" by a change of coordinates.

An example is the Schwarzschild solution that describes a non-rotating, uncharged black hole. In coordinate systems convenient for working in regions far away from the black hole, a part of the metric becomes infinite at the event horizon. However, spacetime at the event horizon is regular. The regularity becomes evident when changing to another coordinate system (such as the Kruskal coordinates), where the metric is perfectly smooth. On the other hand, in the center of the black hole, where the metric becomes infinite as well, the solutions suggest a singularity exists. The existence of the singularity can be verified by noting that the Kretschmann scalar, being the square of the Riemann tensor, $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$, which is diffeomorphism invariant, is infinite.
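The horizon-versus-center distinction can be made concrete with the standard Schwarzschild result $K = 12\,r_s^2/r^6$ for the Kretschmann scalar (this closed form is an assumption brought in for illustration; it is not quoted in the text above):

```python
# Sketch: the Kretschmann invariant K = 12 * r_s**2 / r**6 is finite at
# the event horizon (r = r_s) but diverges as r -> 0, which is why the
# horizon is only a coordinate artifact while the center is a genuine
# curvature singularity.

def kretschmann(r: float, r_s: float) -> float:
    """Kretschmann scalar for a Schwarzschild black hole of horizon radius r_s."""
    return 12 * r_s**2 / r**6

R_S = 1.0                                   # measure r in units of r_s
print(kretschmann(R_S, R_S))                # 12.0 -- finite at the horizon
print(kretschmann(0.01 * R_S, R_S) > 1e12)  # True -- blows up near r = 0
```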

While in a non-rotating black hole the singularity occurs at a single point in the model coordinates, called a "point singularity", in a rotating black hole, also known as a Kerr black hole, the singularity occurs on a ring (a circular line), known as a "ring singularity". Such a singularity may also theoretically become a wormhole.[10]

More generally, a spacetime is considered singular if it is geodesically incomplete, meaning that there are freely-falling particles whose motion cannot be determined beyond a finite time, namely the point of reaching the singularity. For example, any observer inside the event horizon of a non-rotating black hole would fall into its center within a finite period of time. The classical version of the Big Bang cosmological model of the universe contains a causal singularity at the start of time (t=0), where all time-like geodesics have no extensions into the past. Extrapolating backward to this hypothetical time 0 results in a universe with all spatial dimensions of size zero, infinite density, infinite temperature, and infinite spacetime curvature.

Until the early 1990s, it was widely believed that general relativity hides every singularity behind an event horizon, making naked singularities impossible. This is referred to as the cosmic censorship hypothesis. However, in 1991, physicists Stuart Shapiro and Saul Teukolsky performed computer simulations of a rotating plane of dust that indicated that general relativity might allow for "naked" singularities. What these objects would actually look like in such a model is unknown. Nor is it known whether singularities would still arise if the simplifying assumptions used to make the simulation were removed. However, it is hypothesized that light entering a singularity would similarly have its geodesics terminated, thus making the naked singularity look like a black hole.[11][12][13]

Disappearing event horizons exist in the Kerr metric, which describes a spinning black hole in a vacuum, if the angular momentum $J$ is high enough. Transforming the Kerr metric to Boyer–Lindquist coordinates, it can be shown[14] that the coordinate (which is not the radius) of the event horizon is $r_\pm = \mu \pm (\mu^2 - a^2)^{1/2}$, where $\mu = GM/c^2$ and $a = J/Mc$. In this case, "event horizons disappear" means that the solutions are complex for $r_\pm$, i.e. $\mu^2 < a^2$. This corresponds to a spin $J > M^2$ in geometrized units, i.e. the spin exceeds what is normally viewed as the upper limit of its physically possible values.
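The Kerr horizon condition can be checked numerically. A hedged sketch in SI units (the function name and constant values are my own, implementing the $r_\pm = \mu \pm \sqrt{\mu^2 - a^2}$ formula above):

```python
import math

# Kerr horizons: r_pm = mu ± sqrt(mu^2 - a^2), with mu = GM/c^2 and
# a = J/(Mc). When a > mu the square root turns imaginary and the
# event horizons disappear.

G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m/s
M_SUN = 1.989e30    # kg

def kerr_horizons(mass_kg: float, angular_momentum: float):
    """Return (r_plus, r_minus) in meters, or None when the spin is too
    high for an event horizon to exist (a naked singularity)."""
    mu = G * mass_kg / C**2
    a = angular_momentum / (mass_kg * C)
    if a > mu:
        return None
    root = math.sqrt(mu**2 - a**2)
    return mu + root, mu - root

mu = G * M_SUN / C**2            # ~1477 m for one solar mass
j_extremal = M_SUN * C * mu      # spin at which a == mu (extremal hole)
assert math.isclose(kerr_horizons(M_SUN, 0.0)[0], 2 * mu)  # Schwarzschild limit
assert kerr_horizons(M_SUN, 1.1 * j_extremal) is None      # over-spun: no horizon
```

For zero spin the outer horizon reduces to the Schwarzschild value $2\mu$, and as the spin approaches the extremal value the inner and outer horizons merge at $r = \mu$.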

Similarly, disappearing event horizons can also be seen with the Reissner–Nordström geometry of a charged black hole if the charge $Q$ is high enough. In this metric, it can be shown[15] that the singularities occur at $r_\pm = \mu \pm (\mu^2 - q^2)^{1/2}$, where $\mu = GM/c^2$ and $q^2 = GQ^2/(4\pi\epsilon_0 c^4)$. Of the three possible cases for the relative values of $\mu$ and $q$, the case where $\mu^2 < q^2$ causes both $r_\pm$ to be complex. This means the metric is regular for all positive values of $r$; in other words, the singularity has no event horizon. This corresponds to a charge $Q > M$ in geometrized units, i.e. the charge exceeds what is normally viewed as the upper limit of its physically possible values. Also, actual astrophysical black holes are not expected to possess any appreciable charge.
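The charged case follows the same pattern; a hedged sketch (constants and names are my own, implementing the $r_\pm$ formula above):

```python
import math

# Reissner–Nordström horizons: r_pm = mu ± sqrt(mu^2 - q^2), with
# mu = GM/c^2 and q^2 = G*Q^2/(4*pi*eps0*c^4). When q^2 > mu^2 the
# horizons vanish (Q > M in geometrized units).

G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m/s
EPS0 = 8.854e-12    # vacuum permittivity, F/m
M_SUN = 1.989e30    # kg

def rn_horizons(mass_kg: float, charge_c: float):
    """Return (r_plus, r_minus) in meters, or None when the charge is
    too high for an event horizon to exist."""
    mu = G * mass_kg / C**2
    q2 = G * charge_c**2 / (4 * math.pi * EPS0 * C**4)
    if q2 > mu**2:
        return None
    root = math.sqrt(mu**2 - q2)
    return mu + root, mu - root

# The critical charge Q = M*sqrt(4*pi*eps0*G) is absurdly large
# (~1.7e20 coulombs for one solar mass), one reason real astrophysical
# black holes are expected to be essentially neutral.
q_crit = M_SUN * math.sqrt(4 * math.pi * EPS0 * G)
assert rn_horizons(M_SUN, 0.99 * q_crit) is not None
assert rn_horizons(M_SUN, 1.01 * q_crit) is None
```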

A black hole possessing the lowest M {displaystyle M} value consistent with its J {displaystyle J} and Q {displaystyle Q} values and the limits noted above, i.e., one just at the point of losing its event horizon, is termed extremal.

Before Stephen Hawking came up with the concept of Hawking radiation, the question of black holes having entropy had been avoided. However, this concept demonstrates that black holes radiate energy, which conserves entropy and resolves the incompatibility with the second law of thermodynamics. Entropy, however, implies heat and therefore temperature. The loss of energy also implies that black holes do not last forever, but rather evaporate or decay slowly. Black hole temperature is inversely related to mass.[16] All known black hole candidates are so large that their temperature is far below that of the cosmic background radiation, which means they gain energy on net by absorbing this radiation. They cannot begin to lose energy on net until the background temperature falls below their own temperature. This will occur at a cosmological redshift of more than one million, rather than the thousand or so since the background radiation formed.
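The inverse mass-temperature relation above can be made quantitative with the standard Hawking formula $T = \hbar c^3/(8\pi G M k_B)$ (the formula itself is brought in for illustration; it is not quoted in the text):

```python
import math

# Hedged sketch: Hawking temperature is inversely proportional to mass,
# so stellar-mass black holes sit far below the ~2.7 K cosmic background
# and currently absorb more radiation than they emit.

HBAR = 1.055e-34    # reduced Planck constant, J*s
C = 2.998e8         # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2
K_B = 1.381e-23     # Boltzmann constant, J/K
M_SUN = 1.989e30    # kg
T_CMB = 2.725       # present cosmic background temperature, K

def hawking_temperature(mass_kg: float) -> float:
    """Black hole temperature in kelvin; smaller holes are hotter."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

t_sun = hawking_temperature(M_SUN)   # ~6e-8 K, far below the CMB
assert t_sun < T_CMB                 # so a solar-mass hole gains energy on net
```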


The Global Work Crisis: Automation, the Case Against Jobs, and What to Do About It – Singularity Hub

The alarm bell rings. You open your eyes, come to your senses, and slide from dream state to consciousness. You hit the snooze button, and eventually crawl out of bed to the start of yet another working day.

This daily narrative is experienced by billions of people all over the world. We work, we eat, we sleep, and we repeat. As our lives pass day by day, the beating drums of the weekly routine take over and years pass until we reach our goal of retirement.

We repeat the routine so that we can pay our bills, set our kids up for success, and provide for our families. And after a while, we start to forget what we would do with our lives if we didn't have to go back to work.

In the end, we look back at our careers and reflect on what we've achieved. It may have been the hundreds of human interactions we've had; the thousands of emails read and replied to; the millions of minutes of physical labor, all to keep the global economy ticking along.

According to Gallup's World Poll, only 15 percent of people worldwide are actually engaged with their jobs. The current state of work is not working for most people. In fact, it seems we as a species are trapped in a global work crisis, which condemns people to cast away their time just to get by in their day-to-day lives.

Technologies like artificial intelligence and automation may help relieve the work burdens of millions of peoplebut to benefit from their impact, we need to start changing our social structures and the way we think about work now.

Automation has been ongoing since the Industrial Revolution. In recent decades it has taken on a more elegant guise, first with physical robots in production plants, and more recently with software automation entering most offices.

The driving goal behind much of this automation has always been productivity and hence, profits: technology that can act as a multiplier on what a single human can achieve in a day is of huge value to any company. Powered by this strong financial incentive, the quest for automation is growing ever more pervasive.

But if automation accelerates or even continues at its current pace and there arent strong social safety nets in place to catch the people who are negatively impacted (such as by losing their jobs), there could be a host of knock-on effects, including more concentrated wealth among a shrinking elite, more strain on government social support, an increase in depression and drug dependence, and even violent social unrest.

It seems as though we are rushing headlong into a major crisis, driven by the engine of accelerating automation. But what if instead of automation challenging our fragile status quo, we view it as the solution that can free us from the shackles of the Work Crisis?

In order to undertake this paradigm shift, we need to consider what society could potentially look like, as well as the problems associated with making this change. In the context of these crises, our primary aim should be a system where people are not obligated to work to generate the means to survive. This removal of work should not threaten access to food, water, shelter, education, healthcare, energy, or human value. In our current system, work is the gatekeeper to these essentials: one can only access them (and even then often in a limited form) if one has a job that affords them.

Changing this system is thus a monumental task. This comes with two primary challenges: providing people without jobs with financial security, and ensuring they maintain a sense of their human value and worth. There are several measures that could be implemented to help meet these challenges, each with important steps for society to consider.

Universal basic income (UBI)

UBI is rapidly gaining support, and it would allow people to become shareholders in the fruits of automation, which would then be distributed more broadly.

UBI trials have been conducted in various countries around the world, including Finland, Kenya, and Spain. The findings have generally been positive on the health and well-being of the participants, and showed no evidence that UBI disincentivizes work, a common concern among the idea's critics. The most recent popular voice for UBI has been that of former US presidential candidate Andrew Yang, who now runs a non-profit called Humanity Forward.

UBI could also remove wasteful bureaucracy in administering welfare payments (since everyone receives the same amount, there's no need to prevent false claims), and promote the pursuit of projects aligned with people's skill sets and passions, as well as quantifying the value of tasks not recognized by economic measures like Gross Domestic Product (GDP). This includes looking after children and the elderly at home.

How a UBI can be initiated, with political will and social backing, and paid for by governments has been hotly debated by economists and UBI enthusiasts. Variables like how much the UBI payments should be, whether to implement taxes such as Yang's proposed value-added tax (VAT), whether to replace existing welfare payments, the impact on inflation, and the impact on jobs from people who would otherwise look for work require additional discussion. However, some have predicted that UBI is inevitable as a result of automation.

Universal healthcare

Another major component of any society is the healthcare of its citizens. A move away from work would further require the implementation of a universal healthcare system to decouple healthcare from jobs. Currently in the US, and indeed many other economies, healthcare is tied to employment.

Universal healthcare such as Medicare in Australia is evidence for the adage "prevention is better than cure" when comparing per capita healthcare costs in the US and Australia. This has already presented itself as an advancement in the way healthcare is considered. There are further benefits of a healthier population, including less time and money spent on sick-care. Healthy people are more likely, and more able, to achieve their full potential.

Reshape the economy away from work-based value

One of the greatest challenges in a departure from work is for people to find value elsewhere in life. Many people view their identities as being inextricably tied to their jobs, and life without a job is therefore a threat to one's sense of existence. This presents a shift that must be made at both a societal and personal level.

A person can only seek alternate value in life when afforded the time to do so. To this end, we need to start reducing work-for-a-living hours towards zero, which is a trend we are already seeing in Europe. This should not come at the cost of reducing wages pro rata, but rather could be complemented by UBI or additional schemes where people receive dividends for work done by automation. This transition makes even more sense when coupled with the idea of deviating from using GDP as a measure of societal growth, and instead adopting a well-being index based on universal human values like health, community, happiness, and peace.

The crux of this issue is in transitioning away from the view that work gives life meaning and life is about using work to survive, towards a view of living a life that itself is fulfilling and meaningful. This speaks directly to notions from Maslow's hierarchy of needs, where work largely addresses psychological and safety needs such as shelter, food, and financial well-being. More people should have a chance to grow beyond the most basic needs and engage in self-actualization and transcendence.

The question is largely around what would provide people with a sense of value, and the answers would differ as much as people do; self-mastery, building relationships and contributing to community growth, fostering creativity, and even engaging in the enjoyable aspects of existing jobs could all come into play.

Universal education

With a move towards a society that promotes the values of living a good life, the education system would have to evolve as well. Researchers have long argued for a more nimble education system, but universities and even most online courses currently exist for the dominant purpose of ensuring people are adequately skilled to contribute to the economy. These job factories only exacerbate the Work Crisis. In fact, the response often given by educational institutions to the challenge posed by automation is to find new ways of upskilling students, such as ensuring they are all able to code. As alluded to earlier, this is a limited and unimaginative solution to the problem we are facing.

Instead, education should be centered on helping people acknowledge the current crisis of work and automation, teach them how to derive value that is decoupled from work, and enable people to embrace progress as we transition to the new economy.

While we seldom stop to think about it, much of the suffering faced by humanity is brought about by the systemic foe that is the Work Crisis. The way we think about work has brought society far and enabled tremendous developments, but at the same time it has failed many people. Now the status quo is threatened by those very developments as we progress to an era where machines are likely to take over many job functions.

This impending paradigm shift could be a threat to the stability of our fragile system, but only if it is not fully anticipated. If we prepare for it appropriately, it could instead be the key not just to our survival, but to a better future for all.

Image Credit: mostafa meraji from Pixabay

Link:

The Global Work Crisis: Automation, the Case Against Jobs, and What to Do About It - Singularity Hub

University of Colorado students share architecture projects in the Rocky Mountains – Dezeen

A high-altitude lavatory with gabion walls and a reimagined motel feature in this VDF school show of work from University of Colorado Denver's College of Architecture and Planning.

The projects range from built to conceptual and were created by students as part of their graduate and undergraduate degrees in architecture.

While some designed interventions to improve the experience of tourists and trekkers in the Rocky Mountains, others imagined electric vehicle charging stations for Tesla, which are capable of responding to the context in which they are placed.

University: University of Colorado Denver, College of Architecture and Planning
Courses: BSc Architecture, MArch
Studios: BSc Architecture Design Studio 4 and the "Normal, Colfax" Research and Design Seminar; MArch Studio 4: Design-Build and Studio 6: Prototype Replication and Singularity

MArch Studio 6: Prototype Replication and Singularity statement:

"Through the design of a prototype for a Tesla-branded electric vehicle (EV) charging facility, this studio investigated the tensions and synergies between the repeatability required to create multiple manifestations of the charging facility and the need to remain flexible and adapt to the site while developing and maintaining brand identity.

"As a studio funded by the PCI Foundation, the students used precast concrete as the primary construction system, requiring them to address the repeatability of the precast members within a single prototype or through multiple manifestations of the prototype."

University of Colorado Denver student housing by Macy Funk, BSc Architecture

"The University of Colorado Denver campus is unique in its diverse student body, which lives in private housing spread across the metropolitan area. The cultural diversity of the student body extends to every facet of the university's identity and is foundational to its values.

"This project posits an on-campus housing solution for students that reflects their common desire to gather and learn from one another socially. The resulting building proposal is bisected and divided by a loose collection of cylindrical and ovoid cloisters."

Studio: Design Studio 4
Tutor: Kevin Hirth

Vocational School by Regan Wood, Sara Rowsell and Alli Purvis, BSc Architecture

"Sited along a dense urban corridor, the vocational school responds to Denver's legacy as an economy of largely self-contained labour and education. It consists of a simple, stripped structure that houses the life, work and training of its inhabitants.

"Students are provided with leasable space to practice their craft in close proximity to one another. The radical stance of the dense urban forms, reminiscent of similar buildings in the adjacent downtown area, is emphasised through the overlay of a rubberised roofing membrane that covers the surface of the school, landscape and other surrounding elements."

Studio: Design Studio 4
Tutor: Kevin Hirth

Motel by Justin Watson, BSc Architecture

"The American West has a long tradition of itineracy. In Colorado alone, towns have swollen and shrunk with incredible speed due to the boom and bust of gold, oil, steel, tourism and agriculture. In the twentieth century, this itineracy was epitomised by the suburban station wagon, laden with luggage and ferrying families to far-flung destinations of leisure.

"The twenty-first century has seen this model disrupted by the pervasiveness of inexpensive air travel and the consolidation of the hotel industry. Roadside motels at the base of the Rocky Mountains once bustling with business now often represent a stepping stone for those close to homelessness, providing day-to-day housing at a cut-price rate.

"This project reimagines a roadside motel on a rural site in the plains just east of Denver. It hopes to offer a place for rest and relaxation to all inhabitants of the city while creating a new legacy for an often tarnished and abandoned building typology."

Studio: Design Studio 4
Tutor: Kevin Hirth

Mobile Home by Trevor Carrasco, BSc Architecture

"This concept was produced as a part of an ongoing research project studying a decaying but well-preserved urban corridor built during the 1960s. It reimagines a common low-cost prefabricated housing model as a monument.

"Formal characteristics were derived from vernacular structures nearby and reconfigured into a new figure in the landscape to foreground issues of social and economic inequity."

Course: "Normal, Colfax" Research and Design Seminar
Tutor: Kevin Hirth

Cottonwood Cabins by the MArch Colorado Building Workshop students

"High on the Colorado Plateau, in a desert landscape characterised by juniper and ponderosa pine forests, six bunkhouses and an outdoor kitchen create a welcome refuge for trekkers at the Cottonwood Gulch base camp. The objective was to foster a sense of community while reinterpreting the local vernacular which is rooted in the surrounding landscape.

"The cabin's construction is an investigation into mass timber building techniques. The screw-laminated timber acts as a single diaphragm, achieving greater spans and cantilevers than individual pieces of lumber could alone. The cabins are elevated above the landscape to give a degree of separation from the fauna of the high desert. On the interior, bunks are suspended from the ceiling offering trekkers the agency to occupy the space how they wish."

Project website: coloradobuildingworkshop.cudenvercap.org
Studio: Studio 4: Design-Build
Tutors: Rick Sommerfeld, Will Koning and JD Signom

Longs Peak Privies by the MArch Colorado Building Workshop students

"Longs Peak in Rocky Mountain National Park is one of the most frequented of the Colorado peaks that rise above 14,000 feet. But since backcountry toilets were installed on the trail in 1983, the technology has deteriorated in the harsh climate to the point that waste now has to be removed by shovel, placed into five-gallon buckets and carried down the mountain using llamas.

"We collaborated with the National Park Service to design and construct new backcountry privies using lightweight prefabricated construction and emerging methods of waste collection to minimise the human footprint in Colorado's backcountry.

"The final design consists of prefabricated, structural gabion walls. Within the gabions, thin steel plate moment frames triangulate the lateral loads within the structure while stones, collected on-site, are used as ballast. This innovative assembly allows for rapid on-site construction and an architecture that disappears into the surrounding landscape."

Project website: coloradobuildingworkshop.cudenvercap.org
Studio: Studio 4: Design-Build
Tutors: Rick Sommerfeld and Will Koning

Electric Oasis by Kristina Bjornson and Malgosia Tomasik, MArch

"The notion of the prototype is deficient in the fact that it assumes a mass-produced scheme can be imposed on any landscape despite its individual needs. In creating a prototype for a Tesla charger station, we wanted to challenge the standardisation of architecture by encouraging unique modifications in the design process.

"We followed a kit-of-parts approach that allows the supercharger stations to adapt and react to their context, taking into account the climatic zone, urban versus rural setting, proximity to other charging stations and lot size. These criteria inform the envelope design, orientation, light filtration and overall scheme. Distinct characteristics of light infiltration were considered to develop a responsive parametric facade based on the unique orientation and climatic data of the site."

Kristina Bjornson website: kvbjornson.com
Malgosia Tomasik website: goshatomasik.com

Engaging Flows by Shane Krenn and Lorraine Ziegler

"The typology of the gas station has traditionally augmented the notions of efficiency and in-and-out culture, separating the traveller from the local. We conduct an investigation on how a new prototypical architecture could facilitate lingering. Early discussions pointed us towards the clustering of programmatic volutes to guide flows, generate in-between spaces for impermanent programmes and reframe the context to situate the traveller alongside the local.

"As a conceptual prototype for Tesla, brand recognition and repeatability across differing contexts necessitated the development of a kit of parts. A series of concrete panels and fins yield a multiplicity of programmatic volute shapes, allowing the prototype to be adapted across environments."

Shane Krenn website: shanekrenn.com/engagingflows
Lorraine Ziegler portfolio: issuu.com/lorrainezoranziegler

Virtual Design Festival's student and schools initiative offers a simple and affordable platform for student and graduate groups to present their work during the coronavirus pandemic.

The rest is here:

University of Colorado students share architecture projects in the Rocky Mountains - Dezeen

Iron Man 2020 Was Right and Tony Stark Learned the Hard Way – CBR – Comic Book Resources

Arno Stark's dark predictions for the future of the Marvel Universe were right - and it might be too late for Iron Man to do anything about it.

WARNING: The following contains spoilers for Iron Man 2020 #5, by Dan Slott, Christos Gage, Pete Woods and VC's Joe Caramagna, on sale now.

Despite very few people knowing of its plans, The Singularity is one of the most deadly threats in the history of the Marvel Universe. Knowledge of The Singularity caused Arno Stark/Iron Man 2020 to start prepping for the threat's arrival in some pretty brutal ways. This led to Tony Stark/Iron Man working to stop Arno, but it turns out Arno had a point. The Singularity has just reached Earth -- and unless the rival Iron Men can come together, it could destroy the entire world.

RELATED: Iron Man: Tony Stark Reveals His Most VERSATILE Armor Ever

Also known as The Extinction Entity, the Singularity is a massive alien creature that has spent an unknown amount of time traveling through the cosmos. It has been slowly making its way to Earth with the intention of taking control of all artificial intelligence on the planet and using it to force organic life into subservience -- at which point, the Singularity will assimilate all life into itself. Becoming aware of that possibility, the Rigellian Recorder 451 rebelled against his basic state of observation and tried to help Earth defend itself from the threat in the hopes that the world would be able to meet its potential and bring peace to the galaxy.

451's actions included constructing the Godkiller armor, a suit of robotic armor powerful enough to stave off planet-wide threats -- as well as helping Howard and Maria Stark finally conceive a child. Impacting the DNA of the child and influencing it with Kree technology to advance its intelligence, 451 personally designed the child to be capable of outthinking and devising a way to defeat the Singularity. This child was Arno Stark. However, due to Howard Stark's interfering with the child's DNA while in utero, Arno was born with a genetic fault that left him forced to use machinery to even breathe. Arno grew up to become a brilliant inventor, readying himself for the coming of the Singularity.

RELATED: Iron Man: How Pepper Potts Became A Superhero

Arno eventually even began to have dreams of the monstrous force approaching Earth, dedicating himself to saving the planet under any circumstances. This included taking over Stark Unlimited from Tony and combining it with Baintronics, using their newly combined wealth to create new technologies meant for combating the Singularity. All of his less-scrupulous actions have been done in pursuit of helping protect the world from this threat, building to him attempting to take mental control of both human life and artificial life in a bid to unite them in battle against the Singularity.

The restored Tony Stark did his best to bring down Arno for how dark his actions have become in pursuit of this eventuality, at points doubting that the Singularity is even real. This makes it all the more surprising when Tony and his allies quickly overwhelm Arno, only to find themselves confronted by the Singularity as it approaches the Earth, claiming it has come to consume all life on the planet. The gigantic being might even be capable of following through on this threat if given the chance.

Considering the galactic potential humanity has showcased -- especially in the recent century of technological innovation -- the Singularity is clearly poised to be one of the most sudden and dangerous threats to the planet yet. It might take the Stark brothers finally putting aside their differences if they want any hope of saving the world from the monstrosity before it destroys everything -- or, at the very least, it may take Tony embracing Arno's darker impulses if they want any real chance of bringing down the Singularity.

KEEP READING: One Of Iron Man's Toughest Enemies Is Really An A.I.


See more here:

Iron Man 2020 Was Right and Tony Stark Learned the Hard Way - CBR - Comic Book Resources

Scientists: What if black holes had a safe zone where little planets could live? Let’s call them ‘blanets’ – The Next Web

Every once in a while the scientific community comes up with a discovery so important that it immediately changes the course of human evolution. I'm talking, of course, about the invention of the word "blanet." I think we can all agree that it is the cutest word science has ever created.

However, if you're like me, you'll be disappointed to know that a blanet is in fact not a tiny little planet covered in soft comfy blankets (it would have had a pillow moon as well). Blanets are actually a theoretical class of planetary bodies proposed by astrophysicists that could exist within a safe zone adjacent to a black hole.

Scientists have long hypothesized that black holes could play host to planetary bodies. The big idea is that a singularity is infinitely dense, but as you get further away from it there comes a point where a black hole should logically be able to snag hold of a massive object and maintain it within its gravity without devouring it.

The same thing (sort of) happens when planets form around stars, such as our solar system did around our sun.


A new study conducted by astrophysicists in Japan attempts to shed light on how blanets could form within the confines of a black hole's gravitational boundaries. In essence, the researchers calculate that blanetary formation would work quite similarly to that of its planetary cousins.

According to the researchers' pre-print study on arXiv:

We proposed that a new class of planets, blanets (i.e., black hole planets) can be formed, provided that the standard scenario of planet formation is present in the circumnuclear disk. Here, we investigated the physical conditions of the blanet formation outside the snowline (r_snow ~ several parsecs) in more detail, especially considering the effect of the radial advection of the dust aggregates.

Planets are formed when dust eddies swirling around a star fuse into a disc, out of which a planet is spun. Around black holes the same essential process is at work; however, the end result wouldn't be anything like Earth or other bodies we're likely to recognize as a planet. Per the study:

Our results suggest that blanets could be formed around relatively low-luminosity active galactic nuclei (AGNs) during their lifetime (≲ 10^8 yr). The gaseous envelope of a blanet should be negligibly small compared with the blanet mass. Therefore, the system of blanets are extraordinarily different from the standard Earth-type planets in the exoplanet systems.

Other astrophysicists have posited hypothetical star systems merged with black holes. In these scenarios, scientists have proposed a binary singularity/star paradigm where a black hole and a star of equal mass would exist in perfect equilibrium. Under such circumstances (which are incredibly specific), hundreds of habitable planets could revolve around the black sun binary in a belt.

Both theories are based on calculations, and explain more about the mechanics and physics of black holes than they do about the actual existence of blanets, which would require some pretty specific creation parameters. But that doesn't stop us from imagining entire galaxies hidden inside the fuzzy edges of a supermassive black hole.

There's no reason why the Milky Way, and us inside of it, couldn't exist within a wispy tendril of a super-duper massive black hole's outer edges. Maybe deep down inside, Earth was a blanet all along. Then again, maybe it's just turtles all the way down.



Originally posted here:

Scientists: What if black holes had a safe zone where little planets could live? Let's call them 'blanets' - The Next Web

Russia and the Arctic Council: What Happens Next? Homeland Security Today – HSToday

Next May, Russia will succeed Iceland as chair of the Arctic Council. With the Far North heating up in terms of both climate change and geopolitical competition, Russia's chairmanship comes at a critical juncture for the region. The next few years may well determine whether we can mitigate Arctic environmental degradation and preserve the region as a zone of peace rather than conflict.

Two key tensions will define Russia's tenure at the helm of the Arctic Council. The first deals with military security: Russia's increased pace of Arctic militarization versus the Council's exclusion of hard security issues. The second tension concerns climate and energy security. The accelerating pace of polar climate change is evident, but Russia stands to gain economically from the warming Arctic. How Russian President Vladimir Putin squares this environmental circle will have major repercussions not just for the Russian Arctic, but for the whole world.

Formed in 1996, the Arctic Council is the leading intergovernmental forum for polar cooperation. Its members are the eight states with territory in the Arctic Circle: Canada, Denmark, Finland, Iceland, Norway, Russia, Sweden, and America. The Council evolved from the 1991 Arctic Environmental Protection Strategy and retained a singular focus on environmental issues like sustainable development. This singularity of focus, combined with the forum's explicit exclusion of military security issues, means the Arctic Council is ill-equipped to single-handedly govern an increasingly open, and crowded, Far North.

Read the rest of the analysis by the American Security Project here.


More here:

Russia and the Arctic Council: What Happens Next? Homeland Security Today - HSToday

The Secret to a Long Healthy Life Is in the Genes of the Oldest Humans Alive – Singularity Hub

The first time I heard nematode worms can teach us something about human longevity, I balked at the idea. How the hell can a worm with an average lifespan of only 15 days have much in common with a human who lives decades?

The answer is in their genes, especially those that encode for basic life functions, such as metabolism. Thanks to the lowly C. elegans worm, we've uncovered genes and molecular pathways, such as insulin-like growth factor 1 (IGF-1) signaling, that extend healthy longevity in yeast, flies, and mice (and maybe us). Too nerdy? Those pathways also inspired massive scientific and popular interest in metformin, hormones, intermittent fasting, and even the ketogenic diet. To restate: worms have inspired the search for our own fountain of youth.

Still, that's just one success story. How relevant, exactly, are those genes for humans? We're rather a freak of nature. Our aging process extends for years, during which we experience a slew of age-related disorders. Diabetes. Heart disease. Dementia. Surprisingly, many of these don't ever occur in worms and other animals. Something is obviously amiss.

In this month's Nature Metabolism, a global team of scientists argued that it's high time we turn from worm to human. The key to human longevity, they say, lies in the genes of centenarians. These individuals not only live over 100 years, they also rarely suffer from common age-related diseases. That is, they're healthy up to their last minute. If evolution were a scientist, then centenarians, and the rest of us, are two experimental groups in action.

Nature has already given us a genetic blueprint for healthy longevity. We just need to decode it.

"Long-lived individuals, through their very existence, have established the physiological feasibility of living beyond the ninth decade in relatively good health and ending life without a period of protracted illness," the authors wrote. "From this rare but valuable population, we can gain insight into the physiology of healthy aging and the development of new therapies to extend the human healthspan."

While it may seem obvious now, whether genes played a role in longevity was disputed for over a century. After all, rather than genes, wouldn't access to health care, socioeconomic status, diet, smoking, drinking, exercise, or many other environmental and lifestyle factors play a much larger role? Similar to height or intelligence (however the latter is assessed), the genetics of longevity is an enormously complicated and sensitive issue to study without bias.

Yet after only a few genetic studies of longevity, a trend quickly emerged.

"The natural lifespan in humans, even under optimal conditions in modern societies, varies considerably," the authors said. One study, for example, found that centenarians lived much longer than people born around the same time in the same environment. The offspring of centenarians also have lower chances of age-related diseases and exhibit a more youthful profile of metabolism and age-related inflammation than others of the same age and gender.

Together, about 25 to 35 percent of the variability in how long people live is determined by their genes, regardless of environment. In other words, rather than looking at nematode worm genes, we have a discrete population of humans who've already won the genetic lottery when it comes to aging. We just need to parse what winning means in terms of biology. Genes in hand, we could perhaps tap those biological phone lines and cut the wires leading to aging.

"Identification of the genetic factors that underlie extreme human lifespan should provide insights into the mechanisms of human longevity and disease resistance," the authors said.

Once scientists discovered that genes play a large role in aging, the next question was: which ones are they?

They turned to genome-wide association studies, or GWAS. This big data approach scans existing genomic databases for variations in DNA coding that could lead to differences in some outcome: for example, long versus short life. The differences don't even have to be in so-called coding genes (that is, genes that make proteins). They can be anywhere in the genome.

It's a powerful approach, but not that specific. Think of GWAS as rudimentary debugging software for biological code: it only looks for differences between DNA letter variants, but doesn't care which specific DNA letter swap most likely impacts the final biological program (aging, in this case).

That's a huge problem. For one, GWAS often finds dozens of single DNA letter changes, none powerful enough to change the trajectory of aging by itself. The technique highlights a village of DNA variants that together may have an effect on aging by controlling the cell's course over a lifetime, without indicating which are most important. It's also hard to say that a DNA letter change causally leads to (or protects against) aging. Finally, GWAS studies are generally performed on populations of European ancestry, which leaves out a huge chunk of humans: for example, the Japanese, who tend to produce an outsized percentage of centenarians.
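To make the idea concrete, the statistical core of a GWAS can be sketched in a few lines: at each DNA variant, count how often the variant allele appears in long-lived people versus controls, and ask whether the difference is larger than chance would allow. The allele counts below are invented for illustration, and real GWAS pipelines layer on corrections this toy version omits.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]          # row totals
    col = [a + c, b + d]          # column totals
    observed = [[a, b], [c, d]]
    # Sum of (observed - expected)^2 / expected over all four cells.
    return sum(
        (observed[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
        for i in range(2)
        for j in range(2)
    )

# Hypothetical allele counts at one variant:
# rows = centenarians vs. controls, columns = variant vs. reference allele.
counts = [[60, 40], [45, 55]]
print(round(chi_square_2x2(counts), 2))  # prints 4.51
```

A real study repeats a test like this at millions of genomic positions and applies a very stringent significance threshold to account for all those comparisons, which is part of why the surviving hits are so hard to interpret individually.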

So what needs to change?

Rather than focusing on the general population, the key is to home in on centenarians of different cultures, socioeconomic status, and upbringing. If GWAS are like fishing for a rare species in several large oceans, then the authors' point is to focus on ponds, distributed across the world, which are small but packed with those rare species.

"Extremely long-lived individuals, such as centenarians, compose only a tiny proportion (~0.01 percent to 0.02 percent) of the United States population, but their genes contain a biological blueprint for healthy aging and longevity," the authors said. They're spared from usual age-related diseases, and this extreme and extremely rare phenotype is ideal for the study of genetic variants that regulate healthspan and lifespan.

It's an idea that would usually make geneticists flinch. It's generally thought that the larger the study population, the better the result. Here, the recommendation is to narrow our focus.

And thats the point, the authors argue.

Whatever comes out of these studies will likely have a much larger impact on aging than a GWAS fishing expedition. Smaller (genomic) pond; larger (pro-youth) fish. What's more, a pro-youth gene identified in one European-based long-living population can be verified in another group of centenarians, say, the Japanese, ensuring that the gene candidates reflect something fundamental about human aging, regardless of race, culture, upbringing, and wealth.

A genomic screen of centenarians can easily be done these days on the cheap. But that's only the first step.

The next step is to validate promising anti-aging genetic differences, similar to how scientists validated such differences in nematode worms during classic longevity studies. For example, a promising pro-youth gene variant can be genetically edited into mice using CRISPR or some other tool. Scientists can then examine how the mice grow up and grow old, compared to their non-edited peers. Does the gene make these mice more resilient to dementia? What about muscle wasting? Or heart troubles? Or hair greying and obesity?

From these observations, scientists can then use an enormous selection of molecular tools to further dissect the molecular pathways underlying these pro-youth genetic changes.

The final step? Guided by centenarian genes and validated by animal models of aging, we can design powerful drugs that sever the connection between the genes and proteins that drive aging and its associated diseases. Metformin is an experimental pill that came out of aging studies in nematode worms; imagine what studies in human centenarians will yield.

"Despite enormous improvements in human health over the past century, we remain far from a situation in which living to 100 years of age in fairly good health is the norm," the authors said.

But as centenarians obviously prove, this is possible. By digging into their genes, scientists may find a path towards healthy longevity, not just for the genetically fortunate, but for all of us.

Image credit: Cristian Newman / Unsplash

Originally posted here:

The Secret to a Long Healthy Life Is in the Genes of the Oldest Humans Alive - Singularity Hub

What If the Big Bang Was Actually a Big Bounce? – WIRED

Steinhardt and company imagine a universe that expands for perhaps a trillion years, driven by the energy of an omnipresent (and hypothetical) field, whose behavior we currently attribute to dark energy. When this energy field eventually grows sparse, the cosmos starts to gently deflate. Over billions of years a contracting scale factor brings everything a bit closer, but not all the way down to a point. The dramatic change comes from the Hubble radius, which rushes in and eventually becomes microscopic. The universe's contraction recharges the energy field, which heats up the cosmos and vaporizes its atoms. A bounce ensues, and the cycle starts anew.

In the bounce model, the microscopic Hubble radius ensures smoothness and flatness. And whereas inflation blows up many initial imperfections into giant plots of multiverse real estate, slow contraction squeezes them essentially out of existence. We are left with a cosmos that has no beginning, no end, no singularity at the big bang, and no multiverse.

From Any Cosmos to Ours

One challenge for both inflation and bounce cosmologies is to show that their respective energy fields create the right universe no matter how they get started. "Our philosophy is that there should be no philosophy," Ijjas said. "You know it works when you don't have to ask under what condition it works."

She and Steinhardt criticize inflation for doing its job only in special cases, such as when its energy field forms without notable features and with little motion. Theorists have explored these situations most thoroughly, in part because they are the only examples tractable with chalkboard mathematics. In recent computer simulations, which Ijjas and Steinhardt describe in a pair of preprints posted online in June, the team stress-tested their slow-contraction model with a range of baby universes too wild for pen-and-paper analysis.

Adapting code developed by Frans Pretorius, a theoretical physicist at Princeton University who specializes in computational models of general relativity, the collaboration explored twisted and lumpy fields, fields moving in the wrong direction, even fields born with halves racing in opposing directions. In nearly every case, contraction swiftly produced a universe as boring as ours.

"You let it go and, bam! In a few cosmic moments of slow contraction it looks as smooth as silk," Steinhardt said.

Katy Clough, a cosmologist at the University of Oxford who also specializes in numerical solutions of general relativity, called the new simulations "very comprehensive." But she also noted that computational advances have only recently made this kind of analysis possible, so the full range of conditions that inflation can handle remains uncharted.

"It's been semi-covered, but it needs a lot more work," she said.

While interest in Ijjas and Steinhardt's model varies, most cosmologists agree that inflation remains the paradigm to beat. "[Slow contraction] is not an equal contender at this point," said Gregory Gabadadze, a cosmologist at New York University.

The collaboration will next flesh out the bounce itself, a more complex stage that requires novel interactions to push everything apart again. Ijjas already has one bounce theory that upgrades general relativity with a new interaction between matter and space-time, and she suspects that other mechanisms exist too. She plans to put her model on the computer soon to understand its behavior in detail.

The group hopes that after gluing the contraction and expansion stages together, they'll identify unique features of a bouncing universe that astronomers might spot.

The collaboration has not worked out every detail of a cyclic cosmos with no bang and no crunch, much less shown that we live in one. But Steinhardt now feels optimistic that the model will soon offer a viable alternative to the multiverse. "The roadblocks I was most worried about have been surpassed," he said. "I'm not kept up at night anymore."

Excerpt from:

What If the Big Bang Was Actually a Big Bounce? - WIRED

This AI Could Bring Us Computers That Can Write Their Own Software – Singularity Hub

When OpenAI first published a paper on their new language generation AI, GPT-3, the hype was slow to build. The paper indicated GPT-3, the biggest natural language AI model yet, was advanced, but it only had a few written examples of its output. Then OpenAI gave select access to a beta version of GPT-3 to see what developers would do with it, and minds were blown.

Developers playing with GPT-3 have taken to Twitter with examples of its capabilities: short stories, press releases, articles about itself, a search engine. Perhaps most surprising was the discovery that GPT-3 can write simple computer code. When web developer Sharif Shameem modified it to spit out HTML instead of natural language, the program generated code for webpage layouts from prompts like "a button that looks like a watermelon."
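The watermelon-button demo boils down to a simple loop: send a plain-English prompt to a language model, then render whatever markup it returns. A purely hypothetical sketch of that loop follows; `generate_html` is a stand-in for a real model call (the article does not detail Shameem's actual setup), stubbed here with a canned answer.

```python
# Hypothetical sketch of "prompt in, HTML out" generation.
# generate_html stands in for a GPT-3-style completion call; a real
# system would query a language model instead of this lookup table.
def generate_html(prompt: str) -> str:
    """Placeholder for a natural-language-to-HTML model."""
    canned = {
        "a button that looks like a watermelon":
            '<button style="background:#2e8b57;color:#ff6b6b;'
            'border-radius:50%;padding:1em">Watermelon</button>',
    }
    return canned.get(prompt, "<!-- model output would appear here -->")

html = generate_html("a button that looks like a watermelon")
print(html)  # an inline-styled <button> element, ready to render
```

The point of the sketch is the interface, not the implementation: the developer never writes HTML, only the description of the result.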

"I used to say that AI research seemed to have an odd blind spot towards automation of programming work, and I suspected a subconscious self-preservation bias," tweeted John Carmack, legendary computer game developer and consulting CTO at Oculus VR. "The recent, almost accidental, discovery that GPT-3 can sort of write code does generate a slight shiver."

While the discovery of GPT-3's coding skills may have been somewhat serendipitous, there is, in fact, a whole field dedicated to the development of machine learning algorithms that can code. The research has been making progress, and a new algorithm just recently took another step.

The algorithm, called machine inferred code similarity (MISIM), is the brainchild of researchers from Intel, Georgia Institute of Technology, University of Pennsylvania, and MIT. Trained on the huge amount of code already publicly available on the web, MISIM can figure out what a program is supposed to do. Then, after finding other similar programs and comparing it to them, MISIM can offer ways to make the program faster or more efficient.

It isn't the first machine learning algorithm to make recommendations or compare similarity, but according to the researchers in a new preprint paper on MISIM, it was up to 40 times more accurate at the task when it went head to head with several of its most advanced competitors.

Near term, the AI could be a useful sidekick for today's programmers. Further out, the field could open programming to anyone who can describe what they want to create in everyday language, or bring machines that write and maintain their own code.

The pursuit of computers that can code is almost as old as modern computer science itself. While there have been advances in programming automation, the recent explosion in machine learning is accelerating progress in a field called machine programming.

In a 2018 paper on the field, a group of Intel and MIT researchers wrote, "The general goal of machine programming is to remove the burden of writing correct and efficient code from a human programmer and to instead place it on a machine."

Researchers are pursuing systems that can automate the steps required to transform a person's intent (that is, what they want a piece of software to do) into a working program. They're also aiming to automate the maintenance of software over time: for instance, finding and fixing bugs, keeping programs compatible, or updating code to keep up with hardware upgrades.

That's easier said than done, of course. Writing software is as much art as it is science. It takes a lot of experience and creativity to translate human intent into the language of machines.

But as GPT-3 shows, language is actually a skill machine learning is rapidly mastering, and programming languages are not so different from English, Chinese, or Swahili. Which is why GPT-3 picking up a few coding skills as a byproduct of its natural language training is notable.

While algorithmic advances in machine learning, like GPT-3, are key to machine programming's success, they'd be useless without good training data. Luckily, there's a huge amount of publicly available code on sites like GitHub (replete with revision histories and notes), plus code snippets and comment threads on sites like Stack Overflow. Even the internet at large, with accessible webpages and code, is an abundant source of learning material for AI to gobble up.

In theory, just as GPT-3 ingests millions of example articles to learn how to write, machine programming AIs could consume millions of programs and learn to code. But how to make this work in practice is an open question. Which is where MISIM comes in.

MISIM advances machine programming a step by being able to accurately identify what a snippet of code is supposed to do. Once it's classified the code, it compares it to millions of other snippets in its database, surfaces those that are most similar, and suggests improvements to the code snippet based on those other examples.

Because MISIM classifies the code's purpose at a high level, it can find code snippets that do the same thing but are written differently (there's more than one way to solve the same problem), and even snippets in other programming languages. Simplistically, this is a bit like someone reading a New Yorker article, identifying its topic, and then finding all the other articles on that topic, whether they're in Der Spiegel or Xinhua.

Another benefit of working at that higher level of classification is that the program doesn't need the code to be compiled. That is, it doesn't have to translate it into the machine code that's executed by the computer. Since MISIM doesn't require a compiler, it can analyze code snippets as they're being written and offer similar bits of code that could be faster or more efficient. It's a little like an email autocomplete feature finishing your sentences.
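MISIM's learned representation is far richer than anything shown here, but the retrieve-and-rank step it enables can be sketched with a toy similarity measure. Everything in this sketch (the token-set Jaccard score, the tiny snippet "database") is an illustrative stand-in, not Intel's actual model:

```python
# Toy illustration of code-similarity search (NOT the real MISIM model):
# MISIM learns a semantic representation of code; here we fake one with
# lexical token sets and rank a small "database" by Jaccard overlap.
import re

def tokens(code: str) -> set:
    """Crude lexical fingerprint: the set of identifiers in a snippet."""
    return set(re.findall(r"[A-Za-z_]\w*", code))

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of the two fingerprints, in [0, 1]."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

database = {
    "sum_loop": "total = 0\nfor x in items:\n    total += x",
    "sum_builtin": "total = sum(items)",
    "greet": "print('hello, world')",
}

# A differently written summation loop should rank closest to sum_loop.
query = "s = 0\nfor x in items:\n    s = s + x"
ranked = sorted(database, key=lambda k: similarity(query, database[k]),
                reverse=True)
print(ranked[0])  # sum_loop
```

A lexical fingerprint like this fails exactly where MISIM is claimed to succeed: it cannot tell that `sum_loop` and `sum_builtin` compute the same thing, which is why a learned, semantics-level representation matters.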

Intel plans to offer MISIM to internal developers for just this purpose. The hope is it'll prove a useful sidekick, making the code-writing process faster, easier, and more effective. But there's potentially more it can do. Translation between computer languages, for example, could also be a valuable application. It could perhaps help coders update government software written in archaic languages to something more modern.

But Justin Gottschlich, director of machine programming at Intel, has an even grander vision: the full democratization of coding.

Combine MISIM (or something like it) with natural language AI, and future programmers could simply write down what they want a piece of software to do, and the computer whips up the code. That would open programming to anyone with a decent command of their native language and a desire to make something cool.

As Gottschlich told MIT Technology Review, "I would like to see 8 billion people create software in whatever way is most natural for them."

Image credit: Markus Spiske / Unsplash

More:

This AI Could Bring Us Computers That Can Write Their Own Software - Singularity Hub

Construction of the World’s Biggest Nuclear Fusion Plant Just Started in France – Singularity Hub

Fusion power promises to provide limitless green energy using cheap and abundant fuel, but it's a long-running joke that it's always 20 years away. Last week, though, construction started on the ITER fusion plant in France, which hopes to prove the commercial viability of fusion power.

While conventional nuclear power plants generate energy by splitting atoms, nuclear fusion involves smashing two atoms together. This produces dramatically more energy than the process of fission that we've already mastered, and doesn't produce long-lived radioactive waste. It also doesn't rely on radioactive elements like uranium and plutonium for fuel, instead using abundant isotopes of hydrogen called deuterium and tritium.

The only catch is that trying to contain a nuclear fusion reaction is like trying to keep the sun in a box. It's the same reaction that powers all stars, and trying to corral that kind of raw power and turn it into something we can use effectively is a challenge scientists have been struggling with for decades.

To get the fuel to fuse, it first has to be heated to 10 times the temperature of the sun's core, which creates a superhot plasma. To maintain the fusion reactions, this plasma needs to be strictly confined and isolated from other components. Fortunately, plasmas can be manipulated using magnetic fields, and so gigantic electromagnets are used to keep the plasma spinning around a donut-shaped reactor called a tokamak.

The problem is that all this heating and magnetic confinement requires colossal amounts of energy. While we've managed to get fusion reactions running on Earth, they've always used considerably more energy than they've produced. The International Thermonuclear Experimental Reactor (ITER) in France is designed to change that.
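The "more energy in than out" problem is usually summarized by the fusion gain factor Q: the ratio of fusion power produced to heating power supplied. Every experiment to date has had Q below 1; ITER's design target is Q = 10, i.e. 500 megawatts of fusion power from 50 megawatts of plasma heating. The arithmetic, with those published design figures plugged in:

```python
# Fusion gain factor: Q = fusion power out / heating power in.
# Q < 1 means the reaction consumes more energy than it produces.
def gain_factor(p_fusion_mw: float, p_heating_mw: float) -> float:
    return p_fusion_mw / p_heating_mw

print(gain_factor(500, 50))  # ITER design target: 10.0
print(gain_factor(16, 24))   # JET's 1997 record (16 MW out, 24 MW in): ~0.67
```

Note that Q counts only the heating power delivered to the plasma, not the total electricity the facility draws, so even Q = 10 is a physics milestone rather than a commercial power plant.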

The project has been a long time in the making. The idea was formulated at the tail end of the Cold War as a multinational collaboration, but design work didn't properly start until the turn of the millennium, and its parent organization wasn't launched until 2007. Last week French president Emmanuel Macron hosted a ceremony to celebrate the beginning of the assembly of the reactor.

Over the past five years, factories, universities, and national laboratories all over the world have been working to build the components for the plant, some of which weigh several hundred tons, including a magnet powerful enough to lift an aircraft carrier. It will take another five years to piece all the parts together and get the reactor ready for its first test run.

"Constructing the machine piece by piece will be like assembling a three-dimensional puzzle on an intricate timeline," director-general of ITER Bernard Bigot said in a press release. "Every aspect of project management, systems engineering, risk management, and logistics of the machine assembly must perform together with the precision of a Swiss watch."

The hope is that by 2025 the plant will be able to produce "first plasma," a test designed to make sure the reactor works; at full power, the reactor is designed to generate roughly 500 megawatts of thermal power. It will be another decade until the plant is expected to produce enough energy to be commercially viable, though. That will involve building an even larger plasma chamber to provide 10-15 times more electrical power.

While 15 years away might not seem like much of an improvement over 20, those behind the project are confident that these are the first steps towards fusion power fulfilling its promise of revolutionizing our energy systems.

It faces some competition, though. Both the UK government and a variety of startups have announced plans to pursue nuclear fusion, often aiming for much smaller and easier-to-build reactors than ITER. And despite its heavy backing from multiple nation-states, the project's long history of cost overruns and delays means it's certainly not a sure-fire winner.

But the project will soon be one of the world's largest science experiments, and winner or not, there's little doubt it will significantly push forward our understanding of fusion power. Harnessing the power of the sun on Earth may not sound like a crazy idea for much longer.

Image Credit: ITER

Go here to read the rest:

Construction of the World's Biggest Nuclear Fusion Plant Just Started in France - Singularity Hub

Rob Nail: 3 Ways To Create A Better Reality Virtually – Forbes


When crisis strikes, it is helpful to call on people looking not just today but tomorrow.

One of those thinkers is Rob Nail, former CEO of Singularity U and a successful technology entrepreneur. "I'm an engineer. I like to build stuff. And I like to solve hard problems." One of those challenges is artificial intelligence, which he spoke about at length in an interview with Rhett Power and me on our LinkedIn Live program, What's Next: Previews, Predictions and Prognostications.

Three areas of interest, or endeavors, guide Nail's thinking as he looks to the future and how AI technology will shape and complement it. The first endeavor is empathy. "I think there's always going to be things that humans want to do with other humans, right? The emotional interaction, biological love, you name it. I mean, some people will bring in technology to more of that than others do."

The second endeavor is entertainment. "We're creative beings. We want to play and think and be creative and solve problems. That's going to be fun." For Nail, an engineer at heart, problem-solving is fun.

The third endeavor is exploration. "For me, it's about exploring the big questions of humanity. Like the big questions the philosophers for thousands of years have wanted to dive deeper into. Why are we here? What else is there? What is consciousness? If we can stop fighting with each other, we could put all of our efforts on these explorations. Then we're going to go places."

Nail challenges technology investors to think about the future of companies in their portfolio. "One of my recommendations is that when you're interviewing entrepreneurs, or if you're an entrepreneur, spend some of your time fantasizing about that future."

"For example, what would happen, as it did in the case of Google and Facebook, if billions of people became customers? What does that feel like? How could that be abused and misused, and if necessary, corrected? What would I do today to avoid that malign future?" This is all in the spirit of taking a little time and doing some future scenario planning. Social responsibility must be a concern for all tech entrepreneurs.

Look for opportunity

The pandemic has disrupted the lives of everyone on this planet. For Nail, that is an opportunity. "There is an amazing opportunity to build a bridge between jobs disrupted and opportunities coming. And this is an area that I want to work on. Together with colleagues and investors, we have been thinking about it, but I'm always looking for new angles and people interested in that space."

While it may be tempting to go long and go bold, just trying one new thing can be beneficial, says Nail. Doing so gives you the ability to experiment without breaking the bank. It may be even better for small companies. "The advantage small companies have over big companies is that they can be more nimble and adaptable. They can move faster on those potential trends that are very differentiating." Of course, such companies do not have the financial resources of larger companies. Still, they may have greater impetus and ability to get things done more efficiently and effectively.

For example, Nail himself, who began his career after Stanford in robotics with Velocity 11, is engineering an AI avatar that can help him plan more carefully, test assumptions, and make better decisions. As forward-thinking as that is, Nail is also linking that avatar to a smartwatch to monitor physical health. A big idea, yes, but tied to something practical and doable.

Rob Nail, explorer-engineer, embodies the thinking that will help us shape our "new normal." It will be women and men like him who challenge us to think beyond our immediate environment to dream about, and ultimately create, the future that benefits us best. This effort fuels human ambition.

Read the original:

Rob Nail: 3 Ways To Create A Better Reality Virtually - Forbes

SentinelOne Research Identifies IoT Vulnerabilities Enabling Remote Takeover and Network Intrusion – Yahoo Finance

Barak Sternberg to Present Research Findings at DefCon after Working with Smart Device Provider HDL Automation on Vulnerability Patches

SentinelOne, the autonomous cybersecurity platform company, today announced that Barak Sternberg, SentinelLabs security researcher, has identified four unique vulnerabilities in HDL Automation smart devices. The vulnerabilities exposed thousands of HDL devices to remote control by adversaries, leading to possible network intrusion, secret exfiltration, and even ransomware attacks. SentinelOne alerted HDL to the issues via the responsible disclosure process, and the vulnerabilities have been patched. Sternberg will present the findings at DefCon on Saturday, August 8 at 9AM PST, and the complete research will be available on the SentinelLabs blog.

IoT devices are ubiquitous in the home and the workplace, connecting lights, air conditioning, and even heat-sensors to home or corporate networks. IoT devices are also potential security weak points that attackers target to exploit internal network configurations, change arbitrary controllers, and cause software or hardware damage. With enterprises adding more and more connected devices to their networks, vulnerabilities like those outlined in SentinelLabs research are concerning as every connection to the enterprise network is a potential vulnerability.

"IoT can pose a significant threat to enterprise security because, while anything you connect to your network is a potential point of ingress, not everyone considers that IoT devices contain unintended vendor-created backdoors," said Sternberg. "Many organizations don't design smart thermostats or refrigerators with security in mind. However, even mundane devices such as this can be open to attackers, making it critical to understand exactly how many devices you have connected to your network and to harden every endpoint."

SentinelLabs identified two vulnerabilities that enabled account takeover: a flaw in the "forgot your password" function and a takeover of the debug email account. Two additional vulnerabilities relating to endpoint APIs were also identified. Due to these flaws, SentinelLabs researchers were able to compromise remote servers used as proxies for configuring smart devices, and worked with HDL Automation on patch solutions. If attackers were simply interested in causing chaos, they could do physical damage by raising the temperature in a server room, disabling security cameras, or disabling sensors designed to detect leaks or voltage surges. The four newfound IoT vulnerabilities highlight the sensitivity and cost of IoT cyberattacks in impacting our digital way of life.

Further details on SentinelOnes research will be released on the SentinelLabs blog at the time of the DefCon presentation. Sternberg will present his findings at DefCon IoT Village, on Saturday, August 8th at 9 AM PST.

To learn more about how SentinelOne secures IoT devices and protects corporate networks from IoT-related intrusions, visit http://www.sentinelone.com. The SentinelOne Singularity Platform includes broad IoT capabilities through SentinelOne Ranger, which identifies every connected device on the network and prevents them from being exploited.

About SentinelOne

SentinelOne is the only cybersecurity solution encompassing AI-powered prevention, detection, response and hunting across endpoints, containers, cloud workloads, and IoT devices in a single autonomous platform. With SentinelOne, organizations gain full transparency into everything happening across the network at machine speed to defeat every attack, at every stage of the threat lifecycle. To learn more visit http://www.sentinelone.com or follow us at @SentinelOne, on LinkedIn or Facebook.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200806005536/en/

Contacts

Will Clark
fama PR for SentinelOne
P: 401-714-4192
E: S1@famapr.com

The rest is here:

SentinelOne Research Identifies IoT Vulnerabilities Enabling Remote Takeover and Network Intrusion - Yahoo Finance