
Category Archives: Singularity

What Is the Metaverse? A Beginner’s Guide to Tech’s Latest Obsession – Singularity Hub

Posted: February 15, 2022 at 5:06 am

A couple months ago, friends and business contacts started asking me for a crash course on my professional research studying virtual environments. Their interest reflects an explosion (which you've probably noticed) of noise and hype surrounding something called the metaverse.

This article is an introduction for a complete (or almost complete) beginner. There's plenty of mainstream coverage on the topic, but it often conflates concepts: virtual reality is not the metaverse (though it's related), and crypto/Web3 by itself is not the metaverse (though it's also related). Confusing, I know. Whether you're a businessperson or a bystander, this is my best effort to lay everything out.

In 99.99% of cases, provided the term is used correctly, you could replace the word "metaverse" with "internet" and the sentence would mean the same thing. So why is everyone using this fancy new word? I think analyst Doug Thompson said it very well when he noted that we're using the term as a proxy for a sense that everything is about to change.

So if the metaverse is just the internet, what about the internet is about to change? To answer that question, I've broken this article into four parts:

For those who want my definition of the metaverse up front, I'd say: the metaverse is the internet, but it's also a spatial (and often 3D), game-engine-driven collection of virtual environments. There is a lot missing from this definition (like avatars), but if you're like many people, that will already sound like a made-up buzzword salad.

Let's explore.

To understand the changes coming to life online, you have to start with the seemingly obvious way we currently access the internet: computers.

And to understand where we're headed, you have to look at the history of computer interfaces. By "computer interface," I'm referring to the way that humans interact with digital machines to get them to do what we want.

We take for granted how easy and intuitive working with computers has become in our lifetimes, but it wasn't always this way.

In the middle of the 20th century, the "programming language" engineers used to get a computer to do things involved sticking their hands into the actual machines to wire cables. (Also, most early computer programmers were women.)

Then engineers invented a new interface using punch cards, which allowed us to keep our hands to ourselves.

After punch cards came command lines (like MS-DOS), which were a breakthrough because you could interact by typing words. But the real mainstream moment for computers was the invention of the graphical user interface (GUI). This is when working with computers came to involve clicking pictures and is what most of us take for granted as just how they work today. GUIs are now used in everything from ATMs to ticketing machines, and they're the reason ordinary, non-programmer people like us can use them.

Why go through this history?

The point is that at every stage in the development just described, working with computers became easier, more accessible, and open to more people.

Clay Bavor, at Google, whose description of this history and insights I am borrowing here, puts it this way:

"Over the past several decades, every time people made computers work more like we do (every time we removed a layer of abstraction between us and them), computers became more broadly accessible, useful, and valuable to us. We, in turn, became more capable and productive."

Today, the next great computing interface is emerging; it just doesn't have a good name yet. You may have heard about concepts like augmented reality, virtual reality, mixed reality, immersive computing, or any of their two-letter acronyms.

What all of these concepts share is that they involve the use of three-dimensional space. That is a very big deal.

My colleague at Singularity University, interface designer Jody Medich, taught me just how important 3D space is for the human brain. Which makes sense: we are born into 3D space, we grow up living in 3D space, so it stands to reason that our brains and bodies are built to interact in 3D space.

So this term, "spatial computing," is becoming a commonly used way to refer to these interfaces. Be careful not to conflate this with the metaverse, since many spatial computing people don't consider themselves to be involved with, or a part of, all this metaverse nonsense. But it is related, and we'll come to that.

One other way to think of this is to consider why we don't typically see grandparents playing video game consoles. It takes time to develop the motor skills to smash buttons on a controller in the right way. Similarly, we take for granted that at some point we had to learn the motor skills to type.

We do, however, see more grandparents playing systems like the Nintendo Wii. You pick up a controller and swing your arms. Intuitive, easy, and anyone can do it. That's a spatial interface. The big deal is that more people, including many more grandparents, could become comfortable using computers.

Generally, you should also think of spatial things as having the property of moving around in space. In this sense, though not controlled using a spatial interface, a traditional video game like Fortnite is spatial (you move around), whereas a Zoom call is not. I don't want to complicate things, but spatial audio is another piece of this too.

To explain why this matters, I often use the example of Protectwise (now a Verizon company). They build tools to help cybersecurity professionals detect threats to their computer systems. Typically, a cybersecurity person lives life inside dashboards, looking at log files to sense what's happening. What if that data could be turned into a spatial environment? Now patrolling your company's computer system is like playing a video game. More people could do that, since it's more intuitive.
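
To make the idea concrete, here's a toy sketch of what "turning log data into a spatial environment" could look like. The field names and the layout rule are invented for illustration; this is not Protectwise's actual approach:

```python
# Toy sketch: lay out network hosts from log data as points in 3D space,
# so unusual traffic shows up as visually distinct "objects" to patrol.
# Field names and the layout rule are invented for illustration.
import hashlib

def host_position(ip: str):
    """Deterministically place a host on a 2D grid by hashing its IP."""
    digest = hashlib.sha256(ip.encode()).digest()
    return digest[0] / 255.0 * 100, digest[1] / 255.0 * 100  # x, y on a 100x100 plane

def to_spatial_scene(log_events):
    """Turn flat log rows into renderable 3D objects (height = traffic volume)."""
    scene = []
    for event in log_events:
        x, y = host_position(event["src_ip"])
        scene.append({
            "position": (x, y, 0.0),
            "height": event["bytes"] / 1_000_000,   # a "skyscraper" per megabyte
            "color": "red" if event["suspicious"] else "gray",
        })
    return scene

events = [
    {"src_ip": "10.0.0.5", "bytes": 2_500_000, "suspicious": False},
    {"src_ip": "10.0.0.9", "bytes": 80_000_000, "suspicious": True},
]
print(to_spatial_scene(events))
```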

Spatial computing like this is coming to life online.

Game engines may be one of the most consequential technologies of the next decade. I know that might sound a little crazy. But hear me out.

A game engine is the software tool developers use to build (and run) video games. In these programs you can upload 3D objects, apply rules for how those objects can move, add sounds, and so on. The Protectwise tool described above was built using the Unity game engine.
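
Stripped to its core, a game engine maintains a scene of objects and updates them every frame according to rules. Here's a minimal, purely illustrative sketch of that loop; real engines like Unity and Unreal add rendering, physics, audio, and tooling on top:

```python
# Minimal sketch of the core idea behind a game engine: a scene of objects,
# rules for how they move, and a loop that updates everything each frame.
from dataclasses import dataclass

@dataclass
class GameObject:
    name: str
    position: list      # [x, y, z]
    velocity: list      # units per second

def update(scene, dt):
    """Advance every object by one frame of `dt` seconds."""
    for obj in scene:
        obj.position = [p + v * dt for p, v in zip(obj.position, obj.velocity)]

scene = [GameObject("player", [0, 0, 0], [1, 0, 0]),
         GameObject("drone",  [5, 2, 0], [0, -1, 0])]

for frame in range(3):          # run three frames at 60 fps
    update(scene, dt=1 / 60)

print([(o.name, [round(p, 3) for p in o.position]) for o in scene])
```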

In business, the term "video game" is also misleading, since it suggests something recreational or non-serious. But as the world becomes more digital, game engines are powering the computing interfaces for all sorts of industries.

As Aaron Lewis nicely points out, game engines are basically eating the world: urban planning, architecture, automotive engineering firms, live music and events, filmmaking, and more have all shifted a lot of their workflows and design processes to Unreal Engine and Unity.

Take the new electric Hummer as an example: the first car to have an Unreal-Engine-based interface. The vehicle takes information from its sensors and visualizes it in 3D on the dashboard. This is spatial computing in the real world.

Another jargon-y term you might start to hear is "digital twin," the idea that a physical thing (like a Hummer) can use its sensor data to create a software copy of itself inside a computer. This lets humans interact with simulated industrial objects as if they were computers.
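
In code, a digital twin can be as simple as an object that ingests a stream of sensor readings and answers questions about the physical asset. This is a minimal sketch with made-up sensor names and thresholds, not any vendor's actual implementation:

```python
# Sketch of the digital twin concept: a software object that mirrors a physical
# thing by ingesting its sensor readings, so other software can query "the Hummer"
# as if it were just another program. Sensor names and limits are illustrative.
class DigitalTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}          # latest reading per sensor

    def ingest(self, sensor, value, timestamp):
        """Update the twin with a new reading from the physical asset."""
        self.state[sensor] = {"value": value, "at": timestamp}

    def needs_service(self):
        """Example query: flag the asset if a monitored value is out of range."""
        temp = self.state.get("battery_temp_c", {}).get("value", 0)
        return temp > 60

hummer = DigitalTwin("hummer-ev-001")
hummer.ingest("battery_temp_c", 63, "2022-02-15T05:06:00Z")
hummer.ingest("tire_pressure_psi", 42, "2022-02-15T05:06:00Z")
print(hummer.needs_service())   # True: the software copy says check the battery
```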

A famous example is Hong Kong International Airport's Terminal 1, which uses a digital twin in the Unity game engine to give facilities managers a real-time view of passenger activity and equipment that might need repairs. Think of it like the terminal's 3D selfie.

While there's more happening in the world of game engines than I can go into, there are two engines to know: Unreal and Unity. Unreal is owned by Epic Games, the publisher behind Fortnite, and Unity is a large publicly traded company. (Personally, I've only ever used Unity, since it's designed to be somewhat beginner-friendly.)

The last thing you should know about game engines is that they are going to see mind-bending levels of improvement this decade. You might not have seen the internet lose its collective mind over the demo release of the newest Unreal Engine 5, but everyone went nuts. For an approachable summary of why it's a big deal, Estella Tse gave me a really clear explanation.

And if you have 20 minutes to spare, my classmate sent me a video on the subject that blew my mind.

The takeaway is that this decade, graphics will stop looking like graphics. The limit for how high game resolution can be is falling away, and we'll see photorealistic virtual environments that look like real life. This means you should try to see past the cartoonish aesthetic today's metaverse coverage will put in your mind.

For example, imagine what that means for something like Beyond Sports, a Dutch company that uses Unity and real-time positional data taken from sports to render live events inside virtual reality as they happen. Picture this in 10 years (walking around inside a live game with your friends) and now we're approaching what we might be doing in the metaverse.

And here we can introduce the first part of a good definition of what the term metaverse is pointing toward: the metaverse is built with game engines.

If you start paying attention to it, you'll notice game engines everywhere, which is especially true for virtual environments.

Now that we've introduced spatial computing and game engines, we've arrived at the point where most mainstream coverage of the metaverse starts.

Virtual environments are the places we'll be logging into in tomorrow's internet. They are also a tricky thing to define. In many ways, Twitter and Discord (an online messaging platform) are already virtual environments where people meet and exchange messages and information.

The virtual environments I'm exploring here, however, are the spatial ones built in game engines, and there are two kinds to explore. First is real-world augmented reality (think Pokémon Go).

The other is the more traditional online or purely digital virtual environments you have to sit down at a computer (or put on a VR headset) to access, though this distinction is arbitrary and already falling away.

Pokémon Go is a helpful example of AR in the real world. It's a spatial game, built using Unity, that overlays 3D characters on the physical world. Does that mean we can consider Pokémon Go to be part of the metaverse? Well yeah, sort of, I guess, sure, who's to say (I guess I'm saying). The current definition is slippery. We're still in the "define your terms" stage, so be careful of that in the media.

In the future, it won't just be games; the entire physical world will be like a canvas we can paint with data.

To make this all happen, technology companies are scrambling to build what's referred to as the "mirrorworld" or "AR cloud." These words mean the same thing as "digital twin" from earlier. Just extend the airport terminal concept to the entire Earth, and you have a tool to build virtual stuff on top of our everyday world. If you want to go deeper on this, I wrote this article exploring its impact on society.

This is all another way of saying the internet is spilling out of our phones and computers and merging with physical reality, and it's why that Hummer might be part of the metaverse. See, for example, how Niantic (the company that publishes Pokémon Go) markets its technology to developers.

So, the metaverse won't just be random cartoon game worlds built by developers. It will also be digital replicas of very real spaces, likely the whole planet, and digital twins of industrial stuff like your car. It will eventually come to include sitting in your backyard with family members beamed in as avatars or putting on a VR headset to walk around other cities in real time.

Next, lets explore more traditional virtual worlds. Perhaps the most well-known example is a platform called Second Life, which was a huge phenomenon roughly 15 years ago and is still big today.

If you're not familiar, Second Life is a collection of virtual worlds built by users that you can explore as an avatar. Millions of users signed up, and lots of stuff happens there. It's also a good reminder that anytime you see the media claim that something is the first virtual-based whatever, that's very likely not true.

A very real economy exists in Second Life, where users buy and sell virtual goods and services, and it has its own currency: the Linden Dollar.

Today, there's a whole suite of platforms that could be thought of as successors to Second Life. One of these, Rec Room, just raised $145 million at a $3.5 billion valuation; this stuff is getting serious. Other platforms include VRChat, Altspace, Decentraland, and Somnium Space, among many others.

Another trendy thing to do in metaverse-speak is talk about how games like Fortnite and Roblox are fledgling metaverse experiences (which is true). On the surface, they come masked as games, but underneath they are spatial environments where people meet up and increasingly go to Travis Scott or Lil Nas X concerts.

The ultimate vision of the metaverse is that all of these experiences (Beyond Sports, Pokémon Go, Fortnite, Roblox) will become an interconnected network of virtual environments; in other words, the internet, but for experiencing stuff.

My own journey into all this started several years ago on a platform called Sansar, originally launched by the same company behind Second Life. Here, my friend Sam is showing me around a space built by one of their users: Fnatic (one of the biggest eSports teams in the world). I'm in a VR headset at home in San Francisco while Sam is in Los Angeles.

What struck me is that I was walking around with Sam inside the internet. Also, here was a retail e-commerce site to buy clothes online. Just like the world wide web must have struck CEOs in the mid-'90s as an oddity that may (or may not) be relevant for their business, today's CEOs are probably scratching their heads observing all this metaverse noise.

I will say that just as most companies today have a website, at some point most companies will have a 3D virtual environment of some kind.

With spatial computing, game engines, and virtual environments like these, we're closing the gap between experiences you have in real life (going to a concert, hanging out with friends, etc.) and experiences mediated by a computer online. This is what concepts like Ready Player One (a book by Ernest Cline adapted into a film by Steven Spielberg) are pointing toward.

And here we get our next helpful description of the metaverse: it's a collection of virtual environments that close the gap between online and real-life experiences.

To tie it all back together: the metaverse is the internet, but it's also a spatial (and often 3D), game-engine-driven collection of virtual environments.

And just as today's internet has absorbed vast portions of our economic activity, tomorrow's metaverse will consist of massive... oh no, please not NFTs... here it comes.

One of my favorite statistics is that Second Life still supports an annual economy worth roughly $500 million (that number has grown during the pandemic). The GDP of Second Life is larger than the economies of some real-world countries.

Fortnite, a game that doesn't cost a penny to play, still earned $9 billion in 2018 and 2019. How? They sell in-game stuff for players to express themselves in a variety of ways, including virtual clothing, dance moves, and other items. In some ways the metaverse is just a giant virtual fashion industry.

If that sounds silly or weird, just think about how someone carefully plans what clothes to wear or what profile picture to use on LinkedIn. We care about how we express ourselves in the world. If we're going to spend an increased portion of our time online, it's not so silly to expect people will want to buy expensive Gucci bags to carry around Roblox.

So where do NFTs fit into all of this? Among other uses, NFTs offer the infrastructure to let people take custody of this virtual stuff.

I hate to do this, but it's worth taking one giant step back to unpack what an NFT actually is.

The first thing to note is that NFTs run on blockchains. A blockchain is really just a fancy Excel spreadsheet that keeps track of who owns what (like what a bank does to keep track of who owns what money). Today, we rely on centralized authorities like banks to keep track of how much money is in which accounts as cash is shuffled between people and businesses. The idea behind a blockchain is that everyone just gets a copy of the same spreadsheet, and the big deal/breakthrough is that through complicated cryptography (which is where the crypto in cryptocurrency comes from) all those spreadsheets communicate and agree about which transactions are legitimate.
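
If it helps to see the "shared spreadsheet" idea as code, here's a toy ledger. It's purely illustrative (no real blockchain works this simply), but it shows the key property: each block commits to a hash of the previous one, so anyone holding a copy can detect tampering with history.

```python
# Toy ledger illustrating the "shared spreadsheet" idea: each block records
# transactions plus a hash of the previous block, so any tampering with history
# breaks the chain. Real blockchains add consensus, signatures, and networking.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify(chain):
    """Everyone holding a copy can check that no past entry was altered."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))                          # True
chain[0]["transactions"][0]["amount"] = 500   # try to rewrite history...
print(verify(chain))                          # False: the tampering is detectable
```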

No more needing central trusted authorities. No way to hack, change, or mess around with what the spreadsheet says.

NFT stands for non-fungible token. The key word is "fungible," which just means you can exchange something for an equivalent version and it will be equally valuable (bitcoin is fungible because it doesn't matter which bitcoin you have; they are all equally valuable). Non-fungible is the opposite: each item is unique. This is why we're seeing a lot of digital art being bought and sold using NFTs. NFTs use blockchains to determine who owns what.

This shift toward a decentralized way of managing life online is called Web3 (a word you'll hear more and more, and one worth getting to know).

Let's use a real example. Maybe you saw the front page of The Wall Street Journal this summer when an NFT for a digital image sold for $69 million.

Let me save you $69 million and share the link where that image's file lives online. You can save it to your computer, and now you also own the file. Right? Well, kind of, but not in the sense everyone cares about.

Most media coverage doesn't explain this, but most NFTs are not the thing itself, in this case the JPEG file. The NFT is the token associated with the metadata that points at the thing. Here's the metadata for that NFT, by the way. There is something called an on-chain NFT, but we won't go there.
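
A sketch of that chain of pointers (every identifier and URI here is made up for illustration): the on-chain record maps a token ID to an owner and a metadata URI, the metadata points at the image, and saving a copy of the JPEG changes neither mapping.

```python
# Sketch of the point above: the token on the chain is usually just an ID mapped
# to an owner plus a metadata URI; the metadata in turn points at the image file.
# All names and URIs here are invented for illustration.
nft_registry = {
    # token_id -> on-chain record
    1: {
        "owner": "0xSomeCollectorAddress",
        "metadata_uri": "ipfs://example-metadata-cid",
    }
}

off_chain_metadata = {
    # what the metadata URI resolves to (stored off-chain for most NFTs)
    "ipfs://example-metadata-cid": {
        "name": "Hypothetical digital artwork",
        "image": "ipfs://example-image-cid",   # pointer to the actual file
    }
}

def owner_of(token_id):
    """Ownership lives in the registry, not with whoever has a copy of the JPEG."""
    return nft_registry[token_id]["owner"]

print(owner_of(1))
```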

The reason NFTs and the metaverse are conflated so often is that there's an expectation that they may power these virtual economies by acting as the infrastructure mediating the exchange of information and assets online.

To be clear, this is not yet a universally agreed upon idea. Second Life, Fortnite, and plenty of other platforms have been doing just fine without NFTs. But one reason NFT/crypto is one of the noisiest places on the internet is because it's fast-paced, novel, and supported by an absurd amount of money.

I don't mean that in a negative way, but this area is the unmapped, build-it-as-we-go, unsettled frontier of life online. There are some fascinating projects at the front end of this, but whether NFTs do or don't power some dematerialized system of capitalism misses the point that NFTs are likely more important than just owning stuff.

Now we can tie together everything we've learned about the metaverse and review a recent scene from an online event to explore the way NFTs might play a role.

Here is the Metaverse Festival (yes, a real thing) that was headlined by performances from global stars like Deadmau5. It happened in the browser-based Decentraland (a spatial, game-engine-driven, virtual environment).

It's Friday night, and you head to the nightlife district of Decentraland (a plot of land which is an NFT). To get in you must be of age. You carry an identity token (which could be an NFT) to verify your eligibility for entrance.

You notice someone wearing a hoodie (an NFT) from RTFKT, a virtual apparel company recently acquired by Nike. Those are expensive, I'm told.

It's free to enter, because the event is sponsored by Kraken, which wants to be the cryptobank of the metaverse. By attending, you're issued what's called a proof of attendance protocol token, or POAP (that is an NFT).

Later, when you sign up at Kraken, they offer a discount to those who can show, with that token, they attended the event.
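
That discount check could be as simple as asking whether the wallet signing up already holds the festival's attendance token. A hypothetical sketch; the event ID, discount rate, and lookup are all invented for illustration:

```python
# Hypothetical sketch of the discount flow described above: check whether the
# wallet signing up holds the festival's attendance token (POAP) before
# applying a promo. Event ID and discount rate are invented.
FESTIVAL_POAP_EVENT_ID = 1234          # assumed event identifier

def holds_poap(wallet_poaps, event_id):
    """wallet_poaps: event IDs of attendance tokens already in the user's wallet."""
    return event_id in wallet_poaps

def signup_discount(wallet_poaps):
    return 0.20 if holds_poap(wallet_poaps, FESTIVAL_POAP_EVENT_ID) else 0.0

print(signup_discount({1234, 987}))    # 0.2 -> attended the festival
print(signup_discount({987}))          # 0.0 -> no attendance token, no discount
```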

Metaverse or no metaverse, as John Palmer explains, NFTs mean the internet becomes a place where everyone has an inventory. Earlier we mentioned the metaverse is an interconnected collection of experiences, and if that's the case, you might want to carry your single identity, history, and inventory of assets around with you. If that sounds familiar, it's basically giving users back their own cookies and personal data from big companies. I don't want to go there, but it's why a lot of people are worried about Facebook/Meta building a centralized metaverse instead of one that is open and decentralized.

I've gone way down an NFT rabbit hole here, but it's worth stitching together the spatial computing and virtual world developments with what's been happening in crypto/Web3. When I started this research seven-ish years ago, crypto people were far away and somewhere else. Today, I have to explore Web3 stuff too, since these areas are merging.

If you're still here (thank you), you might still be wondering: so what? How does any of this meaningfully improve anything about the world, or even the internet? How is any of this better than what we have today? Honestly, fair point.

Lots of people will have points of view, and I won't advocate one perspective or another. But I do have a personal anecdote.

At the start of my MBA program, the UK government had implemented a rule that no more than six people could be together indoors. For 300 zero-chill, "connect on LinkedIn" business students, that's a tough start to the year.

We even tried a full Zoom call with all of us.

My classmates will remember me as that kid who threw some weird internet house party using a platform called High Fidelity. It employs spatial audio, so you only hear the people clumped around you. It took some getting used to but was a reasonable way to get 150 of us, as basic 2D avatars, moving around in a shared online space.
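
Spatial audio of this kind boils down to scaling each speaker's volume by their distance from you and muting them entirely beyond earshot. A rough, illustrative sketch; the constants are arbitrary, and real systems also handle direction, occlusion, and reverb:

```python
# Rough sketch of the "you only hear the people clumped around you" behavior:
# per-listener audio gain that falls off with distance and cuts out beyond earshot.
import math

def audio_gain(listener_pos, speaker_pos, earshot=10.0):
    dx = speaker_pos[0] - listener_pos[0]
    dy = speaker_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dy)
    if distance >= earshot:
        return 0.0                       # too far away: inaudible
    return 1.0 - distance / earshot      # linear falloff within earshot

me = (0.0, 0.0)
print(audio_gain(me, (2.0, 0.0)))        # 0.8 -> someone standing next to me
print(audio_gain(me, (30.0, 5.0)))       # 0.0 -> a clump across the room
```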

What the metaverse enables, through dimensional space, is a way to replicate some (but not all) natural human behaviors, which you can't replicate in existing online spaces such as Slack, Discord, or Zoom. There are times you want the magical chaos of unplanned social interaction mediated by personal space.

In the professional world, I'm endlessly fascinated by this company, which runs a 60,000-person organization from inside a virtual world built in Unity. The founder of that software company tells me that when you have to actually walk your avatar from meeting to meeting, there are opportunities for chance encounters you'd never get jumping from Zoom to Zoom.

I've also used that same software to run learning programs, and likewise, there are "moving around the room" type learning activities I could never run using Zoom.

Additionally, the metaverse might grow to become a more intuitive internet. Just like spatial computing interfaces are easier to use, websites may become like walking into physical stores, something our brains and bodies might better understand.

But it goes without saying, we won't replace real-world experiences, nor should we want to. We also won't stop using today's platforms, like video conferencing. The metaverse is just the evolutionary next stage of the internet, and it offers a new suite of communication tools that will be more helpful for some things and less for others.

To conclude: this is all a long-winded way of saying the metaverse is the internet. But spatial. And built with game engines. And probably NFTs. And who knows where that takes us...

This article is republished with permission from the author's Medium page. Read the original article.

Image Credit: The first selfie the author ever took as an avatar.


String theory fuzzballs resolve famous black hole paradox – Advanced Science News

Posted: at 5:06 am

Scientists have turned to string theory to better understand black holes, proposing they can be modeled as "fuzzballs" made up of interacting strings.

Black holes are among the most mysterious objects in the universe. For more than a century, physicists have used Einstein's theory of general relativity to describe them, treating gravity as a deformation of spacetime created by the energy and momentum of particles and fields.

In this theory, a black hole is considered an infinitely dense point called a singularity, surrounded by a spherical surface known as an event horizon (or just a horizon for short), with empty space between the two. The gravity in the region beneath the horizon is so strong that no particles or waves can escape it; anything that crosses the horizon is doomed to fall into the singularity.

In this theory, black holes are characterized by only three parameters: mass, electric charge, and angular momentum, which encodes their rotational properties. However, this contradicts a quantum mechanical principle called unitarity of time evolution, which states that information must not be lost during the time development of a physical system.

Black holes are formed from huge amounts of matter consisting of an enormous number of particles that each have their own set of physical parameters. If the classical description of black holes is correct, then the information about the matter used to create them has definitely been lost, given the simplicity of that description implied by the "no hair" theorem. This is known as the black hole information loss paradox.

A group of American physicists led by Samir Mathur from Ohio State University has sought to resolve the paradox in a new paper published in the Turkish Journal of Physics. They propose replacing the convenient general relativistic picture of black holes, empty space with all the mass located at the center, with ball-shaped messes of interacting strings called fuzzballs.

These hypothetical objects have neither a horizon nor a singularity, and their sizes are similar to those of black holes of the same mass. The concept of a black hole fuzzball is based on string theory, a modern theory whose central postulate is that elementary particles, often considered to be point-like, are actually tiny vibrating strings with different oscillation modes that correspond to different types of particles. These string theory fuzzballs are characterized not by three numbers but by a huge number of parameters describing all the strings they are made up of, resolving the information loss paradox.

Black hole fuzzballs also help rectify another paradox in black hole physics. In the 1970s, Stephen Hawking analyzed the electromagnetic field in the vicinity of a horizon and predicted that black holes radiate photons in a similar way to heated bodies, such as stars or pieces of burning coal.

The mechanism of this hypothetical radiation emitted by a black hole results from the creation of photons in the vacuum outside its horizon due to quantum effects. Some of these particles cross the horizon and fall to the singularity, whereas others manage to escape the black hole's gravitational field and travel away. In principle, they can be observed in the same way we see the light emitted by the Sun and other hot bodies. This radiation is known as Hawking radiation and has yet to be detected, as its energy is so low that it falls below the sensitivity of current instruments.

The difference between Hawking radiation from black holes and electromagnetic wave emissions from heated bodies like stars, for example, is that in the latter, the photons are generated by interacting elementary particles, and not in the vacuum.

Because of this peculiarity in how black hole radiation is generated, the photons emitted during a black hole's lifespan would have an entropy that is too large for the process to be consistent with the general principles of quantum mechanics, which demand that this entropy be smaller than the entropy of the black hole.

In order to solve this paradox, physicists have considered something called the wormhole paradigm, which requires that both the photons that escape the black hole's gravitational field and the particles that fall into it be considered when accounting for entropy. If one defines the Hawking radiation as the union of these two sets of particles, then the quantum mechanical correlations between them reduce the entropy of the black hole's radiation, resolving the paradox.

But the Ohio State researchers' analysis suggests that all realizations of this paradigm lead either to non-physical, larger-than-one probabilities of certain phenomena (the aforementioned violation of unitarity) or to a violation of the original Hawking proposal that black holes radiate like heated bodies. Instead, Mathur and his colleagues found these issues don't arise if black holes are considered not as objects with a singularity and a horizon, but as string theory fuzzballs with radiation produced by the interacting strings.

While the theory might work on paper, detecting this low-energy radiation is another challenge. It has been predicted that the interaction between a black hole's gravitational waves and the fuzzball's surface would leave an imprint in their spectrum. Many scientists hope to be able to register such a subtle change with next-generation Earth-based and space-based gravitational observatories, allowing them to determine whether fuzzballs are real or not.

Reference: Bin Guo, et al., Contrasting the fuzzball and wormhole paradigms for black holes, Turkish Journal of Physics (2021), arXiv:2111.05295


Upping the Advantage: Mandiant and SentinelOne Announce Strategic Partnership – Mandiant

Posted: at 5:06 am

Resilient organizations require effective, flexible cyber security controls that work best for their specific needs. To further help our customers reduce the business impact of cyber threats and keep today's modern IT environments safe, Mandiant partners with fellow market leaders to deliver highly adaptable and intelligence-led solutions.

The addition of SentinelOne to our elite list of strategic partners marks a major milestone in Mandiant's journey as a vendor-agnostic organization. Through this strategic alliance, customers will be able to maximize their security investments with both companies through enhanced consulting services, tighter platform integrations, and new offerings built on our technologies and shared expertise. The integration of Mandiant Advantage, a multi-vendor XDR, and SentinelOne's Singularity XDR will spur new innovation and a seamless experience for customers, bridging both enterprise and mid-market users.

One of the core inaugural offerings of this strategic alliance is integrating the Mandiant Advantage and SentinelOne Singularity platforms for delivering Mandiant incident response investigations and compromise assessments. For customers of SentinelOne, Mandiant will now be able to deliver its Incident Response and Compromise Assessment services utilizing SentinelOne's Singularity platform, saving effort, time, and resources.

In the next phase of this strategic alliance, which will be announced in the second half of this year, Mandiant Managed Defense will support SentinelOne Singularity for our Managed Detection and Response (MDR) solution. The integration enables joint customers to diagnose and remediate threats faster and more accurately through enhanced visibility, automation and alert prioritization.

We'll be announcing more new joint offerings and capabilities in the future, including additional platform integrations, bi-directional data sharing, and increased functionality.

It's imperative, as the new Mandiant, that we work together with technology partners like SentinelOne that continue to support our platform and services, showcasing Mandiant at the forefront of cyber investigations and XDR capabilities. We're now able to go out as a neutral third party and work with the technology vendors with the purpose of making certain that we're providing our joint customers with the best possible outcomes. It's through this lens that we look forward to growing our elite technology partnership ecosystem.

Please visit our Mandiant partner page for more information on the SentinelOne partnership.


Stream It Or Skip It: Bigbug on Netflix, Jean-Pierre Jeunet’s Farce About Sex and the Singularity – Decider

Posted: at 5:06 am

Huzzah to Netflix for giving French surrealist-auteur Jean-Pierre Jeunet a platform for his new achievement in weirdness, Bigbug, his first feature in nine years. You may know Jeunet for his effervescent melancholy-pop Oscar nominee Amélie, his sellout movie Alien: Resurrection, or possibly even his 1991 debut and sub-cult classic Delicatessen, a movie that blends sweetness and grotesquerie like no other. Like the latter, Bigbug is a dystopian satire, albeit for the smarthome era, and it's chock-full of the cuckoo comedy and uber-styled visual sensibility that are his hallmarks. Sounds good on paper, doesn't it? But in execution, well, that might be another story.

The Gist: It's 2045. There are finally flying cars and robot maids and flooded Netherlands and floating drones that observe people at the same time they spew advertisements. Gotta take the good with the bad, I guess, as ever, right? Alice (Elsa Zylberstein) lives in a French suburb where all the houses look alike, but at least they look fab as f, mid-century modern for the mid-21st-century, because everything old is new again. She's a cultural anomaly who still collects those things, you know, whaddayacallem, books, and loves to write words longhand with ink and paper like they did oh so long ago. Her potential beau, Max (Stephane De Groot), is aroused by such ornate calligraphy, although he could be faking it, because his primary objective is to get up Alice's dress. She either buys into his faux-intellectualism or ignores it, but either way, she too wouldn't mind getting some. They're middle-aged divorcees, they're horny, they're consenting, so hey, go for it, although they're goofy enough that, you know, maybe we don't need to watch?

Anyway, Max is at Alice's house with his teenage son Leo (Helie Thonnat) in tow, and the kid's too blasé to be mortified by the rampant libidos of his elders. Not that anyone's ever alone in this reality, because Nestor is the Alexa-like invisible presence in people's homes, the entity with a voice who you ask to unlock doors and turn on lights. There's a crew in every home now: In Alice's, Monique (Claude Perron) is the humanoid maid who does laundry, preps meals, and uses her sensors to detect the state of Max's sincerity (3%) and erection (100%) via digital readouts that only she, and we, can see. Einstein is a decapitated head-contraption that spiders around like Google on multiple legs; there's a cleanup droid that vacuums and spritzes and looks like it fell off MST3K's Satellite of Love; and there's a cute little one that entertained Alice's daughter when she was young.

The scheduled dropoff of said daughter, Nina (Marysold Fertard), results in awkward interactions among Max, Alice, her ex-husband Victor (Youssef Hadji), and his fiancee/secretary Jennifer (Claire Chust). Dropping by to spice the brew is Alice's neighbor Francoise (Isabelle Nanty), hoping to retrieve the eighth clone of her accident-prone dog, and her sports bot Greg (Alban Lenoir), who we soon learn is actually her sex bot. And then the doors won't unlock and the air conditioning won't turn on, because apparently the singularity is happening, and Monique and co., disconnected from the AI insurrection, want to keep their owners safe. So they're all stuck with each other, watching TV, which shows the French version of Ow My Balls (it's called Homo Ridiculous) or a debate between a human and the prevailing ruler of this reality, Yonyx (Francois Levantal), of whom there are many, all with terrifying yellow-green eyes, teeth out to here, and stomping around like RoboCop. The indoor automatons aren't affiliated with Yonyx; in fact, they'd rather be human, so they try to emulate their owners by reading books and such, although they never shtoink each other, which is what said owners are frequently trying to do.

What Movies Will It Remind You Of?: Imagine Delicatessen crossed with Idiocracy and The Jetsons, and you're in the ballpark.

Performance Worth Watching: In a cast of characters who show little depth or personality, one of the robots stands out: Playing Monique, Perron is the only one who stirs up much in the way of effective comedy.

Memorable Dialogue: Leo spits some slang: "Mecas have taken over the dacha. We've been doofussed."

Sex and Skin: Sexy lingerie; spanking; a few doinking iterations; lady toplessness.

Our Take: As always, Jeunet hits the sweet spot between artsy weirdness and whimsical charm. But as its increasingly sweaty protagonists try to outwit their charming robot captors so they can apparently escape to the freedom of AI totalitarianism (I think that's one of the jokey ironies here), Bigbug ends up being a mishmash of broad comedy and scattered ideas. The human characters are shallow nincompoops, obsessed with maintaining life's conveniences (climate control, vacation) or satisfying their concupiscent urges. The house bots want to be like them, which would absolutely make them stupider, and I think that's one of the other jokey ironies here.

That dichotomy alone would be a concept worth honing into a sharp internet-of-things satire spiced with sex comedy, especially considering how Jeunet siphons the narrative into a single location. But the filmmaker's ambition spurts through cracks in the foundation, indulging political commentary, bureaucracy jokes, and the ineffectual overarching plot about humanity's inevitable enslavement; despite the characters' rampant urges to scrump, there's not much in the way of dramatic tension or release, which is one of the movie's unintentional jokey ironies. It's ultimately too broad and silly, the comedy landing here and there (I liked the throwaway one-liner about certain cheeses that have been banned for being non-nutritionally correct), but most of the gags are dragged out and toothless. It's visually inspired, a pleasure to look at, but tonally, it rarely rises above grating and repetitive farce.

Our Call: Bigbug is a disappointing mishmash of dopey humor masking smart ideas and a misfire for Jeunet. SKIP IT.

John Serba is a freelance writer and film critic based in Grand Rapids, Michigan. Read more of his work at johnserbaatlarge.com.

Stream Bigbug on Netflix


This Week’s Awesome Tech Stories From Around the Web (Through February 12) – Singularity Hub

Posted: at 5:06 am

ARTIFICIAL INTELLIGENCE

Computer Scientists Prove Why Bigger Neural Networks Do Better
Mordechai Rorvig | Quanta
"Fundamental mathematical results had suggested that networks should only need to be so big, but modern neural networks are commonly scaled up far beyond that predicted requirement, a situation known as overparameterization. In a [new paper], Sébastien Bubeck of Microsoft Research and Mark Sellke of Stanford University provided a new explanation for the mystery behind scaling's success."

Atomically Thin Materials Significantly Shrink Qubits
Dexter Johnson | IEEE Spectrum
"Now researchers at MIT have been able to both reduce the size of the qubits and done so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. 'We are addressing both qubit miniaturization and quality,' said William Oliver, the director for the Center for Quantum Engineering at MIT."

Great, DARPA Just Flew a Black Hawk Helicopter With Nobody in It
Mack DeGuerin | Gizmodo
"DARPA describes its Aircrew Labor In-Cockpit Automation System (ALIAS) as a 'tailorable, drop-in, removable kit,' meant to add sophisticated automation to pre-built aircraft at a fraction of the cost of upgrading individual models with new, advanced avionics and software. The agency imagines this system will one day reduce pilot workloads and ultimately improve aircraft safety."

European Fusion Reactor Sets Record for Sustained Energy
Daniel Clery | Science
"In experiments culminating the 40-year run of the Joint European Torus (JET), the world's largest fusion reactor, researchers announced today they have smashed the record for producing controlled fusion energy. On 21 December 2021, the UK-based JET heated a gas of hydrogen isotopes to 150 million degrees Celsius and held it steady for 5 seconds while nuclei fused together, releasing 59 megajoules (MJ) of energy, roughly twice the kinetic energy of a fully laden semitrailer truck traveling at 160 kilometers per hour."

Public Blockchains Are the New National Economies of the Metaverse
Tascha Che | Wired
"The trustless and programmable nature of public blockchains have made it possible to implement new fiscal and monetary policy tools in the blockchain economies, which in many cases have advantages over the traditional economic policy tools of national governments. In addition, the proof-of-stake mechanism adopted by second-generation public blockchains introduces a de facto universal basic capital income for their network citizens."

Destinus Plans to Fly a Hydrogen-Powered, Hypersonic Cargo Craft With $29M Seed Round
Devin Coldewey | TechCrunch
"While the craft is far from completion, let alone testing and certification, a $29 million seed round should help things along. The stated plan is to build a hypersonic vehicle (i.e. multiples of the speed of sound) powered by liquid hydrogen and with only water as exhaust, which would enable point-to-point delivery nearly anywhere on the planet. Ambitious, yes. Expensive, yes. Difficult to engineer, also yes."

You're (Maybe) Gonna Need a Patent for That Woolly Mammoth
Matt Reynolds | Wired
"Colossal, a startup cofounded by the Harvard geneticist George Church, wants to resurrect a woolly mammoth within the next six years. Its CEO, Ben Lamm, is confident that a mammoth is patentable. But bringing back a species that last stomped the Earth 4,000 years ago raises all kinds of questions that scientists warn we're not fully prepared for. Can someone really patent a mammoth? And if they can, should they?"

Moore's Not Enough: 4 New Laws of Computing
Adenekan Dedeke | IEEE Spectrum
"[Metcalfe's Law and Moore's Law] both provide tremendous power to explain and predict behaviors of some seemingly incomprehensible systems and phenomena in the sometimes inscrutable information-technology world. I contend, moreover, that there are still other regularities in the field of computing that could also be formulated in a fashion similar to that of Moore's and Metcalfe's relationships. I would like to propose four such laws."

Boston's Federal Reserve Says It Has Solved Technical Challenges of a Digital Dollar
S. Dent | Engadget
"The US Federal Reserve is continuing its research into a digital dollar and has unveiled a technical specification for how it might work, The Washington Post has reported. Researchers designed a system that can handle more than 1.7 million transactions a second and settle payments in under two seconds, while operating 24/7 without service outages, according to a new paper on the subject."

Why This Could Be a Critical Year for Electric Cars
Jack Ewing and Neal E. Boudette | The New York Times
"Battery-powered cars are having a breakthrough moment and will enter the mainstream this year as automakers begin selling electric versions of one of Americans' favorite vehicle types: pickup trucks. Their arrival represents the biggest upheaval in the auto industry since Henry Ford introduced the Model T in 1908 and could have far-reaching consequences for factory workers, businesses and the environment."

Mega Comet Arriving From the Oort Cloud Is 85 Miles Wide
George Dvorsky | Gizmodo
"These latest observations confirm that Comet Bernardinelli-Bernstein is the largest Oort Cloud object ever detected, as it's nearly twice as big as comet Hale-Bopp (observed in 1997), the nucleus of which measured between 25 and 50 miles (40 and 80 km) wide. It's also bigger than Comet Sarabat (observed in 1729), which had a nucleus measuring somewhere around 62 miles (100 km) in diameter."

Image Credit: Oleksii Drozdov / Unsplash


Drones as Big as 747s Will Fly Cargo Around the World With Low Emissions, Startup Says – Singularity Hub

Posted: at 5:06 am

The global supply chain is currently experiencing all kinds of glitches, from material shortages to labor shortages and beyond. Moving goods from point A to point B has become more expensive, and there's no quick fix in sight. But a San Diego-based startup plans to meet some of the demand for air freight with an innovative solution: autonomous cargo drones as big as a Boeing 747. And customers are jumping on board.

Natilus, founded in 2016, this week announced $6 billion worth of pre-orders for over 440 of its aircraft. The company says its blended wing design can fit 60 percent more cargo than existing freight aircraft while cutting costs by 60 percent and with 50 percent less carbon emissions.

Aleksey Matyushev, the company's CEO, pointed out in a press release that moving freight by sea is 13 times cheaper than moving it by air, but takes 50 times as long. "Natilus intends to revolutionize the transport industry by providing the timeliness of air freight at an affordable cost reduction of 60 percent, making air cargo transportation substantially more competitive," he said.

How will they do this? Much of the savings will reportedly come from the aircraft's design.

The passenger planes we're used to riding in, as well as many cargo planes, have a "tube and wing" design, which is what it sounds like: passengers or cargo ride in a hollow tube (called the fuselage), and the attached wings are what generate lift and allow the plane to fly (sounds pretty precarious put that way, doesn't it?).

A blended wing body design, on the other hand, merges the wings and the fuselage, meaning the body is much wider and flatter than that of traditional passenger planes. Until now, blended wing body aircraft have primarily been used for military purposes, but aircraft manufacturers and NASA are starting to look into expanding the design's uses and coming up with new prototypes.

Part of why passenger aircraft use the tube and wing design is because it's easier to pressurize said tube, and while sitting in a narrow seat packed in next to 200-some other narrow seats isn't the most comfortable thing ever, it works; we board the plane, put our stuff above our heads or at our feet, sit stiffly for several hours, then wait impatiently to file out upon landing.

The tube and wings configuration doesn't make as much sense for cargo, though; it's mostly moved in rectangular pallets, and if you picture a sort of aircraft Tetris, packing 3D rectangles into a cylinder isn't a very efficient use of space. With a blended wing body design, the interior of the aircraft can have a rectangular cross section, utilizing far more of the available volume. From the outside, it looks a lot like an airborne version of a manta ray.
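
A back-of-the-envelope way to see the point: the largest rectangular box you can slide down a circular fuselage cross section uses only about 64 percent of that circle's area, while a rectangular cross section is usable edge to edge. This is purely illustrative geometry, not Natilus's numbers:

```python
# The largest rectangle you can inscribe in a circle is a square of area 2*r^2,
# versus the circle's full area of pi*r^2, so rectangular cargo can only use
# about 64% of a circular cross section.
import math

radius = 1.0                               # arbitrary fuselage radius
circle_area = math.pi * radius ** 2
inscribed_square_area = 2 * radius ** 2

print(f"usable fraction of a circular cross section: "
      f"{inscribed_square_area / circle_area:.0%}")   # ~64%
```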

Natilus says its aircraft will improve interior space utilization even more, as they're designed around cargo and have a diamond-shaped bay that rotates the cargo area 45 degrees. "From a freight perspective, it makes a lot of sense," said Matyushev. "It has 50 percent more volume internally, so it doubles the amount of revenue cargo per flight. With conventional designs you start to run out of volume before you maximize the takeoff weight of the airplane."

The plane's exterior shape is also more aerodynamic, which will allow it to go faster while burning less fuel.

Though the aircraft will be made for autonomous flight, they'll initially operate with oversight from remote pilots, until regulations allow for full autonomy. They'll be able to use existing ground infrastructure and standard air cargo containers. The company plans to make four different aircraft: a 3.8-ton payload short-haul plane, a 60-ton payload medium/long-range plane, and 100- and 130-ton payload long-range planes.

Some of the customers that recently pre-ordered aircraft from Natilus include Kenyan cargo airline Astral Aviation, drone services provider Volatus Aerospace (slated in the first production slot for a 3.8-ton plane), and freight forwarding company Flexport.

To date, Natilus has completed two wind tunnel tests to validate its aircraft, and is planning for the first flight of a full-scale prototype to take place in 2023.

Image Credit: Volatus Aerospace


Startup Will Drill 12 Miles Into Earth’s Crust to Tap the Boundless Energy Below – Singularity Hub

Posted: at 5:06 am

When it comes to renewable energy, almost all the love goes to solar and wind. Which isn't surprising, given the tear both technologies have been on of late.

But solar and wind have their drawbacks. Neither is continuous, reliable, or universally practical. That means energy storage and transportation are crucial. And while there are promising trends on both fronts, to date, batteries are still expensive and resource-intensive to make, maintain, and replace, and new infrastructure takes time to build. Not to mention the fact that wind and solar take up a lot of space to generate energy.

So, what if there was a nearly limitless source of energy available anywhere on the planet? What if the only thing preventing us from tapping said energy source was technology? And what if that tech drew on the expertise of a century-old, trillion-dollar industry, and could readily slot into much of the infrastructure already built for that industry?

The answer to these questions is and has always been directly beneath our feet. The core of our planet is hotter than the sun; all we have to do is drill deep enough to liberate some of its heat. At least, that's the dream Quaise Energy is pitching, and the startup, spun out of MIT in 2018, recently secured $40 million in new funding to go after it.

The big idea? Swap out traditional drill bits for millimeter-wave beams of light to vaporize rock instead of crushing it. These contactless drills could bore holes as deep as 12 miles into the Earth's crust, where the rock reaches temperatures upwards of 700 degrees Fahrenheit. Water goes down the hole, is converted to supercritical steam, and shoots back to the surface to drive standard turbines and produce electricity to feed the grid.

"Today we have an access problem," Quaise cofounder and CEO Carlos Araque told IEEE Spectrum in 2020. "The promise is that, if we could drill 10 to 20 km deep, we'd basically have access to an infinite source of energy."

It's a compelling elevator pitch, and Quaise is built on promising foundations with an MIT origin story. But the startup still needs to prove its experimental tech works outside the lab and then solve the kinds of problems that only become relevant when dealing with scalding rock under immense pressure. More on that in a moment. First, a little context.

Geothermal energy doesn't get as much attention because it's dependent on special conditions. Iceland, for example, extensively used geothermal (alongside hydropower) to make its grid nearly 100 percent renewable. And, yes, the Icelandverse is special. It's a majestic island shaped by volcanoes and glaciers, and when fire and water combine near the surface, geothermal's a no-brainer.

Conventional geothermal plants tap into steam rising up through cracks and fissures in the rock to heat buildings and drive turbines that produce electricity. While Earth's underworld is universally fiery, the conditions for traditional geothermal aren't so broadly distributed.

That's why next-generation geothermal energy is all about making the right conditions instead of relying on them to occur naturally. Enhanced geothermal systems (EGS) drill into hot rock with fewer naturally occurring cracks, fissures, and water. They then split the rock with high-pressure fluids (a technology borrowed from the oil and gas industry, where it's known as fracking) and pump in water to liberate the heat.

Of course, fracking has some baggage, but EGS proponents stress there are notable differences. The fluids used for EGS are safer and pose less risk of polluting groundwater. The chances of inducing seismic activity are also lower, as EGS makes use of smaller fractures in the rock and uses less pressure compared to the recovery of oil and gas in shale.

The kind of geothermal system Quaise is proposing is at the far horizon of EGS, according to this excellent Vox explainer.

By drilling deeper, the startup will hit rock at higher temperatures; so hot, in fact, that it can produce supercritical water, a fourth phase of water that's neither liquid nor gas and has a few special properties. Supercritical water, for example, holds 4 to 10 times as much energy per unit mass and doubles its conversion to electricity.

"Not only do you get more energy out of your well," Eric Ingersoll, a clean energy analyst at LucidCatalyst, told Vox, "you get more electricity out of that energy."
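
Taken together, those two claims imply a simple bit of arithmetic: roughly 8 to 20 times as much electricity per unit mass of working fluid as a conventional geothermal well. This is just the multiplication implied by the figures above, not a number from Quaise:

```python
# 4-10x the energy per unit mass, times roughly 2x the conversion efficiency,
# works out to roughly 8-20x the electricity per kilogram of working fluid.
energy_factor_low, energy_factor_high = 4, 10
conversion_factor = 2

print(energy_factor_low * conversion_factor,    # 8
      energy_factor_high * conversion_factor)   # 20
```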

It's worth noting that Quaise is targeting temperature, not depth, according to Araque. In some parts of the world, like Iceland, rock hits the requisite temperatures three to five miles below the surface; but elsewhere those temps are only found 12 miles underground.

"We want geothermal to be viable no matter where you are in the world, and for that you need to go deeper," Araque told the Power Podcast. "20 kilometers, 12 miles, will pretty much get you 95 percent of the population of the world."

So, Quaise's first and most crucial challenge is drilling deep enough. And that won't be an easy task.

The deeper you drill, the hotter the rock; this is a blessing and a curse. Using conventional technology, there's a point past which it just isn't practical to go deeper, according to Araque. One problem is your electronics melt. Worse, drill bits get torn up by the temperatures and hard rock. To replace a drill bit at those depths, you might spend a week to pull it up, a couple hours to replace it, and another week to push it back down.

To solve the problem, the idea, and it's not a new one, is to go contactless.

MIT's Paul Woskov, whose research is the bedrock of Quaise's approach, spent a decade proving out the physics involved. The system will use a beam of millimeter-wave energy (an electromagnetic frequency in the territory of microwaves) generated by a gyrotron on the surface. The microwave beam shoots down the drill hole alongside a gas (nitrogen, air, or argon) and evaporates layers of rock deep in the Earth. Then the gas binds and carries the vaporized rock back up to the surface like a plume of volcanic ash.

The team is using investments and grants (a total of $63 million) to scale up to blasting first rock in the field in the western US in 2024. From there, they'll increase the depth until they hit their targets. Araque says most of the rest of it, from creating permeability in the rock to setting up geothermal plants harvesting the Earth's heat, is already proven.

The bottleneck is the drilling technology, according to Araque. If you can crack that, everything else falls into place.

What might that look like? Quaise's long-term plan is to approach power plants running on fossil fuels and offer to drill geothermal fields customized to match their existing equipment. The fields sit on a footprint 100 to 1,000 times less than what's needed for solar or wind. Once hooked up, it's basically business as usual: turbines create electricity and feed it to the grid (and our homes, cars, and businesses) via existing infrastructure.

"We're not happy with megawatts," Araque said. "We need terawatts from the grid around the world. That's energy transition."

It's a big vision, and still an early one. We'll likely have to wait years to see if it pays off. In the meantime, other companies will chip away at the problem, extending geothermal energy's reach with shallower projects and closed-loop systems (no fracking required).

Regardless, it seems more than a worthy project. Earth's geological engine isn't scheduled to die for a billion years, and its energy is available from any point on the surface, as long as we can dig deep enough. As the technology advances, geothermal could become an abundant and reliable addition to the energy mix.

Image Credit: NASA/Goddard Media Studios

SentinelOne Announces Zscaler Integration, Simplifying XDR and Zero Trust Adoption – Business Wire

Posted: at 5:06 am

MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)--SentinelOne (NYSE: S), an autonomous cybersecurity platform company, today announced a new integration with Zscaler™ to simplify enterprise security, enabling enhanced end-to-end visibility, automated response, and conditional access. Together, SentinelOne and Zscaler provide advanced threat detection and remediation across networks, endpoints, and cloud applications, streamlining the adoption and enforcement of zero trust policies to keep users, devices, and applications secure.

The joint solution allows Singularity XDR to ingest Zscaler data, providing end-to-end visibility. The integrated solution empowers SOC teams with contextualized data on abnormal activity, accelerating investigation and threat triage. Analysts benefit from automatic and manual response actions from Singularity XDR, limiting an attacker's ability to infiltrate and launch an attack.

"Todays security challenges require defense in depth, John McLeod, CISO, NOV. SentinelOne and Zscaler are key components in our security stack that help us advance our overall security posture. Together, Singularity XDR and Zscaler automate the triage and investigation functions in the SOC, enabling a small team to respond against threats with speed and accuracy."

The joint solution also strengthens zero trust frameworks with automated policy orchestration. Coordinated user access control via the Zscaler Zero Trust Exchange™ provides secure conditional access to private and SaaS applications with device posture details from SentinelOne.
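
Conceptually, the conditional-access piece boils down to a policy decision that weighs identity together with endpoint posture. Below is a minimal, purely hypothetical sketch of that logic; the function and field names are invented for illustration and do not correspond to any real SentinelOne or Zscaler API.

```python
# Hypothetical sketch of zero-trust conditional access driven by device posture.
# Names and fields are illustrative only; they are not real SentinelOne/Zscaler APIs.

from dataclasses import dataclass

@dataclass
class DevicePosture:
    agent_healthy: bool     # endpoint agent installed and reporting
    active_threats: int     # unresolved detections on the device
    disk_encrypted: bool

def access_decision(user_authenticated: bool, posture: DevicePosture, app_sensitivity: str) -> str:
    """Return 'allow', 'isolate', or 'deny' for a private or SaaS app request."""
    if not user_authenticated:
        return "deny"
    if posture.active_threats > 0:
        # Possibly compromised device: block access and flag for automated response.
        return "isolate"
    if app_sensitivity == "high" and not (posture.agent_healthy and posture.disk_encrypted):
        # Sensitive apps require a fully healthy, encrypted endpoint.
        return "deny"
    return "allow"

print(access_decision(True, DevicePosture(True, 0, True), "high"))   # allow
print(access_decision(True, DevicePosture(True, 2, True), "high"))   # isolate
```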

"In a world where hybrid work is becoming the norm, removing the silos of security solutions is key for implementing Zero Trust from endpoint to apps," says Amit Raikar, VP Technology Alliances and Business Development at Zscaler. "Driven by customer demand and feedback, our new cloud-to-cloud integration with SentinelOne delivers a cross-platform threat hunting view and API automation for faster, more effective response. Combined with prior device posture conditional access integration, we further strengthen our zero trust solution for empowering work from anywhere securely."

"This integration is the result of a symbiotic relationship between Zscaler and SentinelOne," said Yonni Shelmerdine, VP Product Management, SentinelOne. "With the Zscaler Zero Trust Exchange feeding data into our threat analytics cloud, and our XDR platform triggering response actions in Zscaler, decisions are performed with additional context and speed. The joint solution provides users with zero trust bolstered by XDR to help keep the world a safer place."

With attack vectors multiplying as a result of hybrid work models and BYOD programs, enterprises are struggling to secure growing numbers of vulnerable assets both inside and outside the traditional network perimeter. Security teams still cope with tools and data that live in silos, preventing proper context and understanding. The SentinelOne-Zscaler integration simplifies enterprise security across the entire network, from the endpoint to the cloud.

For more information on the SentinelOne and Zscaler integration, visit https://s1.ai/zscaler-sb

About SentinelOne

SentinelOne's cybersecurity solution encompasses AI-powered prevention, detection, response, and hunting across endpoints, containers, cloud workloads, and IoT devices in a single autonomous XDR platform.

Sony’s Racing AI Just Beat the World’s Best Gran Turismo Drivers – Singularity Hub

Posted: at 5:06 am

Over the last several years, AIs have learned to best humans in progressively more complicated games, from board games like chess and Go to computer games like Pac-Man and StarCraft II (and let's not forget poker!). Now an AI created by Sony has overtaken humans in another popular and complex game: Gran Turismo. Besides being a feat in itself, the accomplishment could have real-world implications for training self-driving cars.

For those unfamiliar, Gran Turismo is a series of racing simulation games made for Sony's PlayStation consoles. The game's creators aimed to bring as much real-world accuracy to its cars and driving as possible, from employing principles of physics to using actual recordings of cars' engines. "The realism of Gran Turismo comes from the detail that we put into the game," said Charles Ferreira, an engineer at Polyphony Digital, the creative studio behind Gran Turismo. "All the details about the engine, the tires, the suspension, the tracks, the car model..."

Sony launched its AI division in April 2020 to do research in AI and robotics as they relate to entertainment. The division partnered with Polyphony Digital and the makers of PlayStation to develop Gran Turismo Sophy (GT Sophy), the AI that ended up beating the game's best human players. A paper detailing how the system was trained and how its technique could be applied to real-world driving was published yesterday in Nature.

Putting the pedal to the metal is one skill you need to be good at Gran Turismo (or at racing cars in real life), but speed alone doesn't separate champs from runners-up. Strategy and etiquette are important too, from knowing when to pass another car versus waiting it out, to avoiding collisions while staying as close to other vehicles as possible, to judging where to go wide or cut in. As the paper's authors put it, drivers "must execute complex tactical maneuvers to pass or block opponents while operating their vehicles at their traction limits."

So how did an AI manage to tie these different skills together in a way that led to a winning streak?

GT Sophy was trained using deep reinforcement learning, a subfield of machine learning in which an AI system, or agent, receives rewards for taking certain actions and is penalized for others (similar to the way humans learn through trial and error), with the goal of maximizing its rewards.

GT Sophy's creators focused on three areas in training the agent: car control (including understanding car dynamics and racing lines), racing tactics (making quick decisions around actions like slipstream passing, crossover passes, or blocking), and racing etiquette (following sportsmanship rules like avoiding at-fault collisions and respecting opponents' driving lines).

Sony AI's engineers had to walk a fine line when creating GT Sophy's reward function; the AI had to be aggressive without being reckless, so it received rewards for fast lap times and passing other cars while being penalized for cutting corners, colliding with a wall or another car, or skidding.
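
To make that balancing act concrete, here is a minimal sketch of what a reward function shaped along those lines might look like. The telemetry fields and the weights are invented for illustration; the actual terms and coefficients Sony AI used are detailed in the Nature paper.

```python
# Illustrative per-timestep reward along the lines described above.
# Weights and telemetry fields are made up for this sketch, not Sony AI's values.

def step_reward(progress_m, cars_passed, hit_wall, hit_car, cut_corner, skidding):
    reward = 0.0
    reward += 0.01 * progress_m      # course progress stands in for fast lap times
    reward += 1.0 * cars_passed      # bonus for overtaking
    if hit_wall:
        reward -= 5.0                # collisions are heavily penalized...
    if hit_car:
        reward -= 10.0               # ...especially contact with other cars
    if cut_corner:
        reward -= 2.0                # no shortcuts
    if skidding:
        reward -= 0.5                # losing traction wastes time and control
    return reward

# One simulated timestep: 3 meters of progress and one clean overtake.
print(step_reward(progress_m=3.0, cars_passed=1, hit_wall=False,
                  hit_car=False, cut_corner=False, skidding=False))   # ~1.03
```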

Researchers fed the system data from previous Gran Turismo games, then set it loose to play, randomizing factors like starting speed, track position, and other players' skill levels for each run. GT Sophy was reportedly able to get around the track with just a few hours of training, though it took 45,000 total training hours for the AI to become a champ and beat the best human players.
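
That randomization step is what keeps an agent from memorizing a single setup. Here's a short sketch of the idea, with ranges and categories invented for illustration rather than taken from the paper.

```python
# Sketch of the kind of episode randomization described above: each training run
# starts from different conditions. The ranges below are invented for illustration.

import random

def random_episode_config():
    return {
        "start_speed_kph": random.uniform(0, 200),        # rolling starts at varied speeds
        "track_position_m": random.uniform(0, 5000),      # drop in anywhere on the course
        "opponent_skill": random.choice(["novice", "intermediate", "expert"]),
    }

for _ in range(3):
    print(random_episode_config())
```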

"Outracing human drivers so skillfully in a head-to-head competition represents a landmark achievement for AI," said Stanford automotive professor J. Christian Gerdes, who was not involved in the research, in a Nature editorial published with Sony AI's paper. "GT Sophy's success on the track suggests that neural networks might one day have a larger role in the software of automated vehicles than they do today."

Though GT Sophy's racing abilities wouldn't necessarily transfer well to real cars, particularly on regular roads or highways rather than a closed track, the system's success can be seen as a step toward building AIs that understand the physics of the real world and interact with humans. Sony's research could be especially applicable to etiquette for self-driving cars, given that these boundaries are important despite being loosely defined (for example, it's less egregious to cut someone off in a highway lane if you immediately speed up after doing so, as opposed to slowing down or maintaining your speed).

Given that self-driving cars have turned out to be a far more complex and slow-moving endeavor than initially anticipated, incorporating etiquette into their software may be low on the priority list. But it will ultimately be important for cars run by algorithms to avoid becoming targets of road rage from human drivers.

In the meantime, GT Sophy will continue refining its racing abilities, as it has plenty of room for improvement; for example, the AI consistently passes cars that are about to incur a time penalty, when it would often make more sense to simply wait for those penalized cars to slow down.

Sony also says it plans to integrate GT Sophy into future Gran Turismo games, but hasn't yet disclosed a timeline.

Image Credit: Sony AI

Astronomers Think They’ve Just Spotted an ‘Invisible’ Black Hole for the First Time – Singularity Hub

Posted: at 5:06 am

Astronomers famously snapped the first ever direct image of a black hole in 2019, thanks to material glowing in its presence. But many black holes are actually near impossible to detect. Now another team, using the Hubble Space Telescope, seems to have finally found something nobody has seen before: a black hole that is completely invisible. The research, which has been posted online and submitted for publication in the Astrophysical Journal, has yet to be peer-reviewed.

Black holes are what's left after large stars die and their cores collapse. They are incredibly dense, with gravity so strong that nothing can move fast enough to escape them, including light. Astronomers are keen to study black holes because they can tell us a lot about the ways that stars die. By measuring the masses of black holes, we can learn about what was going on in stars' final moments, when their cores were collapsing and their outer layers were being expelled.

It may seem that black holes are by definition invisible; they, after all, earned their name through their ability to trap light. But we can still detect them through the way they interact with other objects, thanks to their strong gravity. Hundreds of small black holes have been detected by the way they interact with other stars.

There are two different approaches to such detection. In X-ray binary stars, in which a star and a black hole orbit a shared center while producing X-rays, a black hole's gravitational field can pull material from its companion. The material circles the black hole, heating up by friction as it does so. The hot material glows brightly in X-ray light, making the black hole visible, before being sucked into the black hole and disappearing. You can also detect pairs of black holes as they merge, spiraling inwards and emitting a brief flash of gravitational waves, which are ripples in spacetime.

There are many rogue black holes that are drifting through space without interacting with anything, however, making them hard to detect. That's a problem, because if we can't detect isolated black holes, then we can't learn about how they formed and about the deaths of the stars they came from.

To discover such an invisible black hole, the team of scientists had to combine two different types of observations over several years. This impressive achievement promises a new way of finding the previously elusive class of isolated black holes.

Einstein's general theory of relativity predicted that massive objects will bend light as it travels past them. That means that any light passing very close to an invisible black hole, but not close enough to end up inside it, will be bent in a similar way to light passing through a lens. This is called gravitational lensing, and it can be spotted when a foreground object aligns with a background object, bending its light. The method has already been used to study everything from clusters of galaxies to planets around other stars.
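
For readers who want the math, the bending has a famously compact form. These are standard textbook expressions rather than equations quoted from the new paper: light skimming past a mass M at impact parameter b is deflected by an angle alpha, and a lens at distance D_L in front of a source at distance D_S produces a characteristic ring of angular size theta_E, the Einstein radius.

$$
\alpha = \frac{4GM}{c^{2}b},
\qquad
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{LS}}{D_L\,D_S}},
$$

where $D_{LS}$ is the distance between the lens and the source. The heavier and closer the lens, the larger $\theta_E$, which is why a stellar-mass black hole thousands of light-years away shifts a background star's image by only milliarcseconds.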

The authors of this new research combined two types of gravitational lensing observations in their search for black holes. It started when they spotted the light from a distant star suddenly being magnified, briefly making the star appear brighter before it returned to normal. They could not see any foreground object that was causing the magnification via gravitational lensing, though. That suggested the object might be a lone black hole, something which had never been seen before. The problem was that it could also just have been a faint star.

Figuring out if it was a black hole or a faint star required a lot of work, and that's where the second type of gravitational lensing observations came in. The authors repeatedly took images with Hubble for six years, measuring how far the star appeared to move as its light was deflected.

Eventually this let them calculate the mass and distance of the object that caused the lensing effect. They found it was about seven times the mass of our sun, located about 5,000 light-years away, which sounds far away but is actually relatively close. A star that size and that close should be visible to us. Since we can't see it, they concluded it must be an isolated black hole.
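
Plugging the study's headline numbers into the Einstein-radius formula above shows just how small the signal is. In the sketch below, the seven-solar-mass lens and 5,000-light-year distance come from the research; the background source distance is an assumed, bulge-like value chosen purely for illustration.

```python
# Order-of-magnitude check on the lensing signal from a ~7 solar mass black hole
# about 5,000 light-years away. The source distance is an assumed value typical
# of stars toward the galactic bulge, not a number from the paper.

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
LY = 9.461e15          # light-year, m

M = 7 * M_SUN          # lens mass from the study
D_L = 5_000 * LY       # lens (black hole) distance from the study
D_S = 25_000 * LY      # assumed background source distance
D_LS = D_S - D_L

theta_E = math.sqrt(4 * G * M / c**2 * D_LS / (D_L * D_S))   # radians
theta_E_mas = theta_E * 206_265 * 1000                        # milliarcseconds

print(f"Einstein radius: ~{theta_E_mas:.1f} milliarcseconds")
# Roughly 5 milliarcseconds under these assumptions. The background star's
# apparent position shifts by only a fraction of this, which is why years of
# repeated high-precision Hubble measurements were needed.
```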

Taking that many observations with an observatory like Hubble isn't easy. The telescope is very popular, and there is a lot of competition for its time. And given the difficulty of confirming an object like this, you might think the prospects for finding more of them aren't great. Luckily, we're at the beginning of a revolution in astronomy, thanks to a new generation of facilities, including the ongoing Gaia survey and the upcoming Vera Rubin Observatory and Nancy Grace Roman Space Telescope, all of which will take repeated measurements of large parts of the sky in unprecedented detail.

That's going to be huge for all areas of astronomy. Having regular, high-precision measurements of so much of the sky will let us investigate en masse things that change on very short timescales. We'll study things as varied as asteroids, exploding stars known as supernovas, and planets around other stars in new ways.

When it comes to the search for invisible black holes, that means rather than celebrating finding just one, we could soon be finding so many that it becomes routine. That will let us fill in the gaps in our understanding of the deaths of stars and the creation of black holes.

Ultimately, the galaxy's invisible black holes are about to find it much harder to hide.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: L.DEP/Shutterstock.com
