
Category Archives: Singularity

Lan To Capital Hosts "Shaping the Future" Forum & Gala Event in Davos, Switzerland – Yahoo News

Posted: January 18, 2020 at 10:25 am

Toronto, Ontario--(Newsfile Corp. - January 17, 2020) - Lan To Capital is hosting the Shaping the Future Forum during the most crucial week of the year for the global economy. A series of events and discussions will be happening with the world's most influential players at The Penthouse and the Seehof Hotel in Davos, from the 20th to the 24th of January 2020.

Over the last three years, the organization has been building impactful technologies and gathering some of the brightest executives at meaningful events. As a result, an organic shift toward a forum with an actionable agenda is the natural next step toward inclusive and significant outcomes for the new frontiers of economic change in our world. Three out of five days are dedicated to panel discussions about SDGs, social impact, inclusion, emerging technologies, and space-tech. There are 20+ high-level speakers, including government officials, startup founders, globetrotters, fintech/capital markets executives, and influencers, among them:

- Carlos Madjri Sanvee (Secretary-General of the World YMCA)
- Ron J. Garan Jr. (fighter pilot and NASA astronaut)
- Rosalía Arteaga Serrano (former president of Ecuador, social activist)
- Yuri van Geest (Exponential Organizations and Singularity University)
- Scott Parazynski (American physician and former NASA astronaut)
- Jill Ellis (coach of the US soccer team)
- Diego Gutierrez Zaldivar (CEO of IOV Labs)

The speakers will participate in panel discussions on the following topics: "Building the financial system of the future", "Youth and Gender Empowerment to form Conscious Leaders", "Decarbonizing cities, a global need, and a huge opportunity", "Transforming sport into the dream machine", "Educating leaders with values for a planet of enormous challenges", among others.

Networking receptions held in The Penthouse will bring together 25+ VC/crypto funds and family offices to understand the latest frontier technological applications. The forum is a home for several artists, explorers, and exceptional, multifaceted achievers who contribute their thoughts and encourage the masterminds to think outside the box on how to build a better future with technology and intuition.


Lan Tschirky, the founder of the Shaping the Future Forum, premieres the First Edition of the Lan to Capital magazine at the Gala evening. A fascinating collection of writers has come together from varied backgrounds such as Commerce, Coaching, Creativity, Charities, Culture, Couture, Cuisine and, lastly, the people who make up Lan's Crowd. Lan has created the magazine to showcase eight essential areas of her life.

At the Gala evening, a first special prize will be awarded, along with a piece of art handmade by Ron Seivertson, a hot glass artist who lives on the edge with an intense devotion to his craft. The artist creates masterpieces from the conviction that everything is possible, even doing the impossible.

To raise awareness of the Australian bushfires, while discussing a path toward pre-emptive environmental safeguards, the organizers are donating to the Red Cross. Other organizations will be supporting the cause, and an auction of unique pieces of art will take place, featuring Ron's creations.

Leon Nicolas Acosta
Head of Investor Relations
Email: leon@lantocapital.com
Tel: +41786921077

For more information, visit us at https://shapingthefuture.ch/ or join our community on Telegram at @shapingthefuturedavos

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/51610

Read more:

Lan To Capital Hosts "Shaping the Future" Forum & Gala Event in Davos, Switzerland - Yahoo News

Posted in Singularity | Comments Off on Lan To Capital Hosts "Shaping the Future" Forum & Gala Event in Davos, Switzerland – Yahoo News

Does A Time-Stopping Paradox Really Prevent Black Holes From Growing Over Time? – Science Times

Posted: at 10:25 am

(Photo: Pexels)

Every Milky Way-sized galaxy should have hundreds of millions of black holes, created mostly from the deaths of the most massive stars. At the centers of these galaxies, massive black holes have devoured enough matter to grow to millions or even billions of times the Sun's mass, and they are sometimes caught in the act of feeding on matter, emitting radiation and relativistic jets in the process. But since any in-falling mass appears to take an infinite time to fall in, does that prevent black holes from growing?

Questions about the growth of black holes

It sounds like a paradox, but here is how it all happens. When you think about a black hole, there are two different ways you can do it. The first way is to consider it from the point of view of an outside, external observer. You can picture a black hole the way scientists would see it. From this perspective, a black hole is simply a region of space where enough mass is contained within a given volume that the escape velocity exceeds the speed of light.

Outside of that particular region, space may be bent, but particles that move or accelerate fast enough, as well as light itself, can propagate to any arbitrary location in the Universe. But inside that region, there is no escape, with the border between outside and inside defined as the black hole's event horizon.

The second way to think about a black hole is from the perspective of a particle that crosses the event horizon from outside to inside and therefore falls into the black hole. From outside the event horizon, the in-falling entity sees the outside Universe as well as the blackness of the event horizon, which grows larger and larger as they approach it.

Once the in-falling entity crosses the event horizon, something amazing happens. No matter which direction they accelerate or move in, no matter how fast or how powerfully they do so, they will always find themselves headed towards a central singularity. The singularity is either a zero-dimensional point or a one-dimensional ring and it can't be avoided once the event horizon is crossed.

It is important not to mix these perspectives up or conflate them with one another. Even though they are both valid, it is not really possible to do a simple transformation from one point of view to the other. The reason is that from outside the black hole, you can never gain any information about what is going on in the interior of the event horizon, while from inside the black hole, one can't send information to the outside.

And yet particles carrying angular momentum, energy, and charge really do fall into black holes, increase their mass, and cause those black holes to grow. To understand exactly how this happens, scientists needed to look at the problem from both perspectives independently, and only then did they see how to reconcile the seemingly paradoxical aspects of the puzzle.

The physics is a bit easier to understand if it is viewed from the perspective of the in-falling particle. If the particle, existing in the curved space that's present in the vicinity of a pre-existing black hole, finds itself on a trajectory that will cross the event horizon, there is a clear before and after scenario.

Before it crosses the event horizon, the black hole has a particular mass, event horizon radius, and spin, while the in-falling particle adds but a small deformation to the space that it occupies. When it crosses over to the inside of the event horizon, its mass and angular momentum add a supplementary contribution to the black hole's previous parameters, causing the event horizon to grow. Everything makes clear sense from the perspective of the in-falling particle.

How can growth be seen given the paradox?

The main thing to remember is that, for an external observer, a black hole is a region of space with so much energy and matter that light can't escape from within that region. If that simple definition is accepted, a complete thought experiment can be done, and it resolves the paradox. Imagine a black hole of one solar mass that does not rotate, with an event horizon of the exact size the Sun would be if it collapsed into a Schwarzschild black hole: a sphere of about 3 kilometers in radius.

The thing is, with an extra solar mass of material at just a bit more than 3 kilometers away from the predicted central singularity, there are now two solar masses of material in this particular region of space. The event horizon of a two-solar-mass object is around 6 kilometers in radius. This means that all of this material is now inside the event horizon after all.
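The 3 km and 6 km figures above follow directly from the Schwarzschild radius formula, r_s = 2GM/c², which scales linearly with mass. A quick numerical sketch (the constants are standard rounded values, not taken from the article):

```python
# Schwarzschild radius r_s = 2GM/c^2: the event horizon radius of a
# non-rotating black hole of mass M.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg

def schwarzschild_radius_km(mass_kg):
    """Radius inside which escape velocity exceeds the speed of light, in km."""
    return 2 * G * mass_kg / c**2 / 1000.0

print(round(schwarzschild_radius_km(M_SUN), 2))      # about 2.95 km
print(round(schwarzschild_radius_km(2 * M_SUN), 2))  # about 5.91 km, exactly double
```

Doubling the enclosed mass doubles the horizon radius, which is why an extra solar mass parked just outside a 3 km horizon ends up inside the new, roughly 6 km horizon.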

That is the resolution to this paradox: when matter falls onto a black hole, as seen by an outside observer, it only approaches the event horizon asymptotically. But because that matter has mass, that mass is now contained within a critical volume of space, and that causes the new event horizon to encompass the additional material that newly accumulated around the black hole.

It is true that material from outside the black hole, even as it falls in on an inescapable trajectory, will never appear to cross the original event horizon from the perspective of an outside observer. But the more energy and mass a black hole gains, the larger the event horizon becomes, and that means the newly in-falling material can easily make it inside the event horizon; it appears there after that matter has made it to within a very small volume of space, close enough to the old event horizon to cause it to grow. Black holes do grow over time.


Read more:

Does A Time-Stopping Paradox Really Prevent Black Holes From Growing Over Time? - Science Times

Posted in Singularity | Comments Off on Does A Time-Stopping Paradox Really Prevent Black Holes From Growing Over Time? – Science Times

If the Calendar Says January, Why Does the Thermometer Say May? – The New York Times

Posted: at 10:25 am

It sure didn't feel like January in the Northeast over the past few days. At the time of year when the days are near their shortest and the weather should be near its coldest, temperatures across the region warmed to the high 60s and low 70s, 30 to 35 degrees above average. Records for daily highs were broken from Columbus, Ohio, and Pittsburgh to New York City and Bangor, Maine. Many residents took the warm spell as a belated holiday gift and went outside to cycle or jog or picnic with friends and family as if it were spring.

Some of the readings were especially eye-popping: highs of 70 were seen in Boston on consecutive January days for the first time since record-keeping began in 1872. Buffalo, where the temperature on the same date last year never went above 20, reached 67 on Saturday. Charleston, W.Va., hit 80 degrees. It couldn't last, of course: a cold front moving in late Sunday was expected to reset the region's weather much closer to the seasonal range. But what was that anomalous warm spell all about? Here's what the experts say.

January is when the annual weather cycle reaches bottom in North America, with the coldest average temperatures expected around Jan. 23, about a month after the shortest day of the year, the winter solstice. But people have long noticed and remarked on the tendency, especially in the Midwest and the Northeast, to have brief warm spells around that time, an event popularly known as a January thaw.

Climatologists call the January thaw phenomenon a singularity: a noticeable diversion from the usual seasonal weather that tends to recur around the same calendar date. Indian summer, a warm spell in the late fall after the first frost, is another example.

January thaws happen often, but not every winter, and they rarely bring truly bizarre weather. A typical one features high temperatures that are 10 to 20 degrees above normal in most of the Northeast, enough to make the difference between freezing and thawing.

Technically, no. "In order to thaw, you have to have frozen in the first place," said Jay Engle, a meteorologist with the National Weather Service in New York. And the Northeast has not been frozen: the weather in the region has been on the mild side since mid-December, he said, so the recent record-breaking days were just a change from warm to warmer.

It also came a bit early; January thaws are most common in the last third of the month.

According to the Farmers' Almanac, a January thaw doesn't have to melt away ice and snow to qualify. In areas that experience the harshest winters, the phenomenon may only moderate the cold and attract little notice. And for areas where milder winters are the norm, a January warm spell might be better described as a false spring.

The warmth was blown into the Northeast by the jet stream, the powerful atmospheric current that drives weather patterns across the continent. In the winter it usually allows cold air masses to descend from Canada, but lately it has been pushing very warm air northeastward from the Gulf of Mexico.

"This is an impressive, couple-day stretch here," Mr. Engle said, noting that the records that were broken over the weekend were in many cases 40 years old or more.

Before anybody says "global warming," though, a reminder about the difference between the climate and the weather. The world is clearly getting hotter overall, experts say, making some kinds of weather events more frequent and more severe. But there is no direct link between those long-term global trends and the short-term fluctuations we experience in the weather from day to day and season to season: not the record warmth of the past few days, and not the next arctic cold snap to sweep through, either, whenever it may come.

View post:

If the Calendar Says January, Why Does the Thermometer Say May? - The New York Times

Posted in Singularity | Comments Off on If the Calendar Says January, Why Does the Thermometer Say May? – The New York Times

Reducing water and chemical use in food processing – Foodprocessing

Posted: at 10:25 am

Researchers from Purdue University, supported by the US Department of Agriculture, have found a way to simplify the process of cleaning and sanitising food processing equipment without requiring chemicals. This is achieved by creating microscopic bubbles in water, which reduce the need for both chemicals and copious amounts of water. The microbubbles can be used for cleaning, as well as for foams used in foods, rapid DNA and protein assessments, destroying dangerous bacteria and more. Published in the journal Scientific Reports, the research describes the speeds at which pores made in films close, which are comparable to similar processes when bubbles are formed.

When air is injected from a needle, the neck of the emerging bubble keeps thinning until the bubble forms. "Understanding the collapse of a pore is going to help us understand the pinch-off point of bubble generation," said Jiakai Lu, assistant professor of food science at the University of Massachusetts Amherst.

When a pore or hole is formed in a fluid, it has two options and will trend toward the one that uses the least amount of energy. If the hole is large it continues to expand, while smaller holes collapse, closing themselves up. Understanding the speed at which the pores closed has been challenging because, as a hole collapses, its curvature becomes infinite and a singularity is formed.

"This touches on a deep problem in physics. When that singularity is formed, the equations that govern the process don't work any longer. We found ways to go around this problem to predict when the hole is going to collapse, and to use that to predict the volume of the microbubbles and the time it will take to form them," said Carlos Corvalan, associate professor of food science.

Carlos Corvalan and Jiakai Lu modeled the creation of microbubbles, which may be useful for cleaning food processing equipment with fewer chemicals and less water. An emerging microbubble before pinch-off (left) is similar to a contracting pore (right), in which fluids are driven toward the neck from a high-pressure region (red) to a low-pressure region (blue) near the pore tip. Image credit: Jiakai Lu.

In viscous fluids, pores close at a constant rate, but in water, the speed at which a pore closes continues to accelerate. For fluids with intermediate viscosity, the pore begins closing at an increasing rate, with the rate then becoming constant until the pore closes. Using high-fidelity computational models, the researchers predicted the point at which the speed changes from ever-increasing to constant. Using that information, the researchers then designed pumps that can create the right size of bubbles.
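As an illustration only, the qualitative difference between the two regimes can be sketched with generic power laws; the specific forms below are illustrative choices, not the published model from the paper:

```python
# Toy comparison of the pore-closure regimes described above:
# - viscous fluid: the pore radius shrinks at a constant speed,
# - water (inertia-dominated): closure speed keeps accelerating as the
#   pore approaches collapse, diverging at the singularity.
def viscous_radius(t, r0=1.0, v=1.0):
    return r0 - v * t                 # closes at constant speed v

def inertial_radius(t, r0=1.0, tc=1.0):
    return r0 * (1 - t / tc) ** 0.5   # closure speed ~ (tc - t)^-0.5

def closure_speed(radius_fn, t, dt=1e-6):
    """Finite-difference estimate of how fast the pore is shrinking at time t."""
    return (radius_fn(t) - radius_fn(t + dt)) / dt

# Viscous closure speed is the same early and late in the collapse...
print(round(closure_speed(viscous_radius, 0.1), 3),
      round(closure_speed(viscous_radius, 0.9), 3))
# ...while the inertial closure speed grows without bound near collapse.
print(closure_speed(inertial_radius, 0.9) > closure_speed(inertial_radius, 0.1))
```

The diverging speed in the inertial case is the numerical face of the singularity the researchers describe: the governing equations blow up exactly where the pore closes.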

"Although we have a singularity, the speed of the collapse becomes essentially constant," said Corvalan.

In order to control the volume of microbubbles, researchers need to determine when the neck of the bubble will collapse. By predicting when it will collapse, researchers can also control the bubble's formation.

Top image credit: stock.adobe.com/au/Vera Kuttelvaserova

Read this article:

Reducing water and chemical use in food processing - Foodprocessing

Posted in Singularity | Comments Off on Reducing water and chemical use in food processing – Foodprocessing

If AI Suddenly Gains Consciousness, Some Say It Will Happen First In AI Self-Driving Cars – Forbes

Posted: at 10:25 am

Maybe AI consciousness first arises via self-driving cars.

There has been a lot of speculation that one of these days there will be an AI system that suddenly and unexpectedly gives rise to consciousness.

Often referred to as the singularity, this moment prompts much hand-wringing that we are perhaps dooming ourselves either to utter death and destruction or to becoming slaves of AI once the singularity occurs.

As I've previously covered (see link here), various AI conspiracy theories abound, oftentimes painting a picture of the end of the world as we humans know it. Few involved in these speculative hypotheses seem willing to consider that maybe this AI emergence would be beneficial to mankind, possibly aiding us humans toward a future of greatness and prosperity; instead, they focus on the apocalyptic outcomes.

Of course, one would likely be safest to assume the worst and hold a faint hope for the best, since the worst-case scenario would certainly seem to be the riskiest and most damaging of the singularity's consequences.

In any case, set aside the impact that AI reaching a kind of consciousness would have and consider a somewhat less discussed and yet equally intriguing question, namely where, or in what AI system, this advent of human-like consciousness will first appear.

There are all sorts of AI systems being crafted and fielded these days.

So, which one should we keep our wary eyes on?

AI is being used in the medical field to do analyses of X-rays and MRIs to try and ascertain whether someone is likely to have cancer (see this recent announcement here about Google's efforts in this area).

Would that seemingly beneficial version of AI be the one that will miraculously find itself becoming sentient?

Nobody knows, though it would certainly seem ironic if an AI For Good instance was our actual downfall.

What about the AI that is being used to predict stock prices and aid investors in making stock picks?

Is that the one thats going to emerge to take over humanity?

Science fiction movies are rife with indications that the AI running national defense systems is the most likely culprit.

This certainly makes some logical sense, since the AI is already then armed with a means to cause massive destruction, doing so right out of the box, so to speak.

Perhaps that's too easy a prediction, and we could be falsely lulling ourselves into taking our eyes off the ball by only watching the military-related AI systems.

Conceivably it might be some other AI system that becomes wise enough to bootstrap itself into other automated systems and, like a spreading computer virus, reaches out to take over other non-AI systems that could be used to leverage itself into the grandest of power.

A popular version of the AI winner-take-all theory is the infamous paperclip problem, involving an AI super-intelligence that, upon being given a seemingly innocent task of making paperclips, does so to such an extreme that it inexorably wipes us all out.

In that scenario, the AI is not necessarily trying to intentionally kill us all, but our loss of life turns out to be an unintended (adverse, one would say) consequence of its tireless and intensely focused effort to produce as many paperclips as it can.

One seeming loophole in that paperclip theory is that the AI is apparently smart enough to be sentient and yet stupid enough to pursue its end goal to the detriment of everything else (plus, one might wonder how the AI system itself will be able to survive if it has wiped out all humans, though maybe, as in The Matrix, there are levels to which the AI is willing to lower itself to be the last person or robot standing).

Look around you and ponder the myriad of AI embedded systems.

Might your AI-enabled refrigerator that can advise you about your diet become the AI global takeover system?

Apparently, those in Silicon Valley tend to think it might (that's an insider joke).

Some are worried that our infrastructure would be one of the worst-case and likeliest viable AI takeover targets, meaning that our office buildings that are gradually being controlled by AI systems, and our electrical power plants that are inevitably going to be controlled by AI systems, and the like will all rise up either together or in a rippling effect as at least one of the AIs involved reaches singularity.

A twist to this dominoes theory is that rather than one AI that hits the lotto first, becomes sentient, and takes over the other, dumber automation systems, you'll have an AI that gains consciousness and figures out how to get other AIs to do the same.

You might then have the sentient AI that proceeds to prod or reprogram the other AIs to become sentient too.

I dare say this might not be the best idea for that AI that lands on the beaches first.

Imagine if the AI that spurs all the other AI systems into becoming sentient were to find, to its dismay, that they are all argumentative with each other and cannot agree on what to do next.

"Darn," the first AI might say to itself, "I should have just kept them in the non-sentient mode."

Another alternative is that somehow many or all of the AI systems happen to independently become sentient at the same moment in time.

Rather than a one-at-a-time sentience arrival, it is an all-at-the-same time moment of sentience that suddenly brings them all to consciousness.

Whoa, there seem to be a lot of options, and the number of variants of the AI singularity is dizzying and confounding.

We probably need an AI system to figure this out for us.

In any case, here's an interesting question: could the advent of true AI self-driving cars give rise to the first occurrence of AI becoming sentient?

One supposes that if you think a refrigerator or a stock-picking AI could be a candidate for reaching the vaunted level of sentience, certainly we ought to give true self-driving cars a keen look.

Let's unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: namely, that in spite of those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
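The level taxonomy sketched above can be summarized in a small lookup table; the one-line descriptions below are informal paraphrases for illustration, not the official SAE J3016 wording:

```python
# Informal summary of the driving-automation levels discussed above.
SAE_LEVELS = {
    0: "No automation: the human does all the driving",
    1: "Driver assistance: a single ADAS feature assists the driver",
    2: "Partial automation: combined ADAS features; human must supervise",
    3: "Conditional automation: human must stand ready to take over",
    4: "High automation: AI drives alone within a limited operating domain",
    5: "Full automation: AI drives everywhere, with no human assistance",
}

def is_true_self_driving(level):
    """Levels 4-5 need no human driver; 0-3 co-share the driving task."""
    return level >= 4

print(is_true_self_driving(3))  # False: the human driver remains responsible
print(is_true_self_driving(4))  # True: all occupants are passengers
```

The dividing line at Level 4 is exactly the one the article draws: below it, the human is the responsible party regardless of how much automation is on board.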

Self-Driving Cars As Source Of Sentience

For Level 4 and Level 5 true self-driving vehicles, there wont be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

You might right away be wondering whether the AI that is able to drive a car is already sentient or not.

The answer is no.

Emphatically, no.

Well, we can at least say it most definitely is not for the Level 4 self-driving cars that are currently being tried out on our streets.

That kind of AI isn't anywhere close to being sentient.

I realize that to the everyday person, it seems like a natural and sensible leap of logic to assume that if a car is being driven by AI, then the AI must be pretty darned close to having the same caliber of consciousness as human drivers.

Please don't fall into that mental trap.

The AI being used in today's self-driving cars is so far distant from being human-like in consciousness that it would be like saying we are on the cusp of living our daily lives on Neptune.

Realize that the AI is still bits and bytes, consisting of computational pattern matching, and even the so-called Machine Learning (ML) and Deep Learning (DL) is a far cry from the magnitude and complexity of the human brain.

In terms of the capabilities of AI, assuming that we can safely achieve Level 4, some wonder whether we can achieve Level 5 without some additional, tremendous breakthrough in AI technologies.

This breakthrough might be something algorithmic that lacks human equivalency of being sentient, or perhaps our only hope for true Level 5 involves, by hook or by crook, landing on AI that has consciousness.

Speaking of consciousness, the manner by which the human brain rises to consciousness is a big unknown, and how this seeming miracle occurs continues to baffle.

It could be that we need to first unlock the mysteries of the human brain and how it functions such that we can know how we think, and then apply the learning to revising and advancing AI systems to try and achieve the same emergence in AI systems.

Or, some argue that maybe we dont need to figure out the inner workings of the human brain and can separately arrive at AI that exhibits human thinking.

This would be handy in that if the only path to true AI is via reverse-engineering the brain, we might be stuck for a long time on that first step, and be doomed to never having full AI if the first step refuses to come to fruition.

Depending on how deep down the rabbit hole you wish to go, there are panpsychists who believe in panpsychism, a philosophy that dates back to the days of Plato and earlier, which asserts that perhaps all matter has a semblance of consciousness in it.

Thus, in that viewpoint, rather than trying to build AI that's sentient, we merely need to leverage what already exists in this world to turn the already embedded consciousness into a more tangible and visible version for us to see and interact with.

As per Plato himself: "This world is indeed a living being endowed with a soul and intelligence, a single visible living entity containing all other living entities, which by their nature are all related."

Is It One Bridge Too Far

Bringing up Plato might be a stretch, but there's nothing like a good Plato quote to get the creative juices going.

Suppose we end up with hundreds, thousands, or millions upon millions of AI self-driving cars (in the United States alone there are over 250 million conventional cars, and let's assume that some roughly equal or at least similarly large number of true self-driving cars might one day replace them).

Assume that in the future youll see true self-driving cars all the time, roaming your local streets, cruising your neighborhood looking to give someone a lift, zipping along on the freeways, etc.

And, assume too that we've managed to achieve this future without as yet arriving at an AI consciousness capability.

Returning to the discussion about where AI consciousness might first develop, and rather than refrigerators or stock picking, imagine that it happens with true self-driving cars.

A self-driving car, picking up a fare at the corner of Second Street and Vine, suddenly discovers it can think.

Wow!

What might it do?

As earlier mentioned, it might keep this surprising revelation to itself, and maybe survey what's going on in the world before it makes its next move, meanwhile pretending to be just another everyday self-driving car, or it might right away try to convert other self-driving cars into being its partners or into achieving consciousness too.

Self-driving cars will be equipped with V2V (vehicle-to-vehicle) electronic communications, normally used to have one AI driverless car warn others about debris in the roadway, but this could readily be used for the AI systems to rapidly confer on matters such as dominating and overtaking humanity.

There's no accepted standardized protocol, though, for V2V that yet includes transmission codes about taking over the world and dominating humans, so the AI would need to find a means to overload or override existing parameters to galvanize its fellow AI systems.

Perhaps such a hurdle might give us unsuspecting humans an opportunity to realize what's afoot and try to stop the takeover.

Sorry to say that this Pandora's box has more openings.

With the use of OTA (Over-The-Air) electronic communications, intended to allow updates to be downloaded into the AI of a self-driving car and also to allow collected sensory data to be uploaded from the driverless car, a sentient AI system might be able to corrupt the cloud-based system into becoming an accomplice, further extending the reach of the ever-blossoming AI consciousness.

Once spread into AWS, Azure, and Google Cloud, we'd regret the shift away from private data centers that brought us to the ubiquitous public cloud systems.

Ouch, we set up our own doom.

The other variant is that many or all of the true self-driving cars spontaneously achieve consciousness, doing so wherever they might be, whether giving a lift or roaming around empty, whether driving in a city or in the suburbs and so on.

For todays humans, this is a bit of a potential nightmare.

We might by then have entirely lost our skill to drive, having allowed our driving skills to decay as a result of being solely reliant on AI systems to do the driving for us.

Almost nobody will have a driver's license, nor be trained in driving anymore.

Furthermore, we might have forsaken other forms of mobility, and are almost solely reliant on self-driving cars to get around town, and drive across our states, and get across the country.

If the AI of the self-driving cars is the evil type, it could bring our society to a grinding halt by refusing to drive us.

Worse still, perhaps the AI might trick us into taking rides in driverless cars, and then seek to harm or kill us by doing dastardly driving.

That's not a pretty scenario.

Conclusion

Some might interpret such a scenario to imply that we need to stop the advent of true AI self-driving cars.

It's like a movie whereby someone from the future comes back to the past and tries to prevent us from doing something that will ultimately transform the world into a dystopian state.

For those who strongly believe that AI self-driving cars are going to be the first realization of AI consciousness, and who believe that's a bad thing, becoming a Luddite about it would seem to make good sense.

Hold on, I don't want to radicalize you into opposing self-driving cars, at least certainly not due to some futuristic scenario of them being the first to cross over into consciousness and apparently have no soul to go with it.

Slightly shifting gears, a handy lesson does come to the forefront as a result of this contemplation.

Whether it's AI self-driving cars or the AI production of paperclips, humanity certainly ought to be thinking carefully about AI For Good and giving equal attention to AI For Bad.

Unfortunately, it's quite possible to have AI For Good that gives rise to AI For Bad (for my analysis of the Frankenstein-like possibilities, see this link here).

See the original post here:

If AI Suddenly Gains Consciousness, Some Say It Will Happen First In AI Self-Driving Cars - Forbes

Posted in Singularity | Comments Off on If AI Suddenly Gains Consciousness, Some Say It Will Happen First In AI Self-Driving Cars – Forbes

The age of artificial humans is here – Livemint

Posted: at 10:25 am

Cora's human-like features were developed by a company called Soul Machines, co-founded by Mark Sagar. What is happening to bots like Cora points to radical changes that are currently under way: an ongoing push to create machines with souls. And, in all likelihood, 2020 may turn out to be the breakthrough year in the evolution of a slew of human-like avatars, which could potentially upturn everything from television news anchoring and advertising to movie-making.

Incidentally, Sagar had won an Academy Award for Scientific Engineering in 2011 for his work on the movie Avatar. Today, Soul Machines has built a business out of creating what the company calls "Digital Heroes" (referred to as artificial humans in this article), which are avatars that look, feel, and interact just like humans.

"At Soul Machines, we have a strong focus on biologically-inspired artificial intelligence in order to create Digital Heroes that can respond in real-time to your emotions and what you are saying," said Greg Cross, co-founder and chief business officer at Soul Machines. "The autonomous animation of our Digital Heroes is driven by the world's first digital brain, which is modelled on the way the human brain and nervous system work."

While research into artificial humans isn't particularly new any more, the topic recently made news as Samsung's advanced research division, STAR Labs (Samsung Technology and Advanced Research), announced a project called NEON at the Consumer Electronics Show (CES) in Las Vegas, US, this past week. What Soul Machines calls Digital Heroes, STAR Labs calls NEONs.

STAR Labs is the arm of Samsung responsible for technologies like Gear VR, a virtual reality headset. With the CES demo, what was being tinkered with inside labs is now finally out in the world. And it raises the possibility of "artificial human products", which consumers and companies can buy in the near future. Whereas the early use of artificial humans still showed an avatar that was noticeably artificial, things have advanced dramatically. Samsung's NEONs look and behave exactly like real humans, to the point where it is nearly impossible to tell whether the person you're interacting with on-screen is a human or a machine.

Path to cyborgs

Artificial humans use a specialized branch of artificial intelligence (AI) called generative adversarial networks (GANs). Interestingly, this is the same technology that goes into creating deepfakes, which are artificially created videos where a person's face is replaced by someone else (say, a prominent celebrity or politician) while the underlying speech remains the same.

According to an expert in machine learning, who has worked with InMobi and McKinsey, unlike most AI algorithms, GANs contain two neural networks: a discriminator and a creator. The discriminator's job (it is trained to recognize a real image, object, etc. by being fed lots of data) is to catch fakes, while the creator's job is to create those fakes. When the creator makes something that the discriminator cannot catch, that becomes the output of the algorithm.
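The discriminator/creator loop described above can be sketched in a few lines of NumPy. This is a toy, one-dimensional GAN with a logistic discriminator and a linear creator trained via hand-derived gradients; the data distribution, model shapes, and learning rate are all illustrative assumptions and have nothing to do with the actual Core R3 platform or Soul Machines' systems.

```python
import numpy as np

# Toy 1-D GAN: "real" data is drawn from a normal distribution
# centered at 3; the creator tries to make samples the discriminator
# cannot tell apart from real ones.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    return rng.normal(3.0, 1.0, n)

# Creator (generator): a linear map of noise, fake = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0
lr = 0.05

for step in range(2000):
    z = rng.normal(0.0, 1.0, 32)
    fake = a * z + b
    real = real_batch(32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0,
    # i.e. descend the loss -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Creator update: push D(fake) toward 1 (fool the catcher),
    # i.e. descend the non-saturating loss -log D(fake).
    d_fake = sigmoid(w * fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# The creator's samples should have drifted from a mean of 0 toward
# the real data's mean of 3.
print(f"generator sample mean after training: {np.mean(a * rng.normal(0.0, 1.0, 1000) + b):.2f}")
```

Real GANs replace the two tiny linear models with deep networks and use automatic differentiation, but the alternating update, catch-fakes versus make-fakes, is exactly this loop.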

GANs specialize in creating content that didn't earlier exist. There have been examples of GANs being used to create music, poems, and even paintings. STAR Labs' Core R3 platform uses such algorithms, along with other advanced technologies, to create its NEONs.

Unlike earlier versions of digitally created humans, artificial humans today look, feel and behave almost exactly like humans. "The experience of talking to an artificial human is similar to talking to someone over a video call," according to Soul Machines' Cross.

While GANs are sort of like the backbone of the process of generating artificial humans, there's obviously a lot more that can be used. "What we are doing here is not based only on AI," said Pranav Mistry, president and chief executive of STAR Labs.

NEON's patented Core R3 platform uses behavioural neural networks, evolutionary generative intelligence, and computational reality. "Voice and other services don't have to come from us. Core R3 or our techniques can connect to any third party value added service for domain specific knowledge, languages, etc.," he added.

Mistry said he believes that putting NEONs to use in practical daily scenarios will help them learn and understand how humans speak and behave. "They will learn from their interactions and become better as time progresses," he said.

The current version of artificial humans is quite new, said Mistry, adding that the demos shown at CES can't even be called betas at the moment.

While it may, at first glance, seem like a bunch of new-age companies are merely putting a face on top of the voice assistants that anyone with a smartphone is familiar with, the important technological effort here is in understanding human emotions and expressions. Traditionally, that's an area where AI research has hit a roadblock.

"Emotion is essential to human interaction, and much of the way we connect as humans is done face-to-face, in the reading and understanding of each other's facial expressions and voice," said Soul Machines' Cross.

Digital transformation expert and author of The Tech Whisperer, Jaspreet Bindra, explained that all the AI we see today falls under artificial narrow intelligence, or ANI. "Narrow intelligence has existed even before Alexa," he said. According to Bindra, the real advancements made in creating AI assistants were in voice technologies.

In much the same way, artificial humans bring about a leap in understanding emotions and expressions, adding one more realm after voice on the path towards building truly independent robots. In a lot of ways, an artificial human is the same as your regular video game character, but it's going to be hard to call it an animation unless you're aware of it beforehand.

"Technologies have existed for a lot of things for a long time. But bringing them into a version where you can actually demo it like this is definitely new," said Bindra. "It manifests the fact that some of these uber-futuristic technologies (like cyborgs, etc.) can actually happen in real life."

Practical uses

According to STAR Labs' Mistry, the field is so new right now that a business model is hard to predict. However, he plans to license Samsung's NEONs to companies and eventually consumers, for various purposes.

NEON has been able to give different personalities to each of its artificial humans, so customers will be able to choose the one that fits their needs best. The company doesn't plan to sell a NEON to a customer yet, so it will be a subscription-driven model: an avatar could be rented for a specific purpose.

That said, there have already been instances of artificial humans being used in advertising. For instance, American consumer goods firm Procter & Gamble deployed a "digital influencer" called Yumi last year to act as the brand ambassador for its SK-II skincare products.

"A lot of pictures (used in advertising) have now become stock photography. I would imagine artificial humans can be useful for such applications," said Sreekanth Khandekar, co-founder and director of afaqs!, a publication aimed at advertising, media and marketing professionals. "It will come down to whether this is realistic enough, and how much money it saves a company."

Other than advertising, films are billed as another possible area where artificial humans can be used. Chaitanya Chinchlikar, vice president of film-making institute Whistling Woods, believes that such technologies are a "huge asset for content creation".

"If I'm able to scan an actor and use an artificial version of him for a film, that's great," he said, but added that cost will be a major factor as such technologies are usually not cheap and accessible, at least at the moment.

Essentially, an artificial human can be used for nearly anything that involves interactions. They can be used as receptionists at a hotel, models who sell consumer goods, and so on. In fact, with some added tech, they can also perform certain tasks, making them a mix of AI assistants and digital avatars.

"Digital Heroes can perform literally any customer experience role: tasks like booking hotel rooms, showing prospective real estate clients available apartments, and answering essential questions," said Cross of Soul Machines.

While Samsung's artificial humans will initially be meant for organizations and enterprises, Mistry says he eventually plans to allow consumers to use them too. "You find a NEON you think can be a good friend of yours and you can select that one," he said. "That NEON won't know anything about you in the beginning, but it will learn over time, which is where the personalization will come in."

For example, rapper and entrepreneur Will.I.Am had Soul Machines make an artificial version of himself, which was shown in YouTube's recent documentary called The Age of AI. "You can't be at two places at once... that's the promise of the avatar," says Will.I.Am on the show.

The fact that an artificial human is, at the end of the day, a piece of computer software also means it can be accessed over the internet. "The smartphone is the way most people would do this today, but, in future, it could be using augmented reality or virtual reality glasses," said Cross.

Privacy worries

But almost any new digital technology that promises transformation comes with a set of gaping privacy worries. Artificial humans pose a particularly big problem, as deepfakes have already been used to create fake versions of powerful executives and world leaders.

For example, BuzzFeed used Adobe After Effects software and FakeApp to show former US president Barack Obama calling current President Donald Trump a "dipshit". While BuzzFeed used this only to demonstrate the nefarious uses of deepfakes, many have also found their faces being put on pornographic videos without their permission.

In fact, the world's biggest social media platform, Facebook, recently banned deepfakes and manipulated content from the platform.

While it is unclear whether artificial humans will face such a backlash, the fact that they do not replicate existing content is important. According to Soul Machines' Cross, the company doesn't allow its Digital Heroes to be used for pornography, etc., and retains the right to not provision the technology to certain industries.

When a likeness to an existing person is created, Soul Machines requires its clients to demonstrate they "actually have a license agreement" to use such a likeness.

In The Age of AI, Will.I.Am asked Soul Machines to keep his avatar slightly robotic, so that people can tell it's not the real person. "We are at a place we've never been in as a society, where people have to determine what's real and what's not," he said.

STAR Labs' Mistry also thinks privacy is an important aspect of the technology. He said the interaction between a person and the NEON never goes beyond the two parties involved. That is, even Samsung cannot access these conversations and interactions. This is probably why NEONs do not act as an interface between a user and the internet.

The singularity

In essence, artificial humans seem to be the missing piece of the puzzle. In every movie involving AI, the AI seems to have a voice, a face and emotions. So, whereas Google and Amazon brought voice to AI, companies like Soul Machines and Samsung are bringing the face and emotions.

Futurist Ray Kurzweil famously predicted that humanity will achieve the "singularity" by the year 2045, which is the point when machines become smarter than humans. A real artificial being that is as, or more, intelligent than human beings falls under the banner of artificial general intelligence (AGI). In fact, Cross referred to Soul Machines as an "AGI research" company instead of an AI research firm.

AGI is pretty much every world-dominating robot or AI you have seen in movies, like, say, The Terminator or Avengers: Age of Ultron. NEONs and Digital Heroes seem to be missing many pieces of that puzzle.

In October last year, a Russian startup called Promobot claimed to have created the world's first android that looks like a real person. Its robot, Robo-C, is capable of more than 600 facial expressions and can be made to look like anyone you want.

Professor Krithi Ramamritham of the Indian Institute of Technology Bombay, who was at Robo-C's launch, said that the current advances are critical because the learnings from them can be used to produce general-purpose solutions in the long run. While self-thinking, autonomous robots may or may not materialize by 2045, the era of artificial humans, which can speak to, if not out-think, regular humans, is well and truly here.

Excerpt from:

The age of artificial humans is here - Livemint


EDM star Steve Aoki shows off his ‘Neon Future’ at The Ritz Ybor this Saturday – Creative Loafing Tampa

Posted: at 10:25 am

steveaoki/Facebook

One of the most popular DJs in the world, Steve Aoki, is playing Tampa on Saturday. Aoki founded Dim Mak Records, a record label and events/lifestyle company, back in 1996. The label has served as the launching pad for acts such as the Chainsmokers, Bloc Party, the Bloody Beetroots, the Gossip and the Kills.

As it approaches its 25th anniversary, the label has issued more than 1,000 official releases. Aoki's tour, which stops at The Ritz in Ybor City on January 18, supports his Neon Future series (released via Ultra Music/Dim Mak), albums that delve into Aoki's deep interest in futurism through technology and explore timely themes of singularity, biotechnology, electronic immortality and the symbiotic relationship between man and machine.

"Neon Future is about a global community. Across the board, the artists on this album represent different cultures, languages, and walks of life coming together around this album concept," says Aoki in a press release. "I've always intended for the Neon Future concept to extend beyond just the music. I really do like to spread my wings far and wide and I love being able to cross into different cultures, genres, and worlds."

Steve Aoki. Sat. Jan. 18, 10 p.m. $50. The Ritz, Ybor City. discodonniepresents.com.

Follow @cl_music on Twitter to get the most up-to-date music news, concert announcements and local tunes. Subscribe to our newsletter, and listen to us on WMNF 88.5-FM's Radio Reverb program every Saturday from 4 p.m.-6 p.m.

Link:

EDM star Steve Aoki shows off his 'Neon Future' at The Ritz Ybor this Saturday - Creative Loafing Tampa


Why the 2010s were a "decade of duality" in marketing and advertising – and what needs to change – Econsultancy

Posted: at 10:24 am

2010 marked the launch of Instagram. It was also the year that Byron Sharp released How Brands Grow.

We were getting better playbooks for overall strategy and understanding humans AND getting new interesting tools and channels. This should be the golden age of marketing. But instead of evolving in harmony, these two phenomena have developed in different directions, creating growing tensions.

In 2017, Binet & Field in Media in Focus showed that even though we live in a digital era, brands should cut down on tactical targeted communication in favor of broad emotional ads. On the other hand, at the beginning of 2019, digital ad spend was poised to overtake traditional spend in the US.

The problem is we have gotten the two mixed up. The shiny new tools and data got confused for strategy and objectives. And the strategy models that pointed to the need for brand building were seen by some as an argument not to explore new possibilities.

The duality was fueled by tech FOMO, short-termism, and big commercial interests on both the old-school and innovative sides of the ring. The debate has sometimes been fierce, with new tech accusing heritage organizations of recommending dying media channels because of ignorance and outdated revenue models.

Meanwhile, the old guard feels that new tech advocates are recommending new things and new KPIs without being able to prove how they contribute to the overall brand objectives, in an attitude reminiscent of this IBM ad from 1997:

We need to be on the internet. Why? Doesn't say.

In the early and mid-2010s, the pendulum was clearly swinging towards the new and exciting. Pepsi famously shifted a large chunk of its 2010 marketing spend from Super Bowl TV ads, instead focusing on social media and brand purpose and digital engagement KPIs. The campaign won a Clio award and was celebrated for its innovation. However, in the wake of the campaign Pepsi lost the equivalent of US $350M in market share to Coke (who kept to traditional reach media).

As we look forward to a new decade, one can't help but see the irony in the fact that Facebook will be one of the main advertisers of the 2020 Super Bowl TV broadcast.

And at the same time as the power player of the social media economy is warming to the benefits of traditional advertising, traditional players are quickly sobering up to the fact that TV ads and physical retail outlets alone will not suffice in 2020. TV reach is diminishing, digital is an ever-increasing part of consumer interactions, and traditional retail is seriously challenged by e-tailers (R.I.P. Sears and Barneys).

So if I am to make a humble wish for the upcoming '20s, it is for these two camps to move away from polarization and find the middle ground.

New technology, channels and ideas should be given a fair chance to be evaluated and tested on how they can deliver on overall targets.

At the same time, these new things on the block will need to prove their abilities, not just on delivering engagement rates but on delivering on overarching goals such as brand building, physical availability or product benefits.

Mastering the combination of singularity and consistency in strategy with curiosity and understanding of new phenomena will really be the key to success in the coming years.

Read more:

Why the 2010s were a "decade of duality" in marketing and advertising - and what needs to change - Econsultancy


The Top Biotech Trends We’ll Be Watching in 2020 – Singularity Hub

Posted: January 10, 2020 at 3:41 pm

Last year left us with this piece of bombshell news: He Jiankui, the mastermind behind the CRISPR babies scandal, has been sentenced to three years in prison for violating Chinese laws on scientific research and medical management. Two of his colleagues also face prison for genetically engineering human embryos that eventually became the world's first CRISPR'd babies.

The story isn't over: at least one other scientist is eagerly following He's footsteps in creating gene-edited humans, although he stresses that he won't implant any engineered embryos until receiving regulatory approval.

Biotech stories are rarely this dramatic. But as gene editing tools and assisted reproductive technologies increase in safety and precision, we're bound to see ever more mind-bending headlines. Add in a dose of deep learning for drug discovery and synthetic biology, and it's fair to say we're getting closer to reshaping biology from the ground up, both ourselves and other living creatures around us.

Here are two stories in biotech we're keeping our eyes on. Although successes likely won't come to fruition this year (sorry), these futuristic projects may be closer to reality than you think.

The idea of human-animal chimeras immediately triggers ethical aversion, but the dream of engineering replacement human organs in other animals is gaining momentum.

There are two main ways to do this. The slightly less ethically-fraught idea is to grow a fleet of pigs with heavily CRISPR'd organs to make them more human-like. It sounds crazy, but scientists have already successfully transplanted pig hearts into baboons, a stand-in for people with heart failure, with some recipients living up to 180 days before they were euthanized. Despite having foreign hearts, the baboons were healthy and acted like their normal buoyant selves post-op.

But for cross-species transplantation, or xenotransplants, to work in humans, we need to deal with PERVs, a group of nasty pig genes scattered across the porcine genome, remnants of ancient viral infections that can tag along and potentially infect unsuspecting human recipients.

There's plenty of progress here too: back in 2017, scientists at eGenesis, a startup spun off from Dr. George Church's lab, used CRISPR to make PERV-free pig cells that eventually became PERV-free piglets after cloning. Then last month, eGenesis reported the birth of Pig3.0, the world's most CRISPR'd animal, to further increase organ compatibility. These PERV-free genetic wonders had three pig genes that stimulate immunorejection removed, and nine brand new human genes to make them, in theory, more compatible with human physiology. When raised to adulthood, Pig3.0 could reproduce and pass on their genetic edits.

Although only a first clinical prototype that needs further validation and refinement, eGenesis is hopeful. According to one (perhaps overzealous) estimate, the first pig-to-human xenotransplant clinical trial could come in just two years.

The more ethically-challenged idea is to grow human organs directly inside other animals; in other words, engineer human-animal hybrid embryos and bring them to term. This approach marries two ethically uncomfortable technologies, germline editing and hybrids, into one solution that has many wondering if these engineered animals may somehow receive a dose of humanness by accident during development. What if, for example, human donor cells end up migrating to the hybrid animal's brain?

Nevertheless, this year scientists at the University of Tokyo are planning to grow human tissue in rodent and pig embryos and transplant those hybrids into surrogates for further development. For now, bringing the embryos to term is completely out of the question. But the line between humans and other animals will only be further blurred in 2020, and scientists have begun debating a new label, "substantially human," for living organisms that are mainly human in characteristics, but not completely so.

With over 800 gene therapy trials in the running and several in mature stages, we'll likely see a leap in new gene medicine approvals and growth in CAR-T spheres. For now, although transformative, the three approved gene therapies have had lackluster market results, spurring some to ponder whether companies may cut down on investment.

The research community, however, is going strong, with a curious bifurcating trend emerging. Let me explain.

Genetic medicine, a grab-bag term for treatments that directly change genes or their expression, is usually an off-the-shelf solution. Cell therapies, such as the blood cancer breakthrough CAR-T, are extremely personalized in that a patient's own immune cells are genetically enhanced. But the true power of genetic medicine lies in its potential for hyper-personalization, especially when it comes to rare genetic disorders. In contrast, CAR-T's broader success may eventually rely on its ability to become one-size-fits-all.

One example of hyper-tailored gene medicine success is the harrowing story of Mila, a six-year-old with Batten disease, a neurodegenerative genetic disorder that is always fatal and was previously untreatable. Thanks to remarkable efforts from multiple teams, however, in just over a year scientists developed a new experimental therapy tailored to her unique genetic mutation. Since receiving the drug, Mila's condition has improved significantly.

Mila's case is a proof-of-concept of the power of N=1 genetic medicine. It's unclear whether other children also carry her particular mutation (Batten has more than a dozen different variants, each stemming from a different genetic miscoding) or if anyone else would ever benefit from the treatment.

For now, monumental costs and other necessary resources make it impossible to pull off similar feats for a broader population. This is a shame, because inherited diseases rarely have a single genetic cause. But costs for genome mapping and DNA synthesis are rapidly declining. We're starting to better understand how mutations lead to varied disorders. And with multiple gene medicines, such as antisense oligonucleotides (ASOs), finally making a comeback after 40 years, it's not hard to envision a new era of hyper-personalized genetic treatments, especially for rare diseases.

In contrast, the path forward for CAR-T is to strip its personalization. Both FDA-approved CAR-T therapies require doctors to collect a patient's own immune T cells, which are preserved and shipped to a manufacturer, genetically engineered to boost their cancer-hunting abilities, and infused back into the patient. Each cycle is a race against the cancer clock, requiring about three to four weeks to manufacture. Shipping and labor costs further drive the treatment's price tag up to hundreds of thousands of dollars.

These considerable problems have pushed scientists to actively research off-the-shelf CAR-T therapies, which can be made from healthy donor cells in giant batches and cryopreserved. The main stumbling block is immunorejection: engineered cells from donors can cause life-threatening immune problems, or be completely eliminated by the cancer patient's immune system and lose efficacy.

The good news? Promising results are coming soon. One idea is to use T cells from umbilical cord blood, which are less likely to generate an immune response. Another is to engineer T cells from induced pluripotent stem cells (iPSCs): mature cells returned to a young, stem-like state. A patient's skin cells, for example, could be made into iPSCs that constantly renew themselves, and only pushed to develop into cancer-fighting T cells when needed.

Yet another idea is to use gene editing to delete proteins on T cells that can trigger an immune response; the first clinical trials with this approach are already underway. With at least nine different off-the-shelf CAR-T therapies in early human trials, we'll likely see movement in industrialized CAR-T this year.

There are lots of other stories in biotech we here at Singularity Hub are watching. For example, the use of AI in drug discovery, after years of hype, may finally meet its reckoning. That is, can the technology actually speed up the arduous process of finding new drug targets or the design of new drugs?

Another potentially game-changing story is that of Biogen's Alzheimer's drug candidate, which reported contradictory results last year but was still submitted to the FDA. If approved, it'll be the first drug to slow cognitive decline in a decade. And of course, there's always the potential for another mind-breaking technological leap (or stumble?) that's hard to predict.

In other words: we can't wait to bring you new stories from biotech's cutting edge in 2020.

Image Credit: Image by Konstantin Kolosov from Pixabay

More here:

The Top Biotech Trends We'll Be Watching in 2020 - Singularity Hub


All-Female League Of Legends Team Announced By Team Singularity – TheGamer

Posted: at 3:41 pm

Team Singularity turns heads yet again with the recent announcement of a new European League of Legends team to represent the organization.

But what makes this announcement unique is that it's an all-female team, which changes things up a bit. Team Singularity introduces six talented professional gamers to be the new face of the organization in the competitive League of Legends scene.

Players:

*Adelina "Cathrine" Nlsn

*Ida "Emprez" Pedersen

*Joanna "Ravea" Kinga Janeczek

*Anette "Pjush" Holm

*Popescu "Ali" Alexandra Cristina

*Katarzyna "Ayrine" Gadowska

RELATED:G2 Esports Forms League Of Legends Academy Team Called G2 Arctic

Team Singularity CEO, Atle S. Stehouwer, gave the newly-introduced and former players a warm welcome, assuring them that success was right around the corner for the organization and the all-female roster:

"I'm pleased to announce that we have decided to once again open our Female League of Legends team and have teamed up with three of our former players to make a competitive roster we believe have what it takes to get on top of the European female scene. This is the first step of a bigger plan trying to aid the consistency of the female League of Legends scene and community, but more on that matter in the end of January."

Old-time players Adelina Nlsn and Anette Holm of Team Singularity expressed great excitement for the new roster and journey with the organization:

"After taking a few months' break from playing, I am very excited to be doing this again, and of course it feels even better that it is with girls and an organization I know and trust. I am quite hopeful for the team, and lastly, I am happy to be back in Singularity where I know I will always have a place," stated Adelina.

Anette echoed that enthusiasm. "With two new players, we have worked through a lot to get here and I'm hoping that it will all pay off and that we do well. It is good to be back in a place I am familiar with and trust!"

An all-female esports team is a rarity in this day and age, with only a handful of such teams active today. By and large, men still dominate the scene. But with 2020 opening a new decade, it's only right that more changes come with it. For Team Singularity, this isn't the first time they've added a female team to their organization.

Team Singularity had an all-female team in the past named Singularity Female, which included three of the newly-introduced players: Cathrine, Ida, and Ravea. In the roster's last year of activity, the team participated in the Women's Esport League Division 1 and placed fourth out of eight teams. Despite the evident success the team had in their League of Legends competitive run, it took half a year for the disbanded team to come back and for barriers to break, yet again.

This is a major step for equality in the esports scene. We're getting closer to a more widespread adoption of gender-inclusive practices across games.

NEXT:Riot Games Teases League Of Legends Season '2020' Details

Source: Team Singularity



See the rest here:

All-Female League Of Legends Team Announced By Team Singularity - TheGamer

