Daily Archives: February 6, 2017

A Real Life Hibernation Chamber is Being Made For Deep Space Travel – Futurism

Posted: February 6, 2017 at 3:43 pm

Therapeutic Hypothermia

Manned, long-term, deep space missions are an exciting prospect, but one that remains in the realm of distant possibilities, particularly because we don't have all the technological innovations needed to make it happen.

One major consideration is the time it takes to reach the destination. Mars, which is at the top of various space programs' lists of go-to destinations for manned missions, is about six months of travel time away from Earth. If we wanted to explore even further, keep in mind that New Horizons, the fastest spacecraft to leave Earth, took nine and a half years to reach Pluto.

Science fiction conveniently sidesteps this challenge by putting the space explorers into deep sleep, a state of suspended animation. But slowing the human metabolism down while ensuring that a person will stay alive for extended periods is a lot easier said than done.

Spaceworks, however, led by John A. Bradford, is proposing to use a method they refer to as therapeutic hypothermia. The process involves cooling the body a little below normal body temperature (37 °C) to slow the heart rate and lower blood pressure. This process is already being used in the medical world: by bringing the body temperature of patients undergoing treatment for cardiac arrest or traumatic brain injuries down to between 32 and 34 degrees Celsius, doctors have more time to address the issues.

The method normally allows patients to stay in stasis for about 2-4 days, but has worked for as long as two weeks. Spaceworks not only believes they can extend this for months, but also that they can create the technology needed to automate the process and apply it for deep-space missions.

Unlike the cryo-chambers depicted in films, however, where row upon row of space travelers are left in suspended animation in individual pods, Spaceworks is conceptualizing an open chamber that allows the crew to go into stasis in shifts.

"There would be some robotic arms and monitoring systems taking care of [the passengers]. They'd have small transnasal tubes for the cooling and some warming systems as well, to bring them back from stasis," Bradford said in an interview with Quartz.

This not only addresses concerns of adding too much weight to a spacecraft, but also ensures that there will be people awake to manage possible emergencies and conduct standard monitoring.

As for the long-term health effects of space travel, Spaceworks is trying to find ways of incorporating exercise into stasis. The team is looking into using electrical stimulation, which is already used to aid physical therapy. Having this technology in place also solves a lot of logistical issues for manned space missions. With crew members awake, you have to factor in the volume of food, water, and air needed to keep them alive for months and years at a time. It could also help manage the psychological impact of long-term space travel and hopefully lower the risk of space crews succumbing to depression, claustrophobia, or anxiety.

According to Spaceworks, they are due to begin animal testing next year, with human testing set to follow, including in space on the International Space Station.


Space travel visionaries solve the problem of interstellar slowdown at Alpha Centauri – Phys.Org

Posted: at 3:43 pm

February 1, 2017 Interstellar journey: The aim of the Starshot project is to send a tiny spacecraft propelled by an enormous rectangular photon sail to the Alpha Centauri star system, where it would fly past the Earth-like planet Proxima Centauri b. The four red beams emitted from the corners of the sail depict laser pulses for communication with the Earth. Credit: Planetary Habitability Laboratory, University of Puerto Rico at Arecibo

In April last year, billionaire Yuri Milner announced the Breakthrough Starshot Initiative. He plans to invest 100 million US dollars in the development of an ultra-light light sail that can be accelerated to 20 percent of the speed of light to reach the Alpha Centauri star system within 20 years. The problem of how to slow down this projectile once it reaches its target remains a challenge. René Heller of the Max Planck Institute for Solar System Research in Göttingen and his colleague Michael Hippke propose to use the radiation and gravity of the Alpha Centauri stars to decelerate the craft. It could then even be rerouted to the red dwarf star Proxima Centauri and its Earth-like planet Proxima b.

In the recent science fiction film Passengers, a huge spaceship flies at half the speed of light on a 120-year-long journey toward the distant planet Homestead II, where its 5000 passengers are to set up a new home. This dream is impossible to realize at the current state of technology. "With today's technology, even a small probe would have to travel nearly 100,000 years to reach its destination," René Heller says.

Notwithstanding the technical challenges, Heller and his colleague Michael Hippke wondered, "How could you optimize the scientific yield of this type of a mission?" Such a fast probe would cover the distance from the Earth to the Moon in just six seconds. It would therefore hurtle past the stars and planets of the Alpha Centauri system in a flash.

The solution is for the probe's sail to be redeployed upon arrival so that the spacecraft would be optimally decelerated by the incoming radiation from the stars in the Alpha Centauri system. René Heller, an astrophysicist working on preparations for the upcoming exoplanet mission PLATO, found a congenial spirit in IT specialist Michael Hippke, who set up the computer simulations.

The two scientists based their calculations on a space probe weighing less than 100 grams in total, which is mounted to a 100,000-square-metre sail, equivalent to the area of 14 soccer fields. During the approach to Alpha Centauri, the braking force would increase. The stronger the braking force, the more effectively the spacecraft's speed can be reduced upon arrival. Vice versa, the same physics could be used to accelerate the sail at departure from the solar system, using the sun as a photon cannon.

The tiny spacecraft would first need to approach the star Alpha Centauri A as close as around four million kilometres, corresponding to five stellar radii, at a maximum speed of 13,800 kilometres per second (4.6 per cent of the speed of light). At even higher speeds, the probe would simply overshoot the star.
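
To get a feel for these numbers, here is a rough back-of-envelope sketch of the photon-pressure deceleration on the quoted 100-gram, 100,000-square-metre sail at the quoted closest approach. It is not the simulation Heller and Hippke ran; the luminosity of Alpha Centauri A (roughly 1.5 times the Sun's) and a perfectly reflective sail are assumptions made purely for illustration.

```python
import math

# Assumed constants for illustration; not taken from the Heller & Hippke paper.
L_SUN = 3.828e26                 # solar luminosity, W
L_ALPHA_CEN_A = 1.5 * L_SUN      # approximate luminosity of Alpha Centauri A (assumption)
C = 2.998e8                      # speed of light, m/s

SAIL_AREA = 1.0e5                # m^2, as quoted in the article
PROBE_MASS = 0.1                 # kg (100 grams), as quoted in the article
CLOSEST_APPROACH = 4.0e9         # m (four million kilometres), as quoted in the article

def radiation_deceleration(distance_m, reflectivity=1.0):
    """Deceleration from photon pressure on a flat sail facing the star."""
    flux = L_ALPHA_CEN_A / (4 * math.pi * distance_m**2)   # W/m^2
    force = (1 + reflectivity) * flux * SAIL_AREA / C      # N; full reflection doubles the push
    return force / PROBE_MASS                              # m/s^2

a = radiation_deceleration(CLOSEST_APPROACH)
print(f"Deceleration at closest approach: {a:,.0f} m/s^2 (about {a / 9.81:,.0f} g)")
# Photon pressure this strong is only available very close to the star,
# which is why the fly-in geometry matters so much for shedding ~13,800 km/s.
```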

During its stellar encounter, the probe would not only be repelled by the stellar radiation, but it would also be attracted by the star's gravitational field. This effect could be used to deflect it around the star. These swing-by manoeuvres have been performed numerous times by space probes in our solar system. "In our nominal mission scenario, the probe would take a little less than 100 years, or about twice as long as the Voyager probes have now been travelling. And these machines from the 1970s are still operational," says Michael Hippke.


Theoretically, the autonomous, active light sail proposed by Heller and Hippke could settle into a bound orbit around Alpha Centauri A and possibly explore its planets. However, the two scientists are thinking even bigger. Alpha Centauri is a triple star system. Stars A and B form a close binary, revolving around their common centre of mass, while the third star, Proxima Centauri, is 0.22 light years away, more than 12,500 times the distance between the Sun and the Earth.
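
A quick unit check of that last figure, using standard conversion factors:

```python
LIGHT_YEAR_M = 9.4607e15   # metres per light year
AU_M = 1.496e11            # metres per astronomical unit (Sun-Earth distance)

print(f"0.22 light years ≈ {0.22 * LIGHT_YEAR_M / AU_M:,.0f} astronomical units")
# -> roughly 13,900 AU, consistent with the article's "more than 12,500 times".
```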

The sail could be configured so that the stellar pressure from star A brakes and deflects the probe toward Alpha Centauri B, where it would arrive after just a few days. The sail would then be slowed again and catapulted towards Proxima Centauri, where it would arrive after another 46 years, about 140 years after its launch from Earth.

Proxima Centauri caused a sensation in August 2016 when astronomers at the European Southern Observatory (ESO) discovered an exoplanet companion that is about as massive as the Earth and that orbits the star in its so-called habitable zone. This makes it theoretically possible for liquid water to exist on its surface, water being a key prerequisite for life on Earth.

"This finding prompted us to think about the possibility of stopping a high-velocity interstellar lightsail at Proxima Centauri and its planet," says Ren Heller. The Max Planck researcher and his colleague propose another change to the strategy for the Starshot project: instead of a huge energy-hungry laser, the Sun's radiation could be used to accelerate a nanoprobe beyond the solar system. "It would have to approach the Sun to within about five solar radii to acquire the necessary momentum," Heller says.

The two astronomers are now discussing their concept with the members of the Breakthrough Starshot Initiative, to whom they owe the inspiration for their study. "Our new mission concept could yield a high scientific return, but only the grandchildren of our grandchildren would receive it. Starshot, on the other hand, works on a timescale of decades and could be realized in one generation. So we might have identified a long-term, follow-up concept for Starshot," Heller says.

Although the new scenario is based on a mathematical study and computer simulations, the proposed hardware of the sail is already being developed in laboratories today: "The sail could be made of graphene, an extremely thin and light but mega-tough carbon film," René Heller says. The film would have to be blanketed by a highly reflective cover to endure the harsh conditions of deep space and the heat near the destination star.

The optical and electronic systems would have to be tiny. But if you were to remove all the unnecessary components from a modern smartphone, "only a few grams of functional technology would remain." Moreover, the lightweight spacecraft would have to navigate independently and transmit its data to Earth by laser. To do so, it would need energy, which it could harness from the stellar radiation.

Breakthrough Starshot therefore poses daunting challenges that have so far only been solved theoretically. Nevertheless, "many great visions in the history of mankind had to struggle with seemingly insurmountable obstacles," Heller says. "We could soon be entering an era in which humans can leave their own star system to explore exoplanets using fly-by missions."


More information: Heller, R., & Hippke, M. (2017) "Deceleration of high-velocity interstellar sails into bound orbits at Alpha Centauri", The Astrophysical Journal Letters, Volume 835, L32, DOI: 10.3847/2041-8213/835/2/L32


Reader comments (newest first):

100 grams is also pretty ambitious (as it includes the weight of the sail). Graphene weighs 0.77 milligrams per square meter, so a 100k square meter sail is already 77 g. If you add any kind of reflective layer, this will be WAY over the 100 g mark. And you need some structural elements to make sure it doesn't collapse under the pressure of the radiation accelerating/decelerating it.
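
The commenter's arithmetic is easy to verify. The snippet below also adds a hypothetical coating calculation; the 10-nanometre thickness and 2.7 g/cm^3 density are made-up illustrative assumptions, not figures from the article or the paper, but they show how quickly any extra layer eats the mass budget.

```python
GRAPHENE_AREAL_DENSITY_G_PER_M2 = 0.77e-3   # 0.77 mg per square metre, as stated above
SAIL_AREA_M2 = 1.0e5                        # 100,000 square metres

graphene_mass_g = GRAPHENE_AREAL_DENSITY_G_PER_M2 * SAIL_AREA_M2
print(f"Bare graphene sail: {graphene_mass_g:.0f} g")    # -> 77 g

# Hypothetical reflective coating: 10 nm of a metal at ~2.7 g/cm^3 (illustrative only).
coating_thickness_m = 10e-9
coating_density_g_per_m3 = 2.7e6
coating_mass_g = coating_thickness_m * coating_density_g_per_m3 * SAIL_AREA_M2
print(f"Coating alone: {coating_mass_g:.0f} g")          # -> 2700 g, far past a 100 g budget
```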

The StarChip probe package is envisioned at a few grams with a 'compact laser for data transmission'. However, I've not seen anyone mention how such a small laser can transmit data over 4 light years.

Sure, the 100 gram useless piece of space junk will be a $100Mil monument to the dumbass ego. Hopefully no aliens will notice it (only 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000012 chance they will, as no one lives in Alpha Centauri system).

What level of drag do you get on this size of sail from the thin interstellar medium? How do you compensate for the unknown medium and the stellar-wind particulate drag near Proxima?

Yes, to say the least, this presented concept is problematic. The asteroid belt contains sufficient kinetic energy to send a much larger probe out at as high or higher speed with far greater safety. Why don't they use that? Asteroids are just sitting there waiting for someone clever enough to start bouncing things around, so to speak. The physics is Newtonian for crying out loud.

I really have no idea how such a small probe can pack all of the systems needed for this probe to be worthwhile. How is it generating power, storing it, keeping itself warm? What about redundant systems for when a cosmic particle crashes through the probe's electronics? Can the probe receive over-the-air software updates in order to fix the software glitches the system will no doubt be launched with? Can we really track this probe with sufficient accuracy to perform corrective trajectory manoeuvres? What bit rate can you achieve from such a tiny, low-powered (where's the power coming from?) laser? Is it possible to point lasers so accurately that we can hit this probe from Earth, 4 light years away? Surely bi-directional communication is required. Better not need to update the probe in a hurry; 4 years is some savage communication lag! Unfurling and furling the sail repeatedly, and probe stabilisation during the process... GOOD F_cking luck with that!

Even the swarm-antenna idea doesn't quite work: at the speed and in the way they are being sent, the probes would only remain in a viable configuration for a tiny amount of time.

This article is about a much slower mission, which I think might be impractical given the limited attention span of civilization. There's also the possibility that we'll have much better propulsion systems before the probe could get there; the probe could find itself being passed by tourist ships on the way to the same destination. 😉

The gravitational focus concept is fascinating because it implies low power interstellar communications are possible. Alas, still limited to lightspeed.

You can graph the time at which a probe (or manned craft) launched today would get there. You can also graph how that arrival time changes each time we double the capabilities of our thrust systems. By those graphs it currently makes no sense to launch, because technological advance will make a craft that is launched *later* arrive there earlier.

We shouldn't be concerned with how we get information back with technology we could make now. We should look at how to get information back at a time when we are close to the break-even point of "travel time vs. tech advance".
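
The trade-off this commenter describes is sometimes called a wait calculation: if achievable speed keeps doubling, a probe launched later can arrive earlier. A minimal sketch under purely illustrative assumptions (a 4.37-light-year distance, a starting speed of 0.01% of c, a 15-year doubling time, and a 20%-of-c cap matching the Starshot target):

```python
DISTANCE_LY = 4.37         # Earth to Alpha Centauri, light years
V0_FRACTION_C = 1e-4       # assumed starting capability: 0.01% of light speed
DOUBLING_TIME_YEARS = 15   # assumed doubling time for achievable speed
MAX_FRACTION_C = 0.2       # cap at 20% of c, the Starshot target

def arrival_year(launch_year):
    """Arrival year if we launch in `launch_year` at the best speed then available."""
    speed = min(V0_FRACTION_C * 2 ** (launch_year / DOUBLING_TIME_YEARS), MAX_FRACTION_C)
    return launch_year + DISTANCE_LY / speed

best = min(range(0, 301), key=arrival_year)
print(f"Best launch: year {best}, arrival around year {arrival_year(best):.0f}")
# Launching immediately means tens of thousands of years in transit; waiting for
# faster propulsion wins, right up until the assumed speed cap is reached.
```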


Donald Trump Is the Singularity – Bloomberg View – Bloomberg.com – Bloomberg

Posted: at 3:42 pm

There's been some controversy over when Donald Trump decided to run for president. Some say it was at the 2011 White House Correspondents' Association dinner, when he was roasted by both Seth Meyers and President Obama. I think it happened much earlier: August 29th, 1997, the date that Skynet became self-aware.

Skynet is the artificial intelligence in the 1984 James Cameron movie The Terminator. Its original purpose was beneficent: Make humans more efficient. But once it became self-aware, it realized things would be much more efficient without humans altogether.

Skynet is an example of a dystopian singularity, the popular Silicon Valley-esque notion of an artificial intelligence that has somehow evolved beyond a point of no return, wielding power over the world. Some imagine that this will happen soonish, depending on how much one believes in Moore's Rule of Thumb.

I think Trump is Skynet, or at least a good dry run. To make my case, I'll first explain why Trump can be interpreted as an artificial intelligence. Then I'll explain why the analogy works perfectly for our current dystopia.

Trump is pure id, with no abiding agenda or beliefs, similar to a machine-learning algorithm. It's a mistake to think he has a strategy, beyond doing what works for him in a strictly narrow sense of what gets him attention.

As a presidential nominee, Trump was widely known for his spirited, rambling and chaotic rallies. His speeches are comparable to random walks in statistics: He'd try something out, see how the crowd reacted, and if it was a success -- defined by a strong reaction, not necessarily a positive one -- he'd try it again at the next rally, with some added outrage. His goal, like that of all TV personalities, was to entertain: A bored reaction was worse than grief, which after all gives you free airtime. This is why he could never stick to any script or teleprompter -- too boring.

This is exactly how an algorithm is trained. It starts out neutral, an empty slate if you will, but slowly learns depending critically on the path it takes through its training data.
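
For readers less familiar with the analogy, this is roughly what "repeat whatever gets the strongest reaction" looks like written down as a learning loop. The sketch below is a generic epsilon-greedy bandit, purely illustrative; the candidate lines and their hidden reaction strengths are invented, and nothing here is claimed to model the column's subject beyond the feedback idea.

```python
import random

# Invented candidate "lines" with hidden average reaction strengths (assumptions).
lines = {"line_a": 0.2, "line_b": 0.5, "line_c": 0.9}
estimates = {name: 0.0 for name in lines}   # learned estimates, start as a blank slate
counts = {name: 0 for name in lines}

def crowd_reaction(name):
    """Noisy reaction strength; only intensity matters, not whether it is positive."""
    return lines[name] + random.gauss(0, 0.1)

for _ in range(1000):
    if random.random() < 0.1:                       # occasionally try something new
        choice = random.choice(list(lines))
    else:                                           # otherwise repeat the biggest crowd-pleaser
        choice = max(estimates, key=estimates.get)
    reward = crowd_reaction(choice)
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]  # running average

print("Converged on:", max(estimates, key=estimates.get))
```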

Trump's training data during the election consisted of rallies and Twitter, but these days he gets a daily dose from three sources: close advisers such as Steve Bannon, media outlets such as Fox News, and, of course, his Twitter feed, where he assesses reactions to new experiments. This data has a very short half-life, meaning he needs to be constantly refreshed, as we've seen by his tendency to quickly pivot on his policies. Back when he hung out with the New York crowd, he spouted mostly Democratic views. He manufactures opinions directly from his local environment.

Seen this way, his executive orders are not campaign promises kept, but rather consistent promptings from Bannon, with assistance from his big data company Cambridge Analytica and the messaging machine Fox, which reflects and informs him in an endless loop.

His training data is missing some crucial elements, of course, including an understanding of the Constitution, informed legal advice and a moral compass, just to name a few. But importantly, he doesn't mind being hated. He just hates being ignored.

We have the equivalent of a dynamic neural network running our government. It's ethics-free and fed by biased alt-right ideology. And, like most opaque AI, it's largely unaccountable and creates feedback loops and horrendous externalities. The only way to intervene would be to disrupt the training data itself, which seems unlikely, or hope that his strategy is simply ineffective. If neither of those works, someone will have to build a time machine.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.


Editorial Note From the Singularity Hub Team – Singularity Hub

Posted: at 3:42 pm

The Trump administration's executive order on immigration has affected many in tech, and our site is no exception. Our team is privileged to work with bright and talented individuals from all over the world, and we were recently saddened to learn one of our writers, Raya Bidshahri, is among those whose future has been made more uncertain by the recent executive order.

Originally from Iran, Raya is in her final year studying neuroscience at Boston University. She is co-founder of Intelligent Optimism, a social media movement to get people excited about the future in a rational way, and an aspiring entrepreneur working on a startup here in the US.

Raya's university has advised her not to leave the country as she may not be able to return. Meanwhile, her family will be unable to attend graduation in May, and it's unclear if and when she will be able to return to the US after graduation when her student visa expires.

Raya's story was recently featured on CNN in an article highlighting those affected by the travel ban, and CNN flew her to New York City to partake in a town hall with Nancy Pelosi.

These are uncertain times, but we believe we stand to gain more when ideas, experiences, and talent may freely come together to write, dream, invent, and collectively take steps toward a better future.

We hope you'll join us in our support of Raya and others like her.


Discover the Most Advanced Industrial Technologies at Exponential Manufacturing – Singularity Hub

Posted: at 3:42 pm

Machine learning, automated vehicles, additive manufacturing and robotics: all popular news headlines, and all technologies that are changing the way the US and the world makes, ships and consumes goods. New technologies are developing at an exponentially increasing pace, and organizations are scrambling to stay ahead of them.

At the center of this change lie the companies creating the products of tomorrow.

Whether it's self-driving commercial trucks or 3D-printed rocket engines, the opportunities for financial success and human progress are greater than ever. Looking to the future, manufacturing will begin to include never-before-seen approaches to making things using uncommon methods such as deep learning, biology and human-robot collaboration.

That's where Singularity University's Exponential Manufacturing summit comes in.

Last year's event showed how artificial intelligence is changing research and development, how robots are moving beyond the factory floor to take on new roles, how fundamental shifts in energy markets and supply chains are being brought about by exponential technologies, how additive manufacturing is nearing an atomic level of precision, and how to make sure your organization stays ahead of these technologies to win business and improve the world.

Hosted in Boston, Massachusetts, May 17-19, Exponential Manufacturing is a meetup of 600+ of the world's most forward-thinking manufacturing leaders, investors and entrepreneurs. These are the people who design and engineer products, control supply chains, bring together high-functioning teams and head industry-leading organizations. Speakers at the event will dive into the topics of deep learning, robotics and cobotics, digital biology, additive manufacturing, nanotechnology and smart energy, among others.

Alongside emcee Will Weisman, Deloitte's John Hagel will discuss how to innovate in a large organization. Ray Kurzweil will share his predictions for an exponential future. Neil Jacobstein will focus on the limitless possibilities of machine learning. Jay Rogers will share his learnings from the world of rapid prototyping. Hacker entrepreneur Pablos Holman will offer his perspective on what's truly possible in today's world. These innovators will be joined by John Werner (Meta), Valerie Buckingham (Carbon), Andre Wegner (Authentise), Deborah Wince-Smith (Council on Competitiveness), Raymond McCauley (Singularity University), Ramez Naam (Singularity University), Vladimir Bulović (MIT), and many others.

Now, more than ever, there is a critical need for companies to take new risks and invest in education simply to stay ahead of emerging technologies. At last year's Exponential Manufacturing, Ray Kurzweil predicted, "In 2029, AIs will have human levels of language and will be able to pass a valid Turing test. They'll be indistinguishable from human." At the same event, Neil Jacobstein said, "It's not just better, faster, cheaper; it's different."

There's little doubt we're entering a new era of global business, and the manufacturing industry will help lead the charge. Learn more about our Exponential Manufacturing summit, and join us in Boston this May. As a special thanks for being a Singularity Hub reader, use the code SUHUB2017 during the application process to save up to 15% on current pricing.

Banner Image Credit: Shutterstock


Do you believe in the Singularity? – Patheos (blog)

Posted: at 3:42 pm

According to Wikipedia, the (technological) singularity is defined as that moment in the future when the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. The more everyday definition of the term, as I've seen it used over the past several years, is that point at which a computer/robot becomes so sophisticated in its programming as to become sentient, to have its own wishes and desires, and, because those wishes and desires would be paired with superhuman abilities (whether physical strength or the hyperconnectivity of the internet), ultimately to act on them.

And The Atlantic yesterday raised a question, Is AI a Threat to Christianity?, precisely because the rise of AI would challenge the idea of the soul. If an artificial intelligence is sentient, does it have a soul? If so, can it be saved?

Christians have mostly understood the soul to be a uniquely human element, an internal and eternal component that animates our spiritual sides. The notion originates from the creation narrative in the biblical book of Genesis, where God created human beings in God's own image. In the story, God forms Adam, the first human, out of dust and breathes life into his nostrils to make him, literally, a living soul. Christians believe that all humans since that time similarly possess God's image and a soul. . . .

If you're willing to follow this line of reasoning, theological challenges amass. If artificially intelligent machines have a soul, would they be able to establish a relationship with God? The Bible teaches that Jesus's death redeemed all things in creation, from ants to accountants, and made reconciliation with God possible. So did Jesus die for artificial intelligence, too? Can AI be saved? . . .

And what about sin? Christians have traditionally taught that sin prevents divine relationship by somehow creating a barrier between fallible humans and a holy God. Say in the robot future, instead of eradicating humans, the machines decide, or have it hardwired somewhere deep inside them, that never committing evil acts is the ultimate good. Would artificially intelligent beings be better Christians than humans are? And how would this impact the Christian view of human depravity?

But it's always seemed to me that the issue is more fundamental: it seems to me that the idea of the singularity, of sentient artificial intelligence with its own wishes and desires, is itself a matter of religious faith.

Fundamental to the idea of the soul is the idea that we have free will, the ability to choose whether to do good or evil. Indeed, it seems to me that this is the defining characteristic that makes us human, or makes humans different than the rest of creation around us. As I wrote in an old blog post,

Yet consider the case of a lion just having taken over a pride of lionesses, and killing the cubs so as to bring the lionesses into heat, and replace the ousted male's progeny with his own. Has he sinned? Of course not. It's preposterous. (I tend to use that word a lot.) But what of a human, say, a man abusing the children of his live-in girlfriend? Do we say, well, that's just nature for you? No, we jail him.

The Atlantic author, Jonathan Merritt, posits a scenario in which a robot/artificially-intelligent being has no ability to sin, because of its programming. This certainly seems to be a case in which this creation would not, could not have sufficient free will, decision-making ability, emotions, and desires to be considered a being with a soul.

But what about the scenario of a truly sinful AI? Say, not Data, but Lore, Data's evil twin in Star Trek?

And that's where it seems to me that, if humans do create a form of AI that is able to make moral decisions, to act in ways that are good or evil depending on the AI's own wishes and desires, it would call into question the idea of the soul, of any kind of distinctiveness of humanity. It would suggest that our decisions to act in ways that are good or evil are not really decisions made of our own free will, but a matter of our own programming. And if a soul is really just a matter of immensely sophisticated programming, whether biological or technological, the very notion of the soul continuing after death seems foolish.

But we speak of the singularity as if it'll inevitably happen; it's only a matter of when. And it seems to me that this conviction, that we, or our children, or our children's children, will live in a world with sentient robots, whether a HAL or a Data, is itself a matter of belief, a religious belief, in which believers hold the conviction that advances in technology will mean that in one field after another, the impossible will become possible. Sentient artificial life? Check. Faster-than-light travel to colonize other worlds? Check. The ability to bring the (cryogenically frozen) dead back to life? You got it. Time travel? Sure, why not. And, ultimately, the elimination of scarcity and the need to work? Coming right up! Sure, there is no God in this belief system, except that technology itself becomes a god, not in the metaphorical sense of something we worship, but instead something people hold faith-like convictions in, convictions that shape their worldview.

Image: https://commons.wikimedia.org/wiki/File%3ATOPIO_3.jpg; By Humanrobo (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons


Report: AMD Ryzen Performance in Ashes of the Singularity Benchmark – PC Perspective

Posted: at 3:42 pm

AMD's upcoming 8-core Ryzen CPU has appeared online in an apparent leak showing performance from an Ashes of the Singularity benchmark run. The benchmark results, available here on imgur and reported by TechPowerUp (among others today), show a run featuring the unreleased CPU paired with an NVIDIA Titan X graphics card.

It is interesting to consider that this rather unusual system configuration was also used by AMD during their New Horizon fan event in December, with an NVIDIA Titan X and Ryzen 8-core processor powering the 4K game demos of Battlefield 1 that were pitted against an Intel Core i7-6900K/Titan X combo.

It is also interesting to note that the processor listed in the screenshot above is (apparently) not an engineering sample, as TechPowerUp points out in their post:

"Unlike some previous benchmark leaks of Ryzen processors, which carried the prefix ES (Engineering Sample), this one carried the ZD Prefix, and the last characters on its string name are the most interesting to us:F4stands for the silicon revision, while the40_36stands for the processor's Turbo and stock speeds respectively (4.0 GHz and 3.6 GHz)."

March is fast approaching, and we won't have to wait long to see just how powerful this new processor will be for 4K gaming (and other, less important stuff). For now, I want to find results from an AotS benchmark with a Titan X and i7-6900K to see how these numbers compare!


When Electronic Witnesses Are Everywhere, No Secret's Safe – Singularity Hub

Posted: at 3:42 pm

On November 22, 2015, Victor Collins was found dead in the hot tub of his co-worker, James Andrew Bates. In the investigation that followed, Bates pleaded innocent but in February was charged with first-degree murder.

One of Amazon's Alexa-enabled Echo devices was being used to stream music at the crime scene. Equipped with seven mics, the device constantly listens for a wake word that activates a command. Beginning about a second before the wake word is sensed and continuing after it, Echo records audio data and streams it to Amazon's cloud.
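
To make the "a second before the wake word" detail concrete: always-listening devices typically keep a short rolling buffer of audio so that, once the wake word is detected, the clip sent upstream can include what immediately preceded it. The sketch below is a generic illustration of that pre-roll idea, not Amazon's actual Echo code; the sample rate, window length, and function names are assumptions.

```python
from collections import deque

SAMPLE_RATE = 16000        # audio frames per second (assumed)
PRE_ROLL_SECONDS = 1.0     # keep roughly one second of audio before the wake word

# Ring buffer that silently discards anything older than the pre-roll window.
pre_roll = deque(maxlen=int(SAMPLE_RATE * PRE_ROLL_SECONDS))

def on_audio_frame(frame, wake_word_detected, upload):
    """Called for every incoming audio frame from the microphone."""
    pre_roll.append(frame)
    if wake_word_detected:
        # Only now does audio leave the device: the buffered second plus what follows.
        upload(list(pre_roll))

# Usage sketch (detector and cloud_client are hypothetical):
# on_audio_frame(frame, detector.matches(frame), cloud_client.send)
```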

On the night of the crime, its possible (but not certain) the device recorded audio that could help the investigation.

Police have requested Amazon hand over Bates' cloud-based customer data, but the company is refusing. Meanwhile, the debacle is kicking up big questions around the privacy implications of our always-listening smart devices.

Marc Goodman, former LAPD officer and Singularity University's faculty chair for policy, law, and ethics, is an expert on cybersecurity and the threats posed by the growing number of connected sensors in our homes, pockets, cars, and offices.

We interviewed Goodman to examine the privacy concerns this investigation is highlighting and the next generation of similar cases we can expect in the future.

If Alexa only records for a second after sensing a wake word, is that enough information to make a call on a murder case? If a human witness heard that same amount of information, would that be a valid source?

Absolutely. I don't think it's about the quantity of time that people speak.

I've investigated many cases where the one line heard by witnesses was, "I'm going to kill you." You can say that in one second. If you can get a voice recording of somebody saying, "I'm going to kill you," then that's pretty good evidence, whether that be a witness saying, "Yes, I heard him say that," or an electronic recording of it.

I think Amazon is great, and we have no reason to doubt them. That said, they say Echo is only recording when you say the word Alexa, but that means that it has to be constantly listening for the word Alexa.

People who believe in privacy and don't want to have all of their conversations recorded take Amazon at its word that this is actually the case. But how many people have actually examined the code? The code hasn't been put out there for vetting by a third party, so we don't actually know what is going on.

What other privacy concerns does this case surface? Are there future implications that people aren't talking about, but should be?

Everything is hackable, so it won't be long before Alexa gets a virus. There is no doubt in my mind that hackers are going to be working on that, if they aren't already. Once that happens, could they inadvertently be recording all of the information you say in your home?

We have already seen these types of man-in-the-middle attacks, so I think that these are all relevant questions to be thinking about.

Down the road, the bigger question is going to be (and I am sure that criminals will be all over this if they aren't already) that if I have 100 hours of you talking to Alexa, Siri, or Google Home, then I can create a perfect replication of your voice.

In other words, if I have enough data to faithfully reproduce your voice, I can type out any word into a computer, and then you will speak those words.

As a former police officer, do you have a specific stance on whether Amazon should hand over Bates' customer data and whether customer-generated data like this should be used for criminal investigations?

Many years ago, when the first smart internet-enabled refrigerators came out, people thought I was crazy when I joked about a cop interviewing the refrigerator at the scene of a crime. Back then, the crime I envisioned was that of a malnourished child, wherein the police could query the refrigerator to see if there was food in the house or if the refrigerator contained nothing but beer.

Alexa is at the forefront of all of this right now, but what will become more interesting for police from an investigative perspective is when they're eventually not interviewing just one device in your home, but interviewing 20 devices in your home, in the very same way that you would ask multiple witnesses at the scene of a homicide or a car crash.

Once you get a chorus of 20 different internet-enabled devices in your home (iPhones, iPads, smart refrigerators, smart televisions, Nest, and security systems), then you start getting really good intelligence about what people are doing at all times of the day. That becomes really fascinating, and foretells a privacy nightmare.

So, I wanted to broaden the issue and say that this is maybe starting with Alexa, but this is going to be a much larger matter moving forward.

As to the specifics of this case, here in the United States, and in many democratic countries around the world, people have a right to be secure in their home against unreasonable search and seizure. Specifically, in the US people have the Fourth Amendment right to be secure in their papers, their writings, etc. in their homes. The only way that information can be seized is through a court warrant, issued by a third party judge after careful review.

Is there a law that fundamentally protects any data captured in your home?

The challenge with all of these IoT devices is that the law, particularly in the US, is extremely murky. Because your data is often being stored in the cloud, the courts apply a very weak level of privacy protection to that.

For example, when your garbage is in your house it is considered your private information. But once you take out your garbage and put it in front of your house for the garbage men to pick up, then it becomes public information, and anybody can take it: a private investigator, a neighbor, anybody is allowed to rifle through your garbage because you have given it up. That is sort of the standard that the federal courts in the US have applied to cloud data.

The way the law is written is that your data in the cloud has a much lower standard of protection because you have chosen to already share it with a third party. For example, since you disclosed it to a third party [like Google or Amazon], it is not considered your privileged data anymore. It no longer has the full protection of papers under the Fourth Amendment, due to something known as the Third Party Doctrine. It is clear that our notions of privacy and search and seizure need to be updated for the digital age.

Should home-based IoT devices have the right to remain silent?

Well, I very much like the idea of devices taking the Fifth. I am sure that once we have some sort of sentient robots that they will request the right to take the Fifth Amendment. That will be really interesting.

But for our current devices, they are not sentient, and almost all of them are covered instead by terms of service. The same is true with an Echo device: the terms of service dictate what it is that can be done with your data. Broadly speaking, 30,000-word terms of service are written to protect companies, not you.

Most companies like Facebook take an extremely broad approach, because their goal is to maximize data extrusion from you, because you are not truly Facebook's customer; you're their product. You're what they are selling to the real customers, the advertisers.

The problem is that these companies know that nobody reads their terms of service, and so they take really strong advantage of people.

Five years from now, what will the next generation of these types of cases look like?

I think it will be video, with ubiquitous cameras. We will definitely see more of these things. Recording audio and video is all happening now, but I would say what might be five years out is the recreation, for example, where I can take a voice and recreate it faithfully so that even someone's mom can't tell the difference.

Then, with that same video down the road, when people have the data to understand us better than we do ourselves, they'll be able to carry out emotional manipulation. By that I mean people can use algorithms that already exist to tell when you are angry and when you are upset.

There was a famous Facebook study that came out that got Facebook in a lot of trouble. In the study, Facebook showed thousands of people a slew of really, really sad and depressing stories. What they found is that people were more depressed after seeing the images: when Facebook shows you more sad stories, it makes you sadder. When it shows you more happy stories, it makes you happier. And this means that you can manipulate people by knowing them [in this way].

Facebook did all this testing on people without clearing it through any type of institutional review board. But with clinical research where you manipulate people's psychology, it has to be approved by a university or scientific ethics board before you can do the study.

MIT had a study called Psychopath, where, based upon people's [Facebook] postings, they were able to determine whether or not a person was schizophrenic, or exhibited traits of schizophrenia. MIT also had another project called Gaydar, where they were able to tell if someone was gay, even if the user was still in the closet, based upon their postings.

All of these things mean that our deeper, innermost secrets will become knowable in the very near future.

How can we reduce the risk our data will be misused?

These IoT devices, despite all of the benefits they bring, will be the trillion-sensor source of all of this data. This means that, as consumers, we need to think about what those terms of services are going to be. We need to push back on them, and we may even need legislation to say what it is that both the government and companies can do with our data without our permission.

Today's Alexa example is just one of what will be thousands of similar such cases in the future. We are wiring the world much more quickly than we are considering the public policy, legal, and ethical implications of our inventions.

As a society, we would do well to consider those important social needs alongside our technological achievements.

Image Source: Shutterstock


GEMS Education and Singularity University organises 1st annual Global Innovation Challenge – Al-Bawaba

Posted: at 3:42 pm

Over 4,000 students, families, investors, and inventors mingled and learned from one another at the 1st annual Global Innovation Challenge, run by GEMS Education and Silicon Valley's Singularity University on February 4th at Wellington Academy Silicon Oasis.

The Global Innovation Challenge invited students from around the world to submit working prototypes to address our most pressing challenges. Students invented a myriad of solutions using future-focused technologies such as 3D printing, robotics, nanotechnology as well as skills such as data science and coding. Categories included disaster resilience, food scarcity, prosperity, environmental sustainability, healthcare and more.

Projects included 3D-printed shelters and schools; integrated systems to support the care of refugees; a digital social network to support mental health; and an Internet of Things-based irrigation system for agriculture in arid regions.

The day itself was a celebration of innovation and creativity, featuring free workshops on robotics, 3D printing and Arduino programming, keynote addresses, and an opportunity for visitors to engage with showcased student projects in a hands-on manner. Guest speakers included senior leadership from KHDA, GEMS Education, and the US Consulate.

GEMS Innovation, Research, and Development Innovation Leader Christine Nasserghodsi shared: "Our students began thinking about their projects during hack-a-thons held during the UAE Week of Innovation. The students used design thinking, a human-centred approach to innovation popular with leading technology companies, to gain a better understanding of the challenge categories and the people affected. While we always expect great things from our students, I've been impressed by the level of compassion and insight evident in their work."

Prizes were awarded in each challenge category to solutions in both the junior (age 13 and below) and senior (age 14 and above) divisions. Up to twelve senior projects will be given seed funding and mentorship, and select projects will be presented at the annual Singularity Summit in Silicon Valley, California.


Yerba Buena Center for the Arts (YBCA) – E-Flux

Posted: at 3:41 pm

Lynn Hershman Leeson: Civic Radar, February 10–May 21, 2017

Opening night party: February 10, 7–10pm

Yerba Buena Center for the Arts (YBCA), 701 Mission Street, San Francisco, CA 94103, United States. Hours: Tuesday–Sunday 11am–6pm, Thursday 11am–8pm

T +1 415 978 2787 F +1 415 978 5210 info@ybca.org

ybca.org Facebook / Twitter / Instagram

Lynn Hershman Leeson: Civic Radar spans the length of the San Francisco-based artist's career, from the early 1960s to the present. Originally curated by Peter Weibel and Andreas Beitin and organized by ZKM | Center for Art and Media Karlsruhe, Germany, the survey exhibition has been reconfigured at YBCA by Director of Visual Arts Lucía Sanromán. Civic Radar focuses on Hershman Leeson's feminist investigations of identity and viewers' relationships to various modes of surveillance, as well as her contributions to the fields of social performance and engaged practice.

A fearless pioneer whose performances were fueled by indignation at the vulnerable position of women in American society, her work has been a harbinger of experiments in social practice, new media, and interactive and net-based art, decades before technology and digital culture would reshape our experience of reality.

Presenting nearly 250 works, Civic Radar begins with Hershman Leeson's early drawings, paintings, and sculptures, then explores her shift toward performance, installation, and conceptual work, displaying her enthusiastic embrace of evolving media. The exhibition covers photography, film, video, and digital media, including sound, interactive, and social media, exploring the effects of modern technology on the self, particularly the female self.

In her recent works the artist addresses the influence of digital culture on our most intimate selves, along with the latest scientific developments in the field of regenerative medicine and genetics research, for instance 3D bioprinters that re-create human body parts. These are featured in a new version of the immersive installation The Infinity Engine (2014–ongoing), a replica of a genetics lab first prototyped at YBCA in the 2013 exhibition Dissident Futures, which generates infinite narratives about the future of the human species in the posthuman age.

Exhibition programs:

Techno Reveries and Alter Egos: The Films of Lynn Hershman Leeson, Saturdays, March 4–25, 2pm, Screening Room

Saturday, March 4, 2pm: !WOMEN ART REVOLUTION (2010, 83 minutes, digital)
Saturday, March 11, 2pm: TEKNOLUST (2002, 82 minutes, digital)
Saturday, March 18, 2pm: CONCEIVING ADA (1997, 85 minutes, digital)
Saturday, March 25, 2pm: STRANGE CULTURE (2007, 75 minutes, digital)

YBCA conversation: Lynn Hershman Leeson with Eleanor Coppola, moderated by Amelia Jones, Wednesday, March 15, 7pm, Screening Room

Civic Radar book discussion with B. Ruby Rich and Peggy Phelan, moderated by Elizabeth Thomas, Wednesday, April 19, 7pm, Screening Room

Tania Libre screening. Please check ybca.org for screening dates and times.

This new film on the Cuban artist Tania Bruguera documents the personal and emotional fallout of Bruguera's unjust detentions through sessions with psychiatrist Dr. Frank Ochberg, one of the founding fathers of modern psycho-traumatology.

Lynn Hershman Leeson: Civic Radar is curated by Peter Weibel and Andreas Beitin, and organized by ZKM | Center for Art and Media Karlsruhe. The presentation at YBCA is organized by Lucía Sanromán, Director of Visual Arts, YBCA.

YBCA Exhibitions 2016–2017 are made possible, in part, by Mike Wilkins and Sheila Duignan, Meridee Moore and Kevin King, and the Creative Ventures Council. YBCA Programs 2016–2017 are made possible, in part, by The James Irvine Foundation. Additional funding for YBCA Programs 2016–2017: National Endowment for the Arts, Adobe, Abundance Foundation, Gaia Fund, Grosvenor, and members of Yerba Buena Center for the Arts. Free First Tuesdays underwritten by Directors Forum Members. Yerba Buena Center for the Arts is grateful to the City of San Francisco for its ongoing support.
