Ready to Change the World? Apply Now for Singularity University’s 2017 Global Solutions Program – Singularity Hub

I'm putting out a call for brilliant entrepreneurs who want to enroll in Singularity University's Global Solutions Program (GSP).

The GSP is where you'll learn about exponentially growing technology, dig into humanity's Global Grand Challenges (GGCs) and then start a new company, product or service with the goal of positively impacting 1 billion people within 10 years.

We call this a ten-to-the-ninth (10^9) company.

This post is about who should apply, how to apply and the over $1.5 million in scholarships being provided by Google for entrepreneurs.

SU's GSP program runs from June 17, 2017 until August 19, 2017.

Applications are due: February 21, 2017.

Eight years ago, Ray Kurzweil and I cofounded Singularity University to search the world for the most brilliant, world-class problem-solvers, to bring them together, and to give them the resources to create companies that impact billions.

The GSP is an intensive 24/7 experience at the SU campus at the NASA Research Center in Mountain View, CA, in the heart of Silicon Valley.

During the nine-week program, 90 entrepreneurs, engineers, scientists, lawyers, doctors and innovators from around the world learn from our expert faculty about infinite computing, AI, robotics, 3D printing, networks/sensors, synthetic biology, entrepreneurship, and more, and focus on building and developing companies to solve the global grand challenges (GGCs).

GSP participants form teams to develop unique solutions to GGCs, with the intent to form a company that, as I mentioned above, will positively impact the lives of a billion people in 10 years or less.

Over the course of the summer, participants listen to and interact with top Silicon Valley executive guest speakers, tour facilities like Google X, and spend hours getting their hands dirty in our highly advanced maker workshop.

At the end of the summer, the best of these startups will be asked to join SU Labs, where they will receive additional funding and support to take the company to the next level.

I am pleased to announce that thanks to a wonderful partnership with Google, all successful applicants will be fully subsidized by Google to participate in the program.

In other words, if accepted into the program, the GSP is free.

The Global Solutions Program (GSP) is SU's flagship program for innovators from a wide diversity of backgrounds, geographies, perspectives, and expertise. At GSP, you'll get the mindset, tools, and network to help you create moonshot innovations that will positively transform the future of humanity. If you're looking to create solutions to help billions of people, we can help you do just that.

Key program dates: the program runs June 17 to August 19, 2017, and applications are due February 21, 2017.

This program will be unlike any we've ever done, and unlike any you've ever seen.

If you feel like you meet the criteria, apply now (click here).

Applications close February 21st, 2017.

If you know of a friend or colleague who would be a good fit for this program, please share this post with them and ask that they fill out an application.

Original post:

Ready to Change the World? Apply Now for Singularity University's 2017 Global Solutions Program - Singularity Hub

How Robots Helped Create 100,000 Jobs at Amazon – Singularity Hub

Accelerating technology has been creating a lot of worry over job loss to automation, especially as machines become capable of doing things they never could in the past. A recent report released by the McKinsey Global Institute estimated that 49 percent of job activities could currently be fully automated, which equates to 1.1 billion workers globally.

What gets less buzz is the other side of the coin: automation helping to create jobs. Believe it or not, it does happen, and we can look at one of the world's largest retailers to see that.

Thanks in part to more robots in its fulfillment centers, Amazon has been able to drive down shipping costs and pass those savings on to customers. Cheaper shipping made more people use Amazon, and the company hired more workers to meet this increased demand.

So what do the robots do, and what do the people do?

Tasks involving fine motor skills, judgment, or unpredictability are handled by people. They stock warehouse shelves with items that come off delivery trucks. A robot could do this, except that to maximize shelf space, employees are instructed to stack items according to how they fit on the shelf rather than grouping them by type.

Robots can only operate in a controlled environment, performing regular and predictable tasks. They've largely taken over heavy lifting, including moving pallets between shelves (good news for warehouse workers' backs), as well as shuttling goods from one end of a warehouse to another.

With current technology, building robots able to stock shelves based on available space would be more costly and less practical than hiring people to do it.

Similarly, for outgoing orders, robots do the lifting and transportation, but not the selecting or packing. A robot brings an entire shelf of goods to an employee's workstation, where the employee selects the correct item and puts it on a conveyor belt for another employee to package. By this time, the shelf-carrying robot is already returning the first shelf and retrieving another.

Since loading trucks also requires spatial judgment and can be unpredictable (space must be maximized here even more than on shelves), people take care of this too.

Ever since acquiring Boston-based robotics company Kiva Systems in March 2012 at a price tag of $775 million, Amazon has been ramping up its use of robots and is continuing to pour funds into automation research, both for robots and delivery drones.

In 2016 the company grew its robot workforce by 50 percent, from 30,000 to 45,000. Far from laying off 15,000 people, though, Amazon increased human employment by around 50 percent in the same period of time.

Even better, the company's Q4 2016 earnings report included the announcement that it plans to create more than 100,000 new full-time, full-benefit jobs in the US over the next 18 months. New jobs will be based across the country and will include various types of experience, education, and skill levels.

So how tight is the link between robots and increased productivity? Would there be even more jobs if people were doing the robots' work?

Well, picture an employee walking (or even running) around a massive warehouse, locating the right shelf, climbing a ladder to reach the item he's looking for, grabbing it, climbing back down the ladder (carefully, of course), and walking back to his workstation to package it for shipping. Now multiply the time that whole process took by the hundreds of thousands of packages shipped from Amazon warehouses each day.

Lots more time. Lots less speed. Fewer packages shipped. Higher costs. Lower earnings. No growth.
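To see why this matters at scale, here is a rough back-of-envelope sketch in Python. The numbers are purely illustrative assumptions of ours, not Amazon figures; only the order of magnitude is the point.

```python
# Back-of-envelope sketch with made-up, illustrative numbers (not Amazon data):
# compare a human walking the warehouse for each pick with a robot bringing
# the shelf to a stationary picker.
walk_and_climb_seconds = 120      # hypothetical: walk to shelf, climb, return
robot_assisted_seconds = 15       # hypothetical: shelf arrives at the workstation
packages_per_day = 500_000        # hypothetical daily volume across warehouses

hours_saved = packages_per_day * (walk_and_climb_seconds - robot_assisted_seconds) / 3600
print(f"~{hours_saved:,.0f} labor-hours saved per day under these assumptions")
```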

Though it may not last forever, right now Amazon's robot-to-human balance is clearly in employees' favor. Automation can take jobs away, but sometimes it can create them too.

Image Credit: Tabletmonkeys/YouTube

Read more from the original source:

How Robots Helped Create 100,000 Jobs at Amazon - Singularity Hub

Physicists Unveil Blueprint for a Quantum Computer the Size of a Soccer Field – Singularity Hub

Quantum computers promise to crack some of the world's most intractable problems by super-charging processing power. But the technical challenges involved in building these machines mean they've still achieved just a fraction of what they are theoretically capable of.

Now physicists from the UK have created a blueprint for a soccer-field-sized machine they say could reach the blistering speeds that would allow it to solve problems beyond the reach of today's most powerful supercomputers.

The system is based on a modular design interlinking multiple independent quantum computing units, which could be scaled up to almost any size. Modular approaches have been suggested before, but innovations such as a far simpler control system and inter-module connection speeds 100,000 times faster than the state-of-the-art make this the first practical proposal for a large-scale quantum computer.

"For many years, people said that it was completely impossible to construct an actual quantum computer. With our work we have not only shown that it can be done, but now we are delivering a nuts and bolts construction plan to build an actual large-scale machine," Professor Winfried Hensinger, head of the Ion Quantum Technology Group at the University of Sussex, who led the research, said in a press release.

The technology at the heart of the individual modules is already well-established and relies on trapping ions (charged atoms) in magnetic fields to act as qubits, the basic units of information in quantum computers.

While bits in conventional computers can have a value of either 1 or 0, qubits take advantage of the quantum mechanical phenomenon of superposition, which allows them to be both at the same time.

As Elizabeth Gibney explains in Nature, this is what makes quantum computers so incredibly fast. The set of qubits comprising the memory of a quantum computer could exist in every possible combination of 1s and 0s at once. Where a classical computer has to try each combination in turn, a quantum computer could process all those combinations simultaneously.
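To make that scaling concrete, here is a minimal Python sketch (our illustration, not anything from the research described here): an n-qubit register is described by 2^n amplitudes, one per classical combination of 1s and 0s, so each added qubit doubles the number of combinations held in superposition.

```python
import numpy as np

# Toy state vector for an n-qubit register: 2**n complex amplitudes, one per
# classical bit combination. A uniform superposition weights them all equally.
n = 10
dim = 2 ** n                                         # 1,024 basis states for 10 qubits
state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)

probabilities = np.abs(state) ** 2
print(f"{n} qubits span {dim:,} basis states")
print("total probability:", round(float(probabilities.sum()), 6))  # ~1.0
```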

In a paper published in the journal Science Advances last week, researchers outline designs for modules containing roughly 2,500 qubits and suggest interlinking thousands of them together to create a machine containing two billion qubits. For comparison, Canadian firm D-Wave, the only commercial producer of quantum computers, just brought out its latest model featuring 2,000 qubits.

This is not the first time a modular system like this has been suggested, but previous approaches have recommended using light waves traveling through fiber optics to link the units. This results in interaction rates between modules far slower than the quantum operations happening within them, putting a handbrake on the system's overall speed. In the new design, the ions themselves are shuttled from one module to another using electrical fields, which results in 100,000 times faster connection speeds.

The system also has a much simpler way of controlling qubits. Previous designs required lasers to be carefully targeted at each ion, an enormous engineering challenge when dealing with billions of qubits. Instead, the new system uses microwave fields and the careful application of voltages, which is much easier to scale up.

The researchers concede there are still considerable technical challenges to building a device on the scale they have suggested, not to mention the cost. But they have already announced plans to build a prototype based on the design at the university at a cost of £1-2 million.

"While this proposal is incredibly challenging, I wish more in the quantum community would think big like this," Christopher Monroe, a physicist at the University of Maryland who has worked on trapped-ion quantum computing, told Nature.

In their paper, the researchers predict their two billion qubit system could find the prime factors of a 617-digit-long number in 110 days. This is significant because many state-of-the-art encryption systems rely on the fact that factoring large numbers can take conventional computers thousands of years. This is why many in the cybersecurity world are nervous about the advent of quantum computing.
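For a sense of why such factoring is out of reach for classical machines, here is a small, purely illustrative Python sketch. It assumes naive trial division (real attacks use much better algorithms, such as the number field sieve, but those still scale badly), so the specific figures are ours; only the trend matters.

```python
import math

def log10_trials(digits: int) -> float:
    """log10 of worst-case trial divisions (~sqrt(N)) for a 'digits'-digit number."""
    return digits / 2

for digits in (20, 40, 617):
    lt = log10_trials(digits)
    # assume a generous 10^12 trial divisions per second
    log_years = lt - 12 - math.log10(3600 * 24 * 365)
    print(f"{digits:>3}-digit number: ~10^{lt:.0f} trials, ~10^{log_years:.0f} years")
```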

These researchers aren't the only ones working on bringing quantum computing into the real world, though. Google, Microsoft and IBM are all developing their own systems, and D-Wave recently open-sourced a software tool that helps those without a background in quantum physics program its machines.

All that interest is due to the enormous potential of quantum computing to solve problems as diverse and complex as developing drugs for previously incurable diseases, devising new breeds of materials for high-performance superconductors, magnets and batteries, and even turbo-charging machine learning and artificial intelligence.

"The availability of a universal quantum computer may have a fundamental impact on society as a whole, said Hensinger. Without doubt it is still challenging to build a large-scale machine, but now is the time to translate academic excellence into actual application, building on the UK's strengths in this ground-breaking technology.

Image Credit: University of Sussex/YouTube

View post:

Physicists Unveil Blueprint for a Quantum Computer the Size of a Soccer Field - Singularity Hub

Robot Cars Can Teach Themselves How to Drive in Virtual Worlds – Singularity Hub

Over the holidays, I went for a drive with a Tesla. With, not in, because the car was doing the driving.

Hearing about autonomous vehicles is one thing; experiencing it was something entirely different. When the parked Model S calmly drove itself out of the garage, I stood gaping in awe, completely mind-blown.

If this year's Consumer Electronics Show is any indication, self-driving cars are zooming into our lives, fast and furious. Aspects of automation are already in use: Tesla's Autopilot, for example, allows cars to control steering, braking and switching lanes. Elon Musk, CEO of Tesla, has gone so far as to pledge that by 2018, you will be able to summon your car from across the country, and it'll drive itself to you.

So far, the track record for autonomous vehicles has been fairly impressive. According to a report from the National Highway Traffic Safety Administration, Tesla's crash rate dropped by about 40 percent after its first-generation Autopilot system was turned on. This week, with the introduction of gen two to newer cars equipped with the necessary hardware, Musk is aiming to cut the number of accidents by another whopping 50 percent.

But when self-driving cars mess up, we take note. Last year, a Tesla vehicle slammed into a white truck while Autopilot was engaged, apparently confusing it with the bright, white sky, resulting in the company's first fatality.

So think about this: would you entrust your life to a robotic machine?

For anyone to even start contemplating yes, the cars have to be remarkably safe: fully competent in day-to-day driving, and able to handle any emergency traffic throws their way.

Unfortunately, those edge cases also happen to be the hardest problems to solve.

To interact with the world, autonomous cars are equipped with a myriad of sensors. Google's button-nosed Waymo car, for example, relies on GPS to broadly map out its surroundings, then further captures details using its cameras, radar and laser sensors.

These data are then fed into software that figures out what actions to take next.

As with any kind of learning, the more scenarios the software is exposed to, the better the self-driving car learns.

Getting that data is a two-step process: first, the car has to drive thousands of hours to record its surroundings, which are used as raw data to build 3D maps. That's why Google has been steadily taking its cars out on field trips (some two million miles to date), with engineers babysitting the robocars to flag interesting data and potentially take over if needed.

This is followed by thousands of hours of labeling, that is, manually annotating the maps to point out roads, vehicles, pedestrians and other subjects. Only then can researchers feed the dataset, so-called labeled data, into the software for it to start learning the basics of a traffic scene.

The strategy works, but it's agonizingly slow and tedious, and the amount of experience the cars get is limited. Since emergencies tend to fall into the category of unusual and unexpected, it may take millions of miles before the car encounters dangerous edge cases to test its software, and of course doing so puts both car and human at risk.

An alternative, increasingly popular approach is to bring the world to the car.

Recently, Princeton researchers Ari Seff and Jianxiong Xiao realized that instead of manually collecting maps, they could tap into a readily available repertoire of open-sourced 3D maps such as Google Street View and OpenStreetMap. Although these maps are messy and in some cases can have bizarre distortions, they offer a vast amount of raw data that could be used to construct datasets for training autonomous vehicles.

Manually labeling that data is out of the question, so the team built a system that can automatically extract road features, for example, how many lanes there are, whether there's a bike lane, what the speed limit is and whether the road is a one-way street.

Using a powerful technique called deep learning, the team trained their AI on 150,000 Street View panoramas, until it could confidently discard artifacts and correctly label any given street attribute. The AI performed so well that it matched humans on a variety of labeling tasks, but at much faster speed.

"The automated labeling pipeline introduced here requires no human intervention, allowing it to scale with these large-scale databases and maps," concluded the authors.
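For readers curious what such a labeling model looks like in code, here is a minimal sketch using PyTorch (our illustration, not the Princeton system): a tiny convolutional network that maps a street-level image crop to one hypothetical road attribute, a lane-count class. The real pipeline handled many attributes and trained on roughly 150,000 panoramas.

```python
import torch
import torch.nn as nn

class LaneCountNet(nn.Module):
    """Toy classifier: image crop -> one of 4 hypothetical lane-count classes."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = LaneCountNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One dummy training step on random tensors standing in for labeled panorama crops.
images = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 4, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("toy training-step loss:", float(loss))
```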

With further improvement, the system could take over the labor-intensive job of labeling data. In turn, more data means more learning for autonomous cars and potentially much faster progress.

"This would be a big win for self-driving technology," says Dr. John Leonard, a professor specializing in mapping and automated driving at MIT.

Other researchers are eschewing the real world altogether, instead turning to hyper-realistic gaming worlds such as Grand Theft Auto V.

For those not in the know, GTA V lets gamers drive around the convoluted roads of a city roughly one-fifth the size of Los Angeles. It's an incredibly rich world: the game boasts 257 types of vehicles and 7 types of bikes that are all based on real-world models. The game also simulates half a dozen kinds of weather conditions, in all giving players access to a huge range of scenarios.

It's a total data jackpot. And researchers are noticing.

In a study published in mid-2016, Intel Labs teamed up with German engineers to explore the possibility of mining GTA V for labeled data. By looking at any road scene in the game, their system learned to classify different objects in the road (cars, pedestrians, sidewalks and so on), thus generating huge amounts of labeled data that can then be fed to self-driving cars.

Of course, datasets extracted from games may not necessarily reflect the real world. So a team from the University of Michigan trained two algorithms to detect vehicles, one using data from GTA V, the other using real-world images, and pitted them against each other.

The result? The game-trained algorithm performed just as well as the one trained with real-life images, although it needed about 100 times more training data to reach the performance of the real-world algorithm. That's not a problem, since generating images in games is quick and easy.

But it's not just about datasets. GTA V and other hyper-realistic virtual worlds also allow engineers to test their cars in uncommon but highly dangerous scenarios that they may one day encounter.

In virtual worlds, AIs can tackle a variety of traffic hazards (sliding on ice, hitting a wall, avoiding a deer) without worry. And if the cars learn how to deal with these edge cases in simulations, they may have a higher chance of surviving one in real life.

So far, none of the above systems have been tested on physical self-driving cars.

But with the race towards full autonomy moving at breakneck speed, it's easy to see companies incorporating these systems to give themselves an edge.

Perhaps more significant is that these virtual worlds represent a subtle shift towards the democratization of self-driving technology. Most of them are open-source, in that anyone can hop onboard to create and test their own AI solutions for autonomous cars.

And who knows, maybe the next big step towards full autonomy won't be made inside Tesla, Waymo, or any other tech giant.

It could come from that smart kid next door.

Image Credit: Shutterstock

See the original post here:

Robot Cars Can Teach Themselves How to Drive in Virtual Worlds - Singularity Hub

AMD 8-core Ryzen benchmark show up on Ashes Of The Singularity … – VR-Zone

A recent entry in the Ashes of the Singularity benchmarks database highlights an interesting CPU: an 8-core AMD Ryzen with Hyper-Threading enabled. The listing reveals that the benchmark was carried out using a GeForce Titan X, with the Ryzen CPU clocked at 4.0GHz and a base clock of 3.6GHz.

Considering the test was done using a Titan X, it is likely it was carried out by an AIB partner or a reviewer testing out Ryzen. This is the first time we're seeing a Ryzen part with a ZD prefix, indicating that it could be the final retail unit. Previous leaks have highlighted the ES (engineering sample) prefix.

The benchmark itself is very interesting, as it sees the Ryzen CPU beating out Intel's Core i7-5960X at 4K. While we don't have more details about Ryzen right now, earlier leaks suggest we'll see a total of 17 SKUs in the Ryzen series, ranging from the 8-core flagships with R7 branding to 6-core offerings in the R5 series and quad-core products that will be branded as R3.

The flagship CPU will likely be called the AMD R7 1800X, and it is said to offer a base clock of 3.6GHz and a turbo boost of 4.0GHz, much like the CPU in the Ashes of the Singularity benchmark. The R7 1800X is said to directly challenge Intel's $1,000 Core i7-6900K.

Continued here:

AMD 8-core Ryzen benchmark show up on Ashes Of The Singularity ... - VR-Zone

Wearable Devices Can Actually Tell When You’re About to Get Sick – Singularity Hub

Feeling run down? Have a case of the sniffles? Maybe you should have paid more attention to your smartwatch.

No, that's not the pitch line for a new commercial peddling wearable technology, though no doubt a few companies will be interested in the latest research published in PLOS Biology for the next advertising campaign. It turns out that some of the data logged by our personal tracking devices regarding health (heart rate, skin temperature, even oxygen saturation) appear useful for detecting the onset of illness.

"We think we can pick up the earliest stages when people get sick," says Michael Snyder, a professor and chair of genetics at Stanford University and senior author of the study, "Digital Health: Tracking Physiomes and Activity Using Wearable Biosensors Reveals Useful Health-Related Information."

Snyder said his team was surprised that the wearables were so effective in detecting the start of the flu, or even Lyme disease, but in hindsight the results make sense: Wearables that track different parameters such as heart rate continuously monitor each vital sign, producing a dense set of data against which aberrations stand out even in the least sensitive wearables.

"[Wearables are] pretty powerful because they're a continuous measurement of these things," notes Snyder during an interview with Singularity Hub.

The researchers collected data for up to 24 months on a small study group, which included Snyder himself. Known as Participant #1 in the paper, Snyder benefited from the study when the wearable devices detected marked changes in his heart rate and skin temperature from his normal baseline. A test about two weeks later confirmed he had contracted Lyme disease.

In fact, during the nearly two years while he was monitored, the wearables detected 11 periods with elevated heart rate, corresponding to each instance of illness Snyder experienced during that time. It also detected anomalies on four occasions when Snyder was not feeling ill.

An expert in genomics, Snyder said his team was interested in looking at the effectiveness of wearables technology to detect illness as part of a broader interest in personalized medicine.

"Everybody's baseline is different, and these devices are very good at characterizing individual baselines," Snyder says. "I think medicine is going to go from reactive, measuring people after they get sick, to proactive: predicting these risks."
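A minimal sketch of this kind of personal-baseline check, written by us for illustration rather than taken from the Stanford study: flag days whose resting heart rate sits well above a person's own rolling baseline.

```python
import statistics

def flag_elevated(daily_resting_hr, baseline_days=30, z_threshold=2.0):
    """Return (day_index, z_score) for days far above the rolling personal baseline."""
    flagged = []
    for i in range(baseline_days, len(daily_resting_hr)):
        window = daily_resting_hr[i - baseline_days:i]
        mean = statistics.mean(window)
        sd = statistics.stdev(window) or 1.0          # guard against a perfectly flat baseline
        z = (daily_resting_hr[i] - mean) / sd
        if z > z_threshold:
            flagged.append((i, round(z, 2)))
    return flagged

# Hypothetical data: a steady ~60 bpm baseline followed by a feverish stretch.
heart_rate = [60 + (i % 3) for i in range(40)] + [72, 75, 74, 61, 60, 62]
print(flag_elevated(heart_rate))
```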

"That's essentially what genomics is all about: trying to catch disease early," he notes. "I think these devices are set up for that," Snyder says.

The cost savings could be substantial if a better preventive strategy for healthcare can be found. A landmark report in 2012 from the Cochrane Collaboration, an international group of medical researchers, analyzed 14 large trials with more than 182,000 people. The findings: Routine checkups are basically a waste of time. They did little to lower the risk of serious illness or premature death. A news story in Reuters estimated that the US spends about $8 billion a year on annual physicals.

The study also found that wearables have the potential to detect individuals at risk for Type 2 diabetes. Snyder and his co-authors argue that biosensors could be developed to detect variations in heart rate patterns, which tend to differ for those experiencing insulin resistance.

Finally, the researchers also noted that wearables capable of tracking blood oxygenation provided additional insights into physiological changes caused by flying. While a drop in blood oxygenation during flight due to changes in cabin pressure is a well-known medical fact, the wearables recorded a drop in levels during most of the flight, which was not known before. The paper also suggested that lower oxygen in the blood is associated with feelings of fatigue.

Speaking while en route to the airport for yet another fatigue-causing flight, Snyder is still tracking his vital signs today. He hopes to continue the project by improving on the software his team originally developed to detect deviations from baseline health and sense when people are becoming sick.

In addition, Snyder says his lab plans to make the software work on all smart wearable devices, and eventually develop an app for users.

"I think [wearables] will be the wave of the future for collecting a lot of health-related information. It's a very inexpensive way to get very dense data about your health that you can't get in other ways," he says. "I do see a world where you go to the doctor and they've downloaded your data. They'll be able to see if you've been exercising, for example."

"It will be very complementary to how healthcare currently works."

Image Credit: Shutterstock

Follow this link:

Wearable Devices Can Actually Tell When You're About to Get Sick - Singularity Hub

10th Letter looks at nature in the time of the Singularity – Creative Loafing Atlanta

On Feb. 6, Jeremi Johnson, aka 10th Letter, dropped an unannounced new album, titled Nature In Singularity. The recording shifts 10th Letter's gears a bit by delving into a more abstract wash of ambient samples and electronic soundscapes than anything Johnson has previously released. As the title suggests, the album is a conceptual offering that examines nature in the time of the Singularity, a flash point in human evolution when behavior and civilization's rules become governed by advanced technology in ways that are not yet comprehensible.

The audio and video halves of Nature In Singularity give a glimpse into a day in the life of an artificially intelligent being taking a meditative stroll through various terrestrial terrains, happy that humans are no longer around to destroy the environment.

Nature In Singularity debuted live in a performance at Tech Square Labs on Jan. 28, during an evening of music and arts dedicated to exploring themes around the context of Singularity. Johnson was tasked with tackling nature. The material was initially intended for a one-off performance, but the theme and the imagery weighed heavy on his mind. "Technology is in a place where some really crazy and really scary things are happening," Johnson says. "We're living in a time when human intelligence is under assault. Journalism is under assault. Facts are under assault. Technology has progressed so much that I don't think we can turn back. We're at the event horizon for the Singularity, and this is how it all begins."

Nature In Singularity will be released as a cassette, and possibly as a DVD later this year. In the meantime, Johnson is wrapping up work on an album with Saira Raza, titled Bhadda Saya, which should arrive in late February or early March.

10th Letter plays Mammal Gallery on Thurs., Feb. 9. With CJ Boyd, Danny Bailey and Rasheeda Ali, and Dux. $5. 9 p.m. 91 Broad St. S.W. http://www.mammalgallery.com.

Read more from the original source:

10th Letter looks at nature in the time of the Singularity - Creative Loafing Atlanta

Donald Trump Is the Singularity – Bloomberg View – Bloomberg.com – Bloomberg

There's been some controversy over when Donald Trump decided to run for president. Some say it was at the 2011 White House Correspondents' Association dinner, when he was roasted by both Seth Meyers and President Obama. I think it happened much earlier: August 29th, 1997, the date that Skynet became self-aware.

Skynet is the artificial intelligence in the 1984 James Cameron movie The Terminator. Its original purpose was beneficent: Make humans more efficient. But once it became self-aware, it realized things would be much more efficient without humans altogether.

Skynet is an example of a dystopian singularity, the popular Silicon Valley-esque notion of an artificial intelligence that has somehow evolved beyond a point of no return, wielding power over the world. Some imagine that this will happen soonish, depending on how much one believes in Moore's Rule of Thumb.

I think Trump is Skynet, or at least a good dry run. To make my case, I'll first explain why Trump can be interpreted as an artificial intelligence. Then I'll explain why the analogy works perfectly for our current dystopia.

Trump is pure id, with no abiding agenda or beliefs, similar to a machine-learning algorithm. It's a mistake to think he has a strategy, beyond doing what works for him in a strictly narrow sense of what gets him attention.

As a presidential nominee, Trump was widely known for his spirited, rambling and chaotic rallies. His speeches are comparable to random walks in statistics: He'd try something out, see how the crowd reacted, and if it was a success -- defined by a strong reaction, not necessarily a positive one -- he'd try it again at the next rally, with some added outrage. His goal, like all TV personalities, was to entertain: A bored reaction was worse than grief, which after all gives you free airtime. This is why he could never stick to any script or teleprompter -- too boring.

This is exactly how an algorithm is trained. It starts out neutral, an empty slate if you will, but slowly learns depending critically on the path it takes through its training data.
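To make the analogy concrete, here is a minimal sketch of that kind of reward-driven trial and error, a simple epsilon-greedy bandit written by us for illustration: try a line of material, score the crowd's reaction, and keep repeating whatever scores best. The "lines" and reaction scores are entirely hypothetical.

```python
import random

lines = ["line_a", "line_b", "line_c"]              # hypothetical bits of material
value = {line: 0.0 for line in lines}               # running estimate of crowd reaction
count = {line: 0 for line in lines}
true_reaction = {"line_a": 0.9, "line_b": 0.6, "line_c": 0.1}  # unknown to the learner

random.seed(0)
for rally in range(1000):
    if random.random() < 0.1:                       # occasionally experiment
        line = random.choice(lines)
    else:                                           # otherwise repeat what works
        line = max(lines, key=value.get)
    reward = random.gauss(true_reaction[line], 0.2) # noisy crowd reaction
    count[line] += 1
    value[line] += (reward - value[line]) / count[line]

print("learned preference:", max(lines, key=value.get))
```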

Trump's training data during the election consisted of rallies and Twitter, but these days he gets a daily dose from three sources: close advisers such as Steve Bannon, media outlets such as Fox News, and, of course, his Twitter feed, where he assesses reactions to new experiments. This data has a very short half-life, meaning he needs to be constantly refreshed, as we've seen by his tendency to quickly pivot on his policies. Back when he hung out with the New York crowd, he spouted mostly Democratic views. He manufactures opinions directly from his local environment.

Seen this way, his executive orders are not campaign promises kept, but rather consistent promptings from Bannon, with assistance from his big data company Cambridge Analytica and the messaging machine Fox, which reflects and informs him in an endless loop.

His training data is missing some crucial elements, of course, including an understanding of the Constitution, informed legal advice and a moral compass, just to name a few. But importantly, he doesn't mind being hated. He just hates being ignored.

We have the equivalent of a dynamic neural network running our government. It's ethics-free and fed by biased alt-right ideology. And, like most opaque AI, it's largely unaccountable and creates feedback loops and horrendous externalities. The only way to intervene would be to disrupt the training data itself, which seems unlikely, or hope that his strategy is simply ineffective. If neither of those works, someone will have to build a time machine.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

To contact the author of this story: Cathy O'Neil at cathy.oneil@gmail.com

To contact the editor responsible for this story: Mark Whitehouse at mwhitehouse1@bloomberg.net

Visit link:

Donald Trump Is the Singularity - Bloomberg View - Bloomberg.com - Bloomberg

Discover the Most Advanced Industrial Technologies at Exponential Manufacturing – Singularity Hub

Machine learning, automated vehicles, additive manufacturing and robotics: all popular news headlines, and all technologies that are changing the way the US and the world makes, ships and consumes goods. New technologies are developing at an exponentially increasing pace, and organizations are scrambling to stay ahead of them.

At the center of this change lie the companies creating the products of tomorrow.

Whether it's self-driving commercial trucks or 3D-printed rocket engines, the opportunities for financial success and human progress are greater than ever. Looking to the future, manufacturing will begin to include never-before-seen approaches to making things using uncommon methods such as deep learning, biology and human-robot collaboration.

That's where Singularity University's Exponential Manufacturing summit comes in.

Last year's event showed how artificial intelligence is changing research and development, how robots are moving beyond the factory floor to take on new roles, how fundamental shifts in energy markets and supply chains are being brought about by exponential technologies, how additive manufacturing is nearing an atomic level of precision, and how to make sure your organization stays ahead of these technologies to win business and improve the world.

Hosted in Boston, Massachusetts, May 17-19, Exponential Manufacturing is a meetup of 600+ of the world's most forward-thinking manufacturing leaders, investors and entrepreneurs. These are the people who design and engineer products, control supply chains, bring together high-functioning teams and head industry-leading organizations. Speakers at the event will dive into the topics of deep learning, robotics and cobotics, digital biology, additive manufacturing, nanotechnology and smart energy, among others.

Alongside emcee Will Weisman, Deloitte's John Hagel will discuss how to innovate in a large organization. Ray Kurzweil will share his predictions for an exponential future. Neil Jacobstein will focus on the limitless possibilities of machine learning. Jay Rogers will share his learnings from the world of rapid prototyping. Hacker entrepreneur Pablos Holman will offer his perspective on what's truly possible in today's world. These innovators will be joined by John Werner (Meta), Valerie Buckingham (Carbon), Andre Wegner (Authentise), Deborah Wince-Smith (Council on Competitiveness), Raymond McCauley (Singularity University), Ramez Naam (Singularity University), Vladimir Bulović (MIT), and many others.

Now, more than ever, there is a critical need for companies to take new risks and invest in education simply to stay ahead of emerging technologies. At last year's Exponential Manufacturing, Ray Kurzweil predicted, "In 2029, AIs will have human levels of language and will be able to pass a valid Turing test. They'll be indistinguishable from human." At the same event, Neil Jacobstein said, "It's not just better, faster, cheaper; it's different."

There's little doubt we're entering a new era of global business, and the manufacturing industry will help lead the charge. Learn more about our Exponential Manufacturing summit, and join us in Boston this May. As a special thanks for being a Singularity Hub reader, use the code SUHUB2017 during the application process to save up to 15% on current pricing.

Banner Image Credit: Shutterstock

View original post here:

Discover the Most Advanced Industrial Technologies at Exponential Manufacturing - Singularity Hub

Editorial Note From the Singularity Hub Team – Singularity Hub

The Trump administration's executive order on immigration has affected many in tech, and our site is no exception. Our team is privileged to work with bright and talented individuals from all over the world, and we were recently saddened to learn one of our writers, Raya Bidshahri, is among those whose future has been made more uncertain by the recent executive order.

Originally from Iran, Raya is in her final year studying neuroscience at Boston University. She is co-founder of Intelligent Optimism, a social media movement to get people excited about the future in a rational way, and an aspiring entrepreneur working on a startup here in the US.

Raya's university has advised her not to leave the country as she may not be able to return. Meanwhile, her family will be unable to attend graduation in May, and it's unclear if and when she will be able to return to the US after graduation when her student visa expires.

Raya's story was recently featured on CNN in an article highlighting those affected by the travel ban, and CNN flew her to New York City to partake in a town hall with Nancy Pelosi.

These are uncertain times, but we believe we stand to gain more when ideas, experiences, and talent may freely come together to write, dream, invent, and collectively take steps toward a better future.

We hope youll join us in our support of Raya and others like her.

More here:

Editorial Note From the Singularity Hub Team - Singularity Hub

Report: AMD Ryzen Performance in Ashes of the Singularity Benchmark – PC Perspective

AMD's upcoming 8-core Ryzen CPU has appeared online in an apparent leak showing performance from an Ashes of the Singularity benchmark run. The benchmark results, available on imgur and reported by TechPowerUp (among others today), show the result of a run featuring the unreleased CPU paired with an NVIDIA Titan X graphics card.

It is interesting to consider that this rather unusual system configuration was also used by AMD during their New Horizon fan event in December, with an NVIDIA Titan X and Ryzen 8-core processor powering the 4K game demos of Battlefield 1 that were pitted against an Intel Core i7-6900K/Titan X combo.

It is also interesting to note that the processor listed in the screenshot above is (apparently) not an engineering sample, as TechPowerUp points out in their post:

"Unlike some previous benchmark leaks of Ryzen processors, which carried the prefix ES (Engineering Sample), this one carried the ZD Prefix, and the last characters on its string name are the most interesting to us:F4stands for the silicon revision, while the40_36stands for the processor's Turbo and stock speeds respectively (4.0 GHz and 3.6 GHz)."

March is fast approaching, and we won't have to wait long to see just how powerful this new processor will be for 4K gaming (and other, less important stuff). For now, I want to find results from an AotS benchmark with a Titan X and i7-6900K to see how these numbers compare!

More:

Report: AMD Ryzen Performance in Ashes of the Singularity Benchmark - PC Perspective

Do you believe in the Singularity? – Patheos (blog)

According to Wikipedia, the (technological) singularity is defined as that moment in the future when the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. The more everyday definition of the term, as I've seen it used over the past several years, is that point at which a computer/robot becomes so sophisticated in its programming as to become sentient, to have its own wishes and desires, and ultimately, because those wishes and desires would be paired with superhuman abilities (whether physical strength or the hyperconnectivity of the internet), to act on them in ways humans cannot control.

And The Atlantic yesterday raised the question "Is AI a Threat to Christianity?" because the rise of AI would challenge the idea of the soul. If an artificial intelligence is sentient, does it have a soul? If so, can it be saved?

Christians have mostly understood the soul to be a uniquely human element, an internal and eternal component that animates our spiritual sides. The notion originates from the creation narrative in the biblical book of Genesis, where God created human beings in God's own image. In the story, God forms Adam, the first human, out of dust and breathes life into his nostrils to make him, literally, a living soul. Christians believe that all humans since that time similarly possess God's image and a soul. . . .

If you're willing to follow this line of reasoning, theological challenges amass. If artificially intelligent machines have a soul, would they be able to establish a relationship with God? The Bible teaches that Jesus's death redeemed all things in creation (from ants to accountants) and made reconciliation with God possible. So did Jesus die for artificial intelligence, too? Can AI be saved? . . .

And what about sin? Christians have traditionally taught that sin prevents divine relationship by somehow creating a barrier between fallible humans and a holy God. Say in the robot future, instead of eradicating humans, the machines decide, or have it hardwired somewhere deep inside them, that never committing evil acts is the ultimate good. Would artificially intelligent beings be better Christians than humans are? And how would this impact the Christian view of human depravity?

But it's always seemed to me that the issue is more fundamental: the idea of the singularity, of sentient artificial intelligence with its own wishes and desires, is itself a matter of religious faith.

Fundamental to the idea of the soul is the idea that we have free will, the ability to choose whether to do good or evil. Indeed, it seems to me that this is the defining characteristic that makes us human, or makes humans different than the rest of creation around us. As I wrote in an old blog post,

Yet consider the case of a lion just having taken over a pride of lionesses, and killing the cubs so as to bring the lionesses into heat, and replace the ousted male's progeny with his own. Has he sinned? Of course not. It's preposterous. (I tend to use that word a lot.) But what of a human, say, a man abusing the children of his live-in girlfriend? Do we say, well, that's just nature for you? No, we jail him.

The Atlantic author, Jonathan Merritt, posits a scenario in which a robot/artificially-intelligent being has no ability to sin, because of its programming. This certainly seems to be a case in which this creation would not, could not have sufficient free will, decision-making ability, emotions, and desires to be considered a being with a soul.

But what about the scenario of a truly sinful AI? Say, not Data, but Lore, Data's evil twin in Star Trek?

And that's where it seems to me that, if humans do create a form of AI that is able to make moral decisions, to act in ways that are good or evil, depending on the AI's own wishes and desires, it would call into question the idea of the soul, of any kind of distinctiveness of humanity. It would suggest that our decisions to act in ways that are good or evil are not really decisions made of our own free will, but a matter of our own programming. And if a soul is really just a matter of immensely sophisticated programming, whether biological or technological, the very notion of the soul continuing after death seems foolish.

But we speak of the singularity as if it'll inevitably happen; it's only a matter of when. And it seems to me that this conviction, that we, or our children, or our children's children, will live in a world with sentient robots, whether a HAL or a Data, is itself a matter of belief, a religious belief, in which believers hold the conviction that advances in technology will mean that in one field after another, the impossible will become possible. Sentient artificial life? Check. Faster-than-light travel to colonize other worlds? Check. The ability to bring the (cryogenically-frozen) dead back to life? You got it. Time travel? Sure, why not. And, ultimately, the elimination of scarcity and the need to work? Coming right up! Sure, there is no God in this belief system, except that technology itself becomes a god, not in the metaphorical sense of something we worship, but instead something people hold faith-like convictions in, convictions that shape their worldview.

Image: https://commons.wikimedia.org/wiki/File%3ATOPIO_3.jpg; By Humanrobo (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons

More:

Do you believe in the Singularity? - Patheos (blog)

When Electronic Witnesses Are Everywhere, No Secret’s Safe – Singularity Hub

On November 22, 2015, Victor Collins was found dead in the hot tub of his co-worker, James Andrew Bates. In the investigation that followed, Bates pleaded innocent but in February was charged with first-degree murder.

One of Amazon's Alexa-enabled Echo devices was being used to stream music at the crime scene. Equipped with seven mics, the device is constantly listening for a wake word to activate a command. Beginning just a second before a wake word is sensed, Echo records audio data and streams it to Amazon's cloud.
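How can a device include audio from just before the wake word? The usual trick is a short rolling "pre-roll" buffer that is only uploaded once the wake word fires. Here is a minimal sketch of that pattern, our illustration rather than Amazon's actual implementation; the sample rate, buffer lengths, and helper functions are all assumptions.

```python
from collections import deque

SAMPLE_RATE = 16_000                 # assumed samples per second
PRE_ROLL_SECONDS = 1.0
POST_ROLL_SECONDS = 1.0

pre_roll = deque(maxlen=int(SAMPLE_RATE * PRE_ROLL_SECONDS))

def detected_wake_word(chunk) -> bool:
    """Stand-in for a real keyword-spotting model."""
    return False                     # hypothetical; always quiet in this sketch

def stream_to_cloud(samples):
    """Stand-in for the upload step."""
    print(f"uploading {len(samples)} samples")

def process_audio(chunks):
    """chunks: an iterable of short lists of microphone samples."""
    capture, remaining = [], 0
    for chunk in chunks:
        if remaining > 0:                            # still capturing after a wake word
            capture.extend(chunk)
            remaining -= len(chunk)
            if remaining <= 0:
                stream_to_cloud(capture)
                capture = []
        elif detected_wake_word(chunk):
            capture = list(pre_roll) + list(chunk)   # include the second *before* the word
            remaining = int(SAMPLE_RATE * POST_ROLL_SECONDS)
        pre_roll.extend(chunk)                       # always maintain the rolling buffer
```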

On the night of the crime, it's possible (but not certain) the device recorded audio that could help the investigation.

Police have requested Amazon hand over Bates' cloud-based customer data, but the company is refusing. Meanwhile, the debacle is kicking up big questions around the privacy implications of our always-listening smart devices.

Marc Goodman, former LAPD officer and Singularity University's faculty chair for policy, law, and ethics, is an expert on cybersecurity and the threats posed by the growing number of connected sensors in our homes, pockets, cars, and offices.

We interviewed Goodman to examine the privacy concerns this investigation is highlighting and the next generation of similar cases we can expect in the future.

If Alexa only records for a second after sensing a wake word, is that enough information to make a call on a murder case? If a human witness heard that same amount of information, would that be a valid source?

Absolutely. I don't think it's about the quantity of time that people speak.

I've investigated many cases where the one line heard by witnesses was, "I'm going to kill you." You can say that in one second. If you can get a voice recording of somebody saying, "I'm going to kill you," then that's pretty good evidence, whether that be a witness saying, "Yes, I heard him say that," or an electronic recording of it.

I think Amazon is great, and we have no reason to doubt them. That said, they say Echo is only recording when you say the word Alexa, but that means that it has to be constantly listening for the word Alexa.

People who believe in privacy and don't want all of their conversations recorded take Amazon at its word that that is actually the case. But how many people have actually examined the code? The code hasn't been put out there for vetting by a third party, so we don't actually know what is going on.

What other privacy concerns does this case surface? Are there future implications that people aren't talking about, but should be?

Everything is hackable, so it won't be long before Alexa gets a virus. There is no doubt in my mind that hackers are going to be working on thatif they aren't already. Once that happens, could they inadvertently be recording all of the information you say in your home?

We have already seen these types of man-in-the-middle attacks, so I think that these are all relevant questions to be thinking about.

Down the road the bigger question is going to be (and I am sure that criminals will be all over this if they aren't already): if I have 100 hours of you talking to Alexa, Siri, or Google Home, then I can create a perfect replication of your voice.

In other words, if I have enough data to faithfully reproduce your voice, I can type out any word into a computer, and then you will speak those words.

As a former police officer, do you have a specific stance on whether Amazon should hand over Bates' customer data and whether customer-generated data like this should be used for criminal investigations?

Many years ago when the first smart internet-enabled refrigerators came out, people thought I was crazy when I joked about a cop interviewing the refrigerator at the scene of a crime. Back then, the crime I envisioned was that of a malnourished child wherein the police could query the refrigerator to see if there was food in the house or if the refrigerator contained nothing but beer.

Alexa is at the forefront of all of this right now, but what will become more interesting for police from an investigative perspective is when they're eventually not interviewing just one device in your home, but interviewing 20 devices in your home. In the very same way that you would ask multiple witnesses at the scene of a homicide or a car crash.

Once you get a chorus of 20 different internet-enabled devices in your home (iPhones, iPads, smart refrigerators, smart televisions, Nest, and security systems), then you start getting really good intelligence about what people are doing at all times of the day. That becomes really fascinating, and foretells a privacy nightmare.

So, I wanted to broaden the issue and say that this is maybe starting with Alexa, but this is going to be a much larger matter moving forward.

As to the specifics of this case, here in the United States, and in many democratic countries around the world, people have a right to be secure in their home against unreasonable search and seizure. Specifically, in the US people have the Fourth Amendment right to be secure in their papers, their writings, etc. in their homes. The only way that information can be seized is through a court warrant, issued by a third party judge after careful review.

Is there a law that fundamentally protects any data captured in your home?

The challenge with all of these IoT devices is that the law, particularly in the US, is extremely murky. Because your data is often being stored in the cloud, the courts apply a very weak level of privacy protection to that.

For example, when your garbage is in your house it is considered your private information. But once you take out your garbage and put it in front of your house for the garbage men to pick up, then it becomes public information, and anybody can take it: a private investigator, a neighbor, anybody is allowed to rifle through your garbage because you have given it up. That is sort of the standard that the federal courts in the US have applied to cloud data.

The way the law is written is that your data in the cloud has a much lower standard of protection because you have chosen to already share it with a third party. For example, since you disclosed it to a third party [like Google or Amazon], it is not considered your privileged data anymore. It no longer has the full protection of papers under the Fourth Amendment, due to something known as the Third Party Doctrine. It is clear that our notions of privacy and search and seizure need to be updated for the digital age.

Should home-based IoT devices have the right to remain silent?

Well, I very much like the idea of devices taking the Fifth. I am sure that once we have some sort of sentient robots that they will request the right to take the Fifth Amendment. That will be really interesting.

But for our current devices, they are not sentient, and almost all of them are covered instead by terms of service. The same is true with an Echo device: the terms of service dictate what it is that can be done with your data. Broadly speaking, 30,000-word terms of service are written to protect companies, not you.

Most companies like Facebook take an extremely broad approach, because their goal is to maximize data extrusion from you, because you are not truly Facebook's customer; you're their product. You're what they are selling to the real customers, the advertisers.

The problem is that these companies know that nobody reads their terms of service, and so they take really strong advantage of people.

Five years from now, what will the next generation of these types of cases look like?

I think it will be video and with ubiquitous cameras. We will definitely see more of these things. Recording audio and video is all happening now, but I would say what might be five years out is the recreation, for example, where I can take a voice and recreate it faithfully so that even someone's mom can't tell the difference.

Then, with that same video down the road, when people have the data to understand us better than we do ourselves, they'll be able to carry out emotional manipulation. By that I mean people can use algorithms that already exist to tell when you are angry and when you are upset.

There was a famous Facebook study that came out that got Facebook in a lot of trouble. In the study, Facebook showed thousands of people a slew of really, really sad and depressing stories. What they found is that people were more depressed after seeing the images: when Facebook shows you more sad stories, they make you sadder. When they show you more happy stories, they make you happier. And this means that you can manipulate people by knowing them [in this way].

Facebook did all this testing on people without clearing it through any type of institution review board. But with clinical research where you manipulate people's psychology, it has to be approved by a university or scientific ethics board before you can do the study.

MIT had a study called Psychopath, where, based upon people's [Facebook] postings, they were able to determine whether or not a person was schizophrenic, or exhibited traits of schizophrenia. MIT also had another project called Gaydar, where they were able to tell if someone was gay, even if the user was still in the closet, based upon their postings.

All of these things mean that our deeper, innermost secrets will become knowable in the very near future.

How can we reduce the risk our data will be misused?

These IoT devices, despite all of the benefits they bring, will be the trillion-sensor source of all of this data. This means that, as consumers, we need to think about what those terms of services are going to be. We need to push back on them, and we may even need legislation to say what it is that both the government and companies can do with our data without our permission.

Today's Alexa example is just one of what will be thousands of similar such cases in the future. We are wiring the world much more quickly than we are considering the public policy, legal, and ethical implications of our inventions.

As a society, we would do well to consider those important social needs alongside our technological achievements.

Image Source: Shutterstock

Read the original post:

When Electronic Witnesses Are Everywhere, No Secret's Safe - Singularity Hub

What is Singularity (the)? – Definition from WhatIs.com

The Singularity is the hypothetical future creation of superintelligent machines. Superintelligence is defined as a technologically-created cognitive capacity far beyond that possible for humans. Should the Singularity occur, technology will advance beyond our ability to foresee or control its outcomes and the world will be transformed beyond recognition by the application of superintelligence to humans and/or human problems, including poverty, disease and mortality.

Revolutions in genetics, nanotechnology and robotics (GNR) in the first half of the 21st century are expected to lay the foundation for the Singularity. According to Singularity theory, superintelligence will be developed by self-directed computers and will increase exponentially rather than incrementally.
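The contrast between incremental and exponential growth is easy to underestimate; this toy comparison (ours, not from the source) shows how quickly the two diverge.

```python
# Incremental vs. exponential improvement over 30 hypothetical generations.
incremental, exponential = 1.0, 1.0
for generation in range(30):
    incremental += 1.0        # fixed gain each generation
    exponential *= 2.0        # capability doubles each generation
print(f"incremental: {incremental:g}, exponential: {exponential:g}")
```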

Lev Grossman explains the prospective exponential gains in capacity enabled by superintelligent machines in an article in Time:

Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks...

Proposed mechanisms for adding superintelligence to humans include brain-computer interfaces, biological alteration of the brain, artificial intelligence (AI) brain implants and genetic engineering. Post-singularity, humanity and the world would be quite different. A human could potentially scan his consciousness into a computer and live eternally in virtual reality or as a sentient robot. Futurists such as Ray Kurzweil (author of The Singularity is Near) have predicted that in a post-Singularity world, humans would typically live much of the time in virtual reality -- which would be virtually indistinguishable from normal reality. Kurzweil predicts, based on mathematical calculations of exponential technological development, that the Singularity will come to pass by 2045.

Most arguments against the possibility of the Singularity involve doubts that computers can ever become intelligent in the human sense. The human brain and cognitive processes may simply be more complex than a computer could be. Furthermore, because the human brain is analog, with theoretically infinite values for any process, some believe that it cannot ever be replicated in a digital format. Some theorists also point out that the Singularity may not even be desirable from a human perspective because there is no reason to assume that a superintelligence would see value in, for example, the continued existence or well-being of humans.

Science-fiction writer Vernor Vinge first used the term the Singularity in this context in the 1980s, when he used it in reference to the British mathematician I.J. Good's concept of an intelligence explosion brought about by the advent of superintelligent machines. The term is borrowed from physics; in that context a singularity is a point where the known physical laws cease to apply.

See also: Asimov's Three Laws of Robotics, supercomputer, cyborg, gray goo, IBM's Watson supercomputer, neural networks, smart robot

Neil deGrasse Tyson vs. Ray Kurzweil on the Singularity:

This was last updated in February 2016

Read the rest here:

What is Singularity (the)? - Definition from WhatIs.com

Downloads – Singularity Viewer

Please pay attention to the following vital information before using Singularity Viewer.

Singularity Viewer only supports SSE2 compliant CPUs. All computers manufactured 2004 and later should have one.

Warning: RLVa is enabled by default, which permits your attachments to take more extensive control of the avatar than default behavior of other viewers. Foreign, rezzed in-world, non-worn objects can only take control of your avatar if actively permitted by corresponding scripted attachments you wear. Please refer to documentation of your RLV-enabled attachments for details, if you have any.

Singularity Viewer 1.8.7(6861) Setup

Compatible with 64-bit version of Windows Vista, Windows 7, Windows 8 and newer. Known limitation is the lack of support for the Quicktime plugin which means that certain types of parcel media will not play. Streaming music and shared media (MoaP) are not affected and are fully functional.

Compatible with OS X 10.6 and newer, Intel CPU.

Make sure you have 32-bit versions of gstreamer-plugins-base, gstreamer-plugins-ugly and libuuid1 installed. The package has been built on Debian Squeeze and should work on a variety of distributions.

For voice to work, minimal support for running 32-bit binaries is necessary. libasound_module_pcm_pulse.so may be needed. Possible package names: lib32asound2-plugins (squeeze), alsa-plugins-pulseaudio.i686 (fedora), libasound2-plugins:i386 (debian/ubuntu).

If you receive "The following media plugin has failed: media_plugin_webkit" you may need to install the package containing libpangox-1.0.so.0 for your distribution (could be pangox-compat).

To add all the skins, extract this package into the viewer install directory, that's usually C:\Program Files\Singularity on Windows, /Applications/Singularity.app/Contents/Resources/ on Mac, and wherever you extracted the tarball to on Linux. Just merge the extracted skins directory with the existing skins directory, there should be no conflicts.

Read more from the original source:

Downloads - Singularity Viewer