
Category Archives: Singularity

How the World Has Changed From 1917 to 2017 – Singularity Hub

Posted: February 15, 2017 at 9:33 pm

Over the last 100 years, the world has changed tremendously.

For perspective, this year at Abundance 360, I gave a few fun examples of what the world looked like in 1917.

This blog is a look at what the world looked like a century ago and what it looks like today.

Let's dive in.

One hundred years ago, things looked a little bit different.

1. World Literacy Rates

- 1917: The world literacy rate was only 23 percent.

- Today: Depending on estimates, the world literacy rate today is 86.1 percent.

2. Travel Time

- 1917: It took 5 days to get from London to New York; 3.5 months to travel from London to Australia.

- Today: A nonstop flight gets you from London to New York in a little over 8 hours, and you can fly from London to Australia in about a day, with just one stop.

3. Average Price of a US House

- 1917: The average price of a U.S. house was $5,000. ($111,584.29 when adjusted for inflation).

- Today: As of 2010, the average price of a new home sold in the U.S. was $272,900.

4. The First Hamburger

- 1917: The hamburger bun was invented by a fry cook named Walter Anderson, who co-founded White Castle.

- Today: On average, Americans eat three hamburgers a week. That's a national total of nearly 50 billion burgers per year. And now we're even inventing 100 percent plant-based beef burgers produced by Impossible Foods and available at select restaurants.

5. Average Price of a Car in the US

- 1917: The average price of a car in the US was $400 ($8,926.74 when adjusted for inflation).

- Today: The average car price in the US was $34,968 as of January 2017.

6. The First Boeing Aircraft

- 1917: A Boeing aircraft flew for the first time on June 15.

- Today: In 2015, there were almost 24,000 turboprop and regional aircraft, as well as wide body and narrow body jets, in service worldwide.

7. Coca-Cola

- 1917: On July 1, 1916, Coca-Cola introduced its current formula to the market.

- Today: Today, Coca-Cola has a market cap of about $178 billion with 2015 net operating revenues over $44 billion. Each day, over 1.9 billion servings of Coca-Cola drinks are enjoyed in more than 200 countries.

8. Average US Wages

- 1917: The average US hourly wage was 22 cents an hour ($4.90 per hour when adjusted for inflation).

- Today: The average US hourly wage is approximately $26 per hour.

9. Supermarkets

- 1917: The first "super" market, Piggly Wiggly, opened on September 6, 1916 in Memphis, TN.

- Today: In 2015, there were 38,015 supermarkets, employing 3.4 million people and generating sales of about $650 billion.

10. Billionaires

- 1917: John D. Rockefeller became the world's first billionaire on September 29.

- Today: There are approximately 1,810 billionaires, and their aggregate net worth is $6.5 trillion.

For context, Rockefeller's net worth in today's dollars would have been about $340 billion. Bill Gates, the world's richest man, is worth $84 billion today.

11. Telephones (Landlines vs. Cellphones)

- 1917: Only 8 percent of homes had a landline telephone.

- Today: Forget landlines! In the US, nearly 80 percent of the population has a smartphone (a supercomputer in their pockets). Nearly half of all American households now use only cellphones rather than older landlines. And as for cost: today, you can Skype anywhere in the world for free over a WiFi network.

12. Traffic (Horses to Cars)

- 1917: In 1912, traffic counts in New York showed more cars than horses for the first time.

- Today: There were approximately 253 million cars and trucks on US roads in 2015.

13. US Population

- 1917: The US population broke 100 million, and the global population reached 1.9 billion.

- Today: The US population is 320 million, and the global population broke 7.5 billion this year.

14. Inventions and Technology

- 1917: The major tech invention in 1917? The toggle light switch.

- Today: The major tech invention of today? CRISPR/Cas9 gene editing technology, which enables us to reprogram life as we know it. And we are making strides in AI, robotics, sensors, networks, synthetic biology, materials science, space exploration and more every day.

15. High School Graduation Rates

- 1917: Only 6 percent of all Americans had graduated from high school.

- Today: More than 80 percent of American students now graduate from high school.

16. Cost of Bread

- 1917: A loaf of bread was $0.07 ($1.50 when adjusted for inflation).

- Today: A loaf of bread costs $2.37.

17. Speed Limits

- 1917: The maximum speed limit in most cities was 10 mph.

- Today: The maximum speed limit in most cities is about 70 mph.
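
A quick aside on the arithmetic: every "adjusted for inflation" figure above is just the 1917 amount scaled by a price-index ratio. Here is a minimal sketch in Python, deriving the multiplier from the house-price pair in item 3 rather than from official CPI tables (so the bread figure comes out slightly different from the article's rounding):

```python
# Every "adjusted for inflation" number above is value_1917 * multiplier,
# where the multiplier is the ratio of 2017 prices to 1917 prices.
# Derived here from the article's own house figures, not from CPI tables.
MULTIPLIER = 111_584.29 / 5_000  # ~22.32

def adjust_1917_dollars(value_1917: float) -> float:
    """Convert a 1917 dollar amount into approximate 2017 dollars."""
    return value_1917 * MULTIPLIER

print(adjust_1917_dollars(400))   # car: ~8926.74, matching item 5
print(adjust_1917_dollars(0.22))  # wage: ~4.91, matching item 8
print(adjust_1917_dollars(0.07))  # bread: ~1.56 (the article rounds to 1.50)
```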

Just wait for the next 100 years.

Image Credit: Wikimedia Commons


Holograms Aren’t The Stuff of Science Fiction Anymore – Singularity Hub

Posted: at 12:29 am

The world seems to be full of illusions, and we're not talking about fake news from Macedonia.

Holograms appear to be all around us now. Long-dead rapper Tupac Shakur showed up at the 2012 edition of the Coachella music festival. Microsoft's HoloLens seems akin to a wearable version of Star Trek's holodeck, allowing its user to interact with 3D objects in an augmented reality. Startups like Edinburgh-based Holoxica can create digital 3D holograms of human organs for medical visualization purposes.

While some of these light shows are far from mere parlor tricks, none of these efforts are holograms in the sense depicted most famously in movies like Star Wars. True hologram technology is mostly still a science fiction fantasy, but earlier this year scientists revealed innovations to move the technology forward a few light years.

In a study published online in Nature Photonics, a team of researchers in Korea describes a 3D holographic display that they write performs more than 2,600 times better than existing technologies. Meanwhile, researchers led by a team in Australia claimed in the journal Optica to have invented a miniature device that creates the highest-quality holographic images to date. The papers were published within three days of each other last month.

Holography is a broad field, but at its most basic, it is a photographic technique that records the light scattered from an object. The light is then reproduced in a 3D format. Holography was first developed in the 1940s by the Hungarian-British physicist Dennis Gabor, who won the 1971 Nobel Prize in physics for his invention and development of the holographic method.

Most holograms are static images, but scientists are working on more dynamic systems to reproduce the huge amount of information embedded in a 3D image.

Take the work being done by researchers at the Korea Advanced Institute of Science and Technology (KAIST).

Our ability to produce dynamic, high-resolution holograms (think Princess Leia pleading with Obi-Wan Kenobi for the Jedi's help) is currently limited by what are called wavefront modulators. These devices, such as spatial light modulators or digital micromirror devices, can control the direction of light propagation.

An imaging system with a short focal length lens can only create a tiny image that has a wide viewing range. Conversely, a system with a long focal length can generate a larger image but with a very narrow viewing range. The best wavefront modulator technology has only been able to create an image that is one centimeter in size with a viewing angle of three degrees.
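
The tradeoff has a simple geometric core: a modulator's maximum diffraction angle is set by its pixel pitch, while the image extent is set by pixel count times pitch, so their product is pinned by the pixel count alone. A rough sketch of that arithmetic (the wavelength and modulator specs here are illustrative assumptions, not the hardware from the paper):

```python
import math

WAVELENGTH = 532e-9  # metres; a green laser, chosen purely as an example

def diffraction_half_angle(pixel_pitch: float) -> float:
    """Maximum diffraction half-angle of a modulator: sin(theta) = lambda / (2p)."""
    return math.asin(WAVELENGTH / (2 * pixel_pitch))

# A typical HD spatial light modulator: 1920 pixels at 8-micron pitch.
n_pixels, pitch = 1920, 8e-6
theta = diffraction_half_angle(pitch)
image_size_cm = n_pixels * pitch * 100  # aperture, which bounds the image

print(f"viewing angle: {2 * math.degrees(theta):.1f} degrees")  # ~3.8
print(f"image size:    {image_size_cm:.1f} cm")                 # ~1.5

# Halving the pitch doubles the angle but, at a fixed pixel count, halves
# the image: size times angle is set by pixel count alone, which is the
# same ballpark as the ~1 cm / ~3 degree state of the art quoted above.
```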

It's possible to do better by creating a complex and unwieldy system using multiple spatial light modulators, for example. But the team at KAIST came up with a simpler solution.

"This problem can be solved by simply inserting a diffuser," explains YongKeun Park, a professor in the Physics Department at KAIST. Because a diffuser diffuses light, both the image size and viewing angle can be dramatically increased by a factor of a few thousand, according to Park.

But there's still one more problem to overcome: a diffuser scrambles light.

"Thus, in order to utilize a diffuser as a holographic lens, one needs to calibrate the optical characteristics of each diffusor carefully," Park says by email. "For this purpose, we use wavefront-shaping technique, which provides information about the relationship between impinging light onto a diffuser and outgoing light."

Park's team succeeded in producing an enhanced 3D holographic image with a viewing angle of 35 degrees in a volume of two centimeters in length, width, and height.

"The enhancement of the scale, resolution, and viewing angles using our method is readily scalable," he notes. "Since this method can be applicable to any existing wavefront modulator, it can further increase the image quality as a better wavefront modulator comes out in [the] market."

Near-term applications for the technology once it matures include head-up displays for an automobile or holographic projections of a smartphone's user interface, Park says. "[Holograms] will bring new experiences for us to get information from electronics devices, and they can be realized with a fewer number of pixels than 3D holographic display."

For true tech heads, physicist and science writer Chris Lee, writing for Ars Technica, provides an in-depth description of how the KAIST system works.

Meanwhile, physicists from the Australian National University unveiled a device consisting of millions of tiny silicon pillars, each up to 500 times thinner than a human hair. The transparent material is capable of complex manipulations of light, they write.

"Our ability to structure materials at the nanoscale level allows the device to achieve new optical properties that go beyond the properties of natural materials, says Sergey Kruk, co-lead on the research, in a press release from the university. The holograms that we made demonstrate the strong potential of this technology to be used in a range of applications."

The researchers say they were inspired by films such as Star Wars. "We are working under the same physical principles that once inspired science fiction writers," Kruk says in a video interview.

Kruk says the new material could someday replace bulkier and heavier lenses and prisms used in other applications.

"With our new material, we can create components with the same functionality but that would be essentially flat and lightweight," he says. "This brings so many applications, starting from further shrinking down cameras in consumer smartphones, all the way up to space technologies by reducing the size and weight of complex optical systems of satellites."

Speaking of space exploration: What if the entire universe is a hologram? What does that mean for pseudo-holograms of Tupac Shakur? Not to mention the rest of us still-living 3D beings?

Theoretical physicists believe they have observed evidence supporting a relatively new theory in cosmology that says the known universe is the projection of a 2D reality. First floated in the 1990s, the idea is similar to that of ordinary holograms in which a 3D image is encoded in a 2D surface, such as in the hologram on a credit card.

Supporters of the theory argue that it can reconcile the two big theories in cosmology. Einstein's theory of general relativity explains almost everything large scale in the universe. Quantum physics is better at explaining the small stuff: atoms and subatomic particles. The findings for a holographic universe were published in the journal Physical Review Letters.

The team used data gleaned from instruments capable of studying the cosmic microwave background. The CMB, as it's known, is the afterglow of the Big Bang from nearly 14 billion years ago. You've seen evidence of the CMB if you've ever noticed the white noise created on an untuned television.

The study found that some of the simplest quantum field theories could explain nearly all cosmological observations of the early universe. The work could reportedly lead to a functioning theory of quantum gravity, merging quantum mechanics with Einsteins theory of gravity.

"The key to understanding quantum gravity is understanding field theory in one lower dimension," says lead author Niayesh Afshordi, professor of physics and astronomy at the University of Waterloo, in a press release. "Holography is like a Rosetta Stone, translating between known theories of quantum fields without gravity and the uncharted territory of quantum gravity itself."

Heavy stuff no matter what dimension you come from.

Image Credit: Shutterstock


Families Finally Hear From Completely Paralyzed Patients Via New Mind-Reading Device – Singularity Hub

Posted: February 13, 2017 at 9:37 am

Wendy was barely 20 years old when she received a devastating diagnosis: juvenile amyotrophic lateral sclerosis (ALS), an aggressive neurodegenerative disorder that destroys motor neurons in the brain and the spinal cord.*

Within half a year, Wendy was completely paralyzed. At 21 years old, she had to be artificially ventilated and fed through a tube placed into her stomach. Even more horrifyingly, as paralysis gradually swept through her body, Wendy realized that she was rapidly being robbed of ways to reach out to the world.

Initially, Wendy was able to communicate to her loved ones by moving her eyes. But as the disease progressed, even voluntary eye twitches were taken from her. In 2015, a mere three years after her diagnosis, Wendy completely lost the ability to communicate: she was utterly, irreversibly trapped inside her own mind.

Complete locked-in syndrome is the stuff of nightmares. Patients in this state remain fully conscious and cognitively sharp, but are unable to move or signal to the outside world that they're mentally present. The consequences can be dire: when doctors mistake locked-in patients for comatose and decide to pull the plug, there's nothing the patients can do to intervene.

Now, thanks to a new system developed by an international team of European researchers, Wendy and others like her may finally have a rudimentary link to the outside world. The system, a portable brain-machine interface, translates brain activity into simple yes or no answers to questions with around 70 percent accuracy.

That may not seem like enough, but the system represents the first sliver of hope that we may one day be able to reopen reliable communication channels with these patients.

Four people were tested in the study, with some locked-in for as long as seven years. In just 10 days, the patients were able to reliably use the system to finally tell their loved ones not to worry: they're generally happy.

The results, though imperfect, came as an enormous relief to their families, says study leader Dr. Niels Birbaumer at the University of Tübingen. The study was published this week in the journal PLOS Biology.

Robbed of words and other routes of contact, locked-in patients have always turned to technology for communication.

Perhaps the most famous example is physicist Stephen Hawking, who became partially locked-in due to ALS. Hawking's workaround is a speech synthesizer that he operates by twitching his cheek muscles. Jean-Dominique Bauby, an editor of the French fashion magazine Elle who became locked-in after a massive stroke, wrote an entire memoir by blinking his left eye to select letters from the alphabet.

Recently, the rapid development of brain-machine interfaces has given paralyzed patients increasing access to the world, not just the physical one but also the digital universe.

These devices read brain waves directly through electrodes implanted into the patient's brain, decode the pattern of activity, and correlate it to a command, say, moving a computer cursor left or right on a screen. The technology is so reliable that paralyzed patients can even use an off-the-shelf tablet to Google things, using only the power of their minds.

But all of the above workarounds require one critical factor: the patient has to have control of at least one muscle; often, this is a cheek or an eyelid. People like Wendy who are completely locked-in are unable to control similar brain-machine interfaces. This is especially perplexing since these systems don't require voluntary muscle movements, because they read directly from the mind.

The unexpected failure of brain-machine interfaces for completely locked-in patients has been a major stumbling block for the field. Though the explanation is speculative, Birbaumer believes it may be because, over time, the brain becomes less efficient at transforming thoughts into actions.

"Anything you want, everything you wish does not occur. So what the brain learns is that intention has no sense anymore," he says.

In the new study, Birbaumer overhauled common brain-machine interface designs to get the brain back on board.

First off was how the system reads brain waves. Generally, this is done through EEG, which measures certain electrical activity patterns of the brain. Unfortunately, the usual solution was a no-go.

"We worked for more than 10 years with neuroelectric activity [EEG] without getting into contact with these completely paralyzed people," says Birbaumer.

It may be because the electrodes have to be implanted to produce a more accurate readout, explains Birbaumer to Singularity Hub. But surgery comes with additional risks and expenses to the patients. In a somewhat desperate bid, the team turned their focus to a technique called functional near-infrared spectroscopy (fNIRS).

Like fMRI, fNIRS measures brain activity by measuring changes in blood flow through a specific brain region; generally speaking, more blood flow equals more activation. Unlike fMRI, which requires the patient to lie still in a gigantic magnet, fNIRS uses infrared light to measure blood flow. The light source is embedded into a swimming cap-like device that's worn tightly around the patient's head.

To train the system, the team started with facts about the world and personal questions that the patients can easily answer. Over the course of 10 days, the patients were repeatedly asked to respond yes or no to questions like "Paris is the capital of Germany" or "Your husband's name is Joachim." Throughout the entire training period, the researchers carefully monitored the patients' alertness and concentration using EEG, to ensure that they were actually participating in the task at hand.

The answers were then used to train an algorithm that matched the responses to their respective brain activation patterns. Eventually, the algorithm was able to tell yes or no based on these patterns alone, at about 70 percent accuracy for a single trial.
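
The paper's exact decoding pipeline isn't spelled out here, but the general shape of such supervised training is straightforward. A minimal sketch with made-up fNIRS feature vectors (the feature summary, model choice, and cross-validation scheme are all assumptions, not the study's method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: 200 training questions, each summarized as a vector of
# fNIRS channel responses (e.g., mean oxygenation change per channel).
X = rng.normal(size=(200, 20))    # 20 hypothetical fNIRS channels
y = rng.integers(0, 2, size=200)  # the known answers: 0 = no, 1 = yes

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"single-trial accuracy: {scores.mean():.2f}")
# ~0.5 on this random noise; on genuinely separable brain patterns the
# study reports roughly 0.7 per trial.
```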

"After 10 years [of trying], I felt relieved," says Birbaumer. "If the study can be replicated in more patients, we may finally have a way to restore useful communication with these patients," he added in a press release.

"The authors established communication with complete locked-in patients, which is rare and has not been demonstrated systematically before," says Dr. Wolfgang Einhäuser-Treyer to Singularity Hub. Einhäuser-Treyer is a professor at Bielefeld University in Germany who had previously worked on measuring pupil response as a means of communication with locked-in patients and was not involved in this current study.

With more training, the algorithm is expected to improve even further.

For now, researchers can average out mistakes by repeatedly asking a patient the same question multiple times. And even at an acceptable 70 percent accuracy rate, the system has already allowed locked-in patients to speak their minds, and somewhat endearingly, just like in real life, the answer may be rather unexpected.
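
Repetition helps because independent 70-percent-accurate answers can be combined by majority vote. A back-of-envelope calculation (assuming independent errors, which repeated brain measurements only approximate):

```python
from math import comb

def majority_accuracy(p: float, n: int) -> float:
    """P(majority of n independent trials is correct), for odd n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 5, 9):
    print(n, round(majority_accuracy(0.7, n), 3))
# 1 0.7, 5 0.837, 9 0.901 -- nine repetitions of a 70-percent-accurate
# answer already push reliability past 90 percent.
```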

One of the patients, a 61-year-old man, was asked whether his daughter should marry her boyfriend. The father said no a striking nine out of ten times, but the daughter went ahead anyway, much to her father's consternation, which he was able to express with the help of his new brain-machine interface.

Perhaps the most heart-warming result from the study is that the patients were generally happy and content with their lives.

"We were originally surprised," says Birbaumer. But on further thought, it made sense. These four patients had accepted ventilation to support their lives despite their condition.

"In a sense, they had already chosen to live," says Birbaumer. "If we could make this technique widely clinically available, it could have a huge impact on the day-to-day lives of people with completely locked-in syndrome."

For their next steps, the team hopes to extend the system beyond simple yes or no binary questions. Instead, they want to give patients access to the entire alphabet, thus allowing them to spell out words using their brain waves, something that's already been done in partially locked-in patients but never before been possible for those completely locked-in.

"To me, this is a very impressive and important study," says Einhäuser-Treyer. "The downsides are mostly economical."

"The equipment is rather expensive and not easy to use. So the challenge for the field will be to develop this technology into an affordable product that caretakers [sic], families or physicians can simply use without trained staff or extensive training," he says. "In the interest of the patients and their families, we can hope that someone takes this challenge."

*The patient is identified as patient W in the study. Wendy is an alias.

Banner Image Credit: Shutterstock


The fear of a technological singularity – ETtech.com

Posted: at 9:37 am

By Debkumar Mitra, Gray Matters

In 2016, a Tesla driving on Autopilot crashed, killing the man behind the wheel. It was not the first vehicle to be involved in a fatal crash, but it was the first of its kind, and the tragedy opened a can of ethical dilemmas.

With autonomous systems such as driverless vehicles there are two main grey areas: responsibility and ethics. Widely discussed at various forums is a dilemma where a driverless car must choose between killing pedestrians or passengers.

Here, both responsibility and ethics are at play. The cold logic of numbers that defines the mind of such systems can sway it either way, and the fear is that the passengers sitting inside the car have no control.

Any new technology brings a new set of challenges. But it appears that creating artificial intelligence-driven technology products is almost like unleashing Frankenstein's monster.

Artificial intelligence (AI) is currently at the cutting edge of science and technology. Advances in the field, including aggregate technologies like deep learning and artificial neural networks, are behind many new developments, such as the machine that beat the world champion at Go.

Though there is great positive potential for AI, many are afraid of what AI could do, and rightfully so. There is still the fear of a technological singularity, a circumstance in which AI machines would surpass the intelligence of humans and take over the world.

Researchers in genetic engineering also face a similar question. This dark side of technology, however, should not be used to decree the closure of all AI or genetics research. We need to create a balance between human needs and technological aspirations.

Long before the current commotion over ethical AI technology, celebrated science-fiction author Isaac Asimov came up with his laws of robotics.

Exactly 75 years ago, in the 1942 short story "Runaround," Asimov unveiled an early version of his laws. In their current form, the laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Given the pace at which AI systems are developing, there is an urgent need to put in some checks and balances so that things do not get out of hand.

There are many organisations now looking at legal, technical, ethical and moral aspects of a society driven by AI technology. The Institute of Electrical and Electronics Engineers (IEEE) already has Ethically Aligned Design, an AI framework addressing the issues, in place. AI researchers are drawing up a laundry list similar to Asimov's laws to help people engage in a more fearless way with this beast of a technology.

In January 2017, the Future of Life Institute (FLI), a charity and outreach organisation, hosted its second Beneficial AI Conference. There, AI experts developed the Asilomar AI Principles, which aim to ensure that AI remains beneficial, not harmful, to the future of humankind.

The key questions that came out of the conference are:

- How can we make future AI systems robust, so that they do what we want without malfunctioning or getting hacked?

- How can we grow our prosperity through automation while maintaining people's resources and purpose?

- How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

- What set of values should AI be aligned with, and what legal and ethical status should it have?

Ever since they unshackled the power of the atom, scientists and technologists have been at the forefront of the movement emphasising science for the betterment of man. This duty was forced upon them when the first atom bomb was manufactured in the US. Little did they realise that a search for the structure of the atom could give rise to such a nasty subplot. With AI, we are in the same situation, or maybe worse.

No wonder that at the IEEE meeting that gave birth to the ethical AI framework, the dominant thought was that humans and all living beings must remain at the centre of all AI discussions. People must be informed at every level, right from the design stage to the development of AI-driven products for everyday use.

While it is a laudable effort to develop ethically aligned technologies, it raises another question that has come up at various AI conferences. Are humans ethical?

(The author is the CEO of Gray Matters. Views expressed above are his own)


Ready to Change the World? Apply Now for Singularity University’s 2017 Global Solutions Program – Singularity Hub

Posted: February 11, 2017 at 8:41 am

I'm putting out a call for brilliant entrepreneurs who want to enroll in Singularity University's Global Solutions Program (GSP).

The GSP is where you'll learn about exponentially growing technology, dig into humanity's Global Grand Challenges (GGCs) and then start a new company, product or service with the goal of positively impacting 1 billion people within 10 years.

We call this a ten-to-the-ninth (10^9) company.

This post is about who should apply, how to apply and the over $1.5 million in scholarships being provided by Google for entrepreneurs.

SU's GSP program runs from June 17, 2017 until August 19, 2017.

Applications are due: February 21, 2017.

Eight years ago, Ray Kurzweil and I cofounded Singularity University to search the world for the most brilliant, world-class problem-solvers, to bring them together, and to give them the resources to create companies that impact billions.

The GSP is an intensive 24/7 experience at the SU campus at the NASA Research Center in Mountain View, CA, in the heart of Silicon Valley.

During the nine-week program, 90 entrepreneurs, engineers, scientists, lawyers, doctors and innovators from around the world learn from our expert faculty about infinite computing, AI, robotics, 3D printing, networks/sensors, synthetic biology, entrepreneurship, and more, and focus on building and developing companies to solve the global grand challenges (GGCs).

GSP participants form teams to develop unique solutions to GGCs, with the intent to form a company that, as I mentioned above, will positively impact the lives of a billion people in 10 years or less.

Over the course of the summer, participants listen to and interact with top Silicon Valley executive guest speakers, tour facilities like GoogleX, and spend hours getting their hands dirty in our highly advanced maker workshop.

At the end of the summer, the best of these startups will be asked to join SU Labs, where they will receive additional funding and support to take the company to the next level.

I am pleased to announce that thanks to a wonderful partnership with Google, all successful applicants will be fully subsidized by Google to participate in the program.

In other words, if accepted into the program, the GSP is free.

The Global Solutions Program (GSP) is SU's flagship program for innovators from a wide diversity of backgrounds, geographies, perspectives, and expertise. At GSP, you'll get the mindset, tools, and network to help you create moonshot innovations that will positively transform the future of humanity. If you're looking to create solutions to help billions of people, we can help you do just that.

Key program dates:

- Applications close: February 21, 2017

- Program runs: June 17 to August 19, 2017

This program will be unlike any we've ever done, and unlike any you've ever seen.

If you feel like you meet the criteria, apply now (click here).

Applications close February 21st, 2017.

If you know of a friend or colleague who would be a good fit for this program, please share this post with them and ask that they fill out an application.


How Robots Helped Create 100,000 Jobs at Amazon – Singularity Hub

Posted: at 8:41 am

Accelerating technology has been creating a lot of worry over job loss to automation, especially as machines become capable of doing things they never could in the past. A recent report released by the McKinsey Global Institute estimated that 49 percent of job activities could currently be fully automated; that equates to 1.1 billion workers globally.

What gets less buzz is the other side of the coin: automation helping to create jobs. Believe it or not, it does happen, and we can look at one of the world's largest retailers to see that.

Thanks in part to more robots in its fulfillment centers, Amazon has been able to drive down shipping costs and pass those savings on to customers. Cheaper shipping made more people use Amazon, and the company hired more workers to meet this increased demand.

So what do the robots do, and what do the people do?

Tasks involving fine motor skills, judgment, or unpredictability are handled by people. They stock warehouse shelves with items that come off delivery trucks. A robot could do this, except that to maximize shelf space, employees are instructed to stack items according to how they fit on the shelf rather than grouping them by type.

Robots can only operate in a controlled environment, performing regular and predictable tasks. They've largely taken over heavy lifting, including moving pallets between shelves (good news for warehouse workers' backs) as well as shuttling goods from one end of a warehouse to another.

With current technology, building robots able to stock shelves based on available space is more costly and less logical than hiring people to do it.

Similarly, for outgoing orders, robots do the lifting and transportation, but not the selecting or packing. A robot brings an entire shelf of goods to an employee's workstation, where the employee selects the correct item and puts it on a conveyor belt for another employee to package. By this time, the shelf-carrying robot is already returning the first shelf and retrieving another.

Since loading trucks also requires spatial judgment and can be unpredictable (space must be maximized here even more than on shelves), people take care of this too.

Ever since acquiring Boston-based robotics company Kiva Systems in March 2012 at a price tag of $775 million, Amazon has been ramping up its use of robots and is continuing to pour funds into automation research, both for robots and delivery drones.

In 2016 the company grew its robot workforce by 50 percent, from 30,000 to 45,000. Far from laying off 15,000 people, though, Amazon increased human employment by around 50 percent in the same period of time.

Even better, the company's Q4 2016 earnings report included the announcement that it plans to create more than 100,000 new full-time, full-benefit jobs in the US over the next 18 months. New jobs will be based across the country and will include various types of experience, education, and skill levels.

So how tight is the link between robots and increased productivity? Would there be even more jobs if people were doing the robots' work?

Well, picture an employee walking (or even running) around a massive warehouse, locating the right shelf, climbing a ladder to reach the item he's looking for, grabbing it, climbing back down the ladder (carefully, of course), and walking back to his work station to package it for shipping. Now multiply the time that whole process took by the hundreds of thousands of packages shipped from Amazon warehouses each day.

Lots more time. Lots less speed. Fewer packages shipped. Higher costs. Lower earnings. No growth.
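
A back-of-envelope comparison makes the point. The per-pick times below are invented round numbers for illustration, not Amazon figures:

```python
# Rough picks-per-hour comparison; both timings are assumptions.
WALK_AND_CLIMB_SEC = 150  # walk to shelf, climb, grab, walk back
ROBOT_FETCH_SEC = 30      # the shelf arrives while the picker works

def picks_per_hour(seconds_per_pick: float) -> float:
    return 3600 / seconds_per_pick

human_only = picks_per_hour(WALK_AND_CLIMB_SEC)    # 24 picks/hour
robot_assisted = picks_per_hour(ROBOT_FETCH_SEC)   # 120 picks/hour
print(f"{robot_assisted / human_only:.0f}x packages per worker-hour")  # 5x
```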

Though it may not last forever, right now Amazon's robot-to-human balance is clearly in employees' favor. Automation can take jobs away, but sometimes it can create them too.

Image Credit: Tabletmonkeys/YouTube


Rowe FTC robotics team RSF Singularity takes top honors at Championship – Rancho Santa Fe Review

Posted: February 10, 2017 at 3:33 am

On Saturday, Feb. 4, at The Grauer School in Encinitas, the three Rowe FTC robotics teams -- Singularity, Logitechies and Intergalactic Dragons -- ended up as the 1st, 2nd and 3rd place alliance captains in the League Championship's exciting alliance rounds, which capped a hard-fought event. David Warner, who heads the school's FTC robotics program, said, "I'm so proud of our students, parent mentors and coaches who worked countless hours to achieve success. Being the youngest teams at the championship, this is truly remarkable and a testament to their hard work!"

The Logitechies and Intergalactic Dragons alliance teams faced off in an exciting third game to determine who would move on to face the Singularity alliance in the championship round. The Intergalactic Dragons won, but when they moved on to the final match to determine the champion, Singularity's 90-point autonomous program was the key to victory as their alliance put up well over 200 points in the two final games.

In addition to competing in the alliance matches, the Logitechies team was also a finalist in the judged Connect and PTC awards.

Singularity earned top honors of the day as they advance, along with the Intergalactic Dragons, to the San Diego Regionals at Francis Parker High School on Feb. 25.


Physicists Unveil Blueprint for a Quantum Computer the Size of a … – Singularity Hub

Posted: at 3:33 am

Quantum computers promise to crack some of the world's most intractable problems by super-charging processing power. But the technical challenges involved in building these machines mean they've still achieved just a fraction of what they are theoretically capable of.

Now physicists from the UK have created a blueprint for a soccer-field-sized machine they say could reach the blistering speeds that would allow them to solve problems beyond the reach of today's most powerful supercomputers.

The system is based on a modular design interlinking multiple independent quantum computing units, which could be scaled up to almost any size. Modular approaches have been suggested before, but innovations such as a far simpler control system and inter-module connection speeds 100,000 times faster than the state-of-the-art make this the first practical proposal for a large-scale quantum computer.

"For many years, people said that it was completely impossible to construct an actual quantum computer. With our work we have not only shown that it can be done, but now we are delivering a nuts and bolts construction plan to build an actual large-scale machine," professor Winfried Hensinger, head of the Ion Quantum Technology Group at the University of Sussex, who led the research, said in a press release.

The technology at the heart of the individual modules is already well-established and relies on trapping ions (charged atoms) in magnetic fields to act as qubits, the basic units of information in quantum computers.

While bits in conventional computers can have a value of either 1 or 0, qubits take advantage of the quantum mechanical phenomenon of superposition, which allows them to be both at the same time.

As Elizabeth Gibney explains in Nature, this is what makes quantum computers so incredibly fast. The set of qubits comprising the memory of a quantum computer could exist in every possible combination of 1s and 0s at once. Where a classical computer has to try each combination in turn, a quantum computer could process all those combinations simultaneously.
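
For the curious, the bookkeeping behind that claim can be made concrete with a toy state-vector simulation. This sketch (plain numpy; the three-qubit register and Hadamard gates are an illustrative choice, not anything from the paper) shows how n qubits require 2^n amplitudes:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # the Hadamard gate

n = 3
state = np.zeros(2**n)
state[0] = 1.0  # start in |000>

# A Hadamard on every qubit is the n-fold Kronecker product of H.
U = H
for _ in range(n - 1):
    U = np.kron(U, H)
state = U @ state

print(np.abs(state) ** 2)  # all 8 bitstrings: probability 1/8 each
# The register now holds every combination of 0s and 1s at once. Note
# the classical cost: simulating n qubits takes 2**n amplitudes, the
# exponential blow-up a physical quantum register sidesteps.
```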

In a paper published in the journal Science Advances last week, researchers outline designs for modules containing roughly 2,500 qubits and suggest interlinking thousands of them together to create a machine containing two billion qubits. For comparison, Canadian firm D-Wave, the only commercial producer of quantum computers, just brought out its latest model featuring 2,000 qubits.

This is not the first time a modular system like this has been suggested, but previous approaches have recommended using light waves traveling through fiberoptics to link the units. This results in interaction rates between modules far slower than the quantum operations happening within them, putting a handbrake on the system's overall speed. In the new design, the ions themselves are shuttled from one module to another using electrical fields, which results in 100,000 times faster connection speeds.

The system also has a much simpler way of controlling qubits. Previous designs required lasers to be carefully targeted at each ion, an enormous engineering challenge when dealing with billions of qubits. Instead, the new system uses microwave fields and the careful application of voltages, which is much easier to scale up.

The researchers concede there are still considerable technical challenges to building a device on the scale they have suggested, not to mention the cost. But they have already announced plans to build a prototype based on the design at the university, at a cost of £1-2 million.

"While this proposal is incredibly challenging, I wish more in the quantum community would think big like this," Christopher Monroe, a physicist at the University of Maryland who has worked on trapped-ion quantum computing, told Nature.

In their paper, the researchers predict their two billion qubit system could find the prime factors of a 617-digit-long number in 110 days. This is significant because many state-of-the-art encryption systems rely on the fact that factoring large numbers can take conventional computers thousands of years. This is why many in the cybersecurity world are nervous about the advent of quantum computing.
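
To see the classical side of that asymmetry, consider the simplest factoring method. The sketch below is textbook trial division, fine for toy numbers and hopeless at cryptographic sizes; it is an illustration of scale, not of the algorithms actually used in cryptanalysis:

```python
def trial_division(n: int) -> list:
    """Textbook factoring: try every divisor up to sqrt(n)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(2**32 + 1))  # [641, 6700417] -- instant

# A 617-digit number built from two ~308-digit primes would need on the
# order of 10**308 divisions here. Real attacks (the general number field
# sieve) do far better yet still scale super-polynomially, which is the
# gap Shor's quantum factoring algorithm closes.
```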

These researchers aren't the only ones working on bringing quantum computing into the real world, though. Google, Microsoft and IBM are all developing their own systems, and D-Wave recently open-sourced a software tool that helps those without a background in quantum physics program its machines.

All that interest is due to the enormous potential of quantum computing to solve problems as diverse and complex as developing drugs for previously incurable diseases, devising new breeds of materials for high-performance superconductors, magnets and batteries, and even turbo-charging machine learning and artificial intelligence.

"The availability of a universal quantum computer may have a fundamental impact on society as a whole, said Hensinger. Without doubt it is still challenging to build a large-scale machine, but now is the time to translate academic excellence into actual application, building on the UK's strengths in this ground-breaking technology.

Image Credit: University of Sussex/YouTube


Singularity Containers for Science, Reproducibility, and HPC – Linux.com (blog)

Posted: at 3:33 am

Explore how Singularity liberates non-privileged users and host resources (such as interconnects, resource managers, file systems, and accelerators), allowing users to take full control to set up and run in their native environments. This talk explores ...


Robot Cars Can Teach Themselves How to Drive in Virtual Worlds – Singularity Hub

Posted: February 9, 2017 at 6:30 am

Over the holidays, I went for a drive with a Tesla. With, not in, because the car was doing the driving.

Hearing about autonomous vehicles is one thing; experiencing it was something entirely different. When the parked Model S calmly drove itself out of the garage, I stood gaping in awe, completely mind-blown.

If this year's Consumer Electronics Show is any indication, self-driving cars are zooming into our lives, fast and furious. Aspects of automation are already in use: Tesla's Autopilot, for example, allows cars to control steering, braking and lane changes. Elon Musk, CEO of Tesla, has gone so far as to pledge that by 2018 you will be able to summon your car from across the country, and it'll drive itself to you.

So far, the track record for autonomous vehicles has been fairly impressive. According to a report from the National Highway Traffic Safety Administration, Tesla's crash rate dropped by about 40 percent after the company turned on its first-generation Autopilot system. This week, with the introduction of generation two on newer cars equipped with the necessary hardware, Musk is aiming to cut the number of accidents by another whopping 50 percent.

But when self-driving cars mess up, we take note. Last year, a Tesla vehicle slammed into a white truck while Autopilot was engaged (apparently confusing it with the bright, white sky), resulting in the company's first fatality.

So think about this: would you entrust your life to a robotic machine?

For anyone to even start contemplating yes, the cars have to be remarkably safe: fully competent in day-to-day driving, and able to handle any emergency traffic throws their way.

Unfortunately, those edge cases also happen to be the hardest problems to solve.

To interact with the world, autonomous cars are equipped with a myriad of sensors. Google's button-nosed Waymo car, for example, relies on GPS to broadly map out its surroundings, then further captures details using its cameras, radar and laser sensors.

These data are then fed into software that figures out what actions to take next.

As with any kind of learning, the more scenarios the software is exposed to, the better the self-driving car learns.

Getting that data is a two-step process: first, the car has to drive thousands of hours to record its surroundings, which are used as raw data to build 3D maps. That's why Google has been steadily taking its cars out on field trips (some two million miles to date), with engineers babysitting the robocars to flag interesting data and potentially take over if needed.

This is followed by thousands of hours of labeling: that is, manually annotating the maps to point out roads, vehicles, pedestrians and other subjects. Only then can researchers feed the dataset, so-called labeled data, into the software for it to start learning the basics of a traffic scene.

The strategy works, but it's agonizingly slow and tedious, and the amount of experience the cars get is limited. Since emergencies tend to fall into the category of unusual and unexpected, it may take millions of miles before the car encounters dangerous edge cases to test its software, and of course those encounters put both car and human at risk.

An alternative, increasingly popular approach is to bring the world to the car.

Recently, Princeton researchers Ari Seff and Jianxiong Xiao realized that instead of manually collecting maps, they could tap into a readily available repertoire of open-sourced 3D maps such as Google Street View and OpenStreetMap. Although these maps are messy and in some cases can have bizarre distortions, they offer a vast amount of raw data that could be used to construct datasets for training autonomous vehicles.

Manually labeling that data is out of the question, so the team built a system that can automatically extract road features: for example, how many lanes there are, if there's a bike lane, what the speed limit is and whether the road is a one-way street.

Using a powerful technique called deep learning, the team trained their AI on 150,000 Street View panoramas, until it could confidently discard artifacts and correctly label any given street attribute. The AI performed so well that it matched humans on a variety of labeling tasks, but at much faster speed.
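
For a sense of what such a labeler involves, here is a generic multi-attribute image classifier in PyTorch. This is not the Princeton system; the backbone, attribute list, and loss are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

ATTRIBUTES = ["has_bike_lane", "is_one_way", "two_plus_lanes"]  # made up

# ResNet-18 backbone with a multi-label head; in practice you would start
# from ImageNet-pretrained weights rather than this random initialization.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, len(ATTRIBUTES))

criterion = nn.BCEWithLogitsLoss()  # independent yes/no per attribute
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (B, 3, 224, 224); labels: (B, len(ATTRIBUTES)) in {0, 1}."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# One fake batch standing in for crops of auto-labeled panoramas.
print(train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8, 3))))
```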

"The automated labeling pipeline introduced here requires no human intervention, allowing it to scale with these large-scale databases and maps," concluded the authors.

With further improvement, the system could take over the labor-intensive job of labeling data. In turn, more data means more learning for autonomous cars and potentially much faster progress.

"This would be a big win for self-driving technology," says Dr. John Leonard, a professor specializing in mapping and automated driving at MIT.

Other researchers are eschewing the real world altogether, instead turning to hyper-realistic gaming worlds such as Grand Theft Auto V.

For those not in the know, GTA V lets gamers drive around the convoluted roads of a city roughly one-fifth the size of Los Angeles. It's an incredibly rich world: the game boasts 257 types of vehicles and 7 types of bikes, all based on real-world models. The game also simulates half a dozen kinds of weather conditions, in all giving players access to a huge range of scenarios.

It's a total data jackpot. And researchers are noticing.

In a study published in mid-2016, Intel Labs teamed up with German engineers to explore the possibility of mining GTA V for labeled data. By looking at any road scene in the game, their system learned to classify different objects in the road (cars, pedestrians, sidewalks and so on), thus generating huge amounts of labeled data that can then be fed to self-driving cars.

Of course, datasets extracted from games may not necessarily reflect the real world. So a team from the University of Michigan trained two algorithms to detect vehicles, one using data from GTA V and the other using real-world images, and pitted them against each other.

The result? The game-trained algorithm performed just as well as the one trained with real-life images, although it needed about 100 times more training data to reach the performance of the real-world algorithm (not a problem, since generating images in games is quick and easy).

But it's not just about datasets. GTA V and other hyper-realistic virtual worlds also allow engineers to test their cars in uncommon but highly dangerous scenarios that they may one day encounter.

In virtual worlds, AIs can tackle a variety of traffic hazards (sliding on ice, hitting a wall, avoiding a deer) without worry. And if the cars learn how to deal with these edge cases in simulations, they may have a higher chance of surviving one in real life.

So far, none of the above systems have been tested on physical self-driving cars.

But with the race toward full autonomy proceeding at breakneck speed, it's easy to see companies adopting these systems to gain an edge.

Perhaps more significant is that these virtual worlds represent a subtle shift toward the democratization of self-driving technology. Most of them are open-source, in that anyone can hop on board to create and test their own AI solutions for autonomous cars.

And who knows, maybe the next big step towards full autonomy wont be made inside Tesla, Waymo, or any other tech giant.

It could come from that smart kid next door.

Image Credit: Shutterstock

