Here is how Nvidia can sidestep Moore’s Law in GPU design – PC Gamer

Nvidia is fast approaching a technical wall in GPU design where it will no longer be able to shove more transistors into a GPU die to increase performance at the same rate customers have grown accustomed to. Simply put, as Moore's Law slows down, the number of transistors per die no longer grows at historical rates, Nvidia notes. The solution to this problem could lie in switching to a multi-chip module GPU design.

Researchers from Nvidia, Arizona State University, the University of Texas, and the Barcelona Supercomputing Center have published a paper outlining the benefits of multi-chip module GPUs. It is a design approach that is already working for AMD with its Ryzen CPUs, and Nvidia believes it could benefit GPUs as well.

"Specifically, we propose partitioning GPUs into easily manufacturable basic GPU Modules (GPMs), and integrating them on package using high bandwidth and power efficient signaling technologies," Nvidia says.

Without either switching to a multi-chip module design or coming up with an alternative solution, Nvidia warns that the performance curve of single monolithic GPUs as currently constructed will ultimately plateau. Beyond the technical challenge of cramming more transistors into smaller spaces, there is also the cost to consider, both in terms of technical research and reduced die yields.

Whether or not an MCM design is ultimately the answer, Nvidia thinks it is at least worth exploring. One thing that Nvidia mentions in its paper is that it's difficult to scale GPU workloads on multi-GPU systems, even if they scale well on a single GPU.

"This is due to to multiple unsolved challenges related to work partitioning, load balancing, and data sharing across the slow on-board interconnection network. However, due to recent advances in packaging and signaling technologies, package-level integration provides a promising integration tier that lies between the existing on-chip and on-board integration technologies," Nvidia says.

What Nvidia proposes is connecting multiple GPU modules using advanced, high-speed input/output protocols to efficiently communicate with each other. This would allow for less complex (and presumably cheaper) GPU modules compared to a monolithic design. It is a sort of strength in numbers approach.

Nvidia's team of researchers used an in-house simulator to evaluate their designs. What they did was build two virtual GPUs, each with 256 streaming multiprocessors (SMs). One was based on the current monolithic design and the other used an MCM design.

The simulator showed the MCM design performed within 10 percent of a monolithic GPU. It also showed that the MCM design would be nearly 27 percent faster than an SLI setup with similar specs. And when optimized, the MCM design could achieve a 45.5 percent speedup compared to the largest implementable monolithic GPU, which would have 128 SMs.
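As a rough back-of-the-envelope reading of those percentages, the sketch below normalizes everything to the largest buildable monolithic GPU. The 45.5 percent, roughly 27 percent, and within-10-percent figures come from the article; the derived numbers are simple arithmetic under the assumption that all three comparisons share that baseline, not values from the paper itself.

```python
# Normalize to the largest buildable monolithic GPU (128 SMs) = 1.0.
baseline_128sm = 1.0
mcm_256sm = baseline_128sm * 1.455      # 45.5% speedup over the 128-SM chip
multi_gpu_256sm = mcm_256sm / 1.27      # MCM is ~27% faster than the multi-GPU setup
mono_256sm_bound = mcm_256sm / 0.90     # "within 10%" read as the full 10% gap (upper bound)

print(f"128-SM monolithic (buildable):      {baseline_128sm:.2f}")
print(f"256-SM multi-GPU (SLI-style):       {multi_gpu_256sm:.2f}")
print(f"256-SM MCM (proposed):              {mcm_256sm:.2f}")
print(f"256-SM monolithic (hypothetical): <~{mono_256sm_bound:.2f}")
```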

Much of this is hypothetical, not just in the simulation but also the examples used. A 256 SM chip just isn't possible at the moment; Nvidia labels it as "unbuildable." To put that into perspective, Nvidia's GeForce GTX 1080 Ti sports 28 SMs.

It remains to be seen what Nvidia will do for the next couple of generations, though a move to MCM GPUs seems almost inevitable. The question is, which company will get there first? It is believed that AMD's upcoming Navi GPU architecture could utilize an MCM GPU design as well, especially now that AMD has the tech in place with Zen (Ryzen, Threadripper, Naples, Epyc).

For now, you can dive into Nvidia's white paper (PDF) for all of the gritty details.

Here is the original post:

Here is how Nvidia can sidestep Moore's Law in GPU design - PC Gamer

Moore’s Law end shakes industry – EE Times Asia – Eetasia.com (press release)

At the 50th anniversary of the Alan Turing award, panellists revealed that the expected death of Moore's Law would change the semiconductor and computer industries.

A basket of silicon, systems and software technologies will continue progress, but not at the same pace, they said. With no clear replacement for CMOS scaling, semiconductor and systems industries may be reshaped into vertical silos, they added.

"Moore's Law said transistor density doubles every 18 months, something we maintained for 25 years, but it began slowing down to every two to three years around 2000-2005, and more recently we're seeing doubling about every four years, so we're reaching the end of semiconductor technology as we know it," said John Hennessy, former president of Stanford University and author of a key textbook on microprocessors.
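To put Hennessy's timeline in concrete terms, a small sketch in Python converts each doubling cadence into an effective annual growth rate. The doubling periods come from the quote above; the per-year figures are simple arithmetic, not something the panel stated.

```python
# Effective yearly growth in transistor density for different doubling periods.
for label, doubling_years in [("every 18 months", 1.5),
                              ("every 3 years", 3.0),
                              ("every 4 years", 4.0)]:
    yearly_factor = 2 ** (1 / doubling_years)
    print(f"Doubling {label}: ~{yearly_factor:.2f}x density per year "
          f"({(yearly_factor - 1) * 100:.0f}% annual growth)")
```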

Figure 1: Hennessy: We're reaching the end of semiconductor technology as we know it.

Dennard scaling, a related observation that power density holds roughly steady as transistors shrink, has already been non-operational for 10 to 15 years, creating an era of "dark silicon" in which we quickly turned to multicore processors, Hennessy added.

"Moore's Law is really an observation about economics, not a law of physics. The question is whether we can find another aspect of physics that has a return on investment like CMOS," said Margaret Martonosi, a systems specialist at Princeton.

"Insofar as Moore's Law is about a rate [of density scaling], it is dead because I think we are at the end of a predictable rate and in a few generations we'll hit the limits of physics," said Doug Burger, a distinguished engineer working on FPGA accelerators at Microsoft's Azure cloud service.

Figure 2: Margaret Martonosi wrote two textbooks on power-aware computers.

"Moore's Law gave us a free ride and that's just about over, so we are entering a wild, messy time and it sounds like a lot of fun," Burger said.

"I think we still have a few more years of CMOS scaling," said Norm Jouppi, a veteran microprocessor designer and lead of the team behind Google's TPU accelerator. Some apps will continue to see performance speedups for the next decade, but for others they will come more slowly, he said.

Jouppi quipped that the industry is in denial about Moore's Law, like the vendor in the Monty Python dead-parrot sketch who insists the bird is not dead, it's just resting.

Read the original here:

Moore's Law end shakes industry - EE Times Asia - Eetasia.com (press release)

Boffins create 3D CPU architecture to stretch Moore’s Law more – The INQUIRER

RESEARCHERS CLAIM to have developed a new '3D chip' that could be the answer to some of the bandwidth issues plaguing the current generation of chips.

The prototype, built by a team of researchers from Stanford and MIT, manages to combine memory, processor and sensors onto a single discrete unit made of carbon nanotubes, with resistive RAM (RRAM) squished over the top.

The 3D computer architecture is, the team claims, "the most complex nano-electronic system ever made with emerging nano-technologies".

Carbon has a higher tolerance to heat than silicon, and so using the carbon nanotubes means that the chip can stand up to higher temperatures than a regular chip - especially now the wafers are getting so ridiculously thin.


The research, funded by the Defense Advanced Research Projects Agency (DARPA) and the US National Science Foundation, is making good headway, but it is, for want of a better phrase, not 'backwards compatible' and so it could be a while before we see anything in the shops that uses the same technology.

"The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today's logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM," said Philip Wong from the MIT team.

The work has come on at a phenomenal pace, with a University of Wisconsin-Madison team first perfecting the nanotube system to overtake speeds possible in silicon chips as recently as last September.

At that time, experts estimated the chips could carry a current up to 1.9 times that of a conventional silicon transistor. Ultimately it's thought that figure will increase to five times the speed, with a fifth of the energy, but again, there are no time frames.

See original here:

Boffins create 3D CPU architecture to stretch Moore's Law more - The INQUIRER

What Moore’s Law has to teach us about WanaCrypt0r – SC Magazine UK

Kirsten Bay, president and CEO, Cyber adAPT

WannaCrypt0r, the malware that held data to ransom on a global scale, was a powerful illustration of what happens when cyber-security loopholes are not effectively closed. Exploiting a weakness in Microsoft's Windows operating system, the cryptoworm spread between PCs like wildfire, encrypting data and demanding Bitcoin payment in exchange for its return.

It is fair to say the attack took most cyber-security professionals by surprise. But was it really so unfathomable and, more importantly, how can we ensure such attacks are not repeated?

The answer to these questions lies in a theory proposed by Intel co-founder, Gordon Moore, in the 1960s: the processing power of computers doubles every two years.

Having dominated computing for the last 52 years, Moore's Law is now looking set to run out of steam, and the reason behind this has much to teach us about cyber-security now, and in the future.

Keeping up with the hackers

According to Europol chief Rob Wainwright, the best way to stop WanaCrypt0r infecting PCs and corporate networks is simple: installing a Microsoft patch on all machines.

Yet as the attack has shown, keeping security systems up to date is challenging. Microsoft, after all, had already released the MS17-010 patch before the ransomware hit, but the failure of individual users and businesses to update promptly meant machines in 150 countries were still affected.

The hard truth is: security breaches are not just increasing; they are inevitable, especially in large organisations where networks support multiple devices that all run different software. And considering the scale of the biggest organisations affected, the UK's National Health Service and FedEx, it is easy to see how PCs running outdated systems, like Windows 7, were overlooked.

The key conclusion we can draw from this latest breach is that our tendency to focus on protecting specific networks or devices is a serious error. And this is where Moore's Law comes in.

From chip-power to the cloud

When Moore first made his observation, technology was different: computing power was determined by how many transistors a dense integrated circuit, or chip, could hold. After noting that the transistor-to-chip ratio was doubling every two years (a revised estimate made in 1975), he predicted that processing capability would grow at the same rate, and so Moore's Law was born.
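As a rough illustration of how that doubling compounds, a few lines of Python project transistor counts forward from a hypothetical early-1970s chip of about 2,300 transistors. Only the two-year doubling rule comes from Moore's observation; the starting point and year range are chosen purely for illustration.

```python
# Project transistor counts assuming a clean doubling every two years.
def project_transistors(start_count, start_year, end_year, doubling_period=2):
    counts = {}
    for year in range(start_year, end_year + 1, doubling_period):
        counts[year] = start_count * 2 ** ((year - start_year) // doubling_period)
    return counts

for year, count in project_transistors(2_300, 1971, 1991).items():
    print(f"{year}: ~{count:,} transistors")
```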

Although the theory has been verified by more than half a century of multiplying transistors and shrinking chips, empirical support for it is dwindling. Indeed, in 2015, Moore himself said he saw the law dying in the next decade or so.

The reason for this is that computing capability is no longer tied to hardware. The advent of cloud computing means software, data and extra processing capacity can now be accessed over the internet without increasing the number of transistors in a device.

Thus, when we apply the same argument to cyber-security, the problem is clear: current measures are trying to protect limited networks and specific devices, but networks are now edgeless and used by myriad devices. In other words, the idea of patching every single device linked to the network is unrealistic, and we are trying to keep a gate closed that is simply too wide.

Outside in: building internal defences

To outpace the hackers, we must learn from the failings of Moore's Law and take a lateral security perspective that extends beyond individual devices.

CISOs need to adopt a detection-led approach that focuses on stopping attacks after hackers have breached networks, by monitoring for and removing suspicious users. In doing so, they can ensure their cyber-security measures are fit for the 21st century, rather than embarking on an endless mission to update every device each time a threat is identified. And with such defences in place, security professionals could stop the next ransomware attack from spreading so quickly, or at all.

The demise of Moore's law teaches us that modern security cannot afford to view networks as silos. With the cloud constantly creating new connections, there are no more perimeters to protect, which means keeping systems safe requires defences that can identify hackers after they have made their way in.

By deploying a detection-led method, CISOs can use the lessons of the past to secure networks at all times, and ensure they are positioned to thwart the next WanaCrypt0r-style attack in its early stages.

Contributed by Kirsten Bay, president and CEO, Cyber adAPT

*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media or Haymarket Media.

Read the original:

What Moore's Law has to teach us about WanaCrypt0r - SC Magazine UK

Cadence, Synopsys: Monster Chips from Nvidia, Intel Bode Well, Says RBC – Barron’s


Giant chips from Nvidia and Intel packed with tons of transistors are a good sign that the chip industry rule of thumb, Moore's Law, is alive and well, says RBC's Mitch Steves, and that should be good business for Synopsys and Cadence, vendors of the ...

Read more from the original source:

Cadence, Synopsys: Monster Chips from Nvidia, Intel Bode Well, Says RBC - Barron's

Nvidia researching Multi-Chip-Module GPUs to keep Moore’s law alive – Neowin

A team of researchers from Nvidia, Arizona State University, the University of Texas, and the Barcelona Supercomputing Centre has published a paper (PDF) studying ways of bypassing the recent deceleration in the pace of advancement of transistor density.

To avoid the performance ceiling monolithic GPUs will ultimately reach, they propose the manufacture of basic GPU Modules (GPMs) that will be integrated on a single package using high bandwidth and power-efficient signaling technologies, in order to create Multi-Chip-Module (MCM) GPU designs.

The researchers used Nvidia's in-house GPU simulator to evaluate their designs. According to their findings, MCM GPUs can greatly assist in increasing the number of Streaming Multiprocessors (SMs), which vastly speeds up many types of applications. Utilizing the simpler GPM building blocks and advanced interconnects, they simulated a 256 SM chip that achieves a 45.5% speedup over the largest possible monolithic GPU with 128 SMs. In addition, their design performs 26.8% better than a discrete multi-GPU with the same number of SMs, and is within 10% of the performance of a hypothetical monolithic GPU with 256 SMs that cannot be built based on today's technology roadmap.

Source and Images via Hexus

Visit link:

Nvidia researching Multi-Chip-Module GPUs to keep Moore's law alive - Neowin

Death of Moore’s Law could be cool – Fudzilla

HP Labs thinks it is the best thing to happen in computing

While Moore's Law is slowly winding down, Hewlett-Packard Labs is not exactly mourning.

Hewlett-Packard Labs boffin Stanley Williams has penned a report exploring the end of Moore's Law which says it could be the best thing that has happened to computing.

He wrote that confronting the end of an epoch should enable a new era of creativity by encouraging computer scientists to invent biologically inspired devices, circuits, and architectures implemented using recently emerging technologies.

Williams argues that: "The effort to scale silicon CMOS overwhelmingly dominated the intellectual and financial capital investments in industry, government, and academia, starving investigations across broad segments of computer science and locking in one dominant model for computers, the von Neumann architecture."

Three alternatives are already being developed at Hewlett Packard Enterprise: neuromorphic computing, photonic computing, and Memory-Driven Computing.

"All three technologies have been successfully tested in prototype devices, but MDC is at centre stage."

Follow this link:

Death of Moore's Law could be cool - Fudzilla

TOP500 Meanderings: Sluggish Performance Growth May Portend Slowing HPC Market – TOP500 News

For all the supercomputing trends revealed on recent TOP500 lists, the most worrisome is the decline in performance growth that has taken place over the last several years, worrisome not only because performance is the lifeblood of the HPC industry, but also because there is no definitive cause of the slowdown.

TOP500 aggregate performance (blue), top system performance (red), and last system performance (orange). Credit: Erich Strohmaier

That said, there are a few smoking guns worth considering. An obvious one is Moore's Law, or rather the purported slowing of Moore's Law. Performance increases in supercomputer hardware rely on a combination of getting access to more powerful computer chips and putting more of them into a system. The latter explains why aggregate performance on the TOP500 list historically grew somewhat faster than the rate of Moore's Law.

But this no longer appears to be the case. Since 2013 or thereabouts, the annual aggregate performance increase on the TOP500 list has fallen not just below its historical rate of growth, but below the Moore's Law rate as well. As you can see from the chart below, performance growth has had its ups and downs over the years, but the recent dip appears to indicate a new trend.

TOP500 rate of performance increase. Credit: Erich Strohmaier

So if Moore's Law is slowing, why don't users just order bigger systems with more servers? Well, they are (system core counts are certainly rising), but there are a number of disincentives to simply throwing more servers at the problem. A major limitation is power.

And although power data on the list is sketchier than performance data, there is a clear trend toward increased energy usage. For example, over the last 10 years, the supercomputer with the largest power draw increased from around 2.0 MW in 2007 (ASC Purple) to more than 17.8 MW in 2017 (Tianhe-2). In fact, three of the largest systems today use more than 10 MW. The systems in the middle of the list appear to be sucking more energy as well, although the increase is not so pronounced as it is for the biggest systems.
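For a rough sense of why power is such a strong disincentive, a quick back-of-the-envelope calculation helps. The 17.8 MW draw comes from the figures above; the $0.10 per kWh electricity rate is an assumed round number, not from the TOP500 data.

```python
# Annual electricity cost of a system running at a steady multi-megawatt draw.
power_mw = 17.8
price_per_kwh = 0.10            # assumed average rate, USD (illustrative only)
hours_per_year = 24 * 365

annual_kwh = power_mw * 1_000 * hours_per_year
annual_cost = annual_kwh * price_per_kwh
print(f"~{annual_kwh:,.0f} kWh/year, roughly ${annual_cost:,.0f} per year in electricity")
```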

There's nothing inherently wrong with building supercomputers that chew through tens of megawatts of electricity. But given the cost of power, there just won't be very many of them. The nominal goal of building the first exascale supercomputers in the 20 to 30 MW range ensures there will be only a handful of such machines in the world.

The problem with using additional electricity is not just that it costs more, and thus there is less money to spend on buying more hardware, but that once you grow beyond the power budget of your datacenter, you're stuck. At that point, you either have to build a bigger facility, wait until the hardware becomes more energy efficient, or burst some of your workload to the cloud. All of those scenarios lead to the slower performance growth we see on the TOP500.

It also leads to reduced system turnover, which is another recent trend that appears to have clearly established itself. Looking at the chart below, the time an average system spends on the list has tripled since 2008, and is about double the historical average. It's almost certain that this means users are hanging on to their existing systems for longer periods of time.

Average lifetime of a system on the TOP500 list. Credit: Erich Strohmaier

None of this bodes well for supercomputer makers. Cray, the purest HPC company in the world, has been seeing some of the effects of stretched-out system procurements. Since 2016 at least, the company has experienced a contraction in the number of systems they are able to bid on (although they've been able to compensate to some degree with better win rates). Cray's recent forays into cloud computing and AI are two ways they are looking to establish revenue streams that are not reliant on traditional HPC system sales.

Analyst firms Intersect360 Research and Hyperion (formerly IDC) remain bullish about the HPC market, although compared to a few years ago their growth projections have been shaved back. Hyperion is forecasting a 5.8 percent compound annual growth rate (CAGR) for HPC servers over the next five years, but that's a full point and a half lower than the 7.3 percent CAGR they were talking about in 2012. Meanwhile, Intersect360 Research is currently projecting a 4.7 percent CAGR for server hardware, while in 2010 they were forecasting a 7.0 percent growth rate (although that included everything, not just servers).

The demand for greater computing power from both researchers and commercial users appears to be intact, which makes the slowdown in performance growth all the more troubling. This same phenomenon appears to be some of what is behind the current trend toward more diverse architectures and heterogeneity. The most popular new processors (GPUs, Xeon Phis, and, to a lesser extent, FPGAs) all exhibit better performance-per-watt characteristics than the multicore CPUs they nominally replace. The interest in the ARM architecture is along these same lines.

Of course, all of these processors will be subject to the erosion of Moore's Law. So unless a more fundamental technology or architectural approach emerges to change the power-performance calculus, slower growth will persist. That won't wipe out HPC usage, any more than the flat growth of enterprise computing wiped out businesses. It will just be the new normal until something else comes along.

See more here:

TOP500 Meanderings: Sluggish Performance Growth May Portend Slowing HPC Market - TOP500 News

9 Things You Didn’t Know About Intel Corporation – Madison.com

Comedian Conan O'Brien once mocked semiconductor giant Intel's (NASDAQ: INTC) cubicle-farm offices for their soulless, gray aesthetic. The company subsequently chose to spruce up its drab digs, but after hearing a story like that, you may not think of Intel as a company whose history is replete with fascinating facts and stories of innovation. However, the chipmaking giant's past, present, and future are far more dynamic and interesting than you might expect.

Here are nine things about Intel that you may not know.

The company's founders at first wanted to name Intel "Moore Noyce," a combination of their last names. However, when colleagues complained that it sounded like "more noise," they switched to the first parts of the words "integrated electronics," and Intel was born.

Intel's chips helped power the first-ever live video from space to Earth in 1995. Astronauts aboard the Space Shuttle Endeavour streamed a live feed, including photographic images and annotations, to ground control at Houston's Johnson Space Center.

Intel is famous for its five-chimed jingle. Officially known as the "Intel Bong" -- insert joke here -- it was composed by Austrian music producer Walter Werzowa and made its debut in 1994. The iconic jingle was so heavily incorporated into Intel's marketing efforts that, at its peak, it was estimated to have been played somewhere in the world once every five minutes. It's been played over 1 billion times in total.

Observers of the semiconductor market probably know Moore's Law, which predicts that semiconductors' performance tends to double every two years. However, it's less well known that this foundational concept is named after Intel's co-founder, Gordon Moore, who popularized the idea.

Intel absolutely dominates its two core markets, holding more than a 90% share in the market for PC and server microprocessors. Competitors such as Advanced Micro Devices try to catch up but are rarely able to gain any ground because of Intel's massive research and development budget. For its fiscal 2016, Intel spent $12.7 billion on R&D alone, roughly triple AMD's entire 2016 revenue of $4.2 billion. That's a staggering difference in scale.

Even though many consumers would have a hard time distinguishing Intel's chips from others inside today's computers, Intel enjoys worldwide brand recognition. Global brand consultancy Interbrand ranked Intel as having the world's 14th most valuable brand in 2016, with an estimated brand value of $36.9 billion, situating the company between far more consumer-facing companies Disney (13th) and Facebook (15th).

In addition to building one of tech's most iconic companies, Intel's top executives have left a far broader imprint on tech and business over the years. For example, Intel co-founder Bob Noyce mentored a young Steve Jobs, who mentioned Noyce by name in his famous Stanford commencement speech. And former Intel CEO Andy Grove, another longtime Jobs mentor, wrote several best-selling books that are today required reading at business schools around the world, including Only the Paranoid Survive and High Output Management.

The company's legion of fans also extends into the scientific community. In 1987, researchers at the CERGA Observatory named an asteroid "Intel 8080" in the chipmaker's honor. The name comes from the company's 8080 chip, which is widely credited with enabling the personal-computing revolution to take off.

Intel and other semiconductor producers like it have maintained Moore's Law by continually shrinking their transistors, allowing ever more of them to fit on each chip. For context, the aforementioned Intel 8080 contained 6,000 transistors, a major breakthrough at the time. Today, PC microprocessors pack in 2.6 billion transistors, a testament to Intel's ability to continually innovate.
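As a quick sanity check on those two figures, a couple of lines of Python count the doublings separating the 8080's 6,000 transistors from today's 2.6 billion (both numbers from the paragraph above; the two-year cadence is the usual rule of thumb, and the real interval from 1974 to today is somewhat longer, consistent with the slower doubling noted elsewhere in this roundup).

```python
import math

old, new = 6_000, 2_600_000_000
doublings = math.log2(new / old)
print(f"~{doublings:.1f} doublings, i.e. roughly {doublings * 2:.0f} years "
      f"at one doubling every two years")
```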


Andrew Tonner has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Facebook and Walt Disney. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.

See the rest here:

9 Things You Didn't Know About Intel Corporation - Madison.com

The information age is over, welcome to the machine learning age – VentureBeat

I first used a computer to do real work in 1985.

I was in college in the Twin Cities, and I remember using the DOS version of Word and later upgraded to the first version of Windows. People used to scoff at the massive gray machines in the computer lab but secretly they suspected something was happening.

It was. You could say the information age started in 1965, when Gordon Moore invented Moore's Law (a prediction about how the number of transistors on a chip would double every year, later changed to every 18 months). It was all about computing power escalation, and he was right about the coming revolution. Some would argue the information age started long before then, when electricity replaced steam power. Or maybe it was when the library system in the U.S. started to expand in the '30s.

Who knows? My theory: it started when everyone had access to information on a personal computer. That was essentially what happened around 1985 for me, and a bit before that in high school. (Insert your own theory here about the Apple II ushering in the information age in 1977. I'd argue that was a little too much of a hobbyist machine.)

We can agree on one thing. We know that information is everywhere. That's a given. Now, prepare for another shift.

In their book Machine, Platform, Crowd: Harnessing Our Digital Future, economic gurus Andrew McAfee and Erik Brynjolfsson suggest that we're now in the machine learning age. They point to another momentous occasion that might be as significant as Moore's Law. In March of last year, an AI finally beat a world champion player in Go, winning four out of five games.

Of course, pinpointing the start of the machine learning age is also difficult. Beating Go was a milestone, but my adult-age kids have been relying on GPS in their phones for years. They don't know how to read normal maps, and if they didn't have a phone, they would get lost. They are already relying on a machine that essentially replaces human reasoning. I haven't looked up showtimes for a movie theater in a browser for several years now. I leave that to Siri on my iPhone. I've been using an Amazon Echo speaker to control the thermostat in my home since 2015.

In their book, McAfee and Brynjolfsson make an interesting point about this radical shift. For anyone working in the field of artificial intelligence, leaving the information age behind, we know that this will be a crowdsourced endeavor. It's more than creating an account on Kickstarter. AI comes alive when it has access to the data generated by thousands or millions of users. The more data it has, the better it will be. To beat the Go champion, Google DeepMind used a database of actual human-to-human games. AI cannot exist without crowdsourced data. We see this with chatbots and voicebots. The best bots know how to adapt to the user, know how to use previous discussions as the basis for improved AI.

Even the term machine learning has an implication about crowdsourcing. The machine learns from the crowd, typically by gathering data. We see this play out more vibrantly with autonomous cars than with any other machine learning paradigm. Cars analyze thousands of data points using sensors that watch how people drive on the road. A Tesla Model S is constantly crowdsourcing. Now that GM is testing the self-driving Bolt on real roads, it's clear the entire project is a way to make sure the cars understand all of the real-world variables.

The irony here? The machine age is still human-powered. In the book, the authors explain how the transition from steam power to electric power took a long time. People scoffed at the idea of using electric motors and not a complex system of gears and pulleys. Not everyone was on board. Not everyone saw the value. As we experiment with AI, test and retest the algorithms, and deploy bots into the home and workplace, it's important to always keep in mind that the machines will only improve as the crowdsourced data improves.

We're still in full control. For now.

See the original post here:

The information age is over, welcome to the machine learning age - VentureBeat

We Are At The Dawn of a New Era of Innovation. Will You Still Be Able to Compete? – Inc.com

I recently appeared as a guest on Wharton Professor David Robertson's radio show, Innovation Navigation. David is an old pro and recently published an excellent new book on innovation, The Power of Little Ideas, so it was an interesting, wide ranging discussion that covered a lot of ground.

One of the subjects we touched on was the new era of innovation. For the past few decades, firms have innovated within well understood paradigms, Moore's Law being the most famous, but by no means the only one. This made innovation relatively simple, because we were fairly sure of where technology was going.

Today, however, Moore's Law is nearing its theoretical limits as are lithium-ion batteries. Other technologies, such as the internal combustion engine, will be replaced by new paradigms. So the next few decades are likely to look a whole lot more like the 50s and the 60s than the 90s or the aughts, in which value will shift from developing applications to fundamental technologies.

As Thomas Kuhn explained in The Structure of Scientific Revolutions, we normally work within well established paradigms because they are useful for establishing the rules of the game. Specialists within a particular field can speak a common language, advance the field within well understood parameters and apply their knowledge to solve problems.

For example, Moore's Law established a stable trend of doubling computing power about every 18 months. That made it possible for technology companies to know how much computing power they would have to work with in the coming years and predict, with a fairly high level of accuracy, what they would be able to do with it.

Yet today, chip manufacturing has advanced to the point where, in a few short years, it will be theoretically impossible to fit more transistors on a silicon wafer. There are nascent technologies, such as quantum computing and neuromorphic chips that can replace traditional architectures, but they are not nearly as well understood.

Computing is just one area reaching its theoretical limits. We also need next generation batteries to power our devices, electric cars and the grid. At the same time, new technologies, such as genomics, nanotechnology and robotics are becoming ascendant and even the scientific method is being called into question.

Over the past few decades, technology and innovation have mostly been associated with the computer industry. As noted above, Moore's law has enabled firms to bring out a steady stream of devices and services that improve so quickly that they become virtually obsolete in just a few years. Clearly, these improvements have made our lives better.

Still, as Robert Gordon points out in The Rise and Fall of American Growth, because advancement has been contained so narrowly within a single field, productivity gains have been meager compared to earlier technological revolutions, such as indoor plumbing, electricity and the internal combustion engine.

There are indications that's beginning to change. These days, the world of bits is beginning to invade the world of atoms. More powerful computers are being used for genetic engineering and to design new materials. Robots, both physical and virtual, are replacing human labor for many jobs including high value work in medicine, law and creative tasks.

Yet again, these technologies are still fairly new and not nearly as well understood as traditional technologies. Unlike computer programming, you can't take a course in nanotechnology, genetic engineering or machine learning at your local community college. In many cases, the cost of the equipment and expertise to create these technologies is prohibitive for most organizations.

In the 1950s and 60s, technological advancement brought increased scale to enterprises. Not only did mass production, distribution and marketing require more capital, but improved information and communication technologies made the management of a large enterprise far more feasible than ever before.

So it would stand to reason that this new era of innovation would lead to a similar trend. Only a handful of companies, like IBM, Microsoft, Google in the tech space and corporate giants like Boeing and Procter & Gamble in more conventional categories, can afford to invest billions of dollars in fundamental research.

Yet something else seems to be happening. Cloud technologies and open data initiatives are democratizing scientific research. Consider the Cancer Genome Atlas, a program that sequences the DNA inside tumors and makes it available on the Internet. It allows researchers at small labs to access the same data as major institutions. More recently, the Materials Genome Initiative was established to do much the same for manufacturing.

In fact, today there are a wide variety of ways for small businesses to access world-class scientific research. From government initiatives like the manufacturing hubs and Argonne Design Works to incubator, accelerator and partnership programs at major corporations, the opportunities are endless for those who are willing to explore and engage.

In fact, many large firms that I've talked to have come to see themselves as essentially utility companies, providing fundamental technology and letting smaller firms and startups explore thousands of new business models.

Innovation has come to be seen as largely a matter of agility and adaptation. Small, nimble players can adapt to changing conditions much faster than industry giants. That gives them an advantage over large, bureaucratic firms in bringing new applications to market. When technologies are well understood, much of the value is generated through the interface with the end user.

Consider Steve Jobs' development of the iPod. Although he knew that his vision of "1000 songs in your pocket" was unachievable with available technology, he also knew that it would only be a matter of time before someone developed a hard drive with the specifications he required. When they did, he pounced, built an amazing product and a great business.

He was able to do that for two reasons. First, because the newer, more powerful hard drives worked exactly like the old ones and fit easily into Apple's design process. Second, because the technology was so well understood, the vendor had little ability to extract large margins, even for cutting edge technology.

Yet as I explain in my book, Mapping Innovation, over the next few decades much of the value will shift back to fundamental technologies because they are not well understood, but will be essential for increasing the capability of products and services. They will require highly specialized expertise and will not fit so seamlessly into existing architectures. Rather than agility, exploration will emerge as a key competitive trait.

In short, the ones that will win in this new era will not be those with a capacity to disrupt, but those that are willing to tackle grand challenges and probe new horizons.

View post:

We Are At The Dawn of a New Era of Innovation. Will You Still Be Able to Compete? - Inc.com

Plotting a Moore’s Law for Flexible Electronics – IEEE Spectrum

Photo: IMEC Near Field Communicator: There are 1,700 transistors on the flexible chip in this NFC transmitter.

At a meeting in midtown Manhattan, Kris Myny picks up what looks like an ordinary paper business card and, with little fanfare, holds it to his smartphone. The details of the card appear almost immediately on the screen inside a custom app.

It's a simple demonstration, but Myny thinks it heralds an exciting future for flexible circuitry. In January, he began a five-year project at the nanoelectronics research institute Imec in Leuven, Belgium, to demonstrate that thin-film electronics has significant potential outside the realm of display electronics. In fact, he hopes that the project, funded with a €1.5 million grant from the European Research Council (ERC), could demonstrate that there is a path for the mass production of denser and denser flexible circuits: in other words, a Moore's Law for bendable ICs.

Five years ago, Myny and his colleagues reported that they had used organic thin-film transistors to build an 8-bit microprocessor on flexible plastic. In the years since, the group has turned its focus to IGZO, a metal-oxide semiconductor that is a mixture of indium, gallium, zinc, and oxygen. Thin-film transistors based on this substance can move charge significantly faster than their organic counterparts do; at the same time the transistors can still be built at or around room temperature, an important requirement when attempting to fabricate electronics directly onto plastic and other materials that can be easily deformed or damaged by heat.

To build that business card, Myny and his colleagues engineered a flexible chip containing more than 1,700 thin-film IGZO transistors. What sets the chip apart from other efforts is its ability to comply with the ISO14443-A Near Field Communication (NFC) standard. For flexible circuitry, this is a demanding set of requirements, Myny says, as it requires logic gates that are fast enough to work with the 13.56-megahertz standard carrier frequency.

Adding to the challenge is that while IGZO is an effective n-type semiconductor, allowing electrons to flow easily, it is not a particularly good p-type material; there is no comparable material that excels at permitting the flow of holes, the absences of electrons that are treated as positive charges. Today's logic uses both p- and n-type devices; the complementary pairing helps control power consumption by preventing the flow of current when transistors are not in the act of switching. With just n-type devices to work with, Myny and his colleagues have to devise a different kind of circuitry.

With the ERC project, Imec aims to tackle a suite of interrelated problems in an effort to boost transistor density from 5,000 or so devices per square centimeter to 100,000. That figure isn't far from the density of thin-film transistors in conventional rigid-display backplanes today, Myny says. However, it's another matter to try to achieve that density with digital logic circuits, which require more complicated designs, and to make sure those devices are reliable and consistent when they're built on a delicate and irregular substrate.

The group also wants to prove this density is achievable outside the lab, by adapting manufacturing techniques that are already in use in display fabs. Myny says that if he and his team hit their goals, a square centimeter of fast, flexible circuitry could be built at a cost of 1 U.S. cent (assuming high-volume manufacturing). At the same time, while the density of the circuits increases, the group will also have to boost the transistor frequency and drive down power consumption to prevent overheating. The overall goal, Myny says, is to demonstrate that you can indeed make flexible circuits, that it is not science fiction but that it is going to market.
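Some rough arithmetic on those targets puts them in context; the density and cost figures come from the article, while the derived numbers below are simple division, not Imec's own projections.

```python
import math

current_density = 5_000    # devices per square centimeter today (from the article)
target_density = 100_000   # devices per square centimeter goal
cost_per_cm2 = 0.01        # projected high-volume cost in USD

print(f"Density increase: {target_density / current_density:.0f}x "
      f"(~{math.log2(target_density / current_density):.1f} doublings)")
print(f"Implied cost per device at the target density: "
      f"${cost_per_cm2 / target_density:.2e}")
```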

"When it comes to the fabrication of complex digital circuits on flexible substrates, Imec is in my opinion the biggest player," says Niko Münzenrieder, a lecturer at the University of Sussex, in England, who specializes in flexible electronics. He notes that metal-oxide flexible circuitry is already starting to make commercial inroads, and he expects the first big applications to be in RFID and NFC technology. "It's not a mature technology," he says, "but it's nearly ready for everyday use."

Go here to read the rest:

Plotting a Moore's Law for Flexible Electronics - IEEE Spectrum

What Is the Future of Computers? | Moore’s Law

Integrated circuit from an EPROM memory microchip showing the memory blocks and supporting circuitry.

In 1958, a Texas Instruments engineer named Jack Kilby cast a pattern onto the surface of an 11-millimeter-long "chip" of semiconducting germanium, creating the first ever integrated circuit. Because the circuit contained a single transistor, a sort of miniature switch, the chip could hold one "bit" of data: either a 1 or a 0, depending on the transistor's configuration.

Since then, and with unflagging consistency, engineers have managed to double the number of transistors they can fit on computer chips every two years. They do it by regularly halving the size of transistors. Today, after dozens of iterations of this doubling and halving rule, transistors measure just a few atoms across, and a typical computer chip holds 9 million of them per square millimeter. Computers with more transistors can perform more computations per second (because there are more transistors available for firing), and are therefore more powerful. The doubling of computing power every two years is known as "Moore's law," after Gordon Moore, the Intel engineer who first noticed the trend in 1965.
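A small sketch makes the halving arithmetic concrete: if the transistor count on a fixed die area doubles each generation, each device's area halves, so its linear dimension shrinks by a factor of the square root of two. The starting size and number of generations below are hypothetical, chosen only for illustration.

```python
import math

feature_nm = 1_000.0   # hypothetical starting linear size, in nanometres
for gen in range(1, 11):
    feature_nm /= math.sqrt(2)   # area halves, so linear size shrinks by ~29%
    print(f"generation {gen:2d}: ~{feature_nm:7.1f} nm")
```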

Moore's law renders last year's laptop models defunct, and it will undoubtedly make next year's tech devices breathtakingly small and fast compared to today's. But consumerism aside, where is the exponential growth in computing power ultimately headed? Will computers eventually outsmart humans? And will they ever stop becoming more powerful?

The singularity

Many scientists believe the exponential growth in computing power leads inevitably to a future moment when computers will attain human-level intelligence: an event known as the "singularity." And according to some, the time is nigh.

Physicist, author and self-described "futurist" Ray Kurzweil has predicted that computers will come to par with humans within two decades. He told Time Magazine last year that engineers will successfully reverse-engineer the human brain by the mid-2020s, and by the end of that decade, computers will be capable of human-level intelligence.

The conclusion follows from projecting Moore's law into the future. If the doubling of computing power every two years continues to hold, "then by 2030 whatever technology we're using will be sufficiently small that we can fit all the computing power that's in a human brain into a physical volume the size of a brain," explained Peter Denning, distinguished professor of computer science at the Naval Postgraduate School and an expert on innovation in computing. "Futurists believe that's what you need for artificial intelligence. At that point, the computer starts thinking for itself."

What happens next is uncertain and has been the subject of speculation since the dawn of computing.

"Once the machine thinking method has started, it would not take long to outstrip our feeble powers," Alan Turing said in 1951 at a talk entitled "Intelligent Machinery: A heretical theory," presented at the University of Manchester in the United Kingdom. "At some stage therefore we should have to expect the machines to take control." The British mathematician I.J. Good hypothesized that "ultraintelligent" machines, once created, could design even better machines. "There would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make," he wrote.

Buzz about the coming singularity has escalated to such a pitch that there's even a book coming out next month, called "Singularity Rising" (BenBella Books), by James Miller, an associate professor of economics at Smith College, about how to survive in a post-singularity world.

Brain-like processing

But not everyone puts stock in this notion of a singularity, or thinks we'll ever reach it. "A lot of brain scientists now believe the complexity of the brain is so vast that even if we could build a computer that mimics the structure, we still don't know if the thing we build would be able to function as a brain," Denning told Life's Little Mysteries. Perhaps without sensory inputs from the outside world, computers could never become self-aware.

Others argue that Moore's law will soon start to break down, or that it has already. The argument stems from the fact that engineers can't miniaturize transistors much more than they already have, because they're already pushing atomic limits. "When there are only a few atoms in a transistor, you can no longer guarantee that a few atoms behave as they're supposed to," Denning explained. On the atomic scale, bizarre quantum effects set in. Transistors no longer maintain a single state represented by a "1" or a "0," but instead vacillate unpredictably between the two states, rendering circuits and data storage unreliable. The other limiting factor, Denning says, is that transistors give off heat when they switch between states, and when too many transistors, regardless of their size, are crammed together onto a single silicon chip, the heat they collectively emit melts the chip.

For these reasons, some scientists say computing power is approaching its zenith. "Already we see a slowing down of Moore's law," the theoretical physicist Michio Kaku said in a BigThink lecture in May.

But if that's the case, it's news to many. Doyne Farmer, a professor of mathematics at Oxford University who studies the evolution of technology, says there is little evidence for an end to Moore's law. "I am willing to bet that there is insufficient data to draw a conclusion that a slowing down [of Moore's law] has been observed," Farmer told Life's Little Mysteries. He says computers continue to grow more powerful as they become more brain-like.

Computers can already perform individual operations orders of magnitude faster than humans can, Farmer said; meanwhile, the human brain remains far superior at parallel processing, or performing multiple operations at once. For most of the past half-century, engineers made computers faster by increasing the number of transistors in their processors, but they only recently began "parallelizing" computer processors. To work around the fact that individual processors can't be packed with extra transistors, engineers have begun upping computing power by building multi-core processors, or systems of chips that perform calculations in parallel. "This controls the heat problem, because you can slow down the clock," Denning explained. "Imagine that every time the processor's clock ticks, the transistors fire. So instead of trying to speed up the clock to run all these transistors at faster rates, you can keep the clock slow and have parallel activity on all the chips." He says Moore's law will probably continue because the number of cores in computer processors will go on doubling every two years.
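As a loose illustration of the parallelism Denning describes, the sketch below splits one workload across several worker processes instead of relying on a single faster core. The workload, summing squares, and the choice of four workers are stand-ins picked for illustration only.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, cores = 10_000_000, 4
    step = n // cores
    # Split the range into equal chunks, one per worker process.
    chunks = [(i * step, (i + 1) * step) for i in range(cores)]
    with Pool(cores) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```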

And because parallelization is the key to complexity, "In a sense multi-core processors make computers work more like the brain," Farmer told Life's Little Mysteries.

And then there's the future possibility of quantum computing, a relatively new field that attempts to harness the uncertainty inherent in quantum states in order to perform vastly more complex calculations than are feasible with today's computers. Whereas conventional computers store information in bits, quantum computers store information in qubits: particles, such as atoms or photons, whose states are "entangled" with one another, so that a change to one of the particles affects the states of all the others. Through entanglement, a single operation performed on a quantum computer theoretically allows the instantaneous performance of an inconceivably huge number of calculations, and each additional particle added to the system of entangled particles doubles the performance capabilities of the computer.
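To see why each added particle doubles the performance capability, and why such systems are so hard for ordinary machines to mimic, a short sketch tallies the 2^n amplitudes needed to describe an n-qubit state. The 16 bytes per amplitude assumes a double-precision complex number, and the qubit counts are arbitrary examples; neither figure comes from the article.

```python
for n_qubits in (30, 40, 50):
    amplitudes = 2 ** n_qubits
    gib = amplitudes * 16 / 2**30   # 16 bytes per complex amplitude
    print(f"{n_qubits} qubits: 2^{n_qubits} = {amplitudes:,} amplitudes, "
          f"~{gib:,.0f} GiB to track on a classical machine")
```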

If physicists manage to harness the potential of quantum computers, something they are struggling to do, Moore's law will certainly hold far into the future, they say.

Ultimate limit

If Moore's law does hold, and computer power continues to rise exponentially (either through human ingenuity or under its own ultraintelligent steam), is there a point when the progress will be forced to stop? Physicists Lawrence Krauss and Glenn Starkman say "yes." In 2005, they calculated that Moore's law can only hold so long before computers actually run out of matter and energy in the universe to use as bits. Ultimately, computers will not be able to expand further; they will not be able to co-opt enough material to double their number of bits every two years, because the universe will be accelerating apart too fast for them to catch up and encompass more of it.

So, if Moore's law continues to hold as accurately as it has so far, when do Krauss and Starkman say computers must stop growing? Projections indicate that the computer will encompass the entire reachable universe, turning every bit of matter and energy into a part of its circuit, in 600 years' time.
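A back-of-the-envelope check shows the scale that figure implies. The 600-year horizon and two-year doubling come from the text above; the rough count of atoms in the observable universe is a commonly quoted outside estimate, not a number from Krauss and Starkman's paper.

```python
doublings = 600 // 2                 # one doubling every two years
growth_factor = 2.0 ** doublings
print(f"{doublings} doublings -> growth factor ~{growth_factor:.2e}")
print("Atoms in the observable universe, rough outside estimate: ~1e80")
```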

That might seem very soon. "Nevertheless, Moore's law is an exponential law," Starkman, a physicist at Case Western Reserve University, told Life's Little Mysteries. You can only double the number of bits so many times before you require the entire universe.

Personally, Starkman thinks Moore's law will break down long before the ultimate computer eats the universe. In fact, he thinks computers will stop getting more powerful in about 30 years. Ultimately, there's no telling what will happen. We might reach the singularity, the point when computers become conscious, take over, and then start to self-improve. Or maybe we won't. This month, Denning has a new paper out in the journal Communications of the ACM, called "Don't feel bad if you can't predict the future." It's about all the people who have tried to do so in the past, and failed.

This story was provided by Life's Little Mysteries, a sister site to LiveScience.

Read this article:

What Is the Future of Computers? | Moore's Law

Moore’s Law Meet Darwin – JWN

Image: Pixabay

"It is not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change. We must, however, acknowledge, as it seems to me, that man with all his noble qualities ... still bears in his bodily frame the indelible stamp of his lowly origin." (Charles Darwin, On the Origin of Species)

Our species and technology are at a crossroads. Our amazing human ingenuity has created amazing machines and, for the first time in our species' existence, those machines are starting to create other, smarter machines at scale. The changes are exponential and combinatorial. It is a phenomenon with which many non-nerds are now becoming familiar, called Moore's Law.

Simply stated, we have observed over many decades that the price/performance of computation is doubling approximately every two years, what is commonly called an exponential function. The scary/exciting thing is that it is happening not only to computation power (i.e. the speed of computers, phones, etc.) but to other technologies like the capacity of solar power, the capabilities of gene sequencing technologies and the like. In addition, these technologies are starting to combine in the many companies of the new global entrepreneurial economy. Most worrisome is that these exponential technologies, once invented, never disappear. These genies are never placed back in the bottle. Good or evil.

But the human animal, both physically and culturally, is still evolving at Darwin's pace. Our foundational structures of governance and education (for example) have to date only been able to respond in linear ways, at best. At a more atomic level, perhaps the most linear of all of our systems is our ability as individual humans to adapt to new ideas, people and cultures. This resulting gap, called disruption by many, is growing and is unsustainable. Left unchecked, it will likely not end well.


The answer, it seems to me, is that the human species needs to figure out how to adapt, exponentially.

There are many ways of thinking that can help us move forward. Here's one:

Complex Adaptive Systems: Sustainability & Capacity before Velocity

In Alberta, we have adopted a concept for our innovation ecosystem that is based on the lessons of Complex Adaptive Systems, called the Rainforest Framework. It is centered on the seminal book by Victor Hwang and Greg Horowitt, whose research informs us that the key to advancing an ecosystem, small or large, is to start with Culture. The authors note that if we are able to create a culture of trust, pay-it-forward and the like, and make it an explicit prerequisite of participation, then we can step on the gas with great effect. Velocity, whether linear or exponential, can only happen when individuals are acting in a way that eliminates the friction of mistrust, winner-take-all philosophies and lack of diversity.

So how does this ecosystem experiment we are conducting in Alberta relate to the broader challenges of Darwin and Moore's Law? The answer, we believe, is to change the way we understand and define our primary adaptive strategy, namely innovation. We are experimenting with a new definition of innovation that meets hyper-disruption head on. This new definition suggests that we cannot move forward as a species unless we move technology and governance together.

In short our new definition of innovation is:

The advancement of the human condition through changes in technology matched by equal or greater advancement in social governance.

Simply advancing the technologies that make our world easier and more comfortable for an unbalanced few is not enough. As we measure technical advancement, we must also include measures of how we are progressing socially. Our ability to sustain and capacity to absorb technological change MUST be present to increase the velocity; otherwise we will continue our march to inequality and unsustainable growth. In a previous blog I called it the Innovation of Ways versus the Innovation of Things.

We can see this everywhere in our history. The environmental movement of the '60s, for example, which came on the heels of three decades of uninterrupted post-war growth, is the kind of slow corrective that is no longer possible in an era of exponential change. This is a critical difference: the speed of change today is exponential and combinatorial, meaning we can't wait for social movements and institutions to catch up.

Here in Alberta we are adopting a new social contract and creating a common, collective voice that is beginning to bridge economic, political and cultural silos. Our belief is that once established, we will, if we continue to embrace its philosophies, be able to push harder and move faster. We will be able to overlay a culture of entrepreneurship within this full definition of innovation and create significantly increased velocity and change.

I recently watched an excellent TED talk by Dan Pallotta called "The Dream We Haven't Dared to Dream." In his talk, Dan eloquently notes that while human ingenuity has exponentially increased the transistors on a chip for the past 40 years, we have not applied the same exponential thinking to our dreams or to human compassion. As he says, we continue to make a perverse trade-off between our future dreams and our present state of evolution.

Some describe this ethical stasis as "the tyranny of the OR."

As the great Stephen Hawking said:

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

It begins with a new definition of innovation that matches Darwin to the relentless march of Moore's Law.

About Jim Gibson

Jim Gibson is a Calgary-based serial entrepreneur, author, and active leader in the Alberta innovation ecosystem, and the founder of the Rainforest Movement (www.rainforestab.ca), which seeks to move the needle forward in the culture of innovation in Alberta and Canada. His blog on innovation is at http://thespear.co and his upcoming book, The Tip of the Spear: Our Species and Technology at a Crossroads, will be available in the fall.

Read the original here:

Moore's Law Meet Darwin - JWN

3-in-1 device offers alternative to Moore’s law – Phys.Org

June 14, 2017, by Lisa Zyga. Illustration of the reconfigurable device with three buried gates, which can be used to create n- or p-type regions in a single semiconductor flake. Credit: Dhakras et al. 2017 IOP Publishing Ltd

In the semiconductor industry, there is currently one main strategy for improving the speed and efficiency of devices: scale down the device dimensions in order to fit more transistors onto a computer chip, in accordance with Moore's law. However, the number of transistors on a computer chip cannot exponentially increase forever, and this is motivating researchers to look for other ways to improve semiconductor technologies.

In a new study published in Nanotechnology, a team of researchers at SUNY Polytechnic Institute in Albany, New York, has suggested that combining multiple functions in a single semiconductor device can improve device functionality and reduce fabrication complexity, thereby providing an alternative to scaling down the device's dimensions as the only method to improve functionality.

To demonstrate, the researchers designed and fabricated a reconfigurable device that can morph into three fundamental semiconductor devices: a p-n diode (which functions as a rectifier, for converting alternating current to direct current), a MOSFET (for switching), and a bipolar junction transistor (or BJT, for current amplification).

"We are able to demonstrate the three most important semiconductor devices (p-n diode, MOSFET, and BJT) using a single reconfigurable device," coauthor Ji Ung Lee at the SUNY-Polytechnic Institute told Phys.org. "While these devices can be fabricated individually in modern semiconductor fabrication facilities, often requiring complex integration schemes if they are to be combined, we can form a single device that can perform the functions of all three devices."

The multifunctional device is made of two-dimensional tungsten diselenide (WSe2), a recently discovered transition metal dichalcogenide semiconductor. This class of materials is promising for electronics applications because the bandgap is tunable by controlling the thickness, and it becomes a direct bandgap in single-layer form. The bandgap is one of the advantages of 2D transition metal dichalcogenides over graphene, which has zero bandgap.

In order to integrate multiple functions into a single device, the researchers developed a new doping technique. Since WSe2 is such a new material, until now there has been a lack of doping techniques. Through doping, the researchers could realize properties such as ambipolar conduction, which is the ability to conduct both electrons and holes under different conditions. The doping technique also means that all three of the functionalities are surface-conducting devices, which offers a single, straightforward way of evaluating their performance.

"Instead of using traditional semiconductor fabrication techniques that can only form fixed devices, we use gates to dope," Lee said. "These gates can dynamically change which carriers (electrons or holes) flow through the semiconductor. This ability to change allows the reconfigurable device to perform multiple functions.

"In addition to implementing these devices, the reconfigurable device can potentially implement certain logic functions more compactly and efficiently. This is because adding gates, as we have done, can save overall area and enable more efficient computing."

In the future, the researchers plan to further investigate the applications of these multifunctional devices.

"We hope to build complex computer circuits with fewer device elements than those using the current semiconductor fabrication process," Lee said. "This will demonstrate the scalability of our device for the post-CMOS era."

More information: Prathamesh Dhakras, Pratik Agnihotri, and Ji Ung Lee. "Three fundamental devices in one: a reconfigurable multifunctional device in two-dimensional WSe2." Nanotechnology. DOI: 10.1088/1361-6528/aa7350

Journal reference: Nanotechnology

2017 Phys.org

Go here to see the original:

3-in-1 device offers alternative to Moore's law - Phys.Org

Donald Trump’s coal bet faces Moore’s Law – Livemint

Donald Trump justified his decision to withdraw from the Paris climate deal by claiming that compliance would impose crippling economic burdens on the US. "I happen to love the coal miners," Trump declared, before reaffirming his intention to make the fossil fuel the centrepiece of the nation's energy policy.

The backlash against the president was ferocious, but mainly focused on his lack of concern for the catastrophic effects of climate change. Far less attention has been directed at his conviction that coal will be cheaper than renewable sources of energy in the foreseeable future.

This is a question, luckily, that history can help answer. Recent research suggests that certain technologies introduced over the past two centuries exhibit very predictable rates of advancement, becoming more efficient, and thus cheaper, at a steady clip. And solar energy is one of those technologies. Looking into the past can give us a glimpse of the future.

In 1965, Gordon Moore, one of the founders of chip giant Intel, noticed that the number of transistors per integrated circuit doubled every two years on average, with corresponding advances in speed and declines in cost. This quickly became known as Moore's Law. In the succeeding half century, Moore's Law has held up, with the cost of computing power plunging dramatically over the years.

Last year, two economists named J. Doyne Farmer and Francois Lafond published an intriguing paper that riffed off Moore's Law. Many technologies, they correctly observed, followed a generalized version of Moore's Law in which costs tend to drop exponentially. Some technologies, however, do not follow this model, and it can be hard to distinguish between them. Past performance, in other words, is not always predictive of future results.
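
In rough terms, and only as a sketch of the standard form used in this literature rather than a quotation from the Farmer-Lafond paper, the generalized law says that the logarithm of unit cost drifts downward at a technology-specific rate plus noise:

```latex
% Generalized Moore's Law as a noisy exponential decline in unit cost
% (a sketch of the standard form; mu and epsilon_t are not figures
% taken from the article).
\log c_t = \log c_{t-1} - \mu + \epsilon_t
\quad\Longrightarrow\quad
c_t \approx c_0\, e^{-\mu t}
% where mu is the average annual rate of cost decline for that
% technology and epsilon_t is random noise around the trend.
```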

In order to sort out the ones that follow a version of Moore's Law from the ones that don't, the researchers engaged in an interesting thought experiment. They selected 53 very different technologies across a range of sectors and built a deep database of historical unit costs for producing milk; sequencing DNA; making laser diodes, formaldehyde, acrylic fibre, transistors, and many other things; and generating electricity from nuclear, coal, and solar.

They then engaged in a statistical method called hindcasting. This entails going back to various points in the past for each technology, taking whatever trend existed at the time, and then extrapolating it into the future. They then took this prediction and compared it to what actually happened. This has the virtue of actually testing the predictive power of the data rather than fitting the data to a model. Moreover, it gives some insights into the accuracy of future forecasts. After all, the authors note, a sceptic who looks at the trends in the cost of solar and coal would rightfully respond, "How do we know that the historical trend will continue? Isn't it possible that things will reverse, and over the next 20 years coal will drop dramatically in price and solar will go back up?" Hindcasting offers a way to answer that question in quantitative terms.
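
The procedure can be sketched in a few lines of code: fit the historical trend only on data up to a cutoff year, extrapolate it forward, and score the extrapolation against what actually happened afterwards. The sketch below uses illustrative data and hypothetical function names, not the authors' code or dataset.

```python
# Minimal hindcasting sketch: fit a log-linear trend on costs observed up
# to a cutoff year, extrapolate it, and compare with the costs actually
# observed afterwards. Data and names are illustrative, not from the paper.
import math

def hindcast(years, costs, cutoff_year, horizon):
    """Fit log(cost) = a + b*year on data up to cutoff_year, then
    return predicted costs for the next `horizon` years."""
    xs = [y for y in years if y <= cutoff_year]
    ys = [math.log(c) for y, c in zip(years, costs) if y <= cutoff_year]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return {cutoff_year + k: math.exp(a + b * (cutoff_year + k))
            for k in range(1, horizon + 1)}

# Toy series: a cost falling about 10% per year (noiseless for simplicity).
years = list(range(2000, 2011))
costs = [100 * 0.9 ** (y - 2000) for y in years]
predicted = hindcast(years, costs, cutoff_year=2005, horizon=5)
for y in sorted(predicted):
    actual = 100 * 0.9 ** (y - 2000)
    print(y, f"predicted {predicted[y]:.1f}", f"actual {actual:.1f}")
```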

And the answers are rather interesting. The researchers found that many technologies don't follow a robust version of Moore's Law, even if the cost per unit can fluctuate a great deal in the short term. The costs of chemicals, household goods, and many other products don't stay the same, but they fluctuate in a random fashion, going up for a number of years and then going back down again. Others, like transistors and DNA sequencing, are eerily predictable.

Energy, on the other hand, is a mixed bag. The current unit cost of coal is approximately the same as it was in the year 1890 in inflation-adjusted terms. It has, however, fluctuated randomly over time by a factor of three, exhibiting short-term trends that eventually reverse themselves. The same is true of gas and oil. Nuclear has also fluctuated, but is actually more expensive now than when it was first introduced in the 1950s. In short, there's no equivalent of Moore's Law when it comes to fossil fuels and nuclear power.

Which brings us to solar. Here the trend has been unmistakable, with the price per unit dropping a very steady 10% per year. This has been a very rapid decline with little variability. Despite changes in demand and the ebb and flow of government subsidies, solar has steadily dropped in cost.

This very Moore-ish trajectory permits us to make reasonably secure predictions about the future cost of solar power. There's a very slim chance those predictions could be wrong, but compared to predicting the cost of coal, which is akin to spinning a roulette wheel, we can get some glimpse of the future.

And that future will almost certainly be dominated by solar, not because it's green, but because it's cheap. Indeed, the authors' data suggests that there's a fifty-fifty chance that solar will become competitive with coal as early as 2024; there's a good chance that could happen even sooner. Indeed, it already has in some countries.
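
As a rough arithmetic check of that timeline, and with purely illustrative starting prices rather than figures from the article: if solar falls a steady 10 percent a year while coal stays roughly flat, any plausible initial price gap closes within a decade or so.

```python
# Rough crossover arithmetic (illustrative starting prices, not from the
# article): solar cost falls ~10% per year, coal stays roughly flat.
def years_to_crossover(solar_cost, coal_cost, decline=0.10):
    """Return how many whole years of a steady `decline` it takes for
    solar to drop below the (flat) coal cost."""
    years = 0
    while solar_cost > coal_cost:
        solar_cost *= (1 - decline)
        years += 1
    return years

# If solar starts at twice the cost of coal, parity takes about 7 years;
# at three times the cost, about 11 years.
print(years_to_crossover(2.0, 1.0))   # 7
print(years_to_crossover(3.0, 1.0))   # 11
```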

In the near future, it will likely be the coal industry that will need subsidies to compete with solar, not the other way around.

Trump can love coal miners all he wants. But he cannot stop solar from becoming the cheapest energy source any more than he could have halted the rise of ever cheaper, more powerful computers. He's going to lose, again. Bloomberg

Stephen Mihm is an associate professor of history at the University of Georgia

Comments are welcome at theirview@livemint.com

First Published: Tue, Jun 13 2017, 12:14 AM IST

Excerpt from:

Donald Trump's coal bet faces Moore's Law - Livemint - Livemint

Beyond Moore’s Law and further – evertiq.com

HIVE comes at a time when the microsystems technology community is facing an array of long-anticipated obstacles to its relentless and storied decades-long march of progress.

"For nearly seventy years, the United States has enjoyed the economic and security advantages that have come from national leadership in electronics innovation," said Bill Chappell, director of DARPA's* Microsystems Technology Office (MTO), which will lead the new effort. "If we want to remain out front, we need to foment an electronics revolution that does not depend on traditional methods of achieving progress. That's the point of this new initiative: to embrace progress through circuit specialization and to wrangle the complexity of the next phase of advances, which will have broad implications for both commercial and national defense interests."

There always has been a finish line on the horizon. The saga of electronics miniaturisation that has yielded ever more computing power at ever-lower unit costs, represented by the famed Moore's Law (named after Intel's co-founder Gordon Moore), has always been destined to encounter the limitations of both physics and economics. As this inflection point nears, continued progress in microelectronics will require a new phase of innovation to keep the modern miracle of electronics innovation moving forward.

DARPA's Microsystems Technology Office created the Hierarchical Identify Verify & Exploit (HIVE) program to develop new technologies to realize 1,000x performance-per-watt gains in the ability to handle graph analytics. Intel's Data Center Group (DCG), Platform Engineering Group (PEG) and Intel Labs will work as one of the hardware architecture research performers for DARPA HIVE, with a joint research program between Intel and DARPA valued at more than USD 100 million during a four-year effort.

"By mid-2021, the goal of HIVE is to provide a 16-node demonstration platform showcasing a 1,000x performance-per-watt improvement over today's best-in-class hardware and software for graph analytics workloads," said Dhiraj Mallick, vice president of the Data Center Group and general manager of the Innovation Pathfinding and Architecture Group at Intel. "Intel's interest and focus in the area may lead to commercial products featuring components of this pathfinding technology much sooner."

Image Caption: The patchwork of microelectronic dies represents work performed by a multitude of university groups that participated in previous DARPA-industry-academe collaborations. DARPA's new electronics initiative is pushing for a new era of microsystem structures and capabilities.

* DARPA = Defense Advanced Research Projects Agency

See the original post here:

Beyond Moore's Law and further - evertiq.com

Trump’s Coal Bet Faces a Tough Foe: Moore’s Law – Bloomberg

Forward progress.

Donald Trump justified his decision to withdraw from the Paris Climate Agreement by claiming that compliance would impose crippling economic burdens on the United States. "I happen to love the coal miners," Trump declared, before reaffirming his intention to make the fossil fuel the centerpiece of the nation's energy policy.

The backlash against the president was ferocious, but mainly focused on his lack of concern for the catastrophic effects of climate change. Far less attention has been directed at his conviction that coal will be cheaper than renewable sources of energy in the foreseeable future.

This is a question, luckily, that history can help answer. Recent research suggests that certain technologies introduced over the past two centuries exhibit very predictable rates of advancement, becoming more efficient -- and thus cheaper -- at a steady clip. And solar energy is one of those technologies. Looking into the past can give us a glimpse of the future.

In 1965, Gordon Moore, one of the founders of chip giant Intel, noticed that the number of transistors per integrated circuit doubled every two years on average, with corresponding advances in speed and declines in cost. This quickly became known as Moore's Law. In the succeeding half century, Moore's Law has held up, with the cost of computing power plunging dramatically over the years.

Last year, two economists published an intriguing paper that riffed off Moore's Law. Many technologies, they correctly observed, followed a generalized version of Moore's Law in which costs tend to drop exponentially. Some technologies, however, do not follow this model, and it can be hard to distinguish between them. Past performance, in other words, is not always predictive of future results.

In order to sort out the ones that follow a version of Moores Law from the ones that dont, the researchers engaged in an interesting thought experiment. They selected 53 very different technologies across a range of sectors and built a deep database of historical unit costs for producing milk; sequencing DNA; making laser diodes, formaldehyde, acrylic fiber, transistors, and many other things; and electricity from nuclear, coal, and solar.

They then engaged in a statistical method called hindcasting. This entails going back to various points in the past for each technology, taking whatever trend existed at the time, and then extrapolating it into the future. They then took this prediction and compared it to what actually happened. This has the virtue of actually testing the predictive power of the data rather than fitting the data to a model.

Moreover, it gives some insights into the accuracy of future forecasts. After all, the authors note, a skeptic who looks at the trends in the cost of solar and coal would rightfully respond, "How do we know that the historical trend will continue? Isn't it possible that things will reverse, and over the next 20 years coal will drop dramatically in price and solar will go back up?" Hindcasting offers a way to answer that question in quantitative terms.

And the answers are rather interesting. The researchers found that many technologies don't follow a robust version of Moore's Law, even if the cost per unit can fluctuate a great deal in the short term. The costs of chemicals, household goods, and many other products don't stay the same, but they fluctuate in a random fashion, going up for a number of years and then going back down again. Others, like transistors and DNA sequencing, are eerily predictable.

Energy, on the other hand, is a mixed bag. The current unit cost of coal is approximately the same as it was in the year 1890 in inflation-adjusted terms. It has, however, fluctuated randomly over time by a factor of three, exhibiting short-term trends that eventually reverse themselves. The same is true of gas and oil. Nuclear has also fluctuated, but is actually more expensive now than when it was first introduced in the 1950s. In short, there's no equivalent of Moore's Law when it comes to fossil fuels and nuclear power.

Which brings us to solar. Here the trend has been unmistakable, with the price per unit dropping a very steady 10 percent per year. This has been a very rapid decline with little variability. Despite changes in demand and the ebb and flow of government subsidies, solar has steadily dropped in cost.

This very Moore-ish trajectory permits us to make reasonably secure predictions about the future cost of solar power. There's a very slim chance those predictions could be wrong, but compared to predicting the cost of coal -- which is akin to spinning a roulette wheel -- we can get some glimpse of the future.

And that future will almost certainly be dominated by solar -- not because it's green, but because it's cheap. Indeed, the authors' data suggests that there's a fifty-fifty chance that solar will become competitive with coal as early as 2024; there's a good chance that could happen even sooner. Indeed, it already has in some countries.

In the near future, it will likely be the coal industry that will need subsidies to compete with solar, not the other way around.

Trump can love coal miners all he wants. But he cannot stop solar from becoming the cheapest energy source any more than he could have halted the rise of ever cheaper, more powerful computers. He's going to lose -- again.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

To contact the author of this story: Stephen Mihm at smihm1@bloomberg.net

To contact the editor responsible for this story: Mike Nizza at mnizza3@bloomberg.net

View original post here:

Trump's Coal Bet Faces a Tough Foe: Moore's Law - Bloomberg

IBM Up Against Moore’s Law with 5nm Chip – All About Circuits

Moore's Law will remain relevant for a few more years, with recent news that IBM, GlobalFoundries, and Samsung have succeeded in fitting 30 billion transistors onto a single 5nm chip.

With concerns that the trend Moore's Law describes, in which the number of transistors able to fit on a single chip doubles roughly every two years, is slowing down or becoming obsolete, research into high-performance and parallel computing has begun to gain traction as one way to ensure that computing power continues to expand while power and resource requirements shrink.

IBM managed this feat by using a new type of transistor: the Gate-All-Around Field-Effect Transistor (GAAFET). It is related to the Fin Field-Effect Transistors (FinFETs) found in 7nm chips. Both are three-dimensional structures that give the gate more silicon to control, but in the case of the GAAFET the channel is built from three stacked nanosheet layers of silicon.

Manufacturing chips this small is a complicated task. However, extreme ultraviolet (EUV) lithography makes the process more precise and manageable, which is part of the breakthrough that allowed IBM to create 5nm chips.

Tech industry analysts in the past did not predict a 5nm chip would be possible before the early 2020s. While IBM and its partners still have some time to go before they are likely to begin producing the 5nm chip, it is still a head start in the industry.

Currently, 10nm is the smallest process available on the market, with smartphones typically using 14nm Qualcomm chips. The 5nm chip promises to be significantly faster and more power efficient than its 10nm predecessor (up to 40 percent faster at the same power, or up to 75 percent more power efficient at the same performance); IBM believes this could add days to the battery life of phones.

There has been speculation that the GAAFET could be scaled down to as small as 3nm. However, the more a chip is scaled down, the higher the potential for problems with the physical limitations of transistors that small, including added complexity in manufacturing.

Talking about 3nm chips may be getting ahead of ourselves, since 7nm chips have yet to enter the market, and 10nm chips have just begun to appear.

Samsung only started manufacturing 10nm chips last October, which were reported to have a 27 percent increase in performance and a 40 percent increase in power efficiency compared to 14nm chips.

Coffee Lake, Intel's codename for its 8th-generation Core processors, is the successor to the Kaby Lake 14nm microarchitecture. Reportedly, it refines the manufacturing process and delivers up to a 30% increase in performance compared to Kaby Lake. The recently announced Intel Core i9 chips will also use a 14nm process.

However, Intel argues that process-node names are not the best way to describe the actual improvements in a chip's density, and that Samsung's 10nm chips are still roughly equivalent to Intel's 14nm process before its refinement. As an alternative, Intel has proposed a formula that focuses on standard logic cell density, weighted according to a typical chip design.
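
This is most likely a reference to the transistor-density metric Intel proposed in 2017, which weights two standard cells in roughly the proportions they appear in typical logic designs. The sketch below reproduces that metric as published; the 0.6/0.4 weights are Intel's figures, not something stated in this article.

```latex
% Intel's proposed logic-transistor-density metric (2017 proposal):
% a weighted average of a simple cell (2-input NAND) and a complex
% cell (scan flip-flop), in millions of transistors per square millimetre.
\text{density} \;=\;
  0.6 \times \frac{\text{NAND2 transistor count}}{\text{NAND2 cell area}}
  \;+\;
  0.4 \times \frac{\text{scan flip-flop transistor count}}{\text{scan flip-flop cell area}}
```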

Follow this link:

IBM Up Against Moore's Law with 5nm Chip - All About Circuits

A new way to extend Moore’s law – The Economist

Continued here:

A new way to extend Moore's law - The Economist