How to Pick a Winning March Madness Bracket – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Introduction

In 2019, over 40 million Americans wagered money on March Madness brackets, according to the American Gaming Association. Most of this money was bet in bracket pools, which consist of a group of people each entering their predictions of the NCAA tournament games along with a buy-in. The bracket that comes closest to being right wins. If you also consider the bracket pools where only pride is at stake, the number of participants is much greater. Despite all this attention, most do not give themselves the best chance to win because they are focused on the wrong question.

The Right Question

Mistake #3 in Dr. John Elder's Top 10 Data Science Mistakes is to ask the wrong question. A cornerstone of any successful analytics project is having the right project goal; that is, aiming at the right target. If you're like most people, when you fill out your bracket you ask yourself, "What do I think is most likely to happen?" This is the wrong question to ask if you are competing in a pool, because the objective is to win money, NOT to make the most correct bracket. The correct question is: "What bracket gives me the best chance to win money?" Answering it requires studying the payout formula. I used ESPN standard scoring, which awards up to 320 points per round and gives all pool money to the winner: 10 points for each correct pick in the round of 64, 20 in the round of 32, and so forth, doubling each round until 320 points are awarded for a correct championship call.
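To make the scoring concrete, here is a minimal Python sketch of the ESPN standard scoring described above. The bracket representation and the team names are invented for illustration only.

```python
# Minimal sketch of ESPN standard scoring: 10 points per correct pick in the
# round of 64, doubling each round up to 320 for the championship.
# The (round, game) keyed dicts below are an illustrative representation,
# not ESPN's data format.

ROUND_POINTS = {1: 10, 2: 20, 3: 40, 4: 80, 5: 160, 6: 320}

def bracket_score(picks, results):
    """picks/results map (round, game_index) -> name of the picked/actual winner."""
    return sum(
        ROUND_POINTS[rnd]
        for (rnd, game), team in picks.items()
        if results.get((rnd, game)) == team
    )

# Two correct round-of-64 picks plus a correct champion pick: 10 + 10 + 320 = 340.
picks = {(1, 0): "Virginia", (1, 1): "Gonzaga", (6, 0): "Virginia"}
results = {(1, 0): "Virginia", (1, 1): "Gonzaga", (6, 0): "Virginia"}
print(bracket_score(picks, results))  # 340
```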

While these questions seem similar, the brackets they produce will be significantly different.

If you ignore your opponents and simply pick the teams with the best chance to win each game, you will reduce your chance of winning money. Even the strongest team is unlikely to win it all, and even if it does, plenty of your opponents likely picked it as well. The best way to optimize your chances of making money is to choose a champion with a good chance to win who is unpopular with your opponents.

Knowing how other people in your pool are filling out their brackets is crucial, because it helps you identify teams that are less likely to be picked. One way to see how others are filling out their brackets is via ESPN's Who Picked Whom page (Figure 1). It summarizes how often each team is picked to advance in each round across all ESPN brackets and is a great first step towards identifying overlooked teams.

Figure 1. ESPN's Who Picked Whom Tournament Challenge page

For a team to be overlooked, their perceived chance to win must be lower than their actual chance to win. The Who Picked Whom page provides an estimate of perceived chance to win, but to find undervalued teams we also need estimates of actual chance to win. These can come from anything between a complex prediction model and your own gut feeling. Two sources I trust are 538's March Madness predictions and Vegas futures betting odds. 538's predictions are based on a combination of computer rankings and have tracked tournament performance well in the past. There is also reason to pay attention to Vegas odds, because if they were too far off, the sportsbooks would lose money.

However, both sources have their flaws. 538 relies on computer ratings, so while it avoids human bias, it misses out on expert intuition. Most Vegas sportsbooks likely use both computer ratings and expert intuition to set their betting odds, but they are strongly motivated to have balanced betting on all sides, so their odds are significantly affected by human perception. For example, if everyone were betting on Duke to win the NCAA tournament, the sportsbooks would shorten the payout offered on Duke so that more money would flow to other teams, protecting themselves from large losses. When calculating win probabilities for this article, I averaged the 538 and Vegas predictions to obtain a balance I was comfortable with.
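To illustrate that averaging step, the sketch below converts Vegas futures prices (quoted as American moneyline odds) into implied probabilities, normalizes away the bookmaker's margin, and averages them with model probabilities. Every number shown is a made-up placeholder rather than an actual 2019 figure.

```python
# Hedged sketch: blend Vegas futures odds with model probabilities.
# Odds and probabilities below are placeholders, not real 2019 numbers.

def implied_prob(moneyline):
    """Implied win probability from an American moneyline price."""
    if moneyline > 0:
        return 100.0 / (moneyline + 100.0)
    return -moneyline / (-moneyline + 100.0)

vegas_odds = {"Duke": 225, "Virginia": 550, "Gonzaga": 650}          # placeholders
fivethirtyeight = {"Duke": 0.19, "Virginia": 0.14, "Gonzaga": 0.15}  # placeholders

raw = {team: implied_prob(odds) for team, odds in vegas_odds.items()}
total = sum(raw.values())
# With the full field listed, this normalization strips the bookmaker's margin.
vegas_probs = {team: p / total for team, p in raw.items()}

blended = {team: 0.5 * (vegas_probs[team] + fivethirtyeight[team])
           for team in vegas_probs}
print(blended)
```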

Let's look at last year. Figure 2 compares a team's perceived chance to win (based on ESPN's Who Picked Whom) to their actual chance to win (based on the averaged 538-Vegas predictions) for the leading 2019 NCAA Tournament teams. (Probabilities for all 64 teams in the tournament appear in Table 6 in the Appendix.)

Figure 2. Actual versus perceived chance to win March Madness for 8 top teams

As shown in Figure 2, participants over-picked Duke and North Carolina as champions and under-picked Gonzaga and Virginia. Many factors contributed to these selections; for example, most predictive models, avid sports fans, and bettors agreed that Duke was the best team last year. If you were picking the bracket most likely to occur, then selecting Duke as champion was the natural choice. But ignoring the selections made by others in your pool won't help you win your pool.

While this graph is interesting, how can we turn it into concrete takeaways? Gonzaga and Virginia look like good picks, but what about the rest of the teams hidden in that bottom-left corner? Does it ever make sense to pick a team like Texas Tech, which had a 2.6% chance to win it all but was picked as champion in only 0.9% of brackets? How much does picking an overvalued favorite like Duke hurt your chances of winning your pool?

To answer these questions, I simulated many bracket pools and found that the teams in Gonzaga's and Virginia's spots are usually the best picks: the most undervalued of the top four or five favorites. However, as the size of your bracket pool increases, overlooked lower seeds like third-seeded Texas Tech or fourth-seeded Virginia Tech become more attractive. The logic is simple: the chance that one of these teams wins it all is small, but if it does, you probably win your pool regardless of the number of participants, because it's likely no one else picked them.

Simulation Methodology

To simulate bracket pools, I first had to simulate brackets. I used an average of the Vegas and 538 predictions to run many simulations of the actual events of March Madness. As discussed above, this method isn't perfect, but it's a good approximation. Next, I used the Who Picked Whom page to simulate many human-created brackets. For each human bracket, I calculated the chance it would win a pool of size N by first finding its percentile ranking among all human brackets, assuming one of the 538-Vegas simulated brackets was the actual outcome. This percentile is essentially the chance the bracket beats a single random opposing bracket. I raised the percentile to the power of N-1 (one factor for each opponent), then repeated for all simulated 538-Vegas brackets, averaging the results to get a single win probability per bracket.

For example, let's say that for one 538-Vegas simulation, my bracket is in the 90th percentile of all human brackets, and there are nine other people in my pool. The chance I win the pool would be 0.9^9 ≈ 38.7%. If we assumed a different simulation, then my bracket might only be in the 20th percentile, which would make my win probability 0.2^9, essentially zero. By averaging these probabilities over all 538-Vegas simulations, we can estimate a bracket's win probability in a pool of size N, assuming we trust our input sources.
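The toy Python sketch below mirrors this calculation on a deliberately simplified problem in which a bracket is reduced to a single champion pick and the probabilities are placeholders. The real simulations scored full brackets with the ESPN formula, but the structure, a percentile raised to the number of opponents and averaged over simulated outcomes, is the same.

```python
# Toy version of the pool-win-probability estimate. Brackets are reduced to a
# champion pick only, and all probabilities are placeholders.
import numpy as np

rng = np.random.default_rng(0)

teams = ["Virginia", "Gonzaga", "Duke", "UNC"]
actual = np.array([0.30, 0.28, 0.25, 0.17])  # blended "actual" champion probabilities
picked = np.array([0.15, 0.12, 0.45, 0.28])  # Who Picked Whom style pick rates

def pool_win_probability(my_pick, pool_size, n_sims=2000, field_size=2000):
    wins = []
    for _ in range(n_sims):
        outcome = rng.choice(len(teams), p=actual)                 # one simulated "reality"
        field = rng.choice(len(teams), size=field_size, p=picked)  # human-style entries
        my_score = 1.0 if my_pick == outcome else 0.0
        field_scores = (field == outcome).astype(float)
        # Percentile: chance my bracket beats one random opposing entry (ties split evenly).
        pct = (field_scores < my_score).mean() + 0.5 * (field_scores == my_score).mean()
        wins.append(pct ** (pool_size - 1))                        # must beat all N-1 opponents
    return float(np.mean(wins))

for i, team in enumerate(teams):
    print(team, round(pool_win_probability(i, pool_size=10), 3))
```

Even in this stripped-down version, the undervalued teams come out ahead of the over-picked favorite, which is the effect the full simulations quantify.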

Results

I used this methodology to simulate bracket pools with 10, 20, 50, 100, and 1000 participants. The detailed results of the simulations are shown in Tables 1-6 in the Appendix. Virginia and Gonzaga were the best champion picks when the pool had 50 or fewer participants. Yet, interestingly, Texas Tech and Purdue (3-seeds) and Virginia Tech (4-seed) were as good or better champion picks when the pool had 100 or more participants.

General takeaways from the simulations:

- Pick a champion that is undervalued relative to how often it is picked, not simply the team most likely to win.
- In pools of 50 or fewer, the most undervalued of the top favorites (Virginia and Gonzaga in 2019) were the best champion picks.
- In pools of 100 or more, overlooked lower seeds such as Texas Tech, Purdue, and Virginia Tech became as good as or better champion picks.
- Early-round picks had little effect on the chance of winning, with the exception of picking a one or two seed to lose in the first round.
- Picking an overvalued favorite (Duke or North Carolina in 2019) as champion consistently hurt a bracket's chances.

Additional Thoughts

We have assumed that your local pool makes its selections just like the rest of America, which probably isn't true. If you live close to a team that's in the tournament, then that team will likely be over-picked. For example, I live in Charlottesville (home of the University of Virginia), and Virginia has been picked as the champion in roughly 40% of brackets in my pools over the past couple of years. If you live close to a team with a high seed, one strategy is to start with ESPN's Who Picked Whom pick rates, then boost the rate of the popular local team and correspondingly scale down the rates for all other teams, as sketched below. Another strategy I've used is to ask people in my pool who they are picking. It is mutually beneficial, since I'd be less likely to pick whoever they are picking.
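Here is one way that local-bias adjustment might look, as a small Python sketch; the national pick rates and the assumed local rate are placeholders.

```python
# Sketch: boost the local team's champion pick rate and scale everyone else
# down so the rates still sum to 1. All numbers are placeholders.

def adjust_for_local_bias(national_rates, local_team, local_rate):
    others_total = sum(r for t, r in national_rates.items() if t != local_team)
    scale = (1.0 - local_rate) / others_total
    return {t: (local_rate if t == local_team else r * scale)
            for t, r in national_rates.items()}

national = {"Duke": 0.39, "UNC": 0.15, "Virginia": 0.14, "Gonzaga": 0.11, "Field": 0.21}
print(adjust_for_local_bias(national, "Virginia", 0.40))
```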

As a parting thought, I want to describe a scenario from the 2019 NCAA tournament that some of you may be familiar with. Auburn, a five seed, was winning by two points in the waning moments of the game when they inexplicably fouled the other team in the act of shooting a three-point shot with one second to go. The opposing player, a 78% free throw shooter, stepped to the line and missed two out of three shots, allowing Auburn to advance. This isn't an alternate reality; this is how Auburn won their first-round game against 12-seeded New Mexico State. They proceeded to beat powerhouses Kansas, North Carolina, and Kentucky on their way to the Final Four, where they faced the exact same situation against Virginia. Virginia's Kyle Guy made all three of his free throws, and Virginia went on to win the championship.

I add this to highlight an important qualifier of this analysis: it's impossible to accurately predict March Madness. Were the people who picked Auburn to go to the Final Four geniuses? Of course not. Had Terrell Brown of New Mexico State made his free throws, they would have looked silly. There is no perfect model that can predict the future, and those who do well in the pools are not basketball gurus; they are just lucky. Implementing the strategies discussed here won't guarantee a victory; they just reduce the amount of luck you need to win. And even with the best models, you'll still need a lot of luck. It is March Madness, after all.

Appendix: Detailed Analyses by Bracket Sizes

At baseline (picking randomly), a bracket in a ten-person pool has a 10% chance to win. Table 1 shows how that chance changes based on the round selected for a given team to lose. For example, brackets that had Virginia losing in the Round of 64 won a ten-person pool 4.2% of the time, while brackets that picked them to win it all won 15.1% of the time. As a reminder, these simulations were done with only pre-tournament information; they had no data indicating that Virginia was the eventual champion, of course.

Table 1 Probability that a bracket wins a ten-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

In ten-person pools, the best performing brackets were those that picked Virginia or Gonzaga as the champion, winning 15% of the time. Notably, early round picks did not have a big influence on the chance of winning the pool, the exception being brackets that had a one or two seed losing in the first round. Brackets that had a three seed or lower as champion performed very poorly, but having lower seeds making the Final Four did not have a significant impact on chance of winning.

Table 2 shows the same information for bracket pools with 20 people. The baseline chance is now 5%, and again the best performing brackets are those that picked Virginia or Gonzaga to win. Similarly, picks in the first few rounds do not have much influence. Michigan State has now risen to the third best Champion pick, and interestingly Purdue is the third best runner-up pick.

Table 2 Probability that a bracket wins a 20-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

When the bracket pool size increases to 50, as shown in Table 3, picking the overvalued favorites (Duke and North Carolina) as champions significantly lowers your baseline chances (2%). The slightly undervalued two and three seeds now raise your baseline chances when selected as champions, but Virginia and Gonzaga remain the best picks.

Table 3 Probability that a bracket wins a 50-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

With the bracket pool size at 100 (Table 4), Virginia and Gonzaga are joined by undervalued three-seeds Texas Tech and Purdue. Picking any of these four raises your baseline chances from 1% to close to 2%. Picking Duke or North Carolina again hurts your chances.

Table 4 Probability that a bracket wins a 100-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

When the bracket pool grows to 1000 people (Table 5), there is a complete changing of the guard. Virginia Tech is now the optimal champion pick, raising your baseline chance of winning your pool from 0.1% to 0.4%, followed by the three-seeds and sixth-seeded Iowa State.

Table 5 Probability that a bracket wins a 1000-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

For reference, Table 6 shows the actual chance to win versus the chance of being picked to win for all teams seeded seventh or better. These chances are derived from the ESPN Who Picked Whom page and the 538-Vegas predictions. The data for the top eight teams in Table 6 is plotted in Figure 2. Notably, Duke and North Carolina are overvalued, while the rest are all at least slightly undervalued.

The teams in bold in Table 6 are examples of teams that are good champion picks in larger pools. They all have a high ratio of actual chance to win to chance of being picked to win, but a low overall actual chance to win.

Table 6 Actual odds to win Championship vs Chance Team is Picked to Win Championship.

Undervalued teams in green; over-valued in red.

About the Author

Robert Robison is an experienced engineer and data analyst who loves to challenge assumptions and think outside the box. He enjoys learning new skills and techniques to reveal value in data. Robert earned a BS in Aerospace Engineering from the University of Virginia, and is completing an MS in Analytics through Georgia Tech.

In his free time, Robert enjoys playing volleyball and basketball, watching basketball and football, reading, hiking, and doing anything with his wife, Lauren.

See the original post:
How to Pick a Winning March Madness Bracket - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times

Cisco Enhances IoT Platform with 5G Readiness and Machine Learning – The Fast Mode

Cisco on Friday announced advancements to its IoT portfolio that enable service provider partners to offer optimized management of cellular IoT environments and new 5G use-cases.

Cisco IoT Control Center (formerly Jasper Control Center) is introducing new innovations to improve management and reduce deployment complexity. These include:

Using Machine Learning (ML) to improve management: With visibility into 3 billion events every day, Cisco IoT Control Center uses the industry's broadest visibility to enable machine learning models to quickly identify anomalies and address issues before they impact a customer. Service providers can also identify and alert customers of errant devices, allowing for greater endpoint security and control.

Smart billing to optimize rate plans: Service providers can improve customer satisfaction by enabling Smart billing to automatically optimize rate plans. Policies can also be created to proactively notify customers should usage change or rate plans need to be updated, helping enterprises save money.

Support for global supply chains: SIM portability is an enterprise requirement to support complex supply chains spanning multiple service providers and geographies. It is time-consuming and requires integrations between many different service providers and vendors, driving up costs for both. Cisco IoT Control Center now provides eSIM as a service, enabling a true turnkey SIM portability solution to deliver fast, reliable, cost-effective SIM handoffs between service providers.

Cisco IoT Control Center has taken steps towards 5G readiness to incubate and promote high value 5G business use cases that customers can easily adopt.

Vikas Butaney, VP Product Management IoT, Cisco: "Cellular IoT deployments are accelerating across connected cars, utilities and transportation industries, and with 5G and Wi-Fi 6 on the horizon, IoT adoption will grow even faster. Cisco is investing in connectivity management, IoT networking, IoT security, and edge computing to accelerate the adoption of IoT use-cases."

See the rest here:
Cisco Enhances IoT Platform with 5G Readiness and Machine Learning - The Fast Mode

How businesses and governments should embrace AI and Machine Learning – TechCabal

Leadership team of credit-as-a-service startup Migo, one of a growing number of businesses using AI to create consumer-facing products.

The ability to make good decisions is literally the reason people trust you with responsibilities. Whether you work for a government or lead a team at a private company, your decision-making process will affect lives in very real ways.

Organisations often make poor decisions because they fail to learn from the past. Wherever there is reluctance to collect data, there is a fair chance that mistakes will be repeated. Bad policy goals are often a consequence of faulty evidentiary support: a failure to sufficiently look ahead caused by not sufficiently looking back.

But as Daniel Kahneman, author of Thinking Fast and Slow, says:

"The idea that the future is unpredictable is undermined every day by the ease with which the past is explained."

If governments and business leaders are to live up to their responsibilities, enthusiastically embracing methodical decision-making tools should be a no-brainer.

Mass media representations project artificial intelligence in futuristic, geeky terms. But nothing could be further from the truth.

While it is indeed scientific, AI can be applied in practical everyday life today. Basic interactions with AI include algorithms that recommend articles to you, friend suggestions on social media and smart voice assistants like Alexa and Siri.

In the same way, government agencies can integrate AI into regular processes necessary for society to function properly.

Managing money is an easy example to begin with. AI systems can be used to streamline data points required during budget preparations and other fiscal processes. Based on data collected from previous fiscal cycles, government agencies could reasonably forecast needs and expectations for future years.

With their large troves of citizen data, governments could employ AI to effectively reduce inequalities in outcomes and opportunities. Big Data gives a bird's-eye view of the population, providing adequate tools for equitably distributing essential infrastructure.

Perhaps a more futuristic example is in drafting legislation. Though a young discipline, legimatics includes the use of artificial intelligence in legal and legislative problem-solving.

Democracies like Nigeria consider public input a crucial aspect of desirable law-making. While AI cannot yet be relied on to draft legislation without human involvement, an AI-based approach can produce tools for specific parts of legislative drafting or decision support systems for the application of legislation.

In Africa, businesses are already ahead of most governments in AI adoption. Credit scoring based on customer data has become popular in the digital lending space.

However, there is more for businesses to explore with the predictive powers of AI. A particularly exciting prospect is the potential for new discoveries based on unstructured data.

Machine learning can broadly be split into two categories: supervised and unsupervised learning. With supervised learning, a data analyst sets goals based on the labels and known classifications of the dataset. The resulting insights are useful but do not produce the sort of new knowledge that comes from unsupervised learning processes.
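For readers newer to that distinction, the short scikit-learn sketch below contrasts the two approaches on a synthetic dataset: a classifier trained against known labels versus a clustering model that is given no labels at all.

```python
# Supervised vs. unsupervised learning on a synthetic dataset (illustrative only).
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: the labels y define the goal, and the model learns to predict them.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels are given; the model proposes its own grouping,
# which is where previously unknown segments or patterns can surface.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```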

In essence, AI can be a medium for market-creating innovations based on previously unknown insight buried in massive caches of data.

Digital lending became a market opportunity in Africa thanks to growing smartphone availability. However, customer data had to be available too for algorithms to do their magic.

This is why it is desirable for more data-sharing systems to be normalised on the continent to generate new consumer products. Fintech sandboxes that bring the public and private sectors together aiming to achieve open data standards should therefore be encouraged.

Artificial intelligence, like other technologies, is neutral. It can be used for social good but also can be diverted for malicious purposes. For both governments and businesses, there must be circumspection and a commitment to use AI responsibly.

China is a cautionary tale. The Communist state currently employs an all-watching system of cameras to enforce round-the-clock citizen surveillance.

By algorithmically rating citizens on a so-called social credit score, China's ultra-invasive AI effectively precludes individual freedom, compelling its 1.3 billion people to live strictly by the Politburo's ideas of ideal citizenship.

On the other hand, businesses must be ethical in providing transparency to customers about how data is harvested to create products. At the core of all exchange must be trust, and a verifiable, measurable commitment to do no harm.

Doing otherwise condemns modern society to those dystopian days everybody dreads.

How can businesses and governments use Artificial Intelligence to find solutions to challenges facing the continent? Join entrepreneurs, innovators, investors and policymakers in Africa's AI community at TechCabal's emerging tech townhall. At the event, stakeholders including telcos and financial institutions will examine how businesses, individuals and countries across the continent can maximize the benefits of emerging technologies, specifically AI and Blockchain. Learn more about the event and get tickets here.

Read the original post:
How businesses and governments should embrace AI and Machine Learning - TechCabal

Machine learning could speed the arrival of ultra-fast-charging electric car – Chemie.de

Using machine learning, a Stanford-led research team has slashed battery testing times - a key barrier to longer-lasting, faster-charging batteries for electric vehicles.

Battery performance can make or break the electric vehicle experience, from driving range to charging time to the lifetime of the car. Now, artificial intelligence has made dreams like recharging an EV in the time it takes to stop at a gas station a more likely reality, and could help improve other aspects of battery technology.

For decades, advances in electric vehicle batteries have been limited by a major bottleneck: evaluation times. At every stage of the battery development process, new technologies must be tested for months or even years to determine how long they will last. But now, a team led by Stanford professors Stefano Ermon and William Chueh has developed a machine learning-based method that slashes these testing times by 98 percent. Although the group tested their method on battery charge speed, they said it can be applied to numerous other parts of the battery development pipeline and even to non-energy technologies.

"In battery testing, you have to try a massive number of things, because the performance you get will vary drastically," said Ermon, an assistant professor of computer science. "With AI, we're able to quickly identify the most promising approaches and cut out a lot of unnecessary experiments."

The study, published by Nature on Feb. 19, was part of a larger collaboration among scientists from Stanford, MIT and the Toyota Research Institute that bridges foundational academic research and real-world industry applications. The goal: finding the best method for charging an EV battery in 10 minutes that maximizes the battery's overall lifetime. The researchers wrote a program that, based on only a few charging cycles, predicted how batteries would respond to different charging approaches. The software also decided in real time what charging approaches to focus on or ignore. By reducing both the length and number of trials, the researchers cut the testing process from almost two years to 16 days.

"We figured out how to greatly accelerate the testing process for extreme fast charging," said Peter Attia, who co-led the study while he was a graduate student. "What's really exciting, though, is the method. We can apply this approach to many other problems that, right now, are holding back battery development for months or years."

Designing ultra-fast-charging batteries is a major challenge, mainly because it is difficult to make them last. The intensity of the faster charge puts greater strain on the battery, which often causes it to fail early. To prevent this damage to the battery pack, a component that accounts for a large chunk of an electric car's total cost, battery engineers must test an exhaustive series of charging methods to find the ones that work best.

The new research sought to optimize this process. At the outset, the team saw that fast-charging optimization amounted to many trial-and-error tests - something that is inefficient for humans, but the perfect problem for a machine.

"Machine learning is trial-and-error, but in a smarter way," said Aditya Grover, a graduate student in computer science who co-led the study. "Computers are far better than us at figuring out when to explore - try new and different approaches - and when to exploit, or zero in, on the most promising ones."

The team used this power to their advantage in two key ways. First, they used it to reduce the time per cycling experiment. In a previous study, the researchers found that instead of charging and recharging every battery until it failed - the usual way of testing a battery's lifetime - they could predict how long a battery would last after only its first 100 charging cycles. This is because the machine learning system, after being trained on a few batteries cycled to failure, could find patterns in the early data that presaged how long a battery would last.
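The sketch below illustrates the general idea of early-life prediction: fit a regression from early-cycle summary features to eventual cycle life. It uses synthetic stand-in data and generic feature names; the actual study relied on carefully engineered electrochemical features and its own modeling choices.

```python
# Hedged sketch of early-cycle lifetime prediction on synthetic data.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 120
# Pretend early-cycle features (all synthetic): capacity-fade slope, variance of
# the discharge curves, and average internal resistance over the first 100 cycles.
X = rng.normal(size=(n_cells, 3))
cycle_life = 800 - 150 * X[:, 0] + 90 * X[:, 1] + rng.normal(scale=40, size=n_cells)

X_train, X_test, y_train, y_test = train_test_split(X, cycle_life, random_state=0)
model = ElasticNetCV(cv=5).fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```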

Second, machine learning reduced the number of methods they had to test. Instead of testing every possible charging method equally, or relying on intuition, the computer learned from its experiences to quickly find the best protocols to test.
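A bare-bones way to picture that explore/exploit loop is a multi-armed bandit choosing which charging protocol to test next, as sketched below. This is a generic illustration of the behavior described, not the algorithm the team actually used, and the lifetimes are invented.

```python
# Upper-confidence-bound (UCB) bandit over candidate charging protocols.
# "True" lifetimes are made up and hidden from the algorithm.
import numpy as np

rng = np.random.default_rng(1)
true_lifetimes = np.array([650.0, 700.0, 820.0, 760.0, 690.0])

n_protocols = len(true_lifetimes)
counts = np.zeros(n_protocols)
means = np.zeros(n_protocols)

for t in range(1, 101):
    if (counts == 0).any():
        arm = int(np.argmin(counts))                 # try every protocol at least once
    else:
        ucb = means + 50.0 * np.sqrt(np.log(t) / counts)
        arm = int(np.argmax(ucb))                    # optimism in the face of uncertainty
    observed = true_lifetimes[arm] + rng.normal(scale=40.0)  # one noisy lifetime estimate
    counts[arm] += 1
    means[arm] += (observed - means[arm]) / counts[arm]      # running average

print("best protocol found:", int(np.argmax(means)))
print("tests per protocol:", counts.astype(int))
```

The exploitation term (the running mean) pulls tests toward protocols that look good, while the exploration bonus shrinks as a protocol accumulates tests, which is the smarter trial-and-error Grover describes.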

By testing fewer methods for fewer cycles, the study's authors quickly found an optimal ultra-fast-charging protocol for their battery. In addition to dramatically speeding up the testing process, the computer's solution was also better - and much more unusual - than what a battery scientist would likely have devised, said Ermon.

"It gave us this surprisingly simple charging protocol - something we didn't expect," Ermon said. Instead of charging at the highest current at the beginning of the charge, the algorithm's solution uses the highest current in the middle of the charge. "That's the difference between a human and a machine: The machine is not biased by human intuition, which is powerful but sometimes misleading."

The researchers said their approach could accelerate nearly every piece of the battery development pipeline: from designing the chemistry of a battery to determining its size and shape, to finding better systems for manufacturing and storage. This would have broad implications not only for electric vehicles but for other types of energy storage, a key requirement for making the switch to wind and solar power on a global scale.

"This is a new way of doing battery development," said Patrick Herring, co-author of the study and a scientist at the Toyota Research Institute. "Having data that you can share among a large number of people in academia and industry, and that is automatically analyzed, enables much faster innovation."

The study's machine learning and data collection system will be made available for future battery scientists to freely use, Herring added. By using this system to optimize other parts of the process with machine learning, battery development - and the arrival of newer, better technologies - could accelerate by an order of magnitude or more, he said.

The potential of the study's method extends even beyond the world of batteries, Ermon said. Other big data testing problems, from drug development to optimizing the performance of X-rays and lasers, could also be revolutionized by the use of machine learning optimization. And ultimately, he said, it could even help to optimize one of the most fundamental processes of all.

"The bigger hope is to help the process of scientific discovery itself," Ermon said. "We're asking: Can we design these methods to come up with hypotheses automatically? Can they help us extract knowledge that humans could not? As we get better and better algorithms, we hope the whole scientific discovery process may drastically speed up."

Continue reading here:
Machine learning could speed the arrival of ultra-fast-charging electric car - Chemie.de

Google Teaches AI To Play The Game Of Chip Design – The Next Platform

As if it were not bad enough that Moore's Law improvements in the density and cost of transistors are slowing, the cost of designing chips and of the factories that are used to etch them is also on the rise. Any savings on any of these fronts will be most welcome to keep IT innovation leaping ahead.

One of the promising frontiers of research right now in chip design is using machine learning techniques to actually help with some of the tasks in the design process. We will be discussing this at our upcoming The Next AI Platform event in San Jose on March 10 with Elias Fallon, engineering director at Cadence Design Systems. (You can see the full agenda and register to attend at this link; we hope to see you there.) The use of machine learning in chip design was also one of the topics that Jeff Dean, a senior fellow in the Research Group at Google who has helped invent many of the hyperscaler's key technologies, talked about in his keynote address at this week's 2020 International Solid State Circuits Conference in San Francisco.

Google, as it turns out, has more than a passing interest in compute engines, being one of the largest consumers of CPUs and GPUs in the world and also the designer of TPUs spanning from the edge to the datacenter for doing both machine learning inference and training. So this is not just an academic exercise for the search engine giant and public cloud contender, particularly if it intends to keep advancing its TPU roadmap and if it decides, like rival Amazon Web Services, to start designing its own custom Arm server chips, or decides to do custom Arm chips for its phones and other consumer devices.

With a certain amount of serendipity, some of the work that Google has been doing to run machine learning models across large numbers of different types of compute engines is feeding back into the work that it is doing to automate some of the placement and routing of IP blocks on an ASIC. (It is wonderful when an idea is fractal like that. . . .)

The pod of TPUv3 systems that Google showed off back in May 2018 can mesh together 1,024 of the tensor processors (which had twice as many cores and about a 15 percent clock speed boost as far as we can tell) to deliver 106 petaflops of aggregate 16-bit half precision multiplication performance (with 32-bit accumulation) using Google's own and very clever bfloat16 data format. Those TPUv3 chips are all cross-coupled using a 32x32 toroidal mesh so they can share data, and each TPUv3 core has its own bank of HBM2 memory. This TPUv3 pod is a huge aggregation of compute, which can do either machine learning training or inference, but it is not necessarily as large as Google needs to build. (We will be talking about Dean's comments on the future of AI hardware and models in a separate story.)
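For readers unfamiliar with the format, the snippet below shows the essence of bfloat16: it keeps float32's 8-bit exponent, and therefore its dynamic range, while cutting the mantissa down to 7 bits. The truncation here is for clarity only; real hardware typically uses round-to-nearest rather than plain truncation.

```python
# Illustration of bfloat16 as the top 16 bits of a float32.
import numpy as np

def to_bfloat16(x):
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)  # keep sign, exponent, 7 mantissa bits

for value in [3.14159265, 1e-20, 6.5e12]:
    print(value, "->", float(to_bfloat16(value)))
```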

Suffice it to say, Google is hedging with hybrid architectures that mix CPUs and GPUs, and perhaps someday other accelerators, for reinforcement learning workloads, and hence the research that Dean and his peers at Google have been involved in is also being brought to bear on ASIC design.

"One of the trends is that models are getting bigger," explains Dean. "So the entire model doesn't necessarily fit on a single chip. If you have essentially large models, then model parallelism, dividing the model up across multiple chips, is important, and getting good performance by giving it a bunch of compute devices is non-trivial and it is not obvious how to do that effectively."

It is not as simple as taking the Message Passing Interface (MPI) that is used to dispatch work on massively parallel supercomputers and hacking it onto a machine learning framework like TensorFlow because of the heterogeneous nature of AI iron. But that might have been an interesting way to spread machine learning training workloads over a lot of compute elements, and some have done this. Google, like other hyperscalers, tends to build its own frameworks and protocols and datastores, informed by other technologies, of course.

Device placement, meaning putting the right neural network (or the portion of code that embodies it) on the right device at the right time for maximum throughput in the overall application, is particularly important as neural network models get bigger than the memory space and the compute oomph of a single CPU, GPU, or TPU. And the problem is getting worse faster than the frameworks and hardware can keep up. Take a look:

The number of parameters just keeps growing and the number of devices being used in parallel also keeps growing. In fact, getting 128 GPUs or 128 TPUv3 processors (which is how you get the 512 cores in the chart above) to work in concert is quite an accomplishment, and is on par with the best that supercomputers could do back in the era before loosely coupled, massively parallel supercomputers using MPI took over and federated NUMA servers with actual shared memory were the norm in HPC more than two decades ago. As more and more devices are going to be lashed together in some fashion to handle these models, Google has been experimenting with using reinforcement learning (RL), a special subset of machine learning, to figure out where to best run neural network models at any given time as model ensembles are running on a collection of CPUs and GPUs. In this case, an initial policy is set for dispatching neural network models for processing, and the results are then fed back into the model for further adaptation, moving it toward more and more efficient running of those models.

In 2017, Google trained an RL model to do this work (you can see the paper here), and here is what the resulting placement looked like for the encoder and decoder; the RL model that placed the work on the two CPUs and four GPUs in the system under test ended up with 19.3 percent lower runtime for the training runs compared to the manually placed neural networks done by a human expert. Dean added that this RL-based placement of neural network work on the compute engines does somewhat non-intuitive things to achieve that result, which seems to be the case with a lot of machine learning applications that, nonetheless, work as well as or better than humans doing the same tasks. The issue is that it can't take a lot of RL compute oomph to place the work on the devices to run the neural networks that are being trained themselves. In 2018, Google did research showing how to scale computational graphs to over 80,000 operations (nodes), and last year, Google created what it calls a generalized device placement scheme for dataflow graphs with over 50,000 operations (nodes).
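To make the reinforcement learning framing more concrete, here is a toy REINFORCE-style sketch that learns to assign a handful of operations to two devices so as to minimize the makespan (the busiest device's total time). The operation costs and reward are invented; Google's system works on real computation graphs with measured runtimes and a far richer policy.

```python
# Toy policy-gradient device placement: a per-op softmax policy, reward equal to
# the negative makespan, and REINFORCE updates. Entirely illustrative.
import numpy as np

rng = np.random.default_rng(0)
op_costs = np.array([5.0, 3.0, 8.0, 2.0, 7.0, 4.0])   # per-op runtimes (made up)
n_ops, n_devices = len(op_costs), 2
logits = np.zeros((n_ops, n_devices))                  # policy parameters

def policy_probs(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def reward(placement):
    per_device = np.array([op_costs[placement == d].sum() for d in range(n_devices)])
    return -per_device.max()                           # shorter makespan = higher reward

baseline, lr = None, 0.1
for step in range(500):
    probs = policy_probs(logits)
    placement = np.array([rng.choice(n_devices, p=p) for p in probs])
    r = reward(placement)
    baseline = r if baseline is None else baseline + 0.05 * (r - baseline)
    for i, d in enumerate(placement):                  # REINFORCE update per op
        grad = -probs[i]
        grad[d] += 1.0                                 # gradient of log softmax
        logits[i] += lr * (r - baseline) * grad

best = logits.argmax(axis=1)
print("learned placement:", best, "makespan:", -reward(best))
```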

"Then we started to think about using this, instead of using it to place software computation on different computational devices, we started to think about whether we could use this to do placement and routing in ASIC chip design, because the problems, if you squint at them, sort of look similar," says Dean. "Reinforcement learning works really well for hard problems with clear rules like Chess or Go, and essentially we started asking ourselves: Can we get a reinforcement learning model to successfully play the game of ASIC chip layout?"

There are a couple of challenges to doing this, according to Dean. For one thing, chess and Go both have a single objective, which is to win the game and not lose the game. (They are two sides of the same coin.) With the placement of IP blocks on an ASIC and the routing between them, there is not a simple win or lose and there are many objectives that you care about, such as area, timing, congestion, design rules, and so on. Even more daunting is the fact that the number of potential states that have to be managed by the neural network model for IP block placement is enormous, as this chart below shows:

Finally, the true reward function that drives the placement of IP blocks, which runs in EDA tools, takes many hours to run.

"And so we have an architecture, I'm not going to go into a lot of detail, but essentially it tries to take a bunch of things that make up a chip design and then try to place them on the wafer," explains Dean, and he showed off some results of placing IP blocks on a low-powered machine learning accelerator chip (we presume this is the edge TPU that Google has created for its smartphones), with some areas intentionally blurred to keep us from learning the details of that chip. "We have had a team of human experts place this IP block and they had a couple of proxy reward functions that are very cheap for us to evaluate; we evaluated them in two seconds instead of hours, which is really important because reinforcement learning is one where you iterate many times. So we have a machine learning-based placement system, and what you can see is that it sort of spreads out the logic a bit more rather than having it in quite such a rectangular area, and that has enabled it to get improvements in both congestion and wire length. And we have got comparable or superhuman results on all the different IP blocks that we have tried so far."

Note: I am not sure we want to call AI algorithms superhuman. At least not if you don't want to have them banned.

Anyway, here is how that low-powered machine learning accelerator turned out with the RL network versus people doing the IP block placement:

And here is a table that shows the difference between doing the placing and routing by hand and automating it with machine learning:

And finally, here is how the IP block on the TPU chip was handled by the RL network compared to the humans:

Look at how organic these AI-created IP blocks look compared to the Cartesian ones designed by humans. Fascinating.

Now, having done this, Google asked the question: Can we train a general agent that is quickly effective at placing a new design that it has never seen before? Which is precisely the point when you are making a new chip. So Google tested this generalized model against four different IP blocks from the TPU architecture and then also on the Ariane RISC-V processor architecture. This data pits people working with commercial tools against various levels of tuning of the model:

And here is some more data on the placement and routing done on the Ariane RISC-V chips:

"You can see that experience on other designs actually improves the results significantly, so essentially in twelve hours you can get the darkest blue bar," Dean says, referring to the first chart above, and then he continues with the second chart above. "And this graph shows the wireline costs, where we see that if you train from scratch, it actually takes the system a little while before it sort of makes some breakthrough insight and is able to significantly drop the wiring cost, whereas the pretrained policy has some general intuitions about chip design from seeing other designs and gets to that level very quickly."

Just as we run ensembles of simulations to do better weather forecasting, Dean says that this kind of AI-juiced placement and routing of IP blocks in chip design could be used to quickly generate many different layouts, with different tradeoffs. And in the event that some feature needs to be added, the AI-juiced chip design game could redo a layout quickly, not taking months to do it.

And most importantly, this automated design assistance could radically drop the cost of creating new chips. These costs are going up exponentially, and according to data we have seen (thanks to IT industry luminary and Arista Networks chairman and chief technology officer Andy Bechtolsheim), an advanced chip design using 16 nanometer processes cost an average of $106.3 million, shifting to 10 nanometers pushed that up to $174.4 million, and the move to 7 nanometers costs $297.8 million, with projections for 5 nanometer chips to be on the order of $542.2 million. Nearly half of that cost has been, and continues to be, for software. So we know where to target some of those costs, and machine learning can help.

The question is will the chip design software makers embed AI and foster an explosion in chip designs that can be truly called Cambrian, and then make it up in volume like the rest of us have to do in our work? It will be interesting to see what happens here, and how research like that being done by Google will help.

See original here:
Google Teaches AI To Play The Game Of Chip Design - The Next Platform

The Digital Economy Is The API Economy And Kong Is King – Forbes

In recent years, exponential growth in the volume of data and the speed of data integration and management processes has unleashed an explosion of innovation, enabling companies to deliver goods and services that are increasingly responsive, customized and sophisticated in an ever-evolving digital economy.

This kind of data movement would be impossible without APIs, or application programming interfaces. APIs, forms of modern middleware that plug into various data sources and software services to pipe information to the desired destination, are a critical part of the plumbing of the digital economy. Many of the conveniences of our digitally connected, modern lives, like using Amazon's Alexa to play music over your home speakers from your Spotify account, or asking your car's navigation system to find a less congested route recommended by Google Maps, would be impossible without the APIs that connect companies with their partners, vendors and customers. Indeed, it would not be much of a stretch to say that under the hood, APIs are the engine powering the digital economy.

Kong, the cloud connectivity company with the world's premier microservices API gateway, is at the head of the pack of a new generation of companies that have emerged in recent years to help organizations optimize their use of APIs. If APIs are the engine behind the digital economy, API gateways are the control panels that determine what goes in and out of this network. Like a smart grid for the cloud, a gateway ensures the flow of data is stable and secure, even at a massive scale. No surprise that API gateways are among the most popular open source technologies, according to new research from Kong/Vanson Bourne.

Advanced API gateways are capable not only of moving data between organizations (e.g., connecting Alexa to Spotify), but also of managing the constant flow of business-critical information within individual companies in a smart way, delivering automated, faster and reliable connectivity among APIs and humans. Demand for this capability is growing rapidly as organizations replace rigid, legacy IT infrastructure (called monolithic) with the more dynamic type of information architecture known as microservices, which leverages APIs to connect discrete software programs and data sources. In this way, microservices API gateways work much like the operating system of a car coordinating the functions of separate but related systems like the transmission, axle and fuel tank in order to keep the engine running and the car moving in the right direction.
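For readers who have not worked with one, the toy Python sketch below captures what an API gateway does at its core: match a request to an upstream service and apply cross-cutting policies, such as a rate limit, along the way. It is an illustration of the concept only, not of how Kong itself is implemented or configured.

```python
# Toy gateway core: route matching plus a per-client rate limit. Illustrative only.
import time
from collections import defaultdict

ROUTES = {"/orders": "http://orders-svc:8080", "/users": "http://users-svc:8080"}
RATE_LIMIT = 5            # requests per client per 60-second window
_hits = defaultdict(list)

def handle(client_id, path):
    now = time.time()
    _hits[client_id] = [t for t in _hits[client_id] if now - t < 60]
    if len(_hits[client_id]) >= RATE_LIMIT:
        return 429, "rate limit exceeded"
    _hits[client_id].append(now)

    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            # A real gateway would proxy the request here, layering on auth,
            # logging, retries, and metrics via plugins.
            return 200, "forwarded to " + upstream + path
    return 404, "no route matched"

print(handle("client-a", "/orders/42"))
print(handle("client-a", "/billing/1"))
```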

Marco Palladino and Augusto Marietti

Led by co-founders Augusto Marietti and Marco Palladino, Kong's mission is to make the digital world reliable. The company's end-to-end platform powers trillions of API transactions for leading organizations around the globe, including AppDynamics, Cargill, Just Eat, Santander, SoulCycle, WeWork and Yahoo! Japan. These companies and many others (Kong's open source software has been downloaded more than 100 million times) use Kong to accelerate the construction of smart connections between hundreds of software services and data sources, reduce latency and downtime, automate workflows, improve data governance and security, and monitor the traffic running through all of the company's APIs.

"We're building the nervous system of the cloud to power all of the enterprises in the world," explains Marietti. In this analogy, he says, Kong is "the brain and the spine. We want to be the one intelligent broker that can request and respond to data, moving all that information in and out of the company, both within and between teams." As a microservices API gateway, Kong is not only the brain and spine of an organization, but also its peripheral nervous system, connecting and coordinating the organization's multitude of internal functions and operations, thus supporting the limbs and organs of the business.

EVOLUTION FROM MASHAPE TO KONG

Kong's impressive journey, which has seen it grow and evolve rapidly, turn a profit, and raise $71 million to date from investors like Andreessen Horowitz, CRV, Index Ventures, and Jeff Bezos, has been ten years in the making. Marietti and Palladino founded the company in 2009 as Mashape, an API marketplace that provided tens of thousands of ready-made APIs for the developer community to build into their own IT infrastructure. In 2015, the co-founders launched Kong as an open source project, and in 2017 they decided to shift the company's focus completely to Kong after they saw the huge opportunity and demand for the service; thus, the company was reborn as Kong.

"We've changed our execution over the years, but the vision has stayed the same," says Marietti. "We knew that there would be a massive explosion in APIs, software and services as companies moved to the cloud, and that every company in the world would need a simple, secure API layer to manage and move all that data. We were ahead of the market when we made the bet that this would be the future of software communications."

Devdutt Yellukar is a General Partner at CRV and a lead investor from Kong's earliest days, supporting the company through its most recent $43 million Series C fundraising round in March 2019. Yellukar agrees that Marietti and Palladino's early recognition of this transformational shift, and their quick steps to capitalize on it, have been critical to the company's success. "Kong timed it perfectly," explains Yellukar. "Every company in the world is becoming digital, but to provide digital services, they need APIs that allow their machines to talk to other companies' machines. And on both ends of those conversations, there's going to be a product like Kong. You can see that the market opportunity is incredibly large."

As any entrepreneur would agree, however, timing is not enough; success hinges on the details of execution. In this respect, Kong has also excelled and differentiated itself in the marketplace through two very important strategic decisions: the choice to open source its platform and to expand into API development.

OPEN SOURCE AS SELLING POINT AND GO-TO-MARKET STRATEGY

Although open sourcing software, in other words making the code for building software publicly accessible for anyone to copy and adapt, was once seen by some as purely academic or even altruistic, today open source is increasingly regarded as a must-have feature by customers, and as a compelling go-to-market strategy by entrepreneurs. Kong was born first and foremost as an open source company, and that crucial early decision has powered its rapid adoption by marquee customers.

For Kong customer AppDynamics, Kong's status as an open source provider was a crucial selling point, with this level of transparency increasing their trust in the underlying technology and giving them greater confidence to deeply integrate Kong's tools throughout the organization. "I frankly wouldn't even consider buying a closed source API gateway, the tasks are just too critical," says AppDynamics Senior Director Ty Amell. "API gateways are fairly sticky, so it is important that those types of platforms have an open sourced component."

Open sourcing its product is also an important part of Kongs go-to-market strategy, lending itself to effortless adoption by developers, who then serve as product advocates within their organizations and push for paid enterprise services, creating a flywheel.

"Because it was open sourced, people could just download it and start to appreciate how scalable and secure it was, and how neat and clean the interfaces were," Yellukar says. "When the team open sourced Kong, there was a giant sucking sound in the market for this. That's what happens with successful open source companies: there's humongous love for this type of technology from the community, and that allows you to build the company."

EVOLUTION FROM API GATEWAY TO END-TO-END API AND SERVICE LIFECYCLE MANAGEMENT SOLUTION

Kong's latest strategic move, which is already gaining promising traction, is its recent entry into the lifecycle management of APIs and its new role in the development of APIs from the ground up. Kong's $43 million Series C round helped finance the company's acquisition of Insomnia, an API testing platform. Insomnia in turn provided the foundation for Kong Studio, a suite of software for developers to design and collaborate on APIs.

AppDynamics' Amell says that Kong's entry into the API development space was another key selling point, elevating the service beyond just an API gateway and into a full lifecycle management platform capable of solving multiple pain points for APIs along the path from creation to testing to management. "We had a conversation with Kong early on about where they were going with lifecycle management. If I project out to the future, I think API gateways become part of lifecycle management, so that you can design, test, and manage all from the same platform."

This strategy is also compelling for Kong because providing developers with the tools and raw materials for building APIs will add greater value for customers. "We realized that by offering a way to design, build, and test services, we can further streamline processes for our customers and enable them to be more productive," says Marietti. "Once enterprises have done that, then they can use Kong's API gateway to manage these services. We're building a full lifecycle platform that helps you end-to-end, like the circle of life: you're born, you build, design, test, and manage. Eventually you retire some functions you don't need anymore, and then you start all over again."

RIDING THE TAILWINDS

While Kong is already transforming the industry, Marietti says the company is just scratching the surface and that the pace of innovation will only continue to accelerate from here on out. Investments in artificial intelligence will be a key driver of this continued innovation, as we are now seeing with Kong Brain, the company's first major step to automate API management and transform what is effectively a system of dumb pipes into a true nervous system. The company continues to drive open source innovation as well, launching its latest open source project, Kuma, this past September.

Underneath it all, the fundamental shift of enterprises moving into the cloud and putting more and more data in motion, both within and between organizations, is driving Kong's success. Backed by these promising tailwinds, it's hard to put an upper bound on Kong's future prospects for growth.

"Companies are going to be migrating to the cloud and facing the challenge of how to move more and more data for the next 100 years," says Marietti. "There's a long way to go, and I think companies are only 1% of the way there. There's a real need, and that's how Kong is growing so quickly: the market is pulling it."

Follow this link:
The Digital Economy Is The API Economy And Kong Is King - Forbes

What an algorithmic trader does and why I quit to create my own programming language – eFinancialCareers

Three years ago I quit my job at JP Morgan doing algorithmic trading in order to launch a technology startup, Concurnas, to create a new programming language!

What does an algorithmic trader do?

Algorithmic trading can mean many things, and in my career at least it has covered everything from high-frequency, sub-millisecond foreign exchange market-making systems to multi-second complex derivative pricing systems to multi-hour systems for trading orders of large volume.

Whereas the media often likes to portray computerized trading as something threatening, at least in my experience I found it to be something which helped facilitate the process of trading at scale. It has a demonstrably positive effect in reducing the cost of trading and improving efficiency, and it is an area full of enthusiastic people who like solving interesting complex problems and who overall want to do the right thing.

What makes a good algorithmic trader?

To start in algorithmic trading, you must be able to code. Today most graduates entering the trading world can program (or at least have a working knowledge of programming) in languages such as Python. These days the idea of a coder who can trade, or a trader who can code, is no longer seen as a peculiarity. An understanding of applied mathematics and statistics is also highly beneficial.

Contrary to the popular stereotype that algorithmic trading is only for nerds, I found that to really succeed one needed to be a people person, for when we were launching new products there would be days when I'd be having one-to-ones with upwards of 40 people across sales, trading, technology, compliance, market risk, marketing, legal, quantitative research, product etc.

Algorithmic trading jobs are great!

Algorithmic trading is a great area to work in. In addition to being an intellectual job in terms of mathematically modeling trading ideas and then coding them up for monetization, it's an incredibly transformative area to work in, with lots of exposure to some very senior people in the bank. This of course opens up lots of avenues for personal career progression. In fact it was not uncommon for people to make it to MD before their 35th birthday.

In many ways I found the work in algorithmic trading to be a condensed version of the engineering and organizational challenges faced by the wider economy, a microcosm of the problems the real world faces every day from an engineering, mathematical and business process perspective, except at breathtaking speed and with little tolerance for error! Algo trading is a really exciting and challenging job.

So why did I leave?

This all sounds awesome right!? So how come I quit to start my own company?

The short of it is that, like many startup founders, although I really loved my job I just found something more interesting to do. People talk about how banks are becoming more like technology companies, and while that is true in terms of reliance upon technology (the modern trading desk simply cannot operate without some form of automated trading), in terms of direct impact banks are nowhere near tech companies. If you work on something like Google search you effectively have billions of customers, whereas in banks even for the most prestigious of roles you may have a few thousand customers at best.

If you want to do technical work, and enable your work to have that direct global impact, you either have to work for a large tech company or start your own!

What happened for me was that while I was sitting in the bank, I saw that the engineering problems we were solving on a day-to-day basis were mostly centered around building reliable, scalable, high-performance distributed concurrent systems, which along with computer graphics and AI is one of the most complex aspects of software engineering to get right. When I looked at these problems I started to see patterns and ways in which the solutions applied could be generalized, and I thought to myself, "Hey, there's a product here, and it would be beneficial for banks and the wider economy to have access to it." And so Concurnas as a programming language was born.

What does my Concurnas programming language do?

Concurnas is a high-level programming language, and this makes it easy to use.

When you're working on a trading desk there is so much going on that you quickly learn that if you are to get anything done you must be incredibly focused with your time (unless you like working till 10pm and all weekend). Personally, I'd often enter a Zen-like state when I was trying to model and code. What really helped in that situation was using so-called high-level programming languages such as Python.

High-level programming languages don't require you to write as much code as languages such as C and C++ in order to get the same amount of work done. The code is also more directed at solving business problems as opposed to technical problems, which means it's easier for a person who doesn't focus exclusively on coding, say in market risk or compliance, to understand what some code is doing. I learned very early in my algorithmic trading career that it's best to make the job of people in market risk and compliance easy: the quicker they can sign off on your work, the quicker you can get to market.
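To make that concrete, here is a minimal Python sketch (the trade data and risk limit are invented for illustration) showing how a compliance-style check in a high-level language can read almost like the business rule it implements:

# Minimal illustrative sketch with invented data: flag trades whose notional
# value exceeds a hypothetical per-trade risk limit.
RISK_LIMIT = 1_000_000  # hypothetical limit in USD

trades = [
    {"id": "T1", "price": 101.5, "quantity": 5_000},
    {"id": "T2", "price": 99.8, "quantity": 20_000},
]

# Notional = price * quantity; collect the ids of trades over the limit.
breaches = [t["id"] for t in trades if t["price"] * t["quantity"] > RISK_LIMIT]
print(breaches)  # ['T2']

The equivalent C or C++ would need explicit data structures and memory management before it even reached the business logic, which is exactly the overhead being described here.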

For this reason Concurnas has been engineered to be an easy-to-program language, just like Python, but better.

Concurnas runs on Java and is open source

Python is a great language, but it can be slow. This is why you often find that trading systems are written in the likes of C/C++ or, more recently, Java.

In addition to its incredible performance, Java in particular has a wealth of open source software available for it which anyone can use for free.

Concurnas runs on Java, so it offers the incredible performance of Java along with the use of all the existing free Java-based software that's available. Concurnas also offers support for the 'domain-specific languages' (DSLs) that inevitably get built into trading systems as developers invent their own nomenclature for describing a trading problem. This is unusual; few programming languages offer any DSL support.
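To give a flavor of what a DSL means in this context, here is a generic Python sketch (this is not Concurnas syntax, and the quote values and threshold are invented): by wrapping domain vocabulary in small, well-named pieces of code, a trading rule ends up reading close to how a trader would describe it.

# Generic illustration of an embedded DSL: domain nomenclature as named checks.
from dataclasses import dataclass

@dataclass
class Quote:
    bid: float
    ask: float

    def spread(self):
        return self.ask - self.bid

def wide_spread(quote, threshold=0.05):
    # "The spread is wide" becomes a named, reusable check.
    return quote.spread() > threshold

q = Quote(bid=100.00, ask=100.12)
if wide_spread(q):
    print("hold back the order: spread too wide")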

Lastly, Concurnas is open source. I'm often asked why I give it away for free. The reason is that throughout my career I, and the companies I've worked for, have benefited from open source software; indeed, much of the world's software relies upon open source, and this is my way of giving back.

As a company, Concurnas Ltd offers commercial support for the Concurnas programming language as well as technology consulting services for banks, hedge funds and the wider economy. I hope you benefit from it!



More here:
What an algorithmic trader does and why I quit to create my own programming language - eFinancialCareers

Why Quantum Computing Gets Special Attention In The Trump Administration’s Budget Proposal – Texas Standard

The Trump administration's fiscal year 2021 budget proposal includes significant increases in funding for artificial intelligence and quantum computing, while cutting overall research and development spending. If Congress agrees, funding for artificial intelligence, or AI, would nearly double, and quantum computing would receive a 50% boost over last year's budget, doubling to $860 million by 2022. The administration says these two fields of research are important to U.S. national security, in part because China also invests heavily in them.

Quantum computing uses quantum mechanics to solve highly complex problems more quickly than they can be solved by standard or classical computers. Though fully functional quantum computers don't yet exist, scientists at academic institutions, as well as at IBM, Google and other companies, are working to build such systems.

Scott Aaronson is a professor of computer science and the founding director of the Quantum Information Center at the University of Texas at Austin. He says applications for quantum computing include simulation of chemistry and physics problems. These simulations enable scientists to design new materials, drugs, superconductors and solar cells, among other applications.

Aaronson says the government's role is to support basic scientific research, the kind needed to build and perfect quantum computers.

"We do not yet know how to build a fully scalable quantum computer. The quantum version of the transistor, if you like, has not been invented yet," Aaronson says.

On the software front, researchers have not yet developed applications that take full advantage of quantum computings capabilities.

"That's often misrepresented in the popular press, where it's claimed that a quantum computer is just a black box that does everything," Aaronson says.

Competition between the U.S. and China in quantum computing revolves, in part, around the role such a system could play in breaking the encryption that makes things secure on the internet.

Truly useful quantum computing applications could be as much as a decade away, Aaronson says. Initially, these tools would be highly specialized.

"The way I put it is that we're now entering the very, very early, vacuum-tube era of quantum computers," he says.

See original here:
Why Quantum Computing Gets Special Attention In The Trump Administration's Budget Proposal - Texas Standard

Where the Buzz About Quantum Computing Is Wrong – Toolbox

A lot of bold claims have been written about the recent emergence of quantum computing. It will revolutionize computing. It will break cryptography and the encryption that protects the world's data. It will enable the true rise of artificial intelligence as a force in the world.

While each of these assertions hints at some truth about the rise of quantum computers, there's also a fair amount of hype going around. Quantum computing will change the world, but not all the predictions are factually accurate.

So let's start with the basics of quantum computing.

Quantum computing is different from traditional computing because it escapes the binary foundation of the computer. Instead of just yes or no, the 0s and 1s that underpin current computer logic, there's also 'maybe.' These intermediate states occur because quantum computers take advantage of the quirky behavior of quantum phenomena.
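As a rough illustration (a deliberately simplified numerical toy, not how real quantum hardware is programmed), a single qubit can be described by two complex amplitudes, and squaring their magnitudes gives the probability of reading 0 or 1 when the qubit is measured:

# Toy model of one qubit: two complex amplitudes, one each for the |0> and |1> states.
import numpy as np

state = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition: the "maybe" state

probabilities = np.abs(state) ** 2  # Born rule: probability = |amplitude|^2
print(probabilities)  # [0.5 0.5] -- a 50/50 chance of measuring 0 or 1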

This new model will alter the computing landscape and open the door to solving some problems faster than traditional computers can. Prediction is far more efficient when there are intermediate states than with the black-and-white logic of yes or no. But this development will not change everything; it will just change some things. And it probably won't be making a big splash just yet.

So let's look at three places where the hype is not in touch with reality when it comes to quantum computers today.

"The most over-hyped aspect of quantum computing is the possible near-term algorithms, because we do not know which, if any, will work on devices within the next three to five years, and which can be run efficiently on current digital computers," says Dr. Joel Wallman, assistant professor of applied mathematics at the Institute for Quantum Computing at the University of Waterloo.

This paucity of appropriate code, much of which must be developed from the ground up, is just one hurdle that quantum computers must overcome before they are ready for widespread commercial use.

"Google's recent 53-qubit demonstration [of quantum computing] is akin to the Wright brothers' first flights at Kitty Hawk," says William Oliver, an MIT associate professor who teaches the university's xPRO course on quantum computing. "Their plane, the Wright Flyer, was not the first to fly. It didn't solve any pressing transportation problem. Nor did it herald widespread adoption; commercial aviation would only gradually emerge over the next few decades."

What the Wright Flyer did, and what quantum computers are doing now, are simply proofs of concept.

Oliver notes that the transistor was invented in 1947, but it was 25 years before the world had the Intel 4004 4-bit processor. It was another 25 years before the world got to the Pentium Pro with millions of transistors, and then another 20 years before the multi-core processors and GPUs with billions of transistors.

"Quantum computers are nascent," he says. "To realize their promise, we will need to build robust, reproducible machines and develop the algorithms to use them. Engineering and technology development take time."

There's a real chance that quantum computers will challenge current cryptography someday, rendering today's encryption obsolete. This is a known problem that cybersecurity professionals face, just as the Y2K Millennium Bug was a real problem that required a fix back in 1999.

Right now, however, quantum computing technology is nowhere near ready for this kind of code-breaking. The world has time to develop the next generation of security technology before current encryption methods stop working.

"The reality is that to break today's encryption requires a large-scale and fault-tolerant quantum computer, and we aren't there yet," says Tim Zanni, US technology sector leader for KPMG. "Therefore, we're unlikely to see a quantum computing-driven security breach in the near future."

"It is important to understand that quantum computers will not replace classical computers," says Dr. Bob Sutor, vice president for IBM's quantum computing Q ecosystem development at IBM Research. Quantum computers' fundamental properties complement traditional systems.

That's because the strength of quantum computers, having intermediate states somewhere between yes and no, can help the enterprise solve some intractable classical problems that blow up or become extremely time-consuming on traditional computers; but quantum computers are not efficient for many of today's other computing processes. The 0s and 1s of today's computers are just fine for many applications, and there's no need to completely replace traditional computers with quantum computers, even if that were feasible.

Quantum computers will therefore most likely occupy one part of the full computing landscape, just as there are processors built specifically for graphics or AI alongside other types of processors.

This means that a generation of computer science students will also need to learn how to use and program quantum computers ahead of the technology's emergence.

"From computer science courses to chemistry and business classes, students should be getting quantum ready," says Sutor.

Quantum computing is real, and it will have an impact on the world. But we're not there yet, and not everything is going to change once quantum computers do reach the point of commercial viability. The emergence of quantum computers is more like the slow emergence of commercial aviation.

"The rise of flight did not mark the beginning of the end for other modes of transportation; 90 percent of commercial shipping is still done today by ships," notes Oliver at MIT. "Rather, the events at Kitty Hawk are remembered for having demonstrated a new operational regime, the first self-propelled flight of a heavier-than-air aircraft."

It's what the flight represented, in other words, not what it practically accomplished. And so it is with this first demonstration of quantum computing.

See the original post here:
Where the Buzz About Quantum Computing Is Wrong - Toolbox

Correcting the jitters in quantum devices – The European Sting


This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Author: David L. Chandler, Writer, MIT News

A new study suggests a path to more efficient error correction, which may help make quantum computers and sensors more practical.

Labs around the world are racing to develop new computing and sensing devices that operate on the principles of quantum mechanics and could offer dramatic advantages over their classical counterparts. But these technologies still face several challenges, and one of the most significant is how to deal with noise: random fluctuations that can eradicate the data stored in such devices.

A new approach developed by researchers at MIT could provide a significant step forward in quantum error correction. The method involves fine-tuning the system to address the kinds of noise that are the most likely, rather than casting a broad net to try to catch all possible sources of disturbance.

The analysis is described in the journal Physical Review Letters, in a paper by MIT graduate student David Layden, postdoc Mo Chen, and professor of nuclear science and engineering Paola Cappellaro.

"The main issues we now face in developing quantum technologies are that current systems are small and noisy," says Layden. Noise, meaning unwanted disturbance of any kind, is especially vexing because many quantum systems are inherently highly sensitive, a feature underlying some of their potential applications.

And there's another issue, Layden says, which is that quantum systems are affected by any observation. So, while one can detect that a classical system is drifting and apply a correction to nudge it back, things are more complicated in the quantum world. "What's really tricky about quantum systems is that when you look at them, you tend to collapse them," he says.

Classical error correction schemes are based on redundancy. For example, in a communication system subject to noise, instead of sending a single bit (1 or 0), one might send three copies of each (111 or 000). Then, if the three received bits don't match, that signals an error. The more copies of each bit that are sent, the more effective the error correction can be.
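As a concrete illustration of that redundancy idea, here is a minimal Python sketch of the three-copy scheme described above (it illustrates only the classical principle, not the researchers' quantum method):

# Classical 3-bit repetition code: send each bit three times, then take a majority vote.
def encode(bit):
    return [bit, bit, bit]

def decode(received):
    # Majority vote: two or more 1s decodes to 1, otherwise 0.
    return 1 if sum(received) >= 2 else 0

sent = encode(1)          # [1, 1, 1]
corrupted = [1, 0, 1]     # noise flips the middle copy
print(decode(corrupted))  # 1 -- the single flipped bit is corrected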

The same essential principle could be applied to adding redundancy in quantum bits, or qubits. But, Layden says, "If I want to have a high degree of protection, I need to devote a large part of my system to doing these sorts of checks. And this is a nonstarter right now because we have fairly small systems; we just don't have the resources to do particularly useful quantum error correction in the usual way." So instead, the researchers found a way to target the error correction very narrowly at the specific kinds of noise that were most prevalent.

The quantum system they're working with consists of carbon nuclei near a particular kind of defect in a diamond crystal called a nitrogen-vacancy center. These defects behave like single, isolated electrons, and their presence enables the control of the nearby carbon nuclei.

But the team found that the overwhelming majority of the noise affecting these nuclei came from one single source: random fluctuations in the nearby defects themselves. This noise source can be accurately modeled, and suppressing its effects could have a major impact, as other sources of noise are relatively insignificant.

"We actually understand quite well the main source of noise in these systems," Layden says. "So we don't have to cast a wide net to catch every hypothetical type of noise."

The team came up with a different error correction strategy, tailored to counter this particular, dominant source of noise. As Layden describes it, the noise "comes from this one central defect, or this one central electron, which has a tendency to hop around at random. It jitters."

That jitter, in turn, is felt by all those nearby nuclei, in a predictable way that can be corrected.

"The upshot of our approach is that we're able to get a fixed level of protection using far fewer resources than would otherwise be needed," he says. "We can use a much smaller system with this targeted approach."

The work so far is theoretical, and the team is actively working on a lab demonstration of this principle in action. If it works as expected, this could become an important component of future quantum-based technologies of various kinds, the researchers say, including quantum computers that could potentially solve previously unsolvable problems, quantum communications systems that could be immune to snooping, and highly sensitive sensor systems.

"This is a component that could be used in a number of ways," Layden says. "It's as though we're developing a key part of an engine. We're still a ways from building a full car, but we've made progress on a critical part."

"Quantum error correction is the next challenge for the field," says Alexandre Blais, a professor of physics at the University of Sherbrooke, in Canada, who was not associated with this work. "The complexity of current quantum error-correcting codes is, however, daunting, as they require a very large number of qubits to robustly encode quantum information."

Blais adds, "We have now come to realize that exploiting our understanding of the devices in which quantum error correction is to be implemented can be very advantageous. This work makes an important contribution in this direction by showing that a common type of error can be corrected for in a much more efficient manner than expected. For quantum computers to become practical, we need more ideas like this."

The research was supported by the U.S. Army Research Office and the National Science Foundation.

Read the original:
Correcting the jitters in quantum devices - The European Sting