Amazon AWS says 'Very, very sophisticated practitioners of machine learning are moving to SageMaker' – ZDNet

AWS's Amazon SageMaker software, a set of tools for deploying machine learning, is not only spreading throughout many companies but is also becoming a key tool for some of the more demanding practitioners of machine learning, according to one of the executives in charge of it.

"We are seeing very, very sophisticated practitioners moving to SageMaker because we take care of the infrastructure, and so it makes them an order-of-magnitude more productive," said Bratin Saha, AWS's vice president in charge of machine learning and engines.

Saha spoke with ZDNet during the third week of AWS's annual re:Invent conference, which this year was held virtually because of the pandemic.

The benefits of SageMaker have to do with all the details of how to stage training tasks and deploy inference tasks across a variety of infrastructure.

SageMaker, introduced in 2017, can automate a lot of the grunt work that goes into setting up and running such tasks.

"Amazon dot com has invested in machine learning for more than twenty years, and they are moving on to SageMaker, and we have very sophisticated machine learning going on at Amazon dot com," says Amazon AWS's vice president for ML and engines, Bratin Saha.

While SageMaker might seem like something that automates machine learning for people who don't know how to do the basics, Saha told ZDNet that even experienced machine learning scientists find value in speeding up the routine tasks in a program's development.

"What they had to do up till now is spin up a cluster, make sure that the cluster was well utilized, spend a lot of time checking as the model is deployed, am I getting traffic spikes," said Saha, describing the traditional deployment tasks that had to be carried out by a machine learning data scientist. That workflow extends from initially gathering the data to labeling the data (in the case of labeled training), refine the model architecture, and then deploying trained models for inference usage and monitoring and maintaining those inference models as long as they are running live.

"You don't have to do any of that now," said Saha. "SageMaker gives you training that is server-less, in the sense that your billing starts when your model starts training, and stops when your model stops training."

Also: Amazon AWS unveils RedShift ML to 'bring machine learning to more builders'

Added Saha, "In addition, it works with spotinstances in a very transparent way; you don't have to say, Hey, have my spot instances been pre-empted, is my job getting killed, SageMaker takes care of all of that." Such effective staging of jobs can reduce costs by ninety percent, Saha contends.

Saha said that customers such as Lyft and Intuit, despite having machine learning capabilities of their own, are more and more taking up the software to streamline their production systems.

"We have some of the most sophisticated customers working on SageMaker," said Saha.

"Look at Lyft, they are standardizing their training on SageMaker, their training times have come down from several days to a few hours," said Saha. "MobileEye is using SageMaker training," he said, referring to the autonomous vehicle chip unit within Intel. "Intuit has been able to reduce their training time from six months to a few days." Other customers include the NFL, JP Morgan Chase, Georgia Pacific, Saha noted.

Also: Amazon AWS analytics director sees analysis spreading much more widely throughout organizations

Amazon itself has moved its AI work internally to SageMaker, he said. "Amazon dot com has invested in machine learning for more than twenty years, and they are moving on to SageMaker, and we have very sophisticated machine learning going on at Amazon dot com." As one example, Amazon's Alexa voice-activated appliance uses SageMaker Neo, an optimization tool that compiles trained models into a binary program with settings that will make the model run most efficiently when being used for inference tasks.
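For the Neo step the article mentions, the same SDK exposes a compile_model call on a trained estimator. Continuing from the hypothetical estimator sketched above, the following is an illustration only, with made-up shapes and paths, targeting a generic instance family rather than Alexa's actual hardware.

```python
# Illustrative Neo compilation of a trained model (hypothetical paths and shapes).
# compile_model produces a hardware-optimized artifact for the chosen target platform.
compiled_model = estimator.compile_model(
    target_instance_family="ml_c5",             # target hardware family
    input_shape={"data": [1, 3, 224, 224]},     # example input tensor shape
    output_path="s3://my-bucket/compiled/",     # where the optimized artifact lands
    framework="pytorch",
    framework_version="1.8",
)

# The compiled model can then be deployed like any other SageMaker model.
predictor = compiled_model.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
```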

There are numerous other parts of SageMaker, such as pre-built containers with select machine learning algorithms; a "Feature Store" where one can pick out attributes to use in training; and what's known as the Data Wrangler to create original model features from training data.

AWS has been steadily adding to the tool set.

During his AWS re:Invent keynote two weeks ago, Amazon's vice president of machine learning, Swami Sivasubramanian, announced that SageMaker can now automatically break up the parts of a large neural net and distribute those parts across multiple computers. This form of parallel computing, known as model parallelism, is usually something that takes substantial effort.

Amazon was able to reduce neural network training time by forty percent, said Sivasubramanian, for very large deep learning networks such as "T5," a version of Google's Transformer natural language processing model.

Continued here:
Amazon AWS says Very, very sophisticated practitioners of machine learning are moving to SageMaker - ZDNet

Way to Grow in Career – Upskilling in AI and Machine Learning – Analytics Insight

There are at least two clear patterns that show a demand-supply mismatch in tech occupations in frontier IT fields such as artificial intelligence and machine learning. One comes from industry predictions that estimate growth in the AI market from $21.46 billion to $190.61 billion between 2018 and 2025.

Machine learning and AI, cloud computing, cybersecurity, and data science are the most sought-after fields of knowledge and skills, and as technology professionals compete in a digital space rapidly being reshaped by automation, many of them are upskilling.

According to a Gartner report, AI-related job creation will reach two million net-new openings in 2025. However, there aren't that many professionals with the skill set to match this requirement. To close this gap, there is a growing need for professionals to upskill in these areas.

A study conducted by e-learning platform Simplilearn among 1,750 professionals found that 33% of respondents were motivated to take up courses to help them earn better salaries. Other motivating factors included opportunities to work on hands-on, industry-relevant projects, cited by 27% of participants, while another 21% said rewards and recognition pushed them to upskill.

Nearly all types of enterprise software, transport, factory automation, and other industries are increasingly using AI-based interfaces in their day-to-day operations. In fact, by 2030, AI may end up contributing USD 15.7 trillion to the global economy.

Mathematical and programming aptitudes are integral to gaining competency in this field. However, for seasoned tech professionals, it is also critical to build strong communication skills. An understanding of how the business functions and of the common processes used in everyday operations will help you better apply your core skills to improve organizational workflows.

The average annual compensation of a data scientist ranges from Rs 5 lakh to Rs 42 lakh for junior to mid-level positions, followed by Rs 4.97 lakh to Rs 50 lakh per year for a professional skilled in cloud computing, and Rs 5 lakh to Rs 70 lakh per year for jobs in artificial intelligence, the survey projected.

Among the more exciting opportunities to expect in 2021 is the rising use of AI in healthcare to identify and diagnose medical problems. Smart infrastructure to help manage rapid development in India's urban centres is another option being explored by the government.

Tech programs now routinely include AI, IoT, machine learning, and other core components of emerging technologies. Nonetheless, constant change in this dynamic field has made it necessary for professionals to keep upskilling through a reputable organization to stay relevant and perform well. Once professionals have upskilled themselves to meet the needs of the market, it is important that they communicate their expertise effectively to hiring companies.

As automation progressively replaces traditional entry-level technical jobs such as data entry and monitoring, it is becoming clear that moving up to cutting-edge skills such as AI, deep learning, machine learning, and cloud is the way forward. Artificial intelligence is estimated to replace almost seven million positions by 2037. Automation (largely powered by AI) is likely to affect 69% of jobs in India as corporations increasingly adopt the "whatever can be automated, will be automated" mantra to boost profitability.



Read the original post:
Way to Grow in Career - Upskilling in AI and Machine Learning - Analytics Insight

Become An Expert In Artificial Intelligence With The Ultimate Artificial Intelligence Scientist Certification Bundle – IFLScience

Interested in the fast-growing field of Artificial Intelligence (AI)? With so much under its umbrella, mastering the many aspects of AI can be cumbersome and even confusing. Truly understanding the vast world of AI means learning about its various subsets, how they are interconnected, and what happens when they work together.

It's an exciting idea, but where does one start? Take it from here: become a certified AI scientist with The Ultimate Artificial Intelligence Scientist Certification Bundle. The four featured courses are packed with 670 lessons covering Deep Learning, Machine Learning, Python, and TensorFlow. Designed for all skill levels, the courses cover everything from the basics to real-world examples and projects. Over 1,000 students have already enrolled in this highly rated bundle, which we break down below.

Deep Learning is at the heart of Artificial Intelligence, the key to solving the increasingly complex problems that will inevitably come up as AI advances. This course features a robust load of 180 lectures so you can gain a solid understanding of all things Deep Learning. For example, the course includes lessons on the intuition behind Artificial Neural Networks as well as differentiating between Supervised and Unsupervised Deep Learning. You'll work on real-world data sets to reinforce your learning, including applying Convolutional Neural Networks and Self-Organizing Maps. Over 267,000 students have seen success from this course, with 30,368 positive ratings.

Learn from the best in this Machine Learning course, which was expertly designed by two data scientists. Take the reins on all the algorithms, coding libraries, and complex theories with an impressive 40 hours of content. You'll master machine learning in Python and R (two open-source programming languages) and also learn to handle advanced techniques like Dimensionality Reduction. At the end of this class, you'll be able to build powerful machine learning models and know how to combine them to solve even the most complex problems. The step-by-step tutorials have garnered 119,297 positive ratings from 653,721 students.

You've likely heard of Python, the super-popular and beginner-friendly programming language that any aspiring AI expert should be familiar with. The 73 lessons in this foundational course, designed for all skill levels, are crafted to build upon each previous lesson, ensuring you grasp and retain everything you learn. Get comfortable with the core principles of programming, learn to code in Jupyter Notebooks, understand the Law of Large Numbers, and more. By the time you take the last lesson, you'll have a deep understanding of integer, float, logical, string, and other types in Python, plus know how to create a while() loop and a for() loop. Trust the 15,307 positive ratings from 111,676 students enrolled.
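For a flavor of the fundamentals that course covers (basic types, loops, and the Law of Large Numbers), here is a small standalone Python sketch; it is a generic illustration, not material from the course itself.

```python
import random

# Core types: integer, float, logical (bool), and string.
count = 10           # int
mean = 0.0           # float
is_done = False      # bool
label = "coin toss"  # str

# A for loop: simulate 10 coin tosses (heads = 1, tails = 0).
tosses = []
for _ in range(count):
    tosses.append(random.randint(0, 1))

# A while loop illustrating the Law of Large Numbers: the running average of
# many fair coin tosses converges toward 0.5 as the sample grows.
total, n = 0, 0
while n < 100_000:
    total += random.randint(0, 1)
    n += 1

print(label, tosses, "long-run average:", total / n)
```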

Consider this your complete guide to the recently released TensorFlow 2.0. The course, complete with 133 lessons, starts with the basics and builds on this foundation to cover topics ranging from neural network modeling and training to production. Ultimately, you'll be empowered with the know-how to create incredible Deep Learning and AI solutions and bring them to life. One really cool aspect of this hands-on course is that you'll have the opportunity to build a bot that acts as a broker for buying and selling stocks using Reinforcement Learning. Take a cue from the 185 positive ratings from 1,579 students and follow suit.
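As a taste of the kind of TensorFlow 2.0 workflow such a course builds toward, and not the course's own code, a minimal Keras model can be defined, trained, and evaluated in a few lines on synthetic data:

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic dataset: learn y = 3x + 1 with a little noise.
x = np.random.rand(256, 1).astype("float32")
y = 3.0 * x + 1.0 + 0.05 * np.random.randn(256, 1).astype("float32")

# A minimal neural network built with the Keras API in TensorFlow 2.x.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Training and evaluation via the standard fit/evaluate loop.
model.fit(x, y, epochs=20, batch_size=32, verbose=0)
print("final loss:", model.evaluate(x, y, verbose=0))
```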

This bundle is a great deal not only because buying even one of these courses separately would break the bank, but also because you'd only get a segmented view of the excitingly wide world of AI.

Right now, you can get The Ultimate Artificial Intelligence Scientist Certification Bundle for $34.99, down 95% from the original MSRP.

Prices subject to change.

Here is the original post:
Become An Expert In Artificial Intelligence With The Ultimate Artificial Intelligence Scientist Certification Bundle - IFLScience

Safe Internet: WOT uses machine learning and crowdsourcing to protect your phone and tablet – PhoneArena

Advertorial by WOT: the opinions expressed in this story may not reflect the positions of PhoneArena!

WOT is available in the form of an Android app or extension for Firefox, Opera, Chrome, and even the Samsung browser. This means you can use it on absolutely any Android device in your household, plus the family desktop PC.

In order to ensure its protection is always up to date, WOT utilizes a mixture of crowdsourcing, machine learning, and third-party blacklists. It will analyze user behavior and compare it against databases of known scams to make sure it's constantly on top of its game.

If you subscribe to premium ($2.49 per month on an annual plan), you gain access to WOT's superb Anti-Phishing shield, which will keep a lookout for clever scams. Premium users also have no limit on how many apps they can lock, and gain an auto-scanning feature, which will automatically check new Wi-Fi networks and apps for security flaws.

Here is the original post:
Safe Internet: WOT uses machine learning and crowdsourcing to protect your phone and tablet - PhoneArena

Using Machine Learning to Predict Which COVID-19 Patients Will Get Worse – Michigan Medicine

A patient enters the hospital struggling to breathe: they have COVID-19. Their healthcare team decides to admit them to the hospital. Will they be one of the fortunate ones who steadily improve and are soon discharged? Or will they end up needing mechanical ventilation?

That question may be easier to answer, thanks to a recent study from Michigan Medicine describing an algorithm to predict which patients are likely to quickly deteriorate while hospitalized.

"You can see large variability in how different patients with COVID-19 do, even among close relatives with similar environments and genetic risk," says Nicholas J. Douville, M.D., Ph.D., of the Department of Anesthesiology, one of the study's lead authors. "At the peak of the surge, it was very difficult for clinicians to know how to plan and allocate resources."

Combining data science and their collective experiences caring for COVID-19 patients in the intensive care unit, Douville, Milo Engoren, M.D., and their colleagues explored the potential of predictive machine learning. They looked at a set of patients with COVID-19 hospitalized during the first pandemic surge from March to May 2020 and modeled their clinical course.

The team generated an algorithm with inputs such as a patients age, whether they had underlying medical conditions and what medications they were on when entering the hospital, as well as variables that changed while hospitalized, including vital signs like blood pressure, heart rate and oxygenation ratio, among others.

Their question: which of these data points helped to best predict which patients would decompensate and require mechanical ventilation or die within 24 hours?

Of the 398 patients in their study, 93 required a ventilator or died within two weeks. The model was able to predict mechanical ventilation most accurately based upon key vital signs, including oxygen saturation ratio (SpO2/FiO2), respiratory rate, heart rate, blood pressure and blood glucose level.

The team assessed the data points of interest at 4, 8, 24, and 48-hour increments, in an attempt to identify the optimal amount of time necessary to predict, and intervene, before a patient deteriorates.

"The closer we were to the event, the higher our ability to predict, which we expected.But we were still able to predict the outcomes with good discrimination at 48 hours, giving providers time to make alterations to the patients care or to mobilize resources, says Douville.

For instance, the algorithm could quickly identify a patient on a general medical floor who would be a good candidate for transfer to the ICU, before their condition deteriorated to the point where ventilation would be more difficult.
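The study's own model and code are not reproduced in this article. Purely to illustrate the general shape of such a risk model, here is a hedged sketch that trains a logistic-regression classifier on entirely synthetic vital-sign features (SpO2/FiO2 ratio, respiratory rate, heart rate, blood pressure, glucose) to predict deterioration within 24 hours; none of these numbers come from the Michigan Medicine data.

```python
# Hedged, synthetic illustration of a deterioration-risk classifier.
# This is NOT the Michigan Medicine model; the data below are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400

# Synthetic features: SpO2/FiO2 ratio, respiratory rate, heart rate,
# mean arterial pressure, blood glucose.
X = np.column_stack([
    rng.normal(300, 80, n),   # SpO2/FiO2
    rng.normal(22, 6, n),     # respiratory rate
    rng.normal(90, 15, n),    # heart rate
    rng.normal(85, 12, n),    # mean arterial pressure
    rng.normal(140, 40, n),   # blood glucose
])

# Synthetic label: deterioration within 24h, loosely tied to low SpO2/FiO2
# and high respiratory rate so the example has learnable structure.
risk = (300 - X[:, 0]) / 80 + (X[:, 1] - 22) / 6 + rng.normal(0, 1, n)
y = (risk > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Discrimination is reported as AUROC, mirroring how such studies evaluate models.
print("AUROC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```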

In the long term, Douville and his colleagues hope the algorithm can be integrated into existing clinical decision support tools already used in the ICU. In the short term, the study brings to light patient characteristics that clinicians caring for patients with COVID-19 should keep in the back of their minds. The work also raises new questions about which COVID-19 therapies, such as anti-coagulants or anti-viral drugs, may or may not alter a patient's clinical trajectory.

Says Douville, "While many of our model features are well known to experienced clinicians, the utility of our model is that it performs a more complex calculation than the clinician could perform on the back of the envelope. It also distills the overall risk to an easily interpretable value, which can be used to flag patients in a way so they are not missed."

Paper cited: "Clinically Applicable Approach for Predicting Mechanical Ventilation in Patients with COVID-19," British Journal of Anaesthesia. DOI: 10.1016/j.bja.2020.11.03

Read more:
Using Machine Learning to Predict Which COVID-19 Patients Will Get Worse - Michigan Medicine

Improve Machine Learning Performance with These 5 Strategies – Analytics Insight

Advances in technology for capturing and processing large amounts of data have left us drowning in information, making it hard to extract insights from data at the rate we receive it. This is where machine learning offers value to a digital business.

We need strategies to improve machine learning performance more effectively. If we put effort in the wrong direction, we make little progress and waste a lot of time. We also need some expectation of what the chosen path can deliver, for instance, how much accuracy can be improved.

There are broadly two kinds of organizations that engage in machine learning: those that build applications with a trained ML model at the core of their business proposition, and those that apply ML to improve existing business workflows. In the latter case, articulating the problem is the initial challenge. Reducing cost or increasing revenue must be narrowed down to the point where it becomes solvable by acquiring the right data.

For example, if you need to minimize churn, data may help you detect customers at high risk of leaving by analyzing their activities on a website, a SaaS application, or even social media. Although you can rely on traditional metrics and make assumptions, an algorithm may uncover hidden dependencies between the data in customer profiles and the probability of leaving.
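As a hedged illustration of that churn example, a simple classifier over activity features might look like the sketch below; the column names and data are invented for the example, and this is not any particular vendor's pipeline.

```python
# Synthetic churn-prediction sketch; feature names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "logins_last_30d": rng.poisson(8, n),
    "support_tickets": rng.poisson(1, n),
    "days_since_last_visit": rng.integers(0, 60, n),
    "monthly_spend": rng.normal(50, 20, n),
})

# Synthetic target: customers who rarely log in and haven't visited recently
# are more likely to churn, so the model has a signal to find.
churn_score = -0.2 * df["logins_last_30d"] + 0.05 * df["days_since_last_visit"] + rng.normal(0, 1, n)
df["churned"] = (churn_score > 0.5).astype(int)

X, y = df.drop(columns="churned"), df["churned"]
model = GradientBoostingClassifier()
print("cross-validated AUC:", cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean())
```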

Resource management has become a significant part of a data scientist's duties. For instance, sharing a single on-prem GPU server among a group of five data scientists is a challenge; a lot of time is spent figuring out how to share those GPUs simply and effectively. Allocating compute resources for machine learning can be a major pain, and it takes time away from data science tasks.

Data science is a broad field of practices aimed at extracting meaningful insights from data in any form. Using data science in decision-making is also a better way to avoid bias. However, that may be trickier than you think. Even Google has recently fallen into the trap of showing higher-paying jobs to men than to women in its ads. Clearly, it isn't that Google data scientists are sexist; rather, the data the algorithm uses is biased because it was gathered from our interactions on the web.

Machine learning is compute-intensive, so a scalable machine learning foundation should be compute-agnostic. Combining public clouds, private clouds, and on-premise resources offers flexibility and agility for running AI workloads. Since requirements vary significantly between AI workloads, companies that build a hybrid cloud infrastructure can allocate resources more flexibly and in custom sizes. Public cloud lowers CapEx and provides the scalability required for periods of high compute demand. In companies with strict security requirements, adding private cloud is essential and can lower OpEx over the long term. Hybrid cloud helps you achieve the control and flexibility needed to improve resource planning.

Most models are built on a static subset of data and capture the conditions of the period when the data was gathered. Once you have one or more models deployed, they become dated over time and give less accurate predictions. Depending on how quickly the patterns in your business environment change, you will need to replace or retrain models more or less regularly.
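One lightweight way to act on that is to monitor a deployed model's performance on fresh labeled data and trigger retraining when it degrades. The sketch below shows that idea with made-up thresholds and placeholder functions; it is not a production monitoring system.

```python
# Hedged sketch of drift-triggered retraining; names and thresholds are illustrative.
from sklearn.metrics import roc_auc_score

AUC_FLOOR = 0.75  # retrain if live performance drops below this (arbitrary choice)

def evaluate_on_recent_data(model, X_recent, y_recent):
    """Score the deployed model on the latest labeled window of data."""
    return roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])

def maybe_retrain(model, X_recent, y_recent, train_fn):
    """Retrain with the newest data only when measured performance has drifted."""
    auc = evaluate_on_recent_data(model, X_recent, y_recent)
    if auc < AUC_FLOOR:
        print(f"AUC {auc:.3f} below floor {AUC_FLOOR}; retraining model")
        return train_fn(X_recent, y_recent)  # train_fn is supplied by the caller
    print(f"AUC {auc:.3f} still acceptable; keeping current model")
    return model
```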



View original post here:
Improve Machine Learning Performance with These 5 Strategies - Analytics Insight

How A Crazy Idea Changed The Way We Do Machine Learning: Test Of Time Award Winner – Analytics India Magazine

HOGWILD! Wild as it sounds, the paper that goes by the same name was supposed to be an art project by Christopher Re, an associate professor at Stanford AI Lab, and his peers. Little did they know that the paper would change the way we do machine learning. Ten years later, it even bagged the prestigious Test of Time award at the latest NeurIPS conference.

To identify the most impactful paper of the past decade, the conference organisers selected a list of 12 papers published at NeurIPS over the years (NeurIPS 2009, NeurIPS 2010, NeurIPS 2011) with the highest numbers of citations since their publication. They also collected data about the recent citation counts for each of these papers by aggregating citations that these papers received in the past two years at NeurIPS, ICML and ICLR. The organisers then asked the whole senior program committee, with 64 SACs, to vote on up to three of these papers to help in picking an impactful paper.

Much of machine learning is about finding the right values of variables for converging towards reasonable predictions. Hogwild! is a method that helps in finding those variables very efficiently. "The reason it had such a crazy name, to begin with, was it was intentionally a crazy idea," said Re in an interview for Stanford AI.

With its small memory footprint, robustness against noise, and rapid learning rates, Stochastic Gradient Descent (SGD) has proved to be well suited to data-intensive machine learning tasks. However, SGD's scalability is limited by its inherently sequential nature; it is difficult to parallelise. A decade ago, when the hardware was still playing catch-up with the algorithms, the key objective for scalable analysis of vast data was to minimise the overhead caused by locking. Back then, when parallelisation of SGD was proposed, there was no way around memory locking, which degraded performance. Memory locking was considered essential for coordination between processes.

Re and his colleagues demonstrated, using novel theoretical analysis, algorithms, and implementation, that stochastic gradient descent can be implemented without any locking.

In Hogwild!, the authors gave the processors equal access to shared memory, letting them update individual components of memory at will. The risk here is that a lock-free scheme can fail as processors could overwrite each other's progress. "However, when the data access is sparse, meaning that individual SGD steps only modify a small part of the decision variable, we show that memory overwrites are rare and that they introduce barely any error into the computation when they do occur," explained the authors.
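To make the idea concrete, here is a small, hedged Python sketch of Hogwild!-style updates: several threads run SGD on a shared weight vector with no locks, and because each example touches only a few (sparse) features, their writes rarely collide. Python's GIL means this only illustrates the scheme rather than demonstrating its speed; the paper's implementation ran as native multithreaded code.

```python
# Illustrative Hogwild!-style lock-free SGD on a shared parameter vector.
# Sparse examples mean each update touches only a few coordinates of w.
import threading
import numpy as np

rng = np.random.default_rng(0)
n_features, n_examples, n_threads = 1000, 20000, 4

# Sparse synthetic data: each example activates only ~5 random features.
true_w = rng.normal(size=n_features)
examples = []
for _ in range(n_examples):
    idx = rng.choice(n_features, size=5, replace=False)
    x_vals = rng.normal(size=5)
    y = float(np.dot(true_w[idx], x_vals))
    examples.append((idx, x_vals, y))

w = np.zeros(n_features)  # shared weights, updated by all threads without locks
lr = 0.05

def worker(chunk):
    for idx, x_vals, y in chunk:
        pred = np.dot(w[idx], x_vals)   # read shared weights
        grad = (pred - y) * x_vals      # gradient of squared error on this example
        w[idx] -= lr * grad             # lock-free sparse write

chunks = [examples[i::n_threads] for i in range(n_threads)]
threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("mean squared weight error:", float(np.mean((w - true_w) ** 2)))
```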

When asked about the weird exclamation point at the end of the already weird name, Re quipped: "I thought the phrase going hog-wild was hysterical to describe what we were trying. So I thought an exclamation point would just make it better."

In spite of being honoured as a catalyst behind the ML revolution, Re believes that this change would have happened with or without their paper. What really stands out, according to him, is that an oddball, goofy-sounding piece of research is recognised even after a decade. This is testimony to an old adage: there is no such thing as a bad idea!

Find the original paper here.

Here are the test of time award winners in the past:

2017: Random Features for Large-Scale Kernel Machines by Ali Rahimi and Ben Recht

2018: The Trade-Offs of Large Scale Learning by Léon Bottou

2019: Dual Averaging Method for Regularized Stochastic Learning and Online Optimisation by Lin Xiao


Here is the original post:
How A Crazy Idea Changed The Way We Do Machine Learning: Test Of Time Award Winner - Analytics India Magazine

How machines are changing the way companies talk – VentureBeat

Anyone who's ever been on an earnings call knows company executives already tend to look at the world through rose-colored glasses, but a new study by economics and machine learning researchers says that's getting worse, thanks to machine learning. The analysis found that companies are adapting their language in forecasts, SEC regulatory filings, and earnings calls due to the proliferation of AI used to analyze and derive signals from the words they use. In other words: businesses are beginning to change the way they talk because they know machines are listening.

Forms of natural language processing are used to parse and process text in the financial documents companies are required to submit to the SEC. Machine learning tools are then able to do things like summarize text or determine whether language used is positive, neutral, or negative. Signals these tools provide are used to inform the decisions advisors, analysts, and investors make. Machine downloads are associated with faster trading after an SEC filing is posted.

This trend has implications for the financial industry and economy, as more companies shift their language in an attempt to influence machine learning reports. A paper detailing the analysis, originally published in October by researchers from Columbia University and Georgia State University's J. Mack Robinson College of Business, was highlighted in this month's National Bureau of Economic Research (NBER) digest. Lead author Sean Cao studies how deep learning can be applied to corporate accounting and disclosure data.

"More and more companies realize that the target audience of their mandatory and voluntary disclosures no longer consists of just human analysts and investors. A substantial amount of buying and selling of shares [is] triggered by recommendations made by robots and algorithms which process information with machine learning tools and natural language processing kits," the paper reads. "Anecdotal evidence suggests that executives have become aware that their speech patterns and emotions, evaluated by human or software, impact their assessment by investors and analysts."

The researchers examined nearly 360,000 SEC filings between 2003 and 2016. Over that time period, machine downloads of regulatory filings from the SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR) tool increased from roughly 360,000 filing downloads to 165 million, climbing from 39% of all downloads in 2003 to 78% in 2016.

A 2011 study concluded that the majority of words identified as negative by a Harvard dictionary aren't actually considered negative in a financial context. That study also included lists of negative words used in 10-K filings. After the release of that list, researchers found that companies with high rates of machine downloads began to change their behavior and use fewer negative words.
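A toy version of that kind of text signal is simply the negative-word frequency of a filing. The sketch below uses a tiny stand-in word list, not the actual finance-specific dictionary from the 2011 study, and matches exact words only (no stemming).

```python
# Toy negative-tone signal for a filing; the word list is a small stand-in,
# not the real finance-specific negative-word dictionary.
import re
from collections import Counter

NEGATIVE_WORDS = {"loss", "impairment", "litigation", "decline", "adverse", "restatement"}

def negative_tone(text: str) -> float:
    """Fraction of words in the document that appear on the negative list."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    negatives = sum(counts[w] for w in NEGATIVE_WORDS)
    return negatives / len(words)

sample = "Litigation costs and an impairment charge led to an adverse outcome and a loss."
print(f"negative tone: {negative_tone(sample):.3f}")
```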

Generally, the stock market responds more positively to disclosures with fewer negative words or strong modal words.

"As more and more investors use AI tools such as natural language processing and sentiment analyses, we hypothesize that companies adjust the way they talk in order to communicate effectively and predictably," the paper reads. "If managers are aware that their disclosure documents could be parsed by machines, then they should also expect that their machine readers may also be using voice analyzers to extract signals from vocal patterns and emotions contained in managers' speeches."

A study released earlier this year by Yale University researchers used machine learning to analyze startup pitch videos and found that positive (i.e., passionate, warm) pitches increase funding probability. And another study from earlier this year (by Crane, Crotty, and Umar) showed hedge funds that use machines to automate downloads of corporate filings perform better than those that do not.

In other applications at the locus of AI and investor decisions, last year InReach Ventures launched a $60 million fund that uses AI as part of its process for evaluating startups.

See the article here:
How machines are changing the way companies talk - VentureBeat

Neurals AI predictions for 2021 – The Next Web

It's that time of year again! We're continuing our long-running tradition of publishing a list of predictions from AI experts who know what's happening on the ground, in the research labs, and at the boardroom tables.

Without further ado, let's dive in and see what the pros think will happen in the wake of 2020.

Dr. Arash Rahnama, Head of Applied AI Research at Modzy:

Just as advances in AI systems are racing forward, so too are opportunities and abilities for adversaries to trick AI models into making wrong predictions. Deep neural networks are vulnerable to subtle adversarial perturbations applied to their inputs (adversarial AI) which are imperceptible to the human eye. These attacks pose a great risk to the successful deployment of AI models in mission-critical environments. At the rate we're going, there will be a major AI security incident in 2021 unless organizations begin to adopt proactive adversarial defenses into their AI security posture.

2021 will be the year of explainability. As organizations integrate AI, explainability will become a major part of ML pipelines to establish trust for the users. Understanding how machine learning reasons against real-world data helps build trust between people and models. Without understanding outputs and decision processes, there will never be true confidence in AI-enabled decision-making. Explainability will be critical in moving forward into the next phase of AI adoption.

The combination of explainability and new training approaches initially designed to deal with adversarial attacks will lead to a revolution in the field. Explainability can help us understand what data influenced a model's prediction and how to understand bias, information which can then be used to train robust models that are more trusted, reliable, and hardened against attacks. This tactical knowledge of how a model operates will help create better model quality and security as a whole. AI scientists will re-define model performance to encompass not only prediction accuracy but issues such as lack of bias, robustness, and strong generalizability to unpredicted environmental changes.
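For readers unfamiliar with the attacks Rahnama describes, the classic fast gradient sign method illustrates how a tiny, input-aligned perturbation can flip a model's prediction. The sketch below is a generic illustration in TensorFlow using an untrained toy classifier and random input, not Modzy's technology; on a trained model with real images, a small epsilon often changes the predicted label while leaving the image visually unchanged.

```python
# Generic FGSM-style adversarial perturbation sketch (illustration only).
import numpy as np
import tensorflow as tf

# A small untrained classifier stands in for a deployed model.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

image = tf.convert_to_tensor(np.random.rand(1, 28, 28).astype("float32"))
label = tf.convert_to_tensor([3])

# The gradient of the loss with respect to the *input* gives the attack direction.
with tf.GradientTape() as tape:
    tape.watch(image)
    loss = loss_fn(label, model(image))
grad = tape.gradient(loss, image)

epsilon = 0.05  # perturbation budget, imperceptibly small for real images
adversarial = tf.clip_by_value(image + epsilon * tf.sign(grad), 0.0, 1.0)

print("original prediction:", int(tf.argmax(model(image), axis=1)[0]))
print("adversarial prediction:", int(tf.argmax(model(adversarial), axis=1)[0]))
```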

Dr. Kim Duffy, Life Science Product Manager at Vicon.

Forming predictions for artificial intelligence (AI) and machine learning (ML) is particularly difficult to do while only looking one year into the future. For example, in clinical gait analysis, which looks at a patient's lower limb movement to identify underlying problems that result in difficulties walking and running, methodologies like AI and ML are very much in their infancy. This is something Vicon highlights in our recent life sciences report, "A deeper understanding of human movement." To utilize these methodologies and see true benefits and advancements for clinical gait will take several years. Effective AI and ML require a massive amount of data to effectively learn trends and pattern identifications using the appropriate algorithms.

For 2021, however, we may see more clinicians, biomechanists, and researchers adopting these approaches during data analysis. Over the last few years, we have seen more literature presenting AI and ML work in gait. I believe this will continue into 2021, with more collaborations occurring between clinical and research groups to develop machine learning algorithms that facilitate automatic interpretations of gait data. Ultimately, these algorithms may help propose interventions in the clinical space sooner.

It is unlikely we will see the true benefits and effects of machine learning in 2021. Instead, we'll see more adoption and consideration of this approach when processing gait data. For example, the presidents of Gait and Posture's affiliate societies provided a perspective on the clinical impact of instrumented motion analysis in their latest issue, where they emphasized the need to use methods like ML on big data in order to create better evidence of the efficiency of instrumented gait analysis. This would also provide better understanding and less subjectivity in clinical decision-making based on instrumented gait analysis. We're also seeing more credible endorsements of AI/ML, such as from the Gait and Clinical Movement Analysis Society, which will also encourage further adoption by the clinical community moving forward.

Joe Petro, CTO of Nuance Communications:

In 2021, we will continue to see AI come down from the hype cycle, and the promise, claims, and aspirations of AI solutions will increasingly need to be backed up by demonstrable progress and measurable outcomes. As a result, we will see organizations shift to focus more on specific problem solving and creating solutions that deliver real outcomes that translate into tangible ROI, not gimmicks or building technology for technology's sake. Those companies that have a deep understanding of the complexities and challenges their customers are looking to solve will maintain the advantage in the field, and this will affect not only how technology companies invest their R&D dollars, but also how technologists approach their career paths and educational pursuits.

With AI permeating nearly every aspect of technology, there will be an increased focus on ethics and deeply understanding the implications of AI in producing unintentional consequential bias. Consumers will become more aware of their digital footprint and how their personal data is being leveraged across systems, industries, and the brands they interact with, which means companies partnering with AI vendors will increase the rigor and scrutiny around how their customers' data is being used, and whether or not it is being monetized by third parties.

Dr. Max Versace, CEO and Co-Founder, Neurala:

We'll see AI deployed in the form of inexpensive and lightweight hardware. It's no secret that 2020 was a tumultuous year, and the economic outlook is such that capital-intensive, complex solutions will be sidestepped for lighter-weight, perhaps software-only, less expensive solutions. This will allow manufacturers to realize ROIs in the short term without massive up-front investments. It will also give them the flexibility needed to respond to fluctuations in the supply chain and customer demands, something that we've seen play out on a larger scale throughout the pandemic.

Humans will turn their attention to why AI makes the decisions it makes. When we think about the explainability of AI, it has often been talked about in the context of bias and other ethical challenges. But as AI comes of age, gets more precise and reliable, and finds more applications in real-world scenarios, we'll see people start to question the why. The reason? Trust: humans are reluctant to give power to automatic systems they do not fully understand. For instance, in manufacturing settings, AI will need to not only be accurate, but also explain why a product was classified as normal or defective, so that human operators can develop confidence and trust in the system and let it do its job.

Another year, another set of predictions. You can see how our experts did last year by clicking here. You can see how our experts did this year by building a time machine and traveling to the future. Happy Holidays!

Published December 28, 2020 07:00 UTC

Read more:
Neurals AI predictions for 2021 - The Next Web

Machine Learning Market Size 2020 by Top Key Players, Global Trend, Types, Applications, Regional Demand, Forecast to 2027 – LionLowdown

New Jersey, United States: The report, titled "Machine Learning Market Size By Types, Applications, Segmentation, and Growth: Global Analysis and Forecast to 2019-2027," first introduces the fundamentals of machine learning: definitions, classifications, applications, and a market overview; product specifications; production methods; cost structures; raw materials; and so on. The report takes into account the impact of the novel COVID-19 pandemic on the Machine Learning market and provides an assessment of the market definition as well as identification of the top key manufacturers, which are analyzed in depth against the competitive landscape in terms of price, sales, capacity, import, export, market size, consumption, gross, gross margin, and market share. It also offers quantitative analysis of the machine learning industry from 2019 to 2027 by region, type, application, and consumption rating by region.

Impact of COVID-19 on the Machine Learning Market: The coronavirus recession is an economic recession hitting the global economy in 2020 due to the COVID-19 pandemic. The pandemic could affect three main aspects of the global economy: manufacturing, supply chains, and business and financial markets. The report offers a full view of the Machine Learning market, outlining the impact of COVID-19 and the changes expected in the industry's future prospects, taking into account political, economic, social, and technological parameters.

Request Sample Copy of this Report @ Machine Learning Market Size

In market segmentation by manufacturers, the report covers the following companies-

How can you overcome obstacles during the 2020-2027 period using the Global Machine Learning market report?

Now, turning to the main external factors: Porter's five forces are the main elements to consider when moving into new business markets. Customers get the opportunity to use these approaches to plan field-tested strategies from scratch for the coming financial years.

We have faith in our services and the data we share with our esteemed customers. We have carried out long periods of research and in-depth investigation of the Global Machine Learning market to provide deep insights into it. In this way, customers are equipped with the instruments of data (as far as raw numbers are concerned).

Graphs, diagrams, and infographics are used to illustrate the trends that have shaped the market. Past patterns reveal market turbulence and its final results. The analysis of current trends, in turn, reveals the paths organizations must take to align themselves with the market.

Machine Learning Market: Regional analysis includes:

- Asia-Pacific (Vietnam, China, Malaysia, Japan, the Philippines, Korea, Thailand, India, Indonesia, and Australia)
- Europe (Turkey, Germany, Russia, the UK, Italy, France, etc.)
- North America (the United States, Mexico, and Canada)
- South America (Brazil, etc.)
- The Middle East and Africa (GCC countries and Egypt)

The report includes Competitors Landscape:

- Major trends and growth projections by region and country
- Key winning strategies followed by the competitors
- Who are the key competitors in this industry?
- What will be the potential of this industry over the forecast tenure?
- What are the factors propelling the demand for the Machine Learning industry?
- What are the opportunities that shall aid in the significant proliferation of market growth?
- What are the regional and country-wise regulations that shall either hamper or boost the demand for the Machine Learning industry?
- How has COVID-19 impacted the growth of the market?
- Has the supply chain disruption caused changes in the entire value chain?

The report also covers the trade scenario, Porter's analysis, PESTLE analysis, value chain analysis, company market share, and segmental analysis.

About us:

Market Research Blogs is a leading global research and consulting firm serving over 5,000 customers. Market Research Blogs provides advanced analytical research solutions while offering information-enriched research studies. We offer insight into strategic and growth analyses, the data necessary to achieve corporate goals, and critical revenue decisions.

Our 250 analysts and SMEs offer a high level of expertise in data collection and governance, and use industrial techniques to collect and analyze data on more than 15,000 high-impact and niche markets. Our analysts are trained to combine modern data collection techniques, superior research methodology, expertise, and years of collective experience to produce informative and accurate research.

Get More Market Research Report Click @ Market Research Blogs

Read more:
Machine Learning Market Size 2020 by Top Key Players, Global Trend, Types, Applications, Regional Demand, Forecast to 2027 - LionLowdown

What the Hell Is Quantum Chess? | IE – Interesting Engineering

Have you ever heard of Quantum Chess? If not, we are confident you are in for a real treat.

Read on to find out more about this interesting take on a very ancient strategy game. But brace yourself, things are about to get a little "spooky".


Quantum Chess is a variant of the classical strategy game. It incorporates the principles of quantum physics. For example, unlike traditional chess, the pieces can be placed into a superposition of two locations, meaning that a piece can occupy more than one square.

Unlike chess pieces in the conventional game, where, for example, a pawn is always a pawn, a quantum chess piece is a superposition of "states," with each state representing a different conventional piece.

Conventional chess is a very complex game, although it is possible for computer algorithms to beat the world's greatest chess players by accurately determining the moves necessary to win the game at any point.

The main rationale behind the creation of Quantum Chess is to introduce an element of unpredictability into the game, and thereby place the computer and the human on a more equal footing. The game can also help "level the playing field" somewhat between human players of widely different skills and experience with chess.

"It's like you're playing in a multiverse but the different boards [in different universes] are connected to each other," said Caltech physicist Spiros Michalakis during a livestream of a recent Quantum Chess tournament. "It makes 3D chess from Star Trek look silly."

But don't let the term intimidate you. New players don't need to be experts in quantum physics; a basic understanding of chess is actually more important.

While it might sound like something of a gimmick, Quantum Chess is an interesting and entertaining spin on the classic game that many find enjoyable. Unless, of course, you cannot live without knowing for sure what and where each piece is at any given time.

If that is the case, you might find this one of the most frustrating games ever created!

Quantum Chess, as you have probably already worked out, is not like any game of classical chess you have ever played. But, it is important to note that there are also several variants of Quantum Chess.

The best known is probably the one created by Chris Cantwell when he was a graduate student at the University of Southern California. This variant differs from other examples in that it is more "truly quantum" than others.

"My initial goal was to create a version of quantum chess that was truly quantum in nature, so you get to play with the phenomenon," Cantwell said in an interview with Gizmodo back in 2016.

"I didn't want it to just be a game that taught people quantum mechanics. The idea is that by playing the game, a player will slowly develop an intuitive sense of the rules governing the quantum realm. In fact, I feel like I've come to more intuitively understand quantum phenomena myself, just by making the game," he added.

In Cantwell's version of Quantum Chess, this superposition of pieces is indicated by a ring that details the probability that the piece can actually be found in a given square. Not only that, but when moving a piece, each action can also be governed by probability.

You can think of the pieces of the game existing on multiple boards in which their numbers are also not fixed. The board you see is a kind of overview of all of these other boards and a single move acts on other boards at the same time.

Whenever a piece moves, many calculations are made behind the scenes to determine the actual outcome, which could be completely unexpected.

That being said, moves do follow the basic rules of traditional chess, including things like castling and en passant. However, there are a few important differences:

Pieces in this version of Quantum Chess can make a series of either "quantum moves" (except for pawns) or regular chess moves. In this sense, the pieces can occupy more than one square on the multiverse of boards simultaneously.

These moves also come in a variety of "flavors".

The first is a move called a "split move". This can be performed by all non-pawn pieces and allows a piece to actually occupy two different target squares that it could traditionally reach in normal chess.

But, this can only be done if the target square is unoccupied or is occupied by pieces of the same color and type. A white knight, for example, could use this kind of move to occupy the space of another white knight.

Such a move cannot, however, be used to capture an opponent's piece.

Another interesting move is called a "merge move". This can be performed by all pieces except pawns and, like a split move, can only be performed on an unoccupied square or one occupied by a piece of the same type and color.

Using our previous example of a white knight, this would mean that two white knights could merge together on the same square. Again, this move cannot be used to capture enemy pieces.

So how do you take pieces in Quantum Chess?

Well, when two pieces of different colors meet on the same square, the game makes a series of measurements. These measurements are designed to answer a specific yes-or-no question.

For example, the game's mechanics will look at certain squares to determine whether they are occupied or not. The outcome of this can be to cause a piece's "superposition" state to "collapse."

If the superposition state collapses, then the desired move will be performed. If not, the move is not made and the player's turn ends.
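To make the split and measurement mechanics a bit more concrete, here is a heavily simplified, hedged Python sketch that tracks a single piece as probabilities over squares. Real Quantum Chess tracks quantum amplitudes over entire board states and handles entanglement between pieces, which this toy deliberately ignores.

```python
# Toy model of a single piece in superposition over squares (probabilities only).
# Real Quantum Chess uses amplitudes over whole board states; this is a simplification.
import random

piece = {"e4": 1.0}  # the knight definitely starts on e4

def split_move(piece, target_a, target_b):
    """Split the piece's probability equally between two target squares."""
    new = {}
    for square, p in piece.items():
        new[target_a] = new.get(target_a, 0.0) + p / 2
        new[target_b] = new.get(target_b, 0.0) + p / 2
    return new

def measure(piece, square):
    """Ask 'is the piece on this square?' and collapse the superposition."""
    p_here = piece.get(square, 0.0)
    if random.random() < p_here:
        return True, {square: 1.0}  # collapsed onto the square
    remaining = {s: p for s, p in piece.items() if s != square}
    total = sum(remaining.values()) or 1.0
    return False, {s: p / total for s, p in remaining.items()}  # renormalize elsewhere

piece = split_move(piece, "f6", "d6")   # knight now 50/50 between f6 and d6
found, piece = measure(piece, "f6")     # e.g., an attempted capture forces a measurement
print("found on f6:", found, "state:", piece)
```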

Capturing is also very different in a game of Quantum Chess. When a player attempts to do this, the game will make calculations for the square where the piece is situated and for its target square, as well as any other squares in its path, to answer the question, "is the attacking piece present and can it reach the target?".

If the answer is no, it is important to note that this doesn't necessarily mean the attacking piece is not present. Nor does it mean that its path is blocked.

Another interesting concept of Quantum Chess is called "exclusion." If a move's target square is occupied, in superposition, by a piece that cannot be captured by the move, it is called an exclusion move.

Again, calculations are made for the target square and any squares in the path of an allowed move by a piece in superposition. This is done to answer the same question as capturing, with similar outcomes.

Castling is also very different in Quantum Chess. This move always involves two targets, and the same measurements are made for both targets. Castling cannot be used to capture, and will always be an exclusion move.

So, you might be wondering how you actually win a game of Quantum Chess?

Just like traditional chess, the aim of the game is to capture the opponent's king. However, unlike in traditional chess, the concept of checkmate does not exist.

To win, the enemy king must no longer actually exist on the board. As any piece, including the king, can exist in a state of superposition, it can either be captured or not, which further complicates the issue.

The game, therefore, continues until it is known, with certainty, that a particular player has no king left. For this reason, it is possible for both players to lose their king at the same time and the game would then be considered a draw.

Another important thing to note is that each player has a set amount of time for the game. For this reason, you can also win by running an opponent's time out.

How you play Quantum Chess depends on the variant of the game you are playing. We have already covered the rules of one variant above, and that game can be played through Quantum Realm Games. But another version, created by Alice Wismath at the School of Computing at Queen's University in Canada, has some slightly different rules.

You can try that game for yourself here.

In her version, each player has sixteen pieces. These pieces are in a quantum state of superposition of two types: a primary and a secondary type.

They are also in either an unknown (quantum) type or a known (classical) type. When a piece is "touched," it collapses into its classical state and has an equal probability of becoming either its primary or secondary type. The king, however, is an exception, and is always in a classical state.

Each player has one king and its position is always known.

All other pieces are assigned the following primary piece types: left rook, left bishop, left knight, queen, right knight, right bishop, right rook, and pawns one through eight. Secondary piece types are then randomly assigned from this same list of piece types so that each type occurs exactly twice in the player's pieces.

Each piece is created at the start of each game and superpositions are not changed throughout the game. Pieces also start as they would in regular chess, on the first two rows, according to their primary piece type with all, except the king, in a state of superposition.

Once a quantum state piece is touched (i.e. chosen to move), it collapses into one of its two predetermined states, and this state is suddenly revealed to both players.

This can mean that a pawn in the front row can suddenly become a white knight once the piece has been "touched". You won't know until the piece's quantum state collapses.

Quantum Chess boards are the same as regular chess boards except that when a piece lands on a white square it remains in its classical state. When pieces land on black squares, however, they undergo a quantum transformation and regain, if lost, their quantum superposition.

This means that a previously "revealed" pawn can also suddenly transform into a queen if that was one of its predetermined primary or secondary types. A very interesting concept indeed.

To play the game, each player chooses a piece to move and must move it. If the quantum piece collapses into a piece type with no possible moves, then the player's move is over.

Pieces in classical states with no possible moves cannot be chosen. All pieces move as they would in classical chess with some of the following exceptions:

Pieces can also be captured as normal, and quantum pieces collapse from their superposition state and are removed from play.

If a player touches a quantum piece that collapses into a state that puts the opponent's king in check, their move is over. The opponent, however, is not required to get out of check in such circumstances.

Pawns that reach the opposite side of the board can be promoted to a queen, bishop, rook, or knight, regardless of the number of pieces of that type already in the game. Also, if a piece in the quantum state on the far row is touched and revealed to be a pawn, it is promoted, but the promotion takes up the turn. The superimposed piece type is not affected.

To win the game, each player must capture the enemy's king, as a checkmate does not happen in Quantum Chess. For this reason, kings can actually move into a position that would normally be considered check.

Games are considered a draw if both opponents are left with only their king in play or 100 consecutive moves have been made with no captures or pawn movements by either player.

It was recently announced that the world's first Quantum Chess tournament had been won by Aleksander Kubica, a postdoctoral fellow at Canada's Perimeter Institute for Theoretical Physics and Institute for Quantum Computing. The tournament was held on the 9th of December 2020 at the Q2B 2020 conference.

The tournament games are timed, and Kubica managed to beat his opponent, Google's Doug Strain, by letting him run out of time. This currently makes Kubica officially the best Quantum Chess player in the world.

Not a bad way to see out one of the worst years in living memory.

And that, ladies and gentlemen, is a wrap.

If you like the sound of playing Quantum Chess, why not check out either of the versions we have discussed above in this article. Who knows, you might get proficient enough to challenge Kubica for the title in the not too distant future?

Follow this link:
What the Hell Is Quantum Chess? | IE - Interesting Engineering

Rewind 2020: Business, politics, social and professional impact, and what lies ahead – YourStory

In this year-end article, we look at the broad array of changes witnessed in 2020, transformative forces, and future trends for 2021 and beyond.

Some of the obvious developments for the year 2020 were offline or in-person meetings being replaced by virtual meetings, and travel and tourism being replaced by OTT and online binging. Office space was replaced by work from home.

Polluted air was replaced by cleaner air. Budget allocations for defence were reduced and budget allocations to stimulate the economy were increased. Going to schools and colleges was replaced by online classes or your teachers were replaced by teachers from anywhere. The swanky stores and fancy malls were replaced by online sales.

The most important change was that the GDP, the type of governance, or the climate that a country had did not matter; this is what I call a level playing field for the world.

All the above changes were across all countries, across all continents, across all levels of the society. It did not matter if you were developed or not, it did not matter if you had a medical infrastructure better than the others, it did not matter if you were in the tropics or not, it did not matter if you were rich or poor, and so on so forth.

The underlying impact of all of this, both short term and long term, is great and will only grow. For example, corporates are questioning the need to travel or to have office space in swanky zip codes. Parents are questioning the high school or college fees that they have to pay.

Governments are realising the importance of the impact of sporadic growth on the environment. They are questioning whether chemical warfare is the future or not, especially when one country can't stay in isolation from the others.

The country that rules the tech space will rule the world and will be the future economic power.

Whilst all of the above developments were happening on the ground, there were huge enhancements in Artificial Intelligence, Machine Learning, Blockchain, facial recognition software, quantum computing, data storage, wearable devices and adoption of 5G.

All of this combined will shape the future of the world we live in. Based on the above context, here is what I feel the coming year or two holds for all of us, and for the world at large.

The misuse of advances in science and tech has always had a negative impact on our future, from minor misuses such as audio and video content distortion, to the hacking of websites and passwords, to targeted warfare. I fear that the use of AI and ML by countries within social media or other digital means of communication can gradually change the mindset of a society, a country or a generation without them even realising it.

The predictable online behaviour of an individual or a group of individuals can be steered towards a more regimented or chaotic society by planting whatever algorithms one wants, whether by a political party, a country or a group of countries.

So, while we have to be careful about the use and influence of online behaviour, especially social media, we also need to be mindful of the fact that countries will not trust other countries.

Land records and legal documents will be more authentic and safer. Tokenisation of investment in shares or equity, in land and property, and other assets will also revolutionise the world. Tokenisation will democratise investments across all sectors of investments. And many such things will be much more secure and easy to transact.

But will this lead to a new currency, an e-currency for every country, and a new world order which will be cashless and corruption-free? Would the countries or individuals that stand to lose from all this let it happen? Not in 2021 or 2022, but we shall soon know this too.

While life becomes smaller and easier, our memories will fade as we become more dependent on devices, our ability to be human will gradually diminish, and more knowledge will be imparted to us than we need or can digest. The pace of change for the human race will accelerate many times over, meaning that what once took decades to change will now take only years. Whether that is good or bad is for all of us to see and live through.

Furthermore, in my opinion, here are a few things that hopefully will not change or will make a strong comeback.

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)

Excerpt from:
Rewind 2020: Business, politics, social and professional impact, and what lies ahead - YourStory

Here’s Why Quantum Computing Will Not Break Cryptocurrencies – Forbes

There's a lurking fear in cryptocurrency communities about quantum computing. Could it break cryptocurrencies and the encryption that protects them? How close might that be? Do the headlines around quantum supremacy mean that my private keys are at risk?

The simple answer: no. But let's dive deeper into this phenomenon and really try to understand why this is the case and how quantum computing will interact with cryptocurrencies.

To start off with, let's define quantum computing and the classical computing we're all used to, and see where the terms compare and contrast with one another. The relationship between quantum and classical computing can be roughly compared to that between classical pre-1900s physics and modern physics, which comprises Einstein's insights on relativity and quantum physics.

Classical computing is the kind of computing we've grown used to: the extensions of Turing's theories of computation, the laptops and mobile phones that you carry around with you. Classical computing relies heavily on the manipulation of physical bits, the famous 0s and 1s.

Quantum computing relies on qubits, bits that are held in superposition and use quantum principles to complete calculations. The information captured or generated by a quantum system benefits from the ability of qubits to be in more than one physical state at a time (superposition), but there is information decay in capturing the state of the system.
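
To make the contrast concrete, here is a minimal sketch (in Python with NumPy, my own illustration rather than anything from the article) of a single qubit as a two-amplitude state vector whose measurement outcomes follow the Born rule:

```python
# Minimal sketch: a qubit as a 2-component state vector, sampled with NumPy.
import numpy as np

# A classical bit is exactly 0 or 1.
classical_bit = 1

# A qubit in an equal superposition of |0> and |1>: amplitude 1/sqrt(2) each.
qubit = np.array([1, 1]) / np.sqrt(2)

# Measurement collapses the superposition; outcome probabilities are the
# squared magnitudes of the amplitudes (the Born rule).
probabilities = np.abs(qubit) ** 2          # -> [0.5, 0.5]
samples = np.random.choice([0, 1], size=10, p=probabilities)
print(probabilities, samples)
```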

One point that will be immediately relevant to the discussion is that quantum computers are not, as a result, universally better than classical computers. When people speak about quantum supremacy, including reports from Google and China, they really mean that a quantum computer can do a certain task better than classical computers, perhaps one that is impossible to do in any reasonable timeframe on classical computers.

We can think of this in terms of time scales from a computing perspective: there are some functions, but not all, that go from being impossible to accomplish in any meaningful human-level time period to becoming slow but manageable with a large enough quantum computer.

In a way, you can think of Turing tests and quantum supremacy tests in much the same way. Designed at first to demonstrate the superiority of one system over another (in the case of Turing tests, artificial language generation vs. human language comprehension; in the case of quantum supremacy tests, quantum computing systems vs. classical computers), they've become more gimmick than substance.

A quantum computer only has to perform better at some minute and trivial task that might seem impressive but is completely useless, in much the same way that machine-generated English in a Turing test might fool only a Ukrainian child with no fluency in the language.

This means that, for quantum supremacy to matter, we have to narrow down to a function at which quantum computers are better and that would materially affect cryptocurrencies or the encryption they're built on.

One area of specific focus is Shor's algorithm, which can efficiently factor a large number into its prime factors. This is a very useful property for breaking encryption, since the RSA family of encryption depends on the difficulty of factoring the product of two large primes in exactly this manner. Shor's algorithm works in theory with a large enough quantum computer, and so it's a practical concern that Shor's algorithm might eventually come into play and, among other things, RSA encryption might be broken.
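
As a rough illustration of why factoring matters, here is a toy sketch with deliberately tiny numbers (my own example, not from the article): anyone who can factor the public RSA modulus can recompute the private key, and that factoring step is exactly what Shor's algorithm would accelerate.

```python
# Illustrative toy only (tiny numbers; requires Python 3.8+ for pow(e, -1, m)):
# RSA's private key can be recomputed by anyone who can factor the public
# modulus n, which is the step a large enough quantum computer running
# Shor's algorithm would make fast.
p, q = 61, 53                 # secret primes (real RSA uses primes of ~1024+ bits)
n = p * q                     # public modulus
e = 17                        # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # legitimate private exponent

message = 42
ciphertext = pow(message, e, n)

# The attack: factor n (trial division here; Shor's algorithm does this
# efficiently for large n), rebuild phi, and recover the private exponent.
factor = next(f for f in range(2, n) if n % f == 0)
phi_attacker = (factor - 1) * (n // factor - 1)
d_attacker = pow(e, -1, phi_attacker)

assert pow(ciphertext, d_attacker, n) == message
print("attacker-recovered plaintext:", pow(ciphertext, d_attacker, n))
```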

On this front, the US National Institute of Standards and Technology (NIST) has already started gathering proposals for post-quantum cryptography: encryption that would continue to operate and not be broken even by much larger quantum computers than the ones we're currently able to build. They estimate that quantum computers large enough to disrupt classical encryption could potentially arrive in the next twenty years.

For cryptocurrencies, this could mean a fork in the future that affects large parts of the chain, but it will be somewhat predictable, since a lot of thought is already being put into post-quantum encryption technology. Bitcoin would not be one of the first planks to fall if classical encryption were suddenly broken, for a number of reasons. Yet a soft fork (as opposed to a hard one) might be enough to help move crypto-assets from suddenly insecure keys to secure post-quantum encryption.

Even an efficient implementation of Shor's algorithm may not break some of the cryptography standards used in bitcoin. SHA-256 is theorized to be quantum-resistant.

The most efficient theoretical implementation of a quantum computer for detecting a SHA-256 collision is actually less efficient than the theorized classical approach for breaking the standard. The wallet file in the original Bitcoin client uses SHA-512 (a more secure variant than SHA-256) to help encrypt private keys.
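
For a sense of the primitives being discussed, the sketch below (using Python's standard hashlib, not the Bitcoin client's own code) computes SHA-256 and SHA-512 digests and notes the work factors commonly quoted for attacking SHA-256 classically versus with Grover's algorithm:

```python
# Minimal sketch, assuming only the standard library.
import hashlib

tx = b"example transaction data"
print("SHA-256:", hashlib.sha256(tx).hexdigest())
print("SHA-512:", hashlib.sha512(tx).hexdigest())

# Ballpark query counts usually quoted for finding a SHA-256 preimage:
#   classical brute force : ~2**256 hash evaluations
#   Grover's algorithm    : ~2**128 hash evaluations (a quadratic speedup only)
# Both are far beyond any foreseeable hardware, which is why SHA-256 is
# generally described as quantum-resistant.
print(f"classical ~2^256 = {2**256:.3e}, Grover ~2^128 = {2**128:.3e}")
```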

Most of the encryption in modern cryptocurrencies is built on elliptic curve cryptography rather than RSA, especially in the generation of signatures in bitcoin, which requires ECDSA. This is largely because elliptic curves are correspondingly harder to crack than RSA (sometimes exponentially so) with classical computers.

Thanks to Moore's law and better classical computing, secure RSA key sizes have grown so large as to be impractical compared to elliptic curve cryptography, so most people opt for elliptic curve cryptography for performance reasons, as is the case with bitcoin.
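
A minimal sketch of that choice in practice, assuming the third-party Python `cryptography` package (an illustration, not bitcoin's actual signing code): a 256-bit secp256k1 ECDSA key, generally rated comparable in classical security to roughly a 3072-bit RSA key.

```python
# Sketch only; assumes `pip install cryptography`, not anything bitcoin ships.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# secp256k1 is the curve bitcoin uses for ECDSA signatures.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

message = b"spend 0.1 BTC to ..."
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# verify() raises InvalidSignature on failure and returns None on success.
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```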

However, quantum computers seem to flip this logic on its head: given a large enough quantum computer with enough qubits, you could break elliptic curve cryptography more easily than you could break RSA.

Both elliptic curve cryptography and RSA are widely used across a range of other industries and use cases as well; RSA-2048 and higher are standards in the conventional banking system for sending encrypted information, for example.

Yet, even with a large enough quantum computer, you would still have to reveal or find somebody's public keys before they could be subject to attack. With cryptocurrency wallet reuse being frowned upon, and good privacy practices generally encouraged, the likelihood of this attack is already being reduced.

Another area of attack could be Grover's algorithm, which could speed up mining with a large enough quantum computer (the speedup is quadratic rather than exponential), though it's probable that ASICs, the specialized classical computers mostly used to mine bitcoin now, would remain faster than the earliest versions of more complete quantum computers.
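
The brute-force search Grover's algorithm would accelerate looks roughly like the toy proof-of-work loop below (an illustrative sketch of my own, not real mining code); Grover would need on the order of the square root of the number of hash evaluations a classical search needs.

```python
# Toy sketch: the nonce search that proof-of-work mining performs.
import hashlib

def mine(block_header: bytes, difficulty_bits: int = 18) -> int:
    """Return a nonce whose double-SHA-256 hash has `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = block_header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print("found nonce:", mine(b"toy block header"))
```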

This poses a stronger threat when it comes to the state of cryptocurrencies: the ability to mine quickly thanks to a sudden quantum speedup could lead to destabilization of prices and, more importantly, control of the chain itself; an unexpected quantum speedup could, if hidden, lead to vast centralization of mining and possible 51% attacks. Yet the most likely case is that larger quantum computing systems will be treated like any other kind of hardware, similar to the transition miners made between GPUs, FPGAs and ASICs: a slow economic transition to better tooling.

It's conceivable that these avenues of attack, and perhaps other more unpredictable ones, might emerge, yet post-quantum encryption planning is already in progress, and through the mechanism of forks, cryptocurrencies can be updated to use post-quantum encryption standards and defend against these weaknesses.

The history of bitcoin and other cryptocurrencies is filled with examples of hardware and software changes that had to be made to keep the network secure and performant, and good security practices in the present (such as avoiding wallet reuse) can help prepare for a more uncertain future.

So quantum computers being added to the mix won't suddenly render classical modes of encryption useless or mining trivial; quantum supremacy now doesn't mean that your encryption or the security of bitcoin is at risk at this very moment.

The real threat will come when quantum computers become many scales larger than they currently are, by which point planning for post-quantum encryption, already well under way, will have come to the fore, and bitcoin and other cryptocurrencies will be able to soft fork and draw on both decentralized governance and dynamism in the face of new existential threats to defeat the threat of quantum supremacy.

More:
Here's Why Quantum Computing Will Not Break Cryptocurrencies - Forbes

Global Quantum Computing Market Predicted to Garner $667.3 Million by 2027, Growing at 30.0% CAGR from 2020 to 2027 – [193 pages] Informative Report…

New York, USA, Dec. 22, 2020 (GLOBE NEWSWIRE) -- A new report published by Research Dive on the global quantum computing market sheds light on the current outlook and future growth of the market. As per the report, the global quantum computing market is expected to garner $667.3 million by 2027, growing at a CAGR of 30.0% from 2020 to 2027. This report is drafted by market experts by evaluating all the important aspects of the market. It is a perfect source of information and statistics for new entrants, market players, shareholders, stakeholders, investors, etc.
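
As a quick sanity check on those headline numbers (my own back-calculation, not a figure taken from the report), a 30.0% CAGR ending at $667.3 million in 2027 implies a 2020 base of roughly $106 million:

```python
# Back-of-the-envelope CAGR arithmetic; the implied 2020 figure is derived,
# not quoted from the report.
end_value_2027 = 667.3          # $ million, from the report
cagr = 0.30
years = 2027 - 2020             # 7 compounding periods

implied_2020 = end_value_2027 / (1 + cagr) ** years
print(f"implied 2020 market size: ~${implied_2020:.0f}M")   # ~$106M
```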

Check out How COVID-19 impacts the Global Quantum Computing Market. Click here to Connect with our Analyst to get more Market Insight: https://www.researchdive.com/connect-to-analyst/8332

Exclusive Offer - As we are running an anniversary discount, here are some additional benefits you are entitled to avail of with this report.

Free Excel Data Pack
The report will cover the impact of COVID-19 on this market
20% free customisation
16 analyst hours of support
Quarterly update on Enterprise License
24-hour priority response

Download Sample Report of the Global Quantum Computing Market and Reveal the Market Overview, Opportunity, Expansion, Growth and More: https://www.researchdive.com/download-sample/8332

The report includes:

A summary of the market with its definition, advantages, and application areas.
Detailed insights on market position, dynamics, statistics, growth rate, revenues, market shares, and future predictions.
Key market segments, growth drivers, restraints, and investment opportunities.
The present situation of the global as well as regional market from the viewpoint of companies, countries, and end industries.
Information on leading companies, current market trends and developments, Porter's Five Forces analysis, and top winning business strategies.

Factors Impacting the Market Growth:

As per the report, growing cyber-attacks across the world are hugely contributing to the growth of the global quantum computing market. Moreover, the rising implementation of quantum computing technologies in agriculture, helping farmers improve the efficiency and yield of crops, is likely to unlock rewarding opportunities for market growth. However, the absence of highly experienced employees with knowledge of quantum computing is likely to hinder the market's growth.

Access Varied Market Reports Bearing Extensive Analysis of the Market Situation, Updated With The Impact of COVID-19: https://www.researchdive.com/covid-19-insights

COVID-19 Impact Analysis:

The sudden outbreak of the COVID-19 pandemic has made a significant impact on the global quantum computing market. During this crisis period, quantum computing technology can be used for medical research and other activities related to the COVID-19 pandemic. Moreover, the technology can be beneficial for developing advanced drugs at an accelerated speed and for analyzing different types of interactions between biomolecules and infectious agents like viruses. In addition, businesses are investing heavily in the development of quantum computers for drug discovery amidst the crisis period. All these factors are expected to unlock novel investment opportunities for the market in the upcoming years.

Check out all Information and communication technology & media Industry Reports: https://www.researchdive.com/information-and-communication-technology-and-media

Segment Analysis:

The report segments the quantum computing market by offering type, end user, and application.

By offering type, the report further categorizes the market into consulting solutions and systems.

Among these, the systems segment is expected to dominate the market, garnering a revenue of $313.3 million by 2027. This is mainly due to the growing use of quantum computing in AI, radar development, machine learning technologies, and many other areas.

Based on application, the report further classifies the market into optimization, machine learning, and material simulation.

Among these, the machine learning segment is expected to see accelerated growth and garner $236.9 million by 2027. This is mainly due to the significant role of quantum computing in enhancing runtime, capacity, and learning efficiency. Moreover, quantum machine learning has the potential to speed up various machine learning processes such as optimization, linear algebra, deep learning, and kernel evaluation, which is likely to boost market growth during the forecast period.

Regional Analysis:

The report explains the outlook of the global quantum computing market across several regions, including Europe, Asia-Pacific, LAMEA, and North America.

Among these, the Asia-Pacific region is estimated to lead the market growth by growing at a striking growth rate of 31.60% during the forecast period. This is mainly because of the growing adoption of quantum computing technologies in numerous sectors including chemicals, healthcare, utilities & pharmaceuticals, and others in this region.

Market Players and Business Strategies:

The report offers a list of global key players in the quantum computing market and discloses some of their strategies and developments. The key players listed in the report are:

QC Ware, Corp.; Cambridge Quantum Computing Limited; D-Wave Systems Inc.; International Business Machines Corporation; Rigetti Computing; 1QB Information Technologies; River Lane Research; StationQ (Microsoft); Anyon; and Google Inc.

These players are massively contributing to the growth of the market by performing activities such as mergers and acquisitions, novel developments, geographical expansions, and many more.

Our market experts have made use of several tools, methodologies, and research methods to gain in-depth insights into the global quantum computing sector. Moreover, we strive to deliver a customized report to fulfill the special requirements of our clients, on demand. Click here to get the Absolute Top Companies Development Strategies Summary Report.

TRENDING REPORTS WITH COVID-19 IMPACT ANALYSIS

Mobile Device Management Market: https://www.researchdive.com/412/mobile-device-management-market
Data Center Power Market: https://www.researchdive.com/415/data-center-power-market
Augmented Reality (AR) in Healthcare Market: https://www.researchdive.com/covid-19-insights/218/global-augmented-reality-ar-in-healthcare-market
AI in Construction Market: https://www.researchdive.com/covid-19-insights/222/ai-in-construction-market

Original post:
Global Quantum Computing Market Predicted to Garner $667.3 Million by 2027, Growing at 30.0% CAGR from 2020 to 2027 - [193 pages] Informative Report...

Beam me up: long-distance quantum teleportation has happened for the first time ever – SYFY WIRE

Raise your hand if you ever wanted to get beamed onto the transport deck of the USS Enterprise. Maybe we haven't reached the point of teleporting entire human beings yet (sorry Scotty), but what we have achieved is a huge breakthrough towards quantum internet.

Led by Caltech, a collaborative team from Fermilab, NASA's Jet Propulsion Lab, Harvard University, the University of Calgary and AT&T has now successfully teleported qubits (basic units of quantum information) across almost 14 miles of fiber optic cable with 90 percent precision. This is possible because of quantum entanglement, the phenomenon in which quantum particles that are mysteriously entangled remain correlated even when far away from each other.

When quantum internet is finally a thing, it will make Wi-Fi look obsolete and dial-up even more ancient than it already is. "We achieved sustained, high-fidelity quantum teleportation utilizing time-bin (time-of-arrival) qubits of light, at the telecommunication wavelength of 1.5 microns, over fiber optic cables," Panagiotis Spentzouris, head of Quantum Science at the Fermilab Quantum Institute, told SYFY WIRE. "This type of qubit is compatible with several devices that are required for the deployment of quantum networks."
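
For readers who want to see the protocol itself, here is a textbook-style sketch (a NumPy state-vector simulation of my own, not the time-bin photonic setup used at CQNET/FQNET): standard quantum teleportation transfers an unknown qubit state to Bob using one shared entangled pair and two classical bits.

```python
# Textbook teleportation sketch in NumPy (qubit 0 = most significant).
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
H = np.array([[1, 1], [1, -1]], dtype=float) / np.sqrt(2)

def kron_all(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """Permutation matrix for a CNOT acting on n qubits."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1.0
    return U

# Qubit 0: the unknown state Alice wants to send.
# Qubits 1 and 2: a Bell pair shared by Alice (1) and Bob (2).
alpha, beta = 0.6, 0.8
psi = np.array([alpha, beta])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice: CNOT (qubit 0 controls qubit 1), then Hadamard on qubit 0.
state = kron_all(H, I2, I2) @ (cnot(0, 1) @ state)

# Alice measures qubits 0 and 1 (simulated by sampling a basis state).
probs = np.abs(state) ** 2
outcome = np.random.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Bob's qubit is the amplitude pair consistent with (m0, m1), renormalised.
base = 4 * m0 + 2 * m1
bob = state[[base, base + 1]]
bob = bob / np.linalg.norm(bob)

# Bob applies the corrections dictated by the two classical bits.
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print("original state:  ", psi)
print("teleported state:", np.round(bob, 6))
```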

What you might recognize are the fiber optic cables used in the experiment, since they are everywhere in telecommunication tech today. Lasers, electronics and optical equipment were also used for the experiments at Caltech (CQNET) and Fermilab (FQNET), which could someday evolve into the next iteration of the internet. Though this is equipment you probably also recognize, what it did for these experiments was enable them to go off without a glitch. Information traveled across the cables at warp speed with the help of semi-autonomous systems that monitored it while managing control and synchronization of the entangled particles. The system could run for up to a week without human intervention.

So if entangled qubits are inextricably linked despite the distance between them, is there even a limit to how far information can travel? Hypothetically, they could go on forever. The limits that exist in reality come not from the qubits but from the effects of their surroundings. While one of the qubits containing information stays where it is, the other one has to zoom over to wherever it needs to transfer that information. It could run into obstacles on the way.

"What limits the distance that information can be transmitted is loss and noise: either from the properties of the medium we use to send the information, or the effects of the environment on the medium, or imperfections in the various operations we need to perform to realize the information transfer," said Spentzouris, who coauthored a study recently published in PRX Quantum.

To keep quantum internet running at high precision and over distances like those covered in this experiment and beyond, the quantum teleportation that powers it needs quantum memory and quantum repeaters. Quantum memory is basically the quantum version of the memory your computer and smartphone use now. Instead of storing information as something like 100101011, it stores it in the form of qubits. To let entangled qubits travel as far as possible, quantum repeaters split the total distance into shorter sections over which the qubits are teleported.
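
Some rough arithmetic shows why repeaters help (a typical-value sketch, not figures from the experiment): assuming a standard telecom-fiber attenuation of about 0.2 dB/km at 1.5 microns, a photon's survival probability over the full link is much lower than over each half of it.

```python
# Back-of-the-envelope fiber-loss arithmetic; 0.2 dB/km is a typical value
# for 1550 nm telecom fiber, and 22 km approximates the ~14-mile link.
ATTENUATION_DB_PER_KM = 0.2

def survival_probability(length_km: float) -> float:
    return 10 ** (-ATTENUATION_DB_PER_KM * length_km / 10)

full_link = 22.0
print(f"direct {full_link:.0f} km link : {survival_probability(full_link):.1%}")
print(f"per 11 km segment    : {survival_probability(full_link / 2):.1%}")
# Repeaters cannot simply amplify a qubit (no-cloning forbids it); instead,
# entanglement is established per short segment and then stitched together,
# so each attempt only has to survive one segment's worth of loss.
```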

With this system, Spentzouris and his team are planning to lay out the epic Illinois Express Quantum Network (IEQNET), which will use the same technologies that the CQNET and FQNET experiments so successfully pulled off. More tech will obviously be needed to realize this sci-fi brainchild. It will combine quantum and non-quantum functions for its quantum nodes and controls. The only thing missing will be the repeaters, since they will need more development to operate over such an expanse. Spentzouris believes quantum computing itself reaches far beyond the internet.

"Fully distributed quantum computing includes applications ranging from GPS and secure computation beyond anything that can be achieved now, all the way to enabling advances in designing new materials and medicine, as well as basic science discoveries," he said. "It will unleash the full power of quantum computing and have a profound impact on our lives."

Continued here:
Beam me up: long-distance quantum teleportation has happened for the first time ever - SYFY WIRE

University collaboration gives Scotland the edge in global quantum computing race – HeraldScotland

SCOTLAND has the expertise to potentially equal tech giants like IBM, Google and Intel in the race to develop next-generation computing technologies, scientists believe.

The universities of Edinburgh, Glasgow and Strathclyde have collaborated to form a new national centre that brings together internationally recognised experts in hardware, software and application development for quantum computing, a sector predicted to be worth $65 billion by 2030.

The new Scottish Centre for Innovation in Quantum Computing and Simulation has received funding from the Scottish Government to explore inward investment opportunities.

Quantum computers process information using the properties of tiny microscopic particles or nanoelectronic circuits, making them exponentially more powerful than traditional computers for certain tasks. Tech giants including IBM, Google, Microsoft, Intel and Amazon are investing millions of dollars in developing the world's first workable quantum computers.

Last October, Google announced that its quantum computer took three minutes and 20 seconds to solve a problem that would have taken the world's fastest supercomputer around 10,000 years to complete.

"There are problems that even the world's biggest supercomputers are unable to solve," said Andrew Daley, a professor of quantum computing at the University of Strathclyde. "For example, how to optimise traffic flow by controlling motorways in various places; how to maximise fuel efficiency when big aircraft take off; or how to invest in stocks for the maximum reward and minimum risk. Because we can do computing in a very different way on a quantum computer, these are the kinds of things we believe we may be able to do that we can't do on a traditional computer."

Scottish universities are major beneficiaries of the UK government's £1 billion UK National Quantum Technologies Programme, a 10-year drive to put the UK at the forefront of quantum technology research and commercialisation.

Edinburgh University already hosts the UK's £79m national supercomputer and is one of the partners in a £10m project to develop the UK's first commercial quantum computer.

Strathclyde University's quantum computing research includes a £10m industry-led project addressing technology barriers to scaling quantum hardware. And Glasgow University's projects include being part of a £7m UK consortium aimed at commercialising quantum technologies.

Ivan McKee, Scottish trade, investment and innovation minister, said: "This joint project between the universities of Edinburgh, Glasgow and Strathclyde seeks to position Scotland as the go-to location for quantum computing and has the potential to attract significant international research funding and create jobs.

"It also provides a model of collaboration which could be applied in other sectors to attract inward investment and boost Scotland's economy."

The Scottish Government funding will finance a feasibility study into inward investment opportunities in quantum computing. These might include partnerships with major technology companies, institutions or countries who already have their own quantum computing programmes.

"Microsoft, for example, has quantum computing partnerships with universities and other places in the world," Professor Daley said. "There are large centres of quantum computing in Singapore and in the Netherlands at Delft University. The German and US governments have also created clusters in quantum computing and other quantum technologies."

Professor Elham Kashefi, who leads the quantum team at Edinburgh University's School of Informatics, believes the new centre could help unlock the potential of quantum tech in an unprecedented way.

She added: "Perhaps such a dream could be only achieved at large corporates like IBM, Microsoft, Amazon or Google. Yet I believe the flexibility that the centre could afford as a research institute, compared to a fully business-driven programme, could be the very fundamental bridge that our field desperately needs."

Martin Weides, professor of quantum technologies at Glasgow University's James Watt School of Engineering, said: "There's now an international race to realise practical technologies and applications for quantum computing. I believe the Scottish Centre for Innovation in Quantum Computing and Simulation will bring together the strong academic excellence at the three founding universities to give Scotland the edge to develop a vibrant quantum ecosystem."

Follow this link:
University collaboration gives Scotland the edge in global quantum computing race - HeraldScotland

Bitcoin is quantum computing resistant regardless of rising fears among investors – FXStreet

All cryptocurrencies are based on cryptography and require miners to solve extremely complex mathematical problems in order to secure the network. The fear around quantum computing is that it will be able to crack Bitcoin's cryptography much faster than the network can respond.

The basic principle is that Bitcoin's network has to be sufficiently fast that a quantum attacker does not have enough time to derive the private key belonging to a specific public key before the network confirms the transaction.

So far, it seems that quantum computers would take around 8 hours to derive a Bitcoin private key, which, in theory, means the network is secure against them. The critical mark right now is around 10 minutes, the average block time. If quantum computers can get close to this, the Bitcoin network could be compromised.

It's also important to note that quantum computing poses a threat not only to Bitcoin and cryptocurrencies but to other platforms, even banks. Many platforms use encryption that would be broken if large-scale quantum computing becomes a reality, which means the implications of this technology go way beyond just cryptocurrencies.

Theoretically, cryptocurrencies have several ways to mitigate or completely stop quantum computing attacks in the future. For instance, a soft fork on the network of an asset could be enough to at least move some of the assets that are insecure.

Additionally, there are many algorithms that are theorized to be quantum-resistant. In fact, SHA-256, which is currently used, should be resistant to these types of attacks. According to recent estimates, around 25% of the Bitcoin in circulation remains vulnerable to quantum attacks; transferring such coins to a new P2PKH address would help keep them safe.
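
For context on why a fresh P2PKH address helps, here is a simplified sketch (illustrative only; the key bytes below are made up, and hashlib's RIPEMD-160 support depends on the local OpenSSL build): a P2PKH output commits only to a hash of the public key, so the key itself is not exposed on-chain until the coins are spent.

```python
# Simplified sketch of the P2PKH hash commitment; not wallet code.
import hashlib

def hash160(public_key: bytes) -> bytes:
    """HASH160 = RIPEMD-160(SHA-256(pubkey)), the value a P2PKH output locks to."""
    sha = hashlib.sha256(public_key).digest()
    return hashlib.new("ripemd160", sha).digest()  # needs OpenSSL ripemd160 support

# Hypothetical 33-byte compressed public key, purely for illustration.
pubkey = bytes.fromhex("02" + "11" * 32)
print("HASH160(pubkey):", hash160(pubkey).hex())
# A P2PKH address is (roughly) the Base58Check encoding of 0x00 + HASH160,
# so a quantum attacker sees only the hash until the owner spends the coins.
```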

Originally posted here:
Bitcoin is quantum computing resistant regardless of rising fears among investors - FXStreet

Scaling the heights of quantum computing to deliver real results – Chinadaily.com.cn – China Daily

Jiuzhang, a quantum computer prototype developed at the University of Science and Technology of China, represents such a giant leap forward in computing that just 200 seconds of its time dedicated to a specific task would equal 600 million years of computing time on today's most powerful supercomputer.

On Dec 4, Science magazine announced a major breakthrough made by a team from USTC headed by renowned physicist Pan Jianwei. The team had jointly developed Jiuzhang, a 76-photon prototype, realizing an initial milestone on the path to full-scale quantum computing.

This quantum computational advantage, also known as "quantum supremacy", established China's leading position in the sphere of quantum computing research in the world.

USTC has produced a string of wonders: sending Wukong, China's first dark matter particle explorer, and Mozi, the world's first quantum communication satellite, into space; and witnessing the National Synchrotron Radiation Laboratory send out light from the Hefei Light Source.

During the past 50 years, USTC has made significant achievements in the fields of quantum physics, high-temperature superconductivity, thermonuclear fusion, artificial intelligence and nanomaterials.

Technology is the foundation of a country's prosperity, while innovation is the soul of national progress.

Since 1970, when USTC was relocated to Hefei, Anhui province, it has focused on research and innovation, targeting basic and strategic work in a bid to fulfill its oath to scale "the peak of sciences".

This large number of world-renowned innovative achievements has brought glory to USTC, exhibiting its courage to innovate, its daring to surpass its peers and its unremitting pursuit of becoming a top university in the world.

Although USTC was set up only 62 years ago, it established the country's first national laboratory and also the first national research center. It has obtained the largest number of achievements selected among China's Top 10 News for Scientific and Technological Progress each year since its founding.

Its reputation as an "important stronghold of innovation" has become stronger over the years.

While facing the frontiers of world science and technology, the main economic battlefield, the major needs of China and people's healthcare, USTC focuses on cultivating high-level scientific and technological innovation talents and teams, and shoulders national tasks.

It has used innovation to generate transformative technologies and develop strategic emerging industries, perfecting its ability to serve national strategic demand, and regional economic and social development.

Facing sci-tech frontiers

USTC has top disciplines covering mathematics, physics, chemistry, Earth and space sciences, biology and materials science. While based on basic research, USTC pays close attention to cutting-edge exploration, encouraging innovative achievements.

Serving major needs

In response to major national needs, USTC has led and participated in a number of significant scientific and technological projects that showcase the nation's strategic aims.

Examples include sending the Mozi satellite and Wukong probe into space. Meanwhile, it also participated in the development of core components of Tiangong-2, China's first space lab, and Tianwen-1, the nation's first Mars exploration mission.

Main economic battlefield

In the face of economic and social development needs, USTC has balanced meeting national needs and boosting exploration in frontier spheres.

It has witnessed a series of innovative achievements in the fields of materials science, energy, environment, advanced manufacturing, AI, big data and security.

Safeguarding health

USTC's School of Life Sciences was founded in 1958 with an emphasis on biophysics. In recent years, this has flourished into many branches of the biological sciences.

The new School of Life Sciences was established in Hefei in 1998. Based on its years of cultivation in the field of life sciences, the university has contributed much to China's medical science.

In 2020, the university developed the "USTC protocol" to treat COVID-19 patients, which has been introduced to more than 20 countries and regions.

Go here to see the original:
Scaling the heights of quantum computing to deliver real results - Chinadaily.com.cn - China Daily

This Incredible Particle Only Arises in Two Dimensions – Popular Mechanics

Physicists have confirmed the existence of an extraordinary, flat particle that could be the key that unlocks quantum computing.

What is the rare and improbable anyon, and how on Earth did scientists verify them?

"[T]hese particle-like objects only arise in realms confined to two dimensions, and then only under certain circumstances, like at temperatures near absolute zero and in the presence of a strong magnetic field," Discover explains.

Scientists have theorized about these flat, peculiar particle-like objects since the 1980s, and the very nature of them has made it sometimes seem impossible to ever verify them. But the qualities scientists believe anyons have also made them sound very valuable to quantum research and, now, quantum computers.

The objects have many possible positions and "remember," in a way, what has happened. In a press release earlier this fall, Purdue University explained more about the value of anyons, including their fractional electric charge.

It's these fractional charges that let scientists finally design the exact right experiments to shake loose the real anyons. A coin sorter is a good analogy for a lot of things, and this time is no different: scientists had to find the right series of sorting ideas in order to build one experimental setup that would, ultimately, only register the anyons. And having the unique quality of fractional charges gave them, at least, a beginning to work on those experiments.

Following an April paper about using a miniature particle accelerator to detect anyons, researchers from Purdue published their findings in July after using a microchip etched to route particles through a maze that filtered out all other particles. The maze combined an interferometer, a device that uses waves to measure what interferes with them, with a specially designed chip that activates anyons in a particular state.

What results is a measurable phenomenon called anyonic braiding. This is surprising and good, because it confirms the particle-like anyons exhibit this particular particle behavior, and because braiding as a behavior has potential for quantum computing. Electrons also braid, but researchers weren't certain the much weaker charge of anyons would exhibit the same behavior.

Braiding isn't just for electrons and anyons, either: photons do it, too. "Braiding is a topological phenomenon that has been traditionally associated with electronic devices," photon researcher Mikael Rechtsman said in October.

He continued:

Now, the quantum information toolkit includes electrons, photons, and what Discover calls these "strange in-betweeners": the anyons.

View original post here:
This Incredible Particle Only Arises in Two Dimensions - Popular Mechanics

Two Years into the Government’s National Quantum Initiative – Nextgov

Monday marked two years since the passage of the National Quantum Initiative, or NQI Act, and in that time federal agencies have followed through on its early calls and helped lay the groundwork for new breakthroughs across the U.S. quantum realm.

Now, the sights of those helping implement the law are set on the future.

"I would say in five years, something we'd love to see is ... a better idea of, what are the applications for a quantum computer that's buildable in the next five to 10 years that would be beneficial to society?" the Office of Science and Technology Policy's assistant director for quantum information science, Dr. Charles Tahan, told Nextgov in an interview Friday. He also serves as the director of the National Quantum Coordination Office, a cooperation-pushing hub established by the legislation.

Tahan reflected on some foundational moves made over the last 24 months and offered a glimpse into his teams big-ticket priorities for 2021.

Quantum devices and technologies form an ever-evolving field that homes in on phenomena at the atomic scale. Potential applications are coming to light and are expected to radically reshape science, engineering, computing, networking, sensing, communication and more. They offer promises like an unhackable internet or navigation support in places disconnected from GPS.

Federal agencies have a long history of exploring the physical sciences and quantum-related pursuits, but previous efforts were often siloed. Signed by President Donald Trump in 2018, the NQI Act sought to provide for a coordinated federal program to accelerate quantum research and development for the economic and national security of America. It assigned specific jobs to the National Institute of Standards and Technology, Energy Department and National Science Foundation, among others, and mandated new collaborations to boost the nation's quantum workforce talent pipeline and strengthen society's grasp of this relatively fresh area of investment. The functions of the National Quantum Coordination Office, or NQCO, were also set forth in the bill, and it was officially instituted in early 2019. Since then, the group has helped connect an array of relevant stakeholders and facilitate new initiatives proposed by the law.

"Now, everything that's been called out in the act has been established, it's started up," Tahan explained. He noted the three agencies with the weightiest responsibilities spent 2019 planning out their courses of action within their communities, and this year subsequently launched major new efforts.

One of the latest was unveiled in August by the Energy Department, which awarded $625 million over five years, subject to appropriations, to its Argonne, Brookhaven, Fermi, Oak Ridge and Lawrence Berkeley national laboratories to establish QIS Research Centers. In each, top thinkers will link up to push forward collaborative research spanning many disciplines. Academic and private-sector institutions also pledged to provide $340 million in contributions for the work.

"These are about $25 million each, that's a tremendous amount of students, and postdocs, and researchers," Tahan said. "And those are spread out across the country, focusing on all different areas of quantum: computing, sensing and networking."

NSF this summer also revealed the formation of new Quantum Leap Challenge Institutes to tackle fundamental research hurdles in quantum information science and engineering over the next half-decade. The University of Colorado, University of Illinois-Urbana-Champaign, and University of California, Berkeley are set to head and house the first three institutes, though Tahan confirmed more could be launched next year. The initiative is backed by $75 million in federal funding, and while it will take advantage of existing infrastructure, the non-governmental entities involved are also making their own investments and constructing new facilities.

"That's the foundation, you know," Tahan said. "The teams have been formed, the research plans have been written, that's a tremendous amount of work, and now they're off actually working. So now, we start to reap the rewards because all the heavy lifting of getting people organized has been done."

Together with NSF, OSTP also helped set in motion the National Q-12 Education Partnership. It intends to connect public, private and academic sector quantum players and cohesively create and release learning materials to help U.S. educators produce new courses to engage students with quantum fields. The work is ultimately meant to spur K-12 students' interest in the emerging areas earlier into their education, and NSF will award nearly $1 million across QIS education efforts through the work.

And beyond the government's walls and those of academia, the NQI Act also presented new opportunities for industry. Meeting the law's requirements, NIST helped convene a consortium of cross-sector stakeholders to strategically confront existing quantum-related technology, standards and workforce gaps and needs. This year, that group, the Quantum Economic Development Consortium, or QED-C, bloomed in size, established a more formal membership structure and announced the companies that make up its steering committee.

"It took a year or more to get all these companies together and then write partnership agreements. So, that partnership agreement was completed towards the beginning of summer, and the steering committee signed it over the summer, and now there are I think 100 companies or so who have signed it," Tahan said. "So, it's up and running. It's a real economic development consortium, that's a technical thing, and that's a big deal. And how big it is, and how fast it's growing is really, really remarkable."

This fall also brought the launch of quantum.gov, a one-stop website streamlining federal work and policies. The quantum coordination office simultaneously released a comprehensive roadmap pinpointing crucial areas of needed research, deemed the Quantum Frontiers Report.

That assessment incorporates data collected from the many workshops and prior efforts OSTP held to promote the national initiative, and establishes eight frontiers containing core problems and fundamental questions confronting QIS today that must be addressed to push forward research and development breakthroughs in the space. They include expanding opportunities for quantum technologies to benefit society, characterizing and mitigating quantum errors, and more.

"It tries to cut through the hype a little bit," Tahan explained. "It's a field that requires deep technical expertise. So, it's easy to be led in the wrong direction if you don't have all the data. So we try to narrow it down into: here are the important problems, here's what we really don't know, here's what we do know, and go this way, and that will hopefully benefit the whole enterprise."

Quantum-focused strides have also been made by the U.S. on the international front. Tahan pointed to the first quantum cooperation agreement signed between America and Japan late last year, which laid out basic core values guiding their working together.

"We've been using that as a model to engage with other countries. We've had high-level meetings with Australia, industry collaborations with the U.K., and we're engaging with other countries. So, that's progressing," Tahan said. "Many countries are interested in quantum as you can guess, there's a lot of investment around the world, and many want to work with us on going faster together."

China had also made its own notable quantum investments (some predating the NQI Act), and touted new claims of quantum supremacy, following Google, on the global stage this year.

"I wouldn't frame it as a competition ... We are still very much in the research phase here, and we'll see how those things pan out," Tahan said. "I think we're taking the right steps, collectively. The U.S. ecosystem of companies, nonprofits and governments are, based on our strategy, both technical and policy, going in the right direction and making the right investments."

Vice President-elect Kamala Harris previously put forth legislation to broadly advance quantum research, but at this point, the Biden administration hasn't publicly shared any intentions to prioritize ongoing or future government-steered quantum efforts.

"[One of] the big things we're looking towards in the next year is workforce development. We have a critical shortage of, or need for, talent in this space. It's a very diverse set of skills. With these new centers, just do the math. How many students and postdocs are you going to need to fill those up, to do all that research? It's a very large number," Tahan said. "And so we're working on something to create that pipeline."

In that light, the team will work to continue to develop NSF's ongoing Q-12 partnership. They'll also reflect on what's been built so far through the national initiative to identify any crucial needs that may have been overlooked.

"As you stand something up that's really big, you're always going to make some mistakes. What have you missed?" Tahan noted.

And going forward, the group plans to home in more deeply on balancing the economic and security implications of the burgeoning field.

"As the technology gets more and more advanced, how do we be first to realize everything but also protect our investments?" Tahan said. "And getting that balance right is going to require careful policy thinking about how to update the way the United States does things."

Go here to see the original:
Two Years into the Government's National Quantum Initiative - Nextgov