A Nepalese Machine Learning (ML) Researcher Introduces Papers-With-Video Browser Extension Which Allows Users To Access Videos Related To Research…

Amit Chaudhary, a machine learning (ML) researcher from Nepal, has recently introduced a browser extension that allows users to directly access videos related to research papers published on the platform arXiv.

ArXiv has become an essential resource for new machine learning (ML) papers. It was launched in 1991 as a storage site for physics preprints and has been hosted by Cornell University since 2001. ArXiv has received close to 2 million submissions across various scientific research fields.

Amit obtained publicly released videos from 2020 ML conferences. He then indexed the videos and reverse-mapped them to the relevant arXiv links through pyarxiv, a dedicated wrapper for the arXiv API. The Google Chrome extension creates a video icon next to the paper title on the arXiv abstract page, enabling users to identify and access available videos related to the paper directly.
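To make the indexing step concrete, the sketch below shows one plausible way to reverse-map a conference-video title to an arXiv identifier using the public arXiv API. It is only an illustration of the general approach; the extension itself relies on pyarxiv, and the function and file names here are hypothetical.

```python
# Hypothetical sketch of the reverse-mapping idea: given a conference-video
# title, search the public arXiv API for a paper whose title matches and
# return its arXiv ID. This is not Amit's actual code (which uses the pyarxiv
# wrapper); the function and any file names are illustrative only.
import re
import urllib.parse

import feedparser
import requests

ARXIV_API = "http://export.arxiv.org/api/query"

def normalize(title: str) -> str:
    """Lowercase and strip punctuation so near-identical titles compare equal."""
    return re.sub(r"\W+", " ", title).lower().strip()

def find_arxiv_id(video_title: str) -> str | None:
    """Return the arXiv ID of the paper whose title matches the video title, if any."""
    query = urllib.parse.quote(f'ti:"{video_title}"')
    resp = requests.get(f"{ARXIV_API}?search_query={query}&max_results=3", timeout=10)
    feed = feedparser.parse(resp.text)
    for entry in feed.entries:
        if normalize(entry.title) == normalize(video_title):
            return entry.id.rsplit("/", 1)[-1]  # e.g. "2006.12345v1"
    return None

# The extension would ship the resulting {arxiv_id: video_url} index and show
# a video icon whenever the abstract page's ID appears in that index.
```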

Many research teams are creating videos to accompany their papers. These videos can act as a guide, providing demos and other valuable information about the research. In many cases, the videos are created as an alternative to traditional in-person presentations at AI conferences, which is especially useful now that almost all panels have moved to virtual formats due to the Covid-19 pandemic.

The Papers-With-Video extension enables direct video links for around 3.7k arXiv ML papers. Amit aims to work out how to reliably pair papers with related videos that carry different titles, which he hopes will expand coverage to around 8k videos. He has solicited community feedback and has already tweaked the extension's functionality based on user remarks and suggestions.

The browser extension is not available on the Google Chrome Web Store yet. However, one can find the extension, installation guide, and further information on GitHub.

GitHub: https://github.com/amitness/papers-with-video

Paper List: https://gist.github.com/amitness/9e5ad24ab963785daca41e2c4cfa9a82


Read the original:
A Nepalese Machine Learning (ML) Researcher Introduces Papers-With-Video Browser Extension Which Allows Users To Access Videos Related To Research...

Comprehensive Report on Cloud Machine Learning Market 2021 | Trends, Growth Demand, Opportunities & Forecast To 2027 |Amazon, Oracle Corporation,…

Cloud Machine Learning Market research report is the new statistical data source added by A2Z Market Research.

The Cloud Machine Learning Market is growing at a high CAGR during the forecast period 2021-2027. The increasing interest of individuals in this industry is the major reason for the expansion of this market.

The Cloud Machine Learning Market research is an intelligence report compiled with meticulous efforts to gather accurate and valuable information. The data has been examined with both the existing top players and the upcoming competitors in mind. Business strategies of the key players and of the industries entering the market are studied in detail. A well-explained SWOT analysis, revenue share, and contact information are shared in this report analysis.

Get the PDF Sample Copy (Including FULL TOC, Graphs and Tables) of this report @:

https://www.a2zmarketresearch.com/sample?reportId=14611

Note: In order to provide a more accurate market forecast, all our reports will be updated before delivery by considering the impact of COVID-19.

Top Key Players Profiled in this report are:

Amazon, Oracle Corporation, IBM, Microsoft Corporation, Google Inc., Salesforce.com.

The key questions answered in this report:

Various factors are responsible for the market's growth trajectory, which are studied at length in the report. In addition, the report lists the restraints that are posing a threat to the global Cloud Machine Learning market. It also gauges the bargaining power of suppliers and buyers, the threat from new entrants and product substitutes, and the degree of competition prevailing in the market. The influence of the latest government guidelines is also analyzed in detail. Finally, the report studies the Cloud Machine Learning market's trajectory over the forecast period.

Regions Covered in the Global Cloud Machine Learning Market Report 2021: The Middle East and Africa (GCC countries and Egypt), North America (the United States, Mexico, and Canada), South America (Brazil, etc.), Europe (Turkey, Germany, Russia, the UK, Italy, France, etc.), and Asia-Pacific (Vietnam, China, Malaysia, Japan, the Philippines, Korea, Thailand, India, Indonesia, and Australia).

Get up to 30% Discount on this Premium Report @:

https://www.a2zmarketresearch.com/discount?reportId=14611

The cost analysis of the Global Cloud Machine Learning Market has been performed while keeping in view manufacturing expenses, labor cost, and raw materials and their market concentration rate, suppliers, and price trend. Other factors such as Supply chain, downstream buyers, and sourcing strategy have been assessed to provide a complete and in-depth view of the market. Buyers of the report will also be exposed to a study on market positioning with factors such as target client, brand strategy, and price strategy taken into consideration.

The report provides insights on the following pointers:

Market Penetration: Comprehensive information on the product portfolios of the top players in the Cloud Machine Learning market.

Product Development/Innovation: Detailed insights on the upcoming technologies, R&D activities, and product launches in the market.

Competitive Assessment: In-depth assessment of the market strategies, geographic and business segments of the leading players in the market.

Market Development: Comprehensive information about emerging markets. This report analyzes the market for various segments across geographies.

Market Diversification: Exhaustive information about new products, untapped geographies, recent developments, and investments in the Cloud Machine Learning market.

Table of Contents

Global Cloud Machine Learning Market Research Report 2021-2027

Chapter 1 Cloud Machine Learning Market Overview

Chapter 2 Global Economic Impact on Industry

Chapter 3 Global Market Competition by Manufacturers

Chapter 4 Global Production, Revenue (Value) by Region

Chapter 5 Global Supply (Production), Consumption, Export, Import by Regions

Chapter 6 Global Production, Revenue (Value), Price Trend by Type

Chapter 7 Global Market Analysis by Application

Chapter 8 Manufacturing Cost Analysis

Chapter 9 Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter 10 Marketing Strategy Analysis, Distributors/Traders

Chapter 11 Market Effect Factors Analysis

Chapter 12 Global Cloud Machine Learning Market Forecast

Buy Exclusive Report @:

https://www.a2zmarketresearch.com/buy?reportId=14611

If you have any special requirements, please let us know and we will offer you the report as you want.

About A2Z Market Research:

The A2Z Market Research library provides syndication reports from market researchers around the world. Ready-to-buy syndication Market research studies will help you find the most relevant business intelligence.

Our research analysts provide business insights and market research reports for large and small businesses.

The company helps clients build business policies and grow in that market area. A2Z Market Research offers not only industry reports dealing with telecommunications, healthcare, pharmaceuticals, financial services, energy, technology, real estate, logistics, F&B, media, etc., but also company data, country profiles, trends, and information and analysis on the sector of your interest.

Contact Us:

Roger Smith

1887 WHITNEY MESA DR HENDERSON, NV 89014

[emailprotected]

+1 775 237 4147

https://neighborwebsj.com/

View post:
Comprehensive Report on Cloud Machine Learning Market 2021 | Trends, Growth Demand, Opportunities & Forecast To 2027 |Amazon, Oracle Corporation,...

Machine Learning in Finance Market Benefits, Forthcoming Developments, Business Opportunities & Future Investments to 2028 KSU | The Sentinel…

COVID-19 can affect the global economy in three main ways: by directly affecting production and demand, by creating supply chain and market disruption, and by its financial impact on firms and financial markets. The Global Machine Learning in Finance Market report covers and analyses the potential of the worldwide industry, providing statistics and information on market dynamics, market analysis, growth factors, key challenges, major drivers and restraints, opportunities and forecast. It presents a comprehensive overview of the 2021 market, along with market shares and growth opportunities by product type, application, key manufacturers, and key regions and countries.

Market Research Inc. proclaims a new addition of comprehensive data to its extensive repository, titled the Machine Learning in Finance Market report. This informative data has been scrutinized using effective methodologies such as primary and secondary research techniques. The report estimates the size of the global Machine Learning in Finance market over the upcoming years. Recent trends, tools, and methodologies have been examined to gain better insight into the businesses.

Request a sample copy of this report @:

https://www.marketresearchinc.com/request-sample.php?id=31104

Top key players: Ignite Ltd, Yodlee, Trill A

Additionally, it throws light on different dynamic aspects of the businesses, which help to understand the framework of the businesses. The competitive landscape has been elaborated on the basis of profit margin, which helps to understand the competitors at domestic as well as global level.

The global Machine Learning in Finance market has been studied by considering numerous attributes such as type, size, applications, and end-users. It includes investigations on the basis of current trends, historical records, and future prospects. This statistical data helps in making informed business decisions for the progress of the industries. For an effective and stronger business outlook, some significant case studies have been mentioned in this report.

Get a reasonable discount on this premium report @:

https://www.marketresearchinc.com/ask-for-discount.php?id=31104

Key Objectives of Machine Learning in Finance Market Report:

Study of the annual revenues and market developments of the major players that supply Machine Learning in Finance

Analysis of the demand for Machine Learning in Finance by component

Assessment of future trends and growth of architecture in the Machine Learning in Finance market

Assessment of the Machine Learning in Finance market with respect to the type of application

Study of the market trends in various regions and countries, by component, of the Machine Learning in Finance market

Study of contracts and developments related to the Machine Learning in Finance market by key players across different regions

Finalization of overall market sizes by triangulating the supply-side data, which includes product developments, supply chain, and annual revenues of companies supplying Machine Learning in Finance across the globe

Further information:

https://www.marketresearchinc.com/enquiry-before-buying.php?id=31104

In this study, the years considered to estimate the size of the Machine Learning in Finance market are as follows:

History Year: 2016-2019

Base Year: 2020

Forecast Year: 2021 to 2028

About Us

Market Research Inc is farsighted in its view and covers massive ground in global research. Local or global, we keep a close check on both markets. Trends and concurrent assessments sometimes overlap and influence the other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product/service become the best they can with our informed approach.

Contact Us

Market Research Inc

Kevin

51 Yerba Buena Lane, Ground Suite,

Inner Sunset San Francisco, CA 94103, USA

Call Us:+1 (628) 225-1818

Write Us: sales@marketresearchinc.com

https://www.marketresearchinc.com

Link:
Machine Learning in Finance Market Benefits, Forthcoming Developments, Business Opportunities & Future Investments to 2028 KSU | The Sentinel...

4Paradigm Defends its Championship in China’s Machine Learning Platform Market in the 1st Half of 2020, According to IDC – Yahoo Finance

4Paradigm has held a leadership position from 2018 through the first half of 2020

BEIJING, Jan. 21, 2021 /PRNewswire/ -- IDC, a premier global provider of market intelligence, has recently published the China AI Software and Application (2020 H1) Report (hereinafter referred to as the "Report"), in which 4Paradigm, an AI innovator recognized for its software standardization level, scope of industrial coverage and solid customer base, is shown to have led China's machine learning platform market from 2018 through the first half of 2020 with an expanding market share, ahead of leading vendors such as Alibaba, Tencent, Baidu and Huawei.

The report reviews China's AI market in 2020 in retrospect: from 2015 to 2020, every year has seen new drivers emerge from the AI market, with the market landscape continuously evolving from cognition to exploration, to deep application and then to scale-up. The AI market has been unprecedentedly prosperous since 2020, as awareness of and investment in AI and data intelligence in the Chinese market have both been boosted by pandemic control, new infrastructure initiatives and the impact of international trade frictions. Since the second half of 2020, a series of policies, such as the digital transformation of state-owned enterprises and the intelligent computing centers launched by governmental authorities, have been expected to galvanize AI growth to new heights.

Looking into the future, Yanxia Lu, Chief AI Analyst of IDC China says, "Market opportunities generated from continual AI implementation are just around the corner. For further expansion of market shares, it's necessary to leverage technological leadership and product innovation for new market opportunities, to explore replicable and scalable application scenarios and to unite partners with industrial know-how for deployment of technologies on enterprise."

The IDC report recognizes the advantages of 4Paradigm's machine learning platform and AutoML products in technological accumulation, enterprise-level product layout, commercial implementation performance, the AI industrial ecosystem, etc., making the report an important benchmark for enterprises' choice of machine learning platform.


4Paradigm has built a full-stack AutoML algorithm layout covering perceptive, cognitive and decision-making algorithms, enabling enterprises to improve key decision-making performance and to scale up AI scenario deployment with a low threshold and high efficiency across all-dimensional observation, accurate orientation and optimized decision-making.

4Paradigm released four products this year: Sage AIOS, an enterprise AI operating system; Sage HyperCycle ML, a fully automatic tool for scaled-up AI development; Sage CESS, a one-stop intelligent operation platform; and Sage One, a full-lifecycle AI computing power platform. Together they build a full-stack AI product matrix covering computing power, operating system, production platform and business system.

To help enterprises address the booming demand of moving online, 4Paradigm continues to provide online, intelligent and precise operation capabilities to numerous prominent enterprises and organizations in China and abroad, among which are Bank of Communications, Industrial Bank, Huaxia Bank, Guosen Securities, Laiyifen, Feihe, China Academy of Railway Sciences, DHL, Zegna, Budweiser China, KRASTASE, etc., enabling them to embrace digital transformation and seize new opportunities online.

With over 200 partners in 15 sectors, 4Paradigm is experiencing rapid increase in its eco partners and industrial coverage on the basis of existing ecosystem.

Despite the unprecedented boom in the AI market, enterprises face mounting challenges in their intelligent transformation in terms of the high development threshold of AI, low implementation efficiency and poor business value. At the FutureScape China ICT Market Forecast Forum, an annual IDC event held recently, Zhenshan Zhong, Vice President of IDC China, offered elaborate insights on the ten predictions for the AI market in China from 2021 to 2025, among which AutoML (automated machine learning) ranks top. IDC holds that AutoML will lower the threshold of AI development and make inclusive AI a reality. It is expected that the number of data analysts and modelling scientists using AutoML technology encapsulation in providing end-to-end machine learning platforms, from data preparation to model deployment, will double by 2023.

Through product embedding of AutoML technology and a rigorous implementation methodology, 4Paradigm has built systematic AutoML implementation solutions and pathways, which have enabled the successful implementation of over 10,000 AI applications for enterprises in finance, retail, healthcare, manufacturing, internet, media, government, energy and carriers, among other sectors, with positive feedback from leaders and innovators in the tide of transformation. In the future, 4Paradigm will continue to promote the implementation of machine learning platforms and AutoML products in more industries and scenarios, helping more enterprises in their journey of intelligent transformation and upgrade for higher business efficiency while removing obstacles and boosting social productivity.

http://www.4paradigm.com

SOURCE 4Paradigm

Read more here:
4Paradigm Defends its Championship in China's Machine Learning Platform Market in the 1st Half of 2020, According to IDC - Yahoo Finance

Over 50s Spending Spree Boosts Economy By Billions – Money International

Older consumers are a benefit to the economy rather than a drain on resources, according to new research.

The over 50s will spend 63p in every £1 in the UK by 2040, rising from 54p in the £1 in 2018.

And the money is spent across the board rather than on specific goods and services.

However, think-tank the International Longevity Centre (ILC) believes the country could benefit even more if the government looked at lifting barriers to spending by older people.

The ILC report, Maximising the longevity dividend, reveals that spending by the over 50s has dominated the UK economy since 2013 and will rise over the coming decades, from 54% (£319 billion) of total consumer spending in 2018 to 63% (£550 billion) by 2040.

According to the report, lifting barriers to spending by the over 75s could add 2% (£47 billion) to GDP a year by 2040, and supporting the over 50s to remain in the workforce could add another 1.3% to GDP a year by 2040.

David Sinclair, Director of the ILC, said: "As the population ages there are enormous economic opportunities, but these are currently being neglected.

"There are enormous gains to be made by individual businesses and for the economy if we can unlock the spending and earning power of older adults.

"But too many people face barriers to working and spending in later life: issues like inaccessible high streets, poorly designed products, and age-discriminatory attitudes require a serious response.

"We've become accustomed to hearing our ageing population talked about as a bad thing, but the reality is it could be an opportunity.

"However, we won't realise this longevity dividend through blind optimism about ageing. Instead, we need concerted action to tackle the barriers to spending and working in later life.

"We need action to make sure our extra years are healthy years, we need accessible high streets and workplaces that are free from age discrimination, and we need continued action to ensure that people have access to decent pensions in later life.

"Realising the longevity dividend will require decisive action of the kind we've yet to see from either business or government. For all the talk of baby boomers dominating politics, we've yet to see a serious response to the opportunities and this needs to change."

Further related information and articles can be found following the links below

Continue reading here:
Over 50s Spending Spree Boosts Economy By Billions - Money International

Mission Healthcare of San Diego Adopts Muse Healthcare’s Machine Learning Tool – Southernminn.com

ST. PAUL, Minn., Jan. 19, 2021 /PRNewswire/ -- San Diego-based Mission Healthcare, one of the largest home health, hospice, and palliative care providers in California, will adopt Muse Healthcare's machine learning and predictive modeling tool to help deliver a more personalized level of care to their patients.

The Muse technology evaluates and models every clinical assessment, medication, vital sign, and other relevant data to perform a risk stratification of these patients. The tool then highlights the patients with the most critical needs and visually alerts the agency to perform additional care. Muse Healthcare identifies patients as "Critical," meaning they have a greater than 90% likelihood of passing in the next 7-10 days. Users are also able to make accurate changes to care plans based on the condition and location of the patient. When agencies use Muse's powerful machine learning tool, they have an advantage and data-proven outcomes to demonstrate they are providing more and better care to patients in transition.
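As a purely illustrative sketch (Muse's actual model is proprietary), the snippet below shows the general shape of such a risk-stratification step: a classifier scores each visit, and patients whose predicted transition risk exceeds 90% are flagged as "Critical." The features, data, and classifier choice are assumptions.

```python
# Illustrative sketch only: Muse's actual model is proprietary, so the feature
# names, threshold, and classifier below are assumptions used to show what
# risk stratification of hospice patients could look like in code.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-visit features: heart rate, assessment score, medication count
X_train = np.array([[88, 0.62, 7], [120, 0.10, 2], [95, 0.55, 6], [110, 0.20, 3]])
y_train = np.array([1, 0, 1, 0])  # 1 = transitioned within 7-10 days of this visit

model = GradientBoostingClassifier().fit(X_train, y_train)

def stratify(visit_features, critical_threshold=0.90):
    """Flag a patient as 'Critical' when predicted transition risk exceeds 90%."""
    risk = model.predict_proba([visit_features])[0, 1]
    return "Critical" if risk >= critical_threshold else "Monitor"

print(stratify([90, 0.60, 6]))
```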

According to Mission Healthcare's Vice President of Clinical and Quality, Gerry Smith, RN, MSN, Muse will serve as an invaluable tool that will assist their clinicians to enhance care for their patients. "Mission Hospice strives to ensure every patient receives the care and comfort they need while on service, and especially in their final days. We are so excited that the Muse technology will provide our clinical team with additional insights to positively optimize care for patients at the end of life. This predictive modeling technology will enable us to intervene earlier; make better decisions for more personalized care; empower staff; and ultimately improve patient outcomes."

Mission Healthcare's CEO, Paul VerHoeve, also believes that the Muse technology will empower their staff to provide better care for patients. "Predictive analytics are a new wave in hospice innovation and Muse's technology will be a valuable asset to augment our clinical efforts at Mission Healthcare. By implementing a revolutionary machine learning tool like Muse, we can ensure our patients are receiving enhanced hands-on care in those critical last 7-10 days of life. Our mission is to take care of people; with Muse we will continue to improve the patient experience and provide better care in the final days and hours of a patient's life."

As the only machine learning tool in the hospice industry, the Muse transitions tool takes advantage of the implemented documentation within the EMR. This allows the agency to quickly implement the tool without disruption. "With guidance from our customers in the hundreds of locations that are now using the tool, we have focused on deploying time saving enhancements to simplify a clinician's role within hospice agencies. These tools allow the user to view a clinical snapshot, complete review of the scheduled frequency, and quickly identify the patients that need immediate attention. Without Muse HC, a full medical review must be conducted to identify these patients," said Tom Maxwell, co-Founder of Muse Healthcare. "We are saving clinicians time in their day, simplifying the identification challenges of hospice, and making it easier to provide better care to our patients. Hospice agencies only get one chance to get this right," said Maxwell.

CEO of Muse Healthcare, Bryan Mosher, is also excited about Mission's adoption of the Muse tool. "We welcome the Mission Healthcare team to the Muse Healthcare family of customers, and are happy to have them adopt our product so quickly. We are sure with the use of our tools,clinicians at Mission Healthcare will provide better care for their hospice patients," said Mosher.

About Mission Healthcare

As one of the largest regional home health, hospice, and palliative care providers in California, San Diego-based Mission Healthcare was founded in 2009 with the creation of its first service line, Mission Home Health. In 2011, Mission added its hospice service line. Today, Mission employs over 600 people and serves both home health and hospice patients throughout Southern California. In 2018, Mission was selected as a Top Workplace by the San Diego Union-Tribune. For more information visit https://homewithmission.com/.

About Muse Healthcare

Muse Healthcare was founded in 2019 by three leading hospice industry professionals -- Jennifer Maxwell, Tom Maxwell, and Bryan Mosher. Their mission is to equip clinicians with world-class analytics to ensure every hospice patient transitions with unparalleled quality and dignity. Muse's predictive model considers hundreds of thousands of data points from numerous visits to identify which hospice patients are most likely to transition within 7-12 days. The science that powers Muse is considered a true deep learning neural network, the only one of its kind in the hospice space. When hospice care providers can more accurately predict when their patients will transition, they can ensure their patients and the patients' families receive the care that matters most in the final days and hours of a patient's life. For more information visit http://www.musehc.com.

View original post here:
Mission Healthcare of San Diego Adopts Muse Healthcare's Machine Learning Tool - Southernminn.com

Researchers Develop New Machine Learning Technique to Predict Progress of COVID-19 Patients | The Weather Channel – Articles from The Weather Channel…

An illustration of novel coronavirus SARS-CoV-2.

Researchers have published one of the first studies using a Machine Learning (ML) technique called "federated learning" to examine electronic health records to better predict how COVID-19 patients will progress.

The study, published in the Journal of Medical Internet Research - Medical Informatics, indicates that the emerging technique holds promise to create more robust machine learning models that extend beyond a single health system without compromising patient privacy.

These models, in turn, can help triage patients and improve the quality of their care. "Machine Learning models in health care often require diverse and large-scale data to be robust and translatable outside the patient population they were trained on," said co-author Benjamin Glicksberg, Assistant Professor at Mount Sinai.

Federated learning is a technique that trains an algorithm across multiple devices or servers holding local data samples but avoids clinical data aggregation, which is undesirable for reasons including patient privacy issues.
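For readers unfamiliar with the technique, here is a minimal federated-averaging sketch of the idea described above: each site trains on its own data and only model weights, never patient records, are shared and averaged. This is an illustration under assumed data and model choices, not the study's actual implementation.

```python
# A minimal federated-averaging sketch (not the study's actual code): each
# hospital trains on its own records, and only model weights are shared
# and averaged by a coordinating server.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One hospital's local logistic-regression training on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))          # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)      # gradient step
    return w

def federated_round(global_weights, hospital_datasets):
    """Average the locally trained weights; raw records never leave a site."""
    local_weights = [local_update(global_weights, X, y) for X, y in hospital_datasets]
    return np.mean(local_weights, axis=0)

# Usage: five hospitals, each with synthetic (features, mortality label) data
rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50)) for _ in range(5)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, hospitals)
```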

For the study, the researchers implemented and assessed federated learning models using data from electronic health records at five separate hospitals within the Health System to predict mortality in COVID-19 patients.

They compared the performance of a federated model against ones built using data from each hospital separately, referred to as local models.

After training their models on the federated network and testing them at each hospital, the researchers found that the federated models demonstrated enhanced predictive power and outperformed the local models at most of the hospitals.

**

The above article has been published from a wire agency with minimal modifications to the headline and text.

Continue reading here:
Researchers Develop New Machine Learning Technique to Predict Progress of COVID-19 Patients | The Weather Channel - Articles from The Weather Channel...

Deep Learning Outperforms Standard Machine Learning in Biomedical Research Applications, Research Shows – Georgia State University News

ATLANTA - Compared to standard machine learning models, deep learning models are largely superior at discerning patterns and discriminative features in brain imaging, despite being more complex in their architecture, according to a new study in Nature Communications led by Georgia State University.

Advanced biomedical technologies such as structural and functional magnetic resonance imaging (MRI and fMRI) or genomic sequencing have produced an enormous volume of data about the human body. By extracting patterns from this information, scientists can glean new insights into health and disease. This is a challenging task, however, given the complexity of the data and the fact that the relationships among types of data are poorly understood.

Deep learning, built on advanced neural networks, can characterize these relationships by combining and analyzing data from many sources. At the Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State researchers are using deep learning to learn more about how mental illness and other disorders affect the brain.

Although deep learning models have been used to solve problems and answer questions in a number of different fields, some experts remain skeptical. Recent critical commentaries have unfavorably compared deep learning with standard machine learning approaches for analyzing brain imaging data.

However, as demonstrated in the study, these conclusions are often based on pre-processed input that deprives deep learning of its main advantage: the ability to learn from the data with little to no preprocessing. Anees Abrol, research scientist at TReNDS and the lead author on the paper, compared representative models from classical machine learning and deep learning, and found that if trained properly, the deep-learning methods have the potential to offer substantially better results, generating superior representations for characterizing the human brain.

"We compared these models side-by-side, observing statistical protocols so everything is apples to apples. And we show that deep learning models perform better, as expected," said co-author Sergey Plis, director of machine learning at TReNDS and associate professor of computer science.

Plis said there are some cases where standard machine learning can outperform deep learning. For example, diagnostic algorithms that plug in single-number measurements, such as a patient's body temperature or whether the patient smokes cigarettes, would work better using classical machine learning approaches.

"If your application involves analyzing images or if it involves a large array of data that can't really be distilled into a simple measurement without losing information, deep learning can help," Plis said. "These models are made for really complex problems that require bringing in a lot of experience and intuition."

The downside of deep learning models is they are data hungry at the outset and must be trained on lots of information. But once these models are trained, said co-author Vince Calhoun, director of TReNDS and Distinguished University Professor of Psychology, they are just as effective at analyzing reams of complex data as they are at answering simple questions.

"Interestingly, in our study we looked at sample sizes from 100 to 10,000 and in all cases the deep learning approaches were doing better," he said.
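As a purely illustrative aside, the snippet below shows what a same-protocol, side-by-side comparison of a classical model and a small neural network across growing sample sizes might look like. The synthetic data and model choices are assumptions, not the study's actual brain-imaging pipeline.

```python
# Illustrative only: an "apples to apples" comparison of a classical model and
# a small neural network under the same cross-validation protocol, at several
# sample sizes. Synthetic data stands in for the study's brain-imaging inputs.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

for n_samples in (100, 1000, 10000):
    X, y = make_classification(n_samples=n_samples, n_features=200,
                               n_informative=20, random_state=0)
    classical = LogisticRegression(max_iter=2000)
    deep = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
    acc_classical = cross_val_score(classical, X, y, cv=5).mean()
    acc_deep = cross_val_score(deep, X, y, cv=5).mean()
    print(f"n={n_samples}: logistic={acc_classical:.3f}  mlp={acc_deep:.3f}")
```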

Another advantage is that scientists can reverse analyze deep-learning models to understand how they are reaching conclusions about the data. As the published study shows, the trained deep learning models learn to identify meaningful brain biomarkers.

"These models are learning on their own, so we can uncover the defining characteristics that they're looking into that allows them to be accurate," Abrol said. "We can check the data points a model is analyzing and then compare it to the literature to see what the model has found outside of where we told it to look."

The researchers envision that deep learning models are capable of extracting explanations and representations not already known to the field and can act as an aid in growing our knowledge of how the human brain functions. They conclude that although more research is needed to find and address the weaknesses of deep-learning models, from a mathematical point of view it's clear these models outperform standard machine learning models in many settings.

"Deep learning's promise perhaps still outweighs its current usefulness to neuroimaging, but we are seeing a lot of real potential for these techniques," Plis said.

Read more:
Deep Learning Outperforms Standard Machine Learning in Biomedical Research Applications, Research Shows - Georgia State University News

Project MEDAL to apply machine learning to aero innovation – The Engineer

Metallic alloys for aerospace components are expected to be made faster and more cheaply with the application of machine learning in Project MEDAL.

This is the aim of Project MEDAL: Machine Learning for Additive Manufacturing Experimental Design, which is being led by Intellegens, a Cambridge University spin-out specialising in artificial intelligence, together with the University of Sheffield AMRC North West and Boeing. It aims to accelerate the product development lifecycle of aerospace components by using a machine learning model to optimise additive manufacturing (AM) for new metal alloys.


Project MEDAL's research will concentrate on metal laser powder bed fusion and will focus on the key parameter variables required to manufacture high-density, high-strength parts.

The project is part of the National Aerospace Technology Exploitation Programme (NATEP), a £10m initiative for UK SMEs to develop innovative aerospace technologies, funded by the Department for Business, Energy and Industrial Strategy and delivered in partnership with the Aerospace Technology Institute (ATI) and Innovate UK.

In a statement, Ben Pellegrini, CEO of Intellegens, said: "The intersection of machine learning, design of experiments and additive manufacturing holds enormous potential to rapidly develop and deploy custom parts not only in aerospace, as proven by the involvement of Boeing, but in medical, transport and consumer product applications."

"There are many barriers to the adoption of metallic AM, but providing users, and maybe more importantly new users, with the tools they need to process a required material should not be one of them," added James Hughes, research director for Sheffield University AMRC North West. "With the AMRC's knowledge in AM, and Intellegens' AI tools, all the required experience and expertise is in place to deliver a rapid, data-driven software toolset for developing parameters for metallic AM processes to make them cheaper and faster."

Aerospace components must withstand certain loads and temperature resistances, and some materials are limited in what they can offer. There is also a simultaneous push for lower weight and higher temperature resistance for better fuel efficiency, bringing new or previously impractical-to-machine metals into the aerospace sector.

One of the main drawbacks of AM is the limited material selection currently available, and the design of new materials, particularly in the aerospace industry, requires expensive and extensive testing and certification cycles which can take longer than a year to complete and cost as much as £1m. Project MEDAL aims to accelerate this process.

"The machine learning solution in this project can significantly reduce the need for many experimental cycles by around 80 per cent," Pellegrini said. "The software platform will be able to suggest the most important experiments needed to optimise AM processing parameters, in order to manufacture parts that meet specific target properties. The platform will make the development process for AM metal alloys more time and cost-efficient. This will in turn accelerate the production of more lightweight and integrated aerospace components, leading to more efficient aircraft and improved environmental impact."

More:
Project MEDAL to apply machine learning to aero innovation - The Engineer

Machine Learning Shown to Identify Patient Response to Sarilumab in Rheumatoid Arthritis – AJMC.com Managed Markets Network

Machine learning was shown to identify patients with rheumatoid arthritis (RA) who present an increased chance of achieving clinical response with sarilumab, with those selected also showing an inferior response to adalimumab, according to an abstract presented at ACR Convergence, the annual meeting of the American College of Rheumatology (ACR).

In prior phase 3 trials comparing the interleukin 6 receptor (IL-6R) inhibitor sarilumab with placebo and the tumor necrosis factor α (TNF-α) inhibitor adalimumab, sarilumab appeared to provide superior efficacy for patients with moderate to severe RA. Although promising, the researchers of the abstract highlight that treatment of RA requires a more individualized approach to maximize efficacy and minimize the risk of adverse events.

"The characteristics of patients who are most likely to benefit from sarilumab treatment remain poorly understood," noted the researchers.

Seeking to better identify the patients with RA who may benefit most from sarilumab treatment, the researchers applied machine learning to select from a predefined set of patient characteristics, which they hypothesized may help delineate the patients who could benefit most from either anti-IL-6R or anti-TNF-α treatment.

Following their extraction of data from the sarilumab clinical development program, the researchers utilized a decision tree classification approach to build predictive models on ACR response criteria at week 24 in patients from the phase 3 MOBILITY trial, focusing on the 200-mg dose of sarilumab. They incorporated the Generalized, Unbiased, Interaction Detection and Estimation (GUIDE) algorithm, including 17 categorical and 25 continuous baseline variables as candidate predictors. "These included protein biomarkers, disease activity scoring, and demographic data," added the researchers.

Endpoints used were ACR20, ACR50, and ACR70 at week 24, with the resulting rule validated through application on independent data sets from the following trials:

Assessing the end points used, it was found that the most successful GUIDE model was trained against the ACR20 response. From the 42 candidate predictor variables, the combined presence of anticitrullinated protein antibodies (ACPA) and C-reactive protein >12.3 mg/L was identified as a predictor of better treatment outcomes with sarilumab, with those patients identified as rule-positive.
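For illustration only, the reported rule can be expressed directly in code; the snippet below is a hypothetical restatement of the published cutoffs, not the authors' GUIDE model or code.

```python
# Hypothetical illustration of applying the reported rule (ACPA-positive and
# C-reactive protein > 12.3 mg/L) to flag "rule-positive" patients; this is
# not the authors' GUIDE model, just the published decision rule in code form.
from dataclasses import dataclass

@dataclass
class Patient:
    acpa_positive: bool   # anti-citrullinated protein antibodies detected
    crp_mg_per_l: float   # baseline C-reactive protein

def rule_positive(p: Patient, crp_cutoff: float = 12.3) -> bool:
    """Rule-positive patients were reported to respond better to sarilumab."""
    return p.acpa_positive and p.crp_mg_per_l > crp_cutoff

print(rule_positive(Patient(acpa_positive=True, crp_mg_per_l=20.0)))   # True
print(rule_positive(Patient(acpa_positive=False, crp_mg_per_l=30.0)))  # False
```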

These rule-positive patients, which ranged from 34% to 51% in the sarilumab groups across the 4 trials, were shown to have more severe disease and poorer prognostic factors at baseline. They also exhibited better outcomes than rule-negative patients for most end points assessed, except for patients with inadequate response to TNF inhibitors.

Notably, rule-positive patients had a better response to sarilumab but an inferior response to adalimumab, except for the HAQ-Disability Index minimal clinically important difference end point.

"If verified in prospective studies, this rule could facilitate treatment decision-making for patients with RA," concluded the researchers.

Reference

Rehberg M, Giegerich C, Praestgaard A, et al. Identification of a rule to predict response to sarilumab in patients with rheumatoid arthritis using machine learning and clinical trial data. Presented at: ACR Convergence 2020; November 5-9, 2020. Abstract 2006. Accessed January 15, 2021. https://acrabstracts.org/abstract/identification-of-a-rule-to-predict-response-to-sarilumab-in-patients-with-rheumatoid-arthritis-using-machine-learning-and-clinical-trial-data/

See the article here:
Machine Learning Shown to Identify Patient Response to Sarilumab in Rheumatoid Arthritis - AJMC.com Managed Markets Network

AI in Credit Decision-Making Is Promising, but Beware of Hidden Biases, Fed Warns – JD Supra

As financial services firms increasingly turn to artificial intelligence (AI), banking regulators warn that despite their astonishing capabilities, these tools must be relied upon with caution.

Last week, the Board of Governors of the Federal Reserve (the Fed) held a virtual AI Academic Symposium to explore the application of AI in the financial services industry. Governor Lael Brainard explained that particularly as financial services become more digitized and shift to web-based platforms, a steadily growing number of financial institutions have relied on machine learning to detect fraud, evaluate credit, and aid in operational risk management, among many other functions.[i]

In the AI world, machine learning refers to a model that processes complex data sets and automatically recognizes patterns and relationships, which are in turn used to make predictions and draw conclusions.[ii] Alternative data is information that is not traditionally used in a particular decision-making process but that populates machine learning algorithms in AI-based systems and thus fuels their outputs.[iii]

Machine learning and alternative data have special utility in the consumer lending context, where these AI applications allow financial firms to determine the creditworthiness of prospective borrowers who lack credit history.[iv] Using alternative data such as the consumer's education, job function, property ownership, address stability, rent payment history, and even internet browser history and behavioral information, among many other data, financial institutions aim to expand the availability of affordable credit to so-called "credit invisibles" or "unscorables."[v]

Yet, as Brainard cautioned last week, machine-learning AI models can be so complex that even their developers lack visibility into how the models actually classify and process what could amount to thousands of nonlinear data elements.[vi] This obscuring of AI models' internal logic, known as the "black box" problem, raises questions about the reliability and ethics of AI decision-making.[vii]

When using AI machine learning to evaluate access to credit, the opaque and complex data interactions relied upon by AI could result in discrimination by race, or even lead to digital redlining, if not intentionally designed to address this risk.[viii] This can happen, for example, when intricate data interactions containing historical information such as educational background and internet browsing habits become proxies for race, gender, and other protected characteristics, leading to biased algorithms that discriminate.[ix]
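To make the fairness concern concrete, here is a minimal, hypothetical check of a credit model's approval rates across two groups, in the spirit of the common "four-fifths" disparate-impact screen. The data, group labels, and 0.8 threshold are assumptions for illustration, not a methodology described by the Fed.

```python
# Illustrative sketch only: a simple disparate-impact check on a credit model's
# approvals across two groups (the 80% "four-fifths" threshold and the data
# are assumptions; this is not a regulatory methodology from the Fed).
import numpy as np

def disparate_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates between the two groups (lower / higher)."""
    rate_a = approved[group == "A"].mean()
    rate_b = approved[group == "B"].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions for 8 applicants from two groups
approved = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
ratio = disparate_impact_ratio(approved, group)
print(f"disparate impact ratio = {ratio:.2f}", "(flag for review)" if ratio < 0.8 else "")
```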

Consumer protection laws, among other aspects of the existing regulatory framework, cover AI-related credit decision-making activities to some extent. Still, in light of the rising complexity of AI systems and their potentially inequitable consequences, AI-focused legal reforms may be needed. At this time, to help ensure that financial services firms are prepared to manage these risks, the Fed has called on stakeholders, from financial services firms to consumer advocates and civil rights organizations, as well as other businesses and the general public, to provide input on responsible AI use.[x]

[i] Lael Brainard, Governor, Bd. of Governors of the Fed. Reserve Sys., AI Academic Symposium: Supporting Responsible Use of AI and Equitable Outcomes in Financial Services (Jan. 12, 2021), available at https://www.federalreserve.gov/newsevents/speech/brainard20210112a.htm.

[ii] Pratin Vallabhaneni and Margaux Curie, Leveraging AI and Alternative Data in Credit Underwriting: Fair Lending Considerations for Fintechs, 23 No. 4 Fintech L. Rep. NL 1 (2020).

[iii] Id.

[iv] Id.; Brainard, supra n. 1.

[v] Vallabhaneni and Margaux Curie, supra n.2; Kathleen Ryan, The Big Brain in the Black Box, Am. Bar Assoc. (May 2020), https://bankingjournal.aba.com/2020/05/the-big-brain-in-the-black-box/.

[vi] Brainard, supra n.1; Ryan, supra n.5.

[vii] Brainard, supra n.1; Ryan, supra n.5.

[viii] Brainard, supra n.1.

[ix] Id. (citing Carol A. Evans and Westra Miller, From Catalogs to Clicks: The Fair Lending Implications of Targeted, Internet Marketing, Consumer Compliance Outlook (2019)).

[x] Id.

Read the original post:
AI in Credit Decision-Making Is Promising, but Beware of Hidden Biases, Fed Warns - JD Supra

Has the Time Come to Trust Machines more than Humans? – Analytics Insight

It's stunning what innovation can do nowadays, now and again taking on jobs and decisions that once required human thought. Think about the capabilities of artificial intelligence, machine learning and predictive analytics, and the effect that these advances could have on humans.

Theoretically, you can already do a lot of things, and much more, utilizing technology. Yet, are the decisions that algorithms make based on predictive analytics and big data necessarily any better than the decisions seasoned managers may make, taking into consideration their years of experience?

Not every person fears our machine overlords. Truth be told, as indicated by Penn State scientists, with regards to private data and access to financial data, individuals will trust machines more than humans, which could prompt both positive and negative online practices.

The study showed that individuals who trusted machines were significantly more likely to surrender their credit card numbers to a computerized travel planner than to a human travel planner. Experts in both technology and business are united in accepting that AI isn't yet prepared to overtake the human components of decision-making tied to various business choices, if it ever will be. It is, they state, a balance.

Technology, and the data it can be programmed to capture, is a massively important tool for quick decision-making or for carrying business activities to a set of conclusions. However, these should be placed into context by a human, indeed, more than one human. Human decision-making is vulnerable to bias, so, in the interest of fairness, more than one individual's instinct should be considered.

In a car accident, individuals judge the action of a self-driving vehicle as more destructive and corrupt, despite the fact that the action performed by the human was actually the equivalent. In another situation, we consider an emergency response system responding to a tidal wave. A few people were informed that the town was effectively evacuated. Others were informed that the evacuation effort failed.

Studies demonstrate that for this situation machines additionally got the worst part of the deal. Truth be told, if the rescue effort failed, individuals assessed the action of the machine adversely and that of the human positively. The data demonstrated that individuals appraised the action of the machine as essentially more hurtful and less good, and furthermore revealed needing to hire the human, yet not the machine.

That confidence in machines might be triggered because individuals accept that machines don't talk, or have unlawful plans on their private data. In any case, while machines probably won't have ulterior motives with their data, the individuals creating and running those computers could prey on this gullibility to harness personal data from clueless users, for instance through phishing tricks, which are attempts by criminals to get user names, passwords, credit card numbers and other bits of private data by posing as trustworthy sources.

Another study, supported by Oracle and Future Workplace, found that individuals have more trust in robots than in their managers. The survey of 8,370 employees, directors and managers across 10 nations found that AI has changed the relationship between individuals and technology at work, and is reshaping the role HR teams and leaders need to play in attracting, retaining and developing talent.

"The most recent advances in AI and machine learning are quickly reaching the mainstream, bringing about a huge shift in the way individuals across the world interact with technology and their teams," said Emily He, senior VP of the Human Capital Management Cloud Business Group at Oracle. "As this study shows, the connection between humans and machines is being reimagined at work, and there is no one-size-fits-all approach to managing this change effectively. All things considered, companies need to work with their HR teams to customize the way they implement AI at work to meet the changing expectations of their teams the world over."

Individuals surely don't care for biased humans or machines, yet when we test this experimentally, people rate human bias as marginally more harmful and less moral than that of machines.

We are moving from a time of imposing standards on machine behavior to one of discovering laws which don't tell us how machines should act, but how we judge them. Furthermore, the primary principle is powerful and straightforward: individuals judge people by their intentions and machines by their results.

Go here to read the rest:
Has the Time Come to Trust Machines more than Humans? - Analytics Insight

CERC plans to embrace AI, machine learning to improve functioning – Business Standard

The apex power sector regulator, the Central Electricity Regulatory Commission (CERC), is planning to set up an artificial intelligence (AI)-based regulatory expert system tool (REST) for improving access to information and assist the commission in discharge of its duties. So far, only the Supreme Court (SC) has an electronic filing (e-filing) system and is in the process of building an AI-based back-end service.

The CERC will be the first such quasi-judicial regulatory body to embrace AI and machine learning (ML). The decision comes at a time when the CERC has been shut for four ...


First Published: Fri, January 15 2021. 06:10 IST

Read the original:
CERC plans to embrace AI, machine learning to improve functioning - Business Standard

New research project will use machine learning to advance metal alloys for aerospace – Metal Additive Manufacturing magazine

Ian Brooks, AM Technical Fellow, AMRC North West, with Renishaw's RenAM 500Q metal Additive Manufacturing machine (Courtesy Renishaw/AMRC North West)

UK-based Intellegens, a University of Cambridge spin-out specialising in artificial intelligence; the University of Sheffield Advanced Manufacturing Research Centre (AMRC) North West, Preston, Lancashire, UK; and Boeing will collaborate on Project MEDAL: Machine Learning for Additive Manufacturing Experimental Design.

The project aims to accelerate the product development lifecycle of aerospace components by using a machine learning model to optimise Additive Manufacturing processing parameters for new metal alloys at a lower cost and faster rate. The research will focus on metal Laser Beam Powder Bed Fusion (PBF-LB), specifically on key parameter variables required to manufacture high density, high strength parts.

Project MEDAL is part of the National Aerospace Technology Exploitation Programme (NATEP), a £10 million initiative for UK SMEs to develop innovative aerospace technologies, funded by the Department for Business, Energy and Industrial Strategy and delivered in partnership with the Aerospace Technology Institute (ATI) and Innovate UK. Intellegens was a startup in the first group of companies to complete the ATI Boeing Accelerator last year.

"We are very excited to be launching this project in conjunction with the AMRC," stated Ben Pellegrini, CEO of Intellegens. "The intersection of machine learning, design of experiments and Additive Manufacturing holds enormous potential to rapidly develop and deploy custom parts not only in aerospace, as proven by the involvement of Boeing, but in medical, transport and consumer product applications."

James Hughes, Research Director for University of Sheffield AMRC North West, explained that the project will build the AMRC's knowledge and expertise in alloy development so it can help other UK manufacturers.

Hughes commented, "At the AMRC we have experienced first-hand, and through our partner network, how onerous it is to develop a robust set of process parameters for AM. It relies on a multi-disciplinary team of engineers and scientists and comes at great expense in both time and capital equipment."

"It is our intention to develop a robust, end-to-end methodology for process parameter development that encompasses how we operate our machinery right through to how we generate response variables quickly and efficiently. Intellegens' AI-embedded platform Alchemite will be at the heart of all of this."

"There are many barriers to the adoption of metallic AM, but providing users, and maybe more importantly new users, with the tools they need to process a required material should not be one of them," Hughes continued. "With the AMRC's knowledge in AM, and Intellegens' AI tools, all the required experience and expertise is in place to deliver a rapid, data-driven software toolset for developing parameters for metallic AM processes to make them cheaper and faster."

Sir Martin Donnelly, president of Boeing Europe and managing director of Boeing in the UK and Ireland, reported that the project shows how industry can successfully partner with government and academia to spur UK innovation.

Donnelly noted, "We are proud to see this project move forward because of what it promises aviation and manufacturing, and because of what it represents for the UK's innovation ecosystem. We helped found the AMRC two decades ago, Intellegens was one of the companies we invested in as part of the ATI Boeing Accelerator, and we have longstanding research partnerships with Cambridge University and the University of Sheffield."

He added, "We are excited to see what comes from this continued collaboration and how we might replicate this formula in other ways within the UK and beyond."

Aerospace components have to withstand certain loads and temperature resistances, and some materials are limited in what they can offer. There is also a simultaneous push for lower weight and higher temperature resistance for better fuel efficiency, bringing new or previously impractical-to-machine metals into the aerospace material mix.

One of the main drawbacks of AM is the limited material selection currently available, and the design of new materials, particularly in the aerospace industry, requires expensive and extensive testing and certification cycles which can take longer than a year to complete and cost as much as £1 million to undertake.

Pellegrini explained that experimental design techniques are extremely important to develop new products and processes in a cost-effective and confident manner. The most common approach is Design of Experiments (DOE), a statistical method that builds a mathematical model of a system by simultaneously investigating the effects of various factors.

Pellegrini added, "DOE is a more efficient, systematic way of choosing and carrying out experiments compared to the Change One Separate variable at a Time (COST) approach. However, the high number of experiments required to obtain a reliable covering of the search space means that DOE can still be a lengthy and costly process, which can be improved."
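To make the DOE-versus-COST contrast concrete, here is a minimal Python sketch of a two-level full-factorial design compared with a one-factor-at-a-time sweep. The factor names and levels are invented for illustration and are not taken from Project MEDAL.

```python
# Illustrative sketch only: a two-level full-factorial design for three
# hypothetical AM process parameters, compared with a COST-style sweep.
from itertools import product

# Assumed example factors and levels (not from the MEDAL project).
factors = {
    "laser_power_W":   [200, 370],
    "scan_speed_mm_s": [800, 1200],
    "hatch_um":        [80, 120],
}

# Full-factorial DOE: every combination of levels is tested, so factor
# interactions can be estimated (2^3 = 8 runs here).
doe_runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# COST / one-factor-at-a-time: start from a baseline and vary one factor
# at a time, which cannot reveal interactions between factors.
baseline = {name: levels[0] for name, levels in factors.items()}
cost_runs = [baseline] + [
    {**baseline, name: levels[1]} for name, levels in factors.items()
]

print(f"DOE runs: {len(doe_runs)}, COST runs: {len(cost_runs)}")
for run in doe_runs:
    print(run)
```

Even with only three factors, the full factorial already needs eight runs against four for the one-at-a-time sweep, which is exactly the coverage-versus-cost trade-off Pellegrini describes.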

"The machine learning solution in this project can significantly reduce the need for experimental cycles, by around 80%. The software platform will be able to suggest the most important experiments needed to optimise AM processing parameters, in order to manufacture parts that meet specific target properties. The platform will make the development process for AM metal alloys more time- and cost-efficient. This will in turn accelerate the production of more lightweight and integrated aerospace components, leading to more efficient aircraft and improved environmental impact," concluded Pellegrini.
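Alchemite itself is proprietary, but the general idea of letting a model propose the most informative next experiment can be sketched with a generic surrogate model. In the sketch below the parameter names, measured values and target property are illustrative assumptions only, and the Gaussian process stands in for whatever model the project actually uses.

```python
# Minimal active-learning sketch (not the Alchemite platform): fit a surrogate
# model to the experiments run so far and suggest the candidate parameter set
# with the highest predictive uncertainty as the next experiment to run.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical data: columns are laser power (W) and scan speed (mm/s),
# target is a measured part property such as relative density (%).
X_done = np.array([[200, 800], [200, 1200], [370, 800], [370, 1200]], float)
y_done = np.array([97.2, 96.1, 98.8, 99.1])

# Candidate experiments that have not been run yet.
X_candidates = np.array([[250, 900], [300, 1000], [350, 1100], [280, 850]], float)

gp = GaussianProcessRegressor(normalize_y=True).fit(X_done, y_done)
mean, std = gp.predict(X_candidates, return_std=True)

best = int(np.argmax(std))  # most informative = most uncertain prediction
print("Suggested next experiment:", X_candidates[best],
      "predicted property:", round(float(mean[best]), 2))
```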

Intellegens will produce a software platform with an underlying machine learning algorithm based on its Alchemite platform. It has reportedly already been used successfully to overcome material design problems in a University of Cambridge research project with a leading OEM where a new alloy was designed, developed and verified in eighteen months rather than the expected twenty-year timeline, saving approximately $10 million.

http://www.intellegens.ai

http://www.amrc.co.uk

http://www.boeing.com

View original post here:
New research project will use machine learning to advance metal alloys for aerospace - Metal Additive Manufacturing magazine

The Worldwide Quantum Computing Industry will Exceed $7.1 Billion by 2026 – Yahoo Finance

Dublin, Jan. 19, 2021 (GLOBE NEWSWIRE) -- The "Quantum Computing Market by Technology, Infrastructure, Services, and Industry Verticals 2021 - 2026" report has been added to ResearchAndMarkets.com's offering.

This report assesses the technology, companies/organizations, R&D efforts, and potential solutions facilitated by quantum computing. The report provides global and regional forecasts as well as the outlook for quantum computing impact on infrastructure including hardware, software, applications, and services from 2021 to 2026. This includes the quantum computing market across major industry verticals.

While classical (non-quantum) computers make the modern digital world possible, there are many tasks that cannot be solved using conventional computational methods. This is because of limitations in processing power. For example, fourth-generation computers cannot perform multiple computations at one time with one processor. Physical phenomena at the nanoscale indicate that a quantum computer is capable of computational feats that are orders of magnitude greater than conventional methods.

This is due to the use of something referred to as a quantum bit (qubit), which may exist as a zero or one (as in classical computing) or may exist in two-states simultaneously (0 and 1 at the same time) due to the superposition principle of quantum physics. This enables greater processing power than the normal binary (zero only or one only) representation of data.

Whereas parallel computing is achieved in classical computers via linking processors together, quantum computers may conduct multiple computations with a single processor. This is referred to as quantum parallelism and is a major difference between hyper-fast quantum computers and speed-limited classical computers.
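A small numerical sketch, using plain NumPy rather than any quantum hardware, shows how the superposition and parallelism ideas above are usually modelled: a register of qubits is described by one vector of complex amplitudes over all of its basis states at once.

```python
# Toy statevector sketch of superposition: a 2-qubit register is a vector of
# 4 complex amplitudes, so a single state carries weight over all 2^n outcomes.
import numpy as np

ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: |0> -> (|0>+|1>)/sqrt(2)

# Put each of two qubits into an equal superposition, then combine them.
q = H @ ket0
state = np.kron(q, q)                          # 4 amplitudes: |00>, |01>, |10>, |11>

probabilities = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probabilities):
    print(f"P(|{basis}>) = {p:.2f}")           # 0.25 each: all outcomes held in one state
```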

Quantum computing is anticipated to support many new and enhanced capabilities including:

Ultra-secure Data and Communications: Data is encrypted and also follows multiple paths through a phenomenon known as quantum teleportation

Super-dense Data and Communications: Significantly denser encoding will allow substantially more information to be sent from point A to point B

Target Audience:


ICT Service Providers

ICT Infrastructure Providers

Security Solutions Providers

Data and Computing Companies

Governments and NGO R&D Organizations

Select Report Findings:

The global market for QC hardware will exceed $7.1 billion by 2026

Leading application areas are simulation, optimization, and sampling

Managed services will reach $206 million by 2026 with CAGR of 44.2%

Key professional services will be deployment, maintenance, and consulting

QC based on superconducting (cooling) loops tech will reach $3.3B by 2026

Fastest growing industry verticals will be government, energy, and transportation

Report Benefits:

Market forecasts globally, regionally, and by opportunity areas for 2021 - 2026

Understand how quantum computing will accelerate growth of artificial intelligence

Identify opportunities to leverage quantum computing in different industry verticals

Understand challenges and limitations to deploying and operating quantum computing

Identify contribution of leading vendors, universities, and government agencies in R&D

Key Topics Covered:

1.0 Executive Summary

2.0 Introduction

3.0 Technology and Market Analysis
3.1 Quantum Computing State of the Industry
3.2 Quantum Computing Technology Stack
3.3 Quantum Computing and Artificial Intelligence
3.4 Quantum Neurons
3.5 Quantum Computing and Big Data
3.6 Linear Optical Quantum Computing
3.7 Quantum Computing Business Model
3.8 Quantum Software Platform
3.9 Application Areas
3.10 Emerging Revenue Sectors
3.11 Quantum Computing Investment Analysis
3.12 Quantum Computing Initiatives by Country
3.12.1 USA
3.12.2 Canada
3.12.3 Mexico
3.12.4 Brazil
3.12.5 UK
3.12.6 France
3.12.7 Russia
3.12.8 Germany
3.12.9 Netherlands
3.12.10 Denmark
3.12.11 Sweden
3.12.12 Saudi Arabia
3.12.13 UAE
3.12.14 Qatar
3.12.15 Kuwait
3.12.16 Israel
3.12.17 Australia
3.12.18 China
3.12.19 Japan
3.12.20 India
3.12.21 Singapore

4.0 Quantum Computing Drivers and Challenges
4.1 Quantum Computing Market Dynamics
4.2 Quantum Computing Market Drivers
4.2.1 Growing Adoption in Aerospace and Defense Sectors
4.2.2 Growing Investment of Governments
4.2.3 Emergence of Advanced Applications
4.3 Quantum Computing Market Challenges

5.0 Quantum Computing Use Cases
5.1 Quantum Computing in Pharmaceuticals
5.2 Applying Quantum Technology to Financial Problems
5.3 Accelerate Autonomous Vehicles with Quantum AI
5.4 Car Manufacturers using Quantum Computing
5.5 Accelerating Advanced Computing for NASA Missions

6.0 Quantum Computing Value Chain Analysis
6.1 Quantum Computing Value Chain Structure
6.2 Quantum Computing Competitive Analysis
6.2.1 Leading Vendor Efforts
6.2.2 Start-up Companies
6.2.3 Government Initiatives
6.2.4 University Initiatives
6.2.5 Venture Capital Investments
6.3 Large Scale Computing Systems

7.0 Company Analysis7.1 D-Wave Systems Inc.7.1.1 Company Overview:7.1.2 Product Portfolio7.1.3 Recent Development7.2 Google Inc.7.2.1 Company Overview:7.2.2 Product Portfolio7.2.3 Recent Development7.3 Microsoft Corporation7.3.1 Company Overview:7.3.2 Product Portfolio7.3.3 Recent Development7.4 IBM Corporation7.4.1 Company Overview:7.4.2 Product Portfolio7.4.3 Recent Development7.5 Intel Corporation7.5.1 Company Overview7.5.2 Product Portfolio7.5.3 Recent Development7.6 Nokia Corporation7.6.1 Company Overview7.6.2 Product Portfolio7.6.3 Recent Developments7.7 Toshiba Corporation7.7.1 Company Overview7.7.2 Product Portfolio7.7.3 Recent Development7.8 Raytheon Company7.8.1 Company Overview7.8.2 Product Portfolio7.8.3 Recent Development7.9 Other Companies7.9.1 1QB Information Technologies Inc.7.9.1.1 Company Overview7.9.1.2 Recent Development7.9.2 Cambridge Quantum Computing Ltd.7.9.2.1 Company Overview7.9.2.2 Recent Development7.9.3 QC Ware Corp.7.9.3.1 Company Overview7.9.3.2 Recent Development7.9.4 MagiQ Technologies Inc.7.9.4.1 Company Overview7.9.5 Rigetti Computing7.9.5.1 Company Overview7.9.5.2 Recent Development7.9.6 Anyon Systems Inc.7.9.6.1 Company Overview7.9.7 Quantum Circuits Inc.7.9.7.1 Company Overview7.9.7.2 Recent Development7.9.8 Hewlett Packard Enterprise (HPE)7.9.8.1 Company Overview7.9.8.2 Recent Development7.9.9 Fujitsu Ltd.7.9.9.1 Company Overview7.9.9.2 Recent Development7.9.10 NEC Corporation7.9.10.1 Company Overview7.9.10.2 Recent Development7.9.11 SK Telecom7.9.11.1 Company Overview7.9.11.2 Recent Development7.9.12 Lockheed Martin Corporation7.9.12.1 Company Overview7.9.13 NTT Docomo Inc.7.9.13.1 Company Overview7.9.13.2 Recent Development7.9.14 Alibaba Group Holding Limited7.9.14.1 Company Overview7.9.14.2 Recent Development7.9.15 Booz Allen Hamilton Inc.7.9.15.1 Company Overview7.9.16 Airbus Group7.9.16.1 Company Overview7.9.16.2 Recent Development7.9.17 Amgen Inc.7.9.17.1 Company Overview7.9.17.2 Recent Development7.9.18 Biogen Inc.7.9.18.1 Company Overview7.9.18.2 Recent Development7.9.19 BT Group7.9.19.1 Company Overview7.9.19.2 Recent Development7.9.20 Mitsubishi Electric Corp.7.9.20.1 Company Overview7.9.21 Volkswagen AG7.9.21.1 Company Overview7.9.21.2 Recent Development7.9.22 KPN7.9.22.1 Recent Development7.10 Ecosystem Contributors7.10.1 Agilent Technologies7.10.2 Artiste-qb.net7.10.3 Avago Technologies7.10.4 Ciena Corporation7.10.5 Eagle Power Technologies Inc7.10.6 Emcore Corporation7.10.7 Enablence Technologies7.10.8 Entanglement Partners7.10.9 Fathom Computing7.10.10 Alpine Quantum Technologies GmbH7.10.11 Atom Computing7.10.12 Black Brane Systems7.10.13 Delft Circuits7.10.14 EeroQ7.10.15 Everettian Technologies7.10.16 EvolutionQ7.10.17 H-Bar Consultants7.10.18 Horizon Quantum Computing7.10.19 ID Quantique (IDQ)7.10.20 InfiniQuant7.10.21 IonQ7.10.22 ISARA7.10.23 KETS Quantum Security7.10.24 Magiq7.10.25 MDR Corporation7.10.26 Nordic Quantum Computing Group (NQCG)7.10.27 Oxford Quantum Circuits7.10.28 Post-Quantum (PQ Solutions)7.10.29 ProteinQure7.10.30 PsiQuantum7.10.31 Q&I7.10.32 Qasky7.10.33 QbitLogic7.10.34 Q-Ctrl7.10.35 Qilimanjaro Quantum Hub7.10.36 Qindom7.10.37 Qnami7.10.38 QSpice Labs7.10.39 Qu & Co7.10.40 Quandela7.10.41 Quantika7.10.42 Quantum Benchmark Inc.7.10.43 Quantum Circuits Inc. 
(QCI)7.10.44 Quantum Factory GmbH7.10.45 QuantumCTek7.10.46 Quantum Motion Technologies7.10.47 QuantumX7.10.48 Qubitekk7.10.49 Qubitera LLC7.10.50 Quintessence Labs7.10.51 Qulab7.10.52 Qunnect7.10.53 QuNu Labs7.10.54 River Lane Research (RLR)7.10.55 SeeQC7.10.56 Silicon Quantum Computing7.10.57 Sparrow Quantum7.10.58 Strangeworks7.10.59 Tokyo Quantum Computing (TQC)7.10.60 TundraSystems Global Ltd.7.10.61 Turing7.10.62 Xanadu7.10.63 Zapata Computing7.10.64 Accenture7.10.65 Atos Quantum7.10.66 Baidu7.10.67 Northrop Grumman7.10.68 Quantum Computing Inc.7.10.69 Keysight Technologies7.10.70 Nano-Meta Technologies7.10.71 Optalysys Ltd.

8.0 Quantum Computing Market Analysis and Forecasts 2021 - 2026
8.1.1 Quantum Computing Market by Infrastructure
8.1.1.1 Quantum Computing Market by Hardware Type
8.1.1.2 Quantum Computing Market by Application Software Type
8.1.1.3 Quantum Computing Market by Service Type
8.1.1.3.1 Quantum Computing Market by Professional Service Type
8.1.2 Quantum Computing Market by Technology Segment
8.1.3 Quantum Computing Market by Industry Vertical
8.1.4 Quantum Computing Market by Region
8.1.4.1 North America Quantum Computing Market by Infrastructure, Technology, Industry Vertical, and Country
8.1.4.2 European Quantum Computing Market by Infrastructure, Technology, and Industry Vertical
8.1.4.3 Asia-Pacific Quantum Computing Market by Infrastructure, Technology, and Industry Vertical
8.1.4.4 Middle East & Africa Quantum Computing Market by Infrastructure, Technology, and Industry Vertical
8.1.4.5 Latin America Quantum Computing Market by Infrastructure, Technology, and Industry Vertical

9.0 Conclusions and Recommendations

10.0 Appendix: Quantum Computing and Classical HPC
10.1 Next Generation Computing
10.2 Quantum Computing vs. Classical High-Performance Computing
10.3 Artificial Intelligence in High Performance Computing
10.4 Quantum Technology Market in Exascale Computing

For more information about this report visit https://www.researchandmarkets.com/r/omefq7

See more here:
The Worldwide Quantum Computing Industry will Exceed $7.1 Billion by 2026 - Yahoo Finance

Tech partnership to drive Finlands quantum computing project – ComputerWeekly.com

Finland's VTT Technical Research Centre has formed a strategic collaboration with tech startup IQM Group to build the country's first quantum computer.

The VTT-IQM co-innovation partnership aims to deliver a 50-qubit machine by 2024, drawing on international quantum technology expertise to augment Finland's home-grown quantum capabilities.

The partnership combines VTT's expertise in supercomputing and networking systems with IQM's capacity to deliver a hardware stack for a quantum computer while working with VTT to integrate critical technologies.

The financing element of the project saw IQM launch a new series A funding round in November. The Helsinki-headquartered company raised €39m in new capital in the funding round, bringing to €71m the total amount raised by IQM for quantum computing-related research and development (R&D) project activities to date.

State-owned VTT is providing financing for the project in the form of grants totalling €20.7m from the Finnish government.

Micronova, a national research and development infrastructure resource operated jointly by VTT and Aalto University, will provide the clean room environment to build the quantum computer and associated components at a dedicated facility at Espoo, southwest of Helsinki. The build will use Micronova's specialised input and micro- and nanotechnology expertise to guide the project.

The project marks the latest phase in cooperation between VTT and Aalto University. The two partners are also involved in a joint venture to develop a new detector for measuring energy quanta. As measuring the energy of qubits lies at the core of how quantum computers operate, the detector project has the potential to become a game-changer in quantum technology.

IQM's collaborative role with VTT emerged following an international public tender process. All partners expect to see robust advances in the quantum computing project in 2021, said Jan Goetz, CEO of IQM.

"This project is extremely prestigious for us," said Goetz. "We will be collaborating with leading experts from VTT, so this brings a great opportunity to work together in ways that help build the future of quantum technologies."

Finland's plan to build a 50-qubit machine stacks up reasonably well in terms of ambition and scope, compared with projects being run by global tech giants Google and IBM.

In 2019, Google disclosed that it had used its 53-qubit quantum computer to perform a calculation on an unidentified unique abstract problem that took 200 seconds to accomplish. Google, which hopes to build a one million-qubit quantum computer within 10 years, estimated that it would have taken the world's most powerful supercomputer, at the time, 10,000 years to resolve and complete the same calculation.

For its part, IBM is engaged in a milestone project to build a quantum computer comprising 1,000 qubits by 2023. IBM's largest current quantum computer contains 65 qubits.

The VTT-IQM project will proceed in three stages. The first will involve the construction of a five-qubit computer in 2021. The project will then be scaled up in 2022, in parallel with enhancement of the supporting infrastructure, to deliver the target 50-qubit machine in 2023.

"Our focus is more on how effectively we use the qubits, rather than the number," said Goetz. "We expect that, by 2024, we will be in a place where there is a high likelihood of simulating several real-world problems and starting to find solutions with a quantum computer."

For instance, conducting quantum material simulations for chemistry applications such as molecule design for new drugs, or the discovery of chemical reaction processes to achieve superior battery and fertiliser production.

The Finnish government's direct funding of the project is driven by a broader mission to further elevate the country's reputation as a European tech hub and computing superpower, said Mika Lintilä, Finland's economic affairs minister.

"We want Finland to harness its potential to become the European leader in quantum technologies," he added. "By having this resource, we can explore the opportunities that quantum computing presents to Finnish and European businesses. We see quantum computing as a dynamic tool to drive competitiveness across the whole of the European Union."

Within VTT, the quantum computing project will run parallel with connected areas of application, including quantum sensors and quantum-encryption algorithms. Quantum sensors are becoming increasingly important tools in medical imaging and diagnostics, while quantum-encryption algorithms are being deployed more widely to protect information networks.

Quantum computing-specific applications have the capacity to empower businesses to answer complex problems in chemistry and physics that cannot be solved by current supercomputers, said VTT CEO Antti Vasara.

"Investing in disruptive technologies like quantum computing means we are investing in our future ability to solve global problems and create sustainable growth," he said. "It's a machine that has immense real-world applications that can make the impossible possible. It can be used to simulate or calculate how materials or medicinal drugs work at the atomic level."

In the future, quantum technologies will play a significant role in the accelerated development and delivery of new and critical vaccines.

Finland's advance into quantum computing will further enhance Helsinki's status as a Nordic and European hub for world-leading innovative ecosystems dedicated to new technologies.

The project will also bolster IQM's capacity to build Europe's largest industrial quantum hardware team to support projects across Europe, said Goetz.

IQM established a strategic presence in Germany in 2020, following the German government's commitment to invest €2bn in a project to build two quantum computers.

"We are witnessing a boost in deep-tech funding in Europe," said Goetz. "Startups like us need access to three channels of funding to ensure healthy growth. We need research grants to stimulate new key innovations and equity investments to grow the company. We also require early adoption through acquisitions supported by the government. This combination of funding enables us to pool risk and create a new industry."

IQM's initial startup funding included a €3.3m grant from Business Finland, the government's innovation financing vehicle, in addition to a €15m equity investment from the EIC (European Innovation Council) Accelerator programme.

The €71m raised by IQM in 2020 ranks among the largest amounts of capital raised by a European deep-tech startup in such a short period.

See the rest here:
Tech partnership to drive Finlands quantum computing project - ComputerWeekly.com

Mind the (skills) gap: Cybersecurity talent pool must expand to take advantage of quantum computing opportunities – The Daily Swig

Experts at the CES 2021 conference stress importance of security education

The second age of quantum computing is poised to bring a wealth of new opportunities to the cybersecurity industry but in order to take full advantage of these benefits, the skills gap must be closed.

This was the takeaway of a discussion between two cybersecurity experts at the CES 2021 virtual conference last week.

Pete Totrorici, director of Joint Information Warfare at the Department of Defense (DoD) Joint Artificial Intelligence (AI) Center, joined Vikram Sharma, CEO of QuintessenceLabs, during a talk titled "AI and quantum cyber disruption".

Quantum computing is in its second age, according to Sharma, meaning that the cybersecurity industry will soon start to witness the improvements in encryption, AI, and other areas that have long been promised by the technology.


"Quantum-era cybersecurity will wield the power to detect and deflect quantum-era cyber-attacks before they cause harm," a report from IBM reads.

"It is the technology of our time, indeed," commented Sharma, who is based in Canberra, Australia.

QuintessenceLabs is looking at the application of advanced quantum technologies within the cybersecurity sphere, says Sharma, in particular in the realm of data protection.

Governments and large organizations have also invested in the quantum space in recent years, with the US, UK, and India all providing funding for research.

The Joint AI Center was founded in 2018 and was launched to transform the Department of Defense through the adoption of artificial intelligence, said Totrorici.

A subdivision of the US Armed Forces, the center is responsible for exploring the use of AI and AI-enhanced communication for use in real-world combat situations.

"Specifically, we're trying to identify how we employ AI solutions that will have a mission impact," he said.

"Across the department our day-to-day comprises everything from development strategy, policy, product development, industry engagement, and other outreach activities, but if I need to identify something that I think is my most significant challenge today, it's understanding the department's varied needs."

CES took place virtually in 2021 due to the coronavirus pandemic

In order to meet these needs, Totrorici said that relationships between the center, academia, industry, and government need to be established.

"There was a time when the DoD would go it alone, [however] those days are long gone."

"If we're going to solve problems like AI employment or quantum development, [it] is going to require partnerships," he said.

Totrorici and Sharma both agreed that while the future is certainly in quantum computing, the ever-widening cyber skills gap needs to be addressed to take advantage of its potential.

Indeed, these partnerships cannot be formed if there arent enough experts in the field.

Totrorici said: "Forefront in the mind of the DoD nowadays is, 'How do we cultivate and retain talent?'"

"I still think the United States does a great job of growing and building talent. Now the question becomes, 'Will we retain that talent, how do we leverage that talent going forward, and where are we building it?'"


The (ISC)2 2020 Workforce Study (PDF) found that the current cybersecurity workforce needs to grow by 89% in order to effectively protect against cyber threats.

Of the companies surveyed, the study also revealed that 64% currently have some shortage of dedicated cybersecurity staff.

"Here in Australia we've recently established what's called the Sydney Quantum Academy, and that is an overarching group that sits across four leading institutions that are doing some cutting-edge work in quantum in the country," said Sharma.

"One of the aims of that academy is to produce quantum-skilled folks broadly, but also looking specifically at the quantum cybersecurity area."

"So certainly, some small initiatives that [have] kicked off, but I think there's a big gap there that will need to be filled as we move forward."


Continue reading here:
Mind the (skills) gap: Cybersecurity talent pool must expand to take advantage of quantum computing opportunities - The Daily Swig

IonQ and South Korea’s Q Center Announce Three-Year Quantum Alliance – PRNewswire

COLLEGE PARK, Md., Jan. 19, 2021 /PRNewswire/ --IonQ, the leader in quantum computing, today announced a three-year alliance with South Korea's Quantum Information Research Support Center, or Q Center. The Q Center is an independent organization at Sungkyunkwan University (SKKU) focused on the creation of a rich research ecosystem in the field of quantum information science. The partnership will make IonQ's trapped-ion quantum computers available for research and teaching across South Korea.

IonQ's systems have the potential to solve the world's most complex problems with the greatest accuracy. To date, the company's quantum computers have a proven track record of outperforming all other available quantum hardware.

Researchers and students across South Korea will be able to immediately start running jobs on IonQ's quantum computers. This partnership will enable researchers, scientists, and students to learn, develop, and deploy quantum applications on one of the world's leading quantum systems.

"I am proud to see IonQ enter this alliance with Q Center," said Peter Chapman, CEO & President of IonQ. "IonQ's hardware will serve as the backbone for quantum research. Our technology will play a critical role not only in the advancement of quantum, but also in fostering the next generation of quantum researchers and developers in South Korea."

"Our mission is to cultivate and promote the advancement of quantum information research in South Korea," said SKKU Professor of SAINT (SKKU Advanced Institute of NanoTechnology), Yonuk Chong. "We believe IonQ has the most advanced quantum technology available, and through our partnership, we will be able to make tremendous strides in the advancement of the industry."

This alliance builds on IonQ's continued success. IonQ recently released a product roadmap to deploy rack mounted quantum computers by 2023, and achieve broad quantum advantage by 2025. IonQ also recently unveiled a new $5.5 million, 23,000 square foot Quantum Data Center in Maryland's Discovery District. IonQ has raised $84 million in funding to date, announcing new investment from Lockheed Martin, Robert Bosch Venture Capital GmbH (RBVC) and Cambium earlier this year. Previous investors include Samsung Electronics, Mubadala Capital, GV, Amazon, and NEA. The company's two co-founders were also recently named to the National Quantum Initiative Advisory Committee (NQIAC).

About IonQ
IonQ is the leader in quantum computing. By making our quantum hardware accessible through the cloud, we're empowering millions of organizations and developers to build new applications to solve the world's most complex problems in business, and across society. IonQ's unique approach to quantum computing is to start with nature: using individual atoms as the heart of our quantum processing units. We levitate them in space with electric potentials applied to semiconductor-defined electrodes on a chip, and then use lasers to do everything from initial preparation to final readout and the quantum gate operations in between. The unique IonQ architecture of random-access processing of qubits in a fully connected and modular architecture will allow unlimited scaling. The IonQ approach requires atomic physics, precision optical and mechanical engineering, and fine-grained firmware control over a variety of components. Leveraging this approach, IonQ provides both a viable technological roadmap to scale and the flexibility necessary to explore a wide range of application spaces in the near term. IonQ was founded in 2015 by Jungsang Kim and Christopher Monroe and their systems are based on foundational research at The University of Maryland and Duke University.

About SKKU
Sungkyunkwan University (SKKU) is a leading research university located in Seoul, South Korea. SKKU is known around the world for the quality of its research and invests heavily in research and development. SKKU has more than 600 years of history as a leading educational institution, and is guided by the founding principles of benevolence, righteousness, propriety, wisdom, and self-cultivation.

SOURCE IonQ

https://ionq.com

Read the original:
IonQ and South Korea's Q Center Announce Three-Year Quantum Alliance - PRNewswire

Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon – CircleID

This is the fourth in a multi-part series on cryptography and the Domain Name System (DNS).

One of the "key" questions cryptographers have been asking for the past decade or more is what to do about the potential future development of a large-scale quantum computer.

If theory holds, a quantum computer could break established public-key algorithms including RSA and elliptic curve cryptography (ECC), building on Peter Shor's groundbreaking result from 1994.

This prospect has motivated research into new so-called "post-quantum" algorithms that are less vulnerable to quantum computing advances. These algorithms, once standardized, may well be added into the Domain Name System Security Extensions (DNSSEC) thus also adding another dimension to a cryptographer's perspective on the DNS.

(Caveat: Once again, the concepts I'm discussing in this post are topics we're studying in our long-term research program as we evaluate potential future applications of technology. They do not necessarily represent Verisign's plans or position on possible new products or services.)

The National Institute of Standards and Technology (NIST) started a Post-Quantum Cryptography project in 2016 to "specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers."

Security protocols that NIST is targeting for these algorithms, according to its 2019 status report (Section 2.2.1), include: "Transport Layer Security (TLS), Secure Shell (SSH), Internet Key Exchange (IKE), Internet Protocol Security (IPsec), and Domain Name System Security Extensions (DNSSEC)."

The project is now in its third round, with seven finalists, including three digital signature algorithms, and eight alternates.

NIST's project timeline anticipates that the draft standards for the new post-quantum algorithms will be available between 2022 and 2024.

It will likely take several additional years for standards bodies such as the Internet Engineering Task Force (IETF) to incorporate the new algorithms into security protocols. Broad deployments of the upgraded protocols will likely take several years more.

Post-quantum algorithms can therefore be considered a long-term issue, not a near-term one. However, as with other long-term research, it's appropriate to draw attention to factors that need to be taken into account well ahead of time.

The three candidate digital signature algorithms in NIST's third round have one common characteristic: all of them have a key size or signature size (or both) that is much larger than for current algorithms.

Key and signature sizes are important operational considerations for DNSSEC because most of the DNS traffic exchanged with authoritative data servers is sent and received via the User Datagram Protocol (UDP), which has a limited response size.

Response size concerns were evident during the expansion of the root zone signing key (ZSK) from 1024-bit to 2048-bit RSA in 2016, and in the rollover of the root key signing key (KSK) in 2018. In the latter case, although the signature and key sizes didn't change, total response size was still an issue because responses during the rollover sometimes carried as many as four keys rather than the usual two.

Thanks to careful design and implementation, response sizes during these transitions generally stayed within typical UDP limits. Equally important, response sizes also appeared to have stayed within the Maximum Transmission Unit (MTU) of most networks involved, thereby also avoiding the risk of packet fragmentation. (You can check how well your network handles various DNSSEC response sizes with this tool developed by Verisign Labs.)
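For readers who want a rough, do-it-yourself check rather than the Verisign Labs tool, the following sketch uses the third-party dnspython package to send a DNSKEY query with the "DNSSEC OK" bit set and report the size of the reply. The resolver address and zone name are placeholders, and querying a recursive resolver is only an approximation of the authoritative responses discussed above.

```python
# Rough sketch (not the Verisign Labs tool): send a DNSKEY query with the
# DNSSEC OK bit set and report how large the UDP response is.
# Requires the third-party dnspython package and network access.
import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "8.8.8.8"      # assumed public resolver; replace with your own
NAME = "verisign.com"     # example zone to query

# want_dnssec=True enables EDNS and sets the DO bit so signatures are returned.
query = dns.message.make_query(NAME, dns.rdatatype.DNSKEY, want_dnssec=True)

response = dns.query.udp(query, RESOLVER, timeout=5)
size = len(response.to_wire())

# A truncated (TC) reply means the full answer did not fit in this UDP response.
print(f"{NAME} DNSKEY response: {size} bytes, truncated={bool(response.flags & 0x0200)}")
print("within a typical 1500-byte Ethernet MTU" if size <= 1500 else "exceeds a typical 1500-byte Ethernet MTU")
```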

The larger sizes associated with certain post-quantum algorithms do not appear to be a significant issue either for TLS, according to one benchmarking study, or for public-key infrastructures, according to another report. However, a recently published study of post-quantum algorithms and DNSSEC observes that "DNSSEC is particularly challenging to transition" to the new algorithms.

Verisign Labs offers the following observations about DNSSEC-related queries that may help researchers to model DNSSEC impact:

A typical resolver that implements both DNSSEC validation and qname minimization will send a combination of queries to Verisign's root and top-level domain (TLD) servers.

Because the resolver is a validating resolver, these queries will all have the "DNSSEC OK" bit set, indicating that the resolver wants the DNSSEC signatures on the records.

The content of typical responses by Verisign's root and TLD servers to these queries is given in Table 1 below. (In the table, <SLD>.<TLD> are the final two labels of a domain name of interest, including the TLD and the second-level domain (SLD); record types involved include A, Name Server (NS), and DNSKEY.)

For an A or NS query, the typical response, when the domain of interest exists, includes a referral to another name server. If the domain supports DNSSEC, the response also includes a set of Delegation Signer (DS) records providing the hashes of each of the referred zone's KSKs, the next link in the DNSSEC trust chain. When the domain of interest doesn't exist, the response includes one or more Next Secure (NSEC) or Next Secure 3 (NSEC3) records.

Researchers can estimate the effect of post-quantum algorithms on response size by replacing the sizes of the various RSA keys and signatures with those for their post-quantum counterparts. As discussed above, it is important to keep in mind that the number of keys returned may be larger during key rollovers.

Most of the queries from qname-minimizing, validating resolvers to the root and TLD name servers will be for A or NS records (the choice depends on the implementation of qname minimization, and has recently trended toward A). The signature size for a post-quantum algorithm, which affects all DNSSEC-related responses, will therefore generally have a much larger impact on average response size than will the key size, which affects only the DNSKEY responses.
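A back-of-the-envelope version of that estimation might look like the sketch below. The byte counts are approximate placeholders (an RSA-2048 signature is 256 bytes, while reported signature sizes for candidate schemes such as Falcon-512 and Dilithium2 are roughly 0.7 KB and 2.4 KB), and the fixed overhead is an assumption, not an authoritative protocol figure.

```python
# Back-of-the-envelope sketch of the estimation approach described above:
# swap the signature size in a typical signed referral response and compare
# the totals against a common EDNS UDP buffer size. All numbers are rough.
SIGNATURES_PER_RESPONSE = 1     # e.g. one RRSIG covering the DS RRset
FIXED_OVERHEAD = 300            # assumed: header, NS referral, DS records, etc.
UDP_BUDGET = 1232               # commonly recommended EDNS buffer size, in bytes

signature_sizes = {
    "RSA-2048 (current)":             256,
    "Falcon-512 (approx. PQ size)":   666,
    "Dilithium2 (approx. PQ size)":  2420,
}

for scheme, sig_bytes in signature_sizes.items():
    total = FIXED_OVERHEAD + SIGNATURES_PER_RESPONSE * sig_bytes
    verdict = "fits within" if total <= UDP_BUDGET else "may exceed"
    print(f"{scheme:32s} ~{total:5d} bytes ({verdict} a {UDP_BUDGET}-byte UDP budget)")
```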

Post-quantum algorithms are among the newest developments in cryptography. They add another dimension to a cryptographer's perspective on the DNS because of the possibility that these algorithms, or other variants, may be added to DNSSEC in the long term.

In my next post, I'll make the case for why the oldest post-quantum algorithm, hash-based signatures, could be a particularly good match for DNSSEC. I'll also share the results of some research at Verisign Labs into how the large signature sizes of hash-based signatures could potentially be overcome.

Read the previous posts in this six-part blog series:

Read more:
Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon - CircleID

2020 Rewind: SciTech discoveries of the year – McGill Tribune

2020 was a year characterized by uncertainty, despair, and drastic change. However, several scientific and technological achievements provide hope for the future.

Google stakes its claim on quantum supremacy

Google's quantum computer, Sycamore, is the first instance of such a device outcompeting a classical computer. While a classical computer reads information as bits valued at 0 or 1, a quantum computer's qubits can exist as both 0 and 1 at the same time, allowing for more data processing. Google announced that Sycamore performed a calculation in three minutes and 20 seconds that would otherwise have taken the most advanced classical computer 10,000 years. The applications of quantum computing are limitless, ranging from drug development to accurate weather forecasts to identifying which exoplanets likely harbour life. Although we may be five to 10 years away from having quantum computers that are useful for applications like these, Google's achievement is proof that such a future is possible.

Cave excavations push back arrival of first humans in the Americas by 15,000 years

New research published in Nature shows that humans may have arrived in the Americas as early as 30,000 years ago, 15,000 years earlier than current estimates. After painstaking excavations of the Chiquihuite Cave in Mexico, archaeologists uncovered nearly 2,000 stone tools and charcoal bits dating back 30,000 years. Further DNA analysis of the cave sediment, composed of plant and animal remains, corroborates these findings. The discovery challenges the commonly held theory that the Clovis people were the first inhabitants of the Americas 15,000 years ago. However, identifying factors of these mysterious early inhabitants, such as human DNA, were not found, suggesting they did not stay in the cave for long.

CRISPR-Cas9 edits genes in the human body

Doctors performed the first gene editing project in the human body using CRISPR-Cas9, a genome editing tool that can remove, add, or change parts of an organism's DNA sequence. The CRISPR method is based on a natural mechanism bacteria use to protect themselves from viral infections. Previous methods involved editing the genome after extracting DNA from the body. The treatment was administered to a patient with Leber's Congenital Amaurosis, an inherited form of blindness caused by a genetic mutation. Scientists deleted the harmful mutation by making two cuts on either side of the gene and allowing the ends of the DNA to reconnect. Although the patient's vision showed some improvement, scientists are hopeful that further research into gene editing technologies will allow a permanent fix. This is one of many development efforts to use CRISPR-Cas9 technology to treat different diseases.

Anti-aging drugs: Senolytics

Growing old is a fight that many of us resist, but cannot win. Anti-aging drugs called senolytics could potentially delay aging and treat a number of associated diseases, although they do not prolong one's life. In the body, cells that are damaged beyond repair enter a senescence phase in which they stop dividing and begin programmed death. However, sometimes senescent cells resist their fate, build up in our bodies as we age, and seriously harm surrounding cells. Scientists believe that they are linked to diseases caused by aging and that targeting these cells using senolytics could be the solution. Anti-aging drugs entered human trials in 2020 and are predicted to become available in less than five years.

Virti: Training surgeons and front-line workers using virtual reality

Virti is an immersive video platform that allows users to visualize a high-stress situation in virtual reality in order to train one's decision-making skills under pressure and access real-time feedback. As part of efforts to mitigate the spread of COVID-19 this year and help train clinicians while avoiding in-person contact, Virti designed an AI-powered virtual patient that can role-play life-like scenarios. Their COVID-19 modules also teach frontline workers how to put on personal protective equipment, administer treatments, and ventilate patients. A company study by Virti found that their approaches increase knowledge retention by 230 per cent compared to training in person.

See the rest here:
2020 Rewind: SciTech discoveries of the year - McGill Tribune