Reality Of Metrics: Is Machine Learning Success Overhyped? – Analytics India Magazine

In one of the most revealing research papers written in recent times, researchers from Cornell Tech and Facebook AI question the hype around the success of machine learning. They argue, and even demonstrate, that the reported progress appears to be overstated: so-called cutting-edge benchmark methods perform similarly to one another even when they are a decade apart. In other words, the authors believe that metric learning algorithms have not made spectacular progress.

In this work, the authors demonstrate the importance of assessing algorithms more diligently and show how a few practices can make reported ML success better reflect reality.

Over the past decade, deep convolutional networks have made tremendous progress. Their applications in computer vision are almost everywhere, from classification to segmentation to object detection and even generative models. But has the metric evaluation used to track this progress been leakproof? Were the evaluation techniques themselves unaffected by the improvement in deep learning methods?

The goal of metric learning is to map data to an embedding space where similar data are close together and dissimilar data are far apart. The authors begin with the notion that deep networks have had a similar effect on metric learning; the combination of the two is known as deep metric learning.
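
To make the objective concrete, here is a minimal, hedged sketch of deep metric learning using a triplet loss in PyTorch; the network size, margin, and random tensors are illustrative stand-ins, not the setup used in the paper.

```python
# Minimal sketch of deep metric learning with a triplet loss (illustrative only;
# the embedding size, margin, and random data are assumptions, not the paper's setup).
import torch
import torch.nn as nn

embedder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
loss_fn = nn.TripletMarginLoss(margin=0.2)

anchor = embedder(torch.randn(16, 128))    # embeddings of reference samples
positive = embedder(torch.randn(16, 128))  # samples from the same class as the anchor
negative = embedder(torch.randn(16, 128))  # samples from a different class

# The loss pulls anchor/positive pairs together and pushes negatives at least
# `margin` further away in the embedding space.
loss = loss_fn(anchor, positive, negative)
loss.backward()
```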

The authors then examined flaws in the current research papers, including the problem of unfair comparisons and the weaknesses of commonly used accuracy metrics. They then propose a training and evaluation protocol that addresses these flaws and then run experiments on a variety of loss functions.

For instance, the authors note that one benchmark paper from 2017 used ResNet50 and claimed huge performance gains, while the competing methods it compared against used GoogleNet, which has significantly lower initial accuracy. The authors conclude that much of the performance gain likely came from the choice of network architecture, not the proposed method. Practices such as these can put ML in the headlines, but when we look at how many of these state-of-the-art models are actually deployed, the reality is not that impressive.

The authors underline the importance of keeping the parameters constant if one has to prove that a certain new algorithm outperforms its contemporaries.

To carry out the evaluations, the authors introduce a set of standardized experimental settings.

As shown in the paper's plots, the observed trends are not far from those of previous related work, which indicates that those who claim a dramatic improvement might not have been fair in their evaluation.

If a paper attempts to explain the performance gains of its proposed method, and it turns out that those performance gains are non-existent, then their explanation must be invalid as well.

The results show that when hyperparameters are properly tuned via cross-validation, most methods perform similarly to one another. This work, believe the authors, will lead to more investigation into the relationship between hyperparameters and datasets, and the factors related to particular dataset/architecture combinations.
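
For readers unfamiliar with the practice the authors advocate, the sketch below shows generic cross-validated hyperparameter tuning with scikit-learn; the estimator, dataset, and parameter grid are placeholders rather than the paper's metric-learning search space.

```python
# Illustrative sketch of tuning hyperparameters via cross-validation, the practice
# the authors argue should precede any comparison of methods. The estimator and
# parameter grid are placeholders, not the paper's actual search space.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]},
    cv=5,                 # 5-fold cross-validation on the training data only
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```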

According to the authors, this work exposes several flaws in how progress in metric learning has been measured and reported.

The authors conclude that if proper machine learning practices are followed, the results of metric learning papers will better reflect reality and can lead to better work in impactful domains like self-supervised learning.


More here:
Reality Of Metrics: Is Machine Learning Success Overhyped? - Analytics India Magazine

Key Ways Machine Learning is Being Used in Software Testing – TechiExpert.com


The impact that the software development industry has on the world is unmatched by any other industry; software is one of the cornerstones of modern-day society, no matter what industry or niche you're looking at. This will only continue as time goes on.

However, with the rise of AI, Big Data, and machine learning, it will be interesting to see how these technologies affect the software development industry, and in particular the software testing part of the process.

Today, we're going to explore exactly why and how machine learning is being used, and will be used, in software testing, and what benefits it can bring to these services and procedures. Let's get into it.

When you test software manually, there are plenty of problems that affect the process. Software testing is time-consuming and expensive, and productivity is low. You also need a specialist software tester to make sure that everything is handled properly. This, of course, invites the risk of human error, which, in some cases, could be incredibly costly.

"On the other hand, when you train a machine to test software, once the machine has learned what it's supposed to be doing, it's incredibly fast at testing and will do what a human tester can do in a fraction of the time. This saves not only time, but also the money that would be spent on a software tester," shares Nick Denning, a business writer.

With a manual tester, you'll have someone sitting in your office or working remotely, testing your app or program. They'll go through your software, sample different features, and test them. For bigger applications, you may have a group of testers so you can test multiple users at once.

With machine learning testing, however, you can run 1,000 or more test instances at the same time, meaning you can test the network and user strain of multiple simultaneous users of your software, or simply try lots of different situations to see if bugs appear.
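
As a rough, hypothetical illustration of that scale, the snippet below runs a thousand simulated test sessions concurrently; the check_endpoint function and its failure rule are invented for the example.

```python
# Minimal sketch of running many test instances concurrently (an illustration of
# the "thousands of simulated users" idea; check_endpoint and its failure rule
# are hypothetical, not from the article).
from concurrent.futures import ThreadPoolExecutor

def check_endpoint(user_id: int) -> bool:
    # Placeholder for a real request against the system under test.
    return user_id % 97 != 0   # pretend a small fraction of sessions fail

with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(check_endpoint, range(1000)))

print(f"{results.count(False)} of {len(results)} simulated sessions failed")
```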

When you're testing your software manually, whether you're doing it yourself or using a dedicated software tester, one key problem is that your tester might not be able to pick up glitches that they're not used to. This can cause glitches to slip through the cracks and end up in your final product.

When you're using machine learning technologies, these are tools designed, by their very nature, to be as accurate as possible. "Every single time you run them, they are going to go out of their way to deliver the results you've trained them to find. This is true whether you're purchasing machine learning software or training your own," explains Michael Taylor, a tech writer.

As we mentioned above, software testing is a slow process when it's carried out by a person or a small team. It can take weeks, or even months in extreme cases, depending on the size of the project, and this can consume a huge amount of your budget. When machine learning is involved, you only need one system to carry out all the tasks from start to finish.

This can save you a great deal of time and will spare you from mundane tasks like checking data logs and hunting for areas of code that are producing errors. It also means you can automate a large number of steps in your testing procedures.

Since machine learning applications learn every time they run, once they've found an error and addressed it, they can learn that the error has been dealt with and simultaneously run thousands of tests to make sure that nothing else was affected, all with this information still in mind.

This delivers more accurate results, more actionable data, and faster testing times that can help you get your software project to its final stages quicker than ever.

See the article here:
Key Ways Machine Learning is Being Used in Software Testing - Techiexpert.com - TechiExpert.com

What Is Differential Deep Learning? Through The Lens Of Trading – Analytics India Magazine

The explosion of the internet, in conjunction with the success of neural networks, brought the world of finance closer to more exotic approaches. Deep learning today is one such technique that is being widely adopted to cut down losses and generate profits.

When gut instincts do not do the job, mathematical methods come into play. Differential equations, for instance, can be used to represent a dynamic model. The approximation of pricing functions is a persistent challenge in quantitative finance. By the early 1980s, researchers were already experimenting with Taylor Expansions for stochastic volatility models.

For example, suppose company A wants to buy a commodity, say oil, from company B at a future date but is unsure of the future price. Company A therefore makes a deal with B that, no matter what the price of oil is in the future, B will sell it to A at the price fixed in their contract.

In the world of finance, this is a watered-down version of derivatives trading. Derivatives are securities built on underlying assets. In the above case, company A predicts a rise in price, and company B predicts a fall in price. Both companies are making a bet on future prices and agree upon a price that cuts down their losses or can even bring profits (if A sells after a price rise). So how do these companies arrive at a certain price, and how do they predict the future price?

Taking the same example of derivatives trading, researchers at Danske Bank in Denmark have explored the implications of differential deep learning.

Deep learning offers the much-needed analytic speed that is necessary for approximating volatile markets. Machine learning tools can handle the high dimensionality (many parameters) of a market and help resolve the computational bottlenecks.

Differential machine learning is an extension of supervised learning, where ML models are trained on differentials of labels to inputs.

In the context of financial derivatives and risk management, pathwise differentials are popularly computed with automatic adjoint differentiation (AAD). AAD is an algorithm to calculate derivative sensitivities, very quickly. Nothing more, nothing less. AAD is also known in the field of machine learning under the name back-propagation, or simply backprop.

Differential machine learning, combined with AAD, wrote the authors, provides extremely effective pricing and risk approximations. They say it can produce fast pricing analytics, effectively compute risk management metrics, and even simulate hedging strategies.

This work compares differential machine learning to data augmentation in computer vision, where multiple labelled images are produced from a single one, by cropping, zooming, rotating or recolouring.

Data augmentation not only extends the training set but also encourages the machine learning model to learn important invariances (features that stay the same). Similarly, derivatives labels not only increase the amount of information in the training set but also encourage the model to learn the shape of the pricing function. Derivatives from feedforward networks form another neural network, efficiently computing risk sensitivities in the context of pricing approximation. Since the adjoints form a second network, one can use them for training as well as expect significant performance gain.
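
Below is a minimal sketch of that training idea in PyTorch, assuming the pathwise labels and their AAD-style differentials are already available; the data, network size, and loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of differential training: fit a network to labels *and* to the
# labels' derivatives with respect to the inputs. Data, architecture, and loss
# weighting are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 64), nn.Softplus(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(256, 4, requires_grad=True)        # simulated market states
y = x.pow(2).sum(dim=1, keepdim=True).detach()     # stand-in pathwise payoffs
dy_dx = (2 * x).detach()                           # their differentials, as AAD would supply

for _ in range(200):
    opt.zero_grad()
    pred = net(x)
    # Differentiating the network with respect to its inputs yields the "twin"
    # network's output: the model's own risk sensitivities.
    pred_grad, = torch.autograd.grad(pred.sum(), x, create_graph=True)
    loss = nn.functional.mse_loss(pred, y) + nn.functional.mse_loss(pred_grad, dy_dx)
    loss.backward()
    opt.step()
```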

Risk sensitivities converge considerably slower than values and often remain blatantly wrong, even with hundreds of thousands of examples. The authors resolve these problems by training ML models on datasets augmented with differentials of the labels with respect to the inputs.

This simple idea, assert the authors, along with an adequate training algorithm, allows ML models to learn accurate approximations even from small datasets, making machine learning viable in the context of trading.

Differential machine learning learns better from data alone; the vast amount of information contained in the differentials plays a role similar to, and often more effective than, manual adjustments based on contextual information.

The researchers posit that the unreasonable effectiveness of differential ML applies in situations where high-quality first-order derivatives with respect to the training inputs are available, and in complex computational tasks such as the pricing and risk approximation of complex derivatives trades.

Differentials inject meaningful additional information, eventually resulting in better results with smaller datasets. Learning effectively from small datasets is critical in the context of regulations, where the pricing approximation must be learned quickly, and the expense of a large training set cannot be afforded.

The results from the experiments by Danske Bank's researchers show that learning the correct shape from differentials is crucial to the performance of regression models, including neural networks.

Know more about differential deep learning here.


Originally posted here:
What Is Differential Deep Learning? Through The Lens Of Trading - Analytics India Magazine

The Librarians of the Future Will Be AI Archivists – Popular Mechanics


In July 1848, L'illustration, a French weekly, printed the first photo to appear alongside a story. It depicted Parisian barricades set up during the city's June Days uprising. Nearly two centuries later, photojournalism has bestowed libraries with legions of archival pictures that tell stories of our past. But without a methodical approach to curate them, these historical images could get lost in endless mounds of data.

That's why the Library of Congress in Washington, D.C. is undergoing an experiment. Researchers are using specialized algorithms to extract historic images from newspapers. While digital scans can already compile photos, these algorithms can also analyze, catalog, and archive them. That's resulted in 16 million newspaper pages' worth of images that archivists can sift through with a simple search.


Ben Lee, innovator-in-residence at the Library of Congress, and a graduate student studying computer science at the University of Washington, is spearheading what's called Newspaper Navigator. His dataset comes from an existing project called Chronicling America, which compiles digital newspaper pages between 1789 and 1963.

He noticed that the library had already embarked on a crowdsourcing journey to turn some of those newspaper pages into a searchable database, with a focus on content relating to World War I. Volunteers could mark up and transcribe the digital newspaper pages, something that computers aren't always so great at. In effect, what they had built was a perfect set of training data for a machine learning algorithm that could automate all of that grueling, laborious work.

"Volunteers were asked to draw the bounding boxes such that they included things like titles and captions, and so then the system would...identify that text," Lee tells Popular Mechanics. "I thought, let's try to see how we can use some emerging computer science tools to augment our abilities and how we use collections."

In total, it took about 19 days' worth of processing time for the system to sift through all 16,358,041 newspaper pages. Of those, the system only failed to process 383 pages.


Newspaper Navigator builds upon the same technology that engineers used to create Google Books. It's called optical character recognition, or OCR for short, and it's a class of machine learning algorithms that can translate images of typed or handwritten symbols, like words on a scanned magazine page, into digital, machine-readable text.
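
As a rough illustration of what OCR does (and not the pipeline Google Books or Newspaper Navigator actually uses), a scanned page can be turned into searchable text with the open-source Tesseract engine; the file name here is hypothetical.

```python
# Minimal OCR sketch using the open-source Tesseract engine via pytesseract.
# This illustrates OCR in general, not the pipeline Google Books or Newspaper
# Navigator actually uses; "page.png" is a hypothetical scanned page.
from PIL import Image
import pytesseract

scan = Image.open("page.png")
text = pytesseract.image_to_string(scan)   # image of typed text -> machine-readable string

print("spies" in text.lower())             # now the page can be searched like any text file
```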

At Popular Mechanics, we have an archive of almost all of our magazines on Google Books, dating back to January 1905. Because Google has used OCR to optimize those digital scans, it's simple to go through and search our entire archive for mentions of, say, "spies," and pull up every matching page.


But images are something else entirely.

Using deep learning, Lee built an object detection model that could isolate seven different types of content: photographs, illustrations, maps, comics, editorial cartoons, headlines, and advertisements. So if you want to find photos specifically of soldiers in trenches, you might search "trenches" in Newspaper Navigator and get results instantly.
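
As a hedged illustration of this detection step (not Lee's released code), the sketch below runs a generic pretrained object detector and keeps only confident boxes; Newspaper Navigator instead fine-tunes a detector on its seven newspaper categories, and the file name and threshold here are assumptions.

```python
# Illustrative sketch of the object-detection step: a pretrained detector proposes
# boxes with class labels and confidence scores. Newspaper Navigator fine-tunes on
# its own seven newspaper categories; the COCO-pretrained model, file name, and
# threshold here are stand-ins for illustration only.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
page = to_tensor(Image.open("newspaper_page.png").convert("RGB"))  # hypothetical scan

with torch.no_grad():
    detections = model([page])[0]

# Keep only confident detections, mirroring the confidence scores Lee exposes
# to flag likely miscategorizations.
keep = detections["scores"] > 0.7
print(detections["labels"][keep], detections["boxes"][keep].shape)
```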

Before, you'd have to sift through potentially thousands of pages' worth of data. This breakthrough will be extremely empowering for archivists, and Lee has open-sourced all of the code that he used to build his deep-learning model.

"Our hope is actually that people who have collections of newspapers...might be able to use the the code that I'm releasing, or do their own version of this at different scales," Lee says. One day your local library could use this sort of technology to help digitize and archive the history of your local community.


This is not to say that the system is perfect. "There definitely are cases in which the system will miscategorize, say, an illustration as a cartoon or something like that," Lee says. But he has accounted for these false positives through confidence scores that indicate the likelihood that a given piece of media is a cartoon or a photograph.

"One of my goals is to use this project...to highlight some of the issues around algorithmic bias."

Lee also says that, despite his best efforts, these kinds of systems will always encode some human bias. But to reduce any heavy-handedness, Lee tried to focus on emphasizing the classes of images (cartoon versus advertisement) rather than what's actually shown in the images themselves. Lee believes this should reduce the instances of the system attempting to make judgement calls about the dataset. That should be left up to the curator, he says.

"I think a lot of these questions are very very important ones to consider and one of my goals is to use this project as an opportunity to highlight some of the issues around algorithmic bias," Lee says. "It's easy to assume that machine learning solves all the problemsthat's a fantasybut in the this project, I think it's a real opportunity to emphasize that we need to be careful how we use these tools."

The rest is here:
The Librarians of the Future Will Be AI Archivists - Popular Mechanics

Preventing procurement fraud in local government with the help of machine learning – Open Access Government

Half of the world's organisations experienced economic crime between 2016 and 2018, according to PwC's Global Economic Crime and Fraud survey. Government organisations are among these and by no means immune from crime. And yet, the perpetrators are not always remote cyber criminals. The most worrying news is that half of fraud is carried out by agents within the organisation. For many rogue employees, their method is procurement fraud.

Though businesses may often misjudge the cost of fraud, the lack of a definable victim aside from the UK taxpayer means that the figures for fraud that hits public services are unclear. Figures for procurement fraud specifically are even more questionable, though estimates suggest that local councils quashed malicious efforts to steal over £300 million through procurement fraud in 2017/18. The amount undetected by anti-fraud investigators may dwarf that.

The importance of public sector funds not being lost to fraud or wasted has been starkly highlighted in recent weeks by the COVID-19 outbreak. These funds, always in high demand, are even more precious in emergency situations like the one we're facing at the moment.

Transparency is a buzzword within the public sector. Citizens are looking for clarity on the value the public sector provides to taxpayers and are demanding more when it comes to services. It is increasingly clear to cash-strapped governments that preventing and detecting fraud is crucial for achieving their goals, given that the alternative of raising taxes is never popular with the general public. The imperative is to safeguard taxpayers' funds and for the public sector to do everything in its power to ensure that these funds are spent on crucial services.

Local government is a particular risk area for procurement fraud. Local governments, including city management, spend a lot of money, particularly because many now outsource significant amounts of service provision. They may also lack expertise in contracting and commissioning, and may, therefore, be an easy target for fraudsters. The procurement process is an obvious target.

Procurement fraud can occur at any stage of the procurement lifecycle, which makes it extremely complex to detect and prevent. Analysis suggests that for government organisations, procurement fraud is most likely to occur at the payments processing stage, although vendor selection and bids are also vulnerable stages.

There are a number of ways in which procurement fraud can occur. Some involve collusion between employees and contractors, and others involve external fraudsters taking advantage of a vulnerability in the system. Organisations can also make themselves more vulnerable to fraud by not ensuring that employees follow proper procedures for procurement. One possible problem, for example, is dividing invoices up into smaller chunks to avoid particular thresholds. This is usually done in all innocence as a way to make procurement simpler, but it also leaves the organisation open to abuse because the proper checks are not made.
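
As a simple illustration of how such invoice splitting can be surfaced, the sketch below groups payments by vendor and day and flags clusters of sub-threshold invoices; the column names and the £10,000 threshold are hypothetical.

```python
# Minimal sketch of a rules-style check for invoice splitting: several invoices
# from the same vendor on the same day that together exceed the approval threshold.
# Column names and the £10,000 threshold are hypothetical.
import pandas as pd

THRESHOLD = 10_000
invoices = pd.DataFrame({
    "vendor": ["Acme", "Acme", "Acme", "Birch Ltd"],
    "date": ["2020-03-02", "2020-03-02", "2020-03-02", "2020-03-02"],
    "amount": [9_500, 9_800, 9_700, 4_000],
})

grouped = invoices.groupby(["vendor", "date"])["amount"].agg(["count", "sum"])
suspicious = grouped[(grouped["count"] > 1) & (grouped["sum"] > THRESHOLD)]
print(suspicious)   # flags Acme's three same-day invoices for review
```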

But if procurement fraud is on the rise, so too is counter-fraud work. Governments around the world have strategies and are monitoring the situation carefully. Many have increased the checks put on procurement processes and have also provided more information to employees and potential contractors about how to spot fraud and potential fraud.

There is growing understanding that rules-based systems are not enough to stop fraud: they may help to detect it after the event, but they are unlikely to prevent it, even in combination with systems to reduce opportunity. Analytics-based systems, however, can both improve detection of fraud and start to predict it. They are often based on artificial intelligence (AI), which learns from previous cases and can then detect patterns that may be associated with fraud, or process breaches that may be a problem.
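
A minimal sketch of the analytics-based idea follows, using an off-the-shelf anomaly detector on invented payment features; real systems would use far richer data and pass the flagged cases to investigators.

```python
# Sketch of the analytics-based approach: an unsupervised model scores each payment
# and unusual ones are flagged for the fraud team to review. The features and data
# here are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per payment: amount, days from order to payment, number of edits to the invoice.
normal = rng.normal(loc=[5_000, 30, 1], scale=[1_500, 5, 1], size=(500, 3))
odd = np.array([[49_500, 2, 9]])           # a large, rushed, heavily edited payment
payments = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(payments)
flags = detector.predict(payments)          # -1 marks anomalies to investigate
print(np.where(flags == -1)[0])
```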

Detecting anomalies, however, is just one step in the process of preventing fraud. It's only an indicator, and all indicators can do is indicate. In fraud detection, indicators like anomalies highlight an area for further investigation. Then it's over to the fraud, audit and compliance teams to take a look.

Traditional fraud detection has often taken months to complete. Time-consuming audits could detect fraud, but these could begin months after the event, and may only occur once a year. Fraud detection systems based on analytics can spot fraud in a fraction of the time, flagging anomalies to investigation squads in real-time. The actions of those teams can then halt fraud in its tracks, before it takes place, or provide rapid evidence on the perpetrator. Public organisations that put these new technologies in place can rest assured that, with machine learning, fraud detection is not only smart, efficient and speedy, but a frightening prospect for those participating in procurement fraud.


Read this article:
Preventing procurement fraud in local government with the help of machine learning - Open Access Government

Deep Learning: An Overview in Scientific Applications – Analytics Insight

Over the last few years, deep learning has seen a huge uptake in popularity in business and scientific applications alike. It is defined as a subset of artificial intelligence that leverages computer algorithms to generate autonomous learning from data and information. Deep learning is prevalent across many scientific disciplines, from high-energy particle physics and weather and climate modeling to precision medicine and more. The technology has come a long way since the 1940s, when scientists developed a computer model organized in interconnected layers, like neurons in the human brain.

Deep learning signifies substantial progress in the ability of neural networks to automatically create problem-solving features and capture highly complex data distributions. Deep neural networks are now the state-of-the-art machine learning models across diverse areas, including image analysis and natural language processing, among others, and are extensively deployed in academia and industry.

Developments in this technology have vast potential for scientific applications, including medical imaging, medical data analysis, and diagnostics. In scientific settings, data analysis is understood as identifying the underlying mechanisms that give rise to patterns in the data. When this is the goal, dimensionality reduction and clustering are simple, unsupervised, yet highly effective techniques for revealing hidden properties in the data.
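
As a small, generic illustration of that unsupervised workflow (using a toy dataset in place of real scientific measurements):

```python
# Minimal sketch of the unsupervised workflow described above: reduce dimensionality
# with PCA, then cluster, to surface structure hidden in high-dimensional measurements.
# The digits dataset stands in for scientific data.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X, _ = load_digits(return_X_y=True)        # 64-dimensional measurements
X_2d = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X_2d)
print(X_2d.shape, labels[:10])
```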

In a report titled A Survey of Deep Learning for Scientific Discovery, former Google CEO Eric Schmidt and Google AI researcher Maithra Raghu put together a comprehensive overview of deep learning techniques and their application to scientific research. According to their guide, deep learning algorithms have been very effective in the processing of visual data. They also describe convolutional neural networks (CNNs) as the most prominent family of neural networks and as very useful for working with any kind of image data.

In scientific contexts, one of the best applications of CNNs is medical image analysis. Medical image interpretation has mostly been performed by human experts such as radiologists and physicians. However, owing to large variations in pathology and the potential fatigue of human experts, researchers have started capitalizing on computer-assisted interventions. Many deep learning algorithms are already in use to analyze CT scans and X-rays and assist in the diagnosis of diseases. Recently, during the crisis caused by COVID-19, scientists have started using CNNs to detect signs of the disease in chest X-rays.
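
The sketch below shows the generic transfer-learning recipe commonly used for such image-analysis tasks: freeze a pretrained CNN and retrain only its final layer for a new two-class problem. It is a pattern for illustration, not any specific published diagnostic model.

```python
# Illustrative sketch of transfer learning for medical image analysis: take a
# pretrained CNN and retrain its final layer for a new task (e.g. normal vs.
# abnormal chest X-ray). A generic pattern, not any specific published model.
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(weights="DEFAULT")
for param in model.parameters():
    param.requires_grad = False                    # keep the pretrained feature extractor fixed
model.fc = nn.Linear(model.fc.in_features, 2)      # new head for the two-class task
# The model would then be trained on labelled X-ray images with a standard cross-entropy loss.
```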

Deep learning algorithms are also effective in natural language processing (NLP), which deals with building computational algorithms to automatically analyze and represent human language. Today, NLP-based systems have enabled a wide variety of applications and are used to train machines to perform complex language-related tasks like machine translation and dialogue generation.

Moreover, deep learning models were originally inspired by biological neural networks: they comprise artificial neurons, or nodes, connected to a web of other nodes through edges, allowing these artificial neurons to collect and send information to one another.

Read the original here:
Deep Learning: An Overview in Scientific Applications - Analytics Insight

How Do Quantum Computers Work? – ScienceAlert

Quantum computers perform calculations based on the probability of an object's state before it is measured - instead of just 1s or 0s - which means they have the potential to process exponentially more data compared to classical computers.

Classical computers carry out logical operations using the definite position of a physical state. These are usually binary, meaning its operations are based on one of two positions. A single state - such as on or off, up or down, 1 or 0 - is called a bit.

In quantum computing, operations instead use the quantum state of an object to produce what's known as a qubit. These states are the undefined properties of an object before they've been detected, such as the spin of an electron or the polarisation of a photon.

Rather than having a clear position, unmeasured quantum states occur in a mixed 'superposition', not unlike a coin spinning through the air before it lands in your hand.

These superpositions can be entangled with those of other objects, meaning their final outcomes will be mathematically related even if we don't know yet what they are.
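
A small NumPy sketch can make these ideas concrete: amplitudes determine measurement probabilities, and an entangled pair gives perfectly correlated outcomes. This simulates only the arithmetic; it is not code for a real quantum device.

```python
# NumPy sketch of a qubit in superposition and a two-qubit entangled (Bell) state,
# with measurement probabilities read off from the amplitudes. A simulation of the
# math only, not code for real quantum hardware.
import numpy as np

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

superposition = (zero + one) / np.sqrt(2)          # the "spinning coin": 50/50 on measurement
print(np.abs(superposition) ** 2)                  # [0.5, 0.5]

bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)   # entangled pair
probs = np.abs(bell) ** 2                          # outcomes 00, 01, 10, 11
print(probs)                                       # [0.5, 0, 0, 0.5] -> the two results always agree
```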

The complex mathematics behind these unsettled states of entangled 'spinning coins' can be plugged into special algorithms to make short work of problems that would take a classical computer a long time to work out... if they could ever calculate them at all.

Such algorithms would be useful in solving complex mathematical problems, producing hard-to-break security codes, or predicting multiple particle interactions in chemical reactions.

Building a functional quantum computer requires holding an object in a superposition state long enough to carry out various processes on it.

Unfortunately, once a superposition meets with materials that are part of a measured system, it loses its in-between state in what's known as decoherence and becomes a boring old classical bit.

Devices need to be able to shield quantum states from decoherence, while still making them easy to read.

Different approaches are tackling this challenge from different angles, whether it's using more robust quantum processes or finding better ways to check for errors.

For the time being, classical technology can manage any task thrown at a quantum computer. Quantum supremacy describes the ability of a quantum computer to outperform its classical counterparts.

Some companies, such as IBM and Google, claim we might be close, as they continue to cram more qubits together and build more accurate devices.

Not everybody is convinced that quantum computers are worth the effort. Some mathematicians believe there are obstacles that are practically impossible to overcome, putting quantum computing forever out of reach.

Time will tell who is right.


See the original post here:
How Do Quantum Computers Work? - ScienceAlert

Seeqc UK Awarded £1.8M In Grants To Advance Quantum Computing Initiatives – Business Wire

LONDON--(BUSINESS WIRE)--Seeqc, the Digital Quantum Computing company, today announced its UK team has been selected to receive two British grants totaling £1.8 million from Innovate UK's Industrial Strategy Challenge Fund.

Quantum Foundry

The first grant, of £800,000 from Innovate UK, is part of a £7M project dedicated to advancing the commercialization of superconducting technology. Its goal is to bring quantum computing closer to business-applicable solutions, cost-efficiently and at scale.

Seeqc UK is joining six UK-based companies and universities in a consortium to collaborate on the initiative. This is the first concerted effort to bring all leading experts across industry and academia together to advance the development of quantum technologies in the UK.

Other grant recipients include Oxford Quantum Circuits, Oxford Instruments, Kelvin Nanotechnology, University of Glasgow and the Royal Holloway University of London.

Quantum Operating System

The second grant, of £1 million, is part of a £7.6 million, seven-organization consortium dedicated to advancing the commercialization of quantum computers in the UK by building a highly innovative quantum operating system. A quantum operating system, Deltaflow.OS, will be installed on all quantum computers in the UK in order to accelerate the commercialization and collaboration of the British quantum computing community. The universal operating system promises to greatly increase the performance and accessibility of quantum computers in the UK.

Seeqc UK is joined by other grant recipients, Riverlane, Hitachi Europe, Universal Quantum, Duality Quantum Photonics, Oxford Ionics, and Oxford Quantum Circuits, along with UK-based chip designer, ARM, and the National Physical Laboratory.

Advancing Digital Quantum Computing

Seeqc owns and operates a multi-layer superconductive electronics chip fabrication facility, which is among the most advanced in the world. The foundry serves as a testing and benchmarking facility for Seeqc and the global quantum community to deliver quantum technologies for specific use cases. This foundry and expertise will be critical to the success of the grants. Seeqc's Digital Quantum Computing solution is designed to manage and control qubits in quantum computers in a way that is cost-efficient and scalable for real-world business applications in industries such as pharmaceuticals, logistics and chemical manufacturing.

"Seeqc's participation in these new industry-leading British grants accelerates our work in making quantum computing useful, commercially and at scale," said Dr. Matthew Hutchings, chief product officer and co-founder at Seeqc, Inc. "We are looking forward to applying our deep expertise in design, testing and manufacturing of quantum-ready superconductors, along with our resource-efficient approach to qubit control and readout, to this collaborative development of quantum circuits."

"We strongly support the Deltaflow.OS initiative and believe Seeqc can provide a strong contribution to both consortiums' work and advance quantum technologies from the lab and into the hands of businesses via ultra-focused and problem-specific quantum computers," continued Hutchings.

Seeqc's solution combines classical and quantum computing to form an all-digital architecture through a system-on-a-chip design that utilizes 10-40 GHz superconductive classical co-processing to address the efficiency, stability and cost issues endemic to quantum computing systems.

Seeqc is receiving the nearly $2.3 million in grant funding weeks after closing its $6.8 million seed round from investors including BlueYard Capital, Cambium, NewLab and the Partnership Fund for New York City. The recent funding round is in addition to a $5 million investment from M Ventures, the strategic corporate venture capital arm of Merck KGaA, Darmstadt, Germany.

About Seeqc:

Seeqc is developing the first fully digital quantum computing platform for global businesses. Seeqc combines classical and quantum technologies to address the efficiency, stability and cost issues endemic to quantum computing systems. The company applies classical and quantum technology through digital readout and control technology and a unique chip-scale architecture. Seeqc's quantum system provides the energy- and cost-efficiency, speed and digital control required to make quantum computing useful and bring the first commercially-scalable, problem-specific quantum computing applications to market.

The company is one of the first companies to have built a superconductor multi-layer commercial chip foundry and through this experience has the infrastructure in place for design, testing and manufacturing of quantum-ready superconductors. Seeqc is a spin-out of HYPRES, the world's leading developer of superconductor electronics. Seeqc's team of executives and scientists have deep expertise and experience in commercial superconductive computing solutions and quantum computing. Seeqc is based in Elmsford, NY with facilities in London, UK and Naples, Italy.

Read more here:
Seeqc UK Awarded 1.8M In Grants To Advance Quantum Computing Initiatives - Business Wire

Registration Open for Inaugural IEEE International Conference on Quantum Computing and Engineering (QCE20) – thepress.net

LOS ALAMITOS, Calif., May 14, 2020 /PRNewswire/ --Registration is now open for the inaugural IEEE International Conference on Quantum Computing and Engineering (QCE20), a multidisciplinary event focusing on quantum technology, research, development, and training. QCE20, also known as IEEE Quantum Week, will deliver a series of world-class keynotes, workforce-building tutorials, community-building workshops, and technical paper presentations and posters on October 12-16 in Denver, Colorado.

"We're thrilled to open registration for the inaugural IEEE Quantum Week, founded by the IEEE Future Directions Initiative and supported by multiple IEEE Societies and organizational units," said Hausi Mller, QCE20 general chair and co-chair of the IEEE Quantum Initiative."Our initial goal is to address the current landscape of quantum technologies, identify challenges and opportunities, and engage the quantum community. With our current Quantum Week program, we're well on track to deliver a first-rate quantum computing and engineering event."

QCE20's keynote speakers include prominent quantum groundbreakers and leaders.

The week-long QCE20 tutorials program features 15 tutorials by leading experts aimed squarely at workforce development and training considerations. The tutorials are ideally suited to develop quantum champions for industry, academia, and government and to build expertise for emerging quantum ecosystems.

Throughout the week, 19 QCE20 workshops provide forums for group discussions on topics in quantum research, practice, education, and applications. The workshops provide unique opportunities to share and discuss quantum computing and engineering ideas, research agendas, roadmaps, and applications.

The deadline for submitting technical papers to the eight technical paper tracks is May 22. Papers accepted by QCE20 will be submitted to the IEEE Xplore Digital Library. The best papers will be invited to the journals IEEE Transactions on Quantum Engineering (TQE) and ACM Transactions on Quantum Computing (TQC).

QCE20 provides attendees a unique opportunity to discuss challenges and opportunities with quantum researchers, scientists, engineers, entrepreneurs, developers, students, practitioners, educators, programmers, and newcomers. QCE20 is co-sponsored by the IEEE Computer Society, IEEE Communications Society, IEEE Council on Superconductivity, IEEE Electronics Packaging Society (EPS), IEEE Future Directions Quantum Initiative, IEEE Photonics Society, and IEEE Technology and Engineering Management Society (TEMS).

Register to be a part of the highly anticipated inaugural IEEE Quantum Week 2020. Visit qce.quantum.ieee.org for event news and all program details, including sponsorship and exhibitor opportunities.

About the IEEE Computer Society

The IEEE Computer Society is the world's home for computer science, engineering, and technology. A global leader in providing access to computer science research, analysis, and information, the IEEE Computer Society offers a comprehensive array of unmatched products, services, and opportunities for individuals at all stages of their professional career. Known as the premier organization that empowers the people who drive technology, the IEEE Computer Society offers international conferences, peer-reviewed publications, a unique digital library, and training programs. Visit http://www.computer.org for more information.

About the IEEE Communications Society

The IEEE Communications Society promotes technological innovation and fosters creation and sharing of information among the global technical community. The Society provides services to members for their technical and professional advancement and forums for technical exchanges among professionals in academia, industry, and public institutions.

About the IEEE Council on Superconductivity

The IEEE Council on Superconductivity and its activities and programs cover the science and technology of superconductors and their applications, including materials and their applications for electronics, magnetics, and power systems, where the superconductor properties are central to the application.

About the IEEE Electronics Packaging Society

The IEEE Electronics Packaging Society is the leading international forum for scientists and engineers engaged in the research, design, and development of revolutionary advances in microsystems packaging and manufacturing.

About the IEEE Future Directions Quantum Initiative

IEEE Quantum is an IEEE Future Directions initiative launched in 2019 that serves as IEEE's leading community for all projects and activities on quantum technologies. IEEE Quantum is supported by leadership and representation across IEEE Societies and OUs. The initiative addresses the current landscape of quantum technologies, identifies challenges and opportunities, leverages and collaborates with existing initiatives, and engages the quantum community at large.

About the IEEE Photonics Society

The IEEE Photonics Society forms the hub of a vibrant technical community of more than 100,000 professionals dedicated to transforming breakthroughs in quantum physics into the devices, systems, and products to revolutionize our daily lives. From ubiquitous and inexpensive global communications via fiber optics, to lasers for medical and other applications, to flat-screen displays, to photovoltaic devices for solar energy, to LEDs for energy-efficient illumination, there are myriad examples of the Society's impact on the world around us.

About the IEEE Technology and Engineering Management Society

IEEE TEMS encompasses the management sciences and practices required for defining, implementing, and managing engineering and technology.

Go here to see the original:
Registration Open for Inaugural IEEE International Conference on Quantum Computing and Engineering (QCE20) - thepress.net

Video: The Future of Quantum Computing with IBM – insideHPC

Dario Gil from IBM Research

In this video, Dario Gil from IBM shares results from the IBM Quantum Challenge and describes how you can access and program quantum computers on the IBM Cloud today.

From May 4-8, we invited people from around the world to participate in the IBM Quantum Challenge on the IBM Cloud. We devised the Challenge as a global event to celebrate our fourth anniversary of having a real quantum computer on the cloud. Over those four days, 1,745 people from 45 countries came together to solve four problems ranging from introductory topics in quantum computing, to understanding how to mitigate noise in a real system, to learning about historic work in quantum cryptography, to seeing how close they could come to the best optimization result for a quantum circuit.

Those working in the Challenge joined all those who regularly make use of the 18 quantum computing systems that IBM has on the cloud, including the 10 open systems and the advanced machines available within the IBM Q Network. During the 96 hours of the Challenge, the total use of the 18 IBM Quantum systems on the IBM Cloud exceeded 1 billion circuits a day. Together, we made history: every day, the cloud users of the IBM Quantum systems made and then extended what can absolutely be called a world record in computing.

Every day we extend the science of quantum computing and advance engineering to build more powerful devices and systems. We've put two new systems on the cloud in the last month, so our fleet of quantum systems on the cloud is getting bigger and better. We'll be extending this cloud infrastructure later this year by installing quantum systems in Germany and in Japan. We've also gone more and more digital with our users through videos, online education, social media, Slack community discussions, and, of course, the Challenge.

Dr. Dario Gil is the Director of IBM Research, one of the world's largest and most influential corporate research labs. IBM Research is a global organization with over 3,000 researchers at 12 laboratories on six continents advancing the future of computing. Dr. Gil leads innovation efforts at IBM, directing research strategies in Quantum, AI, Hybrid Cloud, Security, Industry Solutions, and Semiconductors and Systems. Dr. Gil is the 12th Director in its 74-year history. Prior to his current appointment, Dr. Gil served as Chief Operating Officer of IBM Research and the Vice President of AI and Quantum Computing, areas in which he continues to have broad responsibilities across IBM. Under his leadership, IBM was the first company in the world to build programmable quantum computers and make them universally available through the cloud. An advocate of collaborative research models, he co-chairs the MIT-IBM Watson AI Lab, a pioneering industrial-academic laboratory with a portfolio of more than 50 projects focused on advancing fundamental AI research to the broad benefit of industry and society.


More here:
Video: The Future of Quantum Computing with IBM - insideHPC