Is Now the Time to Start Protecting Government Data from Quantum Hacking? – Nextgov

My previous column about the possibility of pairing artificial intelligence with quantum computing to supercharge both technologies generated a storm of feedback via Twitter and email. Quantum computing is a science that is still somewhat misunderstood, even by scientists working on it, but might one day be extremely powerful. And artificial intelligence has some scary undertones with quite a few trust issues. So I understand the reluctance that people have when considering this marriage of technologies.

Unfortunately, we don't really get a say in this. The avalanche has already started, so it's too late for all of us pebbles to vote against it. All we can do now is deal with the practical ramifications of these recent developments. The most critical right now is protecting government encryption from the possibility of quantum hacking.

Two years ago I warned that government data would soon be vulnerable to quantum hacking, whereby a quantum machine could easily shred the current AES encryption used to protect our most sensitive information. Government agencies like NIST have been working for years on developing quantum-resistant encryption schemes. But adding AI to a quantum computer might be the tipping point needed to give quantum the edge, while most of the quantum-resistant encryption protections are still being slowly developed. At least, that is what I thought.

One of the people who contacted me after my last article was Andrew Cheung, the CEO of 01 Communique Laboratory and IronCAP. They have a product available right now which can add quantum-resistant encryption to any email. Called IronCAP X, it's available for free for individual users, so anyone can start protecting their email from the threat of quantum hacking right away. In addition to downloading the program to test, I spent about an hour interviewing Cheung about how quantum-resistant encryption works, and how agencies can keep their data protection one step ahead of some of the very same quantum computers they are helping to develop.

For Cheung, the road to quantum-resistant encryption began over 10 years ago, long before anyone was seriously engineering a quantum computer. "It almost felt like we were developing a bulletproof vest before anyone had created a gun," Cheung said.

But the science of quantum-resistant encryption has actually been around for over 40 years, Cheung said. It was just never specifically called that. "People would ask how we could develop encryption that would survive hacking by a really fast computer," he said. "At first, nobody said the word quantum, but that is what we were ultimately working against."

According to Cheung, the key to creating quantum-resistant encryption is to get away from the core strength of computers in general, which is mathematics. He explained that RSA encryption used by the government today is fundamentally based on prime number factorization, where if you multiply two prime numbers together, the result is a number that can only be broken down into those primes. Breaking encryption involves trying to find those primes by trial and error.

So if you have a number like 21, then almost anyone can use factorization to quickly break it down and find its prime numbers, which are three and seven. If you have a number like 221, then it takes a little bit longer for a human to come up with 13 and 17 as its primes, though a computer can still do that almost instantaneously. But if you have something like a 500-digit number, then it would take a supercomputer more than a century to find its primes and break the related encryption. The fear is that quantum computers, because of the strange way they operate, could one day do that a lot more quickly.
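The escalating difficulty Cheung describes can be sketched in a few lines of Python. This is a toy trial-division factorizer (not the algorithms real attackers use), but it shows why the search blows up with the number of digits:

```python
def factor_semiprime(n):
    """Recover the two prime factors of n = p * q by trial division.

    Fast for small n, but the number of candidate divisors grows
    exponentially with the digit count of n, which is what
    factorization-based encryption relies on.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d  # smaller prime found; the cofactor is the other
        d += 1
    return None  # n itself is prime


print(factor_semiprime(21))   # (3, 7)
print(factor_semiprime(221))  # (13, 17)
```

Feeding this loop a 500-digit semiprime would keep it running long past the age of the universe, which is the article's point: classical trial and error does not scale.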

To make it more difficult for quantum machines, or any other kind of fast computer, Cheung and his company developed an encryption method based on binary Goppa code. The code is named for Valerii Denisovich Goppa, the renowned Russian mathematician who invented it, and was originally intended as an error-correcting code to improve the reliability of information transmitted over noisy channels. The IronCAP program intentionally introduces errors into the information it's protecting; authorized users can then decrypt it with a special algorithm, but only if they have the private key needed to remove and correct the numerous errors.

What makes encryption based on binary Goppa code so powerful against quantum hacking is that you can't use math to guess at where or how the errors have been induced into the protected information. Unlike encryption based on prime number factorization, there isn't a discernible pattern, and there's no way to brute-force guess at how to remove the errors. According to Cheung, a quantum machine, or any other fast system like a traditional supercomputer, can't be programmed to break the encryption because there is no system for it to use to begin its guesswork.
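As a loose illustration of the "encrypt by adding errors, decrypt by correcting them" idea, here is a toy sketch using a 3x repetition code. To be clear, this is not IronCAP's scheme: real code-based cryptography (McEliece-style) uses binary Goppa codes and a scrambled generator matrix as the private key, and the function names here are invented for the sketch.

```python
import random

def encode(bits):
    # Error-correcting step: repeat each bit three times.
    return [b for b in bits for _ in range(3)]

def add_errors(codeword):
    # "Encryption" step of the sketch: deliberately flip one bit in each
    # 3-bit block, staying within what the code can correct.
    out = list(codeword)
    for i in range(0, len(out), 3):
        out[i + random.randrange(3)] ^= 1
    return out

def decode(noisy):
    # Private-key holder's step: majority vote per block removes the errors.
    return [int(sum(noisy[i:i + 3]) >= 2) for i in range(0, len(noisy), 3)]

msg = [1, 0, 1, 1]
assert decode(add_errors(encode(msg))) == msg
```

With a repetition code the error positions are trivial to find; Goppa codes are used precisely because, without the private key, the injected errors leave no exploitable pattern.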

A negative aspect to binary Goppa code encryption, and also one of the reasons why Cheung says the protection method is not more popular today, is the size of the encryption key. Whether you are encrypting a single character or a terabyte of information, the key size is going to be about 250 kilobytes, which is huge compared with the typical 4-kilobyte key size for AES encryption. Even ten years ago, that might have posed a problem for many computers and communication methods, though it seems tiny compared with file sizes today. "Still, it's one of the main reasons why AES won out as the standard encryption format," Cheung says.

I downloaded the free IronCAP X application and easily integrated it into Microsoft Outlook. Using the application was extremely easy, and the encryption process itself, when employing it to protect an email, is almost instantaneous, even using the limited power of an average desktop. And while I don't have access to a quantum computer to test its resilience against quantum hacking, I did try to extract the information using traditional methods. I can confirm that the data is just unreadable gibberish with no discernible pattern to unauthorized users.

Cheung says that binary Goppa code encryption that can resist quantum hacking can be deployed right now on the same servers and infrastructure that agencies are already using. It would just be a matter of switching things over to the new method. With quantum computers evolving and improving so rapidly these days, Cheung believes that there is little time to waste.

"Yes, making the switch in encryption methods will be a little bit of a chore," he said. "But with new developments in quantum computing coming every day, the question is whether you want to maybe deploy quantum-resistant encryption two years too early, or risk installing it two years too late."

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys


Cracking the secrets of an emerging branch of physics – MIT News

Thanh Nguyen is in the habit of breaking down barriers. Take languages, for instance: Nguyen, a third-year doctoral candidate in nuclear science and engineering (NSE), wanted to connect with other people and cultures for his work and social life, he says, so he learned Vietnamese, French, German, and Russian, and is now taking an MIT course in Mandarin. But this drive to push past obstacles really comes to the fore in his research, where Nguyen is trying to crack the secrets of a new and burgeoning branch of physics.

"My dissertation focuses on neutron scattering on topological semimetals, which were only experimentally discovered in 2015," he says. "They have very special properties, but because they are so novel, there's a lot that's unknown, and neutrons offer a unique perspective to probe their properties at a new level of clarity."

Topological materials don't fit neatly into conventional categories of substances found in everyday life. They were first theorized in the 1980s, but the field only became practical in the mid-2000s with a deepened understanding of topology, which concerns itself with geometric objects whose properties remain the same even when the objects undergo extreme deformation. Researchers experimentally discovered topological materials even more recently, using the tools of quantum physics.

Within this domain, topological semimetals, which share qualities of both metals and semiconductors, are of special interest to Nguyen. "They offer high levels of thermal and electric conductivity, and inherent robustness, which makes them very promising for applications in microelectronics, energy conversion, and quantum computing," he says.

Intrigued by the possibilities that might emerge from such unconventional physics, Nguyen is pursuing two related but distinct areas of research: "On the one hand, I'm trying to identify and then synthesize new, robust topological semimetals, and on the other, I want to detect fundamental new physics with neutrons and further design new devices."

On a fast research track

Reaching these goals over the next few years might seem a tall order. But at MIT, Nguyen has seized every opportunity to master the specialized techniques required for conducting large-scale experiments with topological materials, and getting results. Guided by his advisor, Mingda Li, the Norman C. Rasmussen Assistant Professor and director of the Quantum Matter Group within NSE, Nguyen was able to dive into significant research even before he set foot on campus.

"The summer before I joined the group, Mingda sent me on a trip to Argonne National Laboratory for a very fun experiment that used synchrotron X-ray scattering to characterize topological materials," recalls Nguyen. "Learning the techniques got me fascinated in the field, and I started to see my future."

During his first two years of graduate school, he participated in four studies, serving as a lead author on three journal papers. In one notable project, described earlier this year in Physical Review Letters, Nguyen and fellow Quantum Matter Group researchers demonstrated, through experiments conducted at three national laboratories, unexpected phenomena involving the way electrons move through a topological semimetal, tantalum phosphide (TaP).

"These materials inherently withstand perturbations such as heat and disorder, and can conduct electricity with a level of robustness," says Nguyen. "With robust properties like this, certain materials can conduct electricity better than the best metals, and in some circumstances act as superconductors, which is an improvement over current-generation materials."

This discovery opens the door to topological quantum computing. Current quantum computing systems, where the elemental units of calculation are qubits that perform superfast calculations, require superconducting materials that only function in extremely cold conditions. Fluctuations in heat can throw one of these systems out of whack.

The properties inherent to materials such as TaP could form the basis of future qubits, says Nguyen. He envisions synthesizing TaP and other topological semimetals (a process involving the delicate cultivation of these crystalline structures) and then characterizing their structural and excitational properties with the help of neutron and X-ray beam technology, which probes these materials at the atomic level. This would enable him to identify and deploy the right materials for specific applications.

"My goal is to create programmable artificial structured topological materials, which can directly be applied as a quantum computer," says Nguyen. "With infinitely better heat management, these quantum computing systems and devices could prove to be incredibly energy efficient."

Physics for the environment

Energy efficiency and its benefits have long concerned Nguyen. A native of Montreal, Quebec, with an aptitude for math and physics and a concern for climate change, he devoted his final year of high school to environmental studies. "I worked on a Montreal initiative to reduce heat islands in the city by creating more urban parks," he says. "Climate change mattered to me, and I wanted to make an impact."

At McGill University, he majored in physics. "I became fascinated by problems in the field, but I also felt I could eventually apply what I learned to fulfill my goals of protecting the environment," he says.

In both classes and research, Nguyen immersed himself in different domains of physics. He worked for two years in a high-energy physics lab making detectors for neutrinos, part of a much larger collaboration seeking to verify the Standard Model. In the fall of his senior year at McGill, Nguyen's interest gravitated toward condensed matter studies. "I really enjoyed the interplay between physics and chemistry in this area, and especially liked exploring questions in superconductivity, which seemed to have many important applications," he says. That spring, seeking to add useful skills to his research repertoire, he worked at Ontario's Chalk River Laboratories, where he learned to characterize materials using neutron spectroscopes and other tools.

These academic and practical experiences served to propel Nguyen toward his current course of graduate study. "Mingda Li proposed an interesting research plan, and although I didn't know much about topological materials, I knew they had recently been discovered, and I was excited to enter the field," he says.

Man with a plan

Nguyen has mapped out the remaining years of his doctoral program, and they will prove demanding. "Topological semimetals are difficult to work with," he says. "We don't yet know the optimal conditions for synthesizing them, and we need to make these crystals, which are micrometers in scale, in quantities large enough to permit testing."

With the right materials in hand, he hopes to develop a qubit structure that isn't so vulnerable to perturbations, quickly advancing the field of quantum computing so that "calculations that now take years might require just minutes or seconds," he says. Vastly higher computational speeds could have enormous impacts on problems like climate, or health, or finance that have important ramifications for society. If his research on topological materials benefits the planet or improves how people live, says Nguyen, "I would be totally happy."


Does Schrödinger's Cat Think Quantum Computing Is a Sure Thing? – Walter Bradley Center for Natural and Artificial Intelligence

Some hope that a move to quantum computing (qubits instead of bits, analog instead of digital) will work wonders, including the invention of the true thinking computer. In last week's podcast, futurist George Gilder and computer engineer Robert J. Marks looked at, among other things, what's really happening with quantum computing:

(The quantum computing discussion begins at 15:04.)

Robert J. Marks: What's your take on quantum computing? It seems to me that there's been glacial progress in the technology.

George Gilder (pictured): I think quantum computing is rather like AI, in that it moves the actual problem outside the computational process and gives the illusion that it solved the problem, but it's really just pushed the problem out. Quantum computing is analog computing, that's what it is. It's changing the primitives of the computation to quantum elements, which are presumably the substance of all matter in the universe.

Note: Quantum computing would use actual quantum elements (qubits) to compute instead of digital signals, thus taking advantage of their subatomic speed. But, AI theorists have noted, that doesn't get around the halting problem (the computer doesn't actually know what it is doing). That means that a computer still wouldn't replicate human intelligence. That, in turn, is one reason that quantum supremacy can sound a lot like hype.

George Gilder: But still you've got to translate the symbols in the world, which in turn have to be translated from the objects in the world, into these qubits, which are quantum entities. Once you've defined all these connections and structured the data, then the problem is essentially solved by the process of defining it and inputting it into the computer. But quantum computing, again, is a very special-purpose machine, extremely special-purpose, because everything has to be structured exactly right for it.

Robert J. Marks: Yeah, that's my point. I think that once we get quantum computing, and if it works well, we can also do quantum encryption, which quantum computing can't decode. So that's the next step. So yeah, that's fascinating stuff.

In his new book, Gaming AI (free download here), Gilder explains one of the ways quantum computing differs from digital computing:

The qubit is one of the most enigmatic tangles of matter and ghost in the entire armament of physics. Like a binary digit, it can register 0 or 1; what makes it quantum is that it can also register a nonbinary superposition of 0 and 1.

In 1989 I published a book, Microcosm, with the subtitle The Quantum Era in Economics and Technology. Microcosm made the observation that all computers are quantum machines in that they shun the mechanics of relays, cogs, and gears, and manipulate matter from the inside following quantum rules. But they translate all measurements and functions into rigorous binary logic: every bit is 1 or 0. At the time I was writing Microcosm, a few physicists were speculating about a computer that used qubits rather than bits, banishing this translation process and functioning directly in the quantum domain. (P. 39)
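The superposition Gilder describes can be written down directly as a two-component state vector. A minimal NumPy sketch (standard textbook notation, not tied to any particular quantum computing framework):

```python
import numpy as np

# A qubit is a length-2 complex state vector: the classical bit values
# correspond to the basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A nonbinary superposition of 0 and 1: any unit-norm combination.
# This equal mix is what a Hadamard gate produces from |0>.
psi = (ket0 + ket1) / np.sqrt(2)

# Measurement collapses the qubit to 0 or 1 with probabilities given by
# the squared amplitudes (the Born rule).
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5] -- equal odds of reading 0 or 1
```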

The quantum world impinges on computer technology whether we like it or not:

For example, today the key problem in microchips is to avoid spontaneous quantum tunneling, where electrons can find themselves on the other side of a barrier that by the laws of classical physics would have been insurmountable and impenetrable. In digital memory chips or processors, spontaneous tunneling can mean leakage and loss. In a quantum computer, though, such quantum effects may endow a portfolio of features, providing a tool or computational primitive that enables simulation of a world governed by quantum rules. (p. 40)

Quantum rules, while strange, might ensure the integrity of a connection, because entangled quantum particles respond to each other no matter how far they are separated:

A long-ago thought experiment of Einstein's showed that once any two photons (or other quantum entities) interact, they remain in each other's influence no matter how far they travel across the universe (as long as they do not interact with something else). Schrödinger christened this entanglement: The spin (or other quantum attribute) of one behaves as if it reacts to what happens to the other, even when the two are impossibly remote. (p. 40)

So, apart from interaction, no one can change only the data on their side without it being noticed.
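The perfect correlation of an entangled pair can be sketched as a toy simulation: sampling joint measurement outcomes from a Bell state. This illustrates the statistics only, not the physics of spatially separated measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bell state (|00> + |11>) / sqrt(2): amplitudes over the two-qubit
# basis states 00, 01, 10, 11.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2  # [0.5, 0, 0, 0.5]

# Sample joint measurements: the two bits always agree, no matter how
# far apart the particles are when measured.
outcomes = rng.choice(["00", "01", "10", "11"], size=1000, p=probs)
assert all(o in ("00", "11") for o in outcomes)
```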

Underlying all this heady particle physics and quantum computing speculation is actually a philosophical shift. As Gilder puts it in Gaming AI,

John Wheeler provocatively spoke of "it from bit" and the elementary act of "observer-participancy": in short, "all things physical are information-theoretic in origin and this is a participatory universe." (p. 41)

Which is another way of saying that in reality information, rather than matter and energy, rules our universe.

Also discussed in last weeks podcast (with links to the series and transcripts):

While the West hesitates, China is moving to blockchain. Life After Google by George Gilder, advocating blockchain, became a best seller in China and received a social sciences award. George Gilder, also the author of Gaming AI, explains why Bitcoin might not do as well as blockchain in general, as a future currency source.

You may also enjoy: Will quantum mechanics produce the true thinking computer? Quantum computers come with real-world problems of their own.

and

Why AI geniuses haven't created true thinking machines. The problems have been hinting at themselves all along.

Next: What's the future for carbon computing?


Quantum Computing in Aerospace and Defense Market Forecast to 2028: How it is Going to Impact on Global Industry to Grow in Near Future – Eurowire

Quantum Computing in Aerospace and Defense Market 2020: Latest Analysis:

The most recent Quantum Computing in Aerospace and Defense Market Research study covers the current market size for the worldwide Quantum Computing in Aerospace and Defense market. It presents a point-by-point analysis based on exhaustive research of market elements like market size, development situation, potential opportunities, and operational landscape and trend analysis. This report centers on the Quantum Computing in Aerospace and Defense business status, and presents volume and value, key market, product type, consumers, regions, and key players.

Sample Copy of This Report @ https://www.quincemarketinsights.com/request-sample-29723?utm_source=Eurowire/komal

The prominent players covered in this report: D-Wave Systems Inc, Qxbranch LLC, IBM Corporation, Cambridge Quantum Computing Ltd, 1qb Information Technologies Inc., QC Ware Corp., Magiq Technologies Inc., Station Q-Microsoft Corporation, and Rigetti Computing

The market is segmented into By Component (Hardware, Software, Services), By Application (QKD, Quantum Cryptanalysis, Quantum Sensing, Naval).

Geographical segments are North America, Europe, Asia Pacific, Middle East & Africa, and South America.

It has a wide-ranging analysis of the impact of these advancements on the market's future growth. The research report studies the market in a detailed manner by explaining the key facets of the market that are expected to have a measurable influence on its development over the forecast period.

Get ToC for the overview of the premium report @ https://www.quincemarketinsights.com/request-toc-29723?utm_source=Eurowire/komal

This is anticipated to drive the Global Quantum Computing in Aerospace and Defense Market over the forecast period. This research report covers the market landscape and its progress prospects in the near future. After studying key companies, the report focuses on the new entrants contributing to the growth of the market. Most companies in the Global Quantum Computing in Aerospace and Defense Market are currently adopting new technological trends in the market.

Finally, the researchers throw light on different ways to discover the strengths, weaknesses, opportunities, and threats affecting the growth of the Global Quantum Computing in Aerospace and Defense Market. The feasibility of the new report is also measured in this research report.

Reasons for buying this report:

Make an Enquiry for purchasing this Report @ https://www.quincemarketinsights.com/enquiry-before-buying/enquiry-before-buying-29723?utm_source=Eurowire/komal

About Us:

QMI has the most comprehensive collection of market research products and services available on the web. We deliver reports from virtually all major publications and refresh our list regularly to provide you with immediate online access to the world's most extensive and up-to-date archive of professional insights into global markets, companies, goods, and patterns.

Contact Us:

Quince Market Insights

Ajay D. (Knowledge Partner)

Office No- A109

Pune, Maharashtra 411028

Phone: APAC +91 706 672 4848 / US +1 208 405 2835 / UK +44 1444 39 0986

Email: [emailprotected]

Web: https://www.quincemarketinsights.com


Confirming simulated calculations with experiment results – Science Codex

Dr Zi Yang MENG from the Division of Physics and Astronomy, Faculty of Science, the University of Hong Kong (HKU), is pursuing a new paradigm of quantum material research that combines theory, computation and experiment in a coherent manner. Recently, he teamed up with Dr Wei LI from Beihang University, Professor Yang QI from Fudan University, Professor Weiqiang YU from Renmin University and Professor Jinsheng WEN from Nanjing University to untangle the puzzle of the Nobel Prize-winning Kosterlitz-Thouless (KT) phase theory.

Not long ago, Dr Meng, Dr Li and Dr Qi achieved accurate model calculations of a topological KT phase for a rare-earth magnet TmMgGaO4 (TMGO), by performing computation on the supercomputers Tianhe-1 and Tianhe-2 (see supplementary information); this time, the team overcame several conceptual and experimental difficulties, and succeeded in discovering a topological KT phase and its transitions in the same rare-earth magnet via highly sensitive nuclear magnetic resonance (NMR) and magnetic susceptibility measurements, two means of detecting the magnetic responses of a material. The former is more sensitive in detecting small magnetic moments, while the latter facilitates easy implementation of the experiment.

These experimental results, further explained by the team's quantum Monte Carlo computations, complete the half-a-century pursuit of the topological KT phase in quantum magnetic materials, a pursuit that eventually led to the 2016 Nobel Prize in Physics. The research findings were recently published in the renowned academic journal Nature Communications.

KT phase of TMGO is detected

Quantum materials are becoming the cornerstone for the continuous prosperity of human society, including the next-generation AI computing chips that go beyond Moore's law, the high-speed Maglev train, and the topological unit for quantum computers, etc. However, these complicated systems require modern computational techniques and advanced analysis to reveal their microscopic mechanism. Thanks to the fast development of the supercomputing platforms all over the world, scientists and engineers are now making great use of these facilities to discover better materials that benefit our society. Nevertheless, computation cannot stand alone.

In the present investigation, experimental techniques for handling extreme conditions, such as low temperature, high sensitivity and strong magnetic fields, were required to verify the predictions and make discoveries. This equipment and these technologies were acquired and organised coherently by the team members.

The research is inspired by the KT phase theory discovered by V Berezinskii, J Michael Kosterlitz and David J Thouless, of whom the latter two are laureates of the Nobel Prize in Physics 2016 (together with F Duncan M Haldane) for their theoretical discoveries of topological phases and phase transitions of matter. Topology is a new way of classifying and predicting the properties of materials, and is now becoming the mainstream of quantum material research and industry, with broad potential applications in quantum computers, lossless transmission of signals for information technology, etc. Back in the 1970s, Kosterlitz and Thouless predicted the existence of a topological phase, hence named after them as the KT phase, in quantum magnetic materials. Although such phenomena have been found in superfluids and superconductors, the KT phase had not yet been realised in a bulk magnetic material, and was eventually discovered only in the present work.

Detecting such an interesting KT phase in a magnetic material is not easy: usually, 3-dimensional coupling causes a magnetic material to develop an ordered phase, not a topological phase, at low temperature. Even if there exists a temperature window for the KT phase, a highly sensitive measurement technique is required to pick up the unique fluctuation pattern of the topological phase; that is why such a phase has been enthusiastically pursued but its experimental discovery has defied many previous attempts. After some initial failures, the team members discovered that the NMR method under in-plane magnetic fields does not disturb the low-energy electronic states, as the in-plane moment in TMGO is mostly multipolar, with little interference between the magnetic field and the intrinsic magnetic moments of the material, which consequently allows the intricate topological KT fluctuations in the phase to be detected sensitively.

As shown in Fig.1, NMR spin-lattice relaxation rate measurements indeed revealed a KT phase sandwiched between a paramagnetic phase at temperatures T > T_u and an antiferromagnetic phase at temperatures T < T_l.

This finding indicates a stable KT phase in TMGO, which serves as a concrete example of a topological state of matter in a crystalline material and might have potential applications in future information technologies. With its unique properties of topological excitations and strong magnetic fluctuations, many interesting research directions and potential applications of topological quantum materials can be pursued from here.

Dr Meng said: "It will eventually bring benefits to the society, such that quantum computers, lossless transmission of signals for information technology, faster and more energy-saving high-speed trains, all these dreams could gradually come true from quantum material research."

"Our approach, combining the state-of-art experimental techniques with unbiased quantum many-body computation schemes, enables us to directly compare experimental data to accurate numerical results with key theoretical predictions quantitatively, providing a bridge way to connect theoretical, numerical and experimental studies, the new paradigm set up by the joint team will certainly lead to more profound and impactful discoveries in quantum materials." He added.

The supercomputers used in computations and simulations

The powerful supercomputers Tianhe-1 and Tianhe-2 in China used in the computations are among the world's fastest supercomputers, ranked No. 1 in 2010 and 2014 respectively on the TOP500 list (https://www.top500.org/). Their next-generation successor, Tianhe-3, is expected to enter use in 2021 and will be the world's first exaFLOPS-scale supercomputer. The quantum Monte Carlo and tensor network simulations performed by the joint team made use of the Tianhe supercomputers and required parallel simulation for thousands of hours on thousands of CPUs; the work would take more than 20 years to finish if performed on a common PC.
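A back-of-the-envelope check of that ">20 years on a PC" claim, using hypothetical round numbers since the article gives only "thousands of hours on thousands of CPUs":

```python
# Assumed round figures standing in for the article's "thousands of hours
# on thousands of CPUs"; the exact numbers are not given in the source.
cluster_hours = 2_000   # wall-clock hours on the supercomputer (assumption)
cluster_cpus = 2_000    # CPUs running in parallel (assumption)
pc_cores = 8            # a typical desktop

cpu_hours = cluster_hours * cluster_cpus        # total compute consumed
pc_years = cpu_hours / pc_cores / (24 * 365)    # same work done serially
print(f"{pc_years:.0f} years")  # ~57 years, comfortably over 20
```

Even with generous assumptions about the desktop, the serial runtime stays far above the article's 20-year floor.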


#ISSE2020: Focus on 2020’s Crypto Successes Rather than Efforts to Break it – Infosecurity Magazine

Efforts to break encryption in new crypto wars are ongoing, but there are many successes to recount in the past year.

Speaking in the closing session of the virtual ISSE Conference, Professor Bart Preneel from KU Leuven, where he heads the COSIC research group, said more and more crypto research has been published this year, and he praised the work to enable contact tracing, but was critical of government and law enforcement efforts around end-to-end (E2E) encryption.

Saying the crypto wars have come back again, "something I'm doomed to live with for the rest of my life," Preneel referred to the case in 1993 when AT&T introduced a secure phone with E2E encryption based on Triple DES, which the US government was not happy with, as it stopped them intercepting phone calls, especially outside the US. The Clipper chip with key escrow project failed, and now the crypto wars have come back as cryptography has shifted from hardware to software.

He said there is a case for intercepting communications involving child abuse images, terrorist acts and kidnapping, yet governments are unable to access such encrypted communications. Preneel also said some people use Facebook Messenger for those purposes, and interception is possible at the moment because it is not E2E encrypted; but Facebook has announced E2E for Messenger, which will close that channel of access, so "the stupid people will not be able to escape."

He said this proposal was met with criticism, as most people are not happy with backdoors: as a society we can agree to filter for abuse messages and images, but the same capability could also be used against the freedom of speech of people you don't like, and for political purposes.

"It keeps coming in different forms and shapes, but the debate is essentially the same, and the main complaint is police and intelligence services have lots of metadata: once they find one person they can use that infrastructure to find other people; once you have metadata you have access," he said. "It is a one-sided debate, as law enforcement does not show what they acquired in the last 20 years, and it is difficult to debate with one side who doesn't disclose."

Among other cryptography highlights from 2020, Preneel cited the breaking of RSA-250, where researchers found the modulus's two prime factors. "It is important, as a large part of digital infrastructure relies on RSA," he said, noting it was amazing how little computing power was used even as more effort and money was put in.
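To see why factoring an RSA modulus breaks the cryptosystem, here is a textbook-scale sketch with tiny primes (RSA-250 itself is an 829-bit modulus whose factorization reportedly took on the order of 2,700 core-years; trial division as below only works for toy sizes):

```python
from math import isqrt

# Toy RSA key built from two small primes.
p, q = 61, 53
n, e = p * q, 17                       # public key (n, e)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                    # private exponent (Python 3.8+)

def factor(n):
    """Recover the two prime factors by trial division (toy sizes only)."""
    for f in range(2, isqrt(n) + 1):
        if n % f == 0:
            return f, n // f

# An attacker who factors n rebuilds the private key directly:
fp, fq = factor(n)
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))

msg = 42
cipher = pow(msg, e, n)                # encrypt with the public key
print(pow(cipher, d_recovered, n))     # decrypts with the recovered key
```

The entire security of RSA rests on that `factor` step being infeasible at real key sizes, which is also why a large-scale quantum computer running Shor's algorithm is treated as an existential threat to it.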

Speaking on quantum computing, he said that despite Google, Intel and Microsoft building machines and spending on quantum computing research, there were no big examples of successes this year, even from companies spending small fortunes. He said that in order to break RSA-2048 you would need something like 20 million qubits, and most companies are very far from that, so he predicted that we will be safe until 2035.

With regards to contact tracing, Preneel welcomed the work done to create apps that anonymize user details using decentralized privacy-preserving proximity tracing (DP3T), and said there had been 57 million downloads of DP3T-based apps across 18 EU countries and Switzerland. He said: "There are still problems in integration in some national health systems, but it is a solution that seems to work. There are clear indications it works, people are being warned, and it is cost effective. The solution was security and privacy friendly."

Link:
#ISSE2020: Focus on 2020's Crypto Successes Rather than Efforts to Break it - Infosecurity Magazine

NTT's Kazuhiro Gomi says Bio Digital Twin, quantum computing the next-gen tech – Backend News

At the recently concluded Philippine Digital Convention (PH Digicon 2020) by PLDT Enterprise, Kazuhiro Gomi, president and CEO of NTT Research, shared the fundamental research milestones coming out of its three labs: the Physics and Informatics (PHI) Lab, the Cryptography and Information Security (CIS) Lab, and the Medical and Health Informatics (MEI) Lab, work that is hoped to lead to monumental tech innovations.

The three-day virtual convention drew in more than 3,000 views during the live stream broadcast of the plenary sessions and breakout sessions covering various topics.

Gomi headlined the second day with his topic Upgrading Reality, a glimpse into breakthrough research that NTT Research is currently working on that could hasten digital transformations.


In a discussion with Cathy Yap-Yang, FVP and head of Corporate Communications at PLDT, Gomi elaborated on next-generation technologies, particularly the Bio Digital Twin project, which could potentially be game-changing in the medical field, as well as quantum computing and advanced cryptography.

Bio Digital Twin

The Bio Digital Twin is an initiative to build a digital replica of a patient's internal systems that serves as a model for testing procedures and chemical reactions, showing possible results before anything is applied to the actual person.

"We are trying to create an electronic replica of the human body. If we are able to create something like that, the future of clinical and medical activities will be very different," Gomi said. "If we have a precise replica of your human body, you can predict what type of disease or what type of problem you might have maybe three years down the road. Or, if your doctor needs to test a new drug for you, he can do so on the digital twin."

NTT Research is a fundamental research organization in Silicon Valley that carries out advanced research for some of the worlds most important and impactful technologies, including quantum computing, cryptography, information security, and medical and health informatics.

Computing power

However, to get there and make the Bio Digital Twin possible, there are hurdles from various disciplines, including the component of computing power.

Gomi explained that people believe today's computers can do everything, but in reality they might take years to solve complex problems that a quantum computer could solve in seconds.

There are different kinds of quantum computers, but all are based upon quantum physics. At NTT Research, Gomi revealed that their group is working on a quantum computer called a coherent Ising machine which could solve combinatorial optimization problems.

"We may be able to bring those superfast machines to market, to reality, much quicker. That is what we are aiming for," he said.

Basically, the machine, using many parameters and complex optimization, finds the best solution in a matter of seconds which may take months or years using conventional computers.
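The combinatorial optimization problems an Ising machine targets can be stated compactly. This minimal sketch brute-forces a four-spin toy instance (hypothetical couplings, not NTT's formulation); brute force is exactly what becomes infeasible as the number of spins grows, since the search space doubles with every added spin:

```python
from itertools import product

# A tiny Ising problem: couplings J[(i, j)] between 4 spins. The task
# is to find spins s_i in {-1, +1} minimizing
#   E(s) = -sum_{i<j} J[(i, j)] * s_i * s_j
J = {(0, 1): 1.0, (1, 2): -0.5, (2, 3): 1.0, (0, 3): -0.5}

def energy(spins):
    return -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

# Brute force over all 2^4 configurations; at n spins this is 2^n,
# which is why specialized hardware is interesting for large n.
best = min(product((-1, 1), repeat=4), key=energy)
print(best, energy(best))
```

An Ising machine searches this same energy landscape physically rather than enumerating it, which is where the claimed speedup comes from.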

Some examples where quantum computing may be applied include lead optimization problems, such as effects on small-molecule drugs, peptide drugs, and biocatalysts, and resource optimization challenges, such as logistics, traffic control, or wireless networks. Gomi also expounded on compressed sensing cases, including use in astronomical telescopes, magnetic resonance imaging (MRI), and computed tomography.

Advanced cryptography

Apart from quantum computing, Gomi reiterated the issues of cybersecurity and privacy. Today, encryption is able to address those challenges, but it will soon require a more advanced and sophisticated type of technology if we are to upgrade reality.

"From the connected world, obviously we want to exchange more data among each other, but we have to make sure that security and privacy are maintained. We have to have those things together to get the best out of a connected world," he said.

Among next-generation advanced encryption schemes, Gomi highlighted Attribute-Based Encryption, in which different decryption keys define access control over the encrypted data. For example, depending on the user (or the type of key he or she holds), what they are allowed to view differs and is controlled by the key issuer.
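Conceptually, that access model can be sketched as below. Note that this toy (hypothetical names, a plain access check) only illustrates the idea: real attribute-based encryption enforces the policy cryptographically inside the ciphertext, not with an if-statement at the server.

```python
# Conceptual model of attribute-based access control (illustration only;
# real ABE binds the policy into the ciphertext mathematically).
def make_key(attributes):
    """A key is characterized by the attributes the issuer grants."""
    return frozenset(attributes)

def decrypt(ciphertext, policy, key):
    """Release the plaintext only if the key's attributes satisfy the policy."""
    if policy(key):
        return ciphertext["plaintext"]
    raise PermissionError("key attributes do not satisfy policy")

record = {"plaintext": "patient lab results"}
policy = lambda attrs: "doctor" in attrs and "cardiology" in attrs

doctor_key = make_key({"doctor", "cardiology"})
nurse_key = make_key({"nurse", "cardiology"})

print(decrypt(record, policy, doctor_key))   # released
# decrypt(record, policy, nurse_key)         # raises PermissionError
```

The point Gomi makes follows from the structure: one ciphertext can be shared widely while each recipient's key, not a central gatekeeper, determines what that recipient can read.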

He noted that "in the next couple of years, we should be able to commercialize this type of technology. We can maintain privacy while encouraging the sharing of data with this mechanism."

Gomi reiterated that we are at the stage of all kinds of digital transformations.

Digital transformation

"Those digital transformations are making our lives so much richer and business so much more interesting and efficient. I would imagine those digital transformations will continue to advance even more," he said.

However, there are limiting factors that could impede or slow down those digital transformations, such as energy consumption, the limits of Moore's law, since we cannot expect much more capacity from the electronic chips in current computers, and the issues of privacy and security. Hence, we need to address those factors.

PH Digicon 2020 is the annual convention organized by PLDT Enterprise, which gathered global industry leaders to speak on the latest advancements in the digital landscape. This year's roster of speakers included tech experts and heads from Cisco, Nokia, Salesforce, and NTT Research, as well as goop CEO and multi-awarded Hollywood actress Gwyneth Paltrow, who headlined the first virtual run.


More here:
NTTs Kazuhiro Gomi says Bio Digital Twin, quantum computing the next-gen tech - Backend News

The 12 Coolest Machine-Learning Startups Of 2020 – CRN

Learning Curve

Artificial intelligence has been a hot technology area in recent years and machine learning, a subset of AI, is one of the most important segments of the whole AI arena.

Machine learning is the development of intelligent algorithms and statistical models that improve software through experience without the need to explicitly code those improvements. A predictive analysis application, for example, can become more accurate over time through the use of machine learning.
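That "improvement through experience" can be seen in miniature with an ordinary least-squares fit whose estimates approach the truth as more examples arrive (a generic sketch, not any particular vendor's algorithm):

```python
import random

random.seed(0)

# Ground truth the model must learn: y = 3x + 2, plus noise.
def sample():
    x = random.uniform(0, 10)
    return x, 3 * x + 2 + random.gauss(0, 1)

def fit(points):
    """Ordinary least squares: return (slope, intercept)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    slope = sxy / sxx
    return slope, my - slope * mx

# More experience (data) pushes the estimates toward the true (3, 2),
# without the improvement being explicitly coded anywhere.
for n in (10, 100, 1000):
    pts = [sample() for _ in range(n)]
    print(n, fit(pts))
```

A production predictive-analytics system is far more elaborate, but the mechanism is the same: accuracy comes from data, not from hand-written rules.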

But machine learning has its challenges. Developing machine-learning models and systems requires a confluence of data science, data engineering and development skills. Obtaining and managing the data needed to develop and train machine-learning models is a significant task. And implementing machine-learning technology within real-world production systems can be a major hurdle.

Here's a look at a dozen startup companies, some that have been around for a few years and some just getting off the ground, that are addressing the challenges associated with machine learning.

AI.Reverie

Top Executive: Daeil Kim, Co-Founder, CEO

Headquarters: New York

AI.Reverie develops AI and machine-learning technology for data generation, data labeling and data enhancement tasks for the advancement of computer vision. The company's simulation platform is used to help acquire, curate and annotate the large amounts of data needed to train computer vision algorithms and improve AI applications.

In October AI.Reverie was named a Gartner Cool Vendor in AI core technologies.

Anodot

Top Executive: David Drai, Co-Founder, CEO

Headquarters: Redwood City, Calif.

Anodot's Deep 360 autonomous business monitoring platform uses machine learning to continuously monitor business metrics, detect significant anomalies and help forecast business performance.

Anodot's algorithms have a contextual understanding of business metrics, providing real-time alerts that help users cut incident costs by as much as 80 percent.
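As a rough illustration of what metric anomaly detection involves (a bare-bones rolling z-score on made-up data, not Anodot's patented algorithms), consider:

```python
from statistics import mean, stdev

def anomaly_scores(series, window=10, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the
    trailing window's mean: a minimal stand-in for the contextual
    models a commercial monitoring platform would use."""
    alerts = []
    for i in range(window, len(series)):
        w = series[i - window:i]
        mu, sigma = mean(w), stdev(w)
        if sigma and abs(series[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Steady revenue metric with one sudden drop at index 15.
metric = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100,
          100, 101, 99, 100, 102, 40, 100, 101, 99, 100]
print(anomaly_scores(metric))  # flags index 15
```

The "contextual understanding" the article mentions is what separates real products from this sketch: seasonality, correlated metrics, and adaptive thresholds all replace the fixed window and cutoff used here.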

Anodot has been granted patents for technology and algorithms in such areas as anomaly score, seasonality and correlation. Earlier this year the company raised $35 million in Series C funding, bringing its total funding to $62.5 million.

BigML

Top Executive: Francisco Martin, Co-Founder, CEO

Headquarters: Corvallis, Ore.

BigML offers a comprehensive, managed machine-learning platform for easily building and sharing datasets and data models, and making highly automated, data-driven decisions. The company's programmable, scalable machine-learning platform automates classification, regression, time series forecasting, cluster analysis, anomaly detection, association discovery and topic modeling tasks.

The BigML Preferred Partner Program supports referral partners and partners that sell BigML and oversee implementation projects. Partner A1 Digital, for example, has developed a retail application on the BigML platform that helps retailers predict sales cannibalization, when promotions or other marketing activity for one product can lead to reduced demand for other products.

Carbon Relay

Top Executive: Matt Provo, Founder, CEO

Headquarters: Cambridge, Mass.

Carbon Relay provides machine learning and data science software that helps organizations optimize application performance in Kubernetes.

The startup's Red Sky Ops makes it easy for DevOps teams to manage a large variety of application configurations in Kubernetes, which are automatically tuned for optimized performance no matter what IT environment they're operating in.

In February the company said it had raised $63 million in a funding round from Insight Partners, which it will use to expand its Red Sky Ops AIOps offering.

Comet.ML

Top Executive: Gideon Mendels, Co-Founder, CEO

Headquarters: New York

Comet.ML provides a cloud-hosted machine-learning platform for building reliable machine-learning models that help data scientists and AI teams track datasets, code changes, experimentation history and production models.

Launched in 2017, Comet.ML has raised $6.8 million in venture financing, including $4.5 million in April 2020.

Dataiku

Top Executive: Florian Douetteau, Co-Founder, CEO

Headquarters: New York

Dataiku's goal with its Dataiku DSS (Data Science Studio) platform is to move AI and machine-learning use beyond lab experiments into widespread use within data-driven businesses. Dataiku DSS is used by data analysts and data scientists for a range of machine-learning, data science and data analysis tasks.

In August Dataiku raised an impressive $100 million in a Series D round of funding, bringing its total financing to $247 million.

Dataiku's partner ecosystem includes analytics consultants, service partners, technology partners and VARs.

DotData

Top Executive: Ryohei Fujimaki, Founder, CEO

Headquarters: San Mateo, Calif.

DotData says its DotData Enterprise machine-learning and data science platform is capable of reducing AI and business intelligence development projects from months to days. The company's goal is to make data science processes simple enough that almost anyone, not just data scientists, can benefit from them.

The DotData platform is based on the company's AutoML 2.0 engine, which performs full-cycle automation of machine-learning and data science tasks. In July the company debuted DotData Stream, a containerized AI/ML model that enables real-time predictive capabilities.

Eightfold.AI

Top Executive: Ashutosh Garg, Co-Founder, CEO

Headquarters: Mountain View, Calif.

Eightfold.AI develops the Talent Intelligence Platform, a human resource management system that utilizes AI deep learning and machine-learning technology for talent acquisition, management, development, experience and diversity. The Eightfold system, for example, uses AI and ML to better match candidate skills with job requirements and improves employee diversity by reducing unconscious bias.

In late October Eightfold.AI announced a $125 million round of financing, putting the startup's value at more than $1 billion.

H2O.ai

Top Executive: Sri Ambati, Co-Founder, CEO

Headquarters: Mountain View, Calif.

H2O.ai wants to democratize the use of artificial intelligence for a wide range of users.

The company's H2O open-source AI and machine-learning platform, H2O Driverless AI automatic machine-learning software, H2O MLOps and other tools are used to deploy AI-based applications in financial services, insurance, health care, telecommunications, retail, pharmaceutical and digital marketing.

H2O.ai recently teamed up with data science platform developer KNIME to integrate Driverless AI for AutoML with KNIME Server for workflow management across the entire data science life cycle, from data access to optimization and deployment.

Iguazio

Top Executive: Asaf Somekh, Co-Founder, CEO

Headquarters: New York

The Iguazio Data Science Platform for real-time machine-learning applications automates and accelerates machine-learning workflow pipelines, helping businesses develop, deploy and manage AI applications at scale that improve business outcomes, what the company calls MLOps.

In early 2020 Iguazio raised $24 million in new financing, bringing its total funding to $72 million.

OctoML

Top Executive: Luis Ceze, Co-Founder, CEO

Headquarters: Seattle

OctoML's Software-as-a-Service Octomizer makes it easier for businesses and organizations to put deep learning models into production more quickly on different CPU and GPU hardware, including at the edge and in the cloud.

OctoML was founded by the team that developed the Apache TVM machine-learning compiler stack project at the University of Washington's Paul G. Allen School of Computer Science & Engineering. OctoML's Octomizer is based on the TVM stack.

Tecton

Top Executive: Mike Del Balso, Co-Founder, CEO

Headquarters: San Francisco

Tecton just emerged from stealth in April 2020 with its data platform for machine learning that enables data scientists to turn raw data into production-ready machine-learning features. The startup's technology is designed to help businesses and organizations harness and refine vast amounts of data into the predictive signals that feed machine-learning models.

The company's three founders, CEO Mike Del Balso, CTO Kevin Stumpf and Engineering Vice President Jeremy Hermann, previously worked together at Uber, where they developed the company's Michelangelo machine-learning platform that the ride-sharing company used to scale its operations to thousands of production models serving millions of transactions per second, according to Tecton.

The company started with $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia.

See the original post here:
The 12 Coolest Machine-Learning Startups Of 2020 - CRN

Utilizing machine learning to uncover the right content at KMWorld Connect 2020 – KMWorld Magazine

At KMWorld Connect 2020, David Seuss, CEO of Northern Light; Sid Probstein, CTO of Keeeb; and Tom Barfield, chief solution architect of Keeeb, discussed machine learning and KM.

KMWorld Connect, November 16-19, and its co-located events cover future-focused strategies, technologies, and tools to help organizations transform for positive outcomes.

Machine learning can assist KM activities in many ways. Seuss discussed using a semantic analysis of keywords in social posts about a topic of interest to yield clear guidance as to which terms have actual business relevance and are therefore worth investing in.

"What are we hearing from our users?" Seuss asked. "The users hate the business research process."

Using AstraZeneca as an example, Seuss analyzed the company's conference presentations. Looking at the topics, diabetes had sunk lower as a focus of AstraZeneca's attention.

When looking at the company's Twitter account, themes included oncology, COVID-19, and environmental issues. Not one reference was made to diabetes, according to Seuss.

"Social media is where the energy of the company is first expressed," Seuss said.

An instant news analysis using text analytics tells us the same story: no mention of diabetes products, clinical trials, marketing, etc.

AI-based automated insight extraction from 250 AstraZeneca oncology conference presentations gives insight into the company's R&D focus.

"Let the machine read the content and tell you what it thinks is important," Seuss said.

You can do that with a semantic graph of all the ideas in the conference presentations. Semantic graphs look for relationships between ideas and measure the number and strength of the relationships. Google search results are a real-world example of this in action.
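A minimal sketch of the idea: treat each presentation as a set of topic terms and weight an edge by how many documents two terms co-occur in (toy data and a crude proxy measure, not Northern Light's actual pipeline):

```python
from collections import Counter
from itertools import combinations

# Toy "presentations": each is a set of extracted topic terms.
docs = [
    {"oncology", "immunotherapy", "trial"},
    {"oncology", "immunotherapy", "biomarker"},
    {"covid-19", "vaccine"},
    {"oncology", "trial"},
]

# Edge weight = number of documents in which two terms co-occur,
# a simple proxy for the strength of the relationship between ideas.
edges = Counter()
for terms in docs:
    for a, b in combinations(sorted(terms), 2):
        edges[(a, b)] += 1

for pair, weight in edges.most_common(3):
    print(pair, weight)
```

On real corpora the heavy edges surface the dominant themes (here, oncology and immunotherapy), which is the machine "telling you what it thinks is important."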

"We are approaching the era when users will no longer search for information; they will expect the machine to analyze and then summarize for them what they need to know," Seuss said. "Machine-based techniques will change everything."

Probstein and Barfield addressed new approaches to integrate knowledge sharing into work. They looked at collaborative information curation, in which end users help identify the best content, allowing KM teams to focus on the most strategic knowledge challenges, as well as the pragmatic application of AI through text analytics to improve curation, findability, and performance.

"The super silo is on the rise," Probstein said, noting that it stores files, logs, and customer and sales data, and can be highly variable. He looked at search results for how COVID-19 is having an impact on businesses.

"Not only are there many search engines, each one is different," Probstein said.

Probstein said Keeeb can help with this problem. The solution can search through a variety of data sources to find the right information.

"One search, a few seconds, one pane of glass," Probstein said. "Once you solve the search problem, now you can look through the documents."

Knowledge isn't always a whole document; it can be a few paragraphs or an image, which can then be captured and shared through Keeeb.

AI and machine learning can enable search to be integrated with existing tools or any system. Companies should give end users simple approaches to organizing content, augmented with AI, benefitting both themselves and others, Barfield said.

More:
Utilizing machine learning to uncover the right content at KMWorld Connect 2020 - KMWorld Magazine

Machine Learning Predicts How Cancer Patients Will Respond to Therapy – HealthITAnalytics.com

November 18, 2020 - A machine learning algorithm accurately determined how well skin cancer patients would respond to tumor-suppressing drugs in four out of five cases, according to research conducted by a team from NYU Grossman School of Medicine and Perlmutter Cancer Center.

The study focused on metastatic melanoma, a disease that kills nearly 6,800 Americans each year. Immune checkpoint inhibitors, which keep tumors from shutting down the immune systems attack on them, have been shown to be more effective than traditional chemotherapies for many patients with melanoma.

However, half of patients don't respond to these immunotherapies, and the drugs are expensive and often cause side effects.

"While immune checkpoint inhibitors have profoundly changed the treatment landscape in melanoma, many tumors do not respond to treatment, and many patients experience treatment-related toxicity," said corresponding study author Iman Osman, a medical oncologist in the Departments of Dermatology and Medicine (Oncology) at New York University (NYU) Grossman School of Medicine and director of the Interdisciplinary Melanoma Program at NYU Langone's Perlmutter Cancer Center.

"An unmet need is the ability to accurately predict which tumors will respond to which therapy. This would enable personalized treatment strategies that maximize the potential for clinical benefit and minimize exposure to unnecessary toxicity."


Researchers set out to develop a machine learning model that could help predict a melanoma patient's response to immune checkpoint inhibitors. The team collected 302 images of tumor tissue samples from 121 men and women treated for metastatic melanoma with immune checkpoint inhibitors at NYU Langone hospitals.

They then divided these slides into 1.2 million portions of pixels, the small bits of data that make up images. These were fed into the machine learning algorithm along with other factors, such as the severity of the disease, which kind of immunotherapy regimen was used, and whether a patient responded to the treatment.

The results showed that the machine learning model achieved an AUC of 0.8 in both the training and validation cohorts, and was able to predict which patients with a specific type of skin cancer would respond well to immunotherapies in four out of five cases.
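The reported AUC of 0.8 is the probability that the model ranks a randomly chosen responder above a randomly chosen non-responder. That definition can be computed directly from scratch (made-up scores for illustration, not the study's data):

```python
def auc(labels, scores):
    """Probability a random positive outranks a random negative
    (ties count half), which is exactly the ROC AUC."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted response probabilities for 10 patients
# (label 1 = responded to immunotherapy, 0 = did not).
labels = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.35, 0.7, 0.3, 0.6, 0.2, 0.1, 0.75, 0.4]
print(auc(labels, scores))
```

An AUC of 0.5 would mean the model ranks patients no better than chance, while 1.0 would mean every responder scores above every non-responder; 0.8 sits usefully between the two, consistent with the four-out-of-five figure quoted above.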

"Our findings reveal that artificial intelligence is a quick and easy method of predicting how well a melanoma patient will respond to immunotherapy," said study first author Paul Johannet, MD, a postdoctoral fellow at NYU Langone Health and its Perlmutter Cancer Center.

Researchers repeated this process with 40 slides from 30 similar patients at Vanderbilt University to determine whether the results would be similar at a different hospital system that used different equipment and sampling techniques.


"A key advantage of our artificial intelligence program over other approaches, such as genetic or blood analysis, is that it does not require any special equipment," said study co-author Aristotelis Tsirigos, PhD, director of applied bioinformatics laboratories and clinical informatics at the Molecular Pathology Lab at NYU Langone.

The team noted that aside from the computer needed to run the program, all materials and information used in the Perlmutter technique are a standard part of cancer management that most, if not all, clinics use.

"Even the smallest cancer center could potentially send the data off to a lab with this program for swift analysis," said Osman.

The machine learning method used in the study is also more streamlined than current predictive tools, such as analyzing stool samples or genetic information, which promises to reduce treatment costs and speed up patient wait times.

"Several recent attempts to predict immunotherapy responses do so with robust accuracy but use technologies, such as RNA sequencing, that are not readily generalizable to the clinical setting," said corresponding study author Aristotelis Tsirigos, PhD, professor in the Institute for Computational Medicine at NYU Grossman School of Medicine and member of NYU Langone's Perlmutter Cancer Center.


"Our approach shows that responses can be predicted using standard-of-care clinical information such as pre-treatment histology images and other clinical variables."

However, the researchers noted that the algorithm is not ready for clinical use until they can boost its accuracy from 80 percent to 90 percent and test it at more institutions. The research team plans to collect more data to improve the model's performance.

Even at its current level of accuracy, the model could be used as a screening method to determine which patients across populations would benefit from more in-depth tests before treatment.

"There is potential for using computer algorithms to analyze histology images and predict treatment response, but more work needs to be done using larger training and testing datasets, along with additional validation parameters, in order to determine whether an algorithm can be developed that achieves clinical-grade performance and is broadly generalizable," said Tsirigos.

"There is data to suggest that thousands of images might be needed to train models that achieve clinical-grade performance."

Read the rest here:
Machine Learning Predicts How Cancer Patients Will Respond to Therapy - HealthITAnalytics.com

The way we train AI is fundamentally flawed – MIT Technology Review

For example, they trained 50 versions of an image-recognition model on ImageNet, a dataset of images of everyday objects. The only difference between training runs was the random values assigned to the neural network at the start. Yet despite all 50 models scoring more or less the same on the training test, suggesting that they were equally accurate, their performance varied wildly in the stress test.

The stress test used ImageNet-C, a dataset of images from ImageNet that have been pixelated or had their brightness and contrast altered, and ObjectNet, a dataset of images of everyday objects in unusual poses, such as chairs on their backs, upside-down teapots, and T-shirts hanging from hooks. Some of the 50 models did well with pixelated images, some did well with the unusual poses; some did much better overall than others. But as far as the standard training process was concerned, they were all the same.

The researchers carried out similar experiments with two different NLP systems, and three medical AIs for predicting eye disease from retinal scans, cancer from skin lesions, and kidney failure from patient records. Every system had the same problem: models that should have been equally accurate performed differently when tested with real-world data, such as different retinal scans or skin types.

"We might need to rethink how we evaluate neural networks," says Rohrer. "It pokes some significant holes in the fundamental assumptions we've been making."

D'Amour agrees. "The biggest, immediate takeaway is that we need to be doing a lot more testing," he says. That won't be easy, however. The stress tests were tailored specifically to each task, using data taken from the real world or data that mimicked the real world. This is not always available.

Some stress tests are also at odds with each other: models that were good at recognizing pixelated images were often bad at recognizing images with high contrast, for example. It might not always be possible to train a single model that passes all stress tests.

One option is to add an extra stage to the training and testing process in which many models are produced at once instead of just one. These competing models can then be tested again on specific real-world tasks to select the best one for the job.
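In outline, that selection stage might look like the following sketch (simulated accuracy numbers standing in for trained models; purely an illustration of the idea, not the researchers' code):

```python
import random

random.seed(1)

# Stand-ins for 50 models trained from different random seeds: each has
# identical "standard test" accuracy but different stress-test behavior,
# mirroring the underspecification result described above.
models = [
    {"seed": s,
     "standard_acc": 0.91,
     "stress_acc": {"pixelated": random.uniform(0.4, 0.8),
                    "unusual_pose": random.uniform(0.4, 0.8)}}
    for s in range(50)
]

def select(models, task):
    """Pick the candidate that does best on the stress test matching
    the intended deployment task: the extra selection stage."""
    return max(models, key=lambda m: m["stress_acc"][task])

best = select(models, "pixelated")
print(best["seed"], round(best["stress_acc"]["pixelated"], 3))
```

Note that the standard accuracy is useless for choosing among these candidates; only the task-matched stress test separates them, which is the article's point.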

That's a lot of work. But for a company like Google, which builds and deploys big models, it could be worth it, says Yannic Kilcher, a machine-learning researcher at ETH Zurich. Google could offer 50 different versions of an NLP model and application developers could pick the one that worked best for them, he says.

D'Amour and his colleagues don't yet have a fix but are exploring ways to improve the training process. "We need to get better at specifying exactly what our requirements are for our models," he says. "Because often what ends up happening is that we discover these requirements only after the model has failed out in the world."

Getting a fix is vital if AI is to have as much impact outside the lab as it is having inside. When AI underperforms in the real world it makes people less willing to use it, says co-author Katherine Heller, who works at Google on AI for healthcare: "We've lost a lot of trust when it comes to the killer applications; that's important trust that we want to regain."

Read more from the original source:
The way we train AI is fundamentally flawed - MIT Technology Review

SiMa.ai Adopts Arm Technology to Deliver a Purpose-built Heterogeneous Machine Learning Compute Platform for the Embedded Edge – Design and Reuse

Licensing agreement enables machine learning intelligence with best-in-class performance and power for robotics, surveillance, autonomous, and automotive applications

SAN JOSE, Calif.-- November 18, 2020 -- SiMa.ai, the machine learning company enabling high performance compute at the lowest power, today announced the adoption of low-power Arm compute technology to build its purpose-built Machine Learning SoC (MLSoC) platform. The licensing of this technology brings machine learning intelligence with best-in-class performance and power to a broad set of embedded edge applications including robotics, surveillance, autonomous, and automotive.

SiMa.ai is adopting Arm Cortex-A and Cortex-M processors optimized for power, throughput efficiency, and safety-critical tasks. In addition, SiMa.ai is leveraging a combination of widely used open-source machine learning frameworks from Arm's vast ecosystem to allow software to seamlessly enable machine learning for legacy applications at the embedded edge.

"Arm is the industry leader in energy-efficient processor design and advanced computing," said Krishna Rangasayee, founder and CEO of SiMa.ai. "The integration of SiMa.ai's high performance and low power machine learning accelerator with Arm technology accelerates our progress in bringing our MLSoC to the market, creating new solutions underpinned by industry-leading IP, the broad Arm ecosystem, and world-class support from its field and development teams."

"From autonomous systems to smart cities, the applications enabled by ML at the edge are delivering increased functionality, leading to more complex device requirements," said Dipti Vachani, senior vice president and general manager, Automotive and IoT Line of Business at Arm. "SiMa.ai is innovating on top of Arm's foundational IP to create a unique low power ML SoC that will provide intelligence to the next generation of embedded edge use cases."

SiMa.ai is strategically leveraging Arm technology to deliver its unique Machine Learning SoC. This includes:

About SiMa.ai

SiMa.ai is a machine learning company enabling high performance compute at the lowest power. Initially focused on solutions for computer vision applications at the embedded edge, the company is led by a team of technology experts committed to delivering the industry's highest frames-per-second-per-watt solution to its customers. To learn more, visit http://www.sima.ai.

Continue reading here:
SiMa.ai Adopts Arm Technology to Deliver a Purpose-built Heterogeneous Machine Learning Compute Platform for the Embedded Edge - Design and Reuse

SVG Tech Insight: Increasing Value of Sports Content – Machine Learning for Up-Conversion HD to UHD – Sports Video Group

This fall SVG will be presenting a series of White Papers covering the latest advancements and trends in sports-production technology. The full series of SVG's Tech Insight White Papers can be found in the SVG Fall SportsTech Journal.

Following the height of the 2020 global pandemic, live sports are starting to re-emerge worldwide, albeit predominantly behind closed doors. For the majority of sports fans, video is the only way they can watch and engage with their favorite teams or players. This means the quality of the viewing experience itself has become even more critical.

With UHD being adopted by both households and broadcasters around the world, there is a marked expectation around visual quality. To realize these expectations in the immediate term, it will be necessary for some years to up-convert from HD to UHD when creating 4K UHD sports channels and content.

This is not so different from the early days of HD, when SD sports content had to be up-converted to HD. In the intervening years, however, machine learning as a technology has progressed sufficiently to be a serious contender for performing better up-conversions than more conventional techniques, specifically designed to work for TV content.

Ideally, we want to process HD content into UHD with a simple black box arrangement.

The problem with conventional up-conversion, though, is that it does not offer an improved resolution, so does not fully meet the expectations of the viewer at home watching on a UHD TV. The question, therefore, becomes: can we do better for the sports fan? If so, how?

UHD is a progressive scan format, with the native TV formats being 3840×2160, known as 2160p59.94 (usually abbreviated to 2160p60) or 2160p50. The corresponding HD formats, with the frame/field rates set by region, are either progressive 1280×720 (720p60 or 720p50) or interlaced 1920×1080 (1080i30 or 1080i25).

Conversion from HD to UHD for progressive images at the same rate is fairly simple. It can be achieved using spatial processing only. Traditionally, this might typically use a bi-cubic interpolation filter (a two-dimensional interpolation commonly used for photographic image scaling). This uses a grid of 4×4 source pixels and interpolates intermediate locations in the center of the grid. The conversion from 1280×720 to 3840×2160 requires a 3× scaling factor in each dimension and is almost the ideal case for an upsampling filter.
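As a concrete sketch of the interpolation described above, the cubic kernel can be shown in one dimension. This is plain illustrative Python; the kernel shape (a = -0.5, the common Catmull-Rom-style choice) is an assumption for demonstration, not MediaKind's exact filter:

```python
def cubic_kernel(x, a=-0.5):
    # Keys cubic convolution kernel: 1 at x = 0, 0 at other integer
    # offsets, so original samples are reproduced exactly.
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def upsample_1d(samples, factor):
    # 4-tap cubic interpolation with edge samples clamped.
    n = len(samples)
    out = []
    for i in range(n * factor):
        src = i / factor              # position in source coordinates
        base = int(src)
        acc = 0.0
        for k in range(-1, 3):        # the 4-sample neighbourhood
            idx = min(max(base + k, 0), n - 1)
            acc += samples[idx] * cubic_kernel(src - (base + k))
        out.append(acc)
    return out
```

Applying `upsample_1d` first along rows and then along columns gives the two-dimensional bi-cubic behaviour; at a 3× factor, every third output sample reproduces a source pixel exactly.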

These types of filters can only interpolate, giving a result that is better than nearest-neighbor or bi-linear interpolation, but the image does not have the appearance of being a higher resolution.

Machine Learning (ML) is a technique whereby a neural network learns patterns from a set of training data. Images are large, and it becomes infeasible to create neural networks that process this data as a complete set. So, a different structure is used for image processing, known as Convolutional Neural Networks (CNNs). CNNs are structured to extract features from the images by successively processing subsets of the source image, and then process the features rather than the raw pixels.

Up-conversion process with neural network processing

The inbuilt non-linearity, in combination with feature-based processing, means CNNs can invent data not in the original image. In the case of up-conversion, we are interested in the ability to create plausible new content that was not present in the original image, but that doesn't modify the nature of the image too much. The CNN used to create the UHD data from the HD source is known as the Generator CNN.

When input source data needs to be propagated through the whole chain, possibly with scaling involved, then a specific variant of a CNN known as a Residual Network (ResNet) is used. A ResNet has a number of stages, each of which includes a contribution from a bypass path that carries the input data. For this study, a ResNet with scaling stages towards the end of the chain was used as the Generator CNN.
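The bypass path of a residual stage can be sketched in miniature. The snippet below is a toy illustration in plain Python, with a 1-D convolution standing in for the ResNet's real convolutional layers; the kernels and shapes are assumptions for demonstration only, not the network used in the study:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def conv1d(signal, kernel):
    # 'same'-padded 1-D convolution, a stand-in for a conv layer.
    pad = len(kernel) // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(len(signal))]

def residual_block(x, k1, k2):
    # F(x) = conv(relu(conv(x))); the output is x + F(x),
    # where x arrives via the bypass path.
    fx = conv1d(relu(conv1d(x, k1)), k2)
    return [xi + fi for xi, fi in zip(x, fx)]
```

With identity kernels the block reduces to y = x + x, which makes the skip connection's contribution easy to see: even if the convolutional path learns nothing useful, the input still propagates through.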

For the Generator CNN to do its job, it must be trained with a set of known data: patches of reference images, with a comparison made between the output and the original. For training, the originals are a set of high-resolution UHD images, down-sampled to produce HD source images, then up-converted and finally compared to the originals.

The difference between the original and synthesized UHD images is calculated by the compare function with the error signal fed back to the Generator CNN. Progressively, the Generator CNN learns to create an image with features more similar to original UHD images.

The training process is dependent on the data set used for training, and the neural network tries to fit the characteristics seen during training onto the current image. This is intriguingly illustrated in Google's AI Blog [1], where a neural network presented with a random noise pattern introduces shapes like the ones used during training. It is important that a diverse, representative content set is used for training. Patches from about 800 different images were used for training during MediaKind's research.

The compare function affects the way the Generator CNN learns to process the HD source data. It is easy to calculate a sum of absolute differences between the original and synthesized images. This, however, suffers from training-set imbalance: real pictures contain large areas with relatively little fine detail, so the data set is biased towards regenerating a result very similar to that produced by a bicubic interpolation filter.
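The sum-of-absolute-differences compare function described here is straightforward to write down. A minimal per-pixel sketch, assuming flattened pixel lists for simplicity:

```python
def sad_compare(synthesized, original):
    # Pixel-wise sum of absolute differences, normalised per pixel;
    # this value is fed back to the Generator CNN as the error signal.
    assert len(synthesized) == len(original)
    return sum(abs(s - o) for s, o in zip(synthesized, original)) / len(original)
```

Because most pixels in real pictures differ little from a smooth interpolation, this loss alone rewards blurry output, which is exactly the imbalance described above.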

This doesn't really achieve the objective of creating plausible fine detail.

Generative Adversarial Networks (GANs) are a relatively new concept [2], in which a second neural network, known as the Discriminator CNN, is used and is itself trained during the training process of the Generator CNN. The Discriminator CNN learns to detect the difference between features that are characteristic of original UHD images and of synthesized UHD images. During training, the Discriminator CNN sees either an original UHD image or a synthesized UHD image, with the detection correctness fed back to the discriminator and, if the image was a synthesized one, also fed back to the Generator CNN.

Each CNN is attempting to beat the other: the Generator by creating images that have characteristics more like originals, while the Discriminator becomes better at detecting synthesized images.

The result is the synthesis of feature details that are characteristic of original UHD images.

With a GAN approach, there is no real constraint to the ability of the Generator CNN to create new detail everywhere. This means the Generator CNN can create images that diverge from the original image in more general ways. A combination of both compare functions can offer a better balance, retaining the detail regeneration, but also limiting divergence. This produces results that are subjectively better than conventional up-conversion.
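One way to express the combination of the two compare functions is a weighted sum of a pixel loss and an adversarial loss. The sketch below is an assumption about the form of that combination; the log-loss form, the weighting factor, and the names are illustrative, not taken from the article:

```python
import math

def generator_loss(synth, original, disc_score, adv_weight=1e-3):
    # The pixel term limits divergence from the source image; the
    # adversarial term (the Discriminator CNN's probability that the
    # image is real) rewards plausible fine detail.
    # adv_weight is an illustrative balance factor.
    pixel = sum(abs(s - o) for s, o in zip(synth, original)) / len(original)
    adversarial = -math.log(max(disc_score, 1e-12))
    return pixel + adv_weight * adversarial
```

Tuning the weight trades detail synthesis against fidelity: a larger adversarial weight produces sharper but less constrained images.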

Conversion from 1080i60 to 2160p60 is necessarily more complex than from 720p60. Starting from 1080i, there are three basic approaches to up-conversion:

Training data is again required, and here it must come from 2160p video sequences. From these, a set of fields is created by downsampling, with each field coming from one frame in the original 2160p sequence, so the fields are not temporally co-located.

Surprisingly, results from field-based up-conversion tended to be better than using de-interlaced frame conversion, despite using sophisticated motion-compensated de-interlacing: the frame-based conversion was dominated by artifacts from the de-interlacing process. However, potentially useful data from the opposite fields did not contribute to the result, so the field-based approach missed data that could have produced a better result.

A solution to this is to use data from multiple fields directly as the source for a modified Generator CNN, letting the GAN learn how best to perform the de-interlacing function. This approach was adopted and re-trained with a new set of video-based data in which adjacent fields were also provided.
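Presenting adjacent fields to the modified Generator CNN amounts to stacking them as extra input channels. A minimal sketch; the channel layout and the edge handling (repeating the first and last fields) are assumptions:

```python
def fields_as_channels(fields, i):
    # Previous, current and next field presented together, so the
    # network can learn the de-interlacing function itself.
    # At sequence edges the boundary field is repeated.
    prev = fields[max(i - 1, 0)]
    nxt = fields[min(i + 1, len(fields) - 1)]
    return [prev, fields[i], nxt]
```

The temporal neighbours give the generator access to the opposite-field samples that the purely field-based approach discarded.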

This led to both high visual spatial resolution and good temporal stability. The results are, of course, best viewed as a video sequence; however, an example of one frame from a test sequence shows the comparison:

Comparison of a sample frame from different up-conversion techniques against original UHD

Up-conversion using a hybrid GAN with multiple fields was effective across a range of content, but is especially relevant for the visual sports experience to the consumer. This offers a realistic means by which content that has more of the appearance of UHD can be created from both progressive and interlaced HD source, which in turn can enable an improved experience for the fan at home when watching a sports UHD channel.

[1] A. Mordvintsev, C. Olah and M. Tyka, "Inceptionism: Going Deeper into Neural Networks," Google AI Blog, 2015. [Online]. Available: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

[2] I. Goodfellow et al., "Generative Adversarial Nets," Advances in Neural Information Processing Systems, vol. 27, 2014.

Read more:
SVG Tech Insight: Increasing Value of Sports Content Machine Learning for Up-Conversion HD to UHD - Sports Video Group

Hokitika father charged with murder initially claimed son fell out of bed – Stuff.co.nz


David Grant Sinclair is on trial in the High Court at Greymouth for the murder of his 10-month-old son.

A Hokitika baby died with 30 bruises all over his body, fractures to his skull and bleeding on the brain and behind both eyes.

The Crown says the fatal injuries were inflicted by his father but his father alleges they were sustained from a fall down the stairs.

David Grant Sinclair is charged with murdering his 10-month-old son, CJ Bodhi White, at Hokitika on July 9, 2019.

A jury trial began on Monday in the High Court at Greymouth before Justice Rebecca Edwards. It is set down for two weeks and 27 witnesses plus Sinclair himself are due to give evidence.



CJ Bodhi White died aged 10 months in Hokitika on July 9, 2019. His father, David Grant Sinclair, denies murdering the infant.

Crown prosecutor William Taffs said CJ had been in Sinclair's full-time care for only six weeks before his death.

He said Sinclair told his family, first responders and police he fell asleep with CJ in his bed and was woken between 3am and 4am by a thud from CJ falling out of bed.

Taffs told the jury CJ had not been sleeping well because he was teething and would have been in significant pain from injuries inflicted earlier by his father, including a fractured bone in his foot and bruising to his groin and scrotum.

He said Sinclair put his phone into incognito mode at 3.27am and searched: "Does a baby's head flop backwards from concussion."

He then accessed several apps including a gambling site and checked the weather forecast.

At 4.17am, he again searched: "Why has my 1-year-old's neck gone all floppy after falling out of bed?"

He then sent a message to his mother asking her to call him.

When she rang, he told her CJ had fallen out of bed. She arrived at the house less than 10 minutes later, began CPR and rang emergency services.

CJ was unresponsive and was flown to Christchurch Hospital but doctors ruled out surgery because his injuries were unsurvivable. His life support was turned off at 11.40am. He was declared dead 25 minutes later.

CJ had 30 bruises across his body, significant brain injuries, skull fractures, soft swelling to his skull, bleeding to both retinas and swelling, cuts, bleeding and clots on his brain.

Several medical professionals would give evidence that the injuries were not consistent with a fall out of bed on to a carpeted floor. They were consistent with his father hitting his head against a hard object or hitting his head with a hard object, Taffs said.

He had too many bruises in all the wrong places to be accidental. Bruises consistent with finger marks ... inflicted in what the Crown says was a moment of anger or frustration.

Taffs said Sinclair had told medical professionals the historic bruises were caused when he caught CJ's leg in the car seat buckle and from CJ hitting himself with a rattle.

Defence lawyer Andrew McKenzie said the jury would be presented with two vastly different scenarios.

He said Sinclair would give evidence that CJ had fallen down the stairs. The defence would also present evidence from experts.

David Sinclair is guilty of taking too much time to call 111. He is guilty of lying to police and lying to people about his baby falling out of bed. He is guilty of not taking steps to remove the risk of his baby falling down the stairs.

He is not guilty of the crime of murder, McKenzie said.


The jury trial is set down for two weeks and 27 prosecution witnesses will be called to give evidence.

See original here:
Hokitika father charged with murder initially claimed son fell out of bed - Stuff.co.nz

6 Anti Aging Benefits of Metformin – The Shepherd of the Hills Gazette

In one way or another, we are all concerned about how well we are aging, whether that is physically, mentally, or visually in the way we look! Living a healthier lifestyle is something many of us focus on as we age; we all want to live well and experience the most out of life! If you have been looking into ways to improve your overall health and promote healthy aging, you may have heard of metformin. Metformin is becoming an increasingly popular topic when it comes to anti-aging, thanks to the several studies that have been published about its benefits for overall health, mortality, and anti-aging. Metformin is derived from natural compounds in the French lilac plant, which has been used in herbal medicine since the Middle Ages. As a diabetes treatment in use for over 60 years, this medication has an outstanding safety record and is safe and cost-effective.

Leading Harvard scientist Dr. David Sinclair wrote a blog post on the anti-aging benefits of metformin, titled "This cheap pill might help you live a longer, healthier life." Dr. Sinclair outlined numerous studies demonstrating the beneficial effects metformin can have, and we have compiled 6 of the top benefits backed by science.

As we age, we are more likely to face complicated health challenges, like cancer. However, research has shown metformin to have anti-cancer properties that reduce the likelihood of being diagnosed with numerous forms of cancer. In 2009, a promising study was carried out in the UK, with over 62,000 participants. The study revealed that using metformin was associated with a lowered risk of cancer in the colon and pancreas. The study divided participants into four separate groups, based on whether they were receiving monotherapy with metformin or sulfonylurea, combined therapy (metformin plus sulfonylurea), or insulin. Those on metformin monotherapy had the lowest risk of colon and pancreas cancer, whereas those on insulin or insulin secretagogues were more likely to develop cancers.

Additionally, a study conducted by the Mayo Clinic revealed that diabetic women who took metformin had a better survival rate than those who did not. This research is extremely significant because ovarian cancer is the 5th most common cancer in women and has a mortality rate of 65%. The benefits of metformin are impressive and women around the world are taking note.

When taking a closer look at the overall mortality rate of cancer patients with diabetes, a study of 1,300 participants in the Netherlands revealed that metformin use was associated with lower cancer mortality compared with non-users.

Further, metformin users with diabetes were shown to have a reduced risk of colorectal cancer compared to non-users. A comprehensive study that took place in the United States included over 460,000 participants over the course of several years. The results revealed an 8% reduction in the likelihood of a colorectal cancer diagnosis among those who used metformin.

Scientists compared 78,000 people diagnosed with type 2 diabetes against a control group of 78,000 people without diabetes. Patients with type 2 diabetes who were taking metformin had longer survival than the non-diabetic control group over a 5-year period. There have been multiple studies on the use of metformin that confirm the positive effects on overall health and longevity.

In a comprehensive, multi-year study, metformin was shown to reduce the risk of developing diabetes by 31%, and it was equally effective for both men and women. With rates of diabetes on the rise, and the associated health issues serious and potentially life-threatening, these results have significant potential for those at risk of developing diabetes.
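Figures like the 31% above are relative risk reductions. The sketch below shows the standard calculation; the incidence values used are illustrative, not the study's raw data:

```python
def relative_risk_reduction(risk_treated, risk_control):
    # RRR = 1 - relative risk, where relative risk is the incidence
    # in the treated group divided by the incidence in the controls.
    return 1 - risk_treated / risk_control
```

For example, an incidence of 6.9% under treatment against 10% among controls corresponds to a 31% relative risk reduction.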

A 2009 study of 390 patients in a randomized, placebo-controlled trial showed that metformin reduced the risk of macrovascular disease. This study included a 4.3-year follow-up period and demonstrated that metformin can significantly reduce cardiovascular mortality. Additionally, the study showed that metformin can reduce the incidence of cardiovascular events in both diabetic, and non-diabetic patients with coronary heart disease. As such, the benefits of metformin are effective for anti-aging purposes, due to the fact cardiovascular health typically declines with age.

A study of 67,731 non-demented participants over 65 years of age ran from January 2004 to December 2009. It revealed that diabetes is associated with an increased risk of dementia, and that when patients were given sulfonylureas or metformin, rather than thiazolidinediones, for a longer period, the risk was reduced. More specifically, the study determined that metformin use showed a significant inverse association with cognitive impairment. This large-scale study controlled for age, education, diabetes duration, fasting blood glucose, and vascular and non-vascular risk factors, making it a significant finding for the anti-aging effects of metformin.

Weight loss is something on many people's minds as they age, and not just for superficial reasons! Weight gain can affect mobility, can negatively impact cardiovascular health, and can result in a wide array of other health challenges. As we age, it often becomes more difficult to lose weight. A 2012 study of 154 patients was conducted over a 6-month period in Germany. The purpose of the study was to determine the efficacy of metformin for the treatment of obesity. The results were impressive and demonstrated that metformin is "an effective drug to reduce weight in a naturalistic outpatient setting in insulin sensitive and insulin resistant overweight and obese patients."

The anti-aging benefits of metformin have been experienced by thousands of patients around the world. Not only can it reduce mortality, but it can improve longevity, overall health, and quality of life as one ages. The studies on metformin have been extensive, and the results truly do speak for themselves. In addition to the benefits listed above, studies have shown metformin slows down the rate of DNA damage. Though access to metformin has not always been easy, there are telehealth subscription services available that have improved access to buy metformin. AgelessRx.com is an American-based company that provides high-quality metformin through its telehealth subscription service.

More here:
6 Anti Aging Benefits of Metformin - The Shepherd of the Hills Gazette

We should allow ourselves to be #pharmaproud – pharmaphorum

In the hours after Pfizer's momentous vaccine news emerged on Monday, #pfizerproud popped up on my social media feeds again and again from the firm's employees, both past and present.

I've been an avid observer of pharma social media for some time and this is somewhat unusual. Not for pharma employees to be proud of the work they do, but such a spontaneous and widespread demonstration of pride in our industry is not normally seen, though it is thoroughly deserved here.

An interim analysis of the COVID-19 vaccine candidate that Pfizer has been working on with BioNTech found it to be more than 90% effective at countering the disease, and the company said it expected to be in a position to file BNT162b2 for FDA approval in the third week of November.

The phase 3 trial results are a huge advance in the fight against the global coronavirus pandemic. The study, which only began at the end of July, has enrolled 43,538 patients to date and has shown that protection against COVID-19 is achieved 28 days after the initiation of the two-dose vaccination.
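The "more than 90% effective" headline is a ratio of attack rates between the two trial arms. The sketch below shows the standard calculation; the case counts used are purely illustrative, as the interim analysis did not disclose the split:

```python
def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
    # VE = 1 - (attack rate in vaccinated arm / attack rate in placebo arm)
    attack_vax = cases_vaccinated / n_vaccinated
    attack_placebo = cases_placebo / n_placebo
    return 1 - attack_vax / attack_placebo
```

With equal arm sizes and, say, 8 cases among the vaccinated against 86 on placebo, efficacy works out above 90%.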

As Pfizer's CEO Albert Bourla said: "Today is a great day for science and humanity. We are a significant step closer to providing people around the world with a much-needed breakthrough to help bring an end to this global health crisis."

The work to date certainly justifies Bourla's insistence on pushing his vaccine research and manufacturing leadership to think differently about the issue and move quicker than they would have thought possible.

The scientists have done their job

"Think in different terms," he told them back in March, according to Forbes, when the COVID-19 pandemic was beginning to overwhelm countries like Italy and Spain in Western Europe.

"Think you have an open chequebook, you don't need to worry about such things. Think that we will do things in parallel, not sequential. Think you need to build manufacturing of a vaccine before you know what's working. If it doesn't, let me worry about it and we will write it off and throw it out."

His approach is certainly in keeping with the transformative nature of 2020 and the innovations and adaptations that the year has so far forced on us all. It was, after all, shortly after the outbreak began in January that scientists from China published details of the SARS-CoV-2 virus.

Of course, the Pfizer/BioNTech COVID-19 vaccine is just one of many in development, and its study is still ongoing, collecting additional safety and efficacy data.

Its final vaccine efficacy percentage may vary from the headline-grabbing results released this week, as the companies themselves have noted, and many wider questions remain for policymakers and politicians. There's the ongoing issue of public attitudes to vaccines and trust, deliberations on how best to distribute Pfizer's, or any other company's, COVID-19 vaccine, and the financial returns of any vaccines will be sure to be scrutinised.

"Having a vaccine which works is just the starting point," acknowledged David Sinclair, director of the UK charity and thinktank the International Longevity Centre. But he added: "That we are one step closer to a vaccine against Covid-19 is brilliant news. The scientists have done their job."

It's a sentiment that can be applied to all of those across the industry who have been working, directly or indirectly, on COVID-19 and all the healthcare outcomes affected by the pandemic.

So, although I started this article focusing on #pfizerproud, the industry should also be #gileadproud, #astrazenecaproud, #lillyproud and so on.

Hope from medicines, vaccines and health tech

Pharma has always existed in close intersection with mainstream society. It's an industry that touches all of our lives with its vital role in our healthcare, but this year has, unfortunately, given it even more resonance.

At a time when the public is obsessing over infection rates, the R number and COVID-19's deadly toll, like many in the industry I've been having really quite detailed conversations with non-pharma friends about clinical trials, vaccines and public health.

The upshot of those conversations, in addition to a burning desire for rapid progress, is that we need pharma now more than ever.

As ABPI chief executive Richard Torbett said earlier this week when talking about the importance of vaccines: "Millions of people all over the world are living under some form of restrictions. The organisations who research, develop and manufacture medicines, vaccines and health tech are our best hope of treating, preventing or one day even eradicating the virus."

Much as Joe Biden's win in the US presidential election provides a sense of a weight having been lifted from the minds of many, in the US and far around the world, Pfizer's COVID-19 vaccine clinical trial results bring a similar sense of relief.

In neither case are we out of the woods yet, and things may well get worse before they get better, but the last week has provided some very welcome news indeed.

So, for now, let's celebrate a major step towards the emergence of a COVID-19 vaccine and be #pharmaproud about the huge contribution the industry has made, and is making, during this global health emergency.

About the author

Dominic Tyer is a journalist and editor specialising in the pharmaceutical and healthcare industries. He is currently pharmaphorum's interim managing editor and is also creative and editorial director at the company's specialist healthcare content consultancy pharmaphorum connect.

Connect with Dominic on LinkedIn or Twitter

Continue reading here:
We should allow ourselves to be #pharmaproud - - pharmaphorum

Machine Learning as a Service (MLaaS) Market Size Explores Growth Opportunities from 2020 to 2028 – The Think Curiouser

The total % of ICT Goods Exports around the Globe Increased from 11.20% in 2016 to 11.51% in 2017 UNCTAD

CRIFAX added a new market research report on 'Global Machine Learning as a Service (MLaaS) Market, 2020-2028' to its database of market research collaterals, consisting of the overall market scenario with prevalent and future growth prospects, among other growth strategies used by key players to stay ahead of the game. Additionally, recent trends, mergers and acquisitions, and region-wise growth analysis, along with challenges that are affecting the growth of the market, are also stated in the report.

Be it artificial intelligence (AI), the internet of things (IoT) or digital reality, the increased rate of technological advancement around the world is directly proportional to the growth of the global Machine Learning as a Service (MLaaS) Market. In the next two years, more than 20 billion devices are predicted to be connected to the internet. With hundreds of devices getting connected to the internet every second, the worldwide digital transformation in various industries is estimated to provide value-producing prospects in the global Machine Learning as a Service (MLaaS) Market, which is further anticipated to significantly boost market revenue throughout the forecast period, i.e., 2020-2028.

Get Exclusive Sample Report Copy Of This Report @ https://www.crifax.com/sample-request-1000774

Over the last two decades, investments by the ICT industry have contributed extensively to strengthening the economic growth of developed, developing and emerging countries. According to statistics provided by the United Nations Conference on Trade and Development (UNCTAD), the total export share of ICT goods, such as computers, peripherals, and communication and electronic equipment, among other IT goods, grew worldwide from 10.62% in 2011 to 11.51% in 2017. The highest share was recorded in Hong Kong, at 51.7% in 2017, followed by the Philippines, Singapore and Malaysia. Additionally, growth in the global economy, coupled with various initiatives proposed by the governments of different nations to meet their policy objectives, is estimated to hone the growth of the Global Machine Learning as a Service (MLaaS) Market in upcoming years.

Not only does the ever-growing IT sector bring with it numerous advancements, it also creates a fair number of challenges when it comes to security concerns pertaining to users' data storage. With increasing availability of internet access leading to a rising number of internet users, a vast amount of user information is being stored online through cloud services. This has driven many nations to compile laws (such as the European Union's GDPR and the U.S.'s CLOUD Act) in an attempt to protect their citizens' data. In addition, the growth of the global Machine Learning as a Service (MLaaS) Market might also be obstructed by a lack of skilled professionals. To overcome this obstacle, companies should focus on providing the skills and training their workforce requires in order to keep up in this digital era.

Download Sample of This Strategic Report: https://www.crifax.com/sample-request-1000774

Furthermore, to provide a better understanding of internal and external marketing factors, multi-dimensional analytical tools such as SWOT and PESTEL analysis have been implemented in the global Machine Learning as a Service (MLaaS) Market report. Moreover, the report consists of market segmentation, CAGR (Compound Annual Growth Rate), BPS analysis, Y-o-Y growth (%), Porter's five forces model, absolute $ opportunity and the anticipated cost structure of the market.
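Of the metrics listed, CAGR is the simplest to pin down. A minimal sketch of the standard formula, with illustrative values:

```python
def cagr(start_value, end_value, years):
    # Compound Annual Growth Rate: the constant yearly rate that takes
    # start_value to end_value over the given number of years.
    return (end_value / start_value) ** (1 / years) - 1
```

A market growing from 100 to 121 over two years, for example, has a CAGR of 10%.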

About CRIFAX

CRIFAX is driven by integrity and commitment to its clients, and provides cutting-edge marketing research and consulting solutions with a step-by-step guide to accomplishing their business prospects. With the help of our industry experts, who have hands-on experience in their respective domains, we make sure that our industry enthusiasts understand all the business aspects relating to their projects, which further improves the consumer base and the size of their organization. We offer a wide range of unique marketing research solutions, ranging from customized and syndicated research reports to consulting services, and we update our syndicated research reports annually to make sure that they are modified according to the latest and ever-changing technology and industry insights. This has helped us to carve a niche in delivering distinctive business services that have enhanced our global clients' trust in our insights, and helped us outpace our competitors as well.

For More Updates Follow: LinkedIn | Twitter

Contact Us:

CRIFAX

Email: [emailprotected]

U.K. Phone: +44 161 394 2021

U.S. Phone: +1 917 924 8284

More Related Reports:-

Europe Next-generation Organic Solar Cell MarketEurope 5G in Healthcare MarketEurope IoT in Elevators MarketEurope Smart Indoor Garden MarketEurope Com

The total % of ICT Goods Exports around the Globe Increased from 11.20% in 2016 to 11.51% in 2017 UNCTAD

CRIFAX added a new market research report onGlobalMachine Learning as a Service (MLaaS) Market, 2020-2028to its database of market research collaterals consisting of overall market scenario with prevalent and future growth prospects, among other growth strategies used by key players to stay ahead of the game. Additionally, recent trends, mergers and acquisitions, region-wise growth analysis along with challenges that are affecting the growth of the market are also stated in the report.

Be it artificial intelligence (AI), internet of things (IoT) or digital reality, the increased rate of technological advancements around the world is directly proportional to the growth of global Machine Learning as a Service (MLaaS) Market. In the next two years, more than 20 billion devices are predicted to be connected to internet. With hundreds of devices getting connected to internet every second, the worldwide digital transformation in various industries is estimated to provide value-producing prospects in the global Machine Learning as a Service (MLaaS) Market, which is further anticipated to significantly boost the market revenue throughout the forecast period, i.e., 2020-2028.

Get Exclusive Sample Report Copy Of This Report @https://www.crifax.com/sample-request-1000774

Over the last two decades, investments by the ICT industry have contributed extensively to strengthening the economic growth of developed, developing and emerging countries. According to statistics provided by the United Nations Conference on Trade and Development (UNCTAD), the total export share of ICT goods such as computers, peripherals, communication and electronic equipment, among other IT goods, around the world grew from 10.62% in 2011 to 11.51% in 2017. The highest was recorded in Hong Kong, with 51.7% in 2017, followed by the Philippines, Singapore and Malaysia. Additionally, growth in the global economy, coupled with various initiatives proposed by the governments of different nations to meet their policy objectives, is estimated to hone the growth of the Global Machine Learning as a Service (MLaaS) Market in upcoming years.

Not only does the ever-growing IT sector bring with it numerous advancements, it also creates a fair number of challenges when it comes to security concerns around data storage. With increasing availability of internet access leading to a rising number of internet users, a vast amount of user information is being stored online through cloud services. This has driven many nations to compile laws (such as the European Union's GDPR and the U.S.'s CLOUD Act) in an attempt to protect their citizens' data. In addition, the growth of the global Machine Learning as a Service (MLaaS) Market might also be obstructed by a lack of skilled professionals. To overcome this obstacle, companies should focus on providing the required skills and training to their workforce in order to keep up in this digital era.


Furthermore, to provide a better understanding of internal and external marketing factors, multi-dimensional analytical tools such as SWOT and PESTEL analysis have been applied in the global Machine Learning as a Service (MLaaS) Market report. Moreover, the report covers market segmentation, CAGR (Compound Annual Growth Rate), BPS analysis, Y-o-Y growth (%), Porter's five forces model, absolute $ opportunity and the anticipated cost structure of the market.

About CRIFAX

CRIFAX is driven by integrity and commitment to its clients, and provides cutting-edge marketing research and consulting solutions with a step-by-step guide to accomplishing their business prospects. With the help of our industry experts, who have hands-on experience in their respective domains, we make sure that our clients understand all the business aspects relating to their projects, which further improves their consumer base and the size of their organization. We offer a wide range of unique marketing research solutions, ranging from customized and syndicated research reports to consulting services; our syndicated research reports are updated annually to make sure they reflect the latest and ever-changing technology and industry insights. This has helped us carve a niche in delivering distinctive business services, enhanced our global clients' trust in our insights, and helped us outpace our competitors.

For more updates, follow: LinkedIn | Twitter

Contact Us:

CRIFAX

Email: [emailprotected]

U.K. Phone: +44 161 394 2021

U.S. Phone: +1 917 924 8284

More Related Reports:

Europe Next-generation Organic Solar Cell Market | Europe 5G in Healthcare Market | Europe IoT in Elevators Market | Europe Smart Indoor Garden Market | Europe Compact Industrial Metal AM Printer Market | Europe Counter Drone Market | Europe 5G Applications and Services Market | Europe Smart Manufacturing Platform Market | Europe Emotion Recognition and Sentiment Analysis Market | Europe Construction & Demolition Robots Market

See the rest here:
Machine Learning as a Service (MLaaS) Market Size Explores Growth Opportunities from 2020 to 2028 - The Think Curiouser

The consistency of machine learning and statistical models in predicting clinical risks of individual patients – The BMJ

Now, imagine a machine learning system with an understanding of every detail of a person's entire clinical history and the trajectory of their disease. With the clinician's push of a button, such a system would be able to provide patient-specific predictions of expected outcomes if no treatment is provided, to support the clinician and patient in making what may be life-or-death decisions.[1] This would be a major achievement. The English NHS is currently investing £250 million in Artificial Intelligence (AI). Part of this AI work could help to identify patients most at risk of diseases such as heart disease or dementia, allowing for earlier diagnosis and cheaper, more focused, personalised prevention.[2] Multiple papers have suggested that machine learning outperforms statistical models, including in cardiovascular disease risk prediction.[3-6] We tested whether this is true, using prediction of cardiovascular disease as an exemplar.

Risk prediction models have been implemented worldwide in clinical practice to help clinicians make treatment decisions. As an example, guidelines by the UK National Institute for Health and Care Excellence recommend that statins are considered for patients with a predicted 10-year cardiovascular disease risk of 10% or more.[7] This is based on the estimate from QRISK, which was derived using a statistical model.[8] Our research evaluated whether the predicted cardiovascular disease risk for an individual patient would be similar if another model, such as a machine learning model, were used, as different predictions could lead to different treatment decisions for a patient.

An electronic health record dataset was used for this study, with similar risk factor information used across all models. Nineteen different prediction techniques were applied, including 12 families of machine learning models (such as neural networks) and seven statistical models (such as Cox proportional hazards models). The various models had similar population-level performance (C-statistics of about 0.87 and similar calibration). However, the predicted individual CVD risks varied widely between and within different types of machine learning and statistical models, especially in patients at higher CVD risk. Most of the machine learning models tested in this study do not take censoring into account by default (i.e., loss to follow-up over the 10 years). As a result, these models substantially underestimated cardiovascular disease risk.
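The censoring effect described above can be illustrated with a small simulation. This is a hypothetical sketch (not the paper's data or models): a naive event-rate estimate, which treats patients lost to follow-up as event-free, is compared against a Kaplan-Meier estimate that properly accounts for censoring.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical cohort: true time to CVD event is exponential (about 28%
# 10-year risk); patients are lost to follow-up at a uniform censoring time.
event_time = rng.exponential(scale=30.0, size=n)
censor_time = rng.uniform(0.0, 10.0, size=n)
horizon = 10.0

follow_up = np.minimum(censor_time, horizon)
observed_time = np.minimum(event_time, follow_up)
event_observed = event_time <= follow_up

# Naive estimate: treat every censored patient as event-free at 10 years.
naive_risk = event_observed.mean()

# Kaplan-Meier estimate: at each event, survival shrinks by the fraction
# of patients still at risk who had the event at that time.
order = np.argsort(observed_time)
d = event_observed[order]
at_risk = n - np.arange(n)                 # patients still under observation
km_survival = np.prod(np.where(d, 1.0 - 1.0 / at_risk, 1.0))
km_risk = 1.0 - km_survival

true_risk = 1.0 - np.exp(-horizon / 30.0)  # analytic 10-year risk
print(f"true {true_risk:.3f}  KM {km_risk:.3f}  naive {naive_risk:.3f}")
```

Under heavy censoring the naive estimate falls far below the true 10-year risk, while the Kaplan-Meier estimate stays close to it, which is the underestimation mechanism the study describes.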

The level of consistency within and between models should be assessed before they are used for treatment decision making, as an arbitrary choice of technique and model could lead to a different treatment decision.

So, can a push of a button provide patient-specific risk prediction estimates by machine learning? Yes, it can. But should we use such estimates for patient-specific treatment decision making if these predictions are model-dependent? Machine learning may be helpful in some areas of healthcare, such as image recognition, and could be as useful as statistical models for population-level prediction tasks. But in terms of predicting risk for individual decision making, we think a lot more work could be done. Perhaps the claim that machine learning will revolutionise healthcare is a little premature.

Yan Li, doctoral student of statistical epidemiology, Health e-Research Centre, Health Data Research UK North, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester.

Matthew Sperrin, senior lecturer in health data science, Health e-Research Centre, Health Data Research UK North, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester.

Darren M Ashcroft, professor of pharmacoepidemiology, Centre for Pharmacoepidemiology and Drug Safety, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester.

Tjeerd Pieter van Staa, professor in health e-research, Health e-Research Centre, Health Data Research UK North, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester.

Competing interests: None declared.

References:

Link:
The consistency of machine learning and statistical models in predicting clinical risks of individual patients - The BMJ - The BMJ

Quantum computers are coming. Get ready for them to change everything – ZDNet

Supermarket aisles filled with fresh produce are probably not where you would expect to discover some of the first benefits of quantum computing.

But Canadian grocery chain Save-On-Foods has become an unlikely pioneer, using quantum technology to improve the management of in-store logistics. In collaboration with quantum computing company D-Wave, Save-On-Foods is using a new type of computing, which is based on the downright weird behaviour of matter at the quantum level. And it's already seeing promising results.

The company's engineers approached D-Wave with a logistics problem that classical computers were incapable of solving. Within two months, the concept had translated into a hybrid quantum algorithm that was running in one of the supermarket stores, reducing the computing time for some tasks from 25 hours per week down to mere seconds.


Save-On-Foods is now looking at expanding the technology to other stores, and exploring new ways that quantum could help with other issues. "We now have the capability to run tests and simulations by adjusting variables and see the results, so we can optimize performance, which simply isn't feasible using traditional methods," a Save-On-Foods spokesperson tells ZDNet.

"While the results are outstanding, the two most important things from this are that we were able to use quantum computing to attack our most complex problems across the organization, and can do it on an ongoing basis."

The remarkable properties of quantum computing boil down to the behaviour of qubits -- the quantum equivalent of the classical bits that encode information for today's computers in strings of 0s and 1s. But unlike bits, which can only be 0 or 1, qubits can take on a quantum-specific state in which they exist as 0 and 1 in parallel, known as superposition.

Qubits therefore enable quantum algorithms to explore many states at the same time, and at exponential scale: the more qubits, the more variables can be explored, all in parallel. Some of the largest problems, which would take classical computers with their single-state bits tens of thousands of years to explore, could be solved by qubits in minutes.
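The superposition just described can be simulated classically as a vector of complex amplitudes. This is a minimal numpy sketch for illustration, not any vendor's quantum SDK: a Hadamard gate puts one qubit into equal superposition, and combining three such qubits yields amplitudes over all eight basis states at once.

```python
import numpy as np

# A single qubit is a 2-component complex vector; n qubits need 2**n amplitudes.
zero = np.array([1, 0], dtype=complex)                       # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

qubit = H @ zero                  # equal superposition of |0> and |1>
probs = np.abs(qubit) ** 2        # Born rule: measurement probabilities
print(probs)                      # [0.5 0.5]

# Three qubits in superposition span all 2**3 = 8 basis states simultaneously.
state = qubit
for _ in range(2):
    state = np.kron(state, qubit)
print(np.abs(state) ** 2)         # eight equal probabilities of 1/8
```

The exponential cost of tracking 2**n amplitudes on a classical machine is exactly what a real quantum processor sidesteps by storing the state physically.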

The challenge lies in building quantum computers that contain enough qubits for useful calculations to be carried out. Qubits are temperamental: they are error-prone, hard to control, and always on the verge of falling out of their quantum state. Typically, scientists have to encase quantum computers in extremely cold, large-scale refrigerators, just to make sure that qubits remain stable. That's impractical, to say the least.

This is, in essence, why quantum computing is still in its infancy. Most quantum computers currently work with less than 100 qubits, and tech giants such as IBM and Google are racing to increase that number in order to build a meaningful quantum computer as early as possible. Recently, IBM ambitiously unveiled a roadmap to a million-qubit system, and said that it expects a fault-tolerant quantum computer to be an achievable goal during the next ten years.

IBM's CEO Arvind Krishna and director of research Dario Gil in front of a ten-foot-tall super-fridge for the company's next-generation quantum computers.

Although it's early days for quantum computing, there is still plenty of interest from businesses willing to experiment with what could prove to be a significant development. "Multiple companies are conducting learning experiments to help quantum computing move from the experimentation phase to commercial use at scale," Ivan Ostojic, partner at consultant McKinsey, tells ZDNet.

Certainly tech companies are racing to be seen as early leaders. IBM's Q Network started running in 2016 to provide developers and industry professionals with access to the company's quantum processors, the latest of which, a 65-qubit device called Hummingbird, was released on the platform last month. Recently, US multinational Honeywell took its first steps on the quantum stage, making the company's trapped-ion quantum computer available to customers over the cloud. Rigetti Computing, which has been operating since 2017, is also providing cloud-based access to a 31-qubit quantum computer.

Another approach, called quantum annealing, is especially suitable for optimisation tasks such as the logistics problems faced by Save-On-Foods. D-Wave has proven a popular choice in this field, and has offered a quantum annealer over the cloud since 2010, which it has now upgraded to a 5,000-qubit-strong processor.

A quantum annealing processor is much easier to control and operate than the devices that IBM, Honeywell and Rigetti are working on, which are called gate-model quantum computers. This is why D-Wave's team has already hit much higher numbers of qubits. However, quantum annealing is only suited to specific optimisation problems, and experts argue that the technology will be comparatively limited when gate-model quantum computers reach maturity.
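To give a sense of what an annealer actually optimises: D-Wave-style machines minimise a QUBO (quadratic unconstrained binary optimisation) objective. The toy problem below, picking exactly two of four hypothetical delivery routes, is invented for illustration and solved here by classical brute force; that exhaustive search over 2**n assignments is precisely what becomes infeasible as the number of binary variables grows.

```python
import itertools
import numpy as np

# Hypothetical QUBO: choose exactly 2 of 4 delivery routes to maximise value.
# Annealers minimise E(x) = x^T Q x over binary vectors x.
value = np.array([3.0, 1.0, 4.0, 2.0])  # made-up value of each route
penalty = 5.0                           # weight enforcing "pick exactly 2"
n = len(value)

# The constraint penalty*(sum(x) - 2)**2 is expanded into QUBO form using
# x_i**2 = x_i; the constant 4*penalty is dropped (it never changes the argmin).
Q = np.diag(-value - 3.0 * penalty)     # linear terms on the diagonal
for i in range(n):
    for j in range(i + 1, n):
        Q[i, j] = 2.0 * penalty         # pairwise penalty terms

# Classical brute force over all 2**n binary assignments.
best_x, best_e = None, float("inf")
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits, dtype=float)
    e = x @ Q @ x
    if e < best_e:
        best_x, best_e = x, e

print(best_x)  # routes 0 and 2 (values 3 and 4) are selected
```

An annealer samples low-energy states of this same objective physically instead of enumerating them, which is why the formulation step, not the search, is the user's main job.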

The suppliers of quantum processing power are increasingly surrounded by third-party companies that act as intermediaries with customers. Zapata, QC Ware or 1QBit, for example, provide tools ranging from software stacks to training, to help business leaders get started with quantum experiments.


In other words, the quantum ecosystem is buzzing with activity, and is growing fast. "Companies in the industries where quantum will have the greatest potential for complete disruption should get involved in quantum right now," says Ostojic.

And the exponential compute power of quantum technologies, according to the analyst, will be a game-changer in many fields. Qubits, with their unprecedented ability to solve optimisation problems, will benefit any organisation with a supply chain and distribution route, while shaking up the finance industry by maximising gains from portfolios. Quantum-infused artificial intelligence also holds huge promise, with models expected to benefit from better training on bigger datasets.

One example: by simulating molecular interactions that are too complex for classical computers to handle, qubits will let biotech companies fast-track the discovery of new drugs and materials. Microsoft, for example, has already demonstrated how quantum computers can help manufacture fertilizers with better yields. This could have huge implications for the agricultural sector, as it faces the colossal task of sustainably feeding the growing global population in years to come.

Chemistry, oil and gas, transportation, logistics, banking and cybersecurity are often cited as sectors that quantum technology could significantly transform. "In principle, quantum will be relevant for all CIOs as it will accelerate solutions to a large range of problems," says Ostojic. "Those companies need to become owners of quantum capability."

Chemistry, oil and gas, transportation, logistics, banking or cybersecurity are among the industries that are often pointed to as examples of the fields that quantum technology could transform.

There is a caveat. No CIO should expect to achieve too much short-term value from quantum computing in its current form. However fast-growing the quantum industry is, the field remains defined by the stubborn instability of qubits, which still significantly limits the capability of quantum computers.

"Right now, there is no problem that a quantum computer can solve faster than a classical computer, which is of value to a CIO," insists Heike Riel, head of science and technology at IBM Research Quantum Europe. "But you have to be very careful, because the technology is evolving fast. Suddenly, there might be enough qubits to solve a problem that is of high value to a business with a quantum computer."

And when that day comes, there will be a divide between the companies that prepared for quantum compute power, and those that did not. This is what's at stake for business leaders who are already playing around with quantum, explains Riel. Although no CIO expects quantum to deliver value for the next five to ten years, the most forward-thinking businesses are already anticipating the wave of innovation that the technology will bring about eventually -- so that when it does, they will be the first to benefit from it.

This means planning staffing, skills and projects, and building an understanding of how quantum computing can help solve actual business problems. "This is where a lot of work is going on in different industries, to figure out what the true problems are, which can be solved with a quantum computer and not a classical computer, and which would make a big difference in terms of value," says Riel.

Riel points to the example of quantum simulation for battery development, which companies like car manufacturer Daimler are investigating in partnership with IBM. To increase the capacity and speed-of-charging of batteries for electric vehicles, Daimler's researchers are working on next-generation lithium-sulfur batteries, which require the alignment of various compounds in the most stable configuration possible. To find the best placement of molecules, all the possible interactions between the particles that make up the compound's molecules must be simulated.

This task can be carried out by current supercomputers for simple molecules, but a large-scale quantum solution could one day break new ground in developing the more complex compounds that are required for better batteries.

"Of course, right now the molecules we are simulating with quantum are small in size because of the limited size of the quantum computer," says Riel. "But when we scale the next generation of quantum computers, then we can solve the problem despite the complexity of the molecules."


Similar thinking led oil and gas giant ExxonMobil to join the network of companies that are currently using IBM's cloud-based quantum processors. ExxonMobil started collaborating with IBM in 2019, with the objective of one day using quantum to design new chemicals for low energy processing and carbon capture.

The company's director of corporate strategic research Amy Herhold explains that for the past year, ExxonMobil's scientists have been tapping IBM's quantum capabilities to simulate macroscopic material properties such as heat capacity. The team has focused so far on the smallest of molecules, hydrogen gas, and is now working on ways to scale the method up to larger molecules as the hardware evolves.

A number of milestones still need to be achieved before quantum computing translates into an observable business impact, according to Herhold. Companies will need to have access to much larger quantum computers with low error rates, as well as to appropriate quantum algorithms that address key problems.

"While today's quantum computers cannot solve business-relevant problems -- they are too small and the qubits are too noisy -- the field is rapidly advancing," Herhold tells ZDNet. "We know that research and development is critical on both the hardware and the algorithm front, and given how different this is from classical computing, we knew it would take time to build up our internal capabilities. This is why we decided to get going."

Herhold anticipates that quantum hardware will grow at a fast pace in the next five years. The message is clear: when it does, ExxonMobil's research team will be ready.

One industry that has shown an eager interest in quantum technology is the financial sector. From JP Morgan Chase's partnerships with IBM and Honeywell, to BBVA's use of Zapata's services, banks are actively exploring the potential of qubits, and with good reason. Quantum computers, by accounting for exponentially high numbers of factors and variables, could generate much better predictions of financial risk and uncertainty, and boost the efficiency of key operations such as investment portfolio optimisation or options pricing.

Similar to other fields, most of the research is dedicated to exploring proof-of-concepts for the financial industry. In fact, when solving smaller problems, scientists still run quantum algorithms alongside classical computers to validate the results.

"The classical simulator has an exact answer, so you can check if you're getting this exact answer with the quantum computer," explains Tony Uttley, president of Honeywell Quantum Solutions, as he describes the process of quantum options pricing in finance.

"And you better be, because as soon as we cross that boundary, where we won't be able to classically simulate anymore, you better be convinced that your quantum computer is giving you the right answer. Because that's what you'll be taking into your business processes."
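For context on that validation step: for a European option, the classical "exact answer" comes from the Black-Scholes formula, which a Monte Carlo estimate (or, eventually, a quantum amplitude-estimation price) can be checked against. A minimal sketch with made-up option parameters:

```python
from math import erf, exp, log, sqrt

import numpy as np

# Hypothetical European call; all parameters are invented for illustration.
S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Closed-form Black-Scholes price: the exact classical benchmark.
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
exact = S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Classical Monte Carlo estimate of the same price.
rng = np.random.default_rng(42)
z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * z)
mc = exp(-r * T) * np.maximum(ST - K, 0.0).mean()

print(round(exact, 2), round(mc, 2))  # the two prices should agree closely
```

Once problem sizes cross the classically simulable boundary Uttley describes, this kind of independent cross-check is no longer available, which is why trust has to be built on the small cases first.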

Companies that are currently working on quantum solutions are focusing on what Uttley calls the "path to value creation". In other words, they are using quantum capabilities as they stand to run small-scale problems, building trust in the technology as they do so, while they wait for capabilities to grow and enable bigger problems to be solved.

In many fields, most of the research is dedicated to exploring proof-of-concepts for quantum computing in industry.

Tempting as it might be for CIOs to hope for short-term value from quantum services, it's much more realistic to look at longer timescales, maintains Uttley. "Imagine you have a hammer, and somebody tells you they want to build a university campus with it," he says. "Well, looking at your hammer, you should ask yourself how long it's going to take to build that."

Quantum computing holds the promise that the hammer might, in the next few years, evolve into a drill and then a tower crane. The challenge, for CIOs, is to plan now for the time that the tools at their disposal get the dramatic boost that's expected by scientists and industry players alike.

It is hard to tell exactly when that boost will come. IBM's roadmap announces that the company will reach 1,000 qubits in 2023, which could mark the start of early value creation in pharmaceuticals and chemicals, thanks to the simulation of small molecules. But although the exact timeline is uncertain, Uttley is adamant that it's never too early to get involved.

"Companies that are forward-leaning already have teams focused on this and preparing their organisations to take advantage of it once we cross the threshold to value creation," he says. "So what I tend to say is: engage now. The capacity is scarce, and if you're not already at the front of the line, it may be quite a while before you get in."

Creating business value is a priority for every CIO. At the same time, the barrier to entry for quantum computing is lowering every time a new startup emerges to simplify the software infrastructure and assist non-experts in kickstarting their use of the technology. So there's no time to lose in embracing the technology. Securing a first-class spot in the quantum revolution, when it comes, is likely to be worth it.

Here is the original post:
Quantum computers are coming. Get ready for them to change everything - ZDNet

Australia’s Archer and its plan for quantum world domination – ZDNet

Archer CEO Dr Mohammad Choucair and quantum technology manager Dr Martin Fuechsle

Quantum computing will revolutionise the world; its potential is so immeasurable that the greatest minds in Redmond, Armonk, and Silicon Valley are spending big on quantum development. But a company by the name of Archer Materials wants to put Sydney, Australia, on the map alongside, if not ahead of, these tech giants.

Universal quantum computers leverage the quantum mechanical phenomena of superposition and entanglement to create states that scale exponentially with the number of quantum bits (qubits).

Here's an explanation: What is quantum computing? Understanding the how, why and when of quantum computers

"Quantum computing represents the next generation of powerful computing, you don't really have to know how your phone works on the inside, you just want it to do things that you couldn't do before," Archer CEO Dr Mohammad Choucair told ZDNet.

"And with quantum computing, you can do things that you couldn't necessarily do before."

There is currently a very small set of tasks that a quantum computer can do, but Choucair is hopeful that in the future this will grow to be a little more consumer- and business-facing.

Right now, however, quantum computing, for all intents and purposes, is at a very early stage. It's not going to completely displace a classical computer, but it will give the capacity to do more with what we currently have. Choucair believes this will positively impact a range of sectors that are reliant on an increasing amount of computational power.

"This comes to light when you start to want to optimise very large portfolios, or perform a whole bunch of data crunching, AI and all sorts of buzzwords -- but ultimately, you're looking for more computational power. And you can genuinely get speed-ups in computational power based on certain algorithms for certain problems that are currently being identified," he explained.

"The problems that quantum computers can solve are currently being identified and the end users are being engaged."

Archer describes itself as a materials technology company. Its proposition is simple at heart: "Materials are the tangible physical basis of all technology. We're developing and integrating materials to address complex global challenges in quantum technology, human health, and reliable energy".

There are many components to quantum computing, but Archer is building a qubit processor. 12CQ is touted by the company as a "world-first technology that Archer aims to build for quantum computing operation at room-temperature and integration onboard modern electronic devices".

"We're not building the entire computer, we're building the chipset, the processer at the core of it," Choucair told ZDNet. "That really forms the brain of a quantum computer.

"The difference with us is that we really are looking at on-board use, rather than the heavy infrastructure that's required to house the existing quantum computing architectures.

"This is not all airy-fairy and it is not all blue sky; it's real, there's proven potential, we've published the work, we have the data, we have the science behind us -- it took seven years of immense, immersive R&D."

Archer is building the chip inside an AU$180 million prototype foundry out of the University of Sydney. The funding was provided by the university as well as the government.

"Everyone's playing their role to get this to market," he said.

Choucair is convinced that the potential when Archer "gets this right" will be phenomenal.

"Once you get a minimal viable product, and you can demonstrate the technology can indeed work at room temperature and be integrated into modern-day electronics. I think that's, that's quite disruptive. And it's quite exciting," he said.

Magnified region showing the round qubit clusters, which are billionths of a metre in size, in the centre of qubit control device components (appearing as parallel lines).

Choucair found himself at Archer in 2017 after the company acquired a startup he founded. Straight away, he and the board got started on the strategy it's currently executing on.

"There is very, very small margin for error from the start, in the middle, at the end -- you need to know what you're getting yourself into, what you're doing. This is why I think we've been able to be so successful moving forward, we've been so rapid in our development, because we know exactly what needs to get done," Choucair said.

"The chip is a world first. Science can fail at any stage, everybody knows that, but more often than not, it may or may not -- how uncertain do you want something to be? So for us, the more we develop our chip, the higher the chances of success become."

Read more about Archer's commercial strategy here: Archer looks to commercialisation future with graphene-based biosensor tech

Choucair said materials technology itself was able to reduce a lot of the commercial barriers to entry for Archer, which meant the company could take the work out of the university much sooner.

"The material technology allowed us to do things without the need for heavy cooling infrastructure, which costs millions and millions of dollars and had to be housed in buildings that cost millions and millions of dollars," he explained. "Massive barrier reduced, material could be made simply from common laboratory agents, which means you didn't have to build a billion-dollar facility to control atoms and do all these crazy scientific things at the atomic level.

"And so, really, you end up with the materials technology that was simple to handle, easy to make, and worked at room temperature, and you're like, wow, okay, so now the job for us is to actually build the chip and miniaturise this stuff, which is challenging in itself."

The CEO of the unexplainable has an impressive resumé. He landed at Archer with a strong technical background in nanotechnology, served a two-year mandate on the World Economic Forum Global Council for Advanced Materials, is a fellow of both The Royal Society of New South Wales and The Royal Australian Chemical Institute, and was an academic and research fellow at the University of Sydney's School of Chemistry.

Choucair also has in his armoury Dr Martin Fuechsle, who is recognised for developing the world's smallest transistor, a "single-atom transistor".

"Fuechsle is among the few highly talented physicists in the world capable of building quantum devices that push the boundaries of current information processing technology," Choucair said in January 2019, announcing Fuechsle's appointment. "His skills, experience, and exceptional track record strongly align to Archer's requirements for developing our key vertical of quantum technology."


Archer is publicly listed on the Australian Securities Exchange, but Choucair would reject any claims of it being a crazy proposition.

"20 years ago, a company that was maybe offering something as abstract as an online financial payment system would have been insane too, but if you have a look at the top 10 companies on the Nasdaq, a lot of their core business is embedded in the development of computational architecture, computational hardware," he said.

"We're a very small company, I'm not comparing myself to a Nasdaq-listed company. I'm just saying, the core business... I think it's a unique offering and differentiates us on a stock exchange."

He said quantum technology is something that people are starting to value and see as having potential and scale of opportunity.

Unlike many of the other quantum players in Australia and abroad, Archer is not a result of a spin-off from a university, Choucair claimed.

"The one thing about Archer is that we're not a university spin out -- I think that's what sets us apart, not just in Australia, but globally," he said. "A lot of the time, the quantum is at a university, this is where you go to learn about quantum computing, so it's only natural that it does come out of a university."

Historically, Australia has a reputation of being bad at commercialising research and development. But our curriculum vitae speaks for itself: Spray-on skin, the black box flight recorder, polymer bank notes, and the Cochlear implant, to name a few.

According to Choucair, quantum is next.

"We really are leading the world; we well and truly punch above our weight when it comes to the work that's been done," he said.

"And that quantum technology is across quantum computing and photonics, and sensing -- it's not just quantum computing. We do have a lot of great scientists and those who are developing the technology."

But as highlighted in May by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in its quantum technologies roadmap, there are a lot of gaps that need to be filled over the long term.

"We just have to go out there and get the job done," Choucair said.

"In Australia we have resource constraints, just like anywhere else in the world. And I think there's always a lot more that can be done, we're not doing deep tech as a luxury in this country. From the very top down, there is an understanding, I believe, from our government and from key institutes in the nation that this is what will help us drive forward as a nation."

Archer isn't the only group focused on the promise of quantum tech down under, but Choucair said there's no animosity within the Aussie ecosystem.

"I think we all understand that there's a greater mission at stake here. I can't speak on everyone's behalf, but at Archer we definitely have a vision of making quantum computing widespread -- adopted by consumers and businesses. That's something that we really want to do," he said.

"We have fantastic support here in Australia, there's no doubt about it."

A lot of the work in the quantum space is around education; as Choucair put it, the technology is not something that just comes out of abstractness and then simply exists.

"You have to remember this stuff's all been built off 20, 30, 40 years of research and development, quantum mechanics, engineering, science, and tech -- hundreds and thousands of brilliant minds over the course of two to three generations," the CEO explained.

While the technology is here, and people are building algorithms that run only on quantum computers, there are still another 20 or so years of development to follow.

"This field is not a fast follower field, you don't just get up in the morning and put your slippers on and say you're going to build a quantum computer," he added.

Archer is also part of the IBM Q Network, which is a global network of startups, Fortune 500 companies, and academic research institutes that have access to IBM's experts, developer tools, and cloud-based quantum systems through IBM Q Cloud.

Archer joined the network in May as the first Australian company that's developing a qubit processor.

Choucair said the work cannot be done without partnerships and collaboration alongside the best in the world.

"Yes, there is a race to build quantum computers, but I think more broadly than a race, to just enable the widespread adoption of the technology. And that's not easy. And that takes a concerted effort," he said. "And at this early stage of development, there is a lot of overlap and collaboration.

"There's a bit of a subculture that Australia can't do it -- yeah, we can.

"There's no excuses, right? We're doing it, we're building it, we're getting there. We're working with the very best in the world."

Read more here:
Australia's Archer and its plan for quantum world domination - ZDNet