Latest AI, Machine Learning, Data Extraction and Workflow Optimization Capabilities Added to Blue Prism’s Digital Workforce – Yahoo Finance

Company Continues to Deliver Access to Leading Edge Technologies Through Partnership Program

LONDON and AUSTIN, Texas, April 14, 2020 /PRNewswire/ -- Blue Prism (AIM: PRSM) continues to provide easy access to the latest and most innovative technologies that accelerate and scale automation projects via the company's Digital Exchange (DX), an intelligent automation "app store" and online community. To date, Blue Prism DX assets have been downloaded tens of thousands of times, making it the ideal online community for augmenting and extending traditional RPA deployments.


Every month new Blue Prism affiliate Technology Alliance Program (TAP) partners add their intelligent automation capabilities to the DX, augmenting and extending the power of Blue Prism. Equinix, for example, is using assets found on the DX to streamline business processes for its accounts payable team, resulting in a projected return of up to 7,000 hours to the business annually and a 60 percent reduction in supplier query response times (from one week to two days).

This month innovators like Artificial Solutions, CLEVVA and TAIGER have joined the Blue Prism DX, making their software accessible to all. The latest capabilities on the DX enable organizations to take advantage of conversational AI applications and front-office automations, as well as to gain insights from unstructured data such as emails, chat transcripts, outbound marketing materials, internal memos and legal documents, in a way that hasn't previously been possible.

"The Blue Prism DX community is a game changer because it enables, augments and extends our Digital Workforce capabilities with drag-and-dropease of use," says Linda Dotts, SVP Global Partner Strategy and Programs for Blue Prism. "It provides easy access to the latest innovations in intelligent automation through search and an a la carte menu of options. Our Technology Alliance Partners provide easy access to their software integrations via the DX, so everyone can drive better business outcomes via their Digital Workforce."

Below is a quick summary of the new capabilities being brought to market by these new TAP affiliate partners:

Artificial Solutions enables customers of Blue Prism to extend their RPA implementations through an intelligent conversational interface, all the way from consumer engagement to process execution and resolution. Whether answering a simple billing query, booking a reservation, delivering advice on complex topics, or resetting a home automation task, an intelligent conversational interface delivers a more human-like, natural experience.

The company's award-winning conversational AI platform Teneo allows intelligent conversational chatbots to integrate directly with Blue Prism's Digital Workers, providing a personalized interface that guides and supports end-users as they fill out data. Teneo runs across 36 languages, multiple platforms and channels, and the ability to analyze enormous quantities of conversational data is fully integrated, delivering new levels of customer insight.

"We're delighted to be working with Blue Prism and its customers helping them further extend the value of existing and new RPA implementations," says Andy Peart, Chief Marketing & Strategy Officer at Artificial Solutions. "Teneo's highly conversational capabilities deliver the accuracy needed to understand and respond in a near-humanlike way, improving efficiency for the client while giving users the seamless experience they're looking for."


CLEVVA helps customers realize straight-through processing across staff-assisted and digital self-service channels. This solution enables front office staff to navigate through rule-based decisions and actions, so they get it right, while driving intelligent self-service across any digital interface (website, mobile app, chatbot, social media, in-store kiosk). The combination of CLEVVA's front office digital workers with Blue Prism allows customers to effectively automate end-to-end processes across multiple channels in a consistent, compliant and context-relevant way.

"We allow companies to capture the business logic that sits outside of operational systemsnormally residing in experts, knowledge bases and decision tree scriptsand place it into a digital worker," says Ryan Falkenberg, co-CEO and founder of CLEVVA. "By coupling our ability to navigate customer engagements with Blue Prism's ability to perform required system actions, we're making end-to-end process automation a reality."

TAIGER helps customers drive greater efficiencies through the automation of complex cognitive tasks which require interpretation, understanding and judgment. By using semantic technology, TAIGER's AI software can read documents and accurately extract data points to automate business tasks, while helping customers make faster and better-informed business decisions.

"We are committed to scale our solution and forge strong global partnerships that bring about a new era of productivity for organizations," says Founder and CEO of TAIGER, Dr. Sinuhe Arroyo. "This partnership encourages us to keep pushing the boundaries of AI in our quest to achieve the best in man-machine collaboration."

Joining the TAP is easier than ever with a new self-serve function on Blue Prism's DX. To find out more please visit: https://digitalexchange.blueprism.com/site/global/partner/index.gsp.

About Blue Prism

Blue Prism's vision is to provide a Digital Workforce for Every Enterprise. The company's purpose is to unleash the collaborative potential of humans, operating in harmony with a Digital Workforce, so every enterprise can exceed its business goals and drive meaningful growth, with unmatched speed and agility.

Fortune 500 and public-sector organizations, among customers across 70 commercial sectors, trust Blue Prism's enterprise-grade connected-RPA platform, which has users in more than 170 countries. By strategically applying intelligent automation, these organizations are creating new opportunities and services, while unlocking massive efficiencies that return millions of hours of work back into their business.

Available on-premises, in the cloud, hybrid, or as an integrated SaaS solution, Blue Prism's Digital Workforce automates ever more complex, end-to-end processes that drive a true digital transformation, collaboratively, at scale and across the entire enterprise.

Visit http://www.blueprism.com to learn more or follow Blue Prism on Twitter @blue_prism and on LinkedIn.

© 2020 Blue Prism Limited. "Blue Prism", "Thoughtonomy", the "Blue Prism" logo and Prism device are either trademarks or registered trademarks of Blue Prism Limited and its affiliates. All Rights Reserved.

View original content to download multimedia: http://www.prnewswire.com/news-releases/latest-ai-machine-learning-data-extraction-and-workflow-optimization-capabilities-added-to-blue-prisms-digital-workforce-301037900.html

SOURCE Blue Prism


The pros and cons of AI and ML in DevOps – Information Age

AI and ML are now common within most digital processes, but they bring faults as well as benefits when it comes to DevOps

AI and ML can be very beneficial, but they aren't without their faults.

Machine learning (ML), and artificial intelligence (AI) in general, has been commonly used within DevOps to aid developers and engineers with tasks.

If trained properly, the technology is highly capable of speeding tasks up and getting them done around the clock without its human colleagues needing to be present.

It is here where problems with AI and ML implementation can occur; if not taught properly, AI can display bias, and successful deployment of new software isn't always guaranteed.

Add these possible issues to the challenge of getting staff on board with AI and ML implementation for the first time, and the relationship between this technology and DevOps may not always be the perfect match. With this in mind, let's take a look at some pros and cons.

One common use case of AI and ML is to provide context to the various types of data a company has at its disposal. AI can be taught to categorise data according to its purpose quicker than engineers can.
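
As a concrete illustration of the kind of categorisation described above, here is a minimal sketch of training a text classifier to label operational records by purpose. It assumes scikit-learn is available, and the labels and example records are invented for illustration; it shows the general technique, not any specific product mentioned in this article.

```python
# Minimal sketch: categorising operational text records by purpose.
# Assumes scikit-learn; the labels and training examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: each record is labelled with its purpose.
records = [
    ("disk usage at 92% on node-3", "capacity"),
    ("build 1842 failed: unit tests timed out", "ci-failure"),
    ("user cannot reset password via portal", "support"),
    ("deploy of release 2.4.1 completed", "deployment"),
    ("OOMKilled: api pod restarted 5 times", "capacity"),
    ("flaky test in checkout suite, rerun passed", "ci-failure"),
]
texts, labels = zip(*records)

# TF-IDF features plus a linear classifier is a common, fast baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Categorise a new, unseen record.
print(model.predict(["api pods restarting due to memory pressure"])[0])
```

In practice the training set would be far larger, but the shape of the workflow, label historical records once and let the model categorise the stream thereafter, is the same.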

This is a vital part of DevOps because engineers need to carefully examine code releases in order to ensure successful software deployment.


"AI and ML will be essential to aiding developers in making sense of the information housed across various data warehouses," said Kevin Dallas, CEO of Wind River. "In fact, we believe it will become mandatory for analysing and processing data, as humans simply won't be able to do it themselves."

"It will enable developers to better understand and use the data at hand; for example, to understand not just the error, or the occurrence of a fault, but the detail of what happened in the run up to the fault."

"What's clear is that AI/ML is a vital strategy decision for every form of data, from management and diagnostics to business-based value insights."

A major part of DevOps is ensuring that all possible errors are quickly eradicated before new software is deployed and made available to end users.

Joao Neto, head of quality & acceleration at OutSystems, explained: "With the right data, AI/ML can help us analyse our value streams and help us identify bottlenecks. It can detect inefficiencies and even alert or propose corrective actions."

"Smarter code analysis tools and automatic code reviews will help us detect severe issues earlier and prevent them from being propagated downstream. Testing tools will also become smarter, helping developers identify test coverage gaps and defects."

"We can easily extrapolate similar benefits to other practices and roles like security, architecture, performance, and user experience."

Neto continued by explaining the benefits of AI and ML in experimentation when it comes to DevOps.

"Running experiments is not trivial and typically requires specialised skills that most teams don't have, such as data analysts," he said.

"Picking adequate control groups and understanding if your data is statistically relevant is literally a science. AI/ML can help democratise experimentation and make it accessible to all software teams, maybe even to business roles."

"We can also anticipate that, by combining observability with ML techniques, teams can understand and learn how their customers are using the product, what challenges customers face, and what specific situations lead to system failure."


It's clear that AI and ML have an array of capabilities for benefitting DevOps processes, especially when carrying out analysis in the back end.

However, when it comes to deployment, developers and engineers may need to think more specifically about where it's needed, as working with AI here may not turn out perfect every time.

"A lot of AI projects have been struggling, not so much with the back end analysis such as the building of predictive models, but more with the issue of how to deploy these assets into production," said Peter van der Putten, director of AI systems at Pegasystems.

"To some extent good old DevOps practices can come to the rescue here, such as automated testing, integration and deployment pipelines."

"But there are also requirements specific to deploying AI assets, for example the integration of ethical bias checks, or checks on whether the models to be deployed pass minimal transparency and explainability requirements for the use case at hand."


A criticism that has been made towards AI in DevOps is that it can distract engineering teams from the end goal, and from more human elements of processes that are just as vital to success.

"When it comes to tech and DevOps, we're not talking about strong AI or Artificial General Intelligence that mimics the breadth of human understanding, but soft, or weak AI, and specifically narrow, task-specific intelligence," said Nigel Kersten, field CTO of Puppet. "We're not talking about systems that think, but really just referring to statistics paired with computational power being applied to a specific task."

"Now that sounds practical and useful, but is it useful to DevOps? Sure, but I strongly believe that focusing on this is dangerous and distracts from the actual benefits of a DevOps approach, which should always keep humans front and centre."

"I see far too many enterprise leaders looking to AI and Robotic Process Automation as a way of dealing with the complexity and fragility of their IT environments instead of doing the work of applying systems thinking, streamlining processes, creating autonomous teams, adopting agile and lean methodologies, and creating an environment of incremental progress and continuous improvement."

"Focus on maximising the value of the intelligence your human employees have first before you start looking to the robots for answers. Once you've done that, look to machine learning and statistics to augment your people, automating away even more of their soul-crushing work in narrow domains such as anomaly detection."


While AI and ML have proven to be successful in speeding up DevOps as well as other areas of digital strategy, AI as a whole may need more time to develop and improve.

As this continues to be worked on by developers, what does the future hold for this technology's relationship with DevOps?

"As the application of AI and ML in DevOps grows, we'll increasingly see companies benefit and drive value for the business from more real-time insights, whereby AI and ML frameworks deployed on active systems will be able to optimise the system based on real-time development, validation and operational data," said Dallas.

"These are the digital transformation conversations we've been having with customers across the industries we serve. Companies are realising that they can't do things the traditional way and expect to get the type of results that the new world is looking for."


Qligent Foresight Released as Predictive Analysis Tool – TV Technology

MELBOURNE, Fla. -- Qligent has released its second-generation, cloud-based predictive analysis platform Foresight, which uses AI, machine learning and big data to handle content distribution issues. Foresight is designed to provide real-time, 24/7 data analytics based on system performance and user behavior.

The Foresight platform aggregates data points from end user equipment, including set-top boxes, smart TVs and iOS and Android devices, as well as CDN logs, stream monitoring data, CRMs, support ticketing systems, network monitoring systems and other hardware monitoring systems.

With scalable cloud processing, Foresight's integrated AI and machine learning provide automated data collection, while deep learning technology mines insights from layers of data. Big data technology then correlates and aggregates the data for quality assurance.

Foresight features networked and virtual probes that create a controlled data mining environment, which Qligent says is not compromised by operator error, viewer disinterest, user hardware malfunction or other variables.

Users can access customizable reports that summarize key performance indicators, key quality indicators and other criteria for multiplatform content distribution. All findings are presented on Qligent's dashboard, which is accessible on a computer or mobile device.

The Qligent Foresight system is available immediately. For more information, visit http://www.qligent.com.


Quantum computing – Wikipedia


Quantum computing is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. Computers that perform quantum computation are known as quantum computers.[1]:I-5 Quantum computers are believed to be able to solve certain computational problems, such as integer factorization (which underlies RSA encryption), significantly faster than classical computers. The study of quantum computing is a subfield of quantum information science.

Quantum computing began in the early 1980s, when physicist Paul Benioff proposed a quantum mechanical model of the Turing machine.[2] Richard Feynman and Yuri Manin later suggested that a quantum computer had the potential to simulate things that a classical computer could not.[3][4] In 1994, Peter Shor developed a quantum algorithm for factoring integers that had the potential to decrypt RSA-encrypted communications.[5] Despite ongoing experimental progress since the late 1990s, most researchers believe that "fault-tolerant quantum computing [is] still a rather distant dream".[6] In recent years, investment into quantum computing research has increased in both the public and private sector.[7][8] On 23 October 2019, Google AI, in partnership with the U.S. National Aeronautics and Space Administration (NASA), published a paper in which they claimed to have achieved quantum supremacy.[9] While some have disputed this claim, it is still a significant milestone in the history of quantum computing.[10]

Quantum computing is modeled by quantum circuits. Quantum circuits are based on the quantum bit, or "qubit", which is somewhat analogous to the bit in classical computation. Qubits can be in a 1 or 0 quantum state, or they can be in a superposition of the 1 and 0 states. However, when qubits are measured the result is always either a 0 or a 1; the probabilities of these two outcomes depend on the quantum state that they were in immediately prior to the measurement. Computation is performed by manipulating qubits with quantum logic gates, which are somewhat analogous to classical logic gates.
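
To make the measurement rule concrete, here is a small worked example in standard textbook notation (added for illustration, not part of the original article). A qubit in the superposition

|ψ⟩ = α|0⟩ + β|1⟩, with |α|^2 + |β|^2 = 1,

yields 0 with probability |α|^2 and 1 with probability |β|^2 when measured. For instance, the state with α = √3/2 and β = 1/2 gives the outcome 0 three quarters of the time and 1 one quarter of the time.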

There are currently two main approaches to physically implementing a quantum computer: analog and digital. Analog approaches are further divided into quantum simulation, quantum annealing, and adiabatic quantum computation. Digital quantum computers use quantum logic gates to do computation. Both approaches use quantum bits or qubits.[1]:213

Any computational problem that can be solved by a classical computer can also, in principle, be solved by a quantum computer. Conversely, quantum computers obey the Church-Turing thesis; that is, any computational problem that can be solved by a quantum computer can also be solved by a classical computer. While this means that quantum computers provide no additional power over classical computers in terms of computability, they do in theory provide additional power when it comes to the time complexity of solving certain problems. Notably, quantum computers are believed to be able to quickly solve certain problems that no classical computer could solve in any feasible amount of time, a feat known as "quantum supremacy". The study of the computational complexity of problems with respect to quantum computers is known as quantum complexity theory.

The prevailing model of quantum computation describes the computation in terms of a network of quantum logic gates.[11]

A memory consisting of n bits of information has 2^n possible states. A vector representing all memory states thus has 2^n entries (one for each state). This vector is viewed as a probability vector and represents the fact that the memory is to be found in a particular state.

In the classical view, one entry would have a value of 1 (i.e. a 100% probability of being in this state) and all other entries would be zero. In quantum mechanics, probability vectors are generalized to density operators. This is the technically rigorous mathematical foundation for quantum logic gates, but the intermediate quantum state vector formalism is usually introduced first because it is conceptually simpler. This article focuses on the quantum state vector formalism for simplicity.

We begin by considering a simple memory consisting of only one bit. This memory may be found in one of two states: the zero state or the one state. We may represent the state of this memory using Dirac notation, writing the zero state as |0⟩ and the one state as |1⟩.

The state of this one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by the matrix

X = ( 0 1 ; 1 0 ),

a 2×2 matrix whose rows are (0, 1) and (1, 0); it exchanges the |0⟩ and |1⟩ states.
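
As a small worked example (standard textbook algebra, added here for illustration): applied to a superposition, the NOT gate simply swaps the two amplitudes,

X(α|0⟩ + β|1⟩) = α|1⟩ + β|0⟩,

so X|0⟩ = |1⟩, X|1⟩ = |0⟩, and the measurement probabilities |α|^2 and |β|^2 trade places.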

The mathematics of single qubit gates can be extended to operate on multiqubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit whilst leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated with the controlled NOT (CNOT) gate, which flips its target qubit only when its control qubit is |1⟩. The possible states of a two-qubit quantum memory are |00⟩, |01⟩, |10⟩ and |11⟩.
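
The CNOT's controlled behaviour is easy to check numerically. Below is a minimal sketch using NumPy and the standard textbook matrix for the gate (an illustration added for this writeup, not part of any quantum SDK):

```python
# Minimal sketch: the CNOT gate acting on a two-qubit state vector.
import numpy as np

# Basis order: |00>, |01>, |10>, |11>; the first qubit is the control.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |10>: the control qubit is 1, the target qubit is 0.
state = np.array([0, 0, 1, 0])

# The control is |1>, so the target flips: |10> -> |11>.
print(CNOT @ state)  # [0 0 0 1], i.e. |11>

# With the control in |0>, nothing happens: |01> stays |01>.
print(CNOT @ np.array([0, 1, 0, 0]))  # [0 1 0 0]
```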

In summary, a quantum computation can be described as a network of quantum logic gates and measurements. Any measurement can be deferred to the end of a quantum computation, though this deferment may come at a computational cost. Because of this possibility of deferring a measurement, most quantum circuits depict a network consisting only of quantum logic gates and no measurements. More information can be found in the following articles: universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch-Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction.

Any quantum computation can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay-Kitaev theorem.
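
One concrete instance of such a construction (a standard identity, added here for illustration): a SWAP gate, which exchanges the states of two qubits, can be built from three CNOT gates alternating in direction,

SWAP = CNOT(1→2) · CNOT(2→1) · CNOT(1→2),

so a universal gate set need not include SWAP as a primitive.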

Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes).[12] By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. In particular, the RSA, Diffie-Hellman, and elliptic curve Diffie-Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.

However, other cryptographic algorithms do not appear to be broken by those algorithms.[13][14] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory.[13][15] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem.[16] It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case,[17] meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size).
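
In rough numbers (a back-of-the-envelope illustration, not from the source text): a classical brute-force attack on a 128-bit key expects on the order of 2^128 ≈ 3.4 × 10^38 cipher invocations, while Grover's algorithm needs about 2^64 ≈ 1.8 × 10^19, the same order of work as a classical attack on a 64-bit key. Doubling the key length to 256 bits restores the original margin, since 2^(256/2) = 2^128.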

Quantum cryptography could potentially fulfill some of the functions of public key cryptography. Quantum-based cryptographic systems could, therefore, be more secure than traditional systems against quantum hacking.[18]

Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems,[19] including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely.[20] However, quantum computers offer polynomial speedup for some problems. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.

Problems that can be addressed with Grover's algorithm have the following properties: there is no searchable structure in the collection of possible answers; the number of possible answers to check is the same as the number of inputs to the algorithm; and there exists a boolean function that evaluates each input and determines whether it is the correct answer.

For problems with all these properties, the running time of Grover's algorithm on a quantum computer will scale as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied[21] is the Boolean satisfiability problem. In this instance, the database through which the algorithm iterates is that of all possible answers. An example (and possible) application of this is a password cracker that attempts to guess the password or secret key for an encrypted file or system. Symmetric ciphers such as Triple DES and AES are particularly vulnerable to this kind of attack.[citation needed] This application of quantum computing is a major interest of government agencies.[22]
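
To put that square-root scaling in concrete terms, here is a short sketch (added for illustration; it assumes only the standard result that the optimal number of Grover iterations is about (π/4)√N) comparing it with the N/2 guesses an average classical search needs:

```python
# Sketch: Grover's quadratic speedup in raw numbers.
# Optimal Grover iterations ~ (pi/4)*sqrt(N); a classical search
# checks about N/2 candidates on average.
import math

for bits in (16, 32, 56, 128):
    N = 2 ** bits  # size of the search space (e.g. keys of this bit length)
    grover = math.floor((math.pi / 4) * math.sqrt(N))
    classical = N // 2
    print(f"N = 2^{bits}: classical ~ {classical:.2e} guesses, "
          f"Grover ~ {grover:.2e} iterations")
```

For a 56-bit space (the size of a DES key), the classical average is about 3.6 × 10^16 guesses versus roughly 2.1 × 10^8 Grover iterations, which is why symmetric key lengths are said to be effectively halved.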

Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.[23] Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider.[24]

Quantum annealing, or adiabatic quantum computation, relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state of a simple Hamiltonian, which is slowly evolved into a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough, the system will stay in its ground state at all times throughout the process.

The Quantum algorithm for linear systems of equations, or "HHL Algorithm", named after its discoverers Harrow, Hassidim, and Lloyd, is expected to provide speedup over classical counterparts.[25]

John Preskill has introduced the term quantum supremacy to refer to the hypothetical speedup advantage that a quantum computer would have over a classical computer in a certain field.[26] Google announced in 2017 that it expected to achieve quantum supremacy by the end of the year though that did not happen. IBM said in 2018 that the best classical computers will be beaten on some practical task within about five years and views the quantum supremacy test only as a potential future benchmark.[27] Although skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved,[28][29] in October 2019, a Sycamore processor created in conjunction with Google AI Quantum was reported to have achieved quantum supremacy,[30] with calculations more than 3,000,000 times as fast as those of Summit, generally considered the world's fastest computer.[31] Bill Unruh doubted the practicality of quantum computers in a paper published back in 1994.[32] Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle.[33]

There are a number of technical challenges in building a large-scale quantum computer.[34] Physicist David DiVincenzo has listed the following requirements for a practical quantum computer:[35] it must be physically scalable to increase the number of qubits; its qubits must be able to be initialized to arbitrary values; its quantum gates must be faster than the decoherence time; it must provide a universal gate set; and its qubits must be easy to read.

Sourcing parts for quantum computers is also very difficult. Many quantum computers, like those constructed by Google and IBM, need Helium-3, a nuclear research byproduct, and special superconducting cables that are only made by a single company in Japan.[36]

One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.[37] Currently, some quantum computers require their qubits to be cooled to 20 millikelvins in order to prevent significant decoherence.[38]

As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions.[39]

These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.

As described in the quantum threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often cited figure for the required error rate in each gate for fault-tolerant computation is 10^-3, assuming the noise is depolarizing.

Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of qubits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 bits without error correction.[40] With error correction, the figure would rise to about 10^7 bits. Computation time is about L^2 or about 10^7 steps, and at 1 MHz, about 10 seconds.

A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates.[41][42]

Physicist Mikhail Dyakonov has expressed skepticism of quantum computing, arguing that the number of continuous parameters describing the state of a useful quantum computer is so enormous that it could never be controlled in practice.

There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are the quantum gate array (computation decomposed into a sequence of few-qubit quantum gates), the one-way quantum computer (computation decomposed into a sequence of one-qubit measurements applied to a highly entangled initial state, or cluster state), the adiabatic quantum computer (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian whose ground states contain the solution), and the topological quantum computer (computation decomposed into the braiding of anyons in a 2D lattice).

The quantum Turing machine is theoretically important but the physical implementation of this model is not feasible. All four models of computation have been shown to be equivalent; each can simulate the other with no more than polynomial overhead.

For physically implementing a quantum computer, many different candidates are being pursued, distinguished by the physical system used to realize the qubits, including superconducting circuits, trapped ions, neutral atoms in optical lattices, electron spins in quantum dots, photonic systems and nitrogen-vacancy centers in diamond.

The large number of candidates demonstrates that quantum computing, despite rapid progress, is still in its infancy.

Any computational problem solvable by a classical computer is also solvable by a quantum computer.[62] Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers.

Conversely, any problem solvable by a quantum computer is also solvable by a classical computer; or more formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church-Turing thesis.[63]

While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve many problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers. However, the capacity of quantum computers to accelerate classical algorithms has rigid upper bounds, and the overwhelming majority of classical calculations cannot be accelerated by the use of quantum computers.[64]

The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be efficiently solved by probabilistic Turing machines with bounded error.[65] It is known that BPP ⊆ BQP and widely suspected, but not proven, that BQP ⊈ BPP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.[66]

The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, the class of problems that can be efficiently solved by quantum computers includes all problems that can be efficiently solved by deterministic classical computers but does not include any problems that cannot be solved by classical computers with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems are in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP ⊈ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if an NP-complete problem were in BQP, then it follows from NP-hardness that all problems in NP are in BQP).[68]

The relationship of BQP to the basic classical complexity classes can be summarized as P ⊆ BQP ⊆ PSPACE.

It is also known that BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^#P),[68] which is a subclass of PSPACE.

It has been speculated that further advances in physics could lead to even faster computers. For instance, it has been shown that a non-local hidden variable quantum computer based on Bohmian mechanics could implement a search of an N-item database in at most O(N^(1/3)) steps, a slight speedup over Grover's algorithm, which runs in O(N^(1/2)) steps (however, neither search method would allow quantum computers to solve NP-complete problems in polynomial time).[69] Theories of quantum gravity, such as M-theory and loop quantum gravity, may allow even faster computers to be built. However, defining computation in these theories is an open problem due to the problem of time; that is, within these physical theories there is currently no obvious way to describe what it means for an observer to submit input to a computer at one point in time and then receive output at a later point in time.[70][71]


What Is Quantum Computing? The Next Era of Computational …

When you first stumble across the term quantum computer, you might pass it off as some far-flung science fiction concept rather than a serious current news item.

But with the phrase being thrown around with increasing frequency, it's understandable to wonder exactly what quantum computers are, and just as understandable to be at a loss as to where to dive in. Here's the rundown on what quantum computers are, why there's so much buzz around them, and what they might mean for you.

All computing relies on bits, the smallest unit of information that is encoded as an on state or an off state, more commonly referred to as a 1 or a 0, in some physical medium or another.

Most of the time, a bit takes the physical form of an electrical signal traveling over the circuits in the computer's motherboard. By stringing multiple bits together, we can represent more complex and useful things like text, music, and more.

The two key differences between quantum bits and classical bits (from the computers we use today) are the physical form the bits take and, correspondingly, the nature of data encoded in them. The electrical bits of a classical computer can only exist in one state at a time, either 1 or 0.

Quantum bits (or qubits) are made of subatomic particles, namely individual photons or electrons. Because these subatomic particles conform more to the rules of quantum mechanics than classical mechanics, they exhibit the bizarre properties of quantum particles. The most salient of these properties for computer scientists is superposition. This is the idea that a particle can exist in multiple states simultaneously, at least until that state is measured and collapses into a single state. By harnessing this superposition property, computer scientists can make qubits encode a 1 and a 0 at the same time.
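
The collapse described above is easy to mimic with a toy simulation. The sketch below (NumPy assumed; it is a classical analogy for a single qubit added for this writeup, not real quantum hardware) prepares an equal superposition and samples measurements, each of which yields a definite 0 or 1:

```python
# Toy simulation: measuring a qubit in an equal superposition.
import numpy as np

rng = np.random.default_rng(0)

# Equal superposition: amplitude 1/sqrt(2) on both |0> and |1>.
amplitudes = np.array([1, 1]) / np.sqrt(2)
probabilities = np.abs(amplitudes) ** 2  # Born rule: probability = |amplitude|^2

# Each "measurement" collapses to a definite 0 or 1.
outcomes = rng.choice([0, 1], size=10, p=probabilities)
print(outcomes)  # e.g. [1 1 0 ...]; roughly half 0s and half 1s over many runs
```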

The other quantum mechanical quirk that makes quantum computers tick is entanglement, a linking of two quantum particles or, in this case, two qubits. When the two particles are entangled, the change in state of one particle will alter the state of its partner in a predictable way, which comes in handy when it comes time to get a quantum computer to calculate the answer to the problem you feed it.

A quantum computer's qubits start in their 1-and-0 hybrid state as the computer initially starts crunching through a problem. When the solution is found, the qubits in superposition collapse to the correct orientation of stable 1s and 0s for returning the solution.

Aside from the fact that they are far beyond the reach of all but the most elite research teams (and will likely stay that way for a while), most of us don't have much use for quantum computers. They don't offer any real advantage over classical computers for the kinds of tasks we do most of the time.

However, even the most formidable classical supercomputers have a hard time cracking certain problems due to their inherent computational complexity. This is because some calculations can only be achieved by brute force, guessing until the answer is found. They end up with so many possible solutions that it would take thousands of years for all the worlds supercomputers combined to find the correct one.

The superposition property exhibited by qubits can allow quantum computers to cut this guessing time down precipitously. Classical computing's laborious trial-and-error computations can only ever make one guess at a time, while the dual 1-and-0 state of a quantum computer's qubits lets it make multiple guesses at the same time.

So, what kind of problems require all this time-consuming guesswork calculation? One example is simulating atomic structures, especially when they interact chemically with those of other atoms. With a quantum computer powering the atomic modeling, researchers in material science could create new compounds for use in engineering and manufacturing. Quantum computers are well suited to simulating similarly intricate systems like economic market forces, astrophysical dynamics, or genetic mutation patterns in organisms, to name only a few.

Amidst all these generally inoffensive applications of this emerging technology, though, there are also some uses of quantum computers that raise serious concerns. By far the most frequently cited harm is the potential for quantum computers to break some of the strongest encryption algorithms currently in use.

In the hands of an aggressive foreign government adversary, quantum computers could compromise a broad swath of otherwise secure internet traffic, leaving sensitive communications susceptible to widespread surveillance. Work is currently being undertaken to mature encryption ciphers based on calculations that are still hard for even quantum computers to do, but they are not all ready for prime-time, or widely adopted at present.

A little over a decade ago, actual fabrication of quantum computers was barely in its incipient stages. Starting in the 2010s, though, development of functioning prototype quantum computers took off. A number of companies have assembled working quantum computers as of a few years ago, with IBM going so far as to allow researchers and hobbyists to run their own programs on it via the cloud.

Despite the strides that companies like IBM have undoubtedly made to build functioning prototypes, quantum computers are still in their infancy. Currently, the quantum computers that research teams have constructed require a lot of overhead for executing error correction. For every qubit that actually performs a calculation, there are several dozen whose job is to compensate for that one's mistakes. The aggregate of all these qubits makes what is called a logical qubit.

Long story short, industry and academic titans have gotten quantum computers to work, but they do so very inefficiently.

Fierce competition between quantum computer researchers is still raging, between big and small players alike. Among those who have working quantum computers are the traditionally dominant tech companies one would expect: IBM, Intel, Microsoft, and Google.

As exacting and costly a venture as creating a quantum computer is, there are a surprising number of smaller companies and even startups that are rising to the challenge.

The comparatively lean D-Wave Systems has spurred many advances in the field, and proved it was not out of contention by answering Google's momentous announcement with news of a huge deal with Los Alamos National Labs. Still, smaller competitors like Rigetti Computing are also in the running for establishing themselves as quantum computing innovators.

Depending on who you ask, you'll get a different frontrunner for the most powerful quantum computer. Google certainly made its case recently with its achievement of quantum supremacy, a metric that Google itself more or less devised. Quantum supremacy is the point at which a quantum computer is first able to outperform a classical computer at some computation. Google's Sycamore prototype, equipped with 54 qubits, was able to break that barrier by zipping through a problem in just under three-and-a-half minutes that would take the mightiest classical supercomputer 10,000 years to churn through.

Not to be outdone, D-Wave boasts that the devices it will soon be supplying to Los Alamos weigh in at 5,000 qubits apiece, although it should be noted that the quality of D-Wave's qubits has been called into question before. IBM hasn't made the same kind of splash as Google and D-Wave in the last couple of years, but they shouldn't be counted out yet, either, especially considering their track record of slow and steady accomplishments.

Put simply, the race for the world's most powerful quantum computer is as wide open as it ever was.

The short answer to this is "not really," at least for the near-term future. Quantum computers require an immense volume of equipment and finely tuned environments to operate. The leading architecture requires cooling to mere degrees above absolute zero, meaning they are nowhere near practical for ordinary consumers to ever own.

But as the explosion of cloud computing has proven, you don't need to own a specialized computer to harness its capabilities. As mentioned above, IBM is already offering daring technophiles the chance to run programs on a small subset of its Q System One's qubits. In time, IBM and its competitors will likely sell compute time on more robust quantum computers for those interested in applying them to otherwise inscrutable problems.

But if you aren't researching the kinds of exceptionally tricky problems that quantum computers aim to solve, you probably won't interact with them much. In fact, quantum computers are in some cases worse at the sort of tasks we use computers for every day, purely because quantum computers are so hyper-specialized. Unless you are an academic running the kind of modeling where quantum computing thrives, you'll likely never get your hands on one, and never need to.


Quantum computer chips demonstrated at the highest temperatures ever – New Scientist News

By Leah Crane


Quantum computing is heating up. For the first time, quantum computer chips have been operated at a temperature above -272°C, or 1 kelvin. That may still seem frigid, but it is just warm enough to potentially enable a huge leap in the technology's capabilities.

Quantum computers are made of quantum bits, or qubits, which can be made in several different ways. One that is receiving attention from some of the field's big players consists of electrons on a silicon chip.

These systems only function at extremely low temperatures, below 100 millikelvin, or -273.05°C, so the qubits have to be stored in powerful refrigerators. The electronics that power them won't run at such low temperatures, and also emit heat that could disrupt the qubits, so they are generally stored outside the refrigerators, with each qubit connected by a wire to its electronic controller.


"Eventually, for useful quantum computing, we will need to go to something like a million qubits, and this sort of brute force method, with one wire per qubit, won't work any more," says Menno Veldhorst at QuTech in the Netherlands. "It works for two qubits, but not for a million."

Veldhorst and his colleagues, along with another team led by researchers at the University of New South Wales in Australia, have now demonstrated that these qubits can be operated at higher temperatures. The latter team showed they were able to control the state of two qubits on a chip at temperatures up to 1.5 kelvin, and Veldhorst's group used two qubits at 1.1 kelvin in what is called a logic gate, which performs the basic operations that make up more complex calculations.

Now that we know the qubits themselves can function at higher temperatures, the next step is incorporating the electronics onto the same chip. "I hope that after we have that circuit, it won't be too hard to scale to something with practical applications," says Veldhorst.

"Those quantum circuits will be similar in many ways to the circuits we use for traditional computers, so they can be scaled up relatively easily compared with other kinds of quantum computers," he says.

Journal references: Nature, DOI: 10.1038/s41586-020-2170-7 and DOI: 10.1038/s41586-020-2171-6



Alex Garland on ‘Devs,’ free will and quantum computing – Engadget

Garland views Amaya as a typical Silicon Valley success story. In the world of Devs, it's the first company that manages to mass produce quantum computers, allowing them to corner that market. (Think of what happened to search engines after Google debuted.) Quantum computing has been positioned as a potentially revolutionary technology for things like healthcare and encryption, since it can tackle complex scenarios and data sets more effectively than traditional binary computers. Instead of just processing inputs one at a time, a quantum machine would theoretically be able to tackle an input in multiple states, or superpositions, at once.

By mastering this technology, Amaya unlocks a completely new view of reality: The world is a system that can be decoded and predicted. It proves to them that the world is deterministic. Our choices don't matter; we're all just moving along predetermined paths until the end of time. Garland is quick to point out that you don't need anything high-tech to start asking questions about determinism. Indeed, it's something that's been explored since Plato's allegory of the cave.

"What I did think, though, was that if a quantum computer was as good at modeling quantum reality as it might be, then it would be able to prove in a definitive way whether we lived in a deterministic state," Garland said. "[Proving that] would completely change the way we look at ourselves, the way we look at society, the way society functions, the way relationships unfold and develop. And it would change the world in some ways, but then it would restructure itself quickly."

The sheer difficulty of coming up with something -- anything -- that's truly spontaneous and isn't causally related to something else in the universe is the strongest argument in favor of determinism. And it's something Garland aligns with personally -- though that doesn't change how he perceives the world.

"Whether or not you or I have free will, both of us could identify lots of things that we care about," he said. "There are lots of things that we enjoy or don't enjoy. Or things that we're scared of, or we anticipate. And all of that remains. It's not remotely affected by whether we've got free will or not. What might be affected is, I think, our capacity to be forgiving in some respects. And so, certain kinds of anti-social or criminal behavior, you would start to think about in terms of rehabilitation, rather than punishment. Because then, in a way, there's no point punishing someone for something they didn't decide to do."


What the open source community can teach the suddenly remote workforce – Security Boulevard

Productive remote teamwork is possible. Just ask the open source community, who has been doing it for years. Here are some top tips for working remotely.

By now we are all familiar with the, uh, challenges (that's the printable word) of uprooting millions of workers from their offices so they can work more safely from home.

Remote work, all of a sudden and with no time to plan for it, is disruptive. It's unfamiliar. It's stressful. It's distracting, especially for those with school-aged children who are not at school. As one frazzled parent put it, "I now have two more full-time jobs. I'm a principal and a teacher."

And in the tech sector, software developers who have been working together, collaborating in open office environments, are suddenly isolated. Sure, there are virtual connections, but they are not the same as being in the same room.

That doesn't mean development has to crash and burn, though. There is a template available for overcoming that challenge. The open source software sector has been working remotely since, well, since open source became a thing.

In most cases, participants have never met. They don't know each other. They might never see or speak with one another. They are likely in different parts of the country, or different countries, many on different sides of the world. Frequently they don't even speak the same language.

Yet they work together, in many cases with astonishing efficiency, and they produce products of superb quality. The Linux operating system, for example, started as an open source project and remains open source to this day. Open source software is part of virtually every application, network, and system in operation today. It often represents the majority, sometimes more than 90%, of the code in a codebase.

So yes, productive remote teamwork is possible.

That's not to say that open source development is an exact parallel to the corporate world. Open source is a community, not a company. Those who participate are essentially volunteers, not paid employees. There is a hierarchy, but the supervisor is generally more along the lines of community leader than boss.

But conventional development teams and their managers in need of new ideas for working remotely can still learn plenty of things from the remote operation of the open source world. There are even books about it; one of the most popular is The Art of Community by Jono Bacon, former community manager for Ubuntu.

Tim Mackey, principal strategist at the Synopsys Cybersecurity Research Center (CyRC), knows about the remote operation of open source communities as well. While he works for a company, he has been a community leader, and still is a community member, for open source projects. He has worked remotely and managed remote teams for the bulk of his career.

So he knows from experience that remote doesn't have to mean disconnected. It just takes some awareness, effort, and cooperation. He described some of the ways open source communities mitigate the absence of physical human contact, starting with the most important: communication, communication, communication.

It sounds like the tech version of the real estate mantra "Location, location, location," which describes the most important factor in buying a house.

But that is because communication is the foundation for everything else. Of course, working remotely can't be exactly like the physical office environment, where, as Mackey puts it, "If someone is working on something that relates to what you are working on, you will know because up will pop a head when you say, 'I really don't understand why this is doing this.'"

But it can come close, as long as teams aren't too large, ideally fewer than 10 people.

"You could actually have everyone put their phone on a Skype call. The phone is just sitting in the corner, and it doesn't have any other purpose than to serve as the proxy for the office," he said. "There are many ways to solve the problem. You just need to find the pieces that are missing."

Resiliency flows from communication, which Mackey says means "completely and transparently communicating all of the issues regarding a project."

As is the case with open source projects, resiliency is the result "when there is nobody who is magically special who needs to know extra stuff," he said. "Anyone at any point in time can know everything. That level of egalitarianism really starts to increase the engagement."

It also means there is no single point of failure, which is a mandatory element of resiliency. If somebody gets sick, goes on vacation, or gets a different job, it doesn't hamstring the rest of the team, because one person isn't carrying all the institutional knowledge. Everybody is.

"You don't have to worry about one person having all the magic knowledge and then you are massively disrupted when that one person has to deal with some personal issue or, for that matter, wins a Powerball ticket," Mackey said.

"It gives you flexibility. Everybody is going to have some aspect of their life that is going to be variable. Some people want to ski, some people want to surf, some people don't like the cold, some people love the cold."

Emotionally intelligent feedback, which also flows from good communication, can be much trickier among a team working remotely, since emails and texts usually lack tone. Facial expressions, speech volume, and other physical cues present in face-to-face communication can bring a lot of helpful nuance to comments that might seem harsh or even accusatory in writing.

Not to mention that cultural and language barriers can be easier to overcome face-to-face than in writing. If a recipient from another country who speaks another language gets a note and puts it into something like Google Translate, the results can be unpredictable.

"If you know multiple languages and have tried that, then you know Google Translate is sometimes really good and sometimes it is absolutely atrocious," Mackey said. He noted that in the open source world, or any remote situation, he makes a point of using very precise language when he writes comments.

"If you have ever been in a situation where somebody has complained about the tone of your writing, that is exactly the type of scenario that successful open source teams figure out pretty early how to overcome," he said. "In some countries, tone is such a key component of their written language that they might miss what you meant more often than you would prefer."

A breakdown in the emotional element of feedback "can be a huge kick in morale," Mackey said.

Every team and every project needs a process to govern how things get done. But remember that the whole point of the process is to help things get done. As Bacon says in his book, processes are only useful when they are a means to an end.

Or as Mackey puts it, "It really boils down to making certain that everyone knows what it is, why it is, and to a certain extent knows that they can raise their hand and say, 'But you do realize you're not doing this, right?'"

And if the shift from the office to working remotely means certain things can't get done, then the process needs a revision.

A perfect example, Mackey said, is a scenario where IT and legal have imposed a security policy that says, "To protect our source code, you can only commit code on this special network that will never be accessible outside of the company."

While that might have made perfect sense before, it doesn't work when nobody is at the office. "Process needs to be a living entity. You can't just fall back on, 'But this is the way we have always done it,'" he said.

One reason is that someone on the team might come up with a workaround just to get their job done, but such a workaround amounts to shadow IT, since it is outside security policy.

"Does that mean I've created a situation where, in trying to do the work I've been assigned, I have now circumvented every process in place because it wasn't designed for the reality of everybody working from home?" he asked.

It is clearly much better to figure out a way to maintain the security of source code without making it impossible for a remote development team to do its work.
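Mackey doesn't prescribe a specific fix, but one illustrative approach is to make the policy visible inside the developer workflow rather than letting it fail silently. Below is a minimal, purely hypothetical sketch in Python of a Git pre-push hook that checks whether the internal Git server is reachable (say, over the corporate VPN) and explains what to do when it isn't; the host name and port are invented for illustration.

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-push hook: fail fast, with an explanation, when
# the internal Git server is unreachable, instead of leaving developers to
# invent shadow-IT workarounds. Host and port are illustrative placeholders.
import socket
import sys

INTERNAL_GIT_HOST = "git.internal.example.com"  # assumed VPN-only host
INTERNAL_GIT_PORT = 22

try:
    # A quick TCP connect distinguishes "on the VPN" from "not on the VPN".
    with socket.create_connection((INTERNAL_GIT_HOST, INTERNAL_GIT_PORT), timeout=3):
        pass
except OSError:
    sys.stderr.write(
        f"Cannot reach {INTERNAL_GIT_HOST}:{INTERNAL_GIT_PORT}.\n"
        "Connect to the corporate VPN before pushing, and do not copy code\n"
        "to an external service to work around this check.\n"
    )
    sys.exit(1)  # a nonzero exit aborts the push

sys.exit(0)
```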

All of which, once again, comes down to communication, communication, communication.

Of course, it will take some getting used to. There will likely be some bumps in the road. But if the open source community can do it, organizations can too.


See more here:
What the open source community can teach the suddenly remote workforce - Security Boulevard

Open source made the cloud in its image – ITworld

"The cloud was built for running open source," Matt Wilson once told me, "which is why open source [has] worked so well in the cloud."

While true, there's something more fundamental that open source offers the cloud. As one observer put it, "The whole intellectual foundation of open interfaces and combinatorial single-purpose tools is pretty well ingrained in cloud." That approach is distinctly open source, which in turn owes much to the Unix mentality that early projects like Linux embraced.

Hence, the next time you pull together different components to build an application on Microsoft Azure, Google Cloud, AWS, or another cloud, realize that you can do so because the open source ethos permeates the cloud.

Open source has become so commonplace today that we are apt to forget its origins. While it would be an overstatement to suggest that Unix is wholly responsible for what open source became, many of the open source pioneers came from a Unix background, and it shows.

Here's a summary of the Unix philosophy by Doug McIlroy, the creator of Unix pipes: "Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."

Sound familiar? From this ideological parentage it's not hard to see where open source gets its preference for modularity, transparency, composability, etc. It's also not much of a stretch to see where the open source-centric clouds are picking up their approach to microservices.
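To make the composability point concrete, here is a minimal sketch, in Python rather than shell, of the pipe idea McIlroy pioneered: two single-purpose tools joined by a plain text stream. The word list is made up for illustration.

```python
import subprocess

# Compose two single-purpose tools, Unix style: sort feeds uniq through a
# text stream, the "universal interface" McIlroy describes.
sort = subprocess.Popen(
    ["sort"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)
uniq = subprocess.Popen(
    ["uniq", "-c"], stdin=sort.stdout, stdout=subprocess.PIPE, text=True
)

sort.stdin.write("cloud\nopen\ncloud\nsource\nopen\ncloud\n")
sort.stdin.close()   # signal end-of-input so sort can run
sort.stdout.close()  # hand the read end of the pipe to uniq

print(uniq.communicate()[0])  # counts each distinct line, e.g. "3 cloud"
```

Each tool knows nothing about the other; the cloud's combinatorial single-purpose services follow the same pattern at a much larger scale.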

In turn, the different clouds have all converged on similar design principles. As Wilson notes, the composable pieces ethos of open source is "a property of open systems, and a general Unix philosophy that [is] carried forward in the foundational building blocks of cloud as we know it."

Cloud is impossible without the economics of free and open source software, but cloud is arguably even more impossible, at least in the way we experience it today, without the freedoms and design principles offered by open source. Erica Brescia makes this point perfectly.

Importantly, we're now in a hyper-growth development phase for the cloud, with different companies with different agendas combining to open source incredibly complex, powerful, and cloud-native software to tackle everything from machine learning to network management. As Jono Bacon notes:

Open source created the model for collaborative technology development in a competitive landscape.

The cloud manifested as the most pressing need to unite this competitive landscape together.

This led to a rich tapestry of communities sharing best practices and approaches.

This rich tapestry of communities sharing owes its existence to open source. Clouds may provide the platforms where open source increasingly lives and grows, but the animating force behind the clouds is open source. Given the pressing problems all around us, we're going to need both cloud and communities, each driven by open source, to help tackle them.

This story, "Open source made the cloud in its image" was originally published by InfoWorld.

Read this article:
Open source made the cloud in its image - ITworld

How Edge Is Different From Cloud And Not – The Next Platform

As the dominant supplier of commercial-grade open source infrastructure software, Red Hat sets the pace, and it is not a surprise that IBM was willing to shell out an incredible $34 billion to acquire the company. Nor is it a surprise that Red Hat has its eyes on the edge, that amorphous and potentially substantial collection of distributed computing systems that everyone is figuring out how to chase.

To get a sense of what Red Hat thinks about the edge, we sat down with Joe Fernandes, vice president and general manager of core cloud platforms at what amounts to the future of IBM's software business. Fernandes has been running Red Hat's cloud business for nearly a decade, starting with CloudForms and moving through the evolution of OpenShift from a proprietary (but open source) platform to one that has become the main distribution of the Kubernetes cloud controller used by enterprises, meaning those who can't or won't roll their own open source software products.

Timothy Prickett Morgan: Is the edge different, or is it just a variation on the cloud theme?

Joe Fernandes: For Red Hat, the edge is really an extension of our core strategy, which is open hybrid cloud and which is around providing a consistent operating environment for applications that extends from the datacenter across multiple public clouds and now out at the edge. Linux is definitely the foundation of that, and Linux for us is of course Red Hat Enterprise Linux, which we see running in all footprints.

It is not just about trying to get into the core datacenter. It's about trying to deal with the growing opportunity at the edge, and I think it's not just important for Red Hat. Look at what Amazon is doing with Outposts, what Microsoft is doing with Azure Stack, and what Google is doing with Anthos, trying to put out cloud appliances for on-premises use. This hybrid cloud is as strategic for any of them as it is for any of us.

TPM: What is your projection for how much compute is on the edge and how much is in the datacenter? If you added up all of the clock cycles, how is it going to balance out?

Joe Fernandes: It is very workload driven. Generally, the advice we always give to clients is that you should always centralize what you can because at the core is where you have the most capacity in terms of infrastructure, the most capacity in terms of your SREs and your ops teams, and so forth. As you start distributing out to the edge, then you are in constrained environments and you are also not going to have humans out there managing things. So centralize what you can and distribute what you must, right.

That being said, specific workloads do need to be distributed. They need to be closer to the sources of data that they are operating upon. We see alignment of the trends around AI and machine learning with the trends around edge, and that's where we see some of the biggest demand. That makes sense because people want to process data close to where it is being generated, and they can't incur either the cost or the latency of sending that data back to their datacenter or even the public cloud regions.

And it is not specific to one vertical. It's certainly important for service providers and 5G deployments, but it's also important for auto companies doing autonomous vehicles, where those vehicles are essentially data-generating machines on wheels that need to make quick decisions.

TPM: As far as I can tell, cars are just portable entertainment units. The only profit anybody gets from a car is all the extra entertainment stuff we add. The rest of the price covers commissions for dealers and the bill of materials for the parts in the car.

Joe Fernandes: At last year's Red Hat Summit, we had both BMW and Volkswagen talking about their autonomous vehicle programs, and this year we received an award from Ford Motor Company, which also has major initiatives around autonomous driving as well as electrification. They'll be speaking at this year's Red Hat Summit. Another edge vertical is retail, allowing companies to make decisions in stores to the extent that they still have physical locations.

TPM: I didn't give much thought to the Amazon store, where it has something ridiculous like 1,700 cameras and you walk in, you grab stuff, you walk out; it watches everything you do and takes your money electronically. This is looking pretty attractive this week, is my guess. And I thought it was kind of bizarre two months ago, not shopping as I know and enjoy it. And I know we're not going to have a pandemic for the rest of our lives, but this could be the way we do things in the future. My guess is that people are going to be less inclined to do all kinds of things that seemed very normal only one or two months ago.

Joe Fernandes: Exactly. The other interesting vertical for edge is financial services, which has branch offices and remote offices. The oil and gas industry is interested in edge deployments close to where they are doing exploration and drilling, and the US Department of Defense is also thinking about remote battlefield and control of ships and planes and tanks.

The thing that those environments have in common is Linux. People aren't running these edge platforms on Windows Servers, and they are not using mainframes or Unix systems. It is obviously all Linux, and it puts a premium on performance and security, on which Red Hat has obviously made its mark with RHEL. People are interested in driving on open systems anyway, and moving to containers and Kubernetes, and Linux is the foundation of this.

TPM: Are containers a given for edge at this point? I think they are, except where bare metal is required.

Joe Fernandes: I don't think that containers are a prerequisite. But certainly, just like the rest of the Linux deployments, it is going in the direction of containers. The reason is portability, having that same environment to package and deploy and manage at the edge as you do in the datacenter and in the cloud. Bare metal containers can run directly on Linux; you don't need to have a virtualization layer in between.
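That "no virtualization layer" point is easy to demonstrate. The sketch below is a toy example, assuming a Linux host with a Docker daemon and the docker Python SDK installed (nothing OpenShift-specific); it runs a container and prints the kernel it sees:

```python
import docker  # pip install docker; assumes a local Docker daemon

client = docker.from_env()

# A container is just an isolated Linux process: no guest OS boots, so it
# starts in milliseconds and shares the host kernel directly.
output = client.containers.run("alpine:latest", ["uname", "-r"], remove=True)
print(output.decode().strip())  # prints the *host* kernel version
```

Because the container reports the host's kernel version, there is visibly no hypervisor or guest OS in between.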

TPM: Well, when I say bare metal, I mean not even a container. It's Linux. That's it.

Joe Fernandes: I think that distinction between bare metal Linux and bare metal Linux containers is more about how the workloads are packaged, as container images or as something like RPM or Debian packages, and whether you need orchestrated containers. And again, that's very workload specific. We certainly see folks asking us about environments that are really small, where you might not do orchestration because you're not running more than a single container or a small number of containers. In that case, it's just Linux on metal.

TPM: OK, but you didn't answer my question yet, and that is really my fault, not yours. So, to circle back: How much compute is at the edge and how much is on premises or in the cloud? Do you think it will be 50/50? What's your guess?

Joe Fernandes: I don't think it'll be 50/50 for some time. I think in the range of 10 percent to 20 percent in the next couple of years is possible, and I would put that at 10 percent or less, because there is just a ton of applications running in the core datacenter and a ton running out in the public cloud. People are still making that shift to cloud.

But again, it'll be very industry specific. I think the adoption of edge compute using analytics and AI/ML is just now taking off. For the auto makers doing autonomous vehicles, there is no other choice. It is a datacenter on wheels that needs to make life-and-death decisions on where to turn and when to brake, and in that market the aggregate edge compute will be the majority at these companies pretty darn quick. You will see edge compute adoption go to 50 percent or more in some very specific areas, but if you took the entire population of IT, it's probably still going to be in the single digits.

TPM: Does edge require a different implementation of Linux, say a cut-down version? Do you need a JEOS-type thing like we used to have in the early days of server virtualization? Do you need a special, easier, more distributed version of OpenShift for Kubernetes? What's different?

Joe Fernandes: With Linux, the valuable thing is the hardware compatibility that RHEL provides. But we certainly see demand for Linux on different footprints. So, for example, RHEL on Arm devices or RHEL with GPU enablement.

When it comes to OpenShift, obviously Kubernetes is a distributed system, where the cluster is the computer, while Linux is focused on individual servers. What we are seeing is demand for smaller clusters, with OpenShift enabled on three-node clusters, which is about the minimum for a highly available control plane because etcd, which is core to Kubernetes, requires three nodes for quorum. In that situation, we may put the control plane and the applications on the same three machines, whereas in a larger setup you would have a three-node OpenShift control plane and then at least two separate machines running your actual containers so that you have HA for the apps. Obviously those application clusters can grow to tens or even hundreds of nodes. But at the edge the premium is on size and power, so three nodes might be as much space as you're going to get in the rack out at the edge.
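Mechanically, co-locating the control plane and the applications comes down to letting ordinary pods schedule onto control-plane nodes. Here is a hedged sketch using the Kubernetes Python client; the taint key shown varies by distribution and version, so treat this as an illustration of the idea rather than an OpenShift recipe:

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()
v1 = client.CoreV1Api()

# On a compact three-node cluster, the control-plane nodes must also run
# application pods. Removing the NoSchedule taint from each node allows that.
TAINT_KEY = "node-role.kubernetes.io/master"  # key name varies by version

for node in v1.list_node().items:
    taints = node.spec.taints or []
    kept = [t for t in taints if t.key != TAINT_KEY]
    if len(kept) != len(taints):
        v1.patch_node(node.metadata.name, {"spec": {"taints": kept}})
        print(f"control-plane node {node.metadata.name} is now schedulable")
```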

TPM: Either that or you might end up having to put your control plane on a bunch of embedded microcontroller-type systems and compacting that part down.

Joe Fernandes: Actually, we see a kind of progression. There are standard clusters as small as you can get them, so maybe a control plane with one or two worker nodes. The next step we've moved into is running the control plane and the app nodes on the same three machines. And then you get into what I'd call distributed nodes, where you might have applications running at five or ten or twenty edge locations, all talking back to a shared control plane. At that point you have to worry about connectivity to the control plane.

TPM: If you lose the control plane or your connectivity to it, all it should mean is that you can't change the configuration of the compute cluster at the edge.

Joe Fernandes: Not exactly, because Kubernetes is a declarative system: if a node drops off, the control plane thinks it needs to start those containers up on another node, or start a new node. In a case where you might have intermittent connectivity, we need to make it more tolerant, so it doesn't actually start that process unless the node fails to reconnect for some amount of time. And then the next step beyond that is clusters that have two nodes or a single node, and at that point the control plane, if it exists, is not HA, so you're getting high availability some other way.
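Kubernetes does expose a knob for exactly this kind of tolerance: per-pod tolerations for the not-ready and unreachable node conditions, which control how long the control plane waits before evicting and rescheduling a pod. A minimal sketch with the Kubernetes Python client follows; the pod name, image, and the 30-minute window are invented for illustration:

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()

# Tolerate a flaky uplink: keep the pod bound to its node for 30 minutes of
# lost connectivity before the control plane evicts and reschedules it
# (the default eviction window is only a few minutes).
tolerations = [
    client.V1Toleration(
        key=key, operator="Exists", effect="NoExecute", toleration_seconds=1800
    )
    for key in ("node.kubernetes.io/unreachable", "node.kubernetes.io/not-ready")
]

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-app"),  # illustrative name
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app", image="alpine:latest", command=["sleep", "infinity"]
            )
        ],
        tolerations=tolerations,
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```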

TPM: You can do virtual machines on a slightly beefier server and have software resilience, but you have the potential for a hardware resilience issue.

Joe Fernandes: Maybe the resiliency is between edge locations.

TPM: What happens with OpenStack at this point, if anything? AT&T obviously has been widely deploying OpenStack at the edge, with tens of thousands of baby datacenters planned, all linked and controlled by OpenStack. Is this going to be something like "use OpenShift where you can, use OpenStack where you must"?

Joe Fernandes: We certainly see Red Hat OpenStack deployed at the edge. There's an architecture that we put out called the distributed compute node architecture, which customers are adopting. It is relevant for customers who have virtualized application workloads and also want an open solution, and so I think you will continue to see Red Hat OpenStack at the edge, and you will continue to see vSphere at the edge, too.

For example, in telco, OpenStack has a big footprint where companies have been creating virtualized network functions, or VNFs, for a number of years. That has driven a lot of our business for OpenStack in telco, because a lot of the companies we work with, like Verizon and others, wanted an open platform to deploy VNFs.

TPM: These telcos are not going to suddenly just decide, to hell with it, and containerize all this and get rid of VMs and server virtualization?

Joe Fernandes: It's not going to be an either/or, but we now see a new wave of containerized network functions, or CNFs, particularly around 5G deployments. So the telcos are coming around to containers, but like every other vertical, they don't all switch overnight. Just because Kubernetes has been out for five years now doesn't mean the VMs are gone.

TPM: Is the overhead for containers a lot less than VMs? It must be, and that must be a huge motivator.

Joe Fernandes: Remember that the overhead of a VM includes the operating system that runs inside the guest. With a container, you are not virtualizing the hardware; you are virtualizing just the process. You can make a container as small as the process that it runs. And for a VM, you can only make it as small as the operating system.

TPM: We wouldn't have done all this VM stuff if we could have just figured out containers to start with.

Joe Fernandes: You know, Red Hat Summit is coming up in a few weeks, and we will be providing an update on KubeVirt, which allows Kubernetes to manage standard virtual machines along with containers. In the past year or more, we have been talking about it strictly in terms of what we are doing in the community to enable it, but it has not been something that we can sell and support. This is the year it's ready for primetime, and that presents an opportunity to have a converged management plane. You could have Kubernetes directly on bare metal, managing both container workloads and VM workloads, and also managing the transition as more of those workloads move from VMs to containers. You won't have to switch environments or have that additional layer and so forth.
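The converged management plane Fernandes describes means a VM becomes just another Kubernetes object. As a hedged sketch (the manifest follows KubeVirt's kubevirt.io/v1 conventions; the name, image, and sizing are illustrative placeholders, not Red Hat-specified values):

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()

# A KubeVirt VirtualMachine, declared like any other Kubernetes resource,
# so VMs and containers share one control plane.
vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # containerDisk boots the VM from a container image
                        "containerDisk": {
                            "image": "quay.io/containerdisks/fedora:latest"
                        },
                    }
                ],
            }
        },
    },
}

# VirtualMachine is a custom resource, so it goes through CustomObjectsApi.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm,
)
```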

TPM: And I fully expect people to do that. I've got nothing against OpenStack. Five years ago, when we started The Next Platform, it was not obvious whether the future control plane and management and compute metaphor would be Mesos or OpenStack or Kubernetes. And for a while there, Mesos looked like it was certainly better than OpenStack because of some of its mixed workload capabilities and the fact that it could run Kubernetes better than OpenStack could. But if you can get KubeVirt to work and it provides Kubernetes essentially the same functionality that you get from OpenStack in terms of managing the VMs, then I think we're done. It is emotional for me to just put a nail in the coffin like that.

Joe Fernandes: The question is: Is it going to put a nail not just in OpenStack, but in VMware, too?

TPM: VMware is an impressive legacy environment in the enterprise, and it generates more than $8 billion in sales for Dell. There is a lot of inertia with legacy environments; I mean, there are still System z mainframes out there doing a lot of useful work and providing value to IT organizations and their businesses. I have seen so many legacy environments in my life, but this may be the last big one I see this decade.

Joe Fernandes: You have covered vSphere 7.0 and Project Pacific; look at the contrast in strategy. We're taking Kubernetes and trying to apply it to standard VM workloads as a cloud-native environment. What VMware has done is take Kubernetes and wrap it back around the vSphere stack to keep people on the old environment that they've been on for the last decade.

Read the original post:
How Edge Is Different From Cloud And Not - The Next Platform