AI Ethics Tempted But Hesitant To Use AI Adversarial Attacks Against The Evils Of Machine Learning, Including For Self-Driving Cars – Forbes

AI Ethics quandary about using adversarial attacks against Machine Learning, even if done for purposes of goodness.

It is widely accepted sage wisdom to garner as much as you can about your adversaries.

Frederick the Great, the famous king of Prussia and a noted military strategist, stridently said this: "Great advantage is drawn from knowledge of your adversary, and when you know the measure of their intelligence and character, you can use it to play on their weakness."

Astutely leveraging the awareness of your adversaries is both a vociferous defense and a compelling offense-driven strategy in life. On the one hand, you can be better prepared for whatever your adversary might try to destructively do to you. The other side of that coin is that you are likely able to carry out better attacks against your adversary via the known and suspected weaknesses of any vaunted foe.

Per the historically revered statesman and ingenious inventor Benjamin Franklin, those that are on their guard and appear ready to receive their adversaries are in much less danger of being attacked, much more so than otherwise being unawares, supine, and negligent in preparation.

Why all this talk about adversaries?

Because one of the biggest concerns facing much of today's AI is that cyber crooks and other evildoers are deviously attacking AI systems using what is commonly referred to as adversarial attacks. This can cause an AI system to falter and fail to perform its designated functions. As you'll see in a moment, there are a variety of vexing AI Ethics and Ethical AI issues underlying the matter, such as ensuring that AI systems are protected against such scheming adversaries, see my ongoing and extensive coverage of AI Ethics at the link here and the link here, just to name a few.

Perhaps even worse than getting the AI to simply stumble, the adversarial attack can sometimes be used to get AI to perform as the wrongdoer wishes the AI to perform. The attacker can essentially trick the AI into doing the bidding of the malefactor. Whereas some adversarial attacks seek to disrupt or confound the AI, another equally if not more insidious form of deception involves getting the AI to act on the behalf of the attacker.

It is almost as though one might use a mind trick or hypnotic means to get a human to do wrong acts and yet the person is blissfully unaware that they have been fooled into doing something that they should not particularly have done. To clarify, the act that is performed does not necessarily have to be wrong per se or illegal in its merits. For example, conning a bank teller to open the safe or vault for you is not in itself a wrong or illegal act. The bank teller is doing what they legitimately are able to perform as a valid bank-approved task. Of course, if they open the vault and doing so allows a robber to steal the money and all of the gold bullion therein, the bank teller has been tricked into performing an act that they should not have undertaken in the given circumstances.

The use of adversarial attacks against AI has to a great extent arisen because of the way in which much of contemporary AI is devised. You see, this latest era of AI has tended to emphasize the use of Machine Learning (ML) and Deep Learning (DL). These are computational pattern matching techniques and technologies which have dramatically aided the advancement of modern-day AI systems. ML/DL is often used as a key element in many of the AI systems that you interact with daily, such as the use of conversational interactive systems or Natural Language Processing (NLP) akin to Alexa and Siri.

The manner in which ML/DL is designed and fielded provides a fertile opening for the leveraging of adversarial attacks. Cybercrooks generally can guess how the ML/DL was built. They can make reasoned guesses about how the ML/DL will react when put into use. There are only so many ways that ML/DL is usually constructed. As such, the evildoer hackers can try a slew of underhanded ML/DL adversarial tricks to get the AI to either go awry or do their bidding.

In contrast, during the prior era of AI systems, it was somewhat harder to undertake adversarial attacks since much of the AI was more idiosyncratic and written in a more proprietary or individualistic manner. You would have had a more challenging time trying to guess how the AI was constructed and also how it might react when placed into active use. In comparison, ML/DL is largely more predictable as to its susceptibilities (this is not always the case, and please know that I am broadly generalizing).

You might be thinking that if adversarial attacks are relatively able to be targeted specifically at ML/DL then certainly there should be a boatload of cybersecurity measures available to protect against those attacks. One would hope that those devising and releasing their AI applications would ensure that the app was securely able to fight against those adversarial attacks.

The answer is yes and no.

Yes, there exist numerous cybersecurity protections that can be used by and within ML/DL to guard against adversarial attacks. Unfortunately, the answer is also somewhat a no in that many of the AI builders are not especially versed in those protections or are not explicitly including those protections.

There are lots of reasons for this.

One is that some AI software engineers concentrate solely on the AI side and are not particularly concerned about the cybersecurity elements. They figure that someone else further along in the chain of making and releasing the AI will deal with any needed cybersecurity protections. Another reason for the lack of protection against adversarial attacks is that it can be a burden of sorts to the AI project. An AI project might be under a tight deadline to get the AI out the door. Adding into the mix a bunch of cybersecurity protections that need to be crafted or set up will potentially delay the production cycle of the AI. Furthermore, the cost of creating the AI is bound to go up too.

Note that none of those reasons justifies allowing an AI system to be vulnerable to adversarial attacks. Those in the know would say the famous line of either pay me now or pay me later comes into play in this instance. You can skirt past the cybersecurity portions to get an AI system sooner into production, but the chances are that it will then suffer an adversarial attack. A cost-benefit analysis and ROI (return on investment) assessment needs to be properly done to weigh the upfront costs and benefits against the costs of repairing and dealing with cybersecurity intrusions further down the pike.

There is no free lunch when it comes to making ML/DL that is well-protected against adversarial attacks.

That being said, you don't necessarily need to move heaven and earth to be moderately protected against those evildoing tricks. Savvy specialists that are versed in cybersecurity protections can pretty much sit side-by-side with the AI crews and dovetail the security into the AI as it is being devised. There is also the assumption that a well-versed AI builder can readily use AI constructing techniques and technologies that simultaneously aid their AI building and seamlessly encompass adversarial attack protections. To adequately do so, they usually need to know about the nature of adversarial attacks and how to best blunt or mitigate them. This is something only gradually becoming regularly instituted as part of devising AI systems.

A twist of sorts is that more and more people are getting into the arena of developing ML/DL applications. Regrettably, some of those people are not versed in AI per se, and neither are they versed in cybersecurity. The overall idea is that by making the ability to craft AI systems with ML/DL widely available to all, we are aiming to democratize AI. That sounds good, but there are downsides to this popular exhortation, see my analysis and coverage at the link here.

Speaking of twists, I will momentarily get to the biggest twist of them all, namely, I am going to shock you with a recently emerging notion that some find sensible and others believe is reprehensible. I'll give you a taste of where I am heading on this heated and altogether controversial matter.

Are you ready?

There is a movement toward using adversarial attacks as a means to disrupt or fool AI systems that are being used by wrongdoers.

Let me explain.

So far, I have implied that AI is seemingly always being used in the most innocent and positive of ways and that only miscreants would wish to confound the AI via the use of adversarial attacks. But keep in mind that bad people can readily devise AI and use that AI for doing bad things.

You know how it is, what's good for the goose is good for the gander.

Criminals and cybercrooks are eagerly wising up to building and using AI ML/DL to carry out untoward acts. When you come in contact with an AI system, you might not have any means of knowing whether it is an AI For Good versus an AI For Bad type of system. Be on the watch! Just because AI is being deployed someplace does not somehow guarantee that the AI was crafted by well-intended builders. The AI could be deliberately devised for foul purposes.

Here then is the million-dollar question.

Should we be okay with using adversarial attacks on purportedly AI For Bad systems?

I'm sure that your first thought is that we ought to indeed be willing to fight fire with fire. If AI For Good systems can be shaken up via adversarial attacks, we can use those same evildoing adversarial attacks to shake up those atrocious AI For Bad systems. We can rightfully turn the attacking capabilities into an act of goodness. Fight evil using the appalling trickery of evil. The net result would seem to be an outcome of good.

Not everyone agrees with that sentiment.

From an AI Ethics perspective, there is a lot of handwringing going on about this meaty topic. Some would argue that by leveraging adversarial attacks, even when the intent is for the good, you are perpetuating the use of adversarial attacks all told. You are basically saying that it is okay to launch and promulgate adversarial attacks. Shame on you, they exclaim. We ought to be stamping out evil rather than encouraging or expanding upon evil (even if the evil is ostensibly aiming to offset evil and carry out the work of the good).

Those against the use of adversarial attacks would also argue that by keeping adversarial attacks in the game, you are going to merely step into a death knell of quicksand. More and stronger adversarial attacks will be devised under the guise of attacking the AI For Bad systems. That seems like a tremendously noble pursuit. The problem is that the evildoers will undoubtedly also grab hold of those emboldened and super-duper adversarial attacks and aim them squarely at the AI For Good.

You are blindly promoting the cat and mouse gambit. We might be shooting ourselves in the foot.

A retort to this position is that there are no practical means of stamping out adversarial attacks. No matter whether you want them to exist or not, the evildoers are going to make sure they do persist. In fact, the evildoers are probably going to be making the adversarial attacks more resilient and potent, doing so to overcome whatever cyber protections are put in place to block them. Thus, a proverbial head-in-the-sand approach to dreamily pretending that adversarial attacks will simply slip quietly away into the night is pure nonsense.

You could contend that adversarial attacks against AI are a double-edged sword. AI researchers have noted this quandary, as stated by these authors in a telling article in the AI and Ethics journal: "Sadly, AI solutions have already been utilized for various violations and theft, even receiving the name AI or Crime (AIC). This poses a challenge: are cybersecurity experts thus justified to attack malicious AI algorithms, methods and systems as well, to stop them? Would that be fair and ethical? Furthermore, AI and machine learning algorithms are prone to be fooled or misled by the so-called adversarial attacks. However, adversarial attacks could be used by cybersecurity experts to stop the criminals using AI, and tamper with their systems. The paper argues that this kind of attacks could be named Ethical Adversarial Attacks (EAA), and if used fairly, within the regulations and legal frameworks, they would prove to be a valuable aid in the fight against cybercrime" (article by Michał Choraś and Michał Woźniak, The Double-Edged Sword Of AI: Ethical Adversarial Attacks To Counter Artificial Intelligence For Crime).

I'd ask you to mull this topic over and render a vote in your mind.

Is it unethical to use AI adversarial attacks against AI For Bad, or can we construe this as an entirely unapologetic Ethical AI practice?

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let's take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we can set the stage by looking at some examples of adversarial attacks to establish what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I've discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I'll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn't as yet a singular list of universal appeal and concurrence. That's the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts that we are finding our way toward a general commonality of what AI Ethics consists of.

First, let's cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I've covered in-depth at the link here, these are their identified six primary AI ethics principles:

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I've covered in-depth at the link here, these are their six primary AI ethics principles:

I've also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled The Global Landscape Of AI Ethics Guidelines (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the norms of Ethical AI that are being established. This is an important highlight since the usual assumption is that only coders or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

Let's also make sure we are on the same page about the nature of today's AI.

There isn't any AI today that is sentient. We don't have this. We don't know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let's keep things more down to earth and consider today's computational non-sentient AI.

Realize that today's AI is not able to think in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn't any AI today that has a semblance of common sense, nor does it have any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the old or historical data are applied to render a current decision.
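
To make that workflow concrete, here is a minimal sketch in Python of the fit-then-apply pattern just described, using a made-up loan-approval example (the feature names, data values, and the choice of a scikit-learn decision tree are purely illustrative assumptions, not anything drawn from a real system):

# Minimal sketch of the ML/DL workflow described above: fit a model on
# historical decisions, then apply the learned patterns to new data.
# The loan-approval features and values here are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Historical records: [income_in_thousands, years_employed, debt_ratio]
historical_features = [
    [45, 2, 0.40],
    [90, 8, 0.10],
    [30, 1, 0.55],
    [70, 5, 0.20],
]
historical_decisions = [0, 1, 0, 1]  # 0 = denied, 1 = approved (past human decisions)

# The model searches for mathematical patterns in the historical data.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(historical_features, historical_decisions)

# New applicants are scored using the patterns found in the old data.
new_applicants = [[60, 3, 0.25], [28, 1, 0.60]]
print(model.predict(new_applicants))

The point is simply that whatever patterns sit in the historical decisions are exactly what get applied to the new data.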

I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.
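
As a rough illustration of the kind of testing being alluded to, here is a minimal sketch of one simple check, comparing a model's approval rates across a hypothetical group label on held-out data (the data and the notion of what counts as a worrisome gap are illustrative assumptions; real bias audits are considerably more involved):

# A simple bias-probe sketch: compare the model's approval rate across a
# sensitive attribute (here a hypothetical "group" label). A large gap is a
# signal to dig further; it is not by itself proof of bias.
from collections import defaultdict

# (group_label, model_prediction) pairs on a held-out test set; values invented.
predictions = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    approvals[group] += decision

for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"group {group}: approval rate {rate:.2f}")

# Sharply diverging rates suggest the historical data and the learned
# patterns deserve closer scrutiny.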

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

I trust that you can readily see how adversarial attacks fit into these AI Ethics matters. Evildoers are undoubtedly going to use adversarial attacks against ML/DL and other AI that is supposed to be doing AI For Good. Meanwhile, those evildoers are indubitably going to be devising AI For Bad that they foist upon us all. To try and fight against those AI For Bad systems, we could arm ourselves with adversarial attacks. The question is whether we are doing more good or more harm by leveraging and continuing the advent of adversarial attacks.

Time will tell.

One vexing issue is that there is a myriad of adversarial attacks that can be used against AI ML/DL. You might say there are more than you can shake a stick at. Trying to devise protective cybersecurity measures to negate all of the various possible attacks is somewhat problematic. Just when you might think you've done a great job of dealing with one type of adversarial attack, your AI might get blindsided by a different variant. A determined evildoer is likely to toss all manner of adversarial attacks at your AI and be hoping that at least one or more sticks. Of course, if we are using adversarial attacks against AI For Bad, we too would take the same advantageous scattergun approach.

Some of the most popular types of adversarial attacks include:

At this juncture of this weighty discussion, I'd bet that you are desirous of some illustrative examples that might showcase the nature and scope of adversarial attacks against AI and particularly aimed at Machine Learning and Deep Learning. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here's then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the nature of adversarial attacks against AI, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn't a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I'd like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don't yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Adversarial Attacks Against AI

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today's AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to todays AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won't natively somehow know about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let's dive into the myriad of aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn't do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

As earlier mentioned, some of the most popular types of adversarial attacks include:

We can showcase the nature of each such adversarial attack and do so in the context of AI-based self-driving cars.

Adversarial Falsification Attacks

Consider the use of adversarial falsifications.

There are generally two such types: (1) false-positive attacks, and (2) false-negative attacks. In the false-positive attack, the emphasis is on presenting to the AI a so-called negative sample that is then incorrectly classified by the ML/DL as a positive one. The jargon for this is that it is a Type I error (this is reminiscent perhaps of your days of taking a statistics class in college). In contrast, the false-negative attack entails presenting a positive sample that the ML/DL incorrectly classifies as a negative instance, known as a Type II error.
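
If it helps to anchor the terminology, here is a tiny illustrative sketch that counts the two error types from a list of true labels and classifier outputs (the values are invented; 1 means a positive sample, 0 a negative one):

# Tiny illustration of Type I vs. Type II errors as described above.
# 1 = positive sample, 0 = negative sample; the values are made up.
ground_truth = [1, 0, 1, 1, 0, 0]   # what the samples actually are
predicted    = [1, 1, 0, 1, 0, 0]   # what the ML/DL classifier reported

# Type I (false positive): a negative sample classified as positive.
false_positives = sum(1 for truth, pred in zip(ground_truth, predicted)
                      if truth == 0 and pred == 1)
# Type II (false negative): a positive sample classified as negative.
false_negatives = sum(1 for truth, pred in zip(ground_truth, predicted)
                      if truth == 1 and pred == 0)

print("Type I (false positives):", false_positives)   # 1
print("Type II (false negatives):", false_negatives)  # 1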

Suppose that we had trained an AI driving system to detect Stop signs. We used an ML/DL that we had trained beforehand with thousands of images that contained Stop signs. The idea is that we would be using video cameras on the self-driving car to collect video and images of the roadway scene surrounding the autonomous vehicle during a driving journey. As the digital imagery real-time streams into an onboard computer, the ML/DL scans the digital data to detect any indication of a nearby Stop sign. The detection of a Stop sign is obviously crucial for the AI driving system. If a Stop sign is detected by the ML/DL, this is conveyed to the AI driving system and the AI would need to ascertain a suitable means to use the driving controls to bring the self-driving car to a proper and safe stop.

Humans seem to readily be able to detect Stop signs, at least most of the time. Our human perception of such signs is keenly honed by our seemingly innate cognitive pattern matching capacities. All we need to do is learn what a Stop sign looks like and we take things from there. A toddler learns soon enough that a Stop sign is typically red in color, contains the word STOP in large letters, has a special octagonal shape, usually is posted adjacent to the roadway and resides at a person's height, and so on.

Imagine an evildoer that wants to make trouble for self-driving cars.

In a false-positive adversarial attack, the wrongdoer would try to trick the ML/DL into computationally calculating that a Stop sign exists even when there isnt a Stop sign present. Maybe the wrongdoer puts up a red sign along a roadway that looks generally similar to a Stop sign but lacks the word STOP on it. A human would likely realize that this is merely a red sign and not a driving directive. The ML/DL might though calculate that the sign resembles sufficiently enough a Stop sign to the degree that the AI ought to consider the sign as in fact a Stop sign.

You might be tempted to think that this is not much of an adversarial attack and that it seems rather innocuous. Well, suppose that you are driving in a car and meanwhile a self-driving car that is ahead of you suddenly and seemingly without any basis for doing so comes to an abrupt stop (due to having misconstrued a red sign near the roadway as being a Stop sign). You might ram into that self-driving car. It could be that the AI was fooled into computationally calculating that a non-stop sign was a Stop sign, thus committing a false-positive error. You get injured, the passengers in the self-driving car get injured, and perhaps even pedestrians get injured by this dreadful false-positive adversarial attack.

A false-negative adversarial attack is somewhat akin to this preceding depiction though based on tricking the ML/DL into misclassifying in the other direction, as it were. Imagine that a Stop sign is sitting next to the roadway and for all usual visual reasons seems to be a Stop sign. Humans accept that this is indeed a valid Stop sign.
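
The article does not spell out the mechanics of such trickery, but one widely studied way researchers induce this kind of misclassification is the Fast Gradient Sign Method (FGSM), which nudges each pixel slightly in the direction that increases the classifier's loss. The following is a generic sketch using a stand-in PyTorch model and a random image; it is illustrative only and is not how any particular self-driving stack actually works:

# FGSM-style perturbation sketch: push pixels a tiny step in the direction
# that raises the classifier's loss, hoping the prediction flips.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "Stop sign vs. not" classifier operating on tiny 3x32x32 images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
model.eval()
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # pretend camera frame
true_label = torch.tensor([1])                        # 1 = Stop sign present

# Gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), true_label)
loss.backward()

# FGSM step: a barely visible perturbation in the sign of the gradient.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
# Against a trained detector, the attacker's goal is that the second line
# flips to 0 ("no Stop sign"), i.e., a false negative.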

Quantum Computing – Intel

Quantum computing employs the properties of quantum physics like superposition and entanglement to perform computation. Traditional transistors use binary encoding of data represented electrically as on or off states. Quantum bits or qubits can simultaneously operate in multiple states enabling unprecedented levels of parallelism and computing efficiency.
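
As a back-of-the-envelope illustration of those ideas, a qubit can be modeled as a two-component complex vector, and superposition and entanglement fall out of ordinary linear algebra. The following sketch uses plain NumPy; no quantum hardware or vendor SDK is involved, and the example is purely illustrative:

# Qubits as state vectors: superposition and a simple entangled state.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)            # |0>
ket1 = np.array([0, 1], dtype=complex)            # |1>

# A Hadamard gate sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ ket0
print(plus)                                       # both amplitudes ~0.707

# Entanglement: a two-qubit Bell state with amplitude only on |00> and |11>.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(bell)

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(bell) ** 2)                          # [0.5, 0., 0., 0.5]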

Today's quantum systems only include tens or hundreds of entangled qubits, limiting them from solving real-world problems. To achieve quantum practicality, commercial quantum systems need to scale to over a million qubits and overcome daunting challenges like qubit fragility and software programmability. Intel Labs is working to overcome these challenges with the help of industry and academic partners and has made significant progress.

First, Intel is leveraging its expertise in high-volume transistor manufacturing to develop hot silicon spin-qubits, much smaller computing devices that operate at higher temperatures. Second, the Horse Ridge II cryogenic quantum control chip provides tighter integration. And third, the cryoprober enables high-volume testing that is helping to accelerate commercialization.

Even though we may be years away from large-scale implementation, quantum computing promises to enable breakthroughs in materials, chemicals and drug design, financial and climate modeling, and cryptography.

The big money is here: The arms race to quantum computing – Haaretz

There's a major controversy raging in the field of quantum computing. One side consists of experts and researchers who are skeptical of quantum computers' ability to be beneficial in the foreseeable future, simply because the physical and technological challenges are too great. On the other side, if you ask the entrepreneurs and investors at firms banking on quantum computing, that hasn't been the issue for quite some time. From their standpoint, it's only a matter of time and concerted effort until the major breakthrough and the real revolution in the field is achieved. And they're prepared to gamble a lot of money on that.

For decades, most of the quantum research and development has been carried out by academic institutions and government research institutes, but in recent years, steps to make the transition from the academic lab to the industrial sector have increased. Researchers and scientists have been creating or joining companies developing quantum computing technology, and startups in the field have been cropping up at a dizzying pace. In 2021, $3.2 billion was invested in quantum firms around the world, according to The Quantum Insider, compared to $900 million in 2020.

And in the first quarter of this year, about $700 million was invested, a sum similar to the investments in the field between 2015 and 2019 combined. In addition to the surge in startup activity in the field, tech giants such as IBM, Amazon, Google and Microsoft have been investing major resources in the field and have been recruiting experts as well.

"The quantum computing field was academic for a long time, and everything changed the moment that big money reached industry," said Ayal Itzkovitz, managing partner at the Pitango First fund, which has invested in several quantum companies in recent years. "Everything is moving forward more quickly. If three years ago we didn't know if it was altogether possible to build such a computer, now we already know that there will be quantum computers that will be able to do something different from classic computers."

Quantum computers, which are based on the principles of quantum theory, are aimed at providing vastly greater computing power than regular computers, with the capability to carry out a huge number of computations simultaneously. Theoretically it should take them seconds, minutes or hours to do what it would take today's regular supercomputers thousands of years to perform.

Quantum computers are based not on bits, but on qubits produced by a quantum processing unit, which is not limited to the binary of 0 or 1 but is a combination of the two. The idea is that a workable quantum computer, if and when there is such a thing, won't be suitable for just any task but instead for a set of specific problems that require simultaneous computing, such as simulations, for example. It would be relevant for fields such as chemistry, pharmaceuticals, finance, energy and encoding, among others.

It's still all theoretical, and there has yet to be a working quantum computer produced that is capable of performing a task more effectively than a regular computer, but that doesn't bother those engaged in the arms race to develop a breakthrough quantum processor.

A million-qubit computer

IBM, which is one of the pioneers in the industry, recently unveiled a particularly large 127-qubit computer, and it's promising to produce a 1,000-qubit one within the next few years. In 2019, Google claimed quantum supremacy with a computer that managed in 3.5 minutes to perform a task that would have taken a regular computer 10,000 years to carry out. And in May of last year, it unveiled a new quantum center in Santa Barbara, California, and it intends to build a million-qubit computer by 2029 at an investment of billions of dollars.

Amazon has gotten into the field, recruiting researchers and recently launching a new quantum center at the California Institute of Technology, and Intel and Microsoft have also gotten into the game. In addition to their own internal development efforts, Amazon, Microsoft and Google have been offering researchers access to active quantum computers via their cloud computing services.

At the same time, there are several firms in the market that specialize in quantum computing that have already raised considerable sums or have even gone public. One of the most prominent of them is the American company IonQ (which in the past attracted investments from Google, Amazon and Samsung), which last year went public via a SPAC merger. Another such company is the Silicon Valley firm Rigetti Computing, which also went public via a SPAC merger. Then there's Quantinuum, which was the product of a merger between Honeywell Quantum Solutions and Cambridge Quantum.

All that's in addition to a growing startup ecosystem of smaller companies such as Atom Computing and QuEra, which have raised initial funding to develop their own versions of a quantum processor.

In Israel in recent months, the country's first two startups trying to create a quantum processor have been established. They're still in their stealth stage. One is Rehovot-based Quantum Source, which has raised $15 million to develop photonic quantum computing solutions. Its technology is based on research at the Weizmann Institute of Science, and it's headed by leading people in the Israeli processor chip sector. The second is Quantum Art, whose executives came from the Israeli defense sector. Its technology is also based on work at the Weizmann Institute.

There are also other early-stage enterprises that are seeking to develop a quantum processor, including one created by former Intel employees and another by former defense company people. Then there is LightSolver, which is seeking to develop a laser-based computer; it is not quantum technology, but it seeks to provide similar performance.

Going for broke

But all of these are at their early stages from a technological standpoint, and the prominent companies overseas have or are building active but small quantum computers, usually of dozens of qubits, that are only for R&D use to demonstrate their capabilities but without actual practical application. That's out of a sense that developing an effective quantum computer that has a real advantage requires millions of qubits. That's a major disparity that will be difficult to bridge from a technological standpoint.

The problem is that sometimes investing in the here-and-now comes at the expense of investments in the future. The quantum companies are still relatively small and have limited staff. If they have an active computer, they also need to maintain it and support its users in the community and among researchers. That requires major efforts and a lot of money, which might come at the expense of next-generation research, and it is already delaying the work of a large number of quantum computer manufacturers who are seeing how smaller startups focusing only on next-generation development are getting ahead of them.

As a result, there are also companies with an entirely different approach, which seeks to skip over the current generation of quantum computers and go for broke to build an effective computer with millions of qubits capable of error detection and correction even if it takes many years.

In 2016, it was on that basis that the Palo Alto, California firm PsiQuantum was founded. Last year the company raised $450 million (in part from Microsoft and BlackRock) based on a company valuation of $3 billion, becoming one of the hot and promising names in the field.

Itzkovitz, from the Pitango fund, was one of its early investors. "They said they wouldn't make a small computer with a few qubits because it would delay them but would instead go straight for the real goal," he explained.

PsiQuantum is gambling on a fundamentally different paradigm: Most of the companies building an active computer, including the tech giants, have chosen technology based on specific material systems (for example, superconductors or trapped ions). In contrast, PsiQuantum is building a photonic quantum computer, based on light and optics, an approach that until recently was considered physically impossible.

Itzkovitz said that he has encountered a large number of startups that are building quantum processors despite the technological risk and the huge difficulty involved. "In the past two weeks, I have spoken with 12 or 13 companies making qubits, from England, Holland, Finland, the United States and Canada, as if this were the most popular thing there was now in the high-tech industry around the world," he said.

As a result, there are also venture capital funds in Israel and overseas that in the past had not entered the field but that are now looking for such companies to invest in, out of concern not to be left out of the race, as well as a desire to be exposed to the quantum field.

It's the Holy Grail

Similar to the regular computing industry, in quantum computing, it's also not enough to build a processor. A quantum processor is a highly complex system that requires a collection of additional hardware components, as well as software and supporting algorithms, of course, all of which are designed to permit its core to function efficiently and to take advantage of the ability and potential of qubits in the real world. Therefore, at the same time that quantum processor manufacturers have been at work, in recent years there has been a growing industry of startups seeking to provide them and clients with layers of hardware and software in the tower that stands on the shoulders of the quantum computer's processor.

A good example of that is the Israeli firm Quantum Machines, which was established in 2018 and has so far raised $75 million. It has developed a monitoring and control system for quantum computers consisting of hardware and software. According to the company, the system constitutes the brain of the quantum processor and enables it to perform computing activity well and to fulfill its potential. There are also other companies in the market supplying these and other components, including even the refrigerators necessary to build the computers.

Some companies develop software and algorithms in the hope that they will be needed to effectively operate the computers. One of them is Qedma Quantum Computing from Israel, which has developed what it describes as an operating system for quantum computers that is designed to reduce errors and increase quantum computers' reliability.

"Our goal is to provide hardware manufacturers with the tools that will enable them to do something efficient with the quantum computers and to help create a world in which quantum algorithmic advantages can actually be realized," said Asif Sinay, the company's founder-partner and CEO. "It's the Holy Grail of all of the quantum companies in the world."

The big challenge facing these companies is proving that their technology is genuine and that it provides real value to companies developing quantum processors. That's of course in addition to providing a solution that is sufficiently unique that the tech giants won't be able to develop it on their own.

"The big companies don't throw money around just like that," Sinay said. "They want to create cooperation with companies that help them reach their goal and to improve the quality of the quantum computer. Unlike the cyber field, for example, you can't come and scare a customer into buying your product. Here you're sitting with people at your level, really smart [people] who understand that you need to give them value that assists in the company's performance and to take the computer to a higher level."

Two concurrent arms races

What the companies mentioned so far have in common is that they are building technology designed to create an efficient quantum computer, whether it's a processor or the technology surrounding it. At the same time, another type of company is gaining steam: those that develop the tools to develop quantum software that in the future will make it possible for developers and firms to build applications for the quantum computer.

Classiq is an Israeli company that has developed tools that make it easier for programmers to write software for quantum computers. It raised $33 million at the beginning of the year and has raised $48 million all told. A competitor in Singapore, Horizon Quantum Computing, which just days ago announced that it raised $12 million, is offering a similar solution.

Another prominent player is the U.S. firm Zapata, in which Israel's Pitango fund has also invested, and which is engaged in services involved in building quantum applications for corporations.

"There are two concurrent arms races happening now," says Nir Minerbi, co-founder and CEO of Classiq. "One is to build the world's first fully functional quantum computer. And many startups and tech giants are working on that, and that market is now peaking. The second race is the one for creating applications and software that runs on quantum and can serve these firms. This is a field that is now only making its first steps, and it's hard to know when it will reach its goal."

PsiQuantum’s Path to 1 Million Qubits by the Middle of the Decade – HPCwire

PsiQuantum, founded in 2016 by four researchers with roots at Bristol University, Stanford University, and York University, is one of a few quantum computing startups that's kept a moderately low PR profile. (That's if you disregard the roughly $700 million in funding it has attracted.) The main reason is PsiQuantum has eschewed the clamorous public chase for NISQ (noisy intermediate-scale quantum) computers and set out to develop a million-qubit system the company says will deliver big gains on big problems as soon as it arrives.

When will that be?

PsiQuantum says it will have all the manufacturing processes in place by the middle of the decade and it's working closely with GlobalFoundries (GF) to turn its vision into reality. The generous size of its funding suggests many think it will succeed. PsiQuantum is betting on a photonics-based approach called fusion-based quantum computing (paper) that relies mostly on well-understood optical technology but requires extremely precise manufacturing tolerances to scale up. It also relies on managing individual photons, something that has proven difficult for others.

Here's the company's basic contention:

Success in quantum computing will require large, fault-tolerant systems and the current preoccupation with NISQ computers is an interesting but ultimately mistaken path. The most effective and fastest route to practical quantum computing will require leveraging (and innovating) existing semiconductor manufacturing processes and networking thousands of quantum chips together to reach the million-qubit system threshold that's widely regarded as necessary to run game-changing applications in chemistry, banking, and other sectors.

"It's not that incrementalism is bad. In fact, it's necessary. But it's not well served when focused on delivering NISQ systems," argues Peter Shadbolt, one of PsiQuantum's founders and the current chief scientific officer.

"Conventional supercomputers are already really good. You've got to do some kind of step change, you can't increment your way [forward], and especially you can't increment with five qubits, 10 qubits, 20 qubits, 50 qubits to a million. That is not a good strategy. But it's also not true to say that we're planning to leap from zero to a million," said Shadbolt. "We have a whole chain of incrementally larger and larger systems that we're building along the way. Those allow us to validate the control electronics, the systems integration, the cryogenics, the networking, etc. But we're not spending time and energy trying to dress those up as something that they're not. We're not having to take those things and try to desperately extract computational value from something that doesn't have any computational value. We're able to use those intermediate systems for our own learnings and for our own development."

That's a much different approach from the majority of quantum computing hopefuls. Shadbolt suggests the broad message about the need to push beyond NISQ dogma is starting to take hold.

"There is a change that is happening now, which is that people are starting to program for error-corrected quantum computers, as opposed to programming for NISQ computers. That's a welcome change and that's happening across the whole space. If you're programming for NISQ computers, you very rapidly get deeply entangled, if you'll forgive the pun, with the hardware. You start looking under the hood, and you start trying to find shortcuts to deal with the fact that you have so few gates at your disposal. So, programming NISQ computers is a fascinating, intellectually stimulating activity, I've done it myself, but it rapidly becomes sort of siloed and you have to pick a winner," said Shadbolt.

"With fault tolerance, once you start to accept that you're going to need error correction, then you can start programming in a fault-tolerant gate set which is hardware agnostic, and it's much more straightforward to deal with. There are also some surprising characteristics, which mean that the optimizations that you make to algorithms in a fault-tolerant regime are in many cases the diametric opposite of the optimizations that you would make in the NISQ regime. It really takes a different approach but it's very welcome that the whole industry is moving in that direction and spending less time on these kinds of myopic, narrow efforts," he said.

That sounds a bit harsh. PsiQuantum is no doubt benefitting from the manifold efforts by the young quantum computing ecosystem to tout advances and build traction by promoting NISQ use cases. There's an old business axiom that says a little hype is often a necessary lubricant to accelerate development of young industries; quantum computing certainly has its share. A bigger question is will PsiQuantum beat rivals to the end-game? IBM has laid out a detailed roadmap and said 2023 is when it will start delivering quantum advantage, using a 1000-qubit system, with plans for eventual million-qubit systems. Intel has trumpeted its CMOS strength to scale up manufacturing its quantum dot qubits. D-Wave has been selling its quantum annealing systems to commercial and government customers for years.

It's really not yet clear which of the qubit technologies (semiconductor-based superconducting, trapped ions, neutral atoms, photonics, or something else) will prevail and for which applications. What's not ambiguous is PsiQuantum's Go Big or Go Home strategy. Its photonics approach, argues the company, has distinct advantages in manufacturability and scalability, operating environment (less frigid), ease of networking, and error correction. Shadbolt recently talked with HPCwire about the company's approach, technology and progress.

What is fusion-based quantum computing?

Broadly, PsiQuantum uses a form of linear optical quantum computing in which individual photons are used as qubits. Over the past year and a half, the previously stealthy PsiQuantum has issued several papers describing the approach while keeping many details close to the vest (papers listed at end of article). The computation flow is to generate single photons and entangle them. PsiQuantum uses dual rail entangling/encoding for photons. The entangled photons are the qubits and are grouped into what PsiQuantum calls resource states, a group of qubits if you will. Fusion measurements (more below) act as gates. Shadbolt says the operations can be mapped to a standard gate-set to achieve universal, error-corrected, quantum computing.

On-chip components carry out the process. It all sounds quite exotic, in part because it differs from more widely used matter-based qubit technologies. The figure below, taken from a PsiQuantum paper, Fusion-based quantum computation, issued about a year ago, roughly describes the process.

Digging into the details is best served by reading the papers, and the company has archived videos exploring its approach on its website. The video below is a good brief summation by Mercedes Gimeno-Segovia, vice president of quantum architecture at PsiQuantum.

Shadbolt also briefly described fusion-based quantum computation (FBQC).

"Once you've got single photons, you need to build what we refer to as seed states. Those are pretty small entangled states and can be constructed again using linear optics. So, you take some single photons and send them into an interferometer and together with single photon detection, you can probabilistically generate small entangled states. You can then multiplex those again and basically the task is to get as fast as possible to a large enough, complex enough, appropriately structured, resource state which is ready to then be acted upon by a fusion network. That's it. You want to kill the photon as fast as possible. You don't want photons living for a long time if you can avoid it. That's pretty much it," said Shadbolt.

"The fusion operators are the smallest, simplest piece of the machine. The multiplexed single-photon sources are the biggest, most expensive piece. Everything in the middle is kind of the secret sauce of our architecture, some of that we've put out in that paper and you can see kind of how that works," he said. (At the risk of overkill, another brief description of the system from PsiQuantum is presented at the end of the article.)

One important FBQC advantage, says PsiQuantum, is that the shallow depth of optical circuits makes error correction easier. The small entangled states fueling the computation are referred to as resource states. Importantly, their size is independent of the code distance used or the computation being performed. This allows them to be generated by a constant number of operations. Since the resource states will be immediately measured after they are created, the total depth of operations is also constant. As a result, errors in the resource states are bounded, which is important for fault-tolerance.

Some of the differences between PsiQuantum's FBQC design and the more familiar MBQC (measurement-based quantum computing) paradigm are shown below.

Another advantage is the operating environment.

"Nothing about photons themselves requires cryogenic operation. You can do very high fidelity manipulation and generation of qubits at room temperature, and in fact, you can even detect single photons at room temperature just fine. The efficiency of room temperature single photon detectors is not good enough for fault tolerance. These room temperature detectors are based on pretty complex semiconductor devices, avalanche photodiodes, and there's no physical reason why you couldn't push those to the necessary efficiency, but it looks really difficult [and] people have been trying for a very long time," said Shadbolt.

"We use a superconducting single-photon detector, which can achieve the necessary efficiencies without a ton of development. It's worth noting those detectors run in the ballpark of 4 Kelvin. So liquid helium temperature, which is still very cold, but it's nowhere near as cold as the milli-Kelvin temperatures required for superconducting qubits or some of the competing technologies," said Shadbolt.

This has important implications for control circuit placement, as well as for the reduced power needed to maintain the 4 Kelvin environment.

There's a lot to absorb here, and it's best done directly from the papers. PsiQuantum, like many other quantum start-ups, was founded by researchers who were already digging into the quantum computing space, and they've shown that PsiQuantum's FBQC flavor of linear optical quantum computing will work. While at Bristol, Shadbolt was involved in the first demonstration of running a Variational Quantum Eigensolver (VQE) on a photonic chip.

The biggest challenges for PsiQuantum, he suggests, are developing manufacturing techniques and system architecture around well-known optical technology. The company argues having a Tier-1 fab partner such as GlobalFoundries is decisive.

You can go into infinite detail on the architecture and how all the bits and pieces go together. But the point of optical quantum computing is that the network of components is pretty complicated (all sorts of modules and structures and multiplexing strategies, and resource state generation schemes and interferometers, and so on) but they're all just made out of beam splitters, and switches, and single photon sources and detectors. It's kind of like in a conventional CPU: you can go in with a microscope and examine the structure of the cache and the ALU and whatever, but underneath it's all just transistors. It's the same kind of story here. The limiting factor in our development is the semiconductor process enablement. The thesis has always been that if you tried to build a quantum computer anywhere other than a high-volume semiconductor manufacturing line, your quantum computer isn't going to work, he said.

Any quantum computer needs millions of qubits. Millions of qubits don't fit on a single chip. So you're talking about heaps of chips, probably billions of components realistically, and they all need to work and they all need to work better than the state of the art. That brings us to the progress, which is, again, rearranging those various components into ever more efficient and complex networks in pretty close analogy with CPU architecture. It's a very key part of our IP, but it's not rate limiting and it's not terribly expensive to change the network of components on the chip once we've got the manufacturing process. We're continuously moving the needle on that architecture development and we've improved these architectures in terms of their tolerance to loss by more than 150x, [actually] well beyond that. We've reduced the size of the machine, purely through architectural improvements, by many, many orders of magnitude.

The big, expensive, slow pieces of the development are in being able to build high-quality components at GlobalFoundries in New York. What we've already done there is to put single photon sources and superconducting nanowire single photon detectors into that manufacturing process engine. We can build wafers, 300-millimeter wafers, with tens of thousands of components on the wafer, including a full silicon photonics PDK (process design kit), and also a very high-performing single photon detector. That's real progress that brings us closer to being able to build a quantum computer, because that lets us build millions to billions of components.

Shadbolt says real systems will quickly follow development of the manufacturing process. PsiQuantum, like everyone in the quantum computing community, is collaborating closely with potential users. Roughly a week ago, it issued a joint paper with Mercedes-Benz discussing quantum computer simulation of Li-ion chemistry. If the PsiQuantum-GlobalFoundries process is ready around 2025, can a million-qubit system (100 logical qubits) be far behind?

Shadbolt would only say that things will happen quickly once the process has been fully developed. He noted there are three ways to make money with a quantum computer: sell machines, sell time, and sell solutions that come from the machine. I think we're exploring all of the above, he said.

Our customers, which is a growing list at this point (pharmaceutical companies, car companies, materials companies, big banks), are coming to us to understand what a quantum computer can do for them. To understand that, what we are doing, principally, is fault-tolerant resource counting, said Shadbolt. So that means we're taking the algorithm, or taking the problem the customer has, working with their technical teams to look under the hood and understand the technical requirements of solving that problem. We are turning that into the quantum algorithms and subroutines that are appropriate. We're compiling that for the fault-tolerant gate set that will run on top of that fusion network, which by the way is a completely vanilla textbook fault-tolerant gate set.

Stay tuned.

PsiQuantum Papers

Fusion-based quantum computation, https://arxiv.org/abs/2101.09310

Creation of Entangled Photonic States Using Linear Optics, https://arxiv.org/abs/2106.13825

Interleaving: Modular architectures for fault-tolerant photonic quantum computing, https://arxiv.org/abs/2103.08612

Description of PsiQuantum's Fusion-Based System from the Interleaving Paper

Useful fault-tolerant quantum computers require very large numbers of physical qubits. Quantum computers are often designed as arrays of static qubits executing gates and measurements. Photonic qubits require a different approach. In photonic fusion-based quantum computing (FBQC), the main hardware components are resource-state generators (RSGs) and fusion devices connected via waveguides and switches. RSGs produce small entangled states of a few photonic qubits, whereas fusion devices perform entangling measurements between different resource states, thereby executing computations. In addition, low-loss photonic delays such as optical fiber can be used as fixed-time quantum memories simultaneously storing thousands of photonic qubits.

Here, we present a modular architecture for FBQC in which these components are combined to form interleaving modules consisting of one RSG with its associated fusion devices and a few fiber delays. Exploiting the multiplicative power of delays, each module can add thousands of physical qubits to the computational Hilbert space. Networks of modules are universal fault-tolerant quantum computers, which we demonstrate using surface codes and lattice surgery as a guiding example. Our numerical analysis shows that in a network of modules containing 1-km-long fiber delays, each RSG can generate four logical distance-35 surface-code qubits while tolerating photon loss rates above 2% in addition to the fiber-delay loss. We illustrate how the combination of interleaving with further uses of non-local fiber connections can reduce the cost of logical operations and facilitate the implementation of unconventional geometries such as periodic boundaries or stellated surface codes. Interleaving applies beyond purely optical architectures, and can also turn many small disconnected matter-qubit devices with transduction to photons into a large-scale quantum computer.

Slides/Figures from various PsiQuantum papers and public presentations

Read more here:
PsiQuantum's Path to 1 Million Qubits by the Middle of the Decade - HPCwire

Quantum Isn't Armageddon; But Your Horse Has Already Left the Barn – PaymentsJournal

It is true that adversaries are collecting our encrypted data today so they can decrypt it later. In essence, anything sent using PKI (Public Key Infrastructure) today may very well be decrypted when quantum computing becomes available. Our recent report identifies the risk to account numbers and other long-tail data (data that still has high value 5 years or more into the future). Data you send today using traditional PKI is the horse that left the barn.

But this article describes a scary scenario in which an adversary's quantum computer hacks the US military's communications and uses that advantage to sink the US Fleet; that is highly unlikely as long as government agencies follow orders. The US government specifies that AES-128 be used for secret (unclassified) information and AES-256 for top secret (classified) information. While AES-128 can be cracked using quantum computers, one estimate suggests that would take 6 months of computing time. That would be very expensive. Most estimates indicate that cracking AES-256 would take hundreds of years, but the military is already planning an even safer alternative; it just isn't yet in production (that I am aware of):

Arthur Herman conducted two formidable studies on what a single, successful quantum computing attack would do to both our banking systems and a major cryptocurrency. A single attack on the banking system by a quantum computer would take down Fedwire and cause $2 trillion of damage in a very short period of time. A similar attack on a cryptocurrency like bitcoin would cause a 90 percent drop in price and would start a three-year recession in the United States. Both studies were backed up by econometric models using over 18,000 data points to predict these cascading failures.

Another disastrous effect could be that an attacker with a CRQC could take control of any systems that rely on standard PKI. So, by hacking communications, they would be able to disrupt data flows so that the attacker could take control of a device, crashing it into the ground or even using it against an enemy. Think of the number of autonomous vehicles that we are using both from a civilian and military standpoint. Any autonomous devices such as passenger cars, military drones, ships, planes, and robots could be hacked by a CRQC and shut down or controlled to perform activities not originally intended by the current users or owners.
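The gap between the AES-128 and AES-256 estimates cited above follows from Grover's algorithm, which reduces a brute-force key search from roughly 2^k to roughly 2^(k/2) evaluations, leaving AES-128 with an effective 2^64 work factor while AES-256 still retains 2^128. The back-of-the-envelope Python sketch below illustrates that arithmetic only; the assumed iteration rate is hypothetical, not a real hardware figure.

```python
# Back-of-the-envelope sketch of why AES-256 is considered quantum-resistant
# while AES-128 is at risk: Grover's algorithm cuts an exhaustive key search
# from ~2^k to ~2^(k/2) iterations. The iteration rate below is an assumption
# for illustration, not a measured figure for any machine.

SECONDS_PER_YEAR = 365 * 24 * 3600

def grover_years(key_bits: int, iterations_per_second: float) -> float:
    """Rough time to run ~2^(k/2) Grover iterations at the given rate."""
    iterations = 2 ** (key_bits // 2)
    return iterations / iterations_per_second / SECONDS_PER_YEAR

if __name__ == "__main__":
    assumed_rate = 1e12  # hypothetical: a trillion Grover iterations per second
    for bits in (128, 256):
        print(f"AES-{bits}: roughly {grover_years(bits, assumed_rate):.3g} years")
```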

Overview by Tim Sloane, VP, Payments Innovation at Mercator Advisory Group

Go here to read the rest:
Quantum Isn't Armageddon; But Your Horse Has Already Left the Barn - PaymentsJournal

Aalto University Wins a €2.5 Million ($2.66M USD) Grant to Develop a New Type of Superconducting Qubit – Quantum Computing Report


The award was made by the European Research Council for a project named ConceptQ. It will cover a five-year period to research a new superconducting quantum device concept utilizing increased anharmonicity, a simple structure, and insensitivity to charge and flux noise. One problem with superconducting qubits is that they can sometimes end up in states other than |0> or |1>. Such three-level systems are sometimes called qutrits and can be in a superposition of three different states denoted |0>, |1>, and |2>. In current quantum processors, the |2> state is not desired and can cause a loss of qubit fidelity. Aalto's new qubit design is meant to reduce or eliminate the occurrence of the |2> state, which would remove a source of errors and help to increase the accuracy of the calculation. Another aspect of the project will be to develop low-temperature cryoCMOS electronics that can be used to control qubits inside a dilution refrigerator. More information about this grant and the ConceptQ project is available in a news release posted on the Aalto University website.
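As an illustration of why the |2> state causes trouble (this is a generic sketch using textbook transmon-style numbers, not a description of the ConceptQ device), note that when anharmonicity is small the |1>-to-|2> transition sits only slightly away from the |0>-to-|1> transition, so fast control pulses can leak population into |2>; a design with larger anharmonicity widens that gap.

```python
# Illustrative sketch of leakage to the |2> state in weakly anharmonic
# superconducting qubits, and why increased anharmonicity helps.
# Values are typical textbook transmon numbers, not Aalto's design.

def transition_frequencies(f01_ghz: float, anharmonicity_ghz: float):
    """For a weakly anharmonic oscillator, E_n/h ≈ n*f01 + n(n-1)/2 * alpha,
    so the |1>->|2> transition sits at f01 + alpha (alpha is negative for a transmon)."""
    return f01_ghz, f01_ghz + anharmonicity_ghz

if __name__ == "__main__":
    f01, f12 = transition_frequencies(f01_ghz=5.0, anharmonicity_ghz=-0.2)
    detuning_mhz = abs(f01 - f12) * 1000
    print(f"|0>-|1> at {f01} GHz, |1>-|2> at {f12} GHz ({detuning_mhz:.0f} MHz apart)")
    # A drive pulse whose spectral width approaches this detuning also excites
    # the |1>-|2> transition, leaking population into |2>. A qubit with larger
    # anharmonicity pushes the |1>-|2> transition further away.
```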

April 26, 2022


See the rest here:
Aalto University Wins a €2.5 Million ($2.66M USD) Grant to Develop a New Type of Superconducting Qubit - Quantum Computing Report

Chip startups using light instead of wires gaining speed and investments – Reuters

April 26 (Reuters) - Computers using light rather than electric currents for processing, only years ago seen as research projects, are gaining traction and startups that have solved the engineering challenge of using photons in chips are getting big funding.

In the latest example, Ayar Labs, a startup developing this technology called silicon photonics, said on Tuesday it had raised $130 million from investors including chip giant Nvidia Corp (NVDA.O).

While the transistor-based silicon chip has increased computing power exponentially over past decades as transistors have reached the width of several atoms, shrinking them further is challenging. Not only is it hard to make something so minuscule, but as they get smaller, signals can bleed between them.


So, Moore's law, which said every two years the density of the transistors on a chip would double and bring down costs, is slowing, pushing the industry to seek new solutions to handle increasingly heavy artificial intelligence computing needs.

According to data firm PitchBook, last year silicon photonics startups raised over $750 million, doubling from 2020. In 2016 that was about $18 million.

"A.I. is growing like crazy and taking over large parts of the data center," Ayar Labs CEO Charles Wuischpard told Reuters in an interview. "The data movement challenge and the energy consumption in that data movement is a big, big issue."

The challenge is that many large machine-learning algorithms can use hundreds or thousands of chips for computing, and there is a bottleneck on the speed of data transmission between chips or servers using current electrical methods.

Light has been used to transmit data through fiber-optic cables, including undersea cables, for decades, but bringing it to the chip level was hard as devices used for creating light or controlling it have not been as easy to shrink as transistors.

PitchBook's senior emerging technology analyst Brendan Burke expects silicon photonics to become common hardware in data centers by 2025 and estimates the market will reach $3 billion by then, similar to the market size of the A.I. graphics chip market in 2020.

Beyond connecting transistor chips, startups using silicon photonics for building quantum computers, supercomputers, and chips for self-driving vehicles are also raising big funds.

PsiQuantum has raised about $665 million so far, although the promise of quantum computers changing the world is still years out.

Lightmatter, which builds processors using light to speed up AI workloads in the datacenter, raised a total of $113 million and will release its chips later this year and test with customers soon after.

Luminous Computing, a startup building an AI supercomputer using silicon photonics backed by Bill Gates, raised a total of $115 million.

It is not just the startups pushing this technology forward. Semiconductor manufacturers are also gearing up to use their silicon chip-making technology for photonics.

GlobalFoundries Head of Computing and Wired Infrastructure Amir Faintuch said collaboration with PsiQuantum, Ayar, and Lightmatter has helped build up a silicon photonics manufacturing platform for others to use. The platform was launched in March.

Peter Barrett, founder of venture capital firm Playground Global, an investor in Ayar Labs and PsiQuantum, believes in the long-term prospects for silicon photonics for speeding up computing, but says it is a long road ahead.

"What the Ayar Labs guys do so well ... is they solved the data interconnect problem for traditional high-performance (computing)," he said. "But it's going to be a while before we have pure digital photonic compute for non-quantum systems."


Reporting by Jane Lanhee Lee; Editing by Stephen Coates


Read more:
Chip startups using light instead of wires gaining speed and investments - Reuters

Earth Day 2022: Quantum Computing has the Key to Protect Environment! – Analytics Insight

Can quantum computing hold the power to meet sustainable development goals?

Quantum computing has been gaining popularity as quantum mechanics is harnessed in increasingly capable quantum computers, which can take on problems far too complex for conventional machines. Quantum computing may also hold a key to protecting the environment. This Earth Day 2022 is a good moment to consider how the technology could support sustainable development, so let's dig deeper into the ways quantum computing might help protect the environment.

Earth Day 2022 is celebrated across the world to raise awareness of environmental issues. It encourages ideas for reducing carbon footprints and energy consumption in support of sustainable development, and quantum computing could become an effective tool in that effort.

Quantum computers perform calculations with qubits rather than the GPU and CPU cores of conventional supercomputers, letting them simulate problems that human beings and classical computers cannot solve within a reasonable period of time.

Now, with 21st-century advances in technology, quantum computing could power sustainable development. Quantum computers could help protect the environment by aiding carbon capture and the fight against climate change and global warming.

Quantum computing can simulate large, complicated molecules, which could lead to new catalysts for capturing carbon from the atmosphere. Room-temperature superconductors could cut the roughly 10% of energy production that is lost in transmission. Quantum computing could also support better processes for feeding a growing population, as well as more efficient batteries.

Quantum computing is set to address global challenges, raise awareness, generate solutions, and support the sustainable development goals highlighted on Earth Day 2022. Better climate models built with quantum computers could turn that promise into reality, providing deeper insight into how human activity is affecting the environment and holding back sustainable development.

Quantum computers with a few hundred qubits could help find catalysts for industrial processes that currently consume 3-5% of the world's gas production and 1-2% of its annual energy. They could also be used to design catalysts for capturing carbon from the air, potentially cutting carbon emissions by 80%-90%. In this way, quantum computing could help rein in rising global temperatures.

That being said, let's celebrate Earth Day 2022 with quantum computing helping the world recycle carbon dioxide and reduce harmful emissions.



See the rest here:
Earth Day 2022: Quantum Computing has the Key to Protect Environment! - Analytics Insight

Global Quantum Computing in Communication Market 2022 Precise Scenario Covering Trends, Opportunities and Growth Forecast 2028 Ripon College Days -…

The recent MarketandResearch.biz analysis of the Global Quantum Computing in Communication Market anticipates exponential growth from 2022 to 2028. The report presents a market-size estimation in terms of volumes for the forecast period and draws on past and present market research as a framework for evaluating the market's potential. The study is based on in-depth research of market dynamics, market size, problems, challenges, competition analysis, and the companies involved, and takes a close look at a number of critical factors that drive the global Quantum Computing in Communication market's growth.

The report provides a comprehensive analysis of the global Quantum Computing in Communication market, including market trends, market size, market value, and market growth over the forecast period, both on a compound and yearly basis. The document also contains a comprehensive analysis of the companies' future prospects. The research defines the market situation and forecast details of the main regions, with a logical presentation of leading producers, product categories, and end-use segments.

DOWNLOAD FREE SAMPLE REPORT: https://www.marketandresearch.biz/sample-request/221953

The study looks at competing factors that are important for pushing your business to the next level of innovation. This report then forecasts the market development patterns for this industry. An analysis of upstream raw resources, downstream demand, and current market dynamics is also included in this part.

The major regions covered in the report are:

The product can be segmented into the following market segments based on its type:

Market segmentation by application, broken down into:

The following are the prominent players profiled in the market report:

ACCESS FULL REPORT: https://www.marketandresearch.biz/report/221953/global-quantum-computing-in-communication-market-growth-status-and-outlook-2021-2027

To estimate and forecast the market size, variables such as product price, production, consumption/adoption, import & export, penetration rate, regulations, innovations, technical advancements, demand in specific countries, demand by specific end-use, socio-economic factors, inflation, legal factors, historic data, and regulatory framework were examined.

Customization of the Report:

This report can be customized to meet the client's requirements. Please connect with our sales team (sales@marketandresearch.biz), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on 1-201-465-4211 to share your research requirements.

Contact Us: Mark Stone, Head of Business Development, Phone: 1-201-465-4211, Email: sales@marketandresearch.biz

Read the original post:
Global Quantum Computing in Communication Market 2022 Precise Scenario Covering Trends, Opportunities and Growth Forecast 2028 Ripon College Days -...

Australian Institute for Machine Learning (AIML …

News

14 Apr: AI for space research delivers back-to-back success in global satellite challenge

South Australia's leadership in space innovation has been recognised, with an AIML-led team securing first place in a global AI competition organised by the European Space Agency.

12 Apr: Tech and defence experts call to build AI Australia

Australia must commit to building its sovereign AI research and innovation capability, or risk being left behind as other countries race to pursue their ambitious AI strategies.

11 Feb: Meet the amazing women training AI machines

For International Day of Women and Girls in Science, meet some of the women at AIML who are building great new things and leading the way in cutting-edge machine learning technology.

16 Dec: Machine learning students say cheers with AI beers

How do you build a neural network that can learn how to make beer? We'll show you.

08 Dec: AI + industry collaborations bring award-winning success

South Australia's capacity to lead innovation in AI and machine learning has been recognised at the 2021 SA Science and Innovation Excellence Awards, with an AIML team winning the category of Excellence in Science and Industry Collaboration.

24 Nov: New centre boosts AIML's advanced machine learning research and innovation

Australia's advanced machine learning capability has received a boost, with a new $20m research and innovation initiative now underway at AIML.

More here:
Australian Institute for Machine Learning (AIML ...

AI Dynamics Will Employ Machine Learning to Triage TB Patients More Accurately, Quickly, Simply and Inexpensively Using Cough Sound Data, Bringing…

Selected by QB3 and UCSF for the R2D2 TB Network's Scale Up Your TB Diagnostic Solution Program

BELLEVUE, Wash., April 26, 2022 (GLOBE NEWSWIRE) -- AI Dynamics, an organization founded on the belief that everyone should have access to the power of artificial intelligence (AI) to change the world, has been selected for the Rapid Research in Diagnostics Development for TB Network (R2D2 TB Network) Scale Up Your TB Diagnostic Solution Program, hosted by QB3 and the UCSF Rosenman Institute. With 1.5 million deaths reported each year, tuberculosis (TB) is the worldwide leading cause of death from a single infectious disease agent. The goal of the program is to harness machine learning technology for triaging TB using simple and affordable tests that can be performed on easy-to-collect samples such as cough sounds.

Currently, a cough lasting two weeks is widely used to determine who requires costly confirmatory testing, which delays the initiation of treatment. AI Dynamics will build a proof-of-concept machine learning model to triage TB patients more accurately, quickly, simply and inexpensively using cough sounds, relieving patients from paying for unnecessary molecular and culture TB tests. Because TB is prevalent in under-resourced and remote locations, access to affordable early detection options is necessary to prevent disease transmission and deaths in such countries.
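As a rough illustration of how such a triage model is typically assembled (a generic sketch, not AI Dynamics' NeoPulse pipeline), cough recordings are summarized by spectral features such as MFCCs and fed to a classifier that flags patients for confirmatory testing. The function and variable names below are illustrative assumptions.

```python
# Minimal sketch of a cough-audio triage model (illustrative only).
# Each recording is summarized by MFCC statistics, and a classifier
# flags patients who should receive confirmatory TB testing.
import numpy as np
import librosa
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def cough_features(path: str, sr: int = 16000) -> np.ndarray:
    """Load a cough recording and summarize it with MFCC means and stds."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_triage_model(paths, labels):
    """Train a simple triage classifier; labels are 1 for confirmed TB."""
    X = np.stack([cough_features(p) for p in paths])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, stratify=labels, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return model, auc
```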

At the core of AI Dynamics' mission is providing equal access to the power of AI to everyone, and we are committed to working with like-minded companies that recognize the positive impact innovative technology can have on the world, Rajeev Dutt, Founder and CEO of AI Dynamics said. The collaboration and accessible datasets that the R2D2 TB Network provides help to facilitate life-changing diagnostics for the most vulnerable populations.

The R2D2 TB Network offers a transparent and partner-engaged process for the identification, evaluation and advancement of promising TB diagnostics by providing experts and data and facilitating rigorous clinical study evaluation. AI Dynamics will build and validate a model using cough sounds collected from sites worldwide through the R2D2 TB Network.

About AI Dynamics:

AI Dynamics aims to make artificial intelligence (AI) accessible to organizations of all sizes. The company's NeoPulse Framework is an intuitive development and management platform for AI, which enables companies to develop and implement deep neural networks and other machine learning models that can improve key performance metrics. The company's team brings decades of experience in the fields of machine learning and artificial intelligence from leading companies and research organizations. For more information, please visit aidynamics.com.

About The R2D2 TB Network:

The Rapid Research in Diagnostics Development for TB Network (R2D2 TB Network) brings together various TB experts with highly experienced clinical study sites in 10 countries. For further information, please visit their website at https://www.r2d2tbnetwork.org/.

Media Contact:

Justine Goodiel, UPRAISE Marketing + PR for AI Dynamics, aidynamics@upraisepr.com

Originally posted here:
AI Dynamics Will Employ Machine Learning to Triage TB Patients More Accurately, Quickly, Simply and Inexpensively Using Cough Sound Data, Bringing...

Politics, Machine Learning, and Zoom Conferences in a Pandemic: A Conversation with an Undergraduate Researcher – Caltech

In every election, after the polls close and the votes are counted, there comes a time for reflection. Pundits appear on cable news to offer theories, columnists pen op-eds with warnings and advice for the winners and losers, and parties conduct postmortems.

The 2020 U.S. presidential election in which Donald Trump lost to Joe Biden was no exception.

For Caltech undergrad Sreemanti Dey, the election offered a chance to do her own sort of reflection. Dey, an undergrad majoring in computer science, has a particular interest in using computers to better understand politics. Working with Michael Alvarez, professor of political and computational social science, Dey used machine learning and data collected during the 2020 election to find out what actually motivated people to vote for one presidential candidate over another.

In December, Dey presented her work on the topic at the fourth annual International Conference on Applied Machine Learning and Data Analytics, which was held remotely; the organizers recognized her paper as the best paper at the conference.

We recently chatted with Dey and Alvarez, who is co-chair of the Caltech-MIT Voting Project, about their research, what machine learning can offer to political scientists, and what it is like for undergrads doing research at Caltech.

Sreemanti Dey: I think that how elections are run has become a really salient issue in the past couple of years. Politics is in the forefront of people's minds because things have gotten so, I guess, strange and chaotic recently. That, along with a lot of factors in 2020, made people care a lot more about voting. That makes me think it's really important to study how elections work and how people choose candidates in general.

Sreemanti: I've learned from Mike that a lot of social science studies are deductive in nature. So, you pick a hypothesis and then you pick the data that would best help you understand the hypothesis that you've chosen. We wanted to take a more open-ended approach and see what the data itself told us. And, of course, that's precisely what machine learning is good for.

In this particular case, it was a matter of working with a large amount of data that you can't filter through yourself without introducing a lot of bias. And that could be just you choosing to focus on the wrong issues. Machine learning and the model that we used are a good way to reduce the amount of information you're looking at without bias.

Basically it's a way of reducing high-dimensional data sets to the most important factors in the data set. So it goes through a couple steps. It first groups all the features of the data into these modules so that the features within a module are very correlated with each other, but there is not much correlation between modules. Then, since each module represents the same type of features, it reduces how many features are in each module. And then at the very end, it combines all the modules together and then takes one last pass to see if it can be reduced by anything else.
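A simplified sketch of the three steps Dey describes might look like the following (the paper itself uses the fuzzy forests algorithm; this Python version only mirrors the group, screen-within-modules, then final-pass structure, and every parameter choice is illustrative).

```python
# Rough sketch of module-based feature screening: (1) group correlated
# features into modules, (2) keep the most important features within each
# module, (3) make one final pass over the combined survivors.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.ensemble import RandomForestClassifier

def select_features(X, y, n_modules=10, keep_per_module=5, final_keep=10):
    # Step 1: cluster features into modules by correlation (distance = 1 - |corr|).
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)
    np.fill_diagonal(dist, 0.0)
    labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                      t=n_modules, criterion="maxclust")

    # Step 2: within each module, keep the top features by forest importance.
    survivors = []
    for m in np.unique(labels):
        idx = np.where(labels == m)[0]
        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, idx], y)
        top = idx[np.argsort(rf.feature_importances_)[::-1][:keep_per_module]]
        survivors.extend(top)

    # Step 3: one last pass over the combined survivors.
    survivors = np.array(survivors)
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X[:, survivors], y)
    order = np.argsort(rf.feature_importances_)[::-1][:final_keep]
    return survivors[order]
```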

Mike: This technique was developed by Christina Ramirez (MS '96, PhD '99), a PhD graduate of our program now at UCLA. Christina is someone who I've collaborated with quite a bit. Sreemanti and I were meeting pretty regularly with Christina and getting some advice from her along the way about this project and some others that we're thinking about.

Sreemanti: I think we got pretty much what we expected, except for what the most partisan-coded issues are. Those I found a little bit surprising. The most partisan questions turned out to be about filling the Supreme Court seats. I thought that it was interesting.

Sreemanti: It's really incredible. I find it astonishing that a person like Professor Alvarez has the time to focus so much on the undergraduates in lab. I did research in high school, and it was an extremely competitive environment trying to get attention from professors or even your mentor.

It's a really nice feature of Caltech that professors are very involved with what their undergraduates are doing. I would say it's a really incredible opportunity.

Mike: I and most of my colleagues work really hard to involve the Caltech undergraduates in a lot of the research that we do. A lot of that happens in the SURF [Summer Undergraduate Research Fellowship] program in the summers. But it also happens throughout the course of the academic year.

What's unusual a little bit here is that undergraduate students typically take on smaller projects. They typically work on things for a quarter or a summer. And while they do a good job on them, they don't usually reach the point where they produce something that's potentially publication quality.

Sreemanti started this at the beginning of her freshman year and we worked on it through her entire freshman year. That gave her the opportunity to really learn the tools, read the political science literature, read the machine learning literature, and take this to a point where at the end of the year, she had produced something that was of publication quality.

Sreemanti: It was a little bit strange, first of all, because of the time zone issue. This conference was in a completely different time zone, so I ended up waking up at 4 a.m. for it. And then I had an audio glitch halfway through that I had to fix, so I had some very typical Zoom-era problems and all that.

Mike: This is a pandemic-era story with how we were all working to cope and trying to maintain the educational experience that we want our undergraduates to have. We were all trying to make sure that they had the experience that they deserved as a Caltech undergraduate and trying to make sure they made it through the freshman year.

We have the most amazing students imaginable, and to be able to help them understand what the research experience is like is just an amazing opportunity. Working with students like Sreemanti is the sort of thing that makes being a Caltech faculty member very special. And it's a large part of the reason why people like myself like to be professors at Caltech.

Sreemanti: I think I would want to continue studying how people make their choices about candidates but maybe in a slightly different way with different data sets. Right now, from my other projects, I think I'm learning how to not rely on surveys and rely on more organic data, for example, from social media. I would be interested in trying to find a way to study people's candidate choice from their more organic interactions with other people.

Sreemanti's paper, "Fuzzy Forests for Feature Selection in High-Dimensional Survey Data: An Application to the 2020 U.S. Presidential Election," was presented in December at the fourth annual International Conference on Applied Machine Learning and Data Analytics, where it won the best paper award.

Originally posted here:
Politics, Machine Learning, and Zoom Conferences in a Pandemic: A Conversation with an Undergraduate Researcher - Caltech

Mperativ Adds New Vice President of Applied Data Science, Machine Learning and AI to Advance Vision for AI in Revenue Marketing – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Mperativ, the Revenue Marketing Platform that aligns marketing with sales, customer success, and finance on the cause and effect relationships between marketing activities and revenue outcomes, today announced the appointment of Nohyun Myung as Vice President of Applied Data Science, Machine Learning and AI. In this new role, Nohyun will lead the development of new Mperativ platform capabilities to help marketers realize the value of AI predictions and seamlessly connect data across the customer journey without having to build a data science practice.

Nohyun has unique and important experience in data science, analytics and AI that will be critical to the growth of the Mperativ Data Science and AI practices, said Jim McHugh, CEO and co-founder of Mperativ. He not only brings the knowledge and skill set to help accelerate the evolution of the Mperativ platform, but his involvement in the technical side of sales organizations will give us a unique perspective on how AI and forecasting can be used to help address the challenges go-to-market teams face.

Nohyun brings over 20 years of experience as a data and analytics practitioner. Prior to Mperativ, he built and scaled high-functioning, multi-disciplinary teams in his roles as Vice President of Global Solution Engineering & Customer Success at OmniSci and as Vice President of Global Solution Engineering at Kinetica. He has worked closely with industry leaders across the Telco, Utilities, Automotive and Government verticals to deliver enterprise-grade AI and advanced analytics capabilities to their data practices, with work ranging from autonomous vehicle deployments to telecommunications network optimization and uncovering anomalies from object-detected features of satellite imagery. Nohyun's prior experience has led to the advancement of enterprise-class AI capabilities spanning autonomous vehicles, automated object detection from optical imagery, and global-scale smart infrastructure initiatives across various industries.

Throughout my career I've become acutely familiar with the immense challenges that go-to-market teams face when trying to get a comprehensive and accurate picture of the customer journey, said Nohyun. As the world sprints towards becoming more prescriptive and predictive, having operational tools and platforms that can augment the business without having to build them in-house will become essential across B2B organizations. I look forward to working with the talented team at Mperativ to bring the true value of AI to marketing leaders so they can better execute engagement strategies that produce their desired revenue outcomes.

About Mperativ

Mperativ provides the first strategic platform to align marketing with sales, customer success, and finance on the cause and effect relationships between marketing activities and revenue outcomes. Despite pouring significant effort into custom analytics, marketers are struggling to convey the value of their initiatives. By recentering marketing metrics around revenue, Mperativ makes it possible to uncover data narratives and extract trends across the entire customer journey, with beautifully-designed interactive visualizations that demonstrate the effectiveness of marketing in a new revenue-centric language. As a serverless data warehouse, Mperativ eliminates the complexity of surfacing compelling marketing insights. Connect marketing strategy to revenue results with Mperativ. To learn more, visit us at http://www.mperativ.io or contact us at info@mperativ.io.

Read this article:
Mperativ Adds New Vice President of Applied Data Science, Machine Learning and AI to Advance Vision for AI in Revenue Marketing - Business Wire

Five Machine Learning Project Pitfalls to Avoid in 2022 – EnterpriseTalk

Machine Learning (ML) systems are complex, and this complexity increases the chances of failure as well. Knowing what may go wrong is critical for developing robust machine learning systems.

Machine Learning (ML) initiatives fail 85% of the time, according to Gartner. Worse yet, according to the research firm, this tendency will continue until the end of 2022.

There are a number of foreseeable reasons why machine learning initiatives fail, many of which may be avoided with the right knowledge and diligence. Here are some of the most common challenges that machine learning projects face, as well as ways to prevent them.

All AI/ML endeavors require data, which is needed for testing, training, and operating models. However, acquiring such data is a stumbling block because most organizational data is dispersed among on-premises and cloud data repositories, each with its own set of compliance and quality control standards, making data consolidation and analysis that much more complex.

Another stumbling block is data silos. When teams use multiple systems to store and handle data sets, data silos (collections of data controlled by one team but not completely available to others) can form. That might, however, also be a result of a siloed organizational structure.

In reality, no one knows everything. It is critical to have at least one ML expert on the team who can do the foundational work for the successful adoption and implementation of ML in enterprise projects. Being overly confident without the right skill sets on the team will only add to the chances of failure.

Organizations are nearly drowning in large volumes of observational data, thanks to developments in technology such as integrated smart devices and telematics, relatively inexpensive and readily available big data storage, and a desire to incorporate more data science into business decisions. However, a high level of data availability can result in observational data dumpster diving.


When adopting a powerful tool like machine learning, it pays to be clear about what the organization is searching for. Businesses should take advantage of their large observational data resources to uncover potentially valuable insights, but evaluate those hypotheses through A/B or multivariate testing to distinguish reality from fiction.

The ability to evaluate the overall performance of a trained model is crucial in machine learning. It's critical to assess how well the model performs on both training and test data. These results are used to choose the model, tune the hyperparameters, and decide whether the model is ready for production use.

It is vital to select the right assessment measures for the job at hand when evaluating model performance.
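A minimal sketch of that practice, assuming a generic scikit-learn workflow: compare training and held-out scores and report metrics suited to the task, since accuracy alone can look deceptively good on imbalanced data. The dataset and model choices here are illustrative.

```python
# Minimal sketch of the evaluation practice described above: compare
# performance on training vs. held-out data and use task-appropriate metrics.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced toy data: 90% negative, 10% positive.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

for name, Xs, ys in [("train", X_tr, y_tr), ("test", X_te, y_te)]:
    proba = model.predict_proba(Xs)[:, 1]
    pred = model.predict(Xs)
    print(f"{name}: accuracy={accuracy_score(ys, pred):.3f} "
          f"f1={f1_score(ys, pred):.3f} auc={roc_auc_score(ys, proba):.3f}")
# A large train/test gap signals overfitting; on a 90/10 class split,
# high accuracy with low F1 signals a weak minority-class model.
```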

Machine learning has become more accessible in various ways. There are far more machine learning tools available today than there were even a few years ago, and data science knowledge has multiplied.

Having a data science team work on an AI and ML project in isolation, on the other hand, might drive the organization down the most difficult path to success. The team may come across unanticipated difficulties unless it has prior familiarity with them. Unfortunately, it can also get into the thick of a project before recognizing it is not adequately prepared.

It's imperative to make sure that domain specialists like process engineers and plant operators are not left out of the process, because they are familiar with its complexity and the context of the relevant data.


Originally posted here:
Five Machine Learning Project Pitfalls to Avoid in 2022 - EnterpriseTalk

VelocityEHS Industrial Ergonomics Solution Harnesses AI and Machine Learning to Drive … – KULR-TV

CHICAGO, April 26, 2022 (GLOBE NEWSWIRE) -- VelocityEHS, the global leader in cloud-based environmental, health, safety (EHS) and environmental, social, and corporate governance (ESG) software, announced the latest additions to the Accelerate Platform, including a highly anticipated new feature, Active Causes & Controls, for its award-winning Industrial Ergonomics Solution. Rooted in ActiveEHS, the proprietary VelocityEHS methodology that leverages AI and machine learning to help non-experts produce expert-level results, this enhancement kicks off a new era in the prevention of musculoskeletal disorders (MSDs).

Designed, engineered, and embedded with expertise by an unmatched group of board-certified ergonomists, the ActiveEHS-powered Active Causes and Controls feature helps companies reduce training time, maintain process consistency across locations, and focus on implementing changes that maximize business results. Starting with the industry's best sensorless, motion-capture technology, which performs ergonomics assessments faster, easier, and more accurately than any human could, the solution then guides users through suggested root causes and job improvement controls. Recommendations are based on AI and machine learning insights fed by data collected from hundreds of global enterprise customers and millions of MSD risk data points.

The result is an unparalleled opportunity to prevent MSD risk, reduce overall injury costs, drive productivity, and provide employees with quality-of-life changing improvements in the workplace.

These are exciting times for anyone who cares about EHS and ESG, said John Damgaard, CEO of VelocityEHS. While it's true the job of a C-suite executive or EHS professional has never been more challenging and complex, it's also true that leaders have never had this kind of advanced, highly usable, and easy-to-deploy technology at their fingertips. Ergonomics is just the start; ActiveEHS will transform how we think about health, safety, and sustainability going forward. It is the key to evolving from a reactive documentation and compliance mindset to a proactive continuous improvement cycle of prediction, intervention, and outcomes.

MSDs are a major burden on workers and a huge cost to employers. According to the Bureau of Labor Statistics, for employers in the U.S. private sector alone, MSDs cause more than 300,000 days away from work and, per OSHA, are responsible for $20 billion every year in workers' compensation claims.

Also Announced Today: New Training & Learning Content, Enhancements to Automated Utility Data Management, and Improved workflows for the Control of Work Solution.

The VelocityEHS Safety Solution, which includes robust Training & Learning capabilities, is undergoing a major expansion of its online training content library. To enable companies to meet more of their training responsibilities, the training content library is growing from approximately 100 courses to over 750. They will be available in multiple languages, including 300+ courses in Spanish. The new content will feature microlearning modules, which have gained popularity in recent years as workers prefer shorter, easily digestible training sessions. This results in less time in front of the screen for workers, while employers report better engagement and overall retention of the material.

The VelocityEHS Climate Solution continues to capitalize on the VelocityEHS partnership with Urjanet, the engine behind the recently announced Automated Utility Data Management capabilities. Now, in addition to saving time and reducing costs related to the collection of utility data, users can automatically port their energy, gas and water usage data into the VelocityEHS Climate Solution to perform GHG calculations and report on Scope 1, 2, and 3 emissions, without any manual effort.

The Company's Control of Work Solution boasts new streamlined navigation and enhanced functionality that allows customers to add new, pre-approved roles for improved compliance and approval workflows.

Industrial Ergonomics, Safety, Climate, and Control of Work solutions are all part of the VelocityEHS Accelerate Platform, which delivers best-in-class performance in the areas of health, safety, risk, ESG, and operational excellence. Backed by the largest global software community of EHS experts and thought leaders, the software drives expert processes so every team member can produce outstanding results.

For more information about VelocityEHS and its complete offering of award-winning software solutions, visit http://www.EHS.com.

About VelocityEHS: Trusted by more than 19,000 customers worldwide, VelocityEHS is the global leader in true SaaS enterprise EHS technology. Through the VelocityEHS Accelerate Platform, the company helps global enterprises drive operational excellence by delivering best-in-class capabilities for health, safety, environmental compliance, training, operational risk, and environmental, social, and corporate governance (ESG). The VelocityEHS team includes unparalleled industry expertise, with more certified experts in health, safety, industrial hygiene, ergonomics, sustainability, the environment, AI, and machine learning than any EHS software provider. Recognized by the EHS industry's top independent analysts as a Leader in the Verdantix 2021 Green Quadrant Analysis, VelocityEHS is committed to industry thought leadership and to accelerating the pace of innovation through its software solutions and vision.

VelocityEHS is headquartered in Chicago, Illinois, with locations in Ann Arbor, Michigan; Tampa, Florida; Oakville, Ontario; London, England; Perth, Western Australia; and Cork, Ireland. For more information, visit http://www.EHS.com.

Media Contact: Brad Harbaugh, 312.881.2855, bharbaugh@ehs.com

Continue reading here:
VelocityEHS Industrial Ergonomics Solution Harnesses AI and Machine Learning to Drive ... - KULR-TV

Gimme named top machine learning company in Georgia – Vending Market Watch

Gimme, whose technology helps foodservice and grocery store delivery operators automate merchandising, announced it was named a top machine learning company in Georgia by Data Magazine. The rankings were based on four categories: innovation, growth, management and societal impact. The magazine showcased its top picks for the best Georgia-based machine learning companies, noting these startups and companies are taking a variety of approaches to innovating the machine learning industry, but are all exceptional companies well worth a follow.

Gimme has been dedicated to investing in and developing our machine learning and AI infrastructure, so to be recognized for this innovation is exciting, said Cory Hewett, co-founder and CEO of Gimme. Our plans for 2022 include continued acceleration of our AI progress with tools like vendor receipt import from pictures, stock-out detection from visit photos, and AI schedule suggestions. These new tools, along with others, will expand our use of AI across our platform, increasing the speed of our data handling.

Gimme's technology provides management for operators of grocery, convenience, vending machines, micro markets and office coffee. Gimme's use of artificial intelligence, computer vision and machine learning technologies impacts not only its own products and services but also how the unattended retail industry operates. The technology provides machine status data to help operators focus on cash accountability and inventory tracking to reduce stockouts, accelerate warehousing and restocking, and streamline product planning. The company's hardware product, the Gimme Key, is now the #1 wireless DEX adapter for direct store delivery, using Bluetooth Low Energy technology and replacing previous outdated legacy handhelds.

To learn more about Gimme's management platform, visit http://www.vms.ai or, for the grocery delivery platform, http://www.dsd.ai.

Read the original here:
Gimme named top machine learning company in Georgia - Vending Market Watch

Control Risks Taps Reveal-Brainspace to Bolster its Suite of Analytics, AI and Machine Learning Capabilities – GlobeNewswire

London, Chicago, April 26, 2022 (GLOBE NEWSWIRE) -- Control Risks, the specialist risk consultancy, today announced it is expanding its technology offering with Reveal, the global provider of the leading AI-powered eDiscovery and investigations platform. Reveal uses adaptive AI, behavioral analysis, and pre-trained AI model libraries to help uncover connections and patterns buried in large volumes of unstructured data.

Corporate legal and compliance teams, and their outside counsel, are looking to technology to better understand data, reduce risks and costs, and extract key insights faster across an ever-increasing volume and variety of data. We look forward to leveraging Reveal's data visualization, AI and machine learning functionality to drive innovation with our clients, said Brad Kolacinski, Partner, Control Risks.

Control Risks will leverage the platform globally to unlock intelligence that will help clients mitigate risks across a range of areas including litigation, investigations, compliance, ethics, fraud, human resources, privacy and security.

We work with clients and their counsel on large, complex, cross-border forensics and investigations engagements. It is no secret that AI, ML and analytics are now required tools in matters where we need to sift through enormous quantities of data and deliver insights to clients efficiently, says Torsten Duwenhorst, Partner, Control Risks. Offering the full range of Reveal's capabilities globally will benefit our clients enormously.

As we continue to expand the depth and breadth of Reveal's marketplace offerings, we are excited to partner with Control Risks, a demonstrated leader in security, compliance and organizational resilience offerings that are more critical now than ever, said Wendell Jisa, Reveal's CEO. By taking full advantage of Reveal's powerful platform, Control Risks now has access to the industry's leading SaaS-based, AI-powered technology stack, helping them and their clients solve their most complex problems with greater intelligence.

For more information about Reveal-Brainspace and its AI platform for legal, enterprise and government organizations, visit http://www.revealdata.com.

###

About Control Risks

Control Risks is a specialist global risk consultancy that helps to create secure, compliant and resilient organizations in an age of ever-changing risk. Working across disciplines, technologies and geographies, everything we do is based on our belief that taking risks is essential to our clients' success. We provide our clients with the insight to focus resources and ensure they are prepared to resolve the issues and crises that occur in any ambitious global organization. We go beyond problem-solving and provide the insights and intelligence needed to realize opportunities and grow. Control Risks will initially provide Reveal-Brainspace in the US, Europe and Asia Pacific. Visit us online at http://www.controlrisks.com.

About Reveal

Reveal, with Brainspace technology, is a global provider of the leading AI-powered eDiscovery platform. Fueled by powerful AI technology and backed by the most experienced team of data scientists in the industry, Reveal's cloud-based software offers a full suite of eDiscovery solutions on one seamless platform. Users of Reveal include law firms, Fortune 500 corporations, legal service providers, government agencies and financial institutions in more than 40 countries across five continents. Featuring deployment options in the cloud or on-premise, an intuitive user design and multilingual user interfaces, Reveal is modernizing the practice of law, saving users time and money and offering them a competitive advantage. For more information, visit http://www.revealdata.com.

Continued here:
Control Risks Taps Reveal-Brainspace to Bolster its Suite of Analytics, AI and Machine Learning Capabilities - GlobeNewswire

Machine learning hiring levels in the ship industry rose in March 2022 – Ship Technology

The proportion of ship equipment supply, product and services companies hiring for machine learning related positions rose in March 2022 compared with the equivalent month last year, with 20.6% of the companies included in our analysis recruiting for at least one such position.

This latest figure was higher than the 16.2% of companies who were hiring for machine learning related jobs a year ago but a decrease compared to the figure of 22.6% in February 2022.

In terms of the share of all job openings linked to machine learning, related postings dropped in March 2022, with 0.4% of newly posted job advertisements being linked to the topic.

This latest figure was a decrease compared to the 0.5% of newly advertised jobs that were linked to machine learning in the equivalent month a year ago.

Machine learning is one of the topics that GlobalData, from whom our data for this article is taken, have identified as being a key disruptive force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Our analysis of the data shows that ship equipment supply, product and services companies are currently hiring for machine learning jobs at a rate lower than the average for all companies within GlobalData's job analytics database. The average among all companies stood at 1.3% in March 2022.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.


Go here to see the original:
Machine learning hiring levels in the ship industry rose in March 2022 - Ship Technology

Deep Science: AI simulates economies and predicts which startups receive funding – TechCrunch

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column aims to collect some of the most relevant recent discoveries and papers particularly in, but not limited to, artificial intelligence and explain why they matter.

This week in AI, scientists conducted a fascinating experiment to predict how market-driven platforms like food delivery and ride-hailing businesses affect the overall economy when they're optimized for different objectives, like maximizing revenue. Elsewhere, demonstrating the versatility of AI, a team hailing from ETH Zurich developed a system that can read tree heights from satellite images, while a separate group of researchers tested a system to predict a startup's success from public web data.

The market-driven platform work builds on Salesforce's AI Economist, an open source research environment for understanding how AI could improve economic policy. In fact, some of the researchers behind the AI Economist were involved in the new work, which was detailed in a study originally published in March.

As the co-authors explained to TechCrunch via email, the goal was to investigate two-sided marketplaces like Amazon, DoorDash, Uber and TaskRabbit that enjoy larger market power due to surging demand and supply. Using reinforcement learning, a type of AI system that learns to solve a multi-level problem by trial and error, the researchers trained a system to understand the impact of interactions between platforms (e.g. Lyft) and consumers (e.g. riders).

Image Credit: Xintong Wang et al.

"We use reinforcement learning to reason about how a platform would operate under different design objectives ... [Our] simulator enables evaluating reinforcement learning policies in diverse settings under different objectives and model assumptions," the co-authors told TechCrunch via email. "We explored a total of 15 different market settings, i.e., a combination of market structure, buyer knowledge about sellers, [economic] shock intensity and design objective."

Using their AI system, the researchers arrived at the conclusion that a platform designed to maximize revenue tends to raise fees and extract more profits from buyers and sellers during economic shocks, at the expense of social welfare. When platform fees are fixed (e.g. due to regulation), they found a platform's revenue-maximizing incentive generally aligns with the welfare considerations of the overall economy.
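The authors' simulator is not reproduced here, but a purely illustrative sketch of the general setup, a platform agent that learns a fee policy under either a revenue or a welfare objective across normal and shock states, might look like the following. The fee grid, demand response and welfare proxy are all invented for illustration and are not taken from the study.

# Hypothetical toy sketch (not the authors' simulator): a platform agent
# chooses a fee level with a tabular, bandit-style update; simple demand
# elasticities stand in for buyers and sellers. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

FEES = np.array([0.05, 0.10, 0.20, 0.30])   # candidate platform take rates
SHOCKS = [0, 1]                              # 0 = normal demand, 1 = economic shock

def step(fee, shock):
    """Toy market response: higher fees and shocks reduce completed trades."""
    base_demand = 100 * (0.5 if shock else 1.0)
    trades = base_demand * max(0.0, 1.0 - 2.0 * fee) + rng.normal(0, 2)
    revenue = fee * trades                    # the platform's take
    welfare = trades                          # crude stand-in for buyer/seller surplus
    return revenue, welfare

def train(objective="revenue", episodes=5000, eps=0.1, lr=0.1):
    q = np.zeros((len(SHOCKS), len(FEES)))    # value table: state = shock, action = fee
    for _ in range(episodes):
        s = rng.integers(len(SHOCKS))
        a = rng.integers(len(FEES)) if rng.random() < eps else int(q[s].argmax())
        revenue, welfare = step(FEES[a], SHOCKS[s])
        reward = revenue if objective == "revenue" else welfare
        q[s, a] += lr * (reward - q[s, a])    # one-step update toward observed reward
    return {shock: float(FEES[int(q[shock].argmax())]) for shock in SHOCKS}

print("revenue-maximizing fees by shock state:", train("revenue"))
print("welfare-maximizing fees by shock state:", train("welfare"))

Even in this toy version, the revenue objective pushes the learned fee higher than the welfare objective does, which is the qualitative pattern the study describes.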

The findings might not be Earth-shattering, but the co-authors believe the system, which they plan to open source, could provide a foundation for either a business or a policymaker to analyze a platform economy under different conditions, designs and regulatory considerations. "We adopt reinforcement learning as a methodology to describe strategic operations of platform businesses that optimize their pricing and matching in response to changes in the environment, either the economic shock or some regulation," they added. "This may give new insights about platform economies that go beyond this work or those that can be generated analytically."

Turning our attention from platform businesses to the venture capital that fuels them, researchers hailing from Skopai, a startup that uses AI to characterize companies based on criteria like technology, market and finances, claim to be able to predict the ability of a startup to attract investments using publicly available data. Relying on data from startup websites, social media, and company registries, the co-authors say that they can obtain prediction results comparable to those that also make use of structured data available in private databases.
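Skopai's actual features and model are not described in detail here, so the following is only a hypothetical sketch of the general approach: train a standard classifier on publicly observable signals to score the likelihood of a startup attracting investment. Every feature name and data point below is invented for illustration.

# Illustrative only: a simple classifier over hypothetical, publicly
# observable startup features (not Skopai's actual model or data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000

# Invented feature set loosely inspired by "website / social media / registry" signals.
X = np.column_stack([
    rng.poisson(5, n),          # e.g. press mentions
    rng.poisson(200, n),        # e.g. social media followers
    rng.integers(0, 15, n),     # e.g. company age in years from a registry
    rng.integers(1, 50, n),     # e.g. headcount estimate
])
# Synthetic label: "funded" when a noisy combination of the signals is high.
y = ((0.3 * X[:, 0] + 0.002 * X[:, 1] + 0.1 * X[:, 3] + rng.normal(0, 1, n)) > 4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("ROC AUC on synthetic data:",
      round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))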

Image Credits: Mariia Garkavenko et al.

Applying AI to due diligence is nothing new. Correlation Ventures, EQT Ventures and SignalFire are among the firms currently using algorithms to inform their investments. Gartner predicts that 75% of VCs will use AI to make investment decisions by 2025, up from less than 5% today. But while some see the value in the technology, dangers lurk beneath the surface. In 2020, Harvard Business Review (HBR) found that an investment algorithm outperformed novice investors but exhibited biases, for example frequently selecting white and male entrepreneurs. HBR noted that this reflects the real world, highlighting AI's tendency to amplify existing prejudices.

In more encouraging news, scientists at MIT, alongside researchers at Cornell and Microsoft, claim to have developed a computer vision algorithm, STEGO, that can label images down to the individual pixel. While this might not sound significant, it's a vast improvement over the conventional method of teaching an algorithm to spot and classify objects in pictures and videos.

Traditionally, computer vision algorithms learn to recognize objects (e.g. trees, cars, tumors, etc.) by being shown many examples of the objects that have been labeled by humans. STEGO does away with this time-consuming, labor-intensive workflow by instead applying a class label to each pixel in the image. The system isn't perfect; it sometimes confuses grits with pasta, for example, but STEGO can successfully segment out things like roads, people and street signs, the researchers say.
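STEGO's actual training recipe is more involved than can be shown here, but the output format, an unsupervised class label for every pixel, can be illustrated with a bare-bones clustering sketch. The per-pixel features and toy image below are stand-ins, not the paper's method.

# Not STEGO itself: a minimal illustration of assigning an unsupervised
# "class" label to every pixel by clustering simple per-pixel features.
import numpy as np
from sklearn.cluster import KMeans

def segment_pixels(image: np.ndarray, n_classes: int = 4) -> np.ndarray:
    """image: HxWx3 float array in [0, 1]; returns an HxW array of cluster labels."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Per-pixel features: color plus (scaled) position, flattened to (H*W, 5).
    feats = np.concatenate(
        [image.reshape(-1, 3), (ys / h).reshape(-1, 1), (xs / w).reshape(-1, 1)],
        axis=1,
    )
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(h, w)

# Tiny synthetic "image": a bright square on a dark background.
img = np.zeros((32, 32, 3))
img[8:24, 8:24] = [0.9, 0.9, 0.2]
print(np.unique(segment_pixels(img, n_classes=2), return_counts=True))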

On the topic of object recognition, it appears we're approaching the day when academic work like DALL-E 2, OpenAI's image-generating system, becomes productized. New research out of Columbia University shows a system called Opal that's designed to create featured images for news stories from text descriptions, guiding users through the process with visual prompts.

Image Credits: Vivian Liu et al.

When they tested it with a group of users, the researchers said that those who tried Opal were more efficient at creating featured images for articles, producing over two times more usable results than users without it. It's not difficult to imagine a tool like Opal eventually making its way into content management systems like WordPress, perhaps as a plugin or extension.

"Given an article text, Opal guides users through a structured search for visual concepts and provides pipelines allowing users to illustrate based on an article's tone, subjects and intended illustration style," the co-authors wrote. "[Opal] generates diverse sets of editorial illustrations, graphic assets and concept ideas."

See more here:
Deep Science: AI simulates economies and predicts which startups receive funding - TechCrunch

Machine Learning as a Service Market-Industry Analysis with Growth Prospects, Trends, Size, Supply, Share, Pipeline Projects and Survey till 2030 …

United States: Machine learning is a process of data analysis that uses statistical methods to derive a desired predictive output without explicit programming. It is designed to incorporate the functionalities of artificial intelligence (AI) and cognitive computing, involves a series of algorithms, and is used to understand the relationships between datasets to obtain a desired output. Machine learning as a service (MLaaS) incorporates a range of services that offer machine learning tools through cloud computing.
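As a minimal illustration of that "no explicit programming" point, the sketch below fits a relationship from example data rather than hand-coding a rule; the numbers are invented.

# Minimal illustration: instead of hand-coding a pricing rule,
# fit one from example data (values are made up).
from sklearn.linear_model import LinearRegression

square_meters = [[30], [55], [80], [120]]      # input feature
rent = [600, 950, 1300, 1900]                  # observed outputs
model = LinearRegression().fit(square_meters, rent)
print(model.predict([[100]]))                  # learned relationship, not a coded rule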

The global machine learning as a service market was valued at $571 million in 2016, and is projected to reach $5,537 million by 2023, growing at a CAGR of 39.0% from 2017 to 2023.
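As a quick arithmetic sanity check of the quoted figures (rounding in the source likely accounts for the small difference):

# Implied compound annual growth rate from the stated start and end values.
start_value, end_value = 571.0, 5537.0   # USD millions, 2016 and 2023
years = 2023 - 2016                       # seven compounding periods, 2017-2023
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")        # roughly 38.3%, close to the quoted 39.0%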

Request To Download Sample of This Strategic Report: https://reportocean.com/industry-verticals/sample-request?report_id=31083

Market Statistics:

The report offers market sizing and forecasts in five primary currencies: USD, EUR, GBP, JPY, and AUD. This helps company leaders make better decisions when currency exchange data is readily available. In this report, the years 2020 and 2021 are regarded as historical years, 2020 as the base year, 2021 as the estimated year, and the years from 2022 to 2030 as the forecast period.

Increased penetration of cloud-based solutions, growth of the artificial intelligence and cognitive computing market, and rising demand for prediction solutions drive market growth. In addition, growth in IT expenditure in emerging nations and technological advancements for workflow optimization fuel demand for advanced analytical systems, driving machine learning as a service market growth. However, a dearth of trained professionals is expected to impede machine learning as a service market growth. Furthermore, expanding application areas and the growth of IoT are expected to create lucrative opportunities for the market.

The global machine learning as a service market is segmented based on component, organization size, end-use industry, application, and geography. The component segment is bifurcated into software and services. Based on organization size, it is divided into large enterprises and small & medium enterprises. The application segment is categorized into marketing & advertising, fraud detection & risk management, predictive analytics, augmented & virtual reality, natural language processing, computer vision, security & surveillance, and others. On the basis of end-use industry, it is classified into aerospace & defense, IT & telecom, energy & utilities, public sector, manufacturing, BFSI, healthcare, retail, and others. By geography, the machine learning as a service market is analyzed across North America, Europe, Asia-Pacific, and LAMEA.

Key players that operate in the machine learning as a service market are Google Inc., SAS Institute Inc., FICO, Hewlett Packard Enterprise, Yottamine Analytics, Amazon Web Services, BigML, Inc., Microsoft Corporation, Predictron Labs Ltd., and IBM Corporation.


KEY BENEFITS FOR STAKEHOLDERS

This report provides an overview of the trends, structure, drivers, challenges, and opportunities in the global machine learning as a service market. Porter's Five Forces analysis highlights the potential of buyers & suppliers and provides insights on the competitive structure of the market to determine the investment pockets. Current and future trends adopted by the key market players are highlighted to determine overall competitiveness. The quantitative analysis of machine learning as a service market growth from 2017 to 2023 is provided to elaborate the market potential.

According to Statista, as of 2021 data, the United States held over ~36% of the global market share for information and communication technology (ICT). With a market share of 16%, the EU ranked second, followed by China in third with 12%. In addition, according to forecasts, the ICT market will reach more than US$6 trillion in 2021 and almost US$7 trillion by 2027. In today's society, this continuous growth is another reminder of how ubiquitous and crucial technology has become. Over the next few years, traditional tech spending will be driven mainly by big data and analytics, mobile, social, and cloud computing.

This report analyses the global primary production, consumption, and fastest-growing countries in the Information and Communications Technology (ICT) market. Also included in the report are the prominent players in the global ICT market.

A release on June 8th, 2021, by the Bureau of Economic Analysis and the U.S. Census Bureau reported on the recovery of the U.S. market. The release also described the recovery of U.S. international trade in July 2021. In April 2021, exports from the country reached $300 billion, an increase of $13.4 billion. In April 2021, imports amounted to $294.5 billion, increasing by $17.4 billion. COVID-19 is still a significant issue for economies around the globe, as evidenced by the year-over-year decline in U.S. exports between April 2020 and April 2021 and the increase in imports over the same period. The market is clearly trying to recover. Even so, this will have a direct impact on the Healthcare/ICT/Chemical industries.

Key Market Segments

By Component

Software
Services

By Organization Size

Large Enterprises
Small & Medium Enterprises

By End-Use Industry

Aerospace & Defence
IT & Telecom
Energy & Utilities
Public Sector
Manufacturing
BFSI
Healthcare
Retail
Others

By Application

Marketing & Advertising
Fraud Detection & Risk Management
Predictive Analytics
Augmented & Virtual Reality
Natural Language Processing
Computer Vision
Security & Surveillance
Others


By Geography

North America (U.S., Canada, Mexico)
Europe (UK, France, Germany, Rest of Europe)
Asia-Pacific (China, Japan, India, Rest of Asia-Pacific)
LAMEA (Latin America, Middle East, Africa)

Key players profiled in the report

Google Inc.
SAS Institute Inc.
FICO
Hewlett Packard Enterprise
Yottamine Analytics
Amazon Web Services
BigML, Inc.
Microsoft Corporation
Predictron Labs Ltd.
IBM Corporation


What is the goal of the report?

Key Questions Answered in the Market Report

How did the COVID-19 pandemic impact the adoption of by various pharmaceutical and life sciences companies?
What is the outlook for the impact market during the forecast period 2021-2030?
What are the key trends influencing the impact market? How will they influence the market in short-, mid-, and long-term duration?
What is the end user perception toward?
How is the patent landscape for pharmaceutical quality? Which country/cluster witnessed the highest patent filing from January 2014-June 2021?
What are the key factors impacting the impact market? What will be their impact in short-, mid-, and long-term duration?
What are the key opportunity areas in the impact market? What is their potential in short-, mid-, and long-term duration?
What are the key strategies adopted by companies in the impact market?
What are the key application areas of the impact market? Which application is expected to hold the highest growth potential during the forecast period 2021-2030?
What is the preferred deployment model for the impact? What is the growth potential of various deployment models present in the market?
Who are the key end users of pharmaceutical quality? What is their respective share in the impact market?
Which regional market is expected to hold the highest growth potential in the impact market during the forecast period 2021-2030?
Which are the key players in the impact market?


About Report Ocean: We are the best market research reports provider in the industry. Report Ocean believes in providing quality reports to clients to meet top-line and bottom-line goals, which will boost your market share in today's competitive environment. Report Ocean is a one-stop solution for individuals, organizations, and industries that are looking for innovative market research reports.

Get in Touch with Us:
Report Ocean
Email: sales@reportocean.com
Address: 500 N Michigan Ave, Suite 600, Chicago, Illinois 60611, UNITED STATES
Tel: +1 888 212 3539 (US TOLL FREE)
Website: https://www.reportocean.com

Read more:
Machine Learning as a Service Market-Industry Analysis with Growth Prospects, Trends, Size, Supply, Share, Pipeline Projects and Survey till 2030 ...