Bringing AI and machine learning to the edge with matter-ready platform – Electropages

28-01-2022 | Silicon Laboratories Inc | Semiconductors

Silicon Labs offers the BG24 and MG24 families of 2.4 GHz wireless SoCs for Bluetooth and multiprotocol operation, along with a new software toolkit. This co-optimised hardware and software platform helps bring AI/ML applications and high-performance wireless to battery-powered edge devices. Matter-ready, the ultra-low-power families support multiple wireless protocols and include PSA Level 3 Secure Vault protection, making them well suited to diverse smart home, medical and industrial applications.

The company's solutions comprise two new families of 2.4 GHz wireless SoCs providing the industry's first integrated AI/ML accelerators; support for Matter, OpenThread, Zigbee, Bluetooth Low Energy, Bluetooth mesh, proprietary and multi-protocol operation; the highest level of industry security certification; ultra-low-power capabilities; and the largest memory and flash capacity in the company's portfolio. Also offered is a new software toolkit designed to let developers quickly build and deploy AI and machine learning algorithms using popular tool suites such as TensorFlow.
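Silicon Labs' toolkit itself isn't shown in the article, but the TensorFlow workflow it plugs into typically looks like the minimal sketch below: train a small model, then convert it to a compact TensorFlow Lite flatbuffer that embedded toolchains can flash to a low-power SoC. The model shape, data and file name are placeholders, not Silicon Labs' actual tooling.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in data: 256 samples of 64 features each, 4 output classes
# (e.g., a tiny keyword-spotting or sensor-classification task).
x_train = np.random.rand(256, 64).astype("float32")
y_train = np.random.randint(0, 4, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=3, verbose=0)

# Convert to TensorFlow Lite with default size/latency optimizations,
# producing a flatbuffer small enough for on-device (edge) inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```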

"The BG24 and MG24 wireless SoCs represent an awesome combination of industry capabilities including broad wireless multi-protocol support, battery life, machine learning, and security for IoT Edge applications," said Matt Johnson, CEO of Silicon Labs.

The families also have the largest flash and RAM capacities in the company's portfolio. This headroom means a device can evolve to support multi-protocol operation, Matter, and trained ML algorithms for large datasets. PSA Level 3-certified Secure Vault, the highest level of security certification for IoT devices, offers the protection required in products such as medical equipment, door locks, and other sensitive deployments where hardening the device against external threats is essential.

Go here to read the rest:
Bringing AI and machine learning to the edge with matter-ready platform - Electropages

Legal Issues That Might Arise with Machine Learning and AI – Legal Reader

While AI-enabled decision-making seems to take out the subjective human areas of bias and prejudice, many observers worry that machine analytics have the same or different biases embedded in the systems.

As with many advances in technology, the legal issues can be unsettled until a body of case law has been established. This is likely to be the case with artificial intelligence or AI. While legal scholars have already begun discussing the ramifications of this advance, the number of court cases, though growing, has been relatively meager up to this point.

Rapid Advances in AI

New and more powerful chips have the potential to accelerate many applications that rely on AI. This removes some of the impediments that have made advances in AI slower than some observers anticipated, shortening the time it takes to train new models from months to just a few hours or even minutes. With better and faster chips for machine learning, the AI revolution can begin to reach its potential.

This potent advance will bring an array of important legal questions. This capability will usher in new ideas and techniques that will impact product development, analytics and more.

Important Impacts on Intellectual Property

While AI will impact many areas of the law, a fair share of its influence will be on intellectual property. Certainly, questions of negligence, unfairness, bias and cyber security will be important, but some might wonder who owns the fruits of innovations that come from AI. In general, the patentability of computer-generated works has not been established, and the default is that the owner of the AI design owns the new material. Since a computer cannot own property, at present it holds no intellectual property rights of its own.

More study and discussion will no doubt go into this area of law. This will become more pressing as technological advances will make it more difficult to identify the creator of certain products or innovations.

Increasing Applications in Medical Fields

The healthcare industry is also very much involved in harnessing the power associated with AI. Many of these applications involve routine tasks that are not likely to present overly complex legal concerns, although they could result in the displacement of workers. While the processing of paperwork and billing is already underway, the use of AI for imaging, diagnosis and data analysis is likely to increase in the coming years.

This could have legal implications for cases that deal with medical malpractice. For example, could the creator of a system that is relied upon for an accurate diagnosis be sued if something goes wrong? While the potential is enormous, the possibility of error raises complicated questions when AI systems play a primary role.

Crucial Issues With Algorithmic Decision-Making

While AI-enabled decision-making seems to take out the subjective human areas of bias and prejudice, many observers worry that machine analytics have the same or different biases embedded in the systems. In many ways, these systems could discriminate against certain segments of society when it comes to housing or employment opportunities. These entail ethical questions that at some point will be challenged in a court of law.

The ultimate question is whether smart machines can outthink humans or whether they simply inherit the blind spots of their programmers. In a worst-case scenario, these embedded prejudices would be hard to combat, as they would come with the imprint of scientific progress. In other words, the biases would claim objectivity.

Some observers, though, believe that business practices have always been the arena for discrimination against certain workers. With AI, thoughtfully engaged and carefully calibrated, these practices could be minimized. It could offer more opportunities for a wider pool of individuals while minimizing the influence of favoritism.

The Legal Future of AI

As with other areas of the courts, AI issues will have to be slowly adjudicated in the court system. Certain decisions will establish court precedents that will gain a level of authority. Technological advances will continue to shape society and the international legal system.

Follow this link:
Legal Issues That Might Arise with Machine Learning and AI - Legal Reader

Grant will expand University Libraries’ use of machine learning to identify historically racist laws – UNC Chapel Hill

Since 2019, experts at the University of North Carolina at Chapel Hill's University Libraries have investigated the use of machine learning to identify racist laws from North Carolina's past. Now a grant of $400,000 from The Andrew W. Mellon Foundation will allow them to extend that work to two more states. The grant will also fund research and teaching fellowships for scholars interested in using the project's outputs and techniques.

On the Books: Jim Crow and Algorithms of Resistance began with a question from a North Carolina social studies teacher: Was there a comprehensive list of all the Jim Crow laws that had ever been passed in the state?

Finding little beyond scholar and activist Pauli Murray's 1951 book States' Laws on Race and Color, a team of librarians, technologists and data experts set out to fill the gap. The group created machine-readable versions of all North Carolina statutes from 1866 to 1967. Then, with subject expertise from scholarly partners, they trained an algorithm to identify racist language in the laws.
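The article doesn't describe the team's actual model, but a generic sketch of this kind of supervised text classification, here with scikit-learn and invented placeholder statutes and labels, looks like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: statutes hand-labeled by subject experts.
laws = [
    "An act to provide separate schools for the races...",
    "An act to fund the maintenance of public roads...",
]
labels = [1, 0]  # 1 = Jim Crow language present, 0 = absent

# Bag-of-words features plus a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(laws, labels)

# Score every digitized statute and surface likely matches for expert
# review -- the model assists, the scholars make the final call.
print(clf.predict_proba(["An act relating to segregation of railway cars"]))
```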

"We identified so many laws," said Amanda Henley, principal investigator for On the Books and head of digital research services at the University Libraries. "There are laws that initiated segregation, which led to the creation of additional laws to maintain and administer the segregation. Many of the laws were about school segregation." Other topics included indigenous populations, taxes, health care and elections, Henley said. The model eventually uncovered nearly 2,000 North Carolina laws that could be classified as Jim Crow.

Henley said that On the Books is an example of "collections as data": digitized library collections formatted specifically for computational research. In this way, they serve as rich sources of data for innovative research.

The next phase of On the Books will leverage the team's learnings through two activities:

"We've gained a tremendous amount of knowledge through this project: everything from how to prepare data sets for this kind of analysis, to training computers to distinguish between Jim Crow and not Jim Crow, to creating educational modules so others can use these findings. We're eager to share what we've learned and help others build upon it," said Henley.

On the Books began in 2019 as part of the national Collections as Data: Part to Whole project, funded by The Andrew W. Mellon Foundation. Subsequent funding from the ARL Venture Fund and from the University Libraries' internal IDEA Action grants allowed the work to continue. The newest grant from The Mellon Foundation will fund the work through the end of 2023.

Original post:
Grant will expand University Libraries' use of machine learning to identify historically racist laws - UNC Chapel Hill

Autonomy in Action: These Machines Bring Imagination to Life – Agweb Powered by Farm Journal

By Margy Eckelkamp and Katie Humphreys

Machinery has amplified the workload farmers can accomplish, and technology has delivered greater efficiencies. Now, autonomy is poised to introduce new levels of productivity and fun.

Different from its technology cousins of guidance and GPS-enabled controls, autonomy relocates the operator to anywhere but the cab.

"True autonomy is taking off the training wheels," says Steve Cubbage, vice president of services for Farmobile. "It doesn't require human babysitting. Good autonomy is prefaced on good data, and lots of it."

As machines are making decisions on the fly, companies seek to enable them to provide the quality and consistency expected by the farmer.

"We could see mainstream adoption in five to 10 years. It might surprise us depending on how far we advance artificial intelligence (AI), data collection, etc.," Cubbage says. "Don't say it can't happen in a short time, because it can. Autosteer was a great example of quick and unexpected acceptance."

Learn more about the robots emerging on the horizon.

The NEXAT is an autonomous machine, ranging from 20' to 80', that can be used for tillage, planting, spraying and harvesting. The interchangeable implements are mounted between four electrically driven tracks.

Source: NEXAT

"The idea and philosophy behind the NEXAT is to enable a holistic crop production system where 95% of the cultivated area is free of soil compaction," says Lothar Fli, who works in marketing for NEXAT. "This system offers the best setup for carbon farming in combination with the possibility for regenerative agriculture and optimal yield potential."

The NEXAT system carries the modules rather than pulls them, as Fli describes, which allowed the company to develop a simpler and lighter machine that delivers 50% more power with 40% less weight. In operation, weight is transferred onto the carrier vehicle and its large tracks, so the unit effectively becomes a self-propelled machine.

"This enables the implements to be guided more accurately and with less slip, reducing fuel consumption and CO2 emissions more than 30%," he says. Because the NEXAT carries the implement, there's not an extra chassis with extra wheels. The setup creates the best precision at a high working width and reduces soil compaction on the growing areas.

In the field, the machine is driven horizontally but rotates 90° for road travel. Two independent 545-hp diesel engines supply power. The cab, which can rotate 270°, is the basis for fully automated operation but enables manual guidance.

The tillage and planting modules came from Väderstad, a Swedish company. The CrossCutter disks for tillage and Tempo planter components are no different than what's found on traditional Väderstad implements.

The crop protection modules, which work like a conventional self-propelled sprayer, come from the German company Dammann. The sprayer has a 230' boom, with ground clearance up to 6.5', and a 6,340-gal. tank.

The NexCo combine harvester module achieves grain throughputs of 130 to 200 tons per hour.

A 19' long axial rotor is mounted transverse to the direction of travel, and the flow of harvested material is introduced centrally into the rotor at an angle to achieve energy efficiency. The rotor divides it into two material flows, which, according to NEXAT, enables roughly twice the threshing performance of conventional machines. Two choppers provide uniform straw and chaff distribution, even with a 50' cutting width.

The grain hopper holds 1,020 bu. and can be unloaded in a minute. See the NEXAT system in action.

At the Consumer Electronics Show, John Deere introduced its full autonomy solution for tractors, which will be available to farmers later in 2022. Its tractors are outfitted with:

Farmers can control machines remotely via the JD Operations Center app on a phone, tablet or computer.

"Unlike autonomous cars, tractors need to do more than just be a shuttle from point A to point B," says Deanna Kovar, product strategy at John Deere.

"When tractors are going through the field, they have to follow a very precise path and do very specific jobs," she says. "An autonomous 8R tractor is one giant robot. Within 1" of accuracy, it is able to perform its job without human intervention."

Artificial intelligence and machine learning are key technologies in John Deere's vision for the future, says Jahmy Hindman, John Deere's chief technology officer. In the past five years the company has acquired two Silicon Valley technology startups: Blue River Technology and Bear Flag Robotics.

This specific autonomy product has been in development for at least three years as the John Deere team collected images for its machine learning library. Users have access to live video and images via the app.

The real-time delivery of performance information is critical, John Deere highlights, to building trust in the system's performance.

For example, Willy Pell, John Deere senior director of autonomous systems, explains that even if the tractor encounters an anomaly or an undetectable object, safety measures will stop the machine.

While the initial introduction of the fully autonomous tractor showed a tillage application, Jorge Heraud, John Deere vice president of automation and autonomy, shares three other examples of how the company is bringing forward new solutions:

See the John Deere autonomous tractor launch.

New Holland has developed the first chopped material distribution system with direct measurement technology: the OptiSpread Automation System. 2D radar sensors mounted on both sides of the combine measure the speed and throw of the chopped material. If the distribution pattern no longer corresponds to the nominal distribution pattern over the entire working width, the rotational speed of the hydraulically driven feed rotors increases or decreases until the distribution pattern once again matches. The technology registers irregular chopped material distribution, even with a tailwind or headwind, and produces a distribution map.

The system received an Agritechnica silver innovation award.

Source: CNH

As part of Vermeer's 50th anniversary celebration in 2021, a field demonstration was held at its Pella, Iowa, headquarters to unveil its autonomous bale mover. The BaleHawk navigates through a field via onboard sensors to locate bales, pick them up and move them to a predetermined location.

With the capacity to load three bales at a time, the BaleHawk was successfully tested with bales weighing up to 1,300 lb. The empty weight of the vehicle is less than 3 tons. Vermeer sees the lightweight concept as a solution to reduce compaction.

See the Vermeer BaleHawk in action.

Source: Vermeer

In April 2021, Philipp Horsch, with German farm machinery manufacturer Horsch Maschinen, tweeted about its Robo autonomous planter. He said the machine was likely to be released for sale in about two years, depending on efforts to change current regulations, which state that for fully autonomous vehicle use in Germany, a person must stay within 2,000' of the machine to supervise it.

The Horsch Robo is equipped with a Trimble navigation system and fitted with a large seed hopper. See the system in action.

Source: Horsch

Katie Humphreys wears the hat of content manager for the Producer Media group. Along with writing and editing, she helps lead the content team and Test Plot efforts.

Margy Eckelkamp, The Scoop Editor and Machinery Pete director of content development, has reported on machinery and technology since 2006.

View post:
Autonomy in Action: These Machines Bring Imagination to Life - Agweb Powered by Farm Journal

Senior Research Associate in Machine Learning job with UNIVERSITY OF NEW SOUTH WALES | 279302 – Times Higher Education (THE)

Work type: Full-time
Location: Canberra, ACT
Categories: Lecturer

UNSW Canberra is a campus of the University of New South Wales located at the Australian Defence Force Academy in Canberra. UNSW Canberra endeavours to offer staff a rewarding experience and offers many opportunities and attractive benefits, including:

At UNSW, we pride ourselves on being a workplace where the best people come to do their best work.

The School of Engineering and Information Technology (SEIT) offers a flexible, friendly working environment that is well-resourced and delivers research-informed education as part of its accredited, globally recognised engineering and computing degrees to its undergraduate students. The School offers programs in electrical, mechanical, aeronautical, and civil engineering as well as in aviation, information technology and cyber security to graduates and professionals who will be Australia's future technology decision makers.

We are seeking a person for the role of Postdoctoral Researcher / Senior Research Fellow in the area of machine learning.

About the Role:

Role: Postdoctoral Researcher / Senior Research Fellow
Salary: Level B: $110,459 - $130,215 plus 17% Superannuation
Term: Fixed-term, 12 months, full-time

About the Successful Applicants

To be successful in this role you will have:

In your application you should submit a 1-page document outlining how you meet the Skills and Experience outlined in the Position Description. Please clearly indicate the level you are applying for.

In order to view the Position Description please ensure that you allow pop-ups for Jobs@UNSW Portal.

The successful candidate will be required to undertake pre-employment checks prior to commencement in this role. The checks that will be undertaken are listed in the Position Description. You will not be required to provide any further documentation or information regarding the checks until directly requested by UNSW.

The position is located in Canberra, ACT. The successful candidate will be required to work from the UNSW Canberra campus. To be successful you will hold Australian Citizenship and have the ability to apply for a Baseline Security Clearance. Visa sponsorship is not available for this appointment.

For further information about UNSW Canberra, please visit our website: UNSW Canberra

Contact: Timothy Lynar, Senior Lecturer

E: t.lynar@adfa.edu.au

T: 02 51145175

Applications close: 13 February 2022, 11:30 PM

Find out more about working at UNSW Canberra

At UNSW Canberra, we celebrate diversity and understand the benefits that inclusion brings to the university. We aim to ensure that our culture, policies, and processes are truly inclusive. We are committed to developing and maintaining a workplace where everyone is valued and respected for who they are and supported in achieving their professional goals. We welcome applications from Aboriginal and Torres Strait Islander people, Women at all levels, Culturally and Linguistically Diverse People, People with Disability, LGBTIQ+ People, people with family and caring responsibilities and people at all stages of their careers. We encourage everyone who meets the selection criteria and shares our commitment to inclusion to apply.

Any questions about the application process - please email unswcanberra.recruitment@adfa.edu.au

Read this article:
Senior Research Associate in Machine Learning job with UNIVERSITY OF NEW SOUTH WALES | 279302 - Times Higher Education (THE)

An introduction to machine translation for localisation – GamesIndustry.biz

Machine learning has made its way into nearly every industry, and game localization is no exception. Software providers claim that their machine translation products mark a new era in localization, but gamers are often left wishing that game publishers would pay more attention to detail.

As a professional localization company that is currently working with machine translation post-editing, Alconost could not pass up the topic. In this article we aim to find out what's hot (and what's not) about machine translation (MT) and how to get the most out of it without sacrificing quality.

When machine learning was introduced to localization, it was seen as a great asset, and for quite a while localization companies worked using the PEMT approach. PEMT stands for post-edited machine translation: it means that after a machine translates your text, translators go through it and edit it. The main problem with PEMT is that the machine translates without comparing the text to previous or current translations and a glossary -- it just translates as it "sees" it. So naturally this method results in numerous mistakes, creating a need for manual editing.

As time passed and technology advanced, NMT (neural machine translation) came into play. This proved a much more reliable and robust solution. NMT uses neural networks and deep learning to not just translate the text but actually learn the terminology and its specifics. This makes NMT much more accurate than PEMT, and with sufficient training it delivers high-quality results much faster than any manual translation.
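To illustrate how accessible NMT has become, here is a minimal sketch using an open-source pretrained model from the Hugging Face hub. The checkpoint name is a real public English-to-German model, but this is a generic engine, not any particular vendor's product, and game-specific terms would still need a glossary and post-editing:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # public English -> German model
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a small batch of in-game strings in one call.
lines = ["New quest unlocked!", "Your party takes 10 fire damage."]
batch = tokenizer(lines, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```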

It's no surprise that there are dozens of ready-made NMT solutions on the market. These can be divided into two main categories: stock and custom NMT engines. We will talk about custom (or niche-specific) NMT tools a bit later; for now, let's focus on stock NMT.

Stock NMT engines are based on general translation data. While these datasets are vast and rich (for example, Google's database), they are not domain-oriented. This means that when using a stock NMT tool you get a general understanding of the text's meaning, but you don't get an accurate translation of specific phrases and words.

Examples of stock NMT engines include Google Cloud Translation, Amazon Translate, DeepL Translator, CrossLang, Microsoft Translator, Intento, KantanMT.

The chief advantage of these solutions is that most of them are public and free to use (like Google Translate). Commercial stock NMTs offer paid subscriptions with their APIs and integration options. But their biggest drawback is that they don't consider the complexity of game localization. More on that below.

While machine translation works fine in many industries, game localization turned out to be a tough nut to crack. The main reason for this is that gaming (regardless of the type of game) always aims for an immersive experience, and one core part of that experience is natural-sounding dialogue and in-game text. So what's so challenging about translating them properly?

It may sound like a given, but creativity plays a massive role in bringing games to life, especially when it comes to their translation. A translator might have a sudden flash of inspiration and come up with an unexpected phrasing or wording that resonates with players much better than the original text.

Can a machine be creative? Not yet. And that means that machine translations will potentially always lack the creative element that sometimes makes the whole game shine.

One of the biggest challenges in localization is making the translation sound as natural as possible. And since every country and region has its own specific languages and dialects, it takes a thorough understanding of one's culture to successfully adapt a translation to it.

While a machine learning solution can be trained on an existing database, what if it comes across a highly specific phrase that only locals know how to use? This is where professional translation by native-speaking linguists and community feedback are highly helpful. Native speakers of the target language who know its intricacies can advise on the best wording. And for that, you need a feel for the language you're working with, not just theoretical knowledge.

Certain words convey a certain tone, and this is something that we do without thinking, just by feel. So when translating a game, a human translator can sense the overall vibe of the game (or of a specific dialogue) and use not just the original wording but synonyms that better convey the tone and mood. Conversely, a machine is not able to "sense the mood," so in some cases the translation may not sound as natural as it could.

Despite all the challenges around game localization, machine translation still does a pretty decent job. This technology has several significant benefits that make MT a great choice when it comes to certain tasks.

Speed is probably the biggest benefit of machine translation and its unique selling point. A machine can translate massive chunks of text in mere minutes, compared to the days or even weeks it would take a translator. In many cases it proves faster and more efficient to create a machine translation first and then edit it. Besides, the speed of MT is very handy if you need to quickly release an update and can manage with "good enough" translation quality.

When talking about game localization, the first thing that comes to mind is usually in-game dialogue. But game localization is much more than that: it includes user manuals, how-tos, articles, guides, and marketing texts. This kind of copy doesn't employ much creativity and imagery, since these materials don't really impact how immersive the gaming experience will be. If a user spots a mistake while reading your blog, it's less likely to ruin the game experience for them.

One more huge advantage of machine translation is its relatively low cost. Compared to the rates of professional translators, machine translation tends to be more affordable. Hence, it can save you money while letting you allocate experts to more critical tasks.

One more way MT can benefit your project is translation consistency. When several independent translators work on a text, they may translate certain terms differently, leaving you with inconsistent renderings. With machine translation, repetitive phrases are always translated the same way, improving the consistency of your text.
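That consistency is also easy to verify automatically. The sketch below, with an invented two-entry glossary and invented strings, checks that every approved glossary term survives into the target text:

```python
# Hypothetical glossary of approved renderings (English -> German).
GLOSSARY = {"quest": "Quest", "mana": "Mana"}

def consistency_issues(source: str, target: str) -> list[str]:
    """Flag glossary terms in the source that are missing their
    approved rendering in the target."""
    issues = []
    for src_term, tgt_term in GLOSSARY.items():
        if src_term in source.lower() and tgt_term not in target:
            issues.append(f"'{src_term}' not rendered as '{tgt_term}'")
    return issues

print(consistency_issues("Accept the quest?", "Auftrag annehmen?"))
# -> ["'quest' not rendered as 'Quest'"]
```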

MT is not 100% accurate, according to gamers. For example, a recent Reddit discussion features hundreds of comments left by frustrated gamers, the majority of whom say the same thing: companies are going for fast profits instead of investing in high-quality translation. And what's the tool to deliver quick results that are "good enough"? You guessed it -- machine translation.

Unfortunately, when gaming companies try to release games faster it leads not only to a poor user experience but also to a significant drop in brand loyalty. Many gamers cite poor translations as one of the biggest drawbacks of gaming companies.

So what options are there when Google NMT isn't enough? Here's an idea for what might work best.

While neural machine translation has certain flaws, it has many benefits as well. It's quick, it's moderately accurate, and it can actually be quite helpful if you need to quickly translate massive amounts of documents (such as user manuals). So what we see as the perfect solution is niche-oriented, localization-specific NMT (or custom NMT).

For instance, Alconost is currently working on a product that uses neural machine learning and a vast database of translations in different languages. This lets us achieve higher accuracy and adapt the machine not just for general translation, but for game translation -- and there is a big difference between the two. In addition, we use cloud platforms (such as Crowdin and GitLocalize) with open-source data. That means that glossaries and translation memories from one project can be used for another. And obviously our translators post-edit the text to ensure that the translation was done right.

Custom domain-adapted NMT solutions may become a milestone in localization, as they are designed with a specific domain in mind. Their biggest advantages are high translation accuracy, speed, affordability (as they're cheaper than hiring professional translators), and the option to explore new niches and domains.

Some content, such as user reviews, sometimes goes untranslated because it is too specific and there is not much of it. It wouldn't make much sense to use a stock NMT solution for their translation, as it would require heavy post-editing.

Custom NMT tools, however, can be designed to work with user reviews and "understand" the tone of voice, so that even this specialized content can be translated by a machine. This solution has been implemented by Airbnb, where reviews and other user-generated content are translated in a flash just by pressing the "Translate" button.

In addition, machine translators can be trained to recognize emotions and mood and, when paired with machine-learning classifiers, to label and prioritize feedback. This can also be used to collect data on users' online behavior, which is a highly valuable asset to any company.
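As a toy version of that idea, an off-the-shelf sentiment model can label each review and bump negative ones up the triage queue. The model choice and review strings below are illustrative only, not any particular company's system:

```python
from transformers import pipeline

# Downloads a default English sentiment model on first use.
classifier = pipeline("sentiment-analysis")

reviews = [
    "Localization is flawless, the dialogue feels native.",
    "The menu text is clearly machine translated and full of errors.",
]
for review, result in zip(reviews, classifier(reviews)):
    priority = "HIGH" if result["label"] == "NEGATIVE" else "low"
    print(f"[{priority}] ({result['score']:.2f}) {review}")
```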

Finally, let's talk about the intricacies of localizing a text translated by a machine, and how the process differs from standard localization. We'll compare the two approaches based on our own experience acquired while working on different projects.

When we localize a project from scratch, it's safe to say we are in full control of the quality, since the team has glossaries and context available from the start. Here the text is translated with a specific domain in mind, and only rarely do we have to post-edit the translated copy.

With machine translation, however, things are a bit different. The source text can be translated by different engines, all of which differ in terms of quality and accuracy. So when we start working with these texts, we request all available materials (style guides, glossary, etc.) from the client to ensure that the translation fits the domain and the brand's style. This means that post-editing machine translations requires the additional step of assessing the quality and accuracy for the given project.

When you choose a traditional localization approach, there is a 99% chance that your project will be assigned to a person who has the most experience with your particular language and domain.

But with machine translation you can't really be sure how well the machine has been trained and how much data it has for different languages. One engine may have learned 10,000 pages of Spanish-English translations, while another engine has studied 1,000,000 pages. Obviously, the latter is going to be more accurate.

The bottom line is that when working with a machine translation engine "trained" by a professional localization company on niche topics, there's an excellent chance that they'll ensure the "proficiency" of the customized MT engine and, consequently, the quality of the translation. With an ample translation database and professional editors by your side, you can put your mind at ease, knowing that your project is in good hands.

Kris Trusava is localization growth manager at Alconost, a provider of localization services for games and other software into over 80 languages.

Here is the original post:
An introduction to machine translation for localisation - GamesIndustry.biz

Quantum Computing Threatens Everything Could it be Worse Than the Apocalypse? – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

A quantum computer is a machine that uses the laws of quantum theory to attack problems that remain hard even as Moore's law (the number of transistors in a dense integrated circuit doubles about every two years) runs its course. One example is factoring large numbers. Where a traditional computer's bits are each either 0 or 1, a quantum processor's qubits can occupy superpositions of both, so n qubits can represent on the order of 2^n states at once. This gives these computers exponential power on certain problems, letting them solve tasks that traditional computation can't even practically attempt.

In the near future, quantum computers will be so advanced that they will have the capability to simulate very complicated systems. This could be used for simulations in physics, aerospace engineering, cybersecurity and much more. However, once such a computer is built, it has the potential to unravel data encryption protocols. It could also potentially compromise air gaps, given its ability to scan vast distances for nearby networked devices or applications that are open. This means things can become even simpler for external hackers: they may already have access to your computer or computer system via other avenues, like vulnerabilities in web browsers, and they could find it much easier still because you're not locking up all the doors.

Quantum computers point to a radically new understanding of computing. An understanding that could eventually be used to unlock problems now thought completely intractable. For now, the field seems ripe with potential. Scientists working on quantum computing call it one of the most interesting theoretical tools in artificial intelligence. Think of it as an incredibly powerful calculator programmed with deep domain expertise. Quantum computers promise answers to all sorts of mathematical, scientific and medical questions humans would never have the guts to tackle otherwise. They promise profound breakthroughs in imaging that will rival even experimental intracellular MRI scans; they may help crack wide-ranging databases that are currently unbreakable; or they might pick up scant details like geological signatures warning us about tsunamis long before they happen.

Quantum computers can theoretically be programmed to solve any complex computational problem. But the act of programming the computer is so expensive and inflexible that someone would need to program it with all possible solutions. Quantum computers threaten everything. The worst part is that security experts can't ever say for sure what you can do to protect against their programming capabilities. They do know, however, that it's possible to reprogram them just as we would a normal computer. It's just that the task is so complex and difficult, and such a high-level security risk, that it might as well never exist.

What does this all mean? It means we need to develop some sort of encryption technology on our smaller devices so not even those who hold all the world's data can see or access it. Quantum computers work differently than traditional computers, and that gives the maker of a quantum computer more control than with a conventional machine: they can do things like run inherently reversible computations and process large data with greater speed. The manufacturer will program the machine before release, which also comes with certain risks. If they change their mind and reprogram it per client needs, they put themselves at risk for security breaches. The catch is that cryptography keys are only secure if you keep them secret. The slightest leak (say, a pinhole camera across the table from something like a quantum computer, or a phone call or email intercepted while being decrypted) would enable an adversary not just to unscramble your message but to steal your keys. The threat posed by quantum computing has been speculated about since before it was even technologically feasible to build a quantum computer. But now that we're nearly there, the situation might be even more dire than you can imagine.

As quantum computers allow for more efficient algorithms, the dangers of hacking increase. Such security risks have been a top priority at Google, which has high expectations for the approach it will take to create its future quantum machine. In the meantime, DARPA (Defense Advanced Research Projects Agency) has set out grand challenges for computer science with a hefty $2 million prize. DARPA's goal is to keep U.S. cyber strength relevant amid the rapid decline of Moore's law and a potential loss of global technological leadership. If quantum computers proliferate, they will threaten everything: not just bank records and medical documents, but everything. They represent a security leak so fundamental that it could be worse than the apocalypse. The quantum computer poses a possible threat to the infrastructure of the United States, yet the American authorities do not have enough measures in place to stop this type of danger. One way they can defend themselves is by inventing new safety standards that work with current technologies.

Whenever quantum computing matures, however, it will present a vigorous challenge. Computer scientists will need to develop the protocols and protections necessary to ensure security for this emerging technology. If these precautions are not taken, quantum computing could lead to disastrous outcomes in cyber security. A protocol needs to be developed to provide security for quantum computers. Hackers will be able to access and disrupt live systems, which creates an urgent need for advances in cyber security. These new systems can't just implement existing protection protocols because they're not fully developed yet. The cost of research and development is high, and the profits once the product is finished are relatively low.

Quantum computing is a hot topic at this moment in time that will impact society in ways we can't even predict if we don't acknowledge its significance now. Most computers today work in accordance with digital signals. If someone tries to hack the computer, it will change that digital signal into another form or cancel it out, which can be easily noticed. However, quantum computers use quantum bits for calculations. These are tied together in a way that makes them so sensitive to changes in information that they are far more vulnerable to hacks than digital computers. If someone managed to hack a quantum computer (not yet possible), it would have serious implications for maintaining our safety standards.

If the leaked NSA documents are to be believed, then we may be in for a rude awakening when quantum computers become technologically feasible. These machines will be able to perform calculations in far less time than any conventional computer and render our current encryption ineffectual. The leaks claim that within 30 years, two medium-sized quantum computers would be able to break even the security of RSA at its current common key size of 2048 bits.
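To see what is at stake, here is a toy RSA round-trip with absurdly small primes. Real deployments use 2048-bit or larger keys precisely because security rests on the difficulty of factoring the modulus, which is what Shor's algorithm on a sufficiently large quantum computer would undo:

```python
# Toy RSA with tiny primes -- for illustration only, never for real use.
p, q = 61, 53          # secret primes (absurdly small here)
n = p * q              # public modulus: 3233
phi = (p - 1) * (q - 1)
e = 17                 # public exponent
d = pow(e, -1, phi)    # private exponent: modular inverse of e mod phi

msg = 42
cipher = pow(msg, e, n)    # encrypt with the public key (e, n)
plain = pow(cipher, d, n)  # decrypt with the private key (d, n)
assert plain == msg

# An attacker who factors n back into p and q can recompute d directly.
# Trivial at this size; infeasible classically at 2048 bits -- but exactly
# what a large, fault-tolerant quantum computer could do efficiently.
```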

Any business that relies on modern cryptography is at risk of being hacked in the near future. But what can companies do to protect themselves? As it turns out, there are some pretty straightforward measures by which firms can preserve (or improve) security amid all the hullabaloo over quantum computing. The authors recommend investing in the encryption techniques behind Bitcoin, the blockchain and TLS (Transport Layer Security).

In simple terms, quantum computers process information differently from today's digital computers. This is because of their ability to have bits which sit in more than one state simultaneously, meaning they can perform many calculations at a time. In a future dominated by quantum computing, all regular computing will be made virtually obsolete. Hackers will be able to access the deepest secrets of companies without needing a password. To avoid this fate, companies need to embrace encryption techniques that guard against quantum technology, but they cannot afford to stop innovating too drastically.

The looming potential threat of quantum computing should be taken seriously, but this doesn't mean you should panic. The best way to protect yourself is to plan ahead and think about possible solutions. Incorporating elements of quantum cryptography may not always be possible for every client because of the cost. But, it could help secure an important client who cannot risk future interference in their sensitive operations.

Excerpt from:
Quantum Computing Threatens Everything Could it be Worse Than the Apocalypse? - Entrepreneur

Atom Computing Plans To Build A Bigger And Better High-Tech Quantum Computer With Its Latest $60 Million Series B Funding – Forbes

Atom Computing, a quantum computing company headquartered in Berkeley, California, seems to be on the fast track for funding.

This week Atom announced it had secured a $60M Series B round of financing led by Third Point Ventures. The round also included Prime Movers Lab and insiders Innovation Endeavors, Venrock, and Prelude Ventures.

Atom was founded in 2018 with $5M in seed funds by Benjamin Bloom and Jonathan King. Over two years, the duo used those funds to secretly staff and build a quantum computer with a unique technology. What set Atom's computer apart from other quantum machines was that it was the first quantum computer to use nuclear-spin qubits created from optically trapped neutral atoms.

First-Generation Quantum Computer, Phoenix

In July 2021, Atom Computing received an additional $15M in Series A funding from investors Venrock, Innovation Endeavors, and Prelude Ventures, plus three grants from the National Science Foundation.

According to a statement in Atom's press release by Rob Hays, Atom Computing's president and CEO, there was no shortage of investment interest. "We've seen a tremendous amount of investor interest in what many are starting to believe is a more promising way to scale quantum computers: neutral atoms," he said. "Our technology advancements and this investment give us the runway to continue our focus on delivering the most scalable and reliable quantum computers."

What's different about its technology

Most of today's quantum computers use two types of qubits, either superconducting (IBM and Google) or trapped-ion (Quantinuum or IonQ). Amazon doesn't yet have a quantum computer, but it plans to build one using superconducting hardware. In contrast, PsiQuantum and Xanadu use photons of light that act as qubits.

Atom Computing chose a different technology: nuclear-spin qubits made from neutral atoms. Phoenix, the name of Atom's first-generation, gate-based quantum computer platform, uses 100 optically trapped qubits.

These qubits are created from an isotope of strontium, a naturally occurring element that is a neutral atom. Going deeper, neutral atoms have equal numbers of protons and electrons. However, isotopes of strontium have varying numbers of neutrons, and these differences produce different energy levels in the atom that allow spin qubits to be created. Atom Computing uses the isotope strontium-87 and takes advantage of its unique energy levels to create spin qubits.

It is important for qubits to remain in a quantum state long enough to finish running a quantum circuit. The time a qubit retains its quantum state is called its coherence time. Neutral-atom qubits have a longer coherence time than most other qubit technologies.
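As a back-of-the-envelope illustration, assume coherence decays exponentially (a common first-order model; the numbers below are invented). The chance that a qubit is still usable after a circuit of a given duration falls off as exp(-t/T), so longer coherence times translate directly into deeper circuits:

```python
import math

def survival_probability(circuit_time_s: float, coherence_time_s: float) -> float:
    """Probability a qubit retains its state for the circuit duration,
    assuming simple exponential decay of coherence."""
    return math.exp(-circuit_time_s / coherence_time_s)

# The same 10 ms circuit on qubits with short vs. long coherence times:
for t_coh in (0.001, 0.1, 10.0):  # 1 ms, 100 ms, 10 s
    p = survival_probability(0.010, t_coh)
    print(f"coherence time {t_coh:>6} s -> survival probability {p:.3f}")
```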

Lasers, rather than wires, are used for precision control of the strontium-87 qubits. Eliminating wiring removes a source of radiated noise that negatively affects coherence.

There are many other technical reasons for using neutral-atom spin qubits, but they are beyond the scope of this article.

Second generation plans

Artist rendering of Atom Computing's second-generation quantum computer

With its latest $60M Series B funding, Atom Computing plans to build a larger, second-generation neutral-atom quantum computer. Many additional qubits will give the system increased computational ability. Atom Computing likely has undisclosed customer trials and use cases in progress, and we expect new and more significant use cases to be announced publicly once the new quantum system is operational.

Patrick Moorhead, president and chief analyst of Moor Insights and Strategy, said, "Qubit coherence, fidelity, and scalability are essential factors for achieving quantum advantage. Atom Computing has already demonstrated that Phoenix, its first-generation 100+ nuclear-spin qubit quantum processor, has the potential to check all those boxes. With the additional $60M Series B funding, I believe Atom could build a large-qubit, second-generation quantum system that either brings it to the edge of quantum advantage or possibly even achieves it."

Follow this link:
Atom Computing Plans To Build A Bigger And Better High-Tech Quantum Computer With Its Latest $60 Million Series B Funding - Forbes

Why Is Silicon Valley Still Waiting for the Next Big Thing? – The New York Times

In the fall of 2019, Google told the world it had reached quantum supremacy.

It was a significant scientific milestone that some compared to the first flight at Kitty Hawk. Harnessing the mysterious powers of quantum mechanics, Google had built a computer that needed only three minutes and 20 seconds to perform a calculation that normal computers couldn't complete in 10,000 years.

But more than two years after Google's announcement, the world is still waiting for a quantum computer that actually does something useful. And it will most likely wait much longer. The world is also waiting for self-driving cars, flying cars, advanced artificial intelligence and brain implants that will let you control your computing devices using nothing but your thoughts.

Silicon Valley's hype machine has long been accused of churning ahead of reality. But in recent years, the tech industry's critics have noticed that its biggest promises, the ideas that really could change the world, seem further and further away on the horizon. The great wealth generated by the industry in recent years has generally been thanks to ideas, like the iPhone and mobile apps, that arrived years ago.

Have the big thinkers of tech lost their mojo?

The answer, those big thinkers are quick to respond, is absolutely not. But the projects they are tackling are far more difficult than building a new app or disrupting another aging industry. And if you look around, the tools that have helped you cope with almost two years of a pandemic (the home computers, the videoconferencing services and Wi-Fi, even the technology that aided researchers in the development of vaccines) show the industry hasn't exactly lost a step.

"Imagine the economic impact of the pandemic had there not been the infrastructure: the hardware and the software that allowed so many white-collar workers to work from home and so many other parts of the economy to be conducted in a digitally mediated way," said Margaret O'Mara, a professor at the University of Washington who specializes in the history of Silicon Valley.

As for the next big thing, the big thinkers say, give it time. Take quantum computing. Jake Taylor, who oversaw quantum computing efforts for the White House and is now chief science officer at the quantum start-up Riverlane, said building a quantum computer might be the most difficult task ever undertaken. This is a machine that defies the physics of everyday life.

A quantum computer relies on the strange ways that some objects behave at the subatomic level or when exposed to extreme cold, like metal chilled to nearly 460 degrees below zero. If scientists merely try to read information from these quantum systems, they tend to break.

"While building a quantum computer," Dr. Taylor said, "you are constantly working against the fundamental tendency of nature."

The most important tech advances of the past few decades (the microchip, the internet, the mouse-driven computer, the smartphone) were not defying physics. And they were allowed to gestate for years, even decades, inside government agencies and corporate research labs before ultimately reaching mass adoption.

"The age of mobile and cloud computing has created so many new business opportunities," Dr. O'Mara said. "But now there are trickier problems."

Still, the loudest voices in Silicon Valley often discuss those trickier problems as if they were just another smartphone app. That can inflate expectations.

"People who aren't experts who understand the challenges may have been misled by the hype," said Raquel Urtasun, a University of Toronto professor who helped oversee the development of self-driving cars at Uber and is now chief executive of the self-driving start-up Waabi.

Technologies like self-driving cars and artificial intelligence do not face the same physical obstacles as quantum computing. But just as researchers do not yet know how to build a viable quantum computer, they do not yet know how to design a car that can safely drive itself in any situation or a machine that can do anything the human brain can do.

Even a technology like augmented reality eyeglasses, which can layer digital images onto what you see in the real world, will require years of additional research and engineering before it is perfected.

Andrew Bosworth, vice president at Meta, formerly Facebook, said that building these lightweight eyeglasses was akin to creating the first mouse-driven personal computers in the 1970s (the mouse itself was invented in 1964). Companies like Meta must design an entirely new way of using computers, before stuffing all its pieces into a tiny package.

Over the past two decades, companies like Facebook have built and deployed new technologies at a speed that never seemed possible before. But as Mr. Bosworth said, these were predominantly software technologies, built solely with bits (pieces of digital information).

Building new kinds of hardware, working with physical atoms, is a far more difficult task. "As an industry, we have almost forgotten what this is like," Mr. Bosworth said, calling the creation of augmented reality glasses a "once-in-a-lifetime project."

Technologists like Mr. Bosworth believe they will eventually overcome those obstacles, and they are more open about how difficult it will be. But that's not always the case. And when an industry has seeped into every part of daily life, it can be hard to separate hand-waving from realism, especially when it is huge companies like Google and well-known personalities like Elon Musk drawing that attention.

Many in Silicon Valley believe that hand-waving is an important part of pushing technologies into the mainstream. The hype helps attract the money and the talent and the belief needed to build the technology.

"If the outcome is desirable and it is technically possible, then it's OK if we're off by three years or five years or whatever," said Aaron Levie, chief executive of the Silicon Valley company Box. You want entrepreneurs to be optimistic, to have a little bit of that Steve Jobs "reality-distortion field," which helped persuade people to buy into his big ideas.

The hype is also a way for entrepreneurs to generate interest among the public. Even if new technologies can be built, there is no guarantee that people and businesses will want them and adopt them and pay for them. They need coaxing. And maybe more patience than most people inside and outside the tech industry will admit.

"When we hear about a new technology, it takes less than 10 minutes for our brains to imagine what it can do. We instantly compress all of the compounding infrastructure and innovation needed to get to that point," Mr. Levie said. "That is the cognitive dissonance we are dealing with."

Read the original post:
Why Is Silicon Valley Still Waiting for the Next Big Thing? - The New York Times

Riverlane taking quantum computing to fresh frontiers | Business Weekly – Business Weekly

Cambridge-based quantum engineering company Riverlane is at the heart of two related initiatives to troubleshoot problems and advance risk-free adoption worldwide.

It has head-hunted leading scientist Dr Earl Campbell to accelerate efforts to solve quantum error correction and only last month joined an influential consortium to build an error-corrected quantum processor.

As head of architecture, Dr Campbell will lead technical development to support the operating system for fault-tolerant quantum computers.

He joins Riverlane from Amazon Web Services' Quantum Computing group, and has held a number of academic positions over the past 16 years. His game-changing efforts include leading contributions to quantum error correction, fault-tolerant quantum logic and compilation, and quantum algorithms.

He has also made pioneering contributions to random compilers, including the qDRIFT algorithm, which is the only known efficient method for simulating systems with highly complex interactions.

Additionally, while working with IBM and University College London, Earl contributed to the development of near-Clifford emulators that were integrated into Qiskit, IBM's open-source software development kit for quantum computers.

At Amazon Web Services he was a leading contributor to its paper proposing a novel quantum computing architecture and established a team working on quantum algorithms.

At Riverlane he will be working alongside leaders who have joined from Microsoft, ARM, Samsung, Intel and the White House. Backed by some of Europe's leading venture-capital funds and the University of Cambridge, Riverlane is bringing together leading talent from the worlds of business, academia and industry to design its modular operating system to work with all hardware providers, whatever the type of qubit.

Riverlane has already partnered with a third of the world's quantum computing hardware companies, and has successfully tested Deltaflow.OS with multiple hardware approaches, including trapped ions and superconducting circuits.

Dr Campbell said: "Error correction is the next defining challenge in quantum computing and we will need to deliver fast, effective software to solve it. Over the past 16 years, I have been tackling questions like this as an academic and I'm looking forward to putting theory into practice.

"I've followed Riverlane since its early days and I've always been drawn to challenging work with the promise of delivering widespread social and commercial impact. I'm excited to join a diverse team with a proven track record in developing software used by hardware companies around the world."

Steve Brierley, CEO and founder of Riverlane, added: "Solving error correction will be key to unlocking quantum usefulness across a range of foundational challenges, including clean energy, drug discovery, material science, and advanced chemistry.

"We're delighted that Earl is bringing his world-class expertise in this challenge to the Riverlane team to accelerate our efforts and unlock the potential of this technology."

Just before Christmas, Riverlane joined a £7.5 million consortium to build an error-corrected quantum processor, working with a range of UK partners, including Rolls-Royce, to apply this toward new applications in the aerospace industry. The funding comes via the UK government's National Quantum Technologies Programme.

The project, led by quantum computer manufacturer Universal Quantum, calls on Riverlane's software and expertise to tackle quantum error correction on a trapped-ion quantum computer.

Error correction is a crucial step in unlocking the promise of fault tolerant quantum computers capable of a range of transformative applications, and is at the core of everything Riverlane does.

The work with Rolls-Royce will explore how quantum computers can develop practical applications toward the development of more sustainable and efficient jet engines.

This starts by applying quantum algorithms to take steps toward a greater understanding of how liquids and gases flow, a field known as fluid dynamics. Simulating such flows accurately is beyond the computational capacity of even the most powerful classical computers today.

The consortium also includes: academic researchers from Imperial College London and the University of Sussex; the Science and Technology Facilities Council (STFC) Hartree Centre; supply chain partners Edwards, TMD Technologies and Diamond Microwave; and commercialisation and dissemination experts Sia Partners and Qureca.

Fluids behave according to a famous set of partial differential equations called the Navier-Stokes equations, the solutions to which are important for aircraft and engine design, as well as understanding ocean currents and predicting the weather.

Classical computers can take months or even years to solve some types of these equations but recent research has shown that quantum computers could find the solutions much more quickly.
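For reference, the equations the article names are standard fluid dynamics (this summary is ours, not the consortium's). In their incompressible form, the Navier-Stokes equations couple a velocity field u and a pressure field p:

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u},
\qquad \nabla \cdot \mathbf{u} = 0
```

where \rho is the fluid density and \nu its viscosity. The nonlinear term (u · ∇)u is what makes these equations so expensive to simulate on classical machines.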

View original post here:
Riverlane taking quantum computing to fresh frontiers | Business Weekly - Business Weekly

The 4 biggest science breakthroughs that Gen Z could live to see – The Next Web

The only difference between science fiction and science is patience. Yesterday's mainframes are today's smartphones, and today's neural networks will be tomorrow's androids. But long before any technology becomes reality, someone has to dream it into existence.

The worlds of science and technology are constantly in flux. It's impossible to tell what the future will bring. However, we can make some educated guesses based on recent breakthroughs in the fields of nuclear physics, quantum computing, robotics, artificial intelligence, and Facebook's name change.

Let's set our time machines to January 28, 2100 to take an imaginary gander at the four most amazing science and technology breakthroughs the sort-of-far future has to offer.

This could very well be the most important technological breakthrough in human history.

The premise is simple: tiny machines that function at the cellular level capable of performing tissue repairs, destroying intruders, and delivering targeted nano-medications.

And this wouldn't necessarily mean filling your bloodstream with trillions of microscopic hunks of metal and silicon. There's plenty of reason to believe scientists could take today's biological robots and turn them into artificial intelligence agents capable of executing code functions inside our bodies.

Imagine an AI swarm controlled by a bespoke neural network attached to our brain-computer interfaces with the sole purpose of optimizing our biological functions.

We might not be able to solve immortality by 2100, but medical nanobots could go a long way towards bridging the gap.

Another technology that's sure to save innumerable human lives is fusion power. Luckily, we're on the verge of solving that one already (at least in a rudimentary, proof-of-concept kind of way). With any luck, by the time Gen Z's grandkids are old enough to drive, we'll have advanced the technology to the point of abundance.

And that's when we can finally start solving humanity's problems.

The big idea here is that we'll come close to perfecting fusion power in the future and, because of that, we'll be able to use quantum computers to optimize civilization.

Fusion could potentially be a limitless form of power, and it's theoretically feasible that we could eventually scale its energy-producing capabilities to such a degree that energy would be as ubiquitous for private and commercial use as air is.

Under such a paradigm, we can imagine a race to the top for scientific endeavor, the ultimate goal of which would be to produce a utopian society.

With near-infinite energy freely available, there would be little incentive to fight over resources and every incentive to optimize our existence.

And that's where quantum computers come in. If we can make classical algorithms learn to drive cars by building binary supercomputers, imagine what we could do with quantum supercomputing clusters harnessing the unbridled energy of entire stars.

We could assign algorithms to every living creature in the known universe and optimize for their existence. In essence, we could potentially solve the traveling salesman problem at the multiverse scale.

Admittedly, warp drives are a glamour technology. Technically speaking, with Mars so nearby, we don't really have to travel beyond our own solar system.

But it's well-documented that humanity has a need for speed. And if we ever have any intention of seeing stars other than Sol up close, we're going to need spaceships that can travel really, really fast.

The big problem here is that the universe doesn't appear to allow anything to travel faster than light. And that's pretty slow. It would take us over four years to travel to the closest star to Earth. In galactic terms, that's like spending 1/20th of your life walking to the neighbor's house.

Warp drives could solve this. Instead of going faster, we could theoretically exploit the wackiness of the universe to go further in a given amount of time without increasing speed.

This involves shifting through warp bubbles in space with exotic temporal properties, but in essence it's as simple as Einstein's observations that time works a bit differently at the edge of a black hole.

In the modern era, physicists are excited over some interesting equations and simulations that are starting to make the idea of warp drives seem less like science fiction and more like science.

An added benefit to the advent of the warp drive would be that it would exponentially increase the odds of humans discovering alien life.

If aliens aren't right next door, then maybe they're a few blocks over. If we can start firing probes beyond non-warp ranges by 2100, who knows what our long-range sensors will be able to detect?

Don't laugh. It's understandable if you don't think the metaverse belongs on this list. After all, it's just a bunch of cartoon avatars and bad graphics that you need a VR headset for, right?

But the metaverse of 2100 will be something different entirely. In 2022, Spotify tries to figure out what song you want to hear based on the music you've listened to in the past. In 2100, your brain-embedded AI assistant will know what song you want to hear because it has a direct connection to the area of your mind that processes sound, memory, and emotion.

The ideal metaverse would be a bespoke environment that's only indistinguishable from reality in its utopianism. In other words, you'll only know it's fake because you can control the metaverse.

While it's obvious that jacking into the Matrix could pose a multitude of risks, the ability to take a vacation from reality could have positive implications ranging from treating depression to giving people with extremely low quality of life a reason to want to continue living.

The ultimate freedom is choosing your own reality. And it's a safe bet that whoever owns the server it runs on is who's going to be in charge of the future.

See more here:
The 4 biggest science breakthroughs that Gen Z could live to see - The Next Web

Quantum computing leaders founded Zapata to accelerate the …

Christopher Savoie, Ph.D., JD

CEO & Founder

CSO, Founder & Professor at University of Toronto

CTO & Founder

Professional Service Lead & Founder

Associate Director for Quantum Science IP & Founder

Lead Research Scientist & Founder

"We founded Zapata to develop quantum algorithms and software that deliver real-world advances for applications on near-term quantum computers."

Christopher Savoie, CEO & Founder

CEO & Founder

CSO, Founder & Professor at University of Toronto

Managing Director, C Sensei Group LLC

Board Director; CEO of RealPage, Inc.

Managing Director, Comcast Ventures

Principal at Prelude Ventures

Founding CEO & Vice Chairman of GRAIL, Former SVP of Google Ads, Apps, Maps and [x]

Board Director; Chair & President Family Foundation; Retired CSO & CMO Honeywell

Partner at Pillar VC (Board Observer)

"Strong teams built around innovation in quantum algorithms are going to be the key to make these advances practical and widely available."

Alán Aspuru-Guzik, Co-Founder & CSO

Professor in the Department of Computer Science and Institute for Advanced Computer Studies at the University of Maryland

Associate Professor of Physics at the MIT Center for Theoretical Physics

Landon T. Clay Professor of Mathematics and Theoretical Science at Harvard

Professor of Quantum Physics at Freie Universität Berlin

Chair of the Zapata SAB, Associate Professor of Physics and Astronomy at Tufts University

Associate Professor of Electrical Engineering and Computer Science and Lincoln Laboratory Fellow at MIT, Director of the MIT Center for Quantum Engineering, and Associate Director of the MIT Research Laboratory of Electronics

"To realize the full promise of quantum computing will take time, science, and engineering across the board. Zapata has brought together a fantastic team of researchers who want to work with academia and industry to develop tomorrow's quantum algorithms."

Will Oliver, Associate Professor MIT

Vice President Business Development

Vice President of Engineering

Chief Orquestra Evangelist

Chief Marketing Officer

Chief Financial Officer

General Counsel

Vice President of Corporate Operations

Director, Global Channel Partnerships

Director, Quantum Solutions

Deputy General Counsel

Senior Legal Analyst

"Zapata is all about bridging the gap and helping those interested parties get into the quantum ecosystem and connect them to the right hardware partner."

Jonny Olson, Co-Founding Scientist

Associate Director of Quantum AI

Lead Quantum Software Engineer

Quantum Application Scientist

Quantum Application Scientist

Quantum Application Scientist

Quantum Application Scientist

Quantum Application Scientist

Quantum Application Scientist

Sr ML-DevOps Engineer

Quantum Software Engineer

Quantum Software Engineer

Quantum Software Engineer

Quantum Software Engineer

Cloud Engineer

Quantum Research Scientist

Quantum Research Scientist

Quantum Platform Engineering Manager

Quantum Application Scientist

Sr Quantum Platform Architect

Senior Quantum Platform Engineer

Quantum Application Scientist

Quantum Application Scientist

Quantum Software Engineer

Machine Learning Engineer

Quantum Platform Engineer

Quantum Platform Engineer

"People at Zapata come from many different backgrounds and domains. Everyone is extremely driven, working at quantum's edges, but also genuinely thoughtful and caring."

Michał Stęchły, Quantum Software Engineer

Quantum Solutions Engineer

Quantum Solution Engineer

Quantum Solution Engineer

UK/EU Business Development

Quantum Solutions Engineer

Strategic Partner Alliance Manager

Controller

Executive Assistant

People Operations Manager

Operations Administrator

Product Marketing Manager

Marketing Specialist

Marketing & Product Intern

Marketing Coordinator

Administrative Assistant

Here is the original post:
Quantum computing leaders founded Zapata to accelerate the ...

Quantum Computing | Rigetti Computing

Complex problems need powerful computing

We make it possible for everyone to think bigger, create faster, and see further. By infusing AI and machine learning, our quantum solutions give you the power to solve the world's most important and pressing problems.

When the computer is operational, five white casings envelop the machine. These cans nest inside each other and act as thermal shields, keeping everything super cold and vacuum-sealed inside.

These photon-carrying cables deliver signals to and from the chip to drive qubit operations and return the measured results.

Beneath the heat exchangers sits the mixing chamber. Inside, different forms of liquid helium, helium-3 and helium-4, separate and evaporate, diffusing the heat.

These gold plates separate cooling zones. At the bottom, they plunge to one-hundredth of a kelvin, hundreds of times as cold as outer space.

The QPU (quantum processing unit) features a gold-plated copper disk with a silicon chip inside that contains the machine's brain.

Read the rest here:
Quantum Computing | Rigetti Computing

BMW and AWS Announce Winners of Quantum Computing …

The BMW Group, in collaboration with AWS, last July called on the global quantum computing community to develop innovative quantum algorithms for four specific industrial challenges and test them on real quantum computing technologies. About 70 teams participated in the challenge and a winning team was selected for the four areas:

1. Sensor positions for automated driving functions: Accenture. Accenture's winning team tackled the problem of optimising the positioning of sensors for highly automated driving functions.

2. Simulation of material deformations: Qu&Co. The jury concluded that the quantum computing start-up Qu&Co stood out with its approach to solving partial differential equations in the field of numerical simulation.

3. Configuration optimisation of pre-series vehicles: 1QBit and NTT. The winning team from 1QBit and NTT came out on top with hybrid algorithms for solving satisfiability problems in propositional logic for optimising equipment configuration.

4. Automated quality analyses: QC Ware. The QC Ware team stood out with its approach, drawn from the field of machine learning, that can be used in image recognition in the area of quality analysis.

The BMW Group worked closely with the Amazon Quantum Solutions Lab Professional Services team, an expert group of professionals, throughout the challenge, right up to the moment when the winners were determined. AWS also provided credits for the use of Amazon Braket, enabling the development and testing of the submitted quantum algorithms. Amazon Braket provides a development environment to explore and create quantum algorithms, test them on quantum circuit simulators and run them on different quantum hardware technologies.

The jury that oversaw the challenge and ultimately decided on the winning teams also included professors from the Technical University of Munich (TUM) as well as representatives of the BMW Group and AWS. TUM is an important partner for the BMW Group for research in the field of quantum computing. The BMW Group announced the establishment of the Quantum Algorithms and Applications endowed chair at TUM back in June of this year. Algorithms close to specific use cases along the industrial value chain are being researched at the chair. The BMW Group is providing 5.1 million euros over a period of six years to fund the professorship, staff and equipment at TUM.

Quantum computing is one of the most promising future technologies in the automotive sector. It has enormous potential for research into materials, for complex optimisation problems and for the future of automated driving. The Quantum Computing Challenge once again underlines the BMW Group's leading-edge role in building a quantum ecosystem. As recently as June, the company was a founding member, along with nine other large corporations, of the Quantum Technology and Application Consortium (QUTAC). This aims to specifically accelerate the development of the technology in Germany and Europe. In November this year, the BMW Group and RWTH Aachen University jointly announced the establishment of the Quantum Information Systems endowed chair, where software and industrialisation competencies will be created to realise a quantum advantage in the medium term.

Dr Peter Lehnert, Vice President, BMW Group Research and New Technologies, Digital Car: "We at the BMW Group are convinced that future technologies such as quantum computing have the potential to make our products more desirable and sustainable. We have succeeded in reaching the global quantum computing community with our crowd-innovation approach and enthusing them about automotive use cases. We look forward to continuing to work with the winners."

The BMW Group received around 70 submissions from all over the world, from different areas such as international and national research groups, the start-up scene and established companies. The exceptionally high quality of the submissions opens up new perspectives and offers potential for innovative approaches, such as the development and refinement of new algorithms. The expert jury took into account criteria such as comprehensibility, feasibility, scalability, innovation and benefit for the BMW Group when evaluating the submitted solutions.

All 15 finalists set themselves apart with their high innovation potential and have therefore been shortlisted for future projects. The journey continues straight away for the four winners: they immediately gain the BMW Group as a customer and will be involved in the further development of the pilot projects. The company looks forward to working with these four winners.

The BMW Group Quantum Computing Challenge is structured around the Supplierthon methodology, which is the BMW Group's future-oriented supplier scouting method. It marks the company's first global crowd-innovation initiative on this scale. The crowd-innovation approach enables innovative solutions to be found within a very short time and to be validated in cooperation with the specialist departments. The challenge also gave the BMW Group invaluable insights into the status quo of the global quantum ecosystem. This knowledge is crucial in determining the future direction of research on the future technology and the long-term establishment of the market for quantum computing. The successful challenge, along with the extremely promising submissions, encourages the company to continue to look to the crowd-innovation approach in the future.

See original here:
BMW and AWS Announce Winners of Quantum Computing ...

From ethical AI to quantum networking Cisco predicts the future of technology – ITP.net

In the thick of action, Cisco has revealed the technology trends that are expected to make a significant impact in 2022 and beyond.

Commenting on the trends and predictions, Osama Al-Zoubi, CTO, Cisco Middle East and Africa, said: "Technology is always evolving and moving in exciting new directions. In a time of fast-paced digitization, we identified a range of trends and innovations our customers can expect to see over the next years."

Prediction: Ethical, responsible, and explainable AI will become a top priority

The extreme quantity of data being generated has already exceeded human scale but still needs to be processed intelligently and, in some cases, in near real-time. This scenario is where machine learning (ML) and artificial intelligence (AI) will come into their own.

The challenge is that data has ownership, sovereignty, privacy, and compliance issues associated with it. And if the AI being used to produce instant insights has inherent biases built-in, then these insights are inherently flawed.

This leads to the need for ethical, responsible, and explainable AI. The AI needs to be transparent, so everyone using the system understands how the insights have been produced. Transparency must be present in all aspects of the AI lifecycle: its design, development, and deployment.

Prediction: Data driving Edge towards whole new application development

Modern enterprises are defined by the business applications they create, connect to and use. In effect, applications, whether they are servicing end-users or are business-to-business focused or even machine-to-machine connections, will become the boundary of the enterprise.

The business interactions that happen across different types of applications will create an ever-expanding deluge of data. Every aspect of every interaction will generate additional data to provide predictive insights. With predictive insights, the data will likely gravitate to a central data store for some use cases. However, other use cases will require pre-processing of some data at the Edge, including machine learning and other capabilities.

Prediction: Future of innovation and business is tied to unlocking the power of data

Beyond enabling contextual business insights to be generated from the data, teams will be able to better automate many complex actions, ultimately getting to automated self-healing. To achieve this future state, applications must be created with an automated, observable, and API (Application Programming Interface)-first mindset with seamless security embedded from development to run-time. Organisations will have the ability to identify, inspect, and manage APIs regardless of provider or source.

Prediction: Always-on, ubiquitous and cheap internet key to future tech and social equality

There is no doubt that the trend for untethered connectivity and communications will continue. The sheer convenience of using devices wirelessly is obvious to everyone, whether nomadic or mobile.

This always-on internet connectivity will further help alleviate social and economic disparity through more equitable access to the modern economy, especially in non-metropolitan areas, helping create jobs for everyone. But this also means that if wireless connectivity is lost or interrupted, activities must not come to a grinding halt.

The future needs ubiquitous, reliable, always-on internet connectivity at low price points. A future that includes seamless internet services requires heterogeneity of access, meaning AI-augmented and seamless connectivity between every cellular and Wi-Fi generation and the upcoming LEO satellite constellations and beyond.

Prediction: Quantum networking will power a faster, more secure future

Quantum computing and security will interconnect very differently than classical communications networks, which stream bits and bytes to provide voice and data information.

Quantum technology is fundamentally based on an unexplained phenomenon in quantum physics: the entanglement between particles that enables them to share states. In the case of quantum networking, this phenomenon can be used to share or transmit information. The prospect of joining sets of smaller quantum computers together to make a very large quantum computer is enticing.

Quantum networking could enable a new type of secure connection between digital devices, making them impenetrable to hacks. As this type of foolproof security becomes achievable with quantum networking, it could lead to better fraud protection for transactions. In addition, this higher quality of secure connectivity may also be able to protect voice and data communications from any interference or snooping. All of these possibilities would re-shape the internet we know and use today.

Also read:

Alibaba: Top 10 trends that will shape the tech industry

Cisco simplifies software and services buying program with new Enterprise Agreement

Follow this link:
From ethical AI to quantum networking Cisco predicts the future of technology - ITP.net

Jet Suit Testing by the British Royal Navy and Gravity Industries – OODA Loop

Ever since Star Wars Episode VI: Return of the Jedi, when Boba Fett busts his jet suit on Jabba the Hutt's sail barge during the Battle of the Great Pit of Carkoon, well, this writer was hooked. Jet packs have since been depicted in media and sci-fi, most notably in the dystopian scenario of Spielberg's 2002 Minority Report (an adaptation of a 1956 science fiction novella by Philip K. Dick). The Guardian offers this thorough history of jet packs.

Technological fact now mirrors science fiction, as the British Royal Navy has recently been testing jet suit technology to board ships. A new video (above) was recently released by the UK-based Gravity Industries, which manufactures the jet suit technology.


OODA Loop provides actionable intelligence, analysis, and insight on global security, technology, and business issues. Our members are global leaders, technologists, and intelligence and security professionals looking to inform their decision making process to understand and navigate global risks and opportunities.

You can choose to be an OODA Loop Subscriber or an OODA Network Member. Subscribers get access to all site content, while Members get all site content plus additional Member benefits such as participation in our Monthly meetings, exclusive OODA Unlocked Discounts, discounted training and conference attendance, job opportunities, our Weekly Research Report, and other great benefits.



Now more than ever, organizations need to apply rigorous thought to business risks and opportunities. In doing so it is useful to understand the concepts embodied in the terms Black Swan and Gray Rhino. See: Potential Future Opportunities, Risks and Mitigation Strategies in the Age of Continuous Crisis

The OODA leadership and analysts have decades of experience in understanding and mitigating cybersecurity threats and apply this real-world practitioner knowledge in our research and reporting. This page on the site is a repository of the best of our actionable research as well as a news stream of our daily reporting on cybersecurity threats and mitigation measures. See: Cybersecurity Sensemaking

OODA's leadership and analysts have decades of direct experience helping organizations improve their ability to make sense of their current environment and assess the best courses of action for success going forward. This includes helping establish competitive intelligence and corporate intelligence capabilities. Our special series on the Intelligent Enterprise highlights research and reports that can accelerate any organization along their journey to optimized intelligence. See: Corporate Sensemaking

This page serves as a dynamic resource for OODA Network members looking for Artificial Intelligence information to drive their decision-making process. This includes a special guide for executives seeking to make the most of AI in their enterprise. See: Artificial Intelligence Sensemaking

From the very beginning of the pandemic we have focused on research on what may come next and what to do about it today. This section of the site captures the best of our reporting plus daily intelligence as well as pointers to reputable information from other sites. See: OODA COVID-19 Sensemaking Page.

A dynamic resource for OODA Network members looking for insights into the current and future developments in Space, including a special executive's guide to space. See: Space Sensemaking

OODA is one of the few independent research sources with experience in due diligence on quantum computing and quantum security companies and capabilities. Our practitioner's lens on insights ensures our research is grounded in reality. See: Quantum Computing Sensemaking.

In 2020, we launched the OODAcast video and podcast series designed to provide you with insightful analysis and intelligence to inform your decision making process. We do this through a series of expert interviews and topical videos highlighting global technologies such as cybersecurity, AI, quantum computing along with discussions on global risk and opportunity issues. See: The OODAcast

Read more here:
Jet Suit Testing by the British Royal Navy and Gravity Industries - OODA Loop

Turns Out Schrödinger, the Father of Quantum Physics, Was a Pedophile – Futurism

A recent investigation that resurfaced damning evidence that famed physicist Erwin Schrödinger was a pedophile is continuing to make waves in the academic community.

Schrödinger, widely cited as the father of quantum physics and perhaps best remembered for his 1935 thought experiment "Schrödinger's Cat," was revealed to be a pedophile by The Irish Times after the newspaper published a report detailing his record as a sexual predator and serial abuser.

It's a stomach-churning revelation about a researcher whose work revolutionized the study of the natural world and even led directly to today's international research frenzy into quantum computing, and which shows, once again, that even the powerful and brilliant can be monsters.

The Irish Times identified young girls whom Schrödinger became infatuated with, including a 14-year-old girl whom the physicist groomed after he became her math tutor.

Schrödinger, who died in 1961, later admitted to impregnating the girl when she was 17 and he was in his mid-forties. Horrifyingly, she then had a botched abortion that left her permanently sterile, according to the newspaper.

Perhaps most diabolically, the physicist kept a record of his abuse in his diaries, even justifying his actions by claiming he had a right to the girls due to his genius.

Walter Moore, author of the biography "Schrödinger, Life and Thought," published in 1989, said that the physicist's attitude towards women was essentially that of a male supremacist. Disgustingly, the biography seemed to downplay and even romanticize his abusive habits, and describes him as having a "Lolita complex."

Schrödinger also attempted a relationship with a different 12-year-old girl, disgustingly writing in his journal that she was among the unrequited loves of his life. However, he decided not to pursue her after a family member voiced concerns that the physicist was, you know, an unrepentant abusive predator.

In response, a petition has been launched to change the title of a lecture hall at Trinity College Dublin that's named after him.

"We can acknowledge the great mark Schrödinger has left on science through our study, and this petition does not wish to diminish the impact his lectures or ideas had on physics," the petition says. "However, it seems in bad taste that a modern college such as Trinity would honor this man with an entire building."

That's true, of course. You can recognize the contributions someone has had in their field while also acknowledging that they were an absolute scumbag.

But honoring them by naming a lecture hall, or a giant space telescope, after them is completely unnecessary.

READ MORE: How Erwin Schrödinger indulged his "Lolita complex" in Ireland [The Irish Times]

More on horrible men: James Webb Hated Gay People. Why Are We Naming a Telescope After Him?


Read this article:
Turns Out Schrödinger, the Father of Quantum Physics, Was a Pedophile - Futurism

International Business Machines : Building on Our History of Innovation for the Future of IBM – marketscreener.com

For more than a century, IBM has been rooted in the fundamental promise of technology: We believe that when we apply science to real-world problems, we can make progress - for both business and society. And as those problems have changed over time, so have we. IBM has repeatedly reinvented itself to overcome whatever obstacles stand in the way of innovation and value for our clients.

IBM scientists and engineers have been at the heart of our relentless reinvention. They have always been guided by a core principle, to deliver innovation that matters, for our company and the world.

Our commitment to research as part of our business model means we will continue to create the technologies that our clients and the world rely upon. For example, we have led US companies for decades in the number of patents received annually. Today, it was announced that IBM has achieved this milestone for 29 years in a row: IBM's total of more than 8,500 patents led the IFI Claims Patent Service 2021 rankings.

We are proud of this accomplishment and our leadership. However, the number of patents we receive has never told the full story of the innovation we drive. Our priority has always been leading the frontiers of computing and its relationship to business, science, professions, and society.

I believe that today, more than ever, we need innovation to meet the demands of many of the major challenges of our time - from models for sustainable growth, to addressing future pandemics and climate change, to enabling energy and food security. To address them, we need faster discovery, open collaboration, efficient problem solving, and the ability to push science and business into new frontiers.

This future will be powered by a blend of high-performance computing, AI, and quantum computing, all integrated through the hybrid cloud. The confluence of these technologies represents a step change in computing, and the outcomes will surpass anything we've seen before. Together, these advancements can exponentially alter the speed and scale at which we can uncover solutions to complex problems. We've come to call this accelerated discovery.


But this will not happen in a vacuum. Strong innovation is built on a collaborative ecosystem, a commitment to long-term investment in hard tech challenges and fundamental materials, and the implementation of an open approach.

We have a long history of putting these principles into practice, and it's in this spirit we undertook some of the most daunting hard technology challenges in 2021 - and delivered on them.

To name just a few: we worked with our partners to demonstrate the first 2 nm nanosheet technology for semiconductors, which will support up to 50 billion transistors on a chip the size of a fingernail and offer enormous gains in efficiency. We also collaborated with Samsung on the successful prototype of a chip that defies conventional semiconductor design, and lays the groundwork to achieve energy density and performance levels previously thought unattainable.

And as we lead the quest to reach practical and large-scale quantum computing, we stayed true to the ambitious roadmap we laid out in 2020 and delivered Eagle, our first 127-qubit quantum processor, which will be critical to growing the nascent quantum industry IBM is pioneering. We also previewed the design for IBM Quantum System Two, our next-generation system that will house future quantum processors, and introduced Quantum Serverless, a new programming model for leveraging quantum and classical resources.

To continue to realize a future marked by fundamental technology progress and the exploration of new scientific boundaries, we are deepening our commitment to this approach.

Building open communities for innovation

As part of our strategy, we are doubling down on our already robust and long-standing commitment to open communities. Innovation can emerge from anywhere, from a tech giant or a disruptive startup. In software, the growth of open source has redefined where innovation can come from, and how it is monetized. IBM has a long history in open source, and that continues today. Our pioneering work in serverless computing, which is quickly becoming the leading platform for the hybrid cloud industry because of the significant growth of Red Hat, is just one example of this.

We will also expand our focus to grow communities of innovation. The most successful technologies and innovations are often found when complementary institutions work together. To take one example among many, our collaboration with the Cleveland Clinic, the Cleveland Clinic + IBM Discovery Accelerator, will bring together IBM's technology and expertise in hybrid cloud, AI, and quantum computing to help Cleveland Clinic discover solutions to pressing issues around public health, advance pathogen research, and foster the next-gen tech workforce for healthcare.

These sorts of collaborations will help technology to solve truly profound problems, and we hope to do so in partnership with other institutions adopting our technology, including Fraunhofer-Gesellschaft, Germany's largest research institution; the Hartree Centre, a major AI and high-performance computing research facility in the UK; and Japan's University of Tokyo and Keio University. Worldwide, we will continue to forge partnerships with the broader scientific community as we look to accelerate the pace of discovery.

Pushing discovery beyond patent filings

Moving forward, we're strengthening our companywide approach to focus our innovation efforts around the areas that matter most for our business and for society at large. This will include hybrid cloud, AI, quantum computing, systems and semiconductors, and security.

We believe these areas will have the most impact on our clients, industries, and the world. We also believe they're the ones with the greatest potential for ecosystem collaboration.

While IBM will remain an intellectual property powerhouse with one of the strongest US patent portfolios, as part of our heightened focus moving forward, we'll also take a more selective approach to patenting. We are proud of our decades-long history of topping the US patents chart, but in this new era, our position as the recipient of the most patents in any given year will not be a priority. Instead, our focus will be to prioritize growing these key technology areas of our company.

The problems the world is facing today require us to work faster than ever before. We see it as our duty to catalyze scientific progress by taking the cutting-edge technologies we're working on, scaling them, and deploying them with partners across every industry.

Innovation is the heart and soul of IBM and serves as the engine to make our clients and the world work better. We made enormous strides in the last year, and we plan to achieve even more in 2022.

Read the rest here:
International Business Machines : Building on Our History of Innovation for the Future of IBM - marketscreener.com

What is Quantum Computing? | IBM

Let's look at an example that shows how quantum computers can succeed where classical computers fail:

A supercomputer might be great at difficult tasks like sorting through a big database of protein sequences. But it will struggle to see the subtle patterns in that data that determine how those proteins behave.

Proteins are long strings of amino acids that become useful biological machines when they fold into complex shapes. Figuring out how proteins will fold is a problem with important implications for biology and medicine.

A classical supercomputer might try to fold a protein with brute force, leveraging its many processors to check every possible way of bending the chemical chain before arriving at an answer. But as the protein sequences get longer and more complex, the supercomputer stalls. A chain of 100 amino acids could theoretically fold in any one of many trillions of ways. No computer has the working memory to handle all the possible combinations of individual folds.

Quantum algorithms take a new approach to these sorts of complex problems -- creating multidimensional spaces where the patterns linking individual data points emerge. In the case of a protein folding problem, that pattern might be the combination of folds requiring the least energy to produce. That combination of folds is the solution to the problem.

Classical computers can not create these computational spaces, so they can not find these patterns. In the case of proteins, there are already early quantum algorithms that can find folding patterns in entirely new, more efficient ways, without the laborious checking procedures of classical computers. As quantum hardware scales and these algorithms advance, they could tackle protein folding problems too complex for any supercomputer.
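To see why brute force stalls, here is a toy sketch (our illustration, not an IBM algorithm): counting self-avoiding walks on a 2D lattice, a common stand-in for protein conformations, shows how quickly the search space explodes as the chain gets longer.

```python
# Toy model: each conformation of an n-step chain is a self-avoiding
# walk on a 2D grid. Counting them shows exponential growth, which is
# why checking every possible fold one by one quickly becomes hopeless.
def count_conformations(steps, pos=(0, 0), visited=None):
    if visited is None:
        visited = {pos}
    if steps == 0:
        return 1
    total = 0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nxt = (pos[0] + dx, pos[1] + dy)
        if nxt not in visited:  # the chain cannot cross itself
            total += count_conformations(steps - 1, nxt, visited | {nxt})
    return total

for n in (4, 8, 12):
    print(n, count_conformations(n))  # roughly 2.6x more per added step
```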


Read the original post:
What is Quantum Computing? | IBM

What Is Quantum Computing? | NVIDIA Blog

Twenty-seven years before Steve Jobs unveiled a computer you could put in your pocket, physicist Paul Benioff published a paper showing it was theoretically possible to build a much more powerful system you could hide in a thimble: a quantum computer.

Named for the subatomic physics it aimed to harness, the concept Benioff described in 1980 still fuels research today, including efforts to build the next big thing in computing: a system that could make a PC look in some ways quaint as an abacus.

Richard Feynman, a Nobel Prize winner whose wit-laced lectures brought physics to a broad audience, helped establish the field, sketching out how such systems could simulate quirky quantum phenomena more efficiently than traditional computers. So, what is quantum computing?

Quantum computing is a sophisticated approach to making parallel calculations, using the physics that governs subatomic particles to replace the more simplistic transistors in todays computers.

Quantum computers calculate using qubits, computing units that can be on, off or any value between, instead of the bits in traditional computers that are either on or off, one or zero. The qubit's ability to live in the in-between state, called superposition, adds a powerful capability to the computing equation, making quantum computers superior for some kinds of math.

Using qubits, quantum computers could buzz through calculations that would take classical computers a loooong time if they could even finish them.

For example, today's computers use eight bits to represent any number between 0 and 255. Thanks to features like superposition, a quantum computer can use eight qubits to represent every number between 0 and 255, simultaneously.

It's a feature like parallelism in computing: All possibilities are computed at once rather than sequentially, providing tremendous speedups.

So, while a classical computer steps through long division calculations one at a time to factor a humongous number, a quantum computer can get the answer in a single step. Boom!
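As a concrete sketch of that eight-qubit claim (plain NumPy on a classical machine; our illustration, not an NVIDIA example), applying a Hadamard gate to each of eight qubits spreads a register across all 256 basis states at once:

```python
import numpy as np

# One-qubit Hadamard gate; applied to |0> it yields (|0> + |1>)/sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Build the 8-qubit transform as a Kronecker product of eight Hadamards.
H8 = H
for _ in range(7):
    H8 = np.kron(H8, H)

state = np.zeros(2**8, dtype=complex)
state[0] = 1.0                     # start in |00000000>
state = H8 @ state                 # equal superposition of all 256 states

print(np.allclose(state, 1 / 16))  # True: every amplitude is 1/sqrt(256)
```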

That means quantum computers could reshape whole fields, like cryptography, that are based on factoring what are today impossibly large numbers.

That could be just the start. Some experts believe quantum computers will bust through limits that now hinder simulations in chemistry, materials science and anything involving worlds built on the nano-sized bricks of quantum mechanics.

Quantum computers could even extend the life of semiconductors by helping engineers create more refined simulations of the quantum effects they're starting to find in today's smallest transistors.

Indeed, experts say quantum computers ultimately won't replace classical computers; they'll complement them. And some predict quantum computers will be used as accelerators, much as GPUs accelerate today's computers.

Don't expect to build your own quantum computer like a DIY PC with parts scavenged from discount bins at the local electronics shop.

The handful of systems operating today typically require refrigeration that creates working environments just north of absolute zero. They need that computing arctic to handle the fragile quantum states that power these systems.

In a sign of how hard constructing a quantum computer can be, one prototype suspends an atom between two lasers to create a qubit. Try that in your home workshop!

Quantum computing takes nano-Herculean muscles to create something called entanglement. That's when two or more qubits exist in a single quantum state, a condition sometimes measured by electromagnetic waves just a millimeter wide.

Crank up that wave with a hair too much energy and you lose entanglement or superposition, or both. The result is a noisy state called decoherence, the equivalent in quantum computing of the blue screen of death.
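Entanglement itself can be sketched with ordinary linear algebra (a hedged toy model, not anything from NVIDIA's stack): a Hadamard followed by a CNOT turns two independent qubits into a Bell pair whose measurement outcomes always agree.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

# CNOT with the first qubit as control: flips the second qubit
# whenever the first is |1> (basis order |00>, |01>, |10>, |11>).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.kron(H @ ket0, ket0)  # put the first qubit in superposition
bell = CNOT @ state              # entangle the pair

print(np.round(bell.real, 3))    # [0.707 0.    0.    0.707]
# Only |00> and |11> carry amplitude: the two qubits' fates are linked.
```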

A handful of companies such as Alibaba, Google, Honeywell, IBM, IonQ and Xanadu operate early versions of quantum computers today.

Today they provide tens of qubits. But qubits can be noisy, making them sometimes unreliable. To tackle real-world problems reliably, systems need tens or hundreds of thousands of qubits.

Experts believe it could be a couple decades before we get to a high-fidelity era when quantum computers are truly useful.

Predictions of when we will reach so-called quantum computing supremacy, the time when quantum computers execute tasks classical ones can't, are a matter of lively debate in the industry.

The good news is the world of AI and machine learning put a spotlight on accelerators like GPUs, which can perform many of the types of operations quantum computers would calculate with qubits.

So, classical computers are already finding ways to host quantum simulations with GPUs today. For example, NVIDIA ran a leading-edge quantum simulation on Selene, our in-house AI supercomputer.

NVIDIA announced in the GTC keynote the cuQuantum SDK to speed quantum circuit simulations running on GPUs. Early work suggests cuQuantum will be able to deliver orders of magnitude speedups.

The SDK takes an agnostic approach, providing a choice of tools users can pick to best fit their approach. For example, the state vector method provides high-fidelity results, but its memory requirements grow exponentially with the number of qubits.

That creates a practical limit of roughly 50 qubits on today's largest classical supercomputers. Nevertheless, we've seen great results using cuQuantum to accelerate quantum circuit simulations that use this method.
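The arithmetic behind that roughly-50-qubit ceiling is easy to check (a back-of-envelope sketch under the common assumption of 16 bytes per complex128 amplitude):

```python
# A full state vector of n qubits stores 2**n complex amplitudes,
# so memory doubles with every qubit added.
BYTES_PER_AMPLITUDE = 16  # complex128

for n in (30, 40, 50):
    gib = 2**n * BYTES_PER_AMPLITUDE / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")

# 30 qubits: 16 GiB          -- a workstation
# 40 qubits: 16,384 GiB      -- a large cluster
# 50 qubits: 16,777,216 GiB  -- ~16 PiB, supercomputer territory
```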

Researchers from the Jülich Supercomputing Centre will provide a deep dive on their work with the state vector method in session E31941 at GTC (free with registration).

A newer approach, tensor network simulations, uses less memory and more computation to perform similar work.
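The idea can be sketched with a generic contraction in NumPy (einsum here is only an analogy; cuQuantum's actual GPU API is not shown): the circuit is kept as a network of small tensors contracted over shared indices, so the full 2^n state vector never has to be materialized.

```python
import numpy as np

# Three small rank-3 tensors joined in a ring; each shared index is a
# "bond" in the network. Contracting pairwise keeps intermediates small.
rng = np.random.default_rng(0)
A = rng.random((4, 4, 4))
B = rng.random((4, 4, 4))
C = rng.random((4, 4, 4))

# optimize=True lets NumPy choose a pairwise contraction order instead
# of forming the full joint tensor in one shot.
result = np.einsum('ijk,kli,jlm->m', A, B, C, optimize=True)
print(result.shape)  # (4,)
```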

Using this method, NVIDIA and Caltech accelerated a state-of-the-art quantum circuit simulator with cuQuantum running on NVIDIA A100 Tensor Core GPUs. It generated a sample from a full-circuit simulation of the Google Sycamore circuit in 9.3 minutes on Selene, a task that 18 months ago experts thought would take days using millions of CPU cores.

"Using the Cotengra/Quimb packages, NVIDIA's newly announced cuQuantum SDK, and the Selene supercomputer, we've generated a sample of the Sycamore quantum circuit at depth m=20 in record time, less than 10 minutes," said Johnnie Gray, a research scientist at Caltech.

"This sets the benchmark for quantum circuit simulation performance and will help advance the field of quantum computing by improving our ability to verify the behavior of quantum circuits," said Garnet Chan, a chemistry professor at Caltech whose lab hosted the work.

NVIDIA expects the performance gains and ease of use of cuQuantum will make it a foundational element in every quantum computing framework and simulator at the cutting edge of this research.

Sign up to show early interest in cuQuantum here.

View original post here:
What Is Quantum Computing? | NVIDIA Blog