Postal censorship – Wikipedia

Postal censorship is the inspection or examination of mail, most often by governments. It can include opening, reading and total or selective obliteration of letters and their contents, as well as covers, postcards, parcels and other postal packets. Postal censorship takes place primarily but not exclusively during wartime (even though the nation concerned may not be at war, e.g. Ireland during 1939–1945) and periods of unrest, and occasionally at other times, such as periods of civil disorder or of a state of emergency. Both covert and overt postal censorship have occurred.

Historically, postal censorship is an ancient practice; it is usually linked to espionage and intelligence gathering. Both civilian mail and military mail may be subject to censorship, and often different organisations perform censorship of these types of mail. In 20th-century wars the objectives of postal censorship encompassed economic warfare, security and intelligence.

The study of postal censorship is a philatelic topic of postal history.

Military mail is not always censored by opening or reading the mail, but this is much more likely during wartime and military campaigns. The military postal service is usually separate from civilian mail and is usually totally controlled by the military. However, both civilian and military mail can be of interest to military intelligence, which has different requirements from civilian intelligence gathering. During wartime, mail from the front is often opened and offending parts blanked or cut out, and civilian mail may be subject to much the same treatment.

Prisoner-of-war and internee mail is also subject to postal censorship, which is permitted under Articles 70 and 71 of the Third Geneva Convention (1929 and 1949). It is frequently subjected to both military and civil postal censorship because it passes through both postal systems.

Until recent years, the monopoly for carrying civilian mails has usually been vested in governments,[1][2] and that has facilitated their control of postal censorship. The type of information obtained from civilian mail is different from that likely to be found in military mail.[citation needed]

Throughout modern history, various governments, usually during times of war, would inspect mail coming into or leaving the country so as to prevent an enemy from corresponding with unfriendly entities within that country.[3]:22 There exist also many examples of prisoner of war mail from these countries which was also inspected or censored. Censored mail can usually be identified by various postmarks, dates, postage stamps and other markings found on the front and reverse side of the cover (envelope). These covers often have an adhesive seal, usually bearing special ID markings, which were applied to close and seal the envelope after inspection.[3]:55–56

During the years leading up to the American Revolution, the British monarchy in the American colonies manipulated the mail and newspapers sent between the various colonies in an effort to prevent them from being informed and from organizing with each other. Often mail would be outright destroyed.[4][5]

During the American Civil War both the Union and Confederate governments enacted postal censorship. The number of Union and Confederate soldiers in prisoner of war camps would reach an astonishing one and a half million men. The prison population at the Andersonville Confederate POW camp alone reached 45,000 men by the war's end. Consequently, there was much mail sent to and from soldiers held in POW installations. Mail going to or leaving prison camps in the North and South was inspected both before and after delivery. Mail crossing enemy lines was only allowed at two specific locations.[6][7][8]

In Britain, the General Post Office was formed in 1657, and soon evolved a "Secret Office" for the purpose of intercepting, reading and deciphering coded correspondence from abroad. The existence of the Secret Office was made public in 1742 when it was found that in the preceding 10 years the sum of £45,675 (equivalent to £6,332,000 in 2016[9]) had been secretly transferred from the Treasury to the General Post Office to fund the censorship activities.[10] In 1782 responsibility for administering the Secret Office was transferred to the Foreign Secretary and it was finally abolished by Lord Palmerston in 1847.

During the Second Boer War the British implemented a well-planned censorship that left them well experienced when the Great War started less than two decades later.[11] Initially offices were in Pretoria and Durban, and later throughout much of the Cape Colony, as well as POW censorship[12] with camps in Bloemfontein, St Helena, Ceylon, India and Bermuda.[13]

The British Post Office Act 1908 allowed censorship upon the issue of warrants by a secretary of state in both Great Britain and the Channel Islands.[14]

Censorship played an important role in the First World War.[15] Each country involved utilized some form of censorship. This was a way to sustain an atmosphere of ignorance and give propaganda a chance to succeed.[15] In response to the war, the United States Congress passed the Espionage Act of 1917 and Sedition Act of 1918. These gave the government broad powers to censor the press through the use of fines, and later to punish any criticism of the government, the army, or the sale of war bonds.[15] The Espionage Act laid the groundwork for the establishment of a Central Censorship Board which oversaw censorship of communications including cable and mail.[15]

Postal control was eventually introduced in all of the armies, to detect the disclosure of military secrets and to gauge the morale of soldiers.[15] In Allied countries, civilians were also subjected to censorship.[15] French censorship was modest and more targeted compared to the sweeping efforts made by the British and Americans.[15] In Great Britain, all mail was sent to censorship offices in London or Liverpool.[15] The United States sent mail to several centralized post offices as directed by the Central Censorship Board.[15] American censors would only open mail related to Spain, Latin America or Asia, as their British allies were handling other countries.[15] In one week alone, the San Antonio post office processed more than 75,000 letters, of which they controlled 77 percent (and held 20 percent for the following week).[15]

Soldiers on the front developed strategies to circumvent censors.[16] Some would go on "home leave" and take messages with them to post from a remote location.[16] Those writing postcards in the field knew they were being censored, and deliberately held back controversial content and personal matters.[16] Those writing home had a few options including free, government-issued field postcards, cheap, picture postcards, and embroidered cards meant as keepsakes.[17] Unfortunately, censors often disapproved of picture postcards.[17] In one case, French censors reviewed 23,000 letters and destroyed only 156 (although 149 of those were illustrated postcards).[17] Censors in all warring countries also filtered out propaganda that disparaged the enemy or approved of atrocities.[15] For example, German censors prevented postcards with hostile slogans such as "Jeder Stoß ein Franzos" ("Every hit a Frenchman") among others.[15]

Following the end of World War I, there were some places where postal censorship was still practiced. During 1919 it was operating in Austria, Belgium, Canada, the German Weimar Republic and the Soviet Union, as well as other territories.[18]:126–139 During the Irish Civil War the IRA raided mail in the newly independent state, marking it as censored and sometimes opening it. The National Army also opened mail, and censorship of irregulars' mail in prisons took place.[19]

Other conflicts during which censorship existed included the Third Anglo-Afghan War, the Chaco War,[18]:138 the Italian occupation of Ethiopia (1935–36),[20] and especially the Spanish Civil War of 1936–1939.[18]:141[21]

During World War II, both the Allies and Axis instituted postal censorship of civil mail. The largest organisations were those of the United States, though the United Kingdom employed about 10,000 censor staff while Ireland, a small neutral country, only employed about 160 censors.[22]

Both blacklists and whitelists were employed: blacklists flagged suspicious mail for observation, while whitelists listed those whose mail was exempt from censorship.[22]

British censorship was primarily based in the Littlewoods football pools building in Liverpool with nearly 20 other censor stations around the country.[23] Additionally the British censored colonial and dominion mail at censor stations in the following places:

In the United States censorship was under the control of the Office of Censorship whose staff count rose to 14,462 by February 1943 in the censor stations they opened in New York, Miami, New Orleans, San Antonio, Laredo, Brownsville, El Paso, Nogales, Los Angeles, San Francisco, Seattle, Chicago, San Juan, Charlotte Amalie, Balboa, Cristóbal, David, Panama and Honolulu.

The United States blacklist, known as U.S. Censorship Watch List, contained 16,117 names.[24]

Neutral countries such as Ireland,[22] Portugal and Switzerland also censored mail even though they were not directly involved in the conflict.

Following the end of hostilities in Europe, Germany was occupied by the Allied Powers in zones of control. Censorship of mail that had been impounded during the Allies' advances, when postal services were suspended, took place in each zone, though by far the least commonly seen mail is from the French Zone.[25]:78 When most of the backlog had been cleared, regular mail was controlled, as it also was in occupied Austria.[25]:101–107 Soviet zone mail is considered scarce.[25]:92, 109

In the German Democratic Republic, the Stasi, established in 1950, were responsible for the control of incoming and outgoing mail; at their height of operations, their postal monitoring department controlled about 90,000 pieces of mail daily.[26]

Several small conflicts saw periods of postal censorship, such as the 1948 Palestine war,[27] the Korean War (1950–1953), Poland (1980s), or even the 44-day Costa Rican Civil War in 1948.[28]



Art Exhibit Hits Back at Censorship, Abductions of Dissidents – Khaosod English

BANGKOK – In "Conflicted Visions Again," six artists are gathered to give one message: urging viewers to question the state of freedom in Thailand.

The exhibition, held at WTF Gallery and Café, is a tour of issues considered by many to be sensitive in Thai society – issues the media is often discouraged from asking bold questions about – from censorship of discussions about the Royal Family to the mysterious disappearances of anti-monarchy activists.

Inside the gallery, there is a TV showing nothing except an English text saying, "program will resume shortly," the same words used in real life to black out foreign news broadcasts that may touch on the monarchy.

"I think it's strange. It makes me feel that they want to hide the truth," artist Manit Sriwanichpoom said. "It makes issues about the monarchy sound scary."

Another installation, called "Thailand's New Normal," questions the balance of public safety and civil liberties during the coronavirus pandemic, an era in which the government and its regime of revered doctors can impose any policy on the public without any debate.

"'Do you want health or freedom' seems to be the choice given to the public by the authorities, without anything in the middle," said Prakit Kobkjwattana, the artist responsible for the piece.

"I think this new normal thing is a coup," Prakit said. "Do I really have to really choose?"

The abduction and murder of anti-monarchy activists who fled overseas is discussed in "Iconoclastor Stickers & Military Track Down," by Pisitakun Kuantalaeng.

Images of political dissidents who disappeared in the neighboring countries of Laos and Cambodia are represented as colorful stickers at the exhibit. It seems to imply someone out there is making a collection of those individuals.

The latest victim in the string of unexplained disappearances was Wanchalearm Satsaksit, seen smiling in his yellow sticker here. The bespectacled activist was kidnapped on June 4 in front of his residence in Phnom Penh, where he had lived in exile since 2014.

Although the tone of the exhibition may slant toward anti-government activism, curator Somrak Sila said the artists were picked from different political factions. Somrak said she did it once just before the May 2014 coup, and she succeeded again.

She lamented that Thailand's political division over the past decade means many artists have refused to participate in productive and civil debate, not to mention holding a joint art exhibit. The gallery wants to show it's time to move beyond colors and sides.

"It no longer works for them to mudsling one another. The approach should be changed," Somrak said. "If we do not unite, we can't fight the powers that be."

"Conflicted Visions Again" runs until August 23. It opens every day except Monday from 4pm to 10pm at WTF Gallery and Café, Sukhumvit 51. Call 02-662-6246 for details.


The Best 10 Open Source Software Examples Of 2020

Open source software can be used, modified and distributed by anyone who has the knowledge to work with code.

Businesses are constantly searching for digital solutions to help them run more efficiently and turn bigger profits faster.

And one common term they may or may not have heard of that can further this agenda is open-source software.

In this article, you will find out what open source software is and get familiar with the most in-demand types.

What's more, we will also discover the best open source software examples of 2020.

The term open source was introduced in the late 1990s by The Open Source Initiative (OSI).

Open-source software is, essentially, a software solution whose code is publicly available and free for its users and anyone else to use, modify and distribute in various formats.

Open-source software solutions don't always solve the same problems. In fact, most open-source software is geared towards different niche solutions.

However, because it is accessible to the general public, it is typically very easy to obtain and incorporate into digital solutions.

Now, it is important to remember that just because open source software is free to use, it doesn't mean that just anyone can put it to work. Open-source software refers to lines of code (and their variations) that are made available.

So, depending on the type of this software, you'll likely still need a qualified software developer to inspect that code, customize the software to your specifications, and integrate it into your current operations.

Very often "free software" is used as a synonym for open source software.

Though similar, these two are different types of software.

Both of them offer similar licenses but are rooted in different ideologies.

Richard Stallman introduced the concept of free software back in the 1980s. Its main principle is that all users have the right to run, copy, share, study, change, and improve the software.

Open source software, on the other hand, was introduced in the late 1990s by a group of individuals as a reaction to the limitations of free software.

The main difference they presented was that they shifted the emphasis from the ethics of freedom to practical value. And, they also brought other pragmatic benefits like transparency and cost savings.

All free software qualifies as open source software. However, not all open source software qualifies as free software.

For instance, Open Watcom is an integrated development environment whose license does not allow modified versions to be used privately.

Open source software licenses allow users and commercial companies to run, modify and share different sets of software code.

In other words, these licenses are legal contracts between the creator and the user. They imply that anyone with a license can use the software under specific conditions.

They are mostly available free of charge and sometimes may have restrictions.

For example, users may be required to preserve the names of the authors, or they may only be allowed to redistribute the licensed software under the same license.

There are over 200 licenses of this type.

Here are the most popular:

Before you start using any open source code, understand the types of licenses and their rules to stay compliant.
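One practical way to keep track of declared licenses in a codebase is to look for SPDX license identifiers, a widely used convention for tagging source files. The snippet below is only a minimal sketch: the directory name and the allow-list of licenses are illustrative assumptions, not an actual compliance policy.

```python
# Minimal sketch: scan a source tree for SPDX license identifiers and flag
# files whose declared license is not on an example allow-list.
import re
from pathlib import Path

ALLOWED = {"MIT", "Apache-2.0", "BSD-3-Clause"}   # example policy, not universal advice
SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")

def scan(root: str) -> None:
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        match = SPDX_RE.search(text)
        if match is None:
            print(f"{path}: no SPDX identifier found")
        elif match.group(1) not in ALLOWED:
            print(f"{path}: license {match.group(1)} needs review")

if __name__ == "__main__":
    scan("src")   # assumed directory name
```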


Here are some of the most popular types of open source software:

CRM (customer relationship management) software allows companies to manage customer interactions and meet their requirements more easily.

In other words, it helps businesses improve customer care which is essential in boosting client satisfaction and bringing profitability.

This software makes it possible to stay organized and boost your productivity as well. There are different open source examples for CRM and they are usually free to download.

However, they do require technical ability to use and are customizable to your needs.

The best free open source CRM solutions in 2020 are:

These top three software examples for CRM also offer paid versions that come with an extended list of features. You can visit their websites for more info.

Open source project management software can be of great assistance in keeping track of assignments and tasks.

They allow you to manage different projects at the same time and stay organized.

Most of the open source project management tools on the market are free and offer paid versions as well.

Here are the top three of them:

Open source project management software is important in enhancing business performance, since it makes collaboration easier and delegating tasks simpler.

Most of the open source video games are free to use and modify. Developers and game designers can freely share them across platforms.

Many of these games are also incorporated in Linux distributions by default. And, users can download and install the more popular ones on other platforms like Mac OS and Windows.

Some of the open source video games may be under restrictive licenses as well.

Here are the best open source software examples of video games in 2020:

Blockchain open source software is software that users run to record transactions between two parties.

Thus, every time someone makes a transaction, the information is documented on a shared, spreadsheet-like ledger to which all the participants have access.

However, its records cannot be modified, and users must agree via consensus to add data to the platform.

What's great about this software, on the other hand, is that it is secure.
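The security described above comes largely from chaining records together with cryptographic hashes, so that changing an old entry breaks every later one. The sketch below illustrates that hash-chaining idea with Python's standard hashlib; it is a toy model, not any particular blockchain's implementation, and the transaction contents are made up.

```python
# Toy illustration of hash chaining: each block stores the hash of the
# previous block, so editing an old record invalidates every later link.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "tx": "genesis", "prev": "0" * 64}]
for i, tx in enumerate(["Alice pays Bob 5", "Bob pays Carol 2"], start=1):
    chain.append({"index": i, "tx": tx, "prev": block_hash(chain[-1])})

# Verification: recompute each link; tampering with an earlier block fails here.
ok = all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("chain valid:", ok)

chain[1]["tx"] = "Alice pays Bob 500"   # tamper with history
ok = all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("chain valid after tampering:", ok)
```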

Blockchain software mainly targets the financial sector. But it is also widely used by eCommerce businesses, in online voting, e-governance, etc.

These are the most popular blockchain open source software examples of 2020:


Mozilla Firefox is a customizable internet browser and a free open source software. It offers thousands of plugins that are accessible with a single click of your mouse.

The platform holds 4.39% of the worldwide browser market share and it is available for Android, iOS, Windows and Linux.

According to CNET, Mozilla reshaped the technology industry and fanned the flames of open source software that changed the way social networks and operating systems function.

LibreOffice is a complete office suite that offers presentations, documents, spreadsheets and databases.

Unlike Microsoft Office, which is not accessible for everyone due to its pricing model, LibreOffice is totally free.

To support it, its users can make donations when they download. So, it has a huge community of contributors.

It is available for Mac, Linux and Windows, and it also has a live chat and a forum you can turn to when searching for help.

Another of the best open source software examples that is worth mentioning is the photo editing tool GIMP.

It offers similar features to some of the expensive tools on the market, including various filters and effects, and yet it is free.

GIMP is available across different platforms including Windows and Linux, and it has various third-party plugins and customization options.

Plenty of illustrators, graphic designers and photographers use it to improve their pictures and enhance their work.

VLC Media Player is one of the most popular open source software examples that you can use for free.

This multimedia player is used for video, media and audio files and it plays discs, webcams, streams and devices. Most of the users use it for streaming podcasts as well.

It allows you to optimize your audio and video files for a particular hardware configuration and also offers a plethora of extensions and skins which allows you to create customized designs.

What's more, it runs on different platforms such as Android, Mac OS X, Linux, Windows, iOS and more.

[Image source: Linux]

According to a Stack Overflow survey, 83.1% of developers claimed that Linux is the most wanted platform.

Linux is one of the most user-friendly open source operating systems on the market. It is most commonly used on Android devices and desktops.

What makes this operating system different from the others is that it costs nothing and it is incredibly customizable.

Most companies also choose it because it is secure and because of the excellent support its community offers.

Blender is another of the best open source software examples of 2020.

It is a 3D graphics and animation tool that supports motion tracking, simulation, animation, video editing, rendering, modeling and much more.

It also offers a set of modeling tools and features including real-time viewport preview, multi-resolution, and support for planar tracking and tripod solvers.

GNU Compiler Collection is a collection of compilation tools for software development in the C, C++, Ada, Fortran and other programming languages.

It provides high-quality releases regularly and works with native and cross targets.

The sources it offers are freely available via weekly snapshots as well as SVN.

Python is a common programming and scripting language used by custom software developers.

According to IEEE, it was the most popular language in 2019. In recent years, it has attracted plenty of new users because of the fast-growing field of machine learning.

It is also easy to use, which is why most developers also choose this open source software.

When talking about the best open source software examples of 2020, we shouldn't miss PHP.

It is a software development language used for creating websites and other digital platforms.

It is fast and flexible and powers some of the most popular websites around the globe including Slack and Spotify.

Shotcut is a video editor that offers powerful features including audio and webcam capture, color, text, noise, and counter generators, support of popular image formats, EDL export and much more.

It is a great tool to edit your audio and video files with and it is available for Windows, macOS and Linux.

On its website, you can also find great resources and tutorials on how to use this free open source software.

Here are the top five software development companies on the market that you can choose for your next project:

Location: Massachusetts

Notable clients: Lionbridge, Accenture, Bayer HealthCare

Website: https://www.kandasoft.com/

Kanda Software is a custom software development and quality assurance agency serving both Fortune 500 companies and dynamic startups. They have worked on more than 2000 projects and partnered with clients from around the globe.

Some of the industries they have experience in are healthcare, retail and technology.

Location: Boston

Notable clients: Level Up, Commonwealth of Massachusetts, Conformis

Website: https://www.thegnar.co/

The Gnar Company is a software consultancy that specializes in software development. It creates robust, reliable products for various industries including technology, healthcare and eCommerce.

The company has enterprise-level engineering experience and works with businesses of different sizes, from small startups to large enterprises.

Location: Colorado

Notable clients: Expedia, Xerox, Toyota

Website: https://www.itransition.com/

iTransition is a full-service software development company that helps brands bring their ideas to life. It partners with large and medium-sized companies as well as startups.


MadHive announces $50M deal with SADA to expand its Google Cloud deployment – SiliconANGLE

Digital TV advertising firm MadHive Inc. announced a five-year deal worth $50 million today with Google Cloud consultancy firm SADA Systems Inc. to expand its use of Google's cloud for launching a range of new products and services.

MadHive is already heavily reliant on Google Cloud. It uses a number of Google's cloud services, including Google BigTable, Google Kubernetes Engine, TensorFlow and Google BigQuery, to power its advertising platform, which provides companies with tools for audience forecasting, precision targeting and activation. The platform also relies on advanced cryptography to prevent fraud and increase margins for customers, and these services are also powered by Google's cloud services.

Enormous cloud deployments of this kind are a challenge to implement for even the biggest companies, which is why MadHive turned to SADA for help when it first decided to go all-in on Google in 2017. SADA, a professional services provider that helps companies with tasks such as application migrations and managing databases, was tasked with migrating MadHive's advertising platform to Google cloud so it could run at large scale with low latency while supporting a rapid development cycle and taking advantage of Google's advanced machine learning capabilities.

"SADA's first step with MadHive was analyzing the limits of the Kubernetes- and Docker-based implementation they had previously used for prototypes," Simon Margolis, director of cloud adoption at SADA, said in a statement. "We then applied our in-depth knowledge of Google Cloud to help MadHive redesign the entire platform using Google BigTable, Google Kubernetes Engine, TensorFlow, Google BigQuery and a multitude of additional Google Cloud services."

MadHive said its advertising platform now runs perfectly in Google's cloud environment, maintaining low latency and high availability for all users, even when it experiences unexpected surges in traffic. The company said it's saving around 60% in costs on cloud services with Google, thanks to SADA's efforts in building more efficient scaling and performance-optimized reads and writes.

"SADA has been instrumental in helping us through even our most nuanced and sophisticated technical needs," said MadHive Chief Scientist Aaron Brown. "With their help, we move from research to deployment, sometimes within the very same day."

MadHive says demand for its advertising platform has grown rapidly, and that it's eager to offer new products that enable sophisticated cross-screen planning and precision targeting to its customers. But those products will demand even greater artificial intelligence, machine learning and big data resources.

As such, MadHive has decided to expand its use of Google's cloud services, and once again it will rely on SADA's expertise to help.

"MadHive has incredibly aggressive technology, compute power and system performance needs, and a mandate to become the world's leading ad-tech firm," said SADA Chief Executive Officer Tony Safoian. "They've spent the past three years pushing the limits on Google Cloud and having seen a clear ROI."



Understanding the importance of Quantum Computing – Analytics Insight

Quantum computing is slowly becoming a focal point of interest among researchers and technology enthusiasts. This enticing technology promises to be an advanced version of the standard computers we use today. Unlike the binary encoding system of a conventional computer, a quantum computer is powered by superposition and entanglement. Also, developments in quantum computing translate to advances in Artificial Intelligence and machine learning. These can lead us to breakthroughs in drug discovery, cybersecurity, cryptography, robotics, and banking. A report by McKinsey & Company predicts the field of quantum computing technology will have a global market value of US$1 trillion by 2035.

In scientific parlance, quantum computing is a subfield of quantum information science. This form of computing is focused on developing computer technology based on the principles of quantum theory, which explains the behavior of energy and material on the atomic and subatomic levels. It enables the processing of massive and complex datasets more efficiently than classical computers, which rely on transistors and microchips.

Quantum systems use qubits (quantum bits) as basic units for processing information. Unlike binary values that can either be 0 or 1, a qubit is not confined to a two-state solution, but can also exist in superposition. This means a qubit can represent 0, 1, or both 0 and 1 at the same time. Therefore, it can perform many calculations in parallel, owing to the ability to pursue simultaneous probabilities through superposition and to manipulate them with magnetic fields. Because of this, quantum computers can perform exceptionally complex tasks at remarkable speed.
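A qubit state such as "30% 0 and 70% 1" can be written as a two-component vector of amplitudes whose squared magnitudes give the measurement probabilities. The short NumPy sketch below just illustrates that bookkeeping; it is not tied to any particular quantum hardware or vendor SDK.

```python
# A "30% 0 / 70% 1" qubit as an amplitude vector; squared amplitudes are probabilities.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

psi = np.sqrt(0.3) * ket0 + np.sqrt(0.7) * ket1   # superposition of 0 and 1
probs = np.abs(psi) ** 2
print("P(0) =", probs[0], "P(1) =", probs[1])      # roughly 0.3 and 0.7

# Simulate repeated measurements: each one yields 0 or 1 with those probabilities.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=10, p=probs)
print(samples)
```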

Another interesting aspect of qubits is that their superpositions can be entangled with those of other qubits via pairing, meaning their outcomes will be mathematically related even if we don't know yet what they are. So, measuring one of the qubits will instantaneously determine the state of the other one predictably. This correlation underpins proposals for quantum communication and networking.
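To make the correlated-outcomes claim concrete, the sketch below builds the two-qubit entangled state (|00> + |11>)/sqrt(2) as a four-component amplitude vector and samples joint measurements: the two bits always agree. This is a plain NumPy illustration of the mathematics, not a claim about how any real device implements entanglement.

```python
# The Bell state (|00> + |11>)/sqrt(2): measuring both qubits always gives equal bits.
import numpy as np

# Amplitudes over the basis states 00, 01, 10, 11.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
probs = np.abs(bell) ** 2                      # 0.5 for 00, 0.5 for 11

rng = np.random.default_rng(1)
outcomes = rng.choice(["00", "01", "10", "11"], size=8, p=probs)
print(outcomes)   # only "00" and "11" ever appear: the qubits are perfectly correlated
```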

Superposition and entanglement are impressive physical phenomena, but leveraging them to do computation, and generating and managing qubits, is a scientific and engineering challenge. Several companies, like IBM, Google, and Rigetti Computing, use superconducting circuits cooled to temperatures colder than deep space. For instance, IBM's Q System One is cooled to 0.015 kelvin. Others, like IonQ, trap individual atoms in electromagnetic fields on a silicon chip in ultra-high-vacuum chambers. In both cases, the goal is to isolate the qubits in a controlled quantum state.

Another challenge is that computations need to be run many times, as current qubit implementations have a high error rate. When it comes to hardware implementation, entanglement isn't easy to achieve. In many designs, only some of the qubits are entangled, so the compiler needs to be smart enough to swap bits around as necessary to help simulate a system where all the bits can potentially be entangled.

Once we overcome the hurdles in developing and designing a quantum computer, we are left with the endless possibilities that these systems can offer. In manufacturing, automobile leaders Volkswagen and Daimler are using quantum computers to simulate the chemical composition of electric-vehicle batteries to help find new ways to improve their performance.

In banking, JP Morgan is exploring the use of quantum computing in option pricing. The bank believes that quantum computing has the capability to curtail expenses and accelerate the simulations needed to compute the exact option price. In the pharmaceutical sector, companies are leveraging quantum computers to analyze and compare compounds that could lead to the creation of new drugs. This is receiving a massive uptick due to the COVID-19 pandemic. Also, quantum computing gave birth to a better crypto-security approach called quantum encryption. Quantum encryption involves sending entangled particles of light (entangled photons) over long distances in what is known as Quantum Key Distribution (QKD) to secure sensitive communications. Moreover, it is speculated that the RSA and ECC cryptographic algorithms could be broken by quantum computing in the future.
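The key-distribution step mentioned above can be sketched at a very high level: in entanglement-based QKD schemes, two parties measure shared photon pairs in independently chosen bases, publicly compare which bases they used, and keep only the rounds where the bases match, since those outcomes are then correlated and can serve as shared key bits. The toy simulation below only illustrates that classical "sifting" step under idealized assumptions (no noise, no eavesdropper, correlations represented by a single shared random bit); it is not real optics or any specific protocol implementation.

```python
# Toy sifting step of an entanglement-based QKD scheme (idealized, noise-free case).
import secrets

N = 32
alice_bases = [secrets.randbelow(2) for _ in range(N)]   # 0 or 1: two measurement bases
bob_bases   = [secrets.randbelow(2) for _ in range(N)]
outcomes    = [secrets.randbelow(2) for _ in range(N)]   # shared result when bases match

# Keep only the rounds where both parties happened to choose the same basis.
key = [bit for a, b, bit in zip(alice_bases, bob_bases, outcomes) if a == b]
print(f"kept {len(key)} of {N} rounds; shared key bits: {key}")
```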

Using quantum annealing (a type of quantum computing), one can improve the logistics industry in terms of calculating optimal routes, traffic management, fleet operations, air traffic control, and freight distribution. Further, quantum computing can improve the accuracy of weather forecasting. Director of engineering at Google Hartmut Neven says that quantum computers could help build better climate models that could give us more insight into how humans are influencing the environment. These models also help in determining what steps must be taken to prevent disasters.


Open source success has everything to do with innovation, not vendor lock-in concerns – TechRepublic

Commentary: A new survey suggests many get it wrong when they assume companies choose open source to avoid lock-in.


Every enterprise uses open source, but the reasons for doing so often vary depending on one's role within the enterprise. Anaconda, a popular data science platform with over 20 million users, surveyed its users to better understand the current state of data science adoption, including open source's role therein. Among other findings, developers value open source so they can get work done right now, while their colleagues may value the price tag or utility.

But exactly no group puts "avoiding vendor lock-in" as their first (or even fourth) consideration for using open source. Open source can help companies achieve multi-cloud strategies, but by itself open source doesn't magically make any workload portable. That's simply not how open source (or enterprise) software works.

The good news is that no one seems to be waiting around on the "avoiding lock-in" argument.


As noted in the report, the survey respondents were asked to assign a proportional value to each of five commonly-cited benefits of open source software. Of the five, "most suitable tool for my needs" and "speed of innovation" claimed the most points, with "avoiding vendor lock-in" scraping into last place (Figure A).

[Figure A: survey results. Image: Anaconda]

If you've been paying attention to open source over the years, these numbers won't be surprising. The closer the respondent is to the code itself, the more they care about the speed of innovation that open source enables, and the less they fret about lock-in. "Lock-in" is something vendors talk about--customers don't seem to obsess over it in the same way.

Don't believe me? Over the past few decades while open source has been booming, we've seen proprietary databases, ERP systems, etc. boom right alongside it. Indeed, over the 20 years I've worked for open source companies, I have almost never had a customer "vote" against lock-in with their wallets.

This is not to say that companies aren't buying into open source in a big way--they are. It's just that "no lock-in" is the puniest of reasons for doing so.

Instead, organizations have long chosen open source to save money while boosting innovation, with the latter reason by far the more compelling. You'd struggle to find companies using TensorFlow to help with their machine learning aspirations because "it's free"--they use it because it's a great way to do things like fraud detection, as PayPal has found. Others like Twitter turn to Redis not because it's free, but because it helps the company achieve dramatic scale.

And so on.

Developers, closest to the code, figured this out long ago--that's why they picked "speed of innovation" at roughly twice the rate of any other open source benefit. I recently discussed whether open source drives business innovation with Weaveworks CTO Cornelia Davis: "No one cares about lock-in if the software isn't very good. The first order of priority is that most want super innovative software." That's what open source increasingly delivers.

Disclosure: I work at AWS, but this article reflects my views, not those of my employer.



Open Source Initiative to Host Virtual State of the Source Summit, September 9-10 – WP Tavern

OSI (Open Source Initiative) is hosting a new 24-hour, virtual conference called State of the Source Summit, September 9-10. The non-profit organization plays an important role in the open source ecosystem as stewards of the Open Source Definition (OSD). OSI is responsible for reviewing and approving licenses as OSD-conformant, which indirectly helps mediate community conflicts.

As part of the organization's overall mission to educate the public on the economic and strategic advantages of open source technologies and licenses, OSI is hosting a global summit to facilitate conversations on the current state of open source software.

"We are so very excited to host our first-ever conference, with a global approach," OSI Board President Josh Simmons said. "State of the Source provides an opportunity for both the open source software community and the OSI – all those who have contributed so much – to reflect on how we got here, why we have succeeded, and what needs to happen now."

The conference will run four tracks with sessions that fall under these general groupings:

OSI has identified several example topics for each track, to guide potential presenters in writing a proposal. The first track encompasses more OSI-specific topics, such as license proliferation and license enforcement.

Projects & People includes topics that apply more broadly to communities and organizations: open source business models, sustainability, patents, and trademarks. The Principles, Policy, and Practices track is geared towards application, and example topics include things like explaining a license to your peers, learning how to select a license for your project, and compliance, compatibility, and re-licensing.

As more conferences are forced to move to a virtual format, the wider open source community has the opportunity to be more engaged in an event like State of the Source. It's a good venue for addressing non-technical issues related to the challenges facing open source maintainers and the community. The call for proposals ends July 16, and speakers will be announced August 25.



Global Open Source Software Market Projected to Reach USD XX.XX billion by 2025- Intel, Epson, IBM, Transcend, Oracle, Acquia, etc. – Market Research…

This research report studies and gauges the current market forces that shape the growth trajectory and holistic growth trends.

Aligning with the changing market scenario in the wake of the COVID-19 outbreak, this in-depth research offering shares a clear perspective of the resultant output that tends to directly impact the overall growth trajectory of the Open Source Software market.

This thoroughly compiled research output shares relevant details on the overall industry production chain amidst the COVID-19 pandemic. Besides assessing details pertaining to the production, distribution and sales value chain, this detailed research output on the Open Source Software market specifically highlights crucial developments across regions and vital countries, also lending a decisive understanding of the upcoming development scenario likely to be witnessed in the Open Source Software market in the near future.

This study covers the following key players: Intel, Epson, IBM, Transcend, Oracle, Acquia, OpenText, Alfresco, Astaro, RethinkDB, Canonical, ClearCenter, Cleversafe, Compiere, Continuent

Request a sample of this report @ https://www.orbismarketreports.com/sample-request/80775?utm_source=Pooja

In this latest research publication on the Open Source Software market, a thorough overview of the current market scenario has been portrayed, in a bid to aid market participants, stakeholders, research analysts, industry veterans and the like to borrow insightful cues from this ready-to-use market research report, thus influencing a definitive business discretion.

The aim of the report is to equip relevant players in deciphering essential cues about the various real-time market based developments, also drawing significant references from historical data, to eventually present a highly effective market forecast and prediction, favoring sustainable stance and impeccable revenue flow despite challenges such as sudden pandemic, interrupted production and disrupted sales channel in the Open Source Software market.

Access Complete Report @ https://www.orbismarketreports.com/global-open-source-software-market-growth-analysis-by-trends-and-forecast-2019-2025?utm_source=Pooja

Market segment by Type, the product can be split into: Shareware, Bundled Software, BSD (Berkeley Source Distribution)

Market segment by Application, split into: BMForum, phpBB, PHPWind

The report is targeted to offer report readers essential data favoring a seamless interpretation of the Open Source Software market. Therefore, to enable and influence a flawless market-specific business decision, aligning with the best industry practices, this specific research report on the Open Source Software market also lends a systematic rundown on vital growth-triggering elements comprising market opportunities, persistent market obstacles and challenges, also featuring a comprehensive outlook of various drivers and threats that eventually influence the growth trajectory in the Open Source Software market.

Some Major TOC Points: 1 Report Overview; 2 Global Growth Trends; 3 Market Share by Key Players; 4 Breakdown Data by Type and Application; Continued...

The report also incorporates ample understanding on numerous analytical practices such as SWOT and PESTEL analysis to source optimum profit resources in Open Source Software market.

Besides presenting a discerning overview of the historical and current market specific developments, inclined to aid a future-ready business decision, this well compiled research report on the Open Source Software market also presents vital details on various industry best practices comprising SWOT and PESTEL analysis to adequately locate and maneuver profit scope. The report in its subsequent sections also portrays a detailed overview of competition spectrum, profiling leading players and their mindful business decisions, influencing growth in the Open Source Software market.

For Enquiry before buying report @ https://www.orbismarketreports.com/enquiry-before-buying/80775?utm_source=Pooja

About Us: With unfailing market gauging skills, Orbis Market Reports has been excelling in curating tailored business intelligence data across industry verticals. Constantly thriving to expand our skill development, our strength lies in dedicated intellectuals with dynamic problem solving intent, ever willing to mold boundaries to scale heights in market interpretation.

Contact Us: Hector Costello, Senior Manager – Client Engagements, 4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A. Phone No.: USA: +1 (972)-362-8199 | IND: +91 895 659 5155


In the Summer of Software Even Oracle Is Winning – InvestorPlace

The year 2020 will be remembered on Wall Street as the Summer of Software. Because software can be made from home, delivers productivity, and scales in the age of the cloud, every software company looks like gold. Even Oracle (NYSE:ORCL) and ORCL stock.


This shows the insanity of the current market. Oracle, a slow-growth company with a paltry dividend and only middling prospects, continues to grind higher. It's up 8% so far in 2020.

At $57 per share, a market cap of $173.4 billion, it trades at 18.4 times earnings. The 24 cent per share dividend yields 1.69%.

Oracle today claims to be a cloud platform with data centers on nearly every continent. But its footprint is small compared to Cloud Czars like Microsoft (NASDAQ:MSFT), Amazon (NASDAQ:AMZN), and Alphabet's (NASDAQ:GOOGL, NASDAQ:GOOG) Google Cloud.

A decade ago, Oracle and Microsoft were very similar and roughly the same size. Then Microsoft began making its peace with open source, while Oracle went to war. Today Microsoft's market cap of $1.61 trillion is nine times bigger than Oracle's, its revenue four times bigger.

Not only did Oracle miss the bus on cloud early in the last decade, it actively resisted the trend. It bought Sun Microsystems for its open source software, then fought to make that software closed source. It won a long legal fight with Google, but open source responded by putting projects inside foundations Oracle couldn't buy. It maintained a proprietary model that squeezed customers for profits long after that stopped making sense.

Now Oracle is fighting to defend its remaining turf and it's on the back foot. Oracle dominated the era of on-premise databases, servers installed at a company's offices. The growth is in cloud databases. Oracle insists it can win there, against companies founded by ex-Oracle employees like Salesforce.com (NYSE:CRM).

But founder Larry Ellison, now the fifth-richest person in the world, is selling his shares as fast as he can give them to himself. The share price is maintained by buybacks that are no longer fashionable. Oracle insists it can regain market share, but analysts no longer believe it.

A database is a very complicated thing. The larger it grows, the harder it is to make changes. Oracle has taken advantage of that. Its strength is with governments and large enterprises, who find it easier to keep paying Oracle's prices than tear out what they have and re-build it.

Oracle grows its bottom line by squeezing these customers, holding down expenses, and with those buybacks. ORCL stock's share count has fallen by one-quarter over the last four years.

For investors in ORCL stock, this delivers slow and steady growth. The value of Oracle marches ahead by 10% per year. The dividend has risen 60% over the last five years, from 15 cents to 24 cents per quarter.

Analysts usually compare it with SAP (NYSE:SAP), the European database company, which makes it look good. But its real competition today is from nimbler, cloud-based companies like Workday (NASDAQ:WDAY), where the comparisons aren't as good.

Oracle is a General Motors (NYSE:GM) Cadillac in a sea of Teslas (NASDAQ:TSLA).

I have spent most of my working life as a reporter, covering the business of technology. Ever since I joined InvestorPlace I have been warning readers away from ORCL stock.

You may think that, since my last warning about Oracle was on March 6 and it's up almost 20% since then, I don't know what I'm talking about. You can make a little money with Oracle. It's a safe holding, a favorite among conservative institutions.

But Oracle missed the boat on cloud, it missed the trend in open source, and its gains during this summer of software are small. That's also the best it can do.

Dana Blankenhorn has been a financial and technology journalist since 1978. His latest book is Technology's Big Bang: Yesterday, Today and Tomorrow with Moore's Law, essays on technology available at the Amazon Kindle store. Write him at danablankenhorn@gmail.com or follow him on Twitter at @danablankenhorn. As of this writing he owned shares in MSFT and AMZN.


The biggest flipping challenge in quantum computing – Science Magazine

By Adrian Cho, Jul. 9, 2020, 2:00 PM

In October 2019, researchers at Google announced to great fanfare that their embryonic quantum computer had solved a problem that would overwhelm the best supercomputers. Some said the milestone, known as quantum supremacy, marked the dawn of the age of quantum computing. However, Greg Kuperberg, a mathematician at the University of California, Davis, who specializes in quantum computing, wasn't so impressed. He had expected Google to aim for a goal that is less flashy but, he says, far more important.

Whether it's calculating your taxes or making Mario jump a canyon, your computer works its magic by manipulating long strings of bits that can be set to 0 or 1. In contrast, a quantum computer employs quantum bits, or qubits, that can be both 0 and 1 at the same time, the equivalent of you sitting at both ends of your couch at once. Embodied in ions, photons, or tiny superconducting circuits, such two-way states give a quantum computer its power. But they're also fragile, and the slightest interaction with their surroundings can distort them. So scientists must learn to correct such errors, and Kuperberg had expected Google to take a key step toward that goal. "I consider it a more relevant benchmark," he says.

If some experts question the significance of Google's quantum supremacy experiment, all stress the importance of quantum error correction. "It is really the difference between a $100 million, 10,000-qubit quantum computer being a random noise generator or the most powerful computer in the world," says Chad Rigetti, a physicist and co-founder of Rigetti Computing. And all agree with Kuperberg on the first step: spreading the information ordinarily encoded in a single jittery qubit among many of them in a way that maintains the information even as noise rattles the underlying qubits. "You're trying to build a ship that remains the same ship, even as every plank in it rots and has to be replaced," explains Scott Aaronson, a computer scientist at the University of Texas, Austin.

The early leaders in quantum computing – Google, Rigetti, and IBM – have all trained their sights on that target. "That's very explicitly the next big milestone," says Hartmut Neven, who leads Google's Quantum Artificial Intelligence lab. Jay Gambetta, who leads IBM's quantum computing efforts, says, "In the next couple of years, you'll see a series of results that will come out from us to deal with error correction."

Physicists have begun to test their theoretical schemes in small experiments, but the challenge is grand. To demonstrate quantum supremacy, Google scientists had to wrangle 53 qubits. To encode the data in a single qubit with sufficient fidelity, they may need to master 1000 of them.

The quest for quantum computers took off in 1994 when Peter Shor, a mathematician at the Massachusetts Institute of Technology, showed that such a machine – then hypothetical – should be able to quickly factor huge numbers. Shor's algorithm represents the possible factorizations of a number as quantum waves that can slosh simultaneously through the computer's qubits, thanks to the qubits' two-way states. The waves interfere so that the wrong factorizations cancel one another and the right one pops out. A machine running Shor's algorithm could, among other things, crack the encryption systems that now secure internet communications, which rely on the fact that searching for the factors of a huge number overwhelms any ordinary computer.
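The quantum part of Shor's algorithm finds the period r of f(x) = a^x mod N; turning that period into factors is ordinary classical arithmetic, gcd(a^(r/2) ± 1, N). The snippet below shows only that classical post-processing step on a toy number, with the period found by brute force rather than by a quantum computer.

```python
# Classical post-processing of Shor's algorithm on a toy example (N = 15, a = 7).
# A quantum computer would find the period r; here we simply brute-force it.
from math import gcd

N, a = 15, 7
r = next(k for k in range(1, N) if pow(a, k, N) == 1)   # period of a^x mod N (r = 4 here)

assert r % 2 == 0
factor1 = gcd(pow(a, r // 2) - 1, N)
factor2 = gcd(pow(a, r // 2) + 1, N)
print(f"period r = {r}, factors of {N}: {factor1} x {factor2}")   # 3 x 5
```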

However, Shor assumed each qubit would maintain its state so the quantum waves could slosh around as long as necessary. Real qubits are far less stable. Google, IBM, and Rigetti use qubits made of tiny resonating circuits of superconducting metal etched into microchips, which so far have proved easier to control and integrate into circuits than other types of qubits. Each circuit has two distinct energy states, which can denote 0 or 1. By plying a circuit with microwaves, researchers can ease it into either state or any combination of the two – say, 30% 0 and 70% 1. But those in-between states will fuzz out, or decohere, in a fraction of a second. Even before that happens, noise can jostle the state and alter it, potentially derailing a calculation.

Whereas an ordinary bit must be either 0 or 1, a qubit can be in any combination of 0 and 1 at the same time. Those two parts of the state mesh in a way described by an abstract angle, or phase. So the qubit's state is like a point on a globe whose latitude reveals how much the qubit is 0 and how much it is 1, and whose longitude indicates the phase. Noise can jostle the qubit in two basic ways that knock the point around the globe.

[Figure: a bit-flip error exchanges 0 and 1, flipping the qubit in latitude; a phase-flip error pushes the qubit's state halfway around the sphere in longitude. Credit: C. Bickel/Science]
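The globe picture has a standard mathematical form: a qubit state can be written as cos(θ/2)|0> + e^(iφ) sin(θ/2)|1>, where θ plays the role of latitude (how much 0 versus 1) and φ the longitude (the phase). The sketch below simply recovers those two angles from a given pair of amplitudes; it illustrates the parametrization only and is not specific to any company's hardware.

```python
# Recover the "latitude" (theta) and "longitude" (phi) of a qubit state
# written as cos(theta/2)|0> + exp(i*phi)*sin(theta/2)|1>.
import numpy as np

def bloch_angles(alpha, beta):
    # Normalize, then remove the irrelevant global phase so alpha is real and non-negative.
    norm = np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    alpha, beta = alpha / norm, beta / norm
    global_phase = np.angle(alpha) if abs(alpha) > 1e-12 else 0.0
    alpha, beta = alpha * np.exp(-1j * global_phase), beta * np.exp(-1j * global_phase)
    theta = 2 * np.arccos(np.clip(abs(alpha), 0.0, 1.0))   # latitude: 0 at |0>, pi at |1>
    phi = float(np.angle(beta)) if abs(beta) > 1e-12 else 0.0   # longitude: the phase
    return theta, phi

# A 30% / 70% state with a quarter-turn of phase on the |1> part.
theta, phi = bloch_angles(np.sqrt(0.3), np.sqrt(0.7) * np.exp(1j * np.pi / 2))
print(f"theta = {theta:.3f} rad, phi = {phi:.3f} rad")
```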

Such noise nearly drowned out the signal in Google's quantum supremacy experiment. Researchers began by setting the 53 qubits to encode all possible outputs, which ranged from zero to 2^53. They implemented a set of randomly chosen interactions among the qubits that in repeated trials made some outputs more likely than others. Given the complexity of the interactions, a supercomputer would need thousands of years to calculate the pattern of outputs, the researchers said. So by measuring it, the quantum computer did something that no ordinary computer could match. But the pattern was barely distinguishable from the random flipping of qubits caused by noise. "Their demonstration is 99% noise and only 1% signal," Kuperberg says.

To realize their ultimate dreams, developers want qubits that are as reliable as the bits in an ordinary computer. "You want to have a qubit that stays coherent until you switch off the machine," Neven says.

Scientists' approach of spreading the information of one qubit – a logical qubit – among many physical ones traces its roots to the early days of ordinary computers in the 1950s. The bits of early computers consisted of vacuum tubes or mechanical relays, which were prone to flip unexpectedly. To overcome the problem, famed mathematician John von Neumann pioneered the field of error correction.

Von Neumann's approach relied on redundancy. Suppose a computer makes three copies of each bit. Then, even if one of the three flips, the majority of the bits will preserve the correct setting. The computer can find and fix the flipped bit by comparing the bits in pairs, in so-called parity checks. If the first and third bits match, but the first and second and second and third differ, then most likely, the second bit flipped, and the computer can flip it back. Greater redundancy means greater ability to correct errors. Ironically, the transistors, etched into microchips, that modern computers use to encode their bits are so reliable that error correction isn't much used.
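That triple-redundancy scheme is easy to show in a few lines: copy the bit three times, compare pairs (parity checks), and use the two check results to locate and flip a single bad copy. The sketch below is a classical toy, assuming at most one of the three copies flips; it mirrors the logic the article describes rather than any production error-correcting code.

```python
# Classical 3-bit repetition code: one data bit stored as three copies,
# a single error located by two pairwise parity checks and then flipped back.
import random

def encode(bit):
    return [bit, bit, bit]

def correct(word):
    s1 = word[0] ^ word[1]          # parity of copies 1 and 2
    s2 = word[1] ^ word[2]          # parity of copies 2 and 3
    if (s1, s2) == (1, 0):          # copy 1 disagrees
        word[0] ^= 1
    elif (s1, s2) == (1, 1):        # copy 2 disagrees
        word[1] ^= 1
    elif (s1, s2) == (0, 1):        # copy 3 disagrees
        word[2] ^= 1
    return word                      # (0, 0) means no single-bit error detected

word = encode(1)
word[random.randrange(3)] ^= 1       # noise flips one copy at random
print("after error:     ", word)
print("after correction:", correct(word))   # back to [1, 1, 1]
```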

But a quantum computer will depend on it, at least if it's made of superconducting qubits. (Qubits made of individual ions suffer less from noise, but are harder to integrate.) Unfortunately for developers, quantum mechanics itself makes their task much harder by depriving them of their simplest error-correcting tool, copying. In quantum mechanics, a no-cloning theorem says it's not possible to copy the state of one qubit onto another without altering the state of the first one. "This means that it's not possible to directly translate our classical error correction codes to quantum error correction codes," says Joschka Roffe, a theorist at the University of Sheffield.
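The no-cloning theorem mentioned here follows from nothing more than the linearity of quantum operations; the short argument below is the standard one, written out for a single qubit.

```latex
Suppose a single operation $U$ could copy every qubit state, $U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle$.
For the basis states this means $U|0\rangle|0\rangle = |0\rangle|0\rangle$ and $U|1\rangle|0\rangle = |1\rangle|1\rangle$.
Linearity then fixes the action on a superposition $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$:
\[
U\bigl(\alpha|0\rangle + \beta|1\rangle\bigr)|0\rangle
  = \alpha|0\rangle|0\rangle + \beta|1\rangle|1\rangle ,
\]
while a genuine copy would be
\[
|\psi\rangle|\psi\rangle
  = \alpha^2|0\rangle|0\rangle + \alpha\beta|0\rangle|1\rangle
  + \alpha\beta|1\rangle|0\rangle + \beta^2|1\rangle|1\rangle .
\]
The two agree only if $\alpha\beta = 0$, i.e. only for states that are not superpositions,
so no single operation can clone an arbitrary qubit.
```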

In a conventional computer, a bit is a switch that can be set to either 0 or 1. To protect a bit, a computer can copy it. If noise then flips a copy, the machine can find the error by making parity measurements: comparing pairs of bits to see whether they're the same or different.

[Figure: Classical error correction. A bit is protected by copying it; if noise flips one copy, parity measurements comparing pairs of bits locate the flipped bit so it can be corrected.]

C. Bickel/Science

Even worse, quantum mechanics requires researchers to find errors blindfolded. Although a qubit can have a state that is both 0 and 1 at the same time, according to quantum theory, experimenters can't measure that two-way state without collapsing it into either 0 or 1. Checking a state obliterates it. "The simplest [classical error] correction is that you look at all the bits to see what's gone wrong," Kuperberg says. "But if it's qubits, then you have to find the error without looking."

Those hurdles may sound insurmountable, but quantum mechanics points to a potential solution. Researchers cannot copy a qubit's state, but they can extend it to other qubits using a mysterious quantum connection called entanglement.

How the entangling is done shows just how subtle quantum computing is. Prodded with microwaves, the original qubit interacts, through a controlled-NOT (CNOT) operation, with a second qubit that must start in the 0 state. The CNOT will change the state of the second qubit if the state of the first is 1 and leave it unchanged if the first qubit is 0. However, the maneuver doesn't actually measure the first qubit and collapse its state. Instead, it maintains the both-ways state of the first qubit while both changing and not changing the second qubit at the same time. It leaves the two qubits in a state in which, simultaneously, they are both 0 and both 1.
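
A small state-vector simulation makes this concrete. The 30%/70% numbers are the article's example; the code itself is an illustrative sketch, not anything run on real hardware:

    # Sketch: a CNOT spreads a superposition onto a second qubit without measuring either.
    import numpy as np

    # Original qubit: 30% chance of 0, 70% chance of 1 (amplitudes are square roots of probabilities).
    q0 = np.array([np.sqrt(0.3), np.sqrt(0.7)])
    q1 = np.array([1.0, 0.0])                      # second qubit starts in state 0

    state = np.kron(q0, q1)                        # joint two-qubit state, basis order 00, 01, 10, 11

    # CNOT: flip the second qubit only when the first is 1.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    entangled = CNOT @ state
    print(np.round(entangled**2, 2))               # probabilities: [0.3, 0, 0, 0.7] -> both 0 or both 1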

If the original qubit is in, for example, a 30% 0 and 70% 1 state, physicists can link it to other qubits to make a chain of, say, three qubits that share an entangled state that's 30% "all three are 0" and 70% "all three are 1." That state is distinct from three copies of the original qubit. In fact, none of the three entangled qubits in the string possesses a well-defined quantum state of its own. But now, the three qubits are completely correlated: If you measure the first one and it collapses to 1, then the other two must also instantly collapse to 1. If the first collapses to 0, the others must also. That correlation is the essence of entanglement.

With that bigger entangled state, scientists can now keep an eye out for errors. To do that, they entangle still other ancillary qubits with the chain of three, one with the first and second qubits in the string and another with the second and third. They then use measurements on the ancillas to make the quantum mechanical equivalent of parity checks. For example, without breaking the entanglement, noise can flip any one of the three coding qubits so that its 0 and 1 parts get switched, changing the latent correlations among all three. If researchers set things up right, they can make stabilizer measurements on the ancillary qubits to probe those correlations.

Although measuring the ancillary qubits collapses their states, it leaves the coding qubits unperturbed. "These are specially designed parity measurements that don't collapse the information encoded in the logical state," Roffe says. For example, if the measurement shows the first ancilla is 0, it reveals only that the first and second coding qubits must be in the same state, but not which state that is. If the ancilla is 1, then the measurement reveals only that the coding qubits must be in opposite states. If researchers can find a flipped qubit more quickly than the qubits tend to fuzz out, they can use microwaves to flip it back to its original state and restore its coherence.
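
The whole cycle, spread the state, let noise flip a qubit, read the parity-style checks, and flip it back, can be sketched with the same kind of toy simulation. It is again illustrative: a real device measures ancillary qubits, whereas this sketch simply evaluates the two stabilizer operators on the three coding qubits.

    # Sketch: the three-qubit bit-flip code.
    # A 30%/70% logical state is spread over three qubits; stabilizer checks (Z1Z2 and Z2Z3)
    # locate a flipped qubit without revealing the 30/70 split itself.
    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])          # bit flip
    Z = np.array([[1, 0], [0, -1]])

    def kron(*ops):
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    # Logical state: sqrt(0.3)|000> + sqrt(0.7)|111>
    state = np.zeros(8)
    state[0b000] = np.sqrt(0.3)
    state[0b111] = np.sqrt(0.7)

    state = kron(I, X, I) @ state           # noise flips the middle qubit

    # Stabilizer checks: +1 means "pair agrees", -1 means "pair disagrees".
    for name, op in [("Z1Z2", kron(Z, Z, I)), ("Z2Z3", kron(I, Z, Z))]:
        print(name, round(float(state @ op @ state)))   # both print -1 -> middle qubit flipped

    state = kron(I, X, I) @ state           # apply the correction
    print(np.round(state**2, 2))            # probabilities back to 0.3 on 000 and 0.7 on 111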

The rules of quantum mechanics make it impossible to watch for errors by copying and measuring qubits (top). Instead, physicists want to spread the qubit's state to other qubits through entanglement (middle) and monitor those to detect errors; then nudge an errant qubit back to the correct state (bottom).

[Figure panels:
Not so fast! Quantum mechanics does not allow the state of one qubit to be copied onto others.
Bigger is better: Instead of trying to copy the state of a qubit, physicists can enlarge it by entangling the qubit with others, resulting in a single state that corresponds to the same point on a sphere.
Lost identity: In the entangled condition, none of the three qubits has a well-defined quantum state of its own.
Gentle correctives: Now, if noise flips one of the qubits, physicists can detect the change without actually measuring the state. They entangle pairs of the main qubits with ancillary qubits whose states can be measured and will read 0 if the correlation between a pair remains the same and 1 if the correlation is flipped. Microwaves can then unflip the qubit and restore the initial entangled state.]

C. Bickel/Science

That's just the basic idea. The state of a qubit is more complex than just a combination of 0 and 1. It also depends on exactly how those two parts mesh, which, in turn, depends on an abstract angle called the phase. The phase can range from 0° to 360° and is key to the wavelike interference effects that give a quantum computer its power. Quantum mechanically, any error in a qubit's state can be thought of as some combination of a bit-flip error that swaps 0 and 1 and a phase flip that changes the phase by 180°.
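
In the operator language standard in the field (the article stays informal), the bit flip is the Pauli $X$ and the phase flip is the Pauli $Z$:

    $X|0\rangle = |1\rangle,\quad X|1\rangle = |0\rangle \qquad\text{(bit flip)}$
    $Z|0\rangle = |0\rangle,\quad Z|1\rangle = -|1\rangle \qquad\text{(phase flip: } \varphi \to \varphi + 180^\circ\text{)}$

Any error acting on a single qubit can be expanded as a combination $\alpha I + \beta X + \gamma Z + \delta XZ$, which is why correcting those two kinds of flips is enough.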

To correct both types, researchers can expand into another dimension, literally. Whereas a string of three entangled qubits, with two ancillas woven between them, is the smallest array that can detect and correct a bit-flip error, a three-by-three grid of qubits, with eight interspersed ancillas, is the simplest one that can detect and correct both bit-flip and phase-flip errors. The logical qubit now resides in an entangled state of the nine qubits (be thankful you don't have to write it out mathematically!). Stabilizer measurements along one dimension of the grid check for bit-flip errors, while slightly different stabilizer measurements along the other dimension check for phase-flip errors.
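
The article does not name a particular nine-qubit scheme; one textbook example, Shor's nine-qubit code, shows how eight checks split between the two error types:

    $Z_1Z_2,\; Z_2Z_3,\; Z_4Z_5,\; Z_5Z_6,\; Z_7Z_8,\; Z_8Z_9$  (six checks that catch bit flips within each row of three)
    $X_1X_2X_3X_4X_5X_6,\; X_4X_5X_6X_7X_8X_9$  (two checks that compare rows and catch phase flips)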

Schemes for pushing into two dimensions vary, depending on the geometric arrangement of the qubits and the details of the stabilizer measurements. Nevertheless, researchers' road to error correction is now clear: Encode a single logical qubit in a grid of physical qubits and show that the fidelity of the logical qubit gets better as the size of the grid increases.

Experimenters have already made a start. For example, in a Nature Physics study published on 8 June, Andreas Wallraff at ETH Zurich and colleagues demonstrated that they could detect, but not correct, errors in a logical qubit encoded in a square of four qubits with three ancillary qubits.

But experimenters face a daunting challenge. Manipulating individual qubits can introduce errors, and unless that error rate falls below a certain level, entangling more qubits with the original one only adds more noise to the system, says Maika Takita, a physicist at IBM. "To demonstrate anything you have to get below that threshold," she says. The ancillary qubits and other error-correction machinery add even more noise, and once those effects are included, the necessary error threshold plummets further. To make the scheme work, physicists must lower their error rate to less than 1%. "When I heard we achieved a 3% error rate, I thought that was great," Takita says. Now, it needs to be much lower.

Error correction also requires twiddling with qubits repeatedly. That makes the process more demanding than quantum supremacy, which involved measuring all the qubits just once, says Marissa Giustina, a physicist with Google. "Error correction requires you to measure and measure and measure over and over again in a cycle, and that has to be done quickly and reliably," she says.

Although a handful of qubits would suffice to demonstrate the principle of quantum error correction, in practice physicists will have to control huge numbers of them. To run Shor's algorithm well enough to factor, say, a number 1000 bits long (roughly the size used in some internet encryption schemes), they'll need to maintain logical qubits with a part-in-a-billion error rate. That may require entangling a grid of 1000 physical qubits to safeguard a single logical qubit, researchers say, a prospect that will take generations of bigger and better quantum computing chips.
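
A back-of-the-envelope calculation shows where numbers like that come from. The sketch below uses the standard surface-code scaling heuristic with assumed values (a 1% threshold as quoted above, a 0.2% physical error rate, a prefactor of 0.1, and roughly 2d² physical qubits for a distance-d patch); none of these figures come from the article.

    # Heuristic: p_logical ~ A * (p / p_threshold)^((d+1)/2), where d is the code distance
    # and a distance-d patch uses roughly 2*d^2 physical qubits. All numbers are assumptions.

    p = 0.002          # assumed physical error rate per operation (0.2%)
    p_th = 0.01        # assumed threshold (~1%)
    A = 0.1            # assumed prefactor
    target = 1e-9      # part-in-a-billion logical error rate

    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2         # code distance is usually taken to be odd

    print("code distance:", d)                              # ~23 with these assumptions
    print("physical qubits per logical qubit:", 2 * d * d)  # on the order of 1000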

Ironically, overcoming that challenge would put developers back where they were 20 years ago, when they were just setting out to make pairs of physical qubits interact to perform the various logical operations, or gates, needed for computation. Once scientists have begun to master error correction, they'll have to repeat nearly every development so far in quantum computing with the more robust but highly complex logical qubits. "People say that error correction is the next step in quantum computing; it's the next 25 steps," Giustina quips.

Retracing those steps won't be easy. It's not just that any logical gate currently involving two qubits will require thousands of them. Worse, another theorem from quantum mechanics states that, no matter what scheme researchers use, not all of the logical gates can be easily translated from individual physical qubits to diffuse logical ones.

Researchers think they can sidestep that problem if they can initialize all the qubits in their computer in particular magic states that, more or less, do half the work of the problematic gates. Unfortunately, still more qubits may be needed to produce those magic states. "If you want to perform something like Shor's algorithm, probably 90% of the qubits would have to be dedicated to preparing these magic states," Roffe says. So a full-fledged quantum computer, with 1000 logical qubits, might end up containing many millions of physical qubits.

Google has a plan to build just such a machine within 10 years. At first blush, that sounds preposterous. Superconducting qubits need to be cooled to near absolute zero, in a device called a cryostat that fills a small room. A million-qubit machine conjures visions of a thousand cryostats in a huge factory. But Google researchers think they can keep their device compact. "I don't want to tip my hand, but we believe we figured this out," Neven says.

Others are taking different tacks. Google's scheme would require 1000 physical qubits to encode a single logical qubit because its chip allows only neighboring qubits to interact. If more distant qubits can be made to interact, too, the number of physical qubits could be much smaller, Gambetta says. "If I can achieve that, then these ridiculously scary numbers for the overhead of error correction can come crashing down," he says. So IBM researchers are exploring a scheme with more distant interconnections among the qubits.

Nobody is willing to predict how long it will take researchers to master error correction. But it is time to turn to the problem in earnest, Rigetti says. "Thus far, substantially all the researchers who would identify themselves as error correction researchers are theorists," he says. "We have to make this an empirical field with real feedback on real data generated with real machines." Quantum supremacy is so 2019. In quantum computing, error correction is the next hot thing.

Source: The biggest flipping challenge in quantum computing - Science Magazine