U.S. Promises Not to Imprison Julian Assange Under Harsh Conditions …

If a British court permits the extradition of the WikiLeaks founder Julian Assange to face criminal charges in the United States, the Biden administration has pledged that it will not hold him under the most austere conditions reserved for high-security prisoners and that, if he is convicted, it will let him serve his sentence in his native Australia.

Those assurances were disclosed on Wednesday as part of a British High Court ruling in London. The court accepted the United States government's appeal of a ruling that had denied its extradition request for Mr. Assange, who was indicted during the Trump administration, on the grounds that American prison conditions for the highest-security inmates were inhumane.

The new ruling was not made public in its entirety. But in an email, the Crown Prosecution Service press office provided a summary showing that the High Court had accepted three of five grounds for appeal submitted by the United States and disclosing the promises the Biden administration had made.

A lower-court judge, Vanessa Baraitser of Westminster Magistrates' Court, had held in January that "the mental condition of Mr. Assange is such that it would be oppressive to extradite him" to the United States, given American prison conditions. The summary of the decision to accept the appeal said that the United States had provided the United Kingdom with "a package of assurances which are responsive to the district judge's specific findings in this case."

Specifically, it said, Mr. Assange would not be subjected to measures that curtail a prisoner's contact with the outside world and can amount to solitary confinement, and he would not be imprisoned at the supermax prison in Florence, Colo., unless he later did something that meets the test for imposing such harsh steps.

"The United States has also provided an assurance that the United States will consent to Mr. Assange being transferred to Australia to serve any custodial sentence imposed on him," the summary said.

See the article here:

U.S. Promises Not to Imprison Julian Assange Under Harsh Conditions ...

Quantum computers in 2023: how they work, what they do, and where they …

In June, an IBM computing executive claimed quantum computers were entering the "utility" phase, in which high-tech experimental devices become useful. In September, Australia's Chief Scientist, Cathy Foley, went so far as to declare "the dawn of the quantum era".

This week, Australian physicist Michelle Simmons won the nation's top science award for her work on developing silicon-based quantum computers.

Obviously, quantum computers are having a moment. But to step back a little: what exactly are they?

One way to think about computers is in terms of the kinds of numbers they work with.

The digital computers we use every day rely on whole numbers (or integers), representing information as strings of zeroes and ones which they rearrange according to complicated rules. There are also analogue computers, which represent information as continuously varying numbers (or real numbers), manipulated via electrical circuits or spinning rotors or moving fluids.
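
As a quick aside (an illustration of ours, not from the article), Python makes this difference visible: an integer has an exact binary representation, while a continuously varying quantity can generally only be stored approximately.

```python
# An integer has an exact representation as a string of zeros and ones.
print(format(42, "b"))   # 101010

# A continuously varying (real) value generally does not: 0.1 is stored
# only approximately in binary floating point.
print(f"{0.1:.20f}")     # 0.10000000000000000555
```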

Read more: There's a way to turn almost any object into a computer and it could cause shockwaves in AI

In the 16th century, the Italian mathematician Girolamo Cardano invented another kind of number called complex numbers to solve seemingly impossible tasks such as finding the square root of a negative number. In the 20th century, with the advent of quantum physics, it turned out complex numbers also naturally describe the fine details of light and matter.
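
As an illustration (ours, not the article's), Python supports complex numbers natively, so Cardano's seemingly impossible square root is a one-liner:

```python
import cmath

# The square root of -1 is not a real number, but it is a complex one.
print(cmath.sqrt(-1))        # 1j

# Complex arithmetic otherwise behaves like ordinary arithmetic.
print((1 + 2j) * (3 - 1j))   # (5+5j)
```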

In the 1990s, physics and computer science collided when it was discovered that some problems could be solved much faster with algorithms that work directly with complex numbers as encoded in quantum physics.
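
To make that concrete, here is a toy sketch (ours, assuming NumPy is available) of how complex numbers describe a single quantum bit: a gate is a complex-valued matrix, and measurement probabilities are the squared magnitudes of the complex amplitudes.

```python
import numpy as np

# A qubit's state is a pair of complex amplitudes; |0> is (1, 0).
state = np.array([1, 0], dtype=complex)

# The Hadamard gate, a complex-valued matrix, rotates |0> into an
# equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(state) ** 2)    # [0.5 0.5]
```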

The next logical step was to build devices that work with light and matter to do those calculations for us automatically. This was the birth of quantum computing.

We usually think of the things our computers do in terms that mean something to us: balance my spreadsheet, transmit my live video, find my ride to the airport. However, all of these are ultimately computational problems, phrased in mathematical language.

As quantum computing is still a nascent field, most of the problems we know quantum computers will solve are phrased in abstract mathematics. Some of these will have real-world applications we can't yet foresee, but others will find a more immediate impact.

One early application will be cryptography. Quantum computers will be able to crack today's internet encryption algorithms, so we will need quantum-resistant cryptographic technology. Provably secure cryptography and a fully quantum internet would use quantum computing technology.
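
The threat is easy to sketch (a toy example of ours, not the article's): RSA-style encryption is safe only because factoring a large semiprime is slow on classical hardware, which is exactly the task Shor's algorithm would speed up on a large quantum computer.

```python
def factor_semiprime(n: int) -> tuple:
    """Classical trial division: fine for toy numbers, hopeless for the
    hundreds-of-digits semiprimes used in real RSA keys."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("no factor found; n is prime")

print(factor_semiprime(3233))   # (53, 61), a classic textbook RSA modulus
```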

In materials science, quantum computers will be able to simulate molecular structures at the atomic scale, making it faster and easier to discover new and interesting materials. This may have significant applications in batteries, pharmaceuticals, fertilisers and other chemistry-based domains.

Quantum computers will also speed up many difficult optimisation problems, where we want to find the best way to do something. This will allow us to tackle larger-scale problems in areas such as logistics, finance, and weather forecasting.

Machine learning is another area where quantum computers may accelerate progress. This could happen indirectly, by speeding up subroutines in digital computers, or directly if quantum computers can be reimagined as learning machines.

In 2023, quantum computing is moving out of the basement laboratories of university physics departments and into industrial research and development facilities. The move is backed by the chequebooks of multinational corporations and venture capitalists.

Contemporary quantum computing prototypes built by IBM, Google, IonQ, Rigetti and others are still some way from perfection.

Read more: Error correcting the things that go wrong at the quantum computing scale

Today's machines are of modest size and susceptible to errors, in what has been called the "noisy intermediate-scale quantum" phase of development. The delicate nature of tiny quantum systems means they are prone to many sources of error, and correcting these errors is a major technical hurdle.

The holy grail is a large-scale quantum computer which can correct its own errors. A whole ecosystem of research factions and commercial enterprises is pursuing this goal via diverse technological approaches.

The current leading approach uses loops of electric current inside superconducting circuits to store and manipulate information. This is the technology adopted by Google, IBM, Rigetti and others.

Another method, trapped-ion technology, works with groups of electrically charged atomic particles, using the inherent stability of the particles to reduce errors. This approach has been spearheaded by IonQ and Honeywell.

A third route of exploration is to confine electrons within tiny particles of semiconductor material, which could then be melded into the well-established silicon technology of classical computing. Silicon Quantum Computing is pursuing this angle.

Yet another direction is to use individual particles of light (photons), which can be manipulated with high fidelity. A company called PsiQuantum is designing intricate guided light circuits to perform quantum computations.

There is no clear winner yet from among these technologies, and it may well be a hybrid approach that ultimately prevails.

Attempting to forecast the future of quantum computing today is akin to predicting flying cars and ending up with cameras in our phones instead. Nevertheless, there are a few milestones that many researchers would agree are likely to be reached in the next decade.

Better error correction is a big one. We expect to see a transition from the era of noisy devices to small devices that can sustain computation through active error correction.

Another is the advent of post-quantum cryptography. This means the establishment and adoption of cryptographic standards that can't easily be broken by quantum computers.

Read more: Quantum computers threaten our whole cybersecurity infrastructure: here's how scientists can bulletproof it

Commercial spin-offs of technology such as quantum sensing are also on the horizon.

The demonstration of a genuine quantum advantage will also be a likely development. This means a compelling application where a quantum device is unarguably superior to the digital alternative.

And a stretch goal for the coming decade is the creation of a large-scale quantum computer free of errors (with active error correction).

When this has been achieved, we can be confident the 21st century will be the quantum era.

Read the original:
Quantum computers in 2023: how they work, what they do, and where they ...

Multiverse and Single Quantum Receive a $1.4 Million Contract from the German Aerospace Center (DLR) for Quantum Materials Science Research – Quantum…

Originally posted here:
Multiverse and Single Quantum Receive a $1.4 Million Contract from the German Aerospace Center (DLR) for Quantum Materials Science Research - Quantum...

Cryptology | Definition, Examples, History, & Facts | Britannica

Because much of the terminology of cryptology dates to a time when written messages were the only things being secured, the source information, even if it is an apparently incomprehensible binary stream of 1s and 0s, as in computer output, is referred to as the plaintext. As noted above, the secret information known only to the legitimate users is the key, and the transformation of the plaintext under the control of the key into a cipher (also called ciphertext) is referred to as encryption. The inverse operation, by which a legitimate receiver recovers the concealed information from the cipher using the key, is known as decryption.

The most frequently confused, and misused, terms in the lexicon of cryptology are code and cipher. Even experts occasionally employ these terms as though they were synonymous.

A code is simply an unvarying rule for replacing a piece of information (e.g., letter, word, or phrase) with another object, but not necessarily of the same sort; Morse code, which replaces alphanumeric characters with patterns of dots and dashes, is a familiar example. Probably the most widely known code in use today is the American Standard Code for Information Interchange (ASCII). Employed in all personal computers and terminals, it represents 128 characters (and operations such as backspace and carriage return) in the form of seven-bit binary numbers, i.e., as a string of seven 1s and 0s. In ASCII a lowercase a is always 1100001, an uppercase A always 1000001, and so on. Acronyms are also widely known and used codes, as, for example, Y2K (for "Year 2000") and COD (meaning "cash on delivery"). Occasionally such a code word achieves an independent existence (and meaning) while the original equivalent phrase is forgotten or at least no longer has the precise meaning attributed to the code word, e.g., modem (originally standing for "modulator-demodulator").
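
This fixed mapping is easy to verify in, say, Python:

```python
# ASCII really is a fixed seven-bit code: 'a' -> 1100001, 'A' -> 1000001.
for ch in "aA":
    print(ch, format(ord(ch), "07b"))
# a 1100001
# A 1000001
```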

Ciphers, as in the case of codes, also replace a piece of information (an element of the plaintext that may consist of a letter, word, or string of symbols) with another object. The difference is that the replacement is made according to a rule defined by a secret key known only to the transmitter and legitimate receiver in the expectation that an outsider, ignorant of the key, will not be able to invert the replacement to decrypt the cipher. In the past, the blurring of the distinction between codes and ciphers was relatively unimportant. In contemporary communications, however, information is frequently both encoded and encrypted so that it is important to understand the difference. A satellite communications link, for example, may encode information in ASCII characters if it is textual, or pulse-code modulate and digitize it in binary-coded decimal (BCD) form if it is an analog signal such as speech. The resulting coded data is then encrypted into ciphers by using the Data Encryption Standard or the Advanced Encryption Standard (DES or AES; described in the section History of cryptology). Finally, the resulting cipher stream itself is encoded again, using error-correcting codes for transmission from the ground station to the orbiting satellite and thence back to another ground station. These operations are then undone, in reverse order, by the intended receiver to recover the original information.
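
The layering can be sketched in a few lines of Python (a hypothetical illustration of ours: a one-time XOR pad stands in for DES/AES, and a crude three-fold repetition code stands in for the link's real error-correcting codes).

```python
import secrets

message = "SELL"

encoded = message.encode("ascii")                      # 1. code: text -> bits
pad = secrets.token_bytes(len(encoded))                # shared secret key
cipher = bytes(m ^ k for m, k in zip(encoded, pad))    # 2. encrypt
sent = bytes(b for b in cipher for _ in range(3))      # 3. channel-encode

# The receiver undoes the layers in reverse order. A real decoder would
# majority-vote the three copies; on an error-free channel, taking every
# third byte recovers the cipher stream.
received = sent[::3]
plain = bytes(c ^ k for c, k in zip(received, pad))
print(plain.decode("ascii"))                           # SELL
```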

In the simplest possible example of a true cipher, A wishes to send one of two equally likely messages to B, say, to buy or sell a particular stock. The communication must take place over a wireless telephone on which eavesdroppers may listen in. It is vital to A's and B's interests that others not be privy to the content of their communication. In order to foil any eavesdroppers, A and B agree in advance as to whether A will actually say what he wishes B to do, or the opposite. Because this decision on their part must be unpredictable, they decide by flipping a coin. If heads comes up, A will say "Buy" when he wants B to buy and "Sell" when he wants B to sell. If tails comes up, however, he will say "Buy" when he wants B to sell, and so forth. (The messages communicate only one bit of information and could therefore be 1 and 0, but the example is clearer using "Buy" and "Sell".)
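
A compact sketch of this protocol in Python (the variable names are ours):

```python
import secrets

# Heads means "say what you mean"; tails means "say the opposite".
OPPOSITE = {"Buy": "Sell", "Sell": "Buy"}

def encrypt(intended, key):
    return intended if key == "heads" else OPPOSITE[intended]

decrypt = encrypt  # flipping twice undoes itself, so the rule is its own inverse

key = secrets.choice(["heads", "tails"])   # the shared coin flip
spoken = encrypt("Buy", key)
assert decrypt(spoken, key) == "Buy"
# Without the key, "Buy" and "Sell" are equally likely meanings: the
# eavesdropper learns nothing, which is what makes the system "perfect".
```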

With this encryption/decryption protocol being used, an eavesdropper gains no knowledge about the actual (concealed) instruction A has sent to B as a result of listening to their telephone communication. Such a cryptosystem is defined as perfect. The key in this simple example is the knowledge (shared by A and B) of whether A is saying what he wishes B to do or the opposite. Encryption is the act by A of either saying what he wants done or not as determined by the key, while decryption is the interpretation by B of what A actually meant, not necessarily of what he said.

This example can be extended to illustrate the second basic function of cryptography, providing a means for B to assure himself that an instruction has actually come from A and that it is unaltered, i.e., a means of authenticating the message. In the example, if the eavesdropper intercepted A's message to B, he could, even without knowing the prearranged key, cause B to act contrary to A's intent by passing along to B the opposite of what A sent. Similarly, he could simply impersonate A and tell B to buy or sell without waiting for A to send a message, although he would not know in advance which action B would take as a result. In either event, the eavesdropper would be certain of deceiving B into doing something that A had not requested.

To protect against this sort of deception by outsiders, A and B could use the following encryption/decryption protocol.

They secretly flip a coin twice to choose one of four equally likely keys, labeled HH, HT, TH, and TT, with both of them knowing which key has been chosen. The outcome of the first coin flip determines the encryption rule just as in the previous example. The two coin flips together determine an authentication bit, 0 or 1, to be appended to the ciphers to form four possible messages: Buy-1, Buy-0, Sell-1, and Sell-0. B will accept a message as authentic only if it occurs in the row corresponding to the secret key. The pair of messages not in that row will be rejected by B as non-authentic. B can easily interpret the cipher in an authentic message to recover A's instructions using the outcome of the first coin flip as the key. If a third party C impersonates A and sends a message without waiting for A to do so, he will, with probability 1/2, choose a message that does not occur in the row corresponding to the key A and B are using. Hence, the attempted deception will be detected by B with probability 1/2. If C waits and intercepts a message from A, no matter which message it is, he will be faced with a choice between two equally likely keys that A and B could be using. As in the previous example, the two messages he must choose between convey different instructions to B, but now one of the ciphers has a 1 and the other a 0 appended as the authentication bit, and only one of these will be accepted by B. Consequently, C's chances of deceiving B into acting contrary to A's instructions are still 1/2; namely, eavesdropping on A and B's conversation has not improved C's chances of deceiving B.
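
Here is a toy implementation in Python. Britannica's key table is not reproduced above, so the authentication-bit rule below is one hypothetical assignment consistent with the stated behaviour: a forgery, with or without an intercepted message, is accepted with probability exactly 1/2.

```python
import secrets

WORD_BIT = {"Buy": 0, "Sell": 1}
BIT_WORD = {0: "Buy", 1: "Sell"}

def encrypt(plain, k1, k2):
    s = WORD_BIT[plain] ^ k1        # first flip: plain word or its opposite
    a = k2 ^ (s & k1)               # assumed authentication-bit rule
    return f"{BIT_WORD[s]}-{a}"

def accept(msg, k1, k2):
    word, bit = msg.split("-")
    return int(bit) == (k2 ^ (WORD_BIT[word] & k1))

def decrypt(msg, k1):
    word, _ = msg.split("-")
    return BIT_WORD[WORD_BIT[word] ^ k1]

k1, k2 = secrets.randbelow(2), secrets.randbelow(2)  # the two coin flips
msg = encrypt("Buy", k1, k2)
assert accept(msg, k1, k2) and decrypt(msg, k1) == "Buy"

# A forger must flip the word and guess the bit; of the two possible
# guesses exactly one is accepted, so the deception is caught half the time.
forged_word = BIT_WORD[1 ^ WORD_BIT[msg.split("-")[0]]]
accepted = [accept(f"{forged_word}-{b}", k1, k2) for b in (0, 1)]
assert accepted.count(True) == 1
```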

Clearly, in either example, secrecy or secrecy with authentication, the same key cannot be reused. If C learned the message by eavesdropping and observed B's response, he could deduce the key and thereafter impersonate A with certainty of success. If, however, A and B chose as many random keys as they had messages to exchange, the security of the information would remain the same for all exchanges. When used in this manner, these examples illustrate the vital concept of a one-time key, which is the basis for the only cryptosystems that can be mathematically proved to be cryptosecure. This may seem like a toy example, but it illustrates the essential features of cryptography. It is worth remarking that the first example shows how even a child can create ciphers, at a cost of making as many flips of a fair coin as he has bits of information to conceal, that cannot be broken by even national cryptologic services with arbitrary computing power, disabusing the lay notion that the unachieved goal of cryptography is to devise a cipher that cannot be broken.
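
The reuse danger is easy to demonstrate in Python (an illustration of ours): encrypting two messages with the same pad lets an eavesdropper cancel the key entirely.

```python
import secrets

# XOR-ing two ciphertexts made with the same one-time pad cancels the
# key, leaking the XOR of the two plaintexts to any eavesdropper.
m1, m2 = b"Buy ", b"Sell"
pad = secrets.token_bytes(4)
c1 = bytes(a ^ k for a, k in zip(m1, pad))
c2 = bytes(a ^ k for a, k in zip(m2, pad))
leaked = bytes(x ^ y for x, y in zip(c1, c2))
assert leaked == bytes(x ^ y for x, y in zip(m1, m2))  # pad has vanished
```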

Follow this link:
Cryptology | Definition, Examples, History, & Facts | Britannica