‘Crypto’ is dead – long live the digital transaction era – CoinGeek

Using the word "crypto" to describe Bitcoin is an inaccuracy that comes from early misunderstandings and a false connection to distinctly different systems. As we move into an era where Bitcoin will begin to power the digital transaction economy, it's time to let go and use more appropriate terms to describe this industry. So, digital currency it is.

On that note, "mining" also needs to go. Transaction processors play a far more important role than simply digging up new coins. Mining only applies to the disappearing block subsidy, which is only there to prime the pump – the true function of the nodes is transaction processing, and this takes over as the subsidy dies. No more coal mine stock photos or gold mining puns in articles; let's give them the respect they deserve.

After 11 years of Bitcoin, we've grown accustomed to seeing these expressions. But something never felt quite right about them. They seemed forced, bolted on, a rushed attempt to describe a new industry using words people could somehow relate to. And over those years, "crypto" and "cryptocurrency" have also acquired the stigma of anonymity, crime, and get-rich-quick schemes like ICOs.

"Digital transaction" and "transaction processors" – get used to those expressions, because you'll be seeing them a lot more often in the future.

Bitcoin is not like this

Bitcoin creator Dr. Craig Wright described the need for more accurate descriptors, saying:

Cryptocurrency is linked to a lot of discredited systems (right back to eCash) that are linked to black market and illegal use cases (drugs, money laundering, etc.). Next, Bitcoin is not encrypted. Blind cash systems such as Digicash used cryptographic constructs directly and are associated with encrypted transactions that cannot be traced. Bitcoin is not like this.

Bitcoin is sent in clear text. The hash is an index to the identity exchange between peers (people) but nothing is secret. It is private but not hidden.

There's another, er, key difference. Dr. Wright continued:

With encrypted systems, the loss of the key means the inability to access data. Bitcoin is not encrypted. The transaction is published publicly. So, nothing is going to stop recovery if the nodes agree (nodes enforce rules and courts issue rules).

Even the NSA cannot force access to an encrypted file when the private key is destroyed, but, again, Bitcoin is clear text and public. So, losing a key doesn't mean the transaction is lost.

I used digital signature algorithms in Bitcoin, but this is still not cryptography. It is a system that uses the same maths in a new manner.

He also noted that Bitcoin was never a new technology. In his 1996 piece, "The Wild, Wild Web," Gregory Spears described web IPOs, which used tokens to raise funds. We now call these ICOs, but the difference is that a blockchain stops Ponzi schemes from deleting logs. Dr. Wright said:

Token security offers and capital raising dates back decades. Bitcoin was never a new technology in this area.

In the Nineties

Cryptocurrency is a word that comes from Dr. Wright's favorite decade, the 1990s. Early allusions to the term run alongside "untraceable" and "anonymous" money – which has shown itself to be undesirable. Interestingly, some of the first published references come from those concerned with investigating its uses. There's the NSA's "How to Make a Mint: the Cryptography of Anonymous Electronic Cash" from 1996, also referenced in The American Law Review in 1997.

It's a difficult word to say, and one that causes outsiders' eyes to glaze over. If they can relate to it at all, their minds subconsciously connect it to the more negative aspects of the past, like scams, bubbles, and crime.

Writers also know it's a difficult word to type. If you don't believe us, search the web for "crypocurrency" and "crytocurrency" and see how many results you get.

We know you'll check the whitepaper, so here it is

For the record, Satoshi Nakamoto's 2008 Bitcoin whitepaper does not refer to "cryptocurrency" or "miners" the way they're commonly used in today's discourse. There are just two sentences that allude to these terms. This one:

What is needed is an electronic payment system based on cryptographic proof instead of trust

And this one:

The steady addition of a constant amount of new coins is analogous to gold miners expending resources to add gold to circulation.

Bitcoin is based on cryptography, but it is not itself a form of cryptography and it doesn't function the same way. Contrary to popular belief, it is not secured by cryptography either. Bitcoin has an economic model secured by incentives and costs – transaction processors do their job for profit, and hacking or taking over the network is prohibitively expensive.

Cryptographers themselves have never liked the word "crypto" for digital transaction networks. That abbreviation describes their field, and many have expressed displeasure at its being reapplied to just one specific application. And miners? As Satoshi said, it's only meant to be an analogy – a comparison to allow newcomers to more easily grasp what's going on.

We're happy for the real cryptographers to have their word back, and to restore it to its proper meaning.

Bitcoin and the digital transaction industry are mature now. The industry is ready to move on to greater challenges, and in doing so must discard the expressions that narrowed its mission so much. It gives us great pleasure to announce that "crypto" is dead – long live the digital transaction era.

New to Bitcoin? Check out CoinGeek's Bitcoin for Beginners section, the ultimate resource guide to learn more about Bitcoin – as originally envisioned by Satoshi Nakamoto – and blockchain.


Disruptive Defenses Are The Key To Preventing Data Breaches – Forbes

A report from DLA Piper states that more than 160,000 data breach notifications have been reported across 28 nations in the European Union since the General Data Protection Regulation (GDPR) went into effect in May 2018 -- an average of more than 260 data breaches per day. And, when you consider that since California's 2004 law on privacy breaches, over 9,000 breaches have been recorded and 11.5 billion records have been exposed, it is evident that threats against sensitive data are unprecedented.

Is this the new normal? Can there be any expectation of security and privacy when even the most stringent of data privacy regulations appear to have little effect?

Companies, government agencies and consumers must change their behavior if they expect to stem this tide. They must adopt disruptive defenses to make it extremely hard for attackers to compromise data.

What is a disruptive defense? It is an uncommon defense, based on existing industry standards, that raises application security to higher levels than what is currently used by most applications.

There are six disruptive defenses that, when deployed, create significant barriers to attackers. They are as follows:

1. Eliminate shared-secret authentication schemes.

This includes passwords, one-time PINs, knowledge-based authentication and the like. These should be replaced with public key cryptography-based authentication that uses cryptographic hardware to protect keys.

Public key cryptography authentication (also known as "strong authentication") does not store secrets on the server. The secret remains with the user, stored in special hardware available on business desktops, laptops, modern mobile phones, smartcards and security keys. This is a modern authentication standard that eliminates passwords and is supported by all major operating systems as well as browsers. According to NIST, it provides the "highest assurance" among currently available authentication technologies. Eliminating a 1960s authentication scheme on a 21st-century application should be the first defensive step of every web application.
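The challenge-response flow described above can be sketched with textbook RSA. This is a toy illustration only: the key numbers below are classic classroom values far too small for real use, and in practice the private exponent would live inside a TPM or security key rather than in program memory.

```python
import hashlib
import secrets

# Toy RSA key pair (textbook numbers, far too small for real use).
# In production the private key is generated and held in secure hardware.
p, q = 61, 53
n = p * q              # public modulus (3233)
e = 17                 # public exponent
d = 2753               # private exponent: (e * d) % lcm(p-1, q-1) == 1

def h(data: bytes) -> int:
    """Hash a message to an integer below the modulus."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# 1. The server sends a random challenge; it stores no user secret.
challenge = secrets.token_bytes(16)

# 2. The user's device signs the challenge with the private key.
signature = pow(h(challenge), d, n)

# 3. The server verifies using only the public key (e, n).
assert pow(signature, e, n) == h(challenge)
print("challenge verified")
```

Because the server holds only the public key, a breach of its database leaks nothing an attacker can replay, which is the point of eliminating shared secrets.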

2. Ensure the provenance of a transaction before it is committed.

This is accomplished through the use of a digital signature on a transaction, applied by the user using the same technology for strong authentication. Not only does this establish an authoritative source for the transaction (since only the user could have applied that digital signature with their consent), but it provides the business with a transaction confirmation, which is becoming increasingly necessary in many business environments.
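A minimal sketch of signing a transaction for provenance, using the same toy RSA numbers as above (again, textbook-sized values only; the field names in the transaction are hypothetical):

```python
import hashlib
import json

# Toy RSA parameters (textbook-sized; real systems use 2048+ bit keys in hardware).
n, e, d = 3233, 17, 2753

def digest(tx: dict) -> int:
    """Canonical hash of the transaction, reduced below the toy modulus."""
    blob = json.dumps(tx, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(blob).digest(), "big") % n

tx = {"from": "alice", "to": "bob", "amount": 125.00}

# The user signs the transaction before submission...
signature = pow(digest(tx), d, n)

# ...and the business verifies provenance before committing it.
assert pow(signature, e, n) == digest(tx)

# A tampered transaction should no longer verify against the stored signature.
tampered = dict(tx, amount=9999.00)
print("tampered verifies:", pow(signature, e, n) == digest(tampered))
```

The stored (transaction, signature) pair also serves as the transaction confirmation the article mentions: only the holder of the private key could have produced it.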

3. Preserve the confidentiality of sensitive data within the application layer.

This excludes the practice of encrypting data within the database, operating system or disk drive. Encrypting sensitive data has become mandatory via multiple recent regulations. But application developers fool themselves when they rely on data-at-rest encryption (encrypting data in a database, operating system or disk drive) instead of ensuring that only authorized applications can decrypt sensitive data. Systems are rarely at rest; they're working 24 hours a day and decrypting data for attackers when a legitimate user's password-based credential is compromised. By combining disruptive defenses Nos. 1 and 3, applications ensure unauthorized users never get to see decrypted data.
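The distinction can be sketched as follows. This is an illustrative toy, not production cryptography: a real application should use a vetted AEAD cipher (e.g., AES-GCM) from a maintained library, with keys held in cryptographic hardware; the hash-counter keystream, credential and salt below are assumptions for demonstration only. The point is that the key belongs to the application layer, so the database only ever sees ciphertext.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key || counter blocks (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# The key never reaches the database layer: only the application derives it.
app_key = hashlib.pbkdf2_hmac("sha256", b"app-credential", b"demo-salt", 100_000)

record = b"SSN=123-45-6789"
stored = xor_crypt(app_key, record)          # what the database holds
assert stored != record                      # ciphertext at rest
assert xor_crypt(app_key, stored) == record  # only the application can recover it
```

Compromising the database or the disk yields only `stored`; without the application's key, at-rest access decrypts nothing.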

4. Preserve the integrity of a transaction through its lifetime.

This is accomplished, once again, by a digital signature, but it is applied by the application itself. While a digital signature acquired at the source of a transaction guarantees authenticity, transactions are modified in many applications. When data within the transaction changes, a new digital signature must be applied by the application to preserve the integrity of the modified transaction. Verifying the digital signatures of a transaction from its origin to its current state assures applications that unauthorized changes have not been made to data.
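The origin-to-current-state verification can be sketched with two toy RSA key pairs, one for the user (origin) and one for the application that modifies and re-signs the transaction. All numbers are textbook-sized assumptions for illustration; the transaction fields are hypothetical.

```python
import hashlib
import json

# Two toy RSA key pairs (classroom-sized; real keys live in hardware).
USER_N, USER_E, USER_D = 3233, 17, 2753   # user's key pair
APP_N, APP_E, APP_D = 209, 7, 103         # application's key pair

def digest(tx: dict, modulus: int) -> int:
    """Canonical hash of a transaction, reduced below the given modulus."""
    blob = json.dumps(tx, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(blob).digest(), "big") % modulus

# Origin: the user signs the original transaction.
tx = {"order": 42, "qty": 3}
user_sig = pow(digest(tx, USER_N), USER_D, USER_N)

# The application modifies the transaction, then signs the new state,
# carrying the prior signature forward as part of the audit trail.
tx2 = dict(tx, qty=5, prev_sig=user_sig)
app_sig = pow(digest(tx2, APP_N), APP_D, APP_N)

# Verifying the chain from origin to current state:
assert pow(user_sig, USER_E, USER_N) == digest(tx, USER_N)   # origin intact
assert pow(app_sig, APP_E, APP_N) == digest(tx2, APP_N)      # current state intact
print("signature chain verified")
```

Each modification appends a link to the chain, so an auditor can confirm that every change was made by an authorized signer.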

5. Use cryptographic hardware wherever cryptographic keys are stored and used.

Cryptography represents the last bastion of defense when protecting sensitive data. As such, cryptographic keys are often all that stands between an attacker and a major headache for your company. While convenient, keys stored in files are protected only by passwords and are subject to the same attacks that compromise user passwords. By using cryptographic hardware -- present in all modern systems -- applications create major barriers to attacks. While it may be argued that cryptographic hardware is also subject to attacks, evidence shows that these attacks are neither scalable nor common, as attackers would need access to the physical computer on which your cryptographic keys are stored to be able to compromise the keys.

6. Ensure cloud applications access cryptographic services from within a secure zone.

While the cloud offers many business benefits, attempting to access cryptographic services from within a public cloud's virtual machine is a recipe for disaster -- the credentials necessary to authenticate to the cryptographic services are vulnerable to compromise in a public virtual machine (as some recent breaches highlight), enabling attackers to use legitimate credentials to command key management systems to decrypt sensitive data for the attacker. Using an application architecture that guarantees access to cryptographic services only from a secure zone eliminates that risk completely.

All these disruptive defenses are based on industry standards and, in most cases, have been around for decades. Security-conscious professionals recognize that protecting data by focusing on system security is a secondary defense, and that network defenses should be de-emphasized because disruptive defenses assume an attacker is already on the network. The objective now is to protect data even in the presence of this threat.


Xilinx announces addition to ACAP platform with Versal Premium – Electronics Weekly

Versal is an adaptive compute acceleration platform (ACAP), a heterogeneous compute device with capabilities that the firm claims far exceed those of conventional silicon architectures.

Added to the Premium series are 112Gbps PAM4 transceivers, multi-hundred-gigabit Ethernet (GbE) and Interlaken connectivity, cryptography, and PCIe Gen5 with built-in DMA, supporting both CCIX and CXL.

Developed on TSMC's 7nm process, Versal Premium combines vector and scalar processing elements, coupled to programmable logic and tied together with a network-on-chip (NoC), which provides memory-mapped access to all three processing element types.

The scalar engines are built from the dual-core Arm Cortex-A72, the adaptable engines are made up of programmable logic and memory cells, and the intelligent engines are an array of very long instruction word (VLIW) and single instruction, multiple data (SIMD) processing engines and memories.

The device has over 120TB/s of on-chip memory bandwidth which, coupled with the customizable memory hierarchy, is designed to reduce data movement and remove key bottlenecks, while pre-engineered connectivity and cores allow integration into existing cloud infrastructure.

Versal Premium is designed for high bandwidth networks operating in thermally and spatially constrained environments, as well as for cloud providers. It delivers up to 9Tb/s of serial bandwidth and 5Tb/s of throughput with Ethernet, with flexibility to support various data rates and protocols, the firm says.

Its cryptography engines provide up to 1.6Tb/s of encrypted line rate throughput and support for AES-GCM-256/128, MACsec and IPsec.

The firm suggests the device will find applications in 5G communications, aerospace and defense, along with ADAS.

The Versal Premium series takes ACAPs to the next level, delivering breakthrough networked hard IP integration and enabling the development of single-chip 400G and 800G solutions, says Kirk Saban, vice president of product and platform marketing at Xilinx.

Targeting next-generation networks and cloud deployments, Versal Premium delivers superior bandwidth and compute density in a scalable platform that is readily programmable by hardware and software developers alike for optimized acceleration and reduced TCO.


5 Actionable Takeaways from Ponemon and KeyFactor’s 2020 PKI Study – Hashed Out by The SSL Store

Looking for the latest stats and info about public key infrastructure? Look no further.

74%. That's how many organizations report not knowing how many keys and certificates they have. This unsettling statistic was reported in the latest data from The Impact of Unsecured Digital Identities, a new public key infrastructure (PKI)-focused research study by the Ponemon Institute and KeyFactor.

Last year, KeyFactor and the Ponemon Institute joined forces to publish a study on public key infrastructure. This year's publication is chock-full of goodies and valuable insights on PKI as a whole. In early March, Chris Hickman, chief security officer at KeyFactor, and Larry Ponemon, chairman and founder of the Ponemon Institute, shared key insights from the study during a webinar. And in this year's report, they included something new: the Critical Trust Index. This 16-question core competency measurement aims to help businesses measure their certificate management capabilities, the effectiveness of their PKI efforts, and their agility and growth.

It's a great study – one we'll definitely quote cybersecurity statistics from throughout the year. But what makes it so good? The items highlighted in the study are the ones we see every day from our clients across multiple industries – both good and bad.

So, what can the results of this study tell you, and how can they help you make informed decisions for your own PKI? And who was involved with the study?

Let's hash it out.

The study, sponsored by our friends at KeyFactor, was independently conducted by the Ponemon Institute, both of which are well-known names within the industry.

The data in the study comes from the survey responses of 603 IT and infosec professionals from across North America. The majority of the respondents (61%) reported their positions as supervisor or above, and another 30% indicated that they are at the staff/technician level. The majority are from large enterprises, with 64% of the respondents indicating that they work for organizations with at least 5,001 employees.

The participants were asked to respond to a series of questions relating to cyber security threats, strategies, budgets, certificate management, compliance, and financial impacts relating to several of these areas.

From a 30,000-foot perspective, the current mechanisms for securing and managing digital certificates and cryptographic keys are lacking. Many companies lack the personnel and technical resources, budgets, procedures, or policies to effectively support public key infrastructure. As such, this leaves organizations open to significant risks from a variety of cybersecurity threats the world over.

But no matter how challenging it can be, IT security and information security practitioners alike know that public key infrastructure is critical to organizations. After all, PKI helps organizations to increase trust with end users and clients (their web browsers) alike through authentication and encryption. As certificate lifespans shrink and threats continue to evolve, the risk that your organization will be impacted increases with them.

But how important is PKI in the eyes of the C-suite executives above them? Let's find out as we glean insights about this topic and others relating to the PKI ecosystem.

Perception and reality are frequently two different things – this is particularly the case regarding how PKI tasks and IT security challenges are handled. Probably the biggest takeaway the study highlights is the tremendous gap in confidence between the responses of the technical guardians within an organization and those of the executive leadership above them.

In that data alone, it showed us very significantly how the problems of managing these types of critical assets in the organization, from the practitioners to the executives, differ when asked the same questions, Hickman said in the webinar on the study.

Their observation made them question why there's such a difference in the landscape between these different ranks within an organization. Executives tend to be significantly more optimistic in their responses than their staff/technician counterparts – averaging 6.2 on a 1-10 scale, versus an average confidence rating of 3.7 for staff/technicians. This is particularly true concerning issues relating to managing critical assets.

These responses demonstrate why challenges might exist within organizations – leaders think issues are being handled or resolved, and practitioners are struggling to keep up with the never-ending demands.

As with any organization and its tasks, communication is key. There needs to be clear communication and transparency about the situation. If there are deficiencies, insufficient resources, or other challenges, everyone needs to be on the same page.

Don't sugarcoat things. Be open and honest about PKI and IT security-related issues that exist within your organization. Make your leadership aware of any challenges and offer recommendations and solutions to address the issues. Most importantly: Learn to speak their language.

One suggestion from Hickman and Ponemon shared during the webinar comes from Gartner:

Security leaders that successfully reposition X.509 certificate management to a compelling business story, such as digital business and trust enablement, will increase program success by 60%, up from less than 10% today.

Essentially, executives want to know the bottom-line costs involved and how circumstances will affect the operation and organization as a whole. Don't speak technical mumbo-jumbo. Give them what they want while still pushing for the resources you need by changing how you frame the situation.

Listen to your experts. Listen to understand and not to reply. Recognize that they're human and that the industry and cyber threats are continually changing. The threats we face today aren't necessarily the same as those we'll face in the future. Be flexible and open to change. If you want to protect your organization, don't put off investing in your cybersecurity infrastructure and resources until tomorrow. Commit to making those changes today.

According to the report, 60% of respondents believe they have more than 10,000 certificates in use across their organization. That's a lot of cats to herd. Interestingly, though, the respondents aren't all that confident in their estimates – 74% indicate that they don't know for certain how many certificates and keys they are actually using.

So, what do all of these statistics have in common? A lack of certainty (and clarity), for one. That's because these organizations lack visibility into their PKI certificate management. Essentially, they don't know:

This lackadaisical approach is kind of like trying to run a restaurant without any clue about who's responsible for what and how it's all getting done. For a restaurant to work, you need to know who's ordering the supply deliveries, who's making the food, whether the food that's available to serve to customers meets certain quality and hygiene standards (it hasn't expired), and who's serving it.

If you don't know these things because you lack visibility within your operation, then, frankly, you're not going to be in business for very long.

Honestly, this finding that organizations lack visibility into their PKI doesn't strike me as surprising. After all, a lack of visibility is an ongoing issue for many organizations within the industry as a whole, and it was also an issue in their previous study from 2018. But what does surprise me a little is that organizations are willing to admit that they lack this visibility and that it continues to be an ongoing issue.

According to their data, 55% of surveyed organizations said they had four or more certificate outages over the last two years! And 73% said that their organizations still experience unplanned downtime and outages due to mismanaged digital certificates.

So, what can be done to help you address this lack of visibility and poor certificate management within your organization?

Here at Hashed Out, we're all about helping our readers avoid common PKI certificate management mistakes. One of the things we always emphasize is the importance of having visibility over your PKI. An issue that many admins have is that they're trying to manage their keys and certificates using manual methods such as Excel spreadsheets. This is not only clunky and cumbersome, but it leads to a variety of issues.

One such example is shadow IT certificates. If you're not the only person in charge of installing, renewing, and managing X.509 digital certificates, then some certificates can get installed that you don't know about. And certificates that you may have installed yourself may fall through the cracks and expire without your knowledge. And you can't effectively manage what you don't know you have.

Using a reliable and reputable certificate management solution can help you to avoid this issue. The best certificate management tools enable you to

This provides you with full visibility of your public key infrastructure. Considering that many organizations believe they have at least 10,000 certificates, you can see how trying to manually manage these assets is virtually impossible.
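The visibility problem boils down to knowing what you have and when it expires. A minimal sketch of the triage step a certificate management tool performs is below; the inventory, hostnames and dates are hypothetical, and in practice the list would come from an automated discovery scan rather than being hand-maintained.

```python
from datetime import date, timedelta

# Hypothetical inventory -- in practice, populated by a discovery scan.
inventory = [
    {"cn": "www.example.com",    "expires": date(2020, 4, 10)},
    {"cn": "api.example.com",    "expires": date(2020, 11, 2)},
    {"cn": "legacy.example.com", "expires": date(2020, 3, 1)},  # already expired
]

def triage(certs, today, window_days=30):
    """Split certificates into expired / expiring-soon / healthy buckets."""
    horizon = today + timedelta(days=window_days)
    expired = [c for c in certs if c["expires"] < today]
    soon = [c for c in certs if today <= c["expires"] <= horizon]
    healthy = [c for c in certs if c["expires"] > horizon]
    return expired, soon, healthy

expired, soon, healthy = triage(inventory, today=date(2020, 3, 15))
print([c["cn"] for c in expired])  # ['legacy.example.com']
print([c["cn"] for c in soon])     # ['www.example.com']
```

Run continuously against a complete, automatically discovered inventory, this is the difference between finding an expired certificate during triage and finding it during an outage.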


We get it. Everyone's busy and, frankly, there just aren't enough hours in the day to handle every task that comes our way. But that doesn't change the importance of having a specific team or department that's responsible for handling essential tasks.

Despite this need, study respondents indicate that digital certificate budgets and responsibility ownership are lacking. The tasks, responsibilities, and budgetary requirements associated with certificate management are often spread among various departments within different organizations. Essentially, there's no clear center of excellence for cryptography.

Their findings also report that just over a third (38%) of organizations claim to have the human resources dedicated to their PKI deployment. Part of this might be because of stagnant cybersecurity budgets in comparison to the industry's growing costs, or it could be related to the challenges companies report facing in terms of hiring and retaining talent.

Organizations represented in the KeyFactor/Ponemon Institute study reported spending only 16% of their budgets on PKI. That's approximately $3 million from the reported average IT security annual budget of $19.4 million! And they also discovered that the responsibilities and ownership are frequently spread among other departments:

In the U.S., we're experiencing some of the lowest unemployment levels in more than two decades. The U.S. Bureau of Labor Statistics (BLS) reports that, as of January 2020, the unemployment rate is 2.0% for college grads and 3.8% for high school graduates. We're experiencing the lowest unemployment rates in IT security and technology, which is literally at 0%, according to Cybersecurity Ventures.

While this is great for jobseekers, it's not as great for organizations looking to hire them. Why? Because it implies that there's greater demand for skilled workers than there are people looking for jobs. This means that businesses and organizations are competing for talent. So, what can you do to combat growing workloads when you have static resources?

Some organizations are turning to automation and the use of artificial intelligence (AI). Automation can help reduce the load on your staff and augment their capabilities by eliminating menial tasks from their workloads. Think predictive analytics, language processing, authentication, and log analysis to identify anything unusual. Using AI helps to free up your employees so they can focus some of their attention on higher-level priorities and tasks.

One example of automation in PKI is a certificate management solution. You can use this tool to gain visibility into your PKI and discover shadow certificates. Its also invaluable in terms of helping your team effectively manage all aspects of the certificate lifecycle and avoid certificate expirations, which Gartner estimates can cost an average of $300,000 per hour.

SSL/TLS certificates are a must for any ecommerce business (or any website, really, that wants to rank on Google and other search engines). And as more organizations readily adopt PKI solutions, there are more keys and digital certificates to manage. Using certificate management tools and other automation solutions helps you not only streamline your operations and make them more effective, but also control rising operational costs.

While certificate outages are a major cause of concern, the responses received during the study indicate that failed audits due to insufficient key management practices, rogue or compromised certificate authorities (CAs), and misuse of code signing certificates and keys are even bigger areas of concern. This is true both in terms of financial costs and compliance.

The seriousness of failed audits and compliance headed up the rankings (4.1 on a 1-10 scale, where 1 is the least serious problem and 10 is the most serious). In particular, survey respondents are worried about insufficient or unenforced key management policies and practices. The next most serious issue related to man-in-the-middle (MitM) and phishing attack vulnerabilities that stem from CA compromise.

We mentioned earlier that nearly three-quarters (73%) of respondents indicate that they experience unplanned outages and downtime due to mismanaged digital certificates. These occurrences are more frequent than unplanned outages that result from certificate expiration. What makes these numbers even more dire is that disruptive outages are expected to keep increasing rather than decreasing. According to the report:

59 percent of respondents say the misuse of keys and certificates by cybercriminals is increasing the need to better secure these critical assets. Yet, more than half (54 percent) of respondents are concerned about their ability to secure keys and certificates throughout all stages of their lifecycle – from generation to revocation.

If you're using a private CA, it's not really surprising when things go sour. One of the best things you can do to avoid issues relating to rogue or compromised certificate authorities is to work with established, reputable commercial CAs who provide managed PKI services. It would be best to stay away from free PKI certificate providers because they lack the support and resources that commercial digital certificate providers have at their disposal.

The final insight we'll share from the survey is that respondents' concerns stemming from post-quantum cryptography are decreasing – for now. The KeyFactor and Ponemon report says:

Only 47 percent of respondents are concerned about the impact that quantum computing will have on their key and certificate management practices, but we expect this number will rise as recent advances in quantum technology bring us closer to the potential breaking point of the keys and algorithms we rely upon today.

Essentially, there is and has been hype surrounding the topic for several years. But until quantum computing is available at the commercial level, we'll overestimate the potential negative impacts rather than highlight its positive impacts on security, Ponemon said.

Hickman says that quantum computing is our future reality – it's just a matter of when, not if, it will become a thing. That's why the industry's work on post-quantum algorithms is critical (see our previous blog post highlighting DigiCert's work on post-quantum cryptography) and why organizations need to:

Rarely have we seen something in this industry with the potential cataclysmic effect of quantum, and the disruptive nature that it will bring from a security standpoint, says Hickman, who emphasizes the importance of planning, which seems to be taking a back seat in terms of being considered a priority.

Hickman continues:

Having a plan, understanding where your digital assets live, where your cryptography is deployed, having ways to manage that crypto is absolutely important. Things are going to happen along the way, such as the deprecation of algorithms. But you'll be able to reuse that same plan and actually validate it to make sure that you're ready for a post-quantum world.

From these survey responses, it's obvious that there's no one clear owner of PKI budgets and efforts across multi-discipline and multi-functional teams. And there's also no one agreed-upon method that these surveyed organizations rely on to deal with these increasing crypto responsibilities. But it's obvious that having a governance process in place and clear visibility of your public key infrastructure are essential to improving a business's certificate management capabilities. Part of this entails establishing a cryptographic center of excellence if one doesn't already exist.

The increasing use of encryption technologies, digital certificates, etc. for compliance with regulations and policies dictates the need for better certificate management practices. And as operational costs continue to increase without a parallel increase in operating budgets to cover them, automation will become more important the closer we get to a PQC world.


Novel error-correction scheme developed for quantum computers – News – The University of Sydney

Dr Arne Grimsmo from Sydney Nano and the School of Physics. Photo: Stefanie Zingsheim

Scientists in Australia have developed a new approach to reducing the errors that plague experimental quantum computers, a step that could remove a critical roadblock preventing them from scaling up to full working machines.

By taking advantage of the infinite geometric space of a particular quantum system made up of bosons, the researchers, led by Dr Arne Grimsmo from the University of Sydney, have developed quantum error correction codes that should reduce the number of physical quantum switches, or qubits, required to scale up these machines to a useful size.

"The beauty of these codes is they are platform agnostic and can be developed to work with a wide range of quantum hardware systems," Dr Grimsmo said.

"Many different types of bosonic error correction codes have been demonstrated experimentally, such as cat codes and binomial codes," he said. "What we have done in our paper is unify these and other codes into a common framework."
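For readers unfamiliar with these encodings, the simplest cat code (a standard textbook construction, not the specific unified framework of this paper) stores a logical qubit in superpositions of two coherent states of a single bosonic mode:

```latex
|0_L\rangle \propto |\alpha\rangle + |{-\alpha}\rangle \quad (\text{even photon number}), \qquad
|1_L\rangle \propto |\alpha\rangle - |{-\alpha}\rangle \quad (\text{odd photon number}).
```

Because losing a single photon flips the photon-number parity, repeated parity measurements can reveal loss errors without disturbing the encoded information.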

The research, published this week in Physical Review X, was jointly written with Dr Joshua Combes from the University of Queensland and Dr Ben Baragiola from RMIT University. The collaboration is across two leading quantum research centres in Australia, the ARC Centre of Excellence for Engineered Quantum Machines and the ARC Centre of Excellence for Quantum Computation and Communication Technology.

"Our hope is that the robustness offered by spacing things out in an infinite Hilbert space gives you a qubit that is very robust, because it can tolerate common errors like photon loss," said Dr Grimsmo from the University of Sydney Nano Institute and School of Physics.

Scientists in universities and at tech companies across the planet are working towards building a universal, fault-tolerant quantum computer. The great promise of these devices is that they could be used to solve problems beyond the reach of classical supercomputers in fields as varied as materials science, drug discovery, security, and cryptography.

With Google last year declaring it has a machine that has achieved quantum supremacy (performing an arguably useless task, but one beyond the scope of a classical computer), interest in the field of quantum computing and engineering continues to rise.

But to build a quantum machine that can do anything useful will require thousands, if not millions, of quantum bits operating without being overwhelmed by errors.

And qubits are, by their very nature, error prone. The quantumness that allows them to perform a completely different type of computing operation means they are highly fragile and susceptible to electromagnetic and other interference.

Identifying, removing and reducing errors in quantum computation is one of the central tasks facing physicists working in this field.

Quantum computers perform their tasks by encoding information utilising quantum superposition, a fundamental facet of nature in which the final outcome of a physical system is unresolved until it is measured. Until that point, the information exists in a state of multiple possible outcomes.
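In standard textbook notation (not taken from the article itself), such a superposition of a single qubit is written

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
```

where a measurement yields outcome 0 with probability |α|² and outcome 1 with probability |β|², and the result is unresolved until the measurement is made.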

Dr Grimsmo said: "One of the most fundamental challenges for realising quantum computers is the fragile nature of quantum superpositions. Fortunately, it is possible to overcome this issue using quantum error correction."

"This is done by encoding information redundantly, allowing the correction of errors as they happen during a quantum computation. The standard approach to achieve this is to use a large number of distinguishable particles as information carriers. Common examples are arrays of electrons, trapped ions or quantum electrical circuits."

However, this approach requires a large network of physical qubits in order to operate a single logical qubit that does the processing work you require.

This need to create a large network of physical qubits to support the work of a single operating qubit is a non-trivial barrier towards constructing large-scale quantum machines.

Dr Grimsmo said: "In this work, we consider an alternative approach based on encoding quantum information in collections of bosons." The most common type of boson is the photon: a packet of electromagnetic energy and a massless particle of light.

By trapping bosons in a particular microwave or optical cavity, they become indistinguishable from one another, unlike, say, an array of trapped ions, which are identifiable by their location.

"The advantage of this approach is that large numbers of bosons can be trapped in a single quantum system, such as photons trapped in a high-quality optical or microwave cavity," Dr Grimsmo said. "This could drastically reduce the number of physical systems required to build a quantum computer."

The researchers hope their foundational work will help build a roadmap towards fault tolerance in quantum computing.

Go here to see the original:
Novel error-correction scheme developed for quantum computers - News - The University of Sydney

The top 4 reasons Edward Snowden deserves a fair trial | TheHill – The Hill

It's been nearly seven years since Edward Snowden shook the world by orchestrating the largest security leak in world history. Still today, there is little question as to how profoundly Snowden's courageous actions impacted the conversation on mass surveillance, not just within the U.S. but around the world.

"Of course I would like to return to the United States," said Snowden in a September 2019 interview with CBS. "But if I'm going to spend the rest of my life in prison, then one bottom line demand that we all have to agree to is at least I get a fair trial ... The government wants to have a different kind of trial ... They want to be able to close the courtroom. They want the public not to be able to know what's going on."

As Congress argues over whether to renew the Foreign Intelligence Surveillance Act (FISA) before the approaching expiration date of March 15 (a renewal that would allow federal officials to continue seizing business records by extending Section 215 of the Patriot Act), the time is now to understand why Snowden should be afforded a fair shot in the courtroom.

In June 1917, Congress passed the Espionage Act in an attempt to prevent insubordination within the U.S. military and silence critics of America's involvement in World War I. In his 1915 State of the Union address, President Woodrow Wilson begged Congress to pass such a law, declaring, "Such creatures of passion, disloyalty, and anarchy must be crushed out ... they are infinitely malignant, and the hand of our power should close over them at once."

After the bill's passage, antiwar activist Charles Schenck was arrested for distributing flyers encouraging men to resist Wilson's draft. That same year, socialist Eugene V. Debs was sentenced to 10 years in prison, stripped of his citizenship, and disenfranchised for life over a speech he made criticizing the war. In January 1919, the Supreme Court heard the cases Schenck v. United States and Debs v. United States, concluding that neither man's arrest constituted a violation of the First Amendment.

Then, in 1973, the law was unsuccessfully used against economist Daniel Ellsberg, the man behind the release of the Pentagon Papers in 1971. Following repeated attempts on the part of intelligence officials to intimidate The New York Times into ceasing publication of the documents, an appellate court finally succeeded in temporarily ordering the newspaper to discontinue publication. While the courts eventually restored publication rights to the press, other victims of the Espionage Act throughout history have not been as fortunate, including journalist Victor L. Berger, activists Emma Goldman and Alexander Berkman, former U.S. Army soldier Chelsea Manning, and former Defense Intelligence Agency (DIA) employee Henry Kyle Frese.

In reviewing the advances made in recent decades toward the construction of an Orwellian national security state, it's not an overstatement to say that what once seemed possible only on the ink-lettered pages of dystopian science fiction novels has entered the realm of reality.

A 2016 study from Georgetown University, for example, estimates that at least one in four state or local police departments in the U.S. now has access to facial recognition software. Coupled with the steady increase in surveillance cameras, it's not difficult to see where the current trends are headed. In 2019, the estimated number of surveillance cameras in the U.S. stood at 70 million; experts now expect this number to reach 85 million by 2021.

And when it comes to the data, the only country with a surveillance apparatus worthy of comparison to the U.S. is communist China, where police officers regularly sport facial recognition glasses and artificial intelligence to hunt down suspects and political enemies. Tragically, Beijing recently began weaponizing these technologies against the country's Muslim and minority populations (particularly in the Xinjiang region).

Even considering the fact that China has more than four times the population of the U.S., however, the two countries stand neck and neck in the number of cameras per individual: while China averages one camera for every 4.1 people, the U.S. trails closely behind with one for every 4.6. By a similar metric, the U.S. surpasses China with more closed-circuit television (CCTV) cameras per person. To no surprise, this has predominantly impacted America's major cities; Chicago, for instance, now has 30,000 surveillance cameras, equipped with night vision, facial recognition, and license plate-reading technology.

Roughly 66 percent of Americans agree that the potential risks outweigh the benefits regarding government collection of data, according to polls released by the Pew Research Center in November 2019. Out of the same pool of respondents, 84 percent answered that they feel very little or no control over the data collected about them by the government.

A similar poll released by Morning Consult in December 2019 found that 79 percent of Americans believe Congress should make crafting a bill to better protect consumers online data a priority, while 65 percent answered that data privacy is one of the biggest issues our society faces and legislation is needed to stop data breaches.

Ironically enough, it seems as though the American surveillance state wants everything monitored and scrutinized except itself. Shielding the so-called intelligence community from the public eye is a towering wall of government agencies, unaccountable bureaucratic review processes, and strict nondisclosure agreements designed to muzzle dissenters. One day after the release of Snowden's memoir Permanent Record in September 2019, the Justice Department filed a civil lawsuit against him for alleged breaches of previous nondisclosure agreements.

Regardless of what people think of Snowden, it is irrefutable that, without others like him, a tremendous amount of our government's failures and atrocities would go unseen (as untold scores likely already do). Without Daniel Ellsberg, the world would be oblivious to the true extent of Lyndon B. Johnson's lies surrounding Vietnam. Without Mark Felt, Richard Nixon might have finished a second term as president. And without Edward Snowden, the realization that the American government is spying on its own citizens might never have occurred.

Politicians pay lip service to the Constitution. Snowden put his life on the line to defend the Fourth Amendment and the right to privacy for every American. The least we can do is offer him a fair trial.

Cliff Maloney is the president of Young Americans for Liberty (YAL).

*DISCLAIMER: For the record, Cliff Maloney is not the Cliff Snowden references in Permanent Record.

Visit link:
The top 4 reasons Edward Snowden deserves a fair trial | TheHill - The Hill

Edward Snowden warns about an upcoming bill that threatens digital security and freedom of speech. – Coinnounce

US Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) introduced a bill that threatens digital security and freedom of speech in the United States. The proposed EARN IT bill exposes providers of private messaging services to serious legal risk, potentially forcing them to undermine their tools' security. Edward Snowden warned about the proposed bill.

The former NSA contractor and well-known whistleblower Edward Snowden tweeted that the government is attempting to exploit anger at tech companies to pass a law that intentionally undermines digital security and censors speech. He added that the fact that such a law is even being considered by Congress is a national disgrace. An online privacy advocate and the author of Permanent Record, Snowden revealed how the National Security Agency had tools of mass surveillance and used them to spy on U.S. and foreign citizens.

The proposed bill undermines Section 230, which permits online platforms to let their users share their thoughts without fear of liability. The EARN IT Act weakens Section 230, potentially forcing platforms to kick innocent people off the Internet entirely. The bill aims to prevent child exploitation on the Internet and forces online platforms to adhere to best practices.

This is not the first time US lawmakers have blamed end-to-end encryption for weakening national security and enabling child exploitation; earlier, Attorney General William Barr blamed encryption for sexual crimes against children.

Read the original:
Edward Snowden warns about an upcoming bill that threatens digital security and freedom of speech. - Coinnounce

Over Objections From Privacy Advocates, Tame Surveillance Bill Sails Through the House – Reason

It took all of a day after the text was released for the House of Representatives to vote for a surveillance reform and reauthorization bill that privacy groups (and some members of Congress) say doesn't go nearly far enough.

On Tuesday evening, Reps. Jerry Nadler (D-N.Y.) and Adam Schiff (D-Calif.) released the text of the USA Freedom Reauthorization Act. On Wednesday evening, it sailed through the House by a vote of 278–136.

The bill renews but revises the USA Freedom Act, which was passed in 2015 after Edward Snowden revealed that the National Security Agency (NSA) had secretly been collecting and storing massive amounts of Americans' phone and internet records. The USA Freedom Act was a compromise between those who pointed out these acts violated Americans' privacy and Fourth Amendment rights and those who insisted the United States needed the info to fight terrorism. The law allowed the NSA and FBI to access these collected records under more strict guidelines and authorized the use of roving wiretaps to keep track of "lone wolf" terrorists.

The USA Freedom Act sunsets this weekend, and privacy activists on both the left and the right have used the opportunity to push for stronger protections from secret surveillance and unwarranted data collection.

Last night's vote suggests we will not see tougher reforms. The bill does include some milder (but nevertheless welcome) changes. It ends the records retention program entirely (not as big a deal as it might sound, since the NSA has already abandoned it). The Foreign Intelligence Surveillance Act (FISA) Court will have modestly expanded powers to bring in outside advisers when the feds want a warrant and to review decisions. And the attorney general will have to sign off on any secret surveillance warrant applications that target federal officials or federal candidates for office. But the bill does not grant civil libertarians' demands for limits on how business records can be secretly collected and used, for stronger protections against secret surveillance of First Amendment-protected activities, or for a stronger role for those outside advisers.

The vote did not follow party lines. There is a consistent group of Democrats and Republicans who support strong privacy and Fourth Amendment protections, even if they don't see eye to eye on most other issues. Among the 60 Republicans who voted against the limper reforms were Louie Gohmert of Texas, Thomas Massie of Kentucky, Jeff Duncan of South Carolina, and Tom McClintock of California. Among the 75 Democrats who voted no were Zoe Lofgren of California, Alexandria Ocasio-Cortez of New York, Ted Lieu of California, Rashida Tlaib of Michigan, and Tulsi Gabbard of Hawaii. Independent Justin Amash of Michigan also voted against the bill.

But they're the minority. The larger, more establishment-minded leadership of Congress seems fine with kicking the can down the road yet again (the law will sunset once more in 2023) and reforming as little as they can get away with.

One of the more notable "yea" votes came from Rep. Devin Nunes (R-Calif.). A vocal defender of the president, Nunes has long insisted that the feds and the FISA Court abused their powers when they snooped on Trump aide Carter Page. (Subsequent investigation shows he was right to be concerned.) Nunes has even gone so far as to call for the entire FISA Court to be dismantled. Yet when it came time to vote, he, as he has done historically, voted to preserve the wider surveillance authorities.

This bill wouldn't have done anything to stop the FBI from wiretapping Page. He was neither a candidate for office nor a federal official at the time. But it will make it harder for the feds to wiretap Nunes.

The legislation now heads to the Senate, where Rand Paul (R-Ky.) is trying to use his influence with Trump to stop the bill and demand stronger reforms. A tweet from Trump suggests Paul has the president's ear.

We went through this once before. That time, Trump wound up approving legislation that actually expanded the feds' authority to secretly spy on American citizens. Let's hope this isn't yet another case where the people in power care only about whether they are the ones being surveilled.

Read the rest here:
Over Objections From Privacy Advocates, Tame Surveillance Bill Sails Through the House - Reason

Congress Is Ready for Surveillance Reform – Will the House Rise to the Occasion? – brennancenter.org

This was originally published by Just Security.

Something quite unusual happened last fall. After an internal Justice Department watchdog reported that the government had submitted flawed documents to get court approval for surveillance of former Trump campaign aide Carter Page, Republicans who had long supported broad surveillance powers began calling for greater civil liberties protections. A rare window opened for reform of the Foreign Intelligence Surveillance Act (FISA), the law that governs surveillance of suspected terrorists, spies, and other foreign agents. That window coincides with an upcoming deadline to reauthorize three expiring FISA authorities, providing a perfect opportunity for Congress to make much-needed changes.

Something much more ordinary is happening now: Members of Congress are poised to let this opportunity pass them by. Senators Ron Wyden (D-Or.) and Steve Daines (R-Mont.) have introduced a strong, bipartisan reform bill, but the administration and intelligence hawks in Congress are seeking straight reauthorization without reform. The House Judiciary Committee decided to split the baby, and on Monday unveiled modest reform legislation that is missing key safeguards against surveillance abuse. Committee members should insist on strengthening the bill when it goes to markup on Wednesday.

What we've learned since the last reauthorization of Section 215

The main authority at issue is Section 215 of the 2001 USA Patriot Act, which amended FISA's business records provision. It allows the government to get an order from the secret FISA Court compelling a third party, such as a bank or telephone company, to turn over any "tangible thing" in its possession. The government need only show that the item is relevant to an ongoing counterterrorism or foreign intelligence investigation.

This is one of the lowest legal standards available: so low, in fact, that the FISA Court interpreted it to justify the National Security Agency's (NSA) indiscriminate or bulk collection of Americans' telephone records, based on the theory that some relevant information must surely be buried within them. What's more, while the government cannot obtain the content of phone calls or emails under Section 215, it can obtain information that is often every bit as personal, such as medical records, book sales, library records, and tax returns.

In 2015, after Edward Snowden revealed the NSA's phone records program, Congress passed the USA Freedom Act to prohibit bulk collection under Section 215 and other FISA authorities. But there were two flaws in Congress's approach. First, instead of requiring the government to focus on individual suspects, Congress allowed the government to collect information about entire companies, organizations, and IP addresses, which can encompass thousands of people. Civil liberties advocates thus feared that bulk collection might simply be replaced with "bulky" collection, i.e., collection that is tied to a particular target but still sweeps in the information of large numbers of innocent Americans.

A second problem was that Congress created a new program within Section 215 for collecting Americans' phone records. Under the so-called CDR Program (for "call detail records"), the government can collect the phone records of suspected terrorists and anyone who has ever been in contact with them. With two independent executive branch commissions having concluded that the NSA's bulk collection program provided little to no counterterrorism value, it was unclear why a scaled-down version was deemed necessary.

Five years later, we can see how these flaws have played out. The law requires the government to report both the number of targets of Section 215 orders and a number that indicates how many people's information is actually collected. There were 56 targets of Section 215 orders in 2018, but 214,860 people had their information collected under those orders. If that isn't bulky collection, it's hard to think what is.

As for the CDR program, it has been, without exaggeration, a disaster. Although intended to replace bulk collection, it swept in more than a billion phone records between 2015 and 2018. Moreover, in 2018, the NSA disclosed that it had been collecting data it was not legally authorized to collect, due to technical problems it was unable to fix. NSA officials bluntly admitted that the program wasn't generating sufficient benefit to justify its continuance, and the agency decided to pause collection in early 2019. A government study declassified and released today confirms that the program provided scant value while costing $100 million to operate.

The opportunity now

As a starting point, any legislation to reauthorize Section 215 must revoke authorization for the CDR program. Even the leaders of the Senate intelligence committee, which is notoriously pro-surveillance and anti-reform, have acknowledged this necessity.

The bill offered by Senators Wyden and Daines, with a companion bill offered by Representatives Lofgren (D-Ca.), Jayapal (D-Wash.), and Davidson (R-Ohio), would go much further. It would tackle the problem of bulky collection by limiting the targets of Section 215 collection to foreign powers, agents of foreign powers, or people in contact with them: basically, people who could themselves be legitimate focuses of a counterterrorism or foreign intelligence investigation.

Their proposal would also take on a host of other problems with FISA. For instance, the government's default practice is to keep the records it collects for at least five years, even if the information is highly personal and contains no evidence of wrongdoing. This exposes Americans' private data to theft, negligent mishandling, or abuse. The Wyden-Daines bill would require the government to delete data within three years unless it is determined to constitute foreign intelligence. It also specifies certain categories of data, such as geolocation information and web browsing history, that the government must obtain a warrant to access. And it includes several provisions that would enhance transparency and oversight of the FISA process.

The bill offered by the House Judiciary Committee, by contrast, is much less ambitious. It would end the CDR program, as it must. But it would do nothing to address the problem of continuing bulky collection under Section 215. It allows the government to continue hoarding the data of innocent Americans. It prohibits the use of Section 215 to collect items that would otherwise require a warrant, but it does not specify what any of those items are, leaving the government with far too much wiggle room to rely on its own cramped legal interpretations of the Fourth Amendment. And its transparency and oversight provisions, while important, are less far-reaching than those in the Wyden-Daines bill.

Committee members should insist on amendments to fill these gaps during Wednesday's markup. Opportunities to build meaningful civil liberties protections into our sprawling surveillance laws are few and far between. The politics of fear that underlie most national security debates generally create a one-way ratchet, in which government authorities grow ever broader while protections for Americans' privacy are eroded. We are in that rarest of moments in which Democrats and Republicans alike are calling for civil liberties enhancements. The House Judiciary Committee should not allow this moment to pass with a business-as-usual compromise between the reform seekers and the defenders of the status quo.

Go here to read the rest:
Congress Is Ready for Surveillance Reform – Will the House Rise to the Occasion? – brennancenter.org

Bitcoin falls past $6,000, leading a cryptocurrency rout as global markets slip on coronavirus concerns – Business Insider

  1. Bitcoin falls past $6,000, leading a cryptocurrency rout as global markets slip on coronavirus concerns  Business Insider
  2. Cryptocurrency Market Update: Bitcoin tumbles to $7,300, why no one wants to buy the dip?  FXStreet
  3. Bitcoin, Ethereum, XRP Nosedive Strategist Says Trump Effect Underway in Cryptocurrency Markets  The Daily Hodl
  4. Rare Bitcoin Price Chart Pattern May Be the Cryptocurrency's Last Hope  newsBTC
  5. Ethereum Price Analysis: ETH Sees 30% Sell-Off As Cryptocurrency Market Risk-On Develops $100 Next?  Coingape

See the rest here:
Bitcoin falls past $6,000, leading a cryptocurrency rout as global markets slip on coronavirus concerns - Business Insider