Looking for simplicity in the cloud? The future is going to be open and hybrid – The Register

Sponsored Feature

Hybrid cloud used to be seen as a waypoint on the transition to an entirely-cloud-based future. But it's becoming increasingly clear that it's likely to be the default destination for many organizations, and this leaves them facing tough choices about how they manage their tech budgets and their tech workforces.

Research conducted by analyst firm IDC and commissioned by Red Hat points out that enterprises are "under tremendous pressure when it comes to modern application deployment." While businesses are demanding that tech teams develop complex applications, the entire industry faces a shortage of developers.

So it's all the more frustrating when those skilled developers and other specialists are forced to spend their time maintaining infrastructure and legacy tech, rather than focusing on delivering modern applications, or introducing key new technologies such as data science and machine learning.

The cloud ostensibly offers a quick path to new technologies and more easily manageable infrastructure. But the reality is more nuanced. Organizations will inevitably have legacy applications which are unsuitable for lifting and shifting onto the cloud. And data sovereignty and resiliency issues can also force companies to keep data and infrastructure confined to particular locations or regions.

Working with multiple cloud providers may address some of the geographic coverage or sovereignty concerns, but can be more complex, with different providers having different tooling or technology offerings, for example.

While going all in with a single cloud provider might provide more consistency in terms of tooling and services, concerns about vendor lock-in mean many organizations remain wary of trusting their entire operation to a single vendor and its associated software stack.

All of which has led to something of an industry consensus that hybrid cloud will be the preferred operating model for most organizations for the foreseeable future.

As IDC points out, "for developers to create applications for multi-cloud environments, the major challenge lies in inconsistency across complex technology ecosystems. The opportunity therein lies in abstraction of those complex technology ecosystems to reduce friction for developers and enable high availability of applications in production."

That generally leads to a broader reappraisal of how applications are managed and developed. If an organization needs to retain some capacity on-prem or maintain legacy code or datastores, it should still be able to take advantage of the cloud or at least enjoy a cloud-like experience. At the same time, when applications are developed for the cloud, they should be designed from the beginning to be portable between on-prem environments and the customer's choice of clouds without the need for refactoring, or for developers to retrain on new tools or platforms.

This creates the need for an "open" hybrid cloud, an IT architecture that offers workload portability, orchestration, and management across environments, including on-prem and one or more clouds. This means development teams and their businesses can utilize the optimal solution for a given workload or task, to the point of choosing a specific cloud provider for an AI workload, for example.

In practice, this has meant applications have become containerized, with an orchestration layer taking care of container management and deployment. Together with the use of APIs to connect containers and services, and modern development pipelines built around continuous integration and deployment, this makes it easier to update and modernize applications, certainly compared to traditional "monoliths".

Kubernetes may have become the default when it comes to open-source container orchestration, but it can be a challenge to implement. And developers will still need other tooling, as well as to take care of security and authentication issues and the underlying infrastructure.

Of course, major cloud platforms offer their own services and native tooling. Sometimes these are clearly proprietary and sometimes they do appear in sync with wider trends in open source. But this can mask divergences between cloud platforms' offerings and upstream projects, which mean features and tooling differ between providers. In some instances, license changes within open source projects have resulted in cloud providers offering commercial services based on an earlier version.

Both scenarios will be a concern for development teams who want to make their applications as open as possible, to ensure they can be as portable as possible.

It's worth noting that in the latest version of Red Hat's State of Enterprise Open Source report, 80 percent of IT leaders said they expected to increase their use of enterprise open source software, while over three quarters considered it "instrumental" in enabling them to take advantage of hybrid cloud architectures.

But when it comes to adopting containers, almost half of those same IT leaders worried that they do not have the necessary skills (43 percent) and almost as many were concerned that a lack of the necessary development staff or resources will hold them back.

Favoring open source software should mean that companies have a broader talent pool to recruit from because so many teams, or individual developers, already have a strong bias towards open source software. Those preferences often influence the tools and platforms developers want to work with, the projects they want to work on, and even the employers they will consider joining.

The key question then is how can developers and development teams get access to a common application and service software development and deployment experience no matter where they are working, and without the configuration and management headaches that can eat up precious developer bandwidth.

IDC identifies the cloud services model as the best approach to enabling organizations to "shift those valuable resources to making software that competitively differentiates, brings in revenue, and improves business operations, empowering developers to do more of what they want to be doing."

That's also the approach Red Hat has taken by putting OpenShift, its enterprise container platform, at the heart of a broad portfolio of managed cloud services. OpenShift provides container orchestration across on-prem, private, and public clouds. While most of OpenShift is self-managed (including versions for the public cloud), there are also managed versions available on AWS, Azure, GCP, and IBM Cloud. This delivers extended support, as well as tested and verified fixes for upstream container platforms like Kubernetes. It also means validated integrations, for storage and third-party plug-ins, for example, and software-defined networking.

It also provides a full range of additional integrated services that are essential for developers building cloud native applications. These include OpenShift API management, which allows developers to configure, publish and monitor APIs for their cloud-native applications.

Similarly OpenShift Streams for Apache Kafka lets developers exploit real time data streams while offloading the management of the underlying infrastructure. And that allows them to enable the real time, analytics driven and scalable applications which are needed to power modern ecommerce or the sort of instant decision making or fraud detection that businesses now expect.

In addition, OpenShift Database Access offers on-demand data access, sharing, storage, synchronization and analysis. OpenShift Service Registry allows teams to publish, discover and reuse artifacts built on these services, which further accelerates the development process. And OpenShift Data Science helps machine learning and AI specialists to build their models, and ease the deployment of AI and ML applications to production.

Customers using the managed versions of OpenShift on the public cloud also get access to Red Hat's global Site Reliability Engineering team, which provides the proactive management and automated scaling that underpins resilient cloud native applications.

There are other benefits too. Red Hat's dynamic approach to the underlying infrastructure ensures that customers are only using the capacity they need when they need it, for example. So scaling up an application around a major event or key business period can be automated, with resource provisioning levels returning to their previous state immediately once the surge in demand has passed.

Because these are all managed services, enterprises and other organizations don't need to allocate responsibility for the day-to-day management of the platform onto their dev or ops teams. Developers can get resources up and running quickly, without the need to wait for infrastructure, or indeed, the experts to manage it.

That's important when it comes to addressing some of the key challenges organizations typically face in forging ahead with their digital transformation - i.e. overcoming the in-house skills gap and the technical debt which sometimes arises when overworked developers rush to finalize an application only to spend more precious time refactoring it later.

A managed service also means organizations get to enjoy the three key tactical benefits for development teams identified by IDC. Firstly, it allows development teams to "get out of the business of infrastructure administration", and to focus on developing features that deliver value for the business, and for end users.

Multi-cloud and hybrid environments already account for the bulk of the market, with OpenShift designed to provide the common platform that enables flexible applications and services to work seamlessly together across both on- and off-prem infrastructure. That delivers consistency and abstracts complexity, which makes developers more productive, while their applications are more likely to be resilient and fault tolerant as a result.

Last, but definitely not least, it provides a consistent experience that simply makes for happier developers.

If resources including Dev and Ops team members are not being squandered and budgeting becomes more transparent, surely this keeps the CEO and CFO happy too?

Sponsored by Red Hat.


Here is the original post:
Looking for simplicity in the cloud? The future is going to be open and hybrid - The Register

What Is Cryptography: Definition and Common Cryptography Techniques

What Is Cryptography?

The parameters that define data compilation, storage, and transport are constantly expanding in the digital age. While this growth adds convenience and efficiency to our lives, it also provides additional avenues for data breaches and compromises to occur. This aspect of technology makes the concept of cryptography more important than ever, and it also makes it an exciting field for students to consider. It is important for individuals to be able to answer the question of what cryptography is before pursuing a position in the field.

As the use of tech-centric data storage and transport increases in the corporate world, the need for qualified cryptographers will likely grow. The US Bureau of Labor Statistics (BLS) projects a 28 percent job growth in the information security field between 2016 and 2026, a figure that's significantly higher than the 7 percent job growth BLS predicts for the average profession.

Earning an advanced degree, such as a Master of Professional Studies in Cybersecurity Management, can help students to stand out in an increasingly competitive field. The degree can demonstrate to prospective employers that job candidates have a deep knowledge of the fundamental concepts and techniques that govern cryptography. As such, it can also function as one of the first steps toward a satisfying career in a thriving and critical industry.

Cryptography is the use of coding to secure computer networks, online systems, and digital data. It is a concept whose endgame is to keep vital information that is subject to potential data breaches safe and confidential. While the term tends to be associated with the modern digital era, the concept has played a significant role for centuries in military and government operations. For example, the Navajo code talkers from World War II, who communicated in their native tongue, deployed cryptography tactics to convey crucial data.

The primary element behind cryptography is the creation of ciphers. Ciphers are written codes that disguise key information from entities that aren't authorized for access. The stronger the cipher, the more effective the security.
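To make the idea of a cipher concrete, here is a minimal sketch in Python of a classical shift (Caesar) cipher. It is illustrative only: a shift cipher is trivially breakable and is shown purely to demonstrate how a cipher hides information from anyone who lacks the rule, or key, used to produce it.

```python
# Toy shift (Caesar) cipher: illustrative only, trivially breakable.
def shift_cipher(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping within the alphabet."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)

ciphertext = shift_cipher("Attack at dawn", 3)   # 'Dwwdfn dw gdzq'
plaintext = shift_cipher(ciphertext, -3)         # 'Attack at dawn'
```

Anyone who knows the shift value can reverse the scrambling; anyone who does not sees only gibberish, which is the essence of what stronger, modern ciphers do with far larger keys.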

In the modern business era, cryptographers use a host of tech-driven techniques to protect data used by the private and public sectors, from credit card information to medical records. While these techniques differ in approach, they collectively carry the same goal of keeping data secure.

The primary technique behind the concept of cryptography is a process known as encryption. Encryption deploys algorithm strategies to rearrange vital information. Only those who have a bit of code known as a key can unlock the information and access the data in a non-scrambled form.

Also known as Rijndael, the Advanced Encryption Standard (AES) is an encryption technique that uses block ciphers: algorithms that encrypt data in fixed-size blocks (128 bits in the case of AES), so that the ciphertext produced is the same length as the padded plain text that was entered. A 144-character message, for instance, fills nine 16-byte blocks and produces 144 characters of ciphertext.

AES provides the backbone of several security tactics that tend to go by names familiar to the public sector. Compression tools, such as WinZip, use AES, as do virtual private networks (VPNs). Even peer-to-peer messaging apps, such as Facebook Messenger, use AES to keep their data secure.
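As a rough illustration of AES in practice, the sketch below uses the third-party Python cryptography package (an assumed choice; none of the tools mentioned above mandate it) to encrypt and decrypt a short message with AES in GCM mode. The key, nonce, and sample message are invented for the example.

```python
# Minimal AES sketch using the third-party 'cryptography' package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
nonce = os.urandom(12)                      # must be unique for every message
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"card number 4111 1111 1111 1111", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"card number 4111 1111 1111 1111"
```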

One of the encryption approaches used in cryptography is private key encryption, which uses a single secret key to access data. Since this form of encryption entails only one key, it tends to be efficient to use; however, its efficiency also increases the importance of protecting the key from leaks.

Public key encryption is more complex than private key encryption because it uses two types of keys to grant access. The first key is public, and it is distributed and shared with everyone. The second key is private, and it is always withheld from the public. This private key is also what is used to produce what is referred to as a digital signature.

A hash function converts data into a string of letters and numbers. This string, which is produced in a uniform length, can be referred to by many names, including hash value, digital fingerprint, and checksum. The code produced for a piece of data is like a snowflake: no two codes should be identical. Checking these codes can help cryptographers confirm correct data, and it can also help them spot potential attacks posing as trusted programs or data.
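For instance, Python's standard hashlib module exposes common hash functions; this minimal sketch (with made-up messages) computes SHA-256 digests and shows that a tiny change to the data produces a completely different, fixed-length fingerprint.

```python
import hashlib

digest1 = hashlib.sha256(b"transfer $100 to Alice").hexdigest()
digest2 = hashlib.sha256(b"transfer $900 to Alice").hexdigest()

print(digest1)             # 64 hex characters, fixed length regardless of input size
print(digest1 == digest2)  # False: a one-character change yields a completely different hash
```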

Any business that deals with private information can view cryptography as a necessary tool for its organization. The techniques that cryptographers utilize can ensure the confidential transfer of private data. Techniques relating to digital signatures can prevent imposters from intercepting corporate data, while companies can use hash function techniques to maintain the integrity of data. Collectively, these benefits allow companies to conduct business in the digital era with complete confidence.

Go here to see the original:
What Is Cryptography: Definition and Common Cryptography Techniques

What is Cryptography? – Kaspersky

Cryptography is the study of secure communications techniques that allow only the sender and intended recipient of a message to view its contents. The term is derived from the Greek word kryptos, which means hidden. It is closely associated with encryption, which is the act of scrambling ordinary text into what's known as ciphertext and then back again upon arrival. In addition, cryptography also covers the obfuscation of information in images using techniques such as microdots or merging. Ancient Egyptians were known to use these methods in complex hieroglyphics, and Roman Emperor Julius Caesar is credited with using one of the first modern ciphers.

When transmitting electronic data, the most common use of cryptography is to encrypt and decrypt email and other plain-text messages. The simplest method uses the symmetric or "secret key" system. Here, data is encrypted using a secret key, and then both the encoded message and secret key are sent to the recipient for decryption. The problem? If the message is intercepted, a third party has everything they need to decrypt and read the message. To address this issue, cryptologists devised the asymmetric or "public key" system. In this case, every user has two keys: one public and one private. Senders request the public key of their intended recipient, encrypt the message and send it along. When the message arrives, only the recipient's private key will decode it, meaning theft is of no use without the corresponding private key.

Users should always encrypt any messages they send, ideally using a form of public key encryption. It's also a good idea to encrypt critical or sensitive files: anything from sets of family photos to company data like personnel records or accounting history. Look for a security solution that includes strong cryptography algorithms along with an easy-to-use interface. This helps ensure the regular use of encryption functions and prevents data loss even if a mobile device, hard drive or storage medium falls into the wrong hands.


Read more from the original source:
What is Cryptography? - Kaspersky

What is Cryptography? Types of Algorithms & How Does It Work?

With the growing worry of losing one's privacy, concern for consumer safety is at an all-time high. Technology has made our lives so much easier while still delivering a basic measure of assurance for our personal information. It is critical to learn how to protect our data and keep up with emerging technology.



Cryptography is the study of encrypting and decrypting data to prevent unauthorized access. The method for turning the ciphertext back into readable form should be known only to the sender and the intended recipient. With the advancement of modern data security, we can now change our data such that only the intended recipient can understand it.

Cryptography allows for the secure transmission of digital data between willing parties. It is used to safeguard company secrets, secure classified information, and protect sensitive information from fraudulent activity, among other things. "Crypto" means hidden and "graph" means writing.

Encryption is a fundamental component of cryptography, as it jumbles up data using various algorithms. Data decryption is the method of undoing the work done by encryption so that the data can be read again. Cryptography depends on both of these methods.

In cryptography, a plaintext message is converted to ciphertext using a technique, or a combination of numerical computations, that makes it appear incomprehensible to the untrained eye.

Have a look at Intellipaat's Cyber Security courses and sign up today!

Cryptography is classified into two categories based on the types of keys and encryption algorithms: symmetric key (private key) cryptography and asymmetric key (public key) cryptography.

Let's take a closer look at each type.

Also known as secret key cryptography, private key encryption encrypts data using a single key that only the sender and receiver know. The secret key must be known by both the sender and the receiver, but it should not be sent across the channel; if a hacker obtains the key, deciphering the message becomes easy. Ideally, the key should be agreed upon when the sender and the receiver can communicate over a separate, secure channel, although even this is not a perfect method. Because the key remains the same, it is simpler to deliver a message to a particular receiver. The Data Encryption Standard (DES) is one of the most widely used symmetric key systems.

For instance, Tom is sending a message to Mary that he does not want anyone else to see, so he'd like to encrypt it. Tom and Mary share the same key, and they will use that same key for encrypting and decrypting. Here's how it works: first, Tom encrypts his message with the key. His message is now encrypted and scrambled, and it can't be read by anyone else. When Mary receives the encrypted message, she decrypts it with the same key so she can read it in plaintext.
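A minimal sketch of that shared-key exchange, assuming the third-party Python cryptography package and its high-level Fernet recipe (the blog itself names no library), with Tom and Mary as placeholder parties:

```python
# Tom and Mary share one secret key (symmetric encryption).
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()         # agreed on out-of-band by Tom and Mary
tom = Fernet(shared_key)
mary = Fernet(shared_key)

token = tom.encrypt(b"Meet me at noon")    # Tom encrypts with the shared key
message = mary.decrypt(token)              # Mary decrypts with the same key
assert message == b"Meet me at noon"
```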

Enroll in our Cyber Security course in Bangalore and get certified.

Asymmetric key cryptography, also known as public-key cryptography, consists of two keys: a private key, which is used by the receiver, and a public key, which is announced to the public. Two different keys are used in this method to encrypt and decrypt the data. These two distinct keys are mathematically linked and are generated in pairs. The public key is accessible to anyone, whereas the private key is only accessible to the person who generates the two keys.

For example, Bob wants to send an encrypted message to Alice, and they agree to encrypt it using public-key encryption. The receiver, not the sender, initiates the public key method: the receiver generates the key pair used to encrypt the sender's message. Everyone has access to the public key, but the receiver, Alice, is the only one who has access to the private key. The following is how it works:

Step 1: Alice generates two keys: one public and one private. Alice stores the public key on a public key server that anyone can access.

Step 2: Alice informs Bob of the location of her public key.

Step 3: Bob obtains Alice's public key by following Alice's instructions.

Step 4: Bob composes a message and encrypts it with Alice's public key. Bob sends Alice the encrypted message via the network.

Step 5: Alice decrypts Bob's message using her private key.
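Those five steps can be sketched in code. The snippet below is an illustration only, assuming RSA with OAEP padding from the third-party Python cryptography package; the blog does not specify which public-key algorithm Alice and Bob use.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Step 1: Alice generates a key pair and publishes the public key.
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()

# Steps 2-4: Bob obtains Alice's public key and encrypts his message with it.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = alice_public.encrypt(b"Hello Alice", oaep)

# Step 5: Only Alice's private key can decrypt the message.
plaintext = alice_private.decrypt(ciphertext, oaep)
assert plaintext == b"Hello Alice"
```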

Although Alice's private key can confirm that no one read or changed the document while it was in transit, it cannot confirm the sender. Because Alice's public key is available to the public, anyone can use it to encrypt a document and send it to Alice while posing as Bob. Another technique, the digital signature, is required to prove who the sender is.

A digital signature is equivalent to a handwritten signature. It is an electronic verification of the sender, and digital signatures are commonly used for software distribution and financial transactions. The digital signature serves three purposes: authenticating the sender, protecting the integrity of the message, and providing non-repudiation, so the sender cannot later deny having signed it.
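As a rough sketch of signing and verification, the example below uses Ed25519 signatures from the third-party Python cryptography package; both the algorithm and the library are assumed choices, since the blog does not prescribe either.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # kept secret by the sender
verify_key = signing_key.public_key()        # shared with anyone who must verify

signature = signing_key.sign(b"I, Bob, agree to these terms")

try:
    verify_key.verify(signature, b"I, Bob, agree to these terms")   # passes silently
    verify_key.verify(signature, b"I, Bob, agree to other terms")   # raises
except InvalidSignature:
    print("signature does not match this message")
```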

Let's look at an example of cryptography to see what it is:

Samuel wishes to communicate with his colleague Yary, who is currently residing in another country. The message contains trade secrets that should not be accessed or seen by any third party. He sends the message via a public platform such as Skype or WhatsApp. The foremost aim is to create a secure connection.

Assume Evy is a hacker who has obtained access to the message. Evy can now change or corrupt the message before it reaches Yary, so Yary receives an altered message. Neither Samuel nor Yary is aware of this covert tampering. The outcomes are dreadful.

Now, cryptography can help. It can aid in the security of the connection between Samuel and Yary.

Now that we understand what cryptography is, let us learn how cryptography aids in the security of messages.

To protect the message, Samuel first converts the readable message, or plain text, into a series of digits using a cryptographic algorithm and then encrypts it with a key; the scrambled result is what cryptographers call ciphertext. Samuel sends the encrypted message to Yary over the internet. Even if Evy gains access to it and modifies it before it reaches Yary, Yary still requires the key to decrypt Samuel's message, and the decryption key converts the message from cipher text back to plain text.

Because Evy altered the message in transit, the result of the decryption will not be the original plain text but an error.

The error indicates that the message has been changed and is no longer the original message. As a result, encryption is critical for secure communication.
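That failure mode can be illustrated with authenticated encryption: if the ciphertext is modified in transit, decryption refuses to return anything at all. The sketch below assumes AES-GCM from the third-party Python cryptography package, purely as an example, with Evy's tampering simulated by flipping one bit.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"trade secret for Yary", None)

tampered = bytearray(ciphertext)
tampered[0] ^= 0x01                      # Evy flips a single bit in transit

try:
    AESGCM(key).decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    print("message was altered; decryption refused")
```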

Plain text is simply a human-readable message, text, or information.

Cipher text is the output produced when the input plain text is converted by the encryption process. Basically, cipher text is plain text that has been made unreadable.

The history of cryptography finds its roots in Egypt around 4000 years ago. The Egyptians used hieroglyphics, the oldest cryptography technique, to communicate with each other. Later, in 500 BC, the technique was modified by replacing the characters with alphabets based on some secret rule known to only a few. This rule came to be known as the key to decipher hidden codes or messages.

Later, from the 15th century onward, more sophisticated techniques evolved, such as polyalphabetic ciphers like the Vigenère cipher and, much later, coding machines like the Enigma rotor machine. Out of these developments, modern cryptography was born.

The functioning of cryptography revolves around cryptographic algorithms. Cryptographic algorithms, or ciphers, are mathematical functions that are combined with keys, such as a phrase, digit, or word, to encrypt text. The effectiveness depends on the strength of the cryptographic algorithm and the secrecy of the key.

Multiple complex combinations of algorithms and keys boost the effectiveness of a cryptosystem.

Some major techniques of cryptography include symmetric key encryption, asymmetric (public key) encryption, hashing, and digital signatures.

Also, look into our blog on Hill Cipher and learn more about ciphers!

Cryptography algorithms are the means of altering data from a readable form to a protected form and back to the readable form. Cryptographic algorithms are used for important tasks such as data encryption, authentication, and digital signatures.

RSA is an asymmetric cryptographic algorithm. The RSA algorithm works on a block cipher concept, converting plain text into ciphertext at the sender's side and back into plain text at the receiver's side. If the public key of User A is used for encryption, the private key of the same user must be used for decryption.

Step 1: Select two prime numbers p and q, where p is not equal to q.

Step 2: Calculate n = p*q and z = (p-1)*(q-1)

Step 3: Choose a number e such that e is less than n and has no common factor (other than 1) with z.

Step 4: Find a number d such that (e*d - 1) is exactly divisible by z.

Step 5: Keys are generated using n, d, and e

Step 6: Encryption

c = m^e mod n

(where m is plain text and c is ciphertext)

Step 7: Decryption

m = c^d mod n

Step 8: Public key is shared and the private key is hidden.

Note: (e, n) is the public key used for encryption, and (d, n) is the private key used for decryption.
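A toy walkthrough of those steps, using the small, well-known example values p = 61 and q = 53 (real RSA keys use primes that are hundreds of digits long, so this is for illustration only):

```python
# Textbook RSA with toy numbers: for illustration only, never for real use.
p, q = 61, 53                 # Step 1: two distinct primes
n = p * q                     # Step 2: n = 3233
z = (p - 1) * (q - 1)         #         z = 3120
e = 17                        # Step 3: e < n, shares no factor with z
d = pow(e, -1, z)             # Step 4: d such that (e*d - 1) is divisible by z -> 2753 (Python 3.8+)

m = 65                        # plaintext encoded as a number smaller than n
c = pow(m, e, n)              # Step 6: c = m^e mod n  -> 2790
assert pow(c, d, n) == m      # Step 7: m = c^d mod n recovers the plaintext
```

Running the snippet reproduces the familiar textbook values n = 3233 and d = 2753, and the message 65 encrypts to the ciphertext 2790.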

The RSA algorithm has the drawback of being quite inefficient in cases in which large volumes of data must be authenticated by the same virtual machine. A foreign entity must substantiate the dependability of authentication tokens, and data is routed through middlemen who may tamper with the cryptosystem.

The Data Encryption Standard (DES) is a symmetric cipher algorithm that uses the block cipher method for encryption and decryption. DES is a landmark among cryptographic algorithms and is based on the Feistel cipher structure.

DES operates on a plaintext block of 64 bits and returns ciphertext of the same size.

Step 1: Sub-key Generation

Step 2: Encryption
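A minimal sketch of DES encrypting and decrypting one padded block, assuming the third-party PyCryptodome package is installed (the blog names no library). DES itself is long obsolete and ECB mode is used only to keep the demo short; neither should protect real data.

```python
# DES demo with PyCryptodome (pip install pycryptodome). DES is historic and insecure.
import os
from Crypto.Cipher import DES
from Crypto.Util.Padding import pad, unpad

key = os.urandom(8)                          # DES uses a 64-bit (8-byte) key
cipher = DES.new(key, DES.MODE_ECB)          # ECB chosen only for brevity

ciphertext = cipher.encrypt(pad(b"SECRET", DES.block_size))   # padded to one 8-byte block
plaintext = unpad(DES.new(key, DES.MODE_ECB).decrypt(ciphertext), DES.block_size)
assert plaintext == b"SECRET"
```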

Preparing for an Ethical Hacking job interview? Have a look at our blog on ethical hacking interview questions and start preparing!

Advantages of Cryptography

Disadvantages of Cryptography

There are two types of cryptography attacks, passive and active attacks.

In a passive attack, the intruder can only see the private data but can hardly make any changes to it or alter it. Passive attacks can be more dangerous precisely because the intruder only observes the messages without altering them: no one ever knows that an attack is taking place, and the supposedly hidden messages are no longer hidden.

In this type of attack, the intruder can alter the private data.

Enroll in the Ethical Hacking course offered by Intellipaat and train under their experts.

Cybersecurity has continued to evolve into one of the most innovative technologies, and cybersecurity and cryptography are closely interrelated.

Cryptography is now being used to hold confidential data, including private passwords, securely online. Cybersecurity experts use it to create ciphertext and other protective measures that enforce and insulate business and personal information.


At these significant stages, cryptography comes to the rescue. Having a solid foundation in cryptography basics allows us to secure our confidential data.

Whether you'd like to gain knowledge of cybersecurity for your personal use or a new career, you can sign up for a beginner lesson to get a fundamental insight into the prevailing scene of data security.

Please leave all your cybersecurity issues in the Intellipaat Cybersecurity community.

See the article here:
What is Cryptography? Types of Algorithms & How Does It Work?

Protect your privacy with cybersecurity and cryptography – Geeky Gadgets

If you would like to learn more about how to protect yourself or teach your students how to stay safe online, you might be interested to know that the recent Hello World magazine, created by the team over at the Raspberry Pi Foundation, features articles on security issues and the ethics and legalities of hacking, advice about teaching cybersecurity to primary-school children, and an introduction to quantum cryptography.

Other articles and features within the Hello World Issue 18 digital magazine, which is now available to download for free, include using computational methods to analyze literature, developing computational thinking skills through Japanese logic puzzles, and top tips for representing computing at school open days.

We also share some fantastic ideas for making this topic as hands-on as possible, including through using network robots, using tools and techniques used by real-life penetration testers, and by taking part in a capture the flag competition. https://www.raspberrypi.org/blog/classroom-activity-machine-learning-accuracy-ethics-hello-world-18/

For the worried, there is absolutely no coding involved in this resource; the machine behind the portal does the hard work for you. For my Year 9 classes (students aged 13 to 14) undertaking a short, three-week module, this was ideal. The coding is important, but was not my focus. For this module, I'm more concerned with the fuzzy end of AI, including how credible AI decisions are, and the elephant-in-the-room aspect of bias and potential for harm.

Source : RPiF

More:
Protect your privacy with cybersecurity and cryptography - Geeky Gadgets

Former Google CEO: Bitcoin is a remarkable achievement of cryptography – The Cryptonomist

The famous 2014 statement by former Google CEO Eric Schmidt, "Bitcoin is a remarkable achievement of cryptography," has resurfaced on the web.

It was 2014 when Eric Schmidt, former CEO and chairman of Google, made his comments about Bitcoin during a speech at the Computer History Museum, and right now the video clip is once again circulating on the web.

Former Google CEO and chairman Eric Schmidt says Bitcoin is a remarkable cryptographic achievement.

Among other statements, Schmidt emphasized his interest in the architecture and design behind Bitcoin.

Not only that, Schmidt also gave a future perspective, stating that in his opinion BTC is "an amazing advancement" and that "lots of people will build businesses on top of that."

Schmidt is an American businessman and software engineer who served as CEO of Google from 2001 to 2011 and oversaw one of the company's most significant growth phases. He remained as executive chairman until 2017 and as a technical advisor until 2020. Schmidt currently has a net worth of $20 billion, making him the 70th richest person in the world, according to Forbes.

Schmidt had stated in April of this year that he was a crypto-investor, although he highlighted that his interest was more dedicated to blockchain and Web3 than to virtual currencies.

Schmidt reportedly did not name any specific cryptocurrency he currently owns, emphasizing only that he just started investing in cryptocurrencies.

Being more interested in Web3, Schmidt had described it as follows:

A new model [of the internet] where you as an individual [can] control your identity, and where you don't have a centralized manager, is very powerful. It's very seductive and it's very decentralized. I remember that feeling when I was 25, that decentralized would be everything.

Since leaving Google, Schmidt has devoted most of his time to philanthropic endeavors through his Schmidt Futures initiative, where he funds basic research in fields such as artificial intelligence, biology and energy.

Apparently, despite his favorable stance on Bitcoin and cryptocurrencies with the prospect of decentralization and the Web3, Schmidt appears to be opposed to the metaverse.

During the Aspen Ideas Festival event in Colorado, the former Google CEO expressed all his scepticism towards the metaverse and Facebook, which has been called Meta since October 2021 precisely to highlight its position in this regard.

Essentially, Schmidt says there is currently no clear definition of the concept of the metaverse and how it will affect people's lives.


The rest is here:
Former Google CEO: Bitcoin is a remarkable achievement of cryptography - The Cryptonomist

Saving Private Keys From The Courts – Bitcoin Magazine

This is an opinion editorial by Christopher Allen, founder and executive director of the Blockchain Commons.

*Quotes from this article stem from sources here and here.

Increasingly, attorneys in the United States are asking courts to force the disclosure of cryptographic private keys as part of discovery or other pre-trial motions, and increasingly courts are acceding to those demands.

Though this is a relatively recent phenomenon, it's part of a larger problem of law enforcement seeking back doors to cryptography that goes back at least to the U.S. government's failed introduction of the Clipper Chip in 1993.

Unfortunately, today's attacks on private keys in the courtroom have been more successful, creating an existential threat to digital assets, data and other information protected by digital keys. That danger arises from a fundamental disconnect between this practice and the realities of technologies that leverage public-key cryptography for security: private-key disclosure can cause irreparable harm, including the loss of funds and the distortion of digital identities.

As a result, we need to support legislation that will protect digital keys while allowing courts to access information and assets in a way that better recognizes those realities. The private-key disclosure law currently being considered in Wyoming is an excellent example of the sort of legislation that we could put forth and advocate for in order to maintain the proper protection for our digital assets and identities.

Wyoming Senate Filing 2021-0105

No person shall be compelled to produce a private key or make a private key known to any other person in any civil, administrative, legislative or other proceeding in this state that relates to a digital asset, other interest or right to which the private key provides access unless a public key is unavailable or unable to disclose the requisite information with respect to the digital asset, other interest or right. This paragraph shall not be interpreted to prohibit any lawful proceeding that compels a person to produce or disclose a digital asset, other interest or right to which a private key provides access, or to disclose information about the digital asset, other interest or right, provided that the proceeding does not require production or disclosure of the private key.

The forced disclosure of private keys is deeply harmful because it fundamentally runs at odds with how private keys work. Attorneys (and courts) are usually trying to force the disclosure of information or (later) the relinquishment of assets, but they're treating private keys just like physical keys that they can demand, use and give back.

Private keys do not match any of these realities. As Wyoming State Legislature Senate Minority Leader Chris Rothfuss says:

"There is no perfect analog for a modern cryptographic private key in existing statute or case law; it is unique in its form and function. As we build a policy framework around digital assets, it is essential that we appropriately recognize and reflect the characteristics of the underlying public / private key and cryptographic technologies. Without clear, unambiguous legal protection for the sanctity of the private key, it is impossible to ensure the integrity of the associated digital assets, information, smart contracts and identities.

That appropriate recognition and reflection requires us to understand that:

1. Private keys are not assets.

Private keys are fundamentally the way we exert authority in the digital space, an interface between our physical reality and the digital reality. They may give us the ability to control a digital asset: to store it, to send it or to use it. Similarly, they may give us the ability to decrypt protected data or to verify a digital identity. However, they are not the assets, the data nor the identity themselves.

It's the obvious difference between your car and your electronic key fob. The one is an asset, while the other lets you control that asset.

As Jon Callas, Director of Technology Projects at the Electronic Frontier Foundation (EFF), says:

They don't even want the key, they want the data; asking for the key is like asking for the filing cabinet rather than the file.

2. Private keys are not the proper tool for discovery.

Treating private keys as a tool to ensure the discovery of information fundamentally misunderstands their purpose. Private keys are not how we see something in digital space, but instead how we exert authority in digital space!

Turning back to comparisons, it's the difference between a ledger and a pen. If you wanted accounting information, you'd ask for the ledger; you wouldn't ask for the pen, especially not if it was a pen that allowed you to write undetectably in the handwriting of the accountant!

Former federal prosecutor Mary Beth Buchanan, when offering testimony in favor of Wyoming's private-key disclosure law, said:

The court could order a disclosure or an accounting of all the digital assets that are held, and then those assets could be disclosed and the location of whether they are held across different platforms or even different wallets. But giving the key is actually giving access to those assets. That is the difference.

Fortunately, there is an electronic tool that meets the needs of discovery: public keys.

Wyoming has recognized that in their legislation, which says that a private key should never be required if a public key would do the job (and they parenthetically noted at hearings that their current understanding is that a public key will always do the job). If our concern is revealing information that will help to catch and prosecute criminals, then public keys are the answer.

3. Private keys are not physical.

Electronic private keys and physical keys are very different. A physical key could pass through many hands and there could be the expectation that it was very likely not duplicated (especially if it were a special key, such as a safe-deposit box key), and that when the key was returned to the original holder, they would once again have control of all of the linked assets. The same is not true for a private key, which could be easily duplicated by any of the many hands it passed through, with no way to ascertain that that had happened.

Returning to the example of a car's key fob, it would not be appropriate to force the disclosure of the unique serial number stored within a car fob, for the same reason it's not appropriate to force the disclosure of a private key. Doing so would give anyone who gets that serial number the ability to create a new fob and steal your car!

4. Private keys serve many purposes.

Finally, private keys are likely to have a lot more purposes than physical keys, especially if a court decides to go after not just a specific private key, but the root key from an HD wallet or a seed phrase. Root keys (and seeds) might be used to protect a wide variety of assets as well as private data. They may also be used to control identities and to offer irrefutable proof that the owner agreed to something through digital signatures.

The authoritative uses of private keys are so wide and all-encompassing that it's hard to come up with a physical equivalent. The closest analogy, which I explained at one of the Wyoming hearings, is that this would be like a court demanding access to a hotel room by requiring the hotel's master key, which can provide access to all rooms. But a private key is more than that; it would be as if the court also required that someone with signatory powers at the hotel sign a bunch of blank contracts and blank checks. The potential for harm from the disclosure of a private key is just that high for someone who is using it for a variety of purposes, and there will be more and more people doing so as the importance of the digital world continues to increase.

Going beyond the fact that a private key is the wrong tool for courts and that it's often being used in the wrong way, there are a number of other problematic realities related to the courts themselves and how and when they're trying to access private keys.

5. Courts are not prepared to protect private keys.

To start with, courts don't have the experience needed to protect private keys. This danger is made worse by the fact that a single private key is likely to pass through the hands of many different court staff over time.

But this isn't just about courts. The problem of creating safe ways to transfer private keys is far bigger; it's something that the cryptographic field as a whole does not have good answers for. I attested in Wyoming that the immense difficulties of transferring a private key are a risk that allows the bearing of false witness. Putting courts without cryptocurrency expertise in the middle of the problem could be catastrophic.

Perhaps cryptographers will resolve these issues in time, and perhaps someday courts will be able to share in that expertise if they decide doing so is a good use of their time and resources, but we need to consider keys whose disclosures are being forced now.

6. Courts are requiring premature disclosure.

The current situation with key disclosure is even more problematic because it's occurring as part of discovery or other pre-trial motions. Discovery rulings are almost impossible to appeal, which means that in today's environment key holders have almost no recourse for protecting the token of their own authority in digital space.

7. Courts are more demanding of digital assets than physical assets.

We recognize that courts should be able to require the usage of a key. Compelling usage is nothing new, but the private key is not required for that; a simple court order is enough.

If someone refuses to use their private key in a way compelled by a court, that's nothing new either. The physical world already has plenty of examples of people refusing such orders, such as by hiding assets or just refusing to pay judgements. They are handled with sanctions such as contempt of court.

Asking for more from the electronic world is an overreach of traditional judgements that also creates much greater repercussions.

Using the wrong tool for the wrong reasons and putting it in hands not ready to deal with it will have calamitous results. Here are some of the most obvious repercussions.

1. Asset Theft.

Obviously, there is a danger of the assets being stolen, as a private key gives total control over those assets. These assets could go far beyond the specifics of what a court is interested in because of the multitude of uses for keys.

2. Asset Loss.

Beyond the problem of purposeful theft, keys could be lost, and with them digital assets. Former federal prosecutor Mary Beth Buchanan raised this concern in her testimony, saying:

"Evidence is lost all the time."

If that evidence was a private key, which might hold a variety of assets, information, and proofs of identity, the loss could be tremendous.

3. Collateral Damage.

Thefts or losses resulting from the disclosure of a private key could also go far beyond an individual before the court. Increasingly, assets are being held in multisignatures, which may grant multiple people control over the same assets. By requiring the disclosure of a key, a court could negatively impact people entirely unrelated to the proceedings.

4. Identity Theft.

Because private keys might also protect the identifier for a digital identity, their loss, theft or misuse could put someone's entire digital life at risk. If a key was copied, someone else could pretend to be the holder and even make digital signatures that are legally binding for them.

Protecting private keys is one of the most important things that Blockchain Commons has ever worked on. As I said:

"I find the protections of this Private Key Disclosure bill crucial for the future of digital rights."

Wyoming State Legislature Senate Minority Leader Chris Rothfuss affirmed this, adding:

Christopher Allen has been an invaluable member of our blockchain policy community, bringing a lifetime of technical expertise to advise our committee work and inform our legislative drafting. Mr. Allen has emphasized the particular importance of protecting private keys from any form of compulsory disclosure.

We need your help to make it a reality.

If you're an experienced member of the cryptocurrency or digital asset field or a human rights activist, please submit your own testimony in support of the Wyoming Select Committee on Blockchain, Financial Technology and Digital Innovation Technology. The bill will be coming up for further discussion on September 19-20 in Laramie, Wyoming.

But Wyoming is just the start. They are doing an excellent job of leading the way, but we need other states and countries to follow. If you have connections to another legislature, please suggest they introduce legislation with similar language to Wyoming's bill.

Even if you don't feel comfortable talking with a legislature, you can help by advocating for the protection of private keys as something different than assets.

Ultimately, our new world of digital assets and digital information will succeed or fail based upon how we lay its foundations today. It could become a safe space for us or a dangerous Wild West.

Properly protecting private keys (and using public keys and other tools for legitimate judicial needs) is a keystone that will help us to build a sturdy edifice.

This is a guest post by Christopher Allen. Opinions expressed are entirely their own and do not necessarily reflect those of BTC Inc or Bitcoin Magazine.

Read the rest here:
Saving Private Keys From The Courts - Bitcoin Magazine

NTT Research and NTT Corporation Engage in Breakthrough Research at Crypto 2022 – Business Wire

SUNNYVALE, Calif.--(BUSINESS WIRE)--NTT Research, Inc., a division of NTT (TYO:9432), today announced that members of its Cryptography & Information Security (CIS) Lab authored or co-authored 17 papers that are being delivered at Crypto 2022, one of the leading international conferences on cryptologic research. A paper co-authored by CIS Lab Director Brent Waters won the event's Best Paper Award, his second such award in the past three years. In addition, NTT Corporation and NTT Social Informatics Laboratories contributed another six papers. Organized by the International Association for Cryptologic Research (IACR), this year's hybrid event will take place in Santa Barbara, August 13-18. NTT Research is one of the conference's eight gold-level sponsors.

The Crypto 2022 program committee, comprising more than 70 experts, accepted nearly 100 submissions this year. According to the posted conference program, the 23 papers associated with the CIS Lab and other NTT cryptographers will be presented in sessions on the following topics: coding theory, distributed algorithms, idealized models, lattice-based signatures, lattice-based zero knowledge, lower bounds, post-quantum cryptography, quantum cryptography, secret sharing, secure hash functions, secure messaging and secure multiparty computation. Dr. Waters will present his paper, titled "Batch Arguments for NP and More from Standard Bilinear Group Assumptions," on Tuesday, August 16, at 11:20 (PST) during a session that acknowledges it with the conference's only Best Paper award this year. Two best early career researcher papers will also be recognized. Dr. Waters, who is also a professor of computer science at the University of Texas (UT) at Austin, was named CIS Lab Director in June, succeeding Dr. Tatsuaki Okamoto. At Crypto 2020, a paper co-authored by Dr. Waters won one of three Best Paper Awards given that year. (One of the other winners was co-authored by a senior researcher at NTT Secure Platforms Labs.) Dr. Waters' collaborator on this year's paper is Dr. David Wu, an assistant professor at UT Austin. Their breakthrough is to show how to batch the proofs of nondeterministic polynomial (NP)-class and other problems using standard assumptions and relatively non-complex techniques.

"It is exciting to see our CIS Lab and other parts of NTT engaged in so much cutting-edge research," NTT Research President and CEO Kazuhiro Gomi said. "Congratulations to Brent Waters and David Wu for their Best Paper Award, and the research itself, which appears to have such timely applications. Best wishes to all for a very productive conference."

The Waters-Wu paper introduces a new kind of proof system, which in cryptography consists of a proving party and a verifying party, where the prover is trying to convince the verifier of a statement. Typically, the verifier relies on the prover to provide a witness. An example might be a digital signature, acting as a witness to the statement that a software update is not malware, but in fact produced by the vendor. In this paper, the authors develop techniques that allow for efficiently batching the transmission and verification of several statements. In so doing, they improve upon what Dr. Waters said are two main lines of prior work in this direction, namely: one that uses less standard and thus more risky computational assumptions for security; and the other, which uses certain types of lattice assumptions and probabilistic checkable proofs.

"In this work we show that batchable proof systems can be achieved from standard and well-studied assumptions on bilinear groups," Dr. Waters said. "Moreover, our techniques are very direct and show that complex probabilistic checkable proofs are not needed."

Two potential use cases involve the aggregation of signatures and the delegation of computation to cloud services. The first case relates to applications such as blockchains, in which each update consists of several signatures representing various transactions that users want to have processed. Instead of simply including all signatures from the transaction as part of an update (the default solution, which can incur a significant overhead), batchable verification enables aggregating these into one shorter object, the size of which is independent of the number of signatures included. The second case involves the increasingly large amounts of information storage and processing being done via cloud services.

"The problem of delegation asks, how can I verify that a computation was performed correctly in a more efficient manner than simply performing it myself," Waters said. "Our work on batch argument systems can be immediately applied to tackle that problem."

The proceedings of the IACR's flagship conferences, which draw the world's leading cryptographers, are published by Springer in its Lecture Notes in Computer Science series. Dr. Yehuda Lindell, CEO and co-founder of Unbound Security, is scheduled to deliver this year's invited talk. To attend, see this registration page.

About NTT Research

NTT Research opened its offices in July 2019 as a new Silicon Valley startup to conduct basic research and advance technologies that promote positive change for humankind. Currently, three labs are housed at NTT Research facilities in Sunnyvale: the Physics and Informatics (PHI) Lab, the Cryptography and Information Security (CIS) Lab, and the Medical and Health Informatics (MEI) Lab. The organization aims to upgrade reality in three areas: 1) quantum information, neuroscience and photonics; 2) cryptographic and information security; and 3) medical and health informatics. NTT Research is part of NTT, a global technology and business solutions provider with an annual R&D budget of $3.6 billion.

NTT and the NTT logo are registered trademarks or trademarks of NIPPON TELEGRAPH AND TELEPHONE CORPORATION and/or its affiliates. All other referenced product names are trademarks of their respective owners. 2022 NIPPON TELEGRAPH AND TELEPHONE CORPORATION

Read the original post:
NTT Research and NTT Corporation Engage in Breakthrough Research at Crypto 2022 - Business Wire

Can WhatsApp messages be secure and encryptedbut traceable at the same time? – EurekAlert

Cryptographers love an enigma, a problem to solve, and this one has it all. Indestructible codes, secret notes, encryption and decryption.

Here's the puzzle: someone wants to send a secure message online. It has to be so private, so secret, that they can deny they ever sent it. If someone leaks the message, it can never be traced back to the sender. It's all very Mission: Impossible. But there's a kicker: if that message peddles abuse or misinformation, maybe threatens violence, then anonymity may need to go out the window; the sender needs to be held to account.

And that's the challenge: is there a way to allow people to send confidential, secure, untraceable messages, but still track any menacing ones?

Mayank Varia might have cracked the conundrum. A cryptographer and computer scientist, Varia is an expert on the societal impact of algorithms and programs, developing systems that balance privacy and security with transparency and social justice. Working with a team of Boston University computer scientists, he's designed a program called Hecate (fittingly named after the ancient Greek goddess of magic and spells) that can be bolted onto a secure messaging app to beef up its confidentiality, while also allowing moderators to crack down on abuse. The team is presenting its findings at the 31st USENIX Security Symposium.

"Our goal in cryptography is to build tools and systems that allow people to get things done safely in the digital world," says Varia, a BU Faculty of Computing & Data Sciences associate professor. "The question at play in our paper is what is the most effective way to build a mechanism for reporting abuse: the fastest, most efficient way to provide the strongest security guarantees and provide the weakest possible puncturing of that?"

It's an approach he's also applying beyond messaging apps, building online tools that allow local governments to track gender wage gaps without accessing private salary data, and that enable sexual assault victims to more safely report their attackers.

When two people chat in a private room, what they talk about is just between them: there's no paper trail, no recording; the conversation lives on in memory alone. Put the same conversation online (Twitter, Facebook, email) and it's a different story. Every word is preserved for history. Sometimes that's good, but just as often it's not. An activist in an authoritarian state trying to get word to a journalist or a patient seeking help for a private health issue might not want their words broadcast to the world or held in an archive.

That's where end-to-end encryption comes in. Popularized by apps like WhatsApp and Signal, it scrambles sent messages into an unreadable format, only decrypting them when they land on the recipient's phone. It also ensures messages sent from one person to another can't be traced back to the sender; just like that private in-person chat, it's a conversation without a trail or record, where everything is deniable.

"The goal of these deniable messaging systems is that even if my phone is compromised after we've had an encrypted messaging conversation, there are no digital breadcrumbs that will allow an external person to know for sure what we sent or even who said it," says Varia.

Amnesty International calls encryption a human right, arguing it's "an essential protection of [everyone's] rights to privacy and free speech," and especially vital for those countering corruption or challenging governments. Like much in the online world though, that privacy can be exploited or bent to more sinister ends. "There are specific times where this can be a bad thing," says Varia. "Suppose the messages someone is sending are harassing and abusive and you want to go seek help, you want to be able to prove to the moderator what the message contents were and who said them to you."

A study of elementary, middle, and high school students in Israel, where more than 97 percent of kids reportedly use WhatsApp, found 30 percent had been bullied on the app, while UK prosecutors have said end-to-end encryption could harm their ability to catch and stop child abusers. Extremist groups, from Islamic State to domestic terrorists, have leaned on encrypted apps like Telegram and Signal to spread their calls for violence.

The task for tech companies is finding a way to balance the right to privacy with the need for accountability. Hecate offers a way to do both: it allows app users to deny they ever sent a message, but also to be reported if they say something abusive.

Developed by Varia and doctoral students Rawane Issa (GRS'22) and Nicolas Alhaddad (GRS'24), Hecate starts with the accountability side of that contradictory (deniable and traceable) combination. Using the program, an app's moderator creates a unique batch of electronic signatures, or tokens, for each user. When that user sends a message, a hidden token goes along for the ride. If the recipient decides to report that message, the moderator will be able to verify the sender's token and take action. It's called asymmetric message franking.

The fail-safe, says Varia, the part that allows for deniability, is that the token is only useful to the moderator.

"The token is an encrypted statement that only the moderator knows how to read; it's like they wrote a message in invisible ink to their future self," says Varia. "The moderator is the one who builds these tokens. That's the nifty part about our system: even if the moderator goes rogue, they can't show and convince the rest of the world; they have no digital proof, no breadcrumbs they can show to anyone else."

The user can maintain deniability, at least publicly.
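To make the token mechanism concrete, here is a minimal Python sketch of that "invisible ink" idea: the moderator encrypts each sender's identity to itself, the token travels inside the end-to-end encrypted payload, and only a report lets the moderator read it. This is an illustration, not the actual Hecate construction, which additionally binds tokens to message content, supports forwarding, and preserves deniability even against a rogue moderator; the Fernet cipher and the class and function names are stand-ins chosen for brevity.

```python
# Simplified illustration of the moderator-token ("invisible ink") idea.
# NOT the real Hecate protocol: the actual scheme also binds tokens to the
# message content, supports forwarding, and stays deniable to everyone but
# the moderator. Requires the 'cryptography' package.
from cryptography.fernet import Fernet


class Moderator:
    def __init__(self):
        # Only the moderator ever holds this key, so only the moderator can
        # read the tokens it issues -- "invisible ink to its future self."
        self._box = Fernet(Fernet.generate_key())

    def issue_token(self, user_id: str) -> bytes:
        """Create a token the sender will attach to an outgoing message."""
        return self._box.encrypt(user_id.encode())

    def handle_report(self, reported_token: bytes) -> str:
        """When a recipient reports a message, recover who sent it."""
        return self._box.decrypt(reported_token).decode()


moderator = Moderator()

# The sender fetches tokens ahead of time and attaches one, inside the
# end-to-end encrypted payload, to each message sent.
token = moderator.issue_token("alice")
message = {"body": "hello", "franking_token": token}

# To anyone except the moderator the token is opaque ciphertext, so the
# sender keeps deniability. If the recipient reports the message, the
# moderator decrypts the token and identifies the sender.
print(moderator.handle_report(message["franking_token"]))  # -> alice
```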

Similar message franking systems already exist (Facebook parent Meta uses one on WhatsApp), but Varia says Hecate is faster, more secure, and future-proof in a way current programs are not.

"Hecate is the first message franking scheme that simultaneously achieves fast execution on a phone and for the moderator server, support for message forwarding, and compatibility with anonymous communication networks like Signal's sealed sender," says Varia. "Previous constructions achieved at most two of these three objectives."

The team says Hecate could be ready for implementation on apps like Signal and WhatsApp with just a few months of custom development and testing. But despite its technological advantages, Varia suggests companies approach Hecate with caution until they've fully investigated its potential societal impact.

"There's a question of 'can we build this?' There's also a question of 'should we build this?'" says Varia. "We can try to design these tools that provide safety benefits, but there might be longer dialogues and discussions with affected communities. Are we achieving the right notion of security for, say, the journalist, the dissident, the people being harassed online?"

As head of the CDS Hub for Civic Tech Impact, Varia is used to considering the societal and policy implications of his research. The hub's aim is to develop software and algorithms that advance the public interest, whether they help to fight misinformation or foster increased government transparency. A theme through recent projects is the creation of programs that, like Hecate, straddle the line between privacy and accountability.

During a recent partnership with the Boston Women's Workforce Council, for example, BU computer scientists built a gender wage gap calculator that enables companies to share salaries with the city without letting sensitive pay data leave their servers.

"We're designing tools that allow people (it sounds counterintuitive) to compute data that they cannot see," says Varia, who's a member of the federal government's Advisory Committee on Data for Evidence Building. "Maybe I want to send you a message, but I don't want you to read it; it's weird, but maybe a bunch of us are sending information and we want you to be able to do some computation over it."
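One simple way to see how servers can compute on data they never see is additive secret sharing, sketched below. The real wage-gap deployment is a more elaborate secure multiparty computation, so treat this only as an illustration of the underlying idea; the payroll figures and the three-server split are made up.

```python
# Minimal additive-secret-sharing sketch of "computing on data you cannot see."
# Each company splits its payroll total into random shares; no single server
# ever sees a real number, yet the shares still sum to the true aggregate.
import secrets

PRIME = 2**61 - 1          # all arithmetic is done modulo a large prime
NUM_SERVERS = 3


def share(value: int) -> list[int]:
    """Split a private value into NUM_SERVERS random shares that sum to it."""
    shares = [secrets.randbelow(PRIME) for _ in range(NUM_SERVERS - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


# Each company secret-shares its (private) total payroll figure.
company_payrolls = [1_200_000, 950_000, 2_300_000]
all_shares = [share(p) for p in company_payrolls]

# Server i only ever sees the i-th share from each company (random noise on
# its own) and adds those shares together.
server_totals = [
    sum(company[i] for company in all_shares) % PRIME
    for i in range(NUM_SERVERS)
]

# Recombining the servers' subtotals reveals only the aggregate.
aggregate = sum(server_totals) % PRIME
print(aggregate == sum(company_payrolls))  # True
```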

That's caught the interest of the Defense Advanced Research Projects Agency and the Naval Information Warfare Center, which both funded the work that led to Hecate and have an interest in asking computer experts to crunch data without ever seeing the secrets hidden within it.

Varia's approach to encryption could also benefit survivors of sexual abuse. He recently partnered with the San Francisco-based nonprofit Callisto to develop a new secure sexual assault reporting system. Inspired by the #MeToo movement, its goal is to help assault victims who are frightened of coming forward.

"They report their instance of sexual assault into our system and that report kind of vanishes into the ether," says Varia. "But if somebody else reports also being assaulted by the same perpetrator, then, and only then, does the system identify the existence of this match."

That information goes to a volunteer attorney, bound by attorney-client privilege, who can then work with the victims and survivors on next steps. Just like Hecate, Varia says it finds a balance between privacy and openness, between deniability and traceability.
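The match-then-reveal flow Varia describes can be sketched in a few lines. Note that Callisto's actual protocol relies on heavier cryptography (such as oblivious pseudorandom functions and secret sharing) precisely so that the server alone cannot link or brute-force identities; the keyed-hash shortcut and function names below are illustrative assumptions, not their design.

```python
# Illustration of the "match-then-reveal" flow, with hypothetical names.
# Callisto's real protocol uses stronger cryptography so the server alone
# can't link or brute-force perpetrator identities; this only shows the flow.
import hashlib
import hmac
import secrets
from collections import defaultdict

SERVER_KEY = secrets.token_bytes(32)   # held by the reporting service
pending_reports = defaultdict(list)    # keyed by blinded perpetrator ID


def blind(perpetrator_id: str) -> str:
    """Derive an opaque key so raw identities are never stored directly."""
    return hmac.new(SERVER_KEY, perpetrator_id.encode(), hashlib.sha256).hexdigest()


def notify_attorney(matched_victims: list[str]) -> None:
    # Only surfaced to a volunteer attorney, never to the public.
    print("Match found; attorney contacts:", matched_victims)


def submit_report(victim_contact: str, perpetrator_id: str) -> None:
    """Store a report; it stays sealed unless a second report matches."""
    key = blind(perpetrator_id)
    pending_reports[key].append(victim_contact)
    if len(pending_reports[key]) >= 2:
        notify_attorney(pending_reports[key])


submit_report("victim-a@example.org", "perp-123")   # vanishes into the ether
submit_report("victim-b@example.org", "perp-123")   # triggers a match
```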

"When we talk about trade-offs between privacy, digital civil liberties, and other rights, sometimes there is a natural tension," says Varia. "But we can do both: we don't have to build a system that allows for bulk surveillance, wide-scale attribution of metadata of who's talking to who; we can provide strong personal privacy and human rights, while also providing online trust and safety, and helping people who need it."

See the original post:
Can WhatsApp messages be secure and encrypted, but traceable at the same time? - EurekAlert

Why 2023 is the year of passwordless authentication – TechTarget

The explosion of available services has overwhelmed users with accounts and passwords to remember, which has led them to create simple passwords and reuse them across multiple accounts.

Unfortunately, short and easy-to-remember passwords are insecure. Using brute-force methods, a hacker can determine an eight-character password in under an hour. Malicious hackers also create dictionaries with hundreds of millions of existing usernames and passwords stolen during data breaches.
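Some rough arithmetic shows why. The search space is charset_size ** length; the guess rate assumed below is illustrative (real speeds vary by orders of magnitude with the hashing algorithm and hardware), but the gap between short, simple passwords and long, complex ones holds either way.

```python
# Back-of-the-envelope brute-force times. GUESSES_PER_SECOND is an assumption;
# real speeds depend heavily on the hash algorithm and the attacker's hardware.
GUESSES_PER_SECOND = 10**11


def worst_case_hours(charset_size: int, length: int) -> float:
    keyspace = charset_size ** length          # every possible password
    return keyspace / GUESSES_PER_SECOND / 3600


for label, charset, length in [
    ("8 lowercase letters", 26, 8),    # seconds at this rate
    ("8 mixed characters", 95, 8),     # under a day at this rate
    ("14 mixed characters", 95, 14),   # effectively forever
]:
    print(f"{label}: {worst_case_hours(charset, length):.3g} hours worst case")
```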

Threat actors may also masquerade as a trustworthy source to trick users into inadvertently revealing their usernames and passwords. Known as phishing, these social engineering attacks arrive via email, malware, typosquatting -- for example, a malicious website using the URL gogle.com instead of google.com -- and SMS texts.

To protect against brute-force and dictionary attacks, passwords need to use uppercase and lowercase letters, numbers and symbols, and be at least 14 characters long. Following these guidelines makes passwords harder to remember, which encourages reuse. Reuse, in turn, opens the door to credential stuffing attacks, where malicious hackers pair a known username with a leaked password to attempt to log in to multiple services.

Passwords are a form of knowledge-based authentication. For a user to prove they are who they claim to be, they need a secret -- the password -- that has been previously stored by the service.

Multifactor authentication (MFA) is a technique designed to strengthen the authentication process by adding possession-based authentication to knowledge-based authentication. A service can only authenticate a user when they prove they have knowledge of the shared secret in addition to something they have or are.

The ubiquity of smartphones makes the phone the ideal physical item for possession-based authentication. To prove a user is in physical possession of the device, the service sends a message -- a challenge -- to the phone, which the user must then interact with.
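A stripped-down version of that challenge flow might look like the sketch below; send_sms is a placeholder for whatever delivery channel a real service would use, and production systems add rate limiting and retry handling on top.

```python
# Bare-bones possession check via a one-time code. 'send_sms' is a stand-in
# for a real delivery channel (SMS gateway, push notification, email).
import secrets
import time

CODE_TTL_SECONDS = 300
_active_challenges = {}   # username -> (code, expiry timestamp)


def send_sms(phone: str, text: str) -> None:
    print(f"[to {phone}] {text}")


def issue_challenge(username: str, phone: str) -> None:
    code = f"{secrets.randbelow(1_000_000):06d}"          # random 6-digit code
    _active_challenges[username] = (code, time.time() + CODE_TTL_SECONDS)
    send_sms(phone, f"Your verification code is {code}")


def verify_challenge(username: str, submitted: str) -> bool:
    # pop() makes the code single-use; compare_digest avoids timing leaks
    code, expiry = _active_challenges.pop(username, (None, 0))
    return (code is not None and time.time() < expiry
            and secrets.compare_digest(code, submitted))
```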

While MFA serves as an improvement over traditional password-based authentication, many MFA techniques have their own security issues.

MFA also increases friction by requiring the user to go through a multistep process: entering the password, waiting for a challenge and then entering the challenge.

Eliminating shared secrets removes the intrinsic weakness of password-based authentication and MFA. A secure form of possession-based authentication is the best alternative. Passwordless authentication based on FIDO standards is considered the archetype.

FIDO passwordless authentication is based on public-key cryptography. This asymmetric cryptography uses pairs of keys: any system can encrypt a message using the public key, and the message can only be decrypted with the private key. The pair also works in the other direction for digital signatures: anything signed with the private key can be verified by anyone holding the public key. As long as the private key remains private, the public key can be shared without compromising security.

With FIDO passwordless authentication, when a user registers with a service, the user generates a public/private key pair. The public key is shared with the service, and the private key is kept in a hardware-based vault on the device.

During the authentication process, the service sends a fresh challenge to the user. The user's device signs the challenge with the private key and sends the signature back to the service. If the service can verify the signature with the stored public key, the user has proved who they are.
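At its core this is an ordinary digital-signature check. Real WebAuthn/FIDO2 exchanges add origin binding, attestation, and signature counters, but a bare-bones version of the challenge-response, using the Python cryptography package and Ed25519 as stand-ins for whatever algorithm the authenticator supports, looks like this:

```python
# Core of the FIDO-style challenge-response: the authenticator signs a fresh
# challenge with a private key that never leaves the device; the service
# verifies it with the stored public key. Real WebAuthn adds origin binding,
# attestation and signature counters on top of this.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Registration: key pair generated on the device, public key sent to the service.
device_private_key = Ed25519PrivateKey.generate()         # stays in the hardware vault
registered_public_key = device_private_key.public_key()   # stored by the service

# --- Authentication: the service issues a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it (after the vault is unlocked with a PIN or biometric)...
signature = device_private_key.sign(challenge)

# ...and the service checks the signature against the registered public key.
try:
    registered_public_key.verify(signature, challenge)
    print("user authenticated")
except InvalidSignature:
    print("authentication failed")
```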

What prevents an attacker from using a stolen device to authenticate to the service? The user's hardware vault and private keys are protected by either a PIN or biometrics, such as a fingerprint or facial recognition. Biometrics or PINs never get shared or transmitted across the network. This ensures only the legitimate user can access the private keys and is in possession of the device.

Thus, FIDO passwordless authentication is more secure than password-based or multifactor authentication. FIDO passwordless authentication also removes friction from the process: Users only need to look at the phone's camera, swipe their finger or enter a PIN.

While FIDO protocols have been standardized since 2019, a passel of startups -- including 1Kosmos, Acceptto, Axiad, Beyond Identity, Hypr, Nok Nok Labs, Secret Double Octopus, Stytch, Transmit Security and Trusona -- are building products that add passwordless authentication to apps.

Identity and access management (IAM) providers haven't been idle, either. Auth0, CyberArk, ForgeRock, IBM, JumpCloud, Microsoft, Okta, OpenText, Oracle, Ping Identity, SailPoint, Saviynt and WSO2 have added passwordless authentication to their workforce and customer IAM products.

Thanks to the above, organizations can now transition to passwordless authentication. A survey from Enterprise Strategy Group (ESG), a division of TechTarget, revealed the following:

Of organizations transitioning to passwordless strategies, more than half experienced a significant positive impact on risk reduction and improved UX. Almost two-thirds reported increased efficiency for IT and security teams.

With these benefits and the ability for organizations to move to a passwordless approach for their IAM systems and applications, 2023 can and should be the year of passwordless authentication.

See the article here:
Why 2023 is the year of passwordless authentication - TechTarget