Op Ed: A Cryptographic Design Perspective of Blockchains: From Bitcoin to Ouroboros – Nasdaq

How does one design a blockchain protocol? Back in 2013, while in Athens, I set out to design a non-proof-of-work-based blockchain protocol, motivated by the debt crisis in Greece, looming bank liquidity problems and the increasing discussion of a possible parallel currency. The new protocol had to be based on proof of stake so that it could run even on cellphones and remain secure independently of any computational power external to it.

Very soon it became clear that the problem was going to need much more than a few months' work. Fast-forward three years to 2016: I was at the University of Edinburgh and had joined forces with IOHK whose CEO, Charles Hoskinson, was poised to solve the same problem. The protocol, "Ouroboros" as it would be eventually named, was there but the core of the security proof was still elusive when my good friend Alexander Russell visited me.

Together, we tackled the problem of proving the security of the system. Whiteboards were filled over and over again until we felt we had mined a true gem: a clean combinatorial argument that enabled us to establish the security of the scheme mathematically.

Security is an elusive concept. Take a system that is able to withstand a given set of adverse operational conditions. When can we call it secure? What if it collapses in the next moment when it is subjected to a slightly different set of conditions? Or when it is given inputs different from any that have been tried before?

Security cannot be demonstrated via experiment alone since attacker ingenuity can rarely be completely enumerated within any reasonable timeframe. Cryptographic design, thus, has to somehow scale this "universal quantifier": the system should be called secure only if it withstands all possible attacks.

In response to this fundamental problem, "provable security" emerged as a rigorous discipline within cryptography that promotes the co-development of algorithms and (so-called) proofs of security. Such proofs come in the form of theorems that, under certain assumptions and threat models that describe what the attacker can and cannot do, establish the security of cryptographic algorithms. In this fashion, modern cryptographic design pushes the "burden of proof" to the proposer of an algorithm.

In the world of academic cryptography, gone are the days when someone could propose a protocol or algorithm and proclaim it secure because it was able to withstand a handful of known attacks. Instead, modern cryptographic design requires due diligence by the designers to ensure that no attack exists within a convincing and well-defined threat model.

This approach has been a tremendously powerful and inspiring paradigm within cryptography. For instance, the notion of a secure channel has been studied for more than 40 years. This is the fundamental cryptographic primitive that allows the proverbial Alice and Bob to send messages to each other safely in the presence (and possibly active interference) of an attacker. Today's provable security analysis, even using automated tools, has unearthed attacks against secure channel protocols like TLS that were unanticipated by the security community.

Back in 2009 though, the blockchain was a concept that was presented outside regular academic cryptographic discourse. A brief white paper and a software implementation were sufficient to fuel its initial adoption that expanded rapidly. In retrospect, this was perhaps the only way for this fringe idea to ripple the waters of scientific discourse sufficiently and force a paradigm shift (in the sense of Thomas S. Kuhn's "The Structure of Scientific Revolutions") in terms of how the consensus problem was to be studied henceforth.

As the shift settled though, a principled approach became direly needed. The newly discovered design space appears to be vast and the avenues of exploring it too numerous. The "burden of proof" needs to return to the designer.

Blockchain protocols need to become systematized, as they have gradually become one of the dominant themes in distributed consensus literature. The blockchain is not the problem; it is the solution. But in this case, one may wonder, what was the problem?

In 2014, jointly with Juan Garay and Nikos Leonardos, we put forth a first description of "the problem" in the form of what we called a "robust transaction ledger." Such a ledger is implemented by a number of unauthenticated nodes and provides two properties, called persistence and liveness. Persistence mandates that nodes never disagree about the placement of transactions once they become stable, while liveness requires that all (honestly generated) transactions eventually become stable. Using this model, we provided a proof of security for the core of the Bitcoin protocol (a suitably simplified version of the protocol that we nicknamed the "bitcoin backbone").
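To make the two properties concrete, here is a toy sketch (my own illustration, not the paper's formal model) in which each node holds an ordered ledger of transactions and a transaction counts as "stable" once it is buried under at least k later entries:

```python
# Toy illustration of persistence and liveness; the stability depth K
# and ledger contents are hypothetical.
K = 3  # assumed stability depth

def stable_prefix(ledger, k=K):
    """Transactions deep enough to be considered settled."""
    return ledger[:len(ledger) - k] if len(ledger) > k else []

def persistence_holds(ledgers, k=K):
    """Persistence: nodes never disagree about the placement of any
    stable transaction, i.e. all stable prefixes are consistent."""
    prefixes = [stable_prefix(l, k) for l in ledgers]
    shortest = min(prefixes, key=len)
    return all(p[:len(shortest)] == shortest for p in prefixes)

def liveness_holds(ledgers, tx, k=K):
    """Liveness: an honestly generated transaction eventually becomes
    stable on every node."""
    return all(tx in stable_prefix(l, k) for l in ledgers)

node_a = ["t1", "t2", "t3", "t4", "t5", "t6", "t7"]
node_b = ["t1", "t2", "t3", "t4", "t5", "t6", "t8"]  # differ only in the unstable suffix
print(persistence_holds([node_a, node_b]))   # True: stable prefixes agree
print(liveness_holds([node_a, node_b], "t2"))  # True: t2 is stable everywhere
```

Note that the two nodes above disagree only about the most recent, not-yet-stable entries, which is exactly the kind of transient disagreement the definition tolerates.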

Given this proof, a natural question a cryptographer will ask is whether this protocol is really the best possible solution to the problem. "Best" here is typically interpreted in two ways: first, in terms of the efficiency of the solution; and second, in terms of the relevance and applicability of the threat model and the assumptions used in the security proof.

Efficiency is a particular concern for the Bitcoin blockchain. With all its virtues, the protocol is not particularly efficient in terms of processing time or resource consumption. This is exactly where "proof of stake" emerged as a possible alternative and a more efficient primitive for building blockchain protocols.

So, is it possible to use proof of stake to provably implement a robust transaction ledger? By 2016, with our Bitcoin backbone work already presented, this was a well-defined question; and the answer came with Ouroboros: our proof-of-stake-based blockchain protocol.

The unique characteristic of Ouroboros is that the protocol was developed in tandem with a proof of security that aims to communicate, in a succinct way, that the proposed blockchain protocol satisfies the properties of a robust transaction ledger. Central to the proof is a combinatorial analysis of a class of strings that admit a certain discrete structure that maps to a blockchain fork. We called "forkable" those strings that admit a non-trivial such structure, and our proof shows that their density becomes vanishingly small as the length of the string grows.

With this argument, we showed how there is an opportunity for the nodes running the protocol to converge to a unique history. The protocol then dictates how to take advantage of this opportunity by running a cryptographic protocol that enables the nodes to produce a random seed, which, in turn, is used to sample the next sequence of parties to become active. As a result, the protocol facilitates the next convergence step to take place; in this way, it can continue ad infinitum following a cyclical process that was also the inspiration for its name. Ouroboros is the Greek word for the snake that eats its tail, an ancient Greek symbol for re-creation.
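As a rough illustration of the sampling step, the sketch below (with hypothetical party names and stake amounts, not the actual Ouroboros procedure) derives a deterministic schedule of slot leaders from a shared random seed, weighted by stake, so that every node computes the same schedule:

```python
# Hedged sketch: stake-weighted leader sampling from a shared seed.
# The parties, stake distribution and slot count are illustrative.
import hashlib
import random

stake = {"alice": 40, "bob": 35, "carol": 25}  # assumed stake distribution

def sample_leaders(seed: bytes, n_slots: int):
    # Derive a deterministic RNG from the jointly produced seed, so
    # every honest node arrives at the identical leader schedule.
    rng = random.Random(hashlib.sha256(seed).digest())
    parties = list(stake)
    weights = [stake[p] for p in parties]
    return [rng.choices(parties, weights=weights)[0] for _ in range(n_slots)]

schedule = sample_leaders(b"epoch-seed", 10)
print(schedule)
assert schedule == sample_leaders(b"epoch-seed", 10)  # same seed, same schedule
```

The key property shown is determinism under a common seed: the seed, not any single party, decides who becomes active next.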

Having the protocol and its proof in hand gave us the unique opportunity for peer review, i.e., asking fellow cryptographers to evaluate the construction and its associated security proof as part of the formal submission process to a major cryptology conference.

Peer reviewing at the top cryptology venues is a painstakingly rigorous process that goes on for months. Papers are first reviewed independently by at least three experts, and afterward a discussion for each paper rages on as the three reviewers, as well as other members of the scientific committee, get involved and try to converge on the intellectual merits of each submission.

As a result of successfully passing this rigorous peer review process, Ouroboros was accepted and included in the program of Crypto 2017, the 37th annual cryptology conference. Crypto is one of the flagship conferences of the International Association for Cryptologic Research (IACR) and is one of the most exciting places for a cryptographer to be, as the program always contains research on the cutting edge of the discipline.

Furthermore, Ouroboros will be the settlement layer of the Cardano blockchain to be rolled out by IOHK in 2017, making it one of the swiftest technology transfer cases from a basic research publication to a system to be used by many thousands in just one year.

While all this may seem like a happy conclusion to the quest for a proof-of-stake blockchain, we are far from being done. On the contrary, we are still, as a community, at the very beginning of this expedition that will delve deep into blockchain design space. There are still too many open questions to solve, and new systems will be built on the foundations of the research that our community is laying out today.

Ouroboros image courtesy of Wikimedia Commons.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.

Cryptographers and Geneticists Unite to Analyze Genomes They Can’t See – Scientific American

A cryptographer and a geneticist walk into a seminar room. An hour later, after a talk by the cryptographer, the geneticist approaches him with a napkin covered in scrawls. The cryptographer furrows his brow, then nods. Nearly two years later, they reveal the product of their combined prowess: an algorithm that finds harmful mutations without actually seeing anyone's genes.

The goal of the scientists, Stanford University cryptographer Dan Boneh and geneticist Gill Bejerano, along with their students, is to protect the privacy of patients who have shared their genetic data. Rapid and affordable genome sequencing has launched a revolution in personalized medicine, allowing doctors to zero in on the causes of a disease and propose tailor-made solutions. The challenge is that such comparisons typically rely on inspecting the genes of many different patients, including patients from unrelated institutions and studies. The simplest means to do this is for the caregiver or scientist to obtain patient consent, then post every letter of every gene in an anonymized database. The data is usually protected by licensing agreements and restricted registration, but ultimately the only thing keeping it from being shared, de-anonymized or misused is the good behavior of users. Ideally, it should be not just illegal but impossible for a researcher (say, one who is hacked or who joins an insurance company) to leak the data.

When patients share their genomes, researchers managing the databases face a tough choice. If the whole genome is made available to the community, the patient risks future discrimination. For example, Stephen Kingsmore, CEO of Rady Children's Institute for Genomic Medicine, encounters many parents in the military who refuse to compare their genomes with those of their sick children, fearing they will be discharged if the military learns of harmful mutations. On the other hand, if the scientists share only summaries or limited segments of the genome, other researchers may struggle to discover critical patterns in a disease's genetics or to pinpoint the genetic causes of individual patients' health problems.

Boneh and Bejerano promise the best of both worlds using a cryptographic concept called secure multiparty computation (SMC). This is, in effect, an approach to the millionaires' problem: a hypothetical situation in which two individuals want to determine who is richer without revealing their net worth. SMC techniques work beautifully for such conjectural examples, but with the exception of one Danish sugar beet auction, they have almost never been put into practice. The Stanford group's work, published last week in Science, is among the first to apply this mind-bending technology to genomics. The new algorithm lets patients or hospitals keep genomic data private while still joining forces with faraway researchers and clinicians to find disease-linked mutations, or at least that is the hope. For widespread adoption, the new method will need to overcome the same pragmatic barriers that often leave cryptographic innovations gathering dust.

Intuitively, Boneh and Bejerano's plan seems preposterous. If someone can see the data, they can leak it. And how could they infer anything from a genome they can't see? But cryptographers have been grappling with just such problems for years. "Cryptography lets you do a lot of things, like [SMC]: keep data hidden and still operate on that data," Boneh says. When Bejerano attended Boneh's talk on recent developments in cryptography, he realized SMC was a perfect fit for genomic privacy.

The particular SMC technique that the Stanford team wedded to genomics is known as Yao's protocol. Say, for instance, that Alice and Bob, the ever-present denizens of cryptographers' imaginations, want to check whether they share a mutation in gene X. Under Yao's protocol, Alice (who knows only her own genome) writes down the answer for every possible combination of her and Bob's genes. She then encrypts each one twice, analogous to locking it behind two layers of doors, and works with Bob to find the correct answer by strategically arranging a cryptographic garden of forking paths for him to navigate.

She sets up outer doors to correspond to the possibilities for her gene. Call them Alice doors: if Bob enters door 3, any answers he finds inside will assume that Alice has genetic variant 3. Behind each Alice door, Alice adds a second layer of doors, the Bob doors, corresponding to the options for Bob's gene. Each combination of doors leads to the answer for the corresponding pair of Alice's and Bob's genes. Bob then simply has to get the right pair of keys (essentially passwords) to unlock the doors. By scrambling the order of the doors and carefully choosing who gets to see which keys and labels, Alice can ensure that the only answer Bob will be able to unlock is the correct one, while still preventing herself from learning Bob's gene, and vice versa.
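The two layers of doors can be sketched in a few lines of Python. This is a deliberately insecure toy that uses XOR one-time pads as stand-in locks; real Yao garbled circuits use proper symmetric encryption, shuffled and relabeled table entries, and oblivious transfer to hand Bob his key, all of which are omitted here:

```python
# Toy "two doors" garbled table for the question "do Alice and Bob
# share the same variant of gene X?" (variants 0 or 1, illustrative).
import secrets

def lock(key: bytes, value: bytes) -> bytes:
    return bytes(k ^ v for k, v in zip(key, value))

unlock = lock  # XOR is its own inverse

def garble(alice_gene: int):
    # One key per possible Alice value (outer doors) and per Bob value
    # (inner doors).
    a_keys = [secrets.token_bytes(1) for _ in range(2)]
    b_keys = [secrets.token_bytes(1) for _ in range(2)]
    table = {}
    for a in range(2):
        for b in range(2):
            answer = bytes([1 if a == b else 0])  # 1 = "we share it"
            table[(a, b)] = lock(a_keys[a], lock(b_keys[b], answer))
    # Alice hands Bob the key for her actual door; Bob would obtain the
    # key for his own gene via oblivious transfer (not modeled here).
    return table, a_keys[alice_gene], b_keys

table, alice_key, b_keys = garble(alice_gene=1)
bob_gene = 1
answer = unlock(alice_key, unlock(b_keys[bob_gene], table[(1, bob_gene)]))
print(answer == b"\x01")  # True: both have variant 1
```

Because each table entry is locked under two independent keys, Bob can open exactly one entry, the one matching the true pair of genes, and learns nothing from the rest.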

Using a digital equivalent of this process, the Stanford team demonstrated three different kinds of privacy-preserving genomic analyses. They searched for the most common mutations in patients with four rare diseases, in all cases finding the known causal gene. They also diagnosed a baby's illness by comparing his genome with those of his parents. Perhaps the researchers' biggest triumph was discovering a previously unknown disease gene by having two hospitals search their genome databases for patients with identical mutations. In all cases the patients' full genomes never left the hands of their care providers.

In addition to the benefits for patients, keeping genomes under wraps would do much to soothe the minds of the custodians of those genome databases, who fear the trust implications of a breach, says Giske Ursin, director of the Cancer Registry of Norway. "We [must] always be slightly more neurotic," she says. Genomic privacy likewise offers help for "second- and third-degree relatives, [who] share a significant fraction of the genome," notes Bejerano's student Karthik Jagadeesh, one of the paper's first authors. Bejerano further points to the conundrums genomicists face when they spot harmful mutations unrelated to their work. The ethical question of which mutations a genomicist must scan for or discuss with the patient would not arise if most genes stayed concealed.

Bejerano argues the SMC technique makes genomic privacy a practical option: "It's a policy statement, in some sense. It says, if you want to both keep your genome private and use it for your own good and the good of others, you can. You should just demand that this opportunity is given to you."

Other researchers and clinicians, although agreeing the technique is technically sound, worry that it faces an uphill battle on the practical side. Yaniv Erlich, a Columbia University assistant professor of computer science and computational biology, predicts the technology could end up like PGP (pretty good privacy) encryption. Despite its technical strengths as a tool for encrypting e-mails, PGP is used by almost no one, largely because cryptography is typically so hard to use. And usability is of particular concern to medical practitioners: several echo Erlich's sentiment that their priority is diagnosing and treating a condition as quickly as possible, making any friction in the process intolerable. "It's great to have it as a tool in the toolbox," Erlich says, "but my sense is that the field is not going in this direction."

Kingsmore, Erlich and others are also skeptical that the paper's approach would solve some of the real-world problems that concern the research and clinical communities. For example, they feel it would be hard to apply it directly to oncology, where genomes are useful primarily in conjunction with detailed medical and symptomatic records.

Still, Kingsmore and Erlich do see some potential for replacing today's clunky data-management mechanisms with more widespread genome sharing. In any case, the takeaway for Bejerano is not that genome hiding is destined to happen, but that it is a technological possibility. "You would think we have no choice: if we want to use the data, it must be revealed. Now that we know that is not true, it is up to society to decide what to do next."

How to use Firefox Send for secure file sharing – TechRepublic

Image: Jack Wallen

Sharing files has become standard operating procedure. We do it every day, with files of varying size and importance. Many times we resort to the likes of Dropbox or Google; both services are relatively easy to use. However, Mozilla thinks there's an even easier way to share those larger files (up to 1GB) and has created Send.

At the moment, Send is still labeled as a "web experiment." I've tested this experiment and, from my experience, this is one of the easiest means of sharing larger files to come across my path. In fact, it's so easy to use, this could be the file sharing service your company might want to consider. It's not perfect, but at this point in the game, I'm happy there is an option to securely share files that anyone can use.

How does it work? Simple: open the Send page in your browser, select (or drag and drop) the file you want to share, let it upload, then copy the resulting download link and send it to your recipient.

That's it. You can do this for files up to 1GB in size.

Figure A

Sending a file is but a click or two away.

Let's deconstruct Send a bit.

Send uses AES-128 to encrypt and authenticate data. Even before the file is uploaded to the Send servers, it is encrypted and authenticated. Send also makes use of the Web cryptography API; an agnostic API that performs basic cryptographic operations (such as hashing, signature generation and verification, encryption, and decryptionall from within a web interface). Thanks to the Web cryptography API, there is no need for users to deal with encryption or decryption keys; thereby making Send quite simple to use.
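The encrypt-and-authenticate-before-upload pattern can be sketched as follows. Python's standard library ships no AES, so this stand-in uses a SHA-256-derived keystream plus an HMAC tag instead of AES-128; the structure (seal locally, upload only ciphertext, verify on download), not the cipher, is the point:

```python
# Hedged sketch of client-side encrypt-and-authenticate, as Send does
# before upload. Not Send's actual implementation or cipher.
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # authenticate
    return nonce + ct + tag

def open_sealed(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed: file was tampered with")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(16)  # with Send, the key material travels with the link
blob = seal(key, b"quarterly-report.pdf contents")
print(open_sealed(key, blob))  # round-trips to the original bytes
```

The server only ever stores `blob`; without `key`, the ciphertext is opaque, and any tampering is caught by the authentication check.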

The one major caveat to the security of Send is that anyone who has the download link can download the file. That means, should someone intercept the email with the link, they could help themselves to your file. The three saving graces are: the link expires after a single download; it also expires after 24 hours, whichever comes first; and you can manually delete the file yourself at any time.

Figure B

Manually deleting the file is an option.

A word of warning about the Delete file button. Once you move away from the upload page, that page will no longer be available. That means, if you plan on making use of that Delete file button, you need to keep that page open until you know there's no need to manually delete the file.

I've tested Send in Chrome, Firefox, Epiphany, Vivaldi, and Microsoft Edge, as well as the mobile versions of Firefox and Chrome. Each of those browsers used Send without issue. The only browser I did not test was Safari. Upon initial release, Send did not support the macOS browser. It would be a safe bet that support will soon come for Apple's browser.

Because Send is a part of Test Pilot, Mozilla will be gathering statistics to see if the service is something they want to continue offering. In other words, the more people that kick the tires of Send, the better the chance it will remain.

If you're looking for an easy way to safely share a one-time download link to a file, you'd be hard-pressed to find an easier solution than Send.

The Power of Pervasive Encryption – Security Intelligence (blog)

The new z14 mainframe computer offers a chance to re-evaluate what a mainframe can do for an organization. Gone are the days when the mainframe was the only way to do computing. Today, there are new and different choices, and the z14 can make those choices practical.

The z14 features standard improvements that users have come to expect, such as faster, more efficient hardware chips. It also includes a pervasive encryption scheme that may prove to be as important as anything that was done to the computing hardware.

Transitioning away from selective encryption toward end-to-end protection will help organizations secure enterprise data while reducing the cost and complexity of meeting emerging compliance mandates. It is a far more general approach, applying to data both in transit and at rest: cryptography is applied routinely and pervasively to all data except that which is being actively processed inside the mainframe.

The details of the new cryptography system start with the z14's new coprocessor, the Central Processor Assist for Cryptographic Function (CPACF). This high-performance, low-latency coprocessor performs symmetric key encoding and calculates message digests (hashes) in hardware. It is standard on every core, directly supports cryptography and offers hardware acceleration for all encryption operations that occur on the core processor.

According to IBM Systems Magazine, a Solitaire Interglobal report found that this cryptographic acceleration provides six times more performance than the previous z13 model. Additionally, z14 is more than 18 times faster than competing platforms.

The CPACF also has extended key and hash sizes used in the Advanced Encryption Standard (AES) and Secure Hash Algorithm (SHA), as well as support for UTF8-to-UTF16 conversion. The cryptography hardware is available to all processor types used in the z14.

Bulk file and dataset cryptographic operations were specifically placed within the mainframe's operating system software to maximize transparency to the running files and optimize performance. This is a critical point: all the potential benefits of pervasive encryption are lost if a required intermediary step interferes with getting the work done. With the z14, users can transition DB2 and Information Management System (IMS) high-availability databases from unencrypted to encrypted without stopping the database or the application.

The ability to seamlessly encrypt is a big deal to users. The data used by an application or database is protected, but no user changes are required. Additionally, this means service-level agreements can be maintained.

Both the financial and data processing businesses need this kind of encryption everywhere due to the rush of new regulatory compliance mandates that will soon affect them. Additionally, cloud-based data stored in x86 boxes is encrypted at the source and protected at rest. A business using a z14 platform does not have to depend on the low-throughput encryption of such cloud solutions. Data stored in these boxes will already be in an acceptable state without the need for further processing.

No other platform can do this. And it took both advanced hardware and software to pull this off, not just one or the other.

Even with the mainframe doing all it can to keep things secure, bad policy decisions by the user can undercut everything. Users need to maintain and enforce security policies, not count on the machine alone to wave a magic encryption wand to keep data safe.

The z14 is a unique and effective tool to help organizations achieve their security goals. However, the mainframe cannot do this alone: It needs informed and committed users to maximize its effectiveness.


Can A Blockchain Computer With Governance Be The Future of Cloud? – HuffPost

In the latest season of HBO's popular series Silicon Valley, the iconic CEO of Pied Piper, Richard Hendricks, proposes a stunningly ambitious idea for his startup: rebuild the Internet as a decentralized network that utilizes the computing power of the billions of phones in our pockets and spare computers in our living rooms.

This is in stark contrast to the world we live in today, where more than a third of all Internet traffic goes through the few dozen massive data centers of Amazon Web Services across the globe. Recently, that architecture led to a massive Internet meltdown after a regional outage at Amazon's Virginia data center.

Hendricks' big idea to decentralize the Internet was obviously inspired by the Ethereum project, a "world computer" based on complex cryptographic protocols, led by now-23-year-old Russian whiz kid Vitalik Buterin. Recently, we witnessed the Ethereum token's meteoric rise to over $25B in market cap, all happening less than three years since its launch. If Ethereum were a startup, it would be considered one of the fastest-growing unicorns in history.

Unlike the traditional cloud, which can crash or be hacked, Ethereum is often referred to as a perfect virtual computer: unstoppable, uncensorable, tamper-proof, and immune to malware and viruses. This is achieved by a complex cryptographic protocol that runs over a large network of individual computers across the world rather than being concentrated in a few data centers. Anyone can join that network and become a miner by lending computation services and earning digital tokens, known as Ether.

Yet, despite its phenomenal growth and serious backing from the world's leading enterprises and financial institutions, from Microsoft and Accenture to JP Morgan, researchers in the space caution that we're still at the earliest stages of development. Vlad Zamfir, a well-known crypto researcher and core member of the Ethereum Foundation, has bluntly warned that Ethereum is not scalable.

Let's look at what he meant by "not scalable." Take Twitter, a popular and centralized social media service, as an example. The site processes hundreds of millions of tweets per day on average. And that's just a single service. The capacity limit of the entire Ethereum network is currently on the scale of less than a million transactions per day. Yet adding new computers, aka nodes, to the Ethereum network doesn't help it scale, due to limitations of its current algorithm design.
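A back-of-the-envelope calculation makes the gap vivid (the 500 million figure below is an assumed stand-in for "hundreds of millions"):

```python
# Rough throughput comparison using the figures quoted in the passage;
# both daily counts are illustrative, not measured values.
tweets_per_day = 500_000_000   # assumed "hundreds of millions" of tweets
eth_txs_per_day = 1_000_000    # upper bound quoted for Ethereum's capacity

seconds_per_day = 24 * 60 * 60
print(f"Twitter:  ~{tweets_per_day / seconds_per_day:,.0f} ops/sec")
print(f"Ethereum: ~{eth_txs_per_day / seconds_per_day:,.0f} txs/sec")
print(f"Gap: roughly {tweets_per_day // eth_txs_per_day}x, for one service alone")
```

Even at the generous one-million-per-day ceiling, that is on the order of a dozen transactions per second, hundreds of times short of a single centralized service.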

Coordinating millions of computers across the Internet while achieving scalability is certainly not a trivial task. Many of the brightest minds in mathematics, cryptography, and economics are in a major race to solve this hard problem, including Ethereum's own next-generation research initiative, codenamed Casper.

Among many contestants, the DFINITY project is an intriguing one, with strong technical underpinnings and a particularly ambitious vision focused on delivering a "Decentralized Cloud" rather than on automating trust, which most blockchain projects tend to focus on.

Supported by DFINITY Stiftung, a not-for-profit headquartered in Zug, Switzerland, the project boasts over $20M in funding (with another main-round fundraiser upcoming) and a strong team of top engineers, scientists and economists, many with backgrounds at leading organizations such as Stanford, Google, Yale and the University of Chicago.

"Our objective is to deliver a public decentralized cloud computing resource, with vastly improved performance, already seeing more than 50x that of Ethereum in its first release this year, with the goal of ultimately achieving infinite capacity," says its founder Dominic Williams, a British-born serial entrepreneur. Williams' last multi-million-user startup, in the massively multiplayer space, brought him to the Valley after fundraising from leading US venture investors.

The ingenious way that DFINITY achieves such performance boost comes partly from a piece of cryptography invented at Stanford, named BLS, standing for its three co-authors: Dan Boneh, Ben Lynn and Hovav Shacham.

Today, Boneh is best known as the head of applied cryptography at Stanford and one of the world's leading cryptographers. Lynn, his former Ph.D. student, spent the last ten years at Google and recently joined String Labs, the incubator and lead contributor to the DFINITY project, to work on its core protocol.

"Seeing BLS cryptography being applied to power the next-generation decentralized cloud, and potentially used by tens of millions of people, makes me incredibly excited as a cryptographer and engineer," says Lynn when asked why he decided to join as a full-time contributor.

A vastly performant decentralized network could replace today's unnecessarily complicated IT systems running on centralized infrastructure. In Williams' words, "Enterprise IT systems running on this computer will be unstoppable and won't need to involve complex components such as databases, backup and restore systems or Amazon Web Services, allowing costs to be cut by 90% or more by reducing the supporting human capital required."

The disruption potential doesn't stop there, he stated: "A highly performant decentralized cloud will also enable the creation of open-source decentralized businesses using self-updating autonomous software systems that may eventually be able to disintermediate and beat out monopolistic organizations such as Uber, eBay, Gmail and others."

Still, there's another major hurdle it must overcome. The blockchain computer, while featuring immutability and trust, carries with it a new dimension of challenges that hasn't been seen in traditional cloud computing. What if the software on the blockchain is buggy? What if the funds on it are stolen by malicious hackers? Will these problems be immutable as well?

This class of problems is commonly known as governance issues in the blockchain community. Consider the infamous heist of "The DAO" in the summer of 2016. A decentralized venture fund robot on Ethereum, named The DAO, collected $150M worth of Ether tokens from 20,000 individual anonymous investors in less than 50 days. An amazing feat by all accounts.

Yet only a week later, an anonymous hacker was able to exploit a vulnerability in its code that caused a loss of $90M. This ultimately led to a decision to fork Ethereum into a new network that reversed the hack and rescued the user funds.

That decision came after months of endless debates and controversies that haunt the project even today. In the absence of a formal governance process, people questioned the legitimacy of such a decision in a decentralized system, where software code is often believed to be the ultimate and immutable law.

The DFINITY team apparently took a different philosophy:

"While 'Code is Law' is indeed valuable in some cases, we think a different paradigm is needed for mainstream business uses. A world where a twenty-something hacker can happily walk away with millions of dollars by watching for and exploiting software vulnerabilities on the blockchain is not a particularly appealing one for enterprises or commercial applications. We need a blockchain algorithmic court to settle these cases."

They propose a solution to this problem called the Blockchain Nervous System, which adopts an "AI is Law" design. By utilizing a hybrid human-and-AI governance algorithm, the system can essentially overrule any previous code execution result when deemed necessary.

It also borrows a page from political governance, a mechanism known as Liquid Democracy, which allows anyone in the community to delegate decisions to a trusted person in order to reach high-quality, rapid decisions en masse. For example, a DAO-like hack could potentially be reversed in a matter of hours, provided the decision is uncontroversial among community members with strong delegation relationships, and without the risk of fragmenting the community through a hard fork.
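The core mechanic of Liquid Democracy, delegated votes following a chain of trust until they reach a direct vote, can be sketched in a few lines. This is an illustrative toy, not DFINITY's actual Blockchain Nervous System algorithm; all names and the proposal label are invented:

```python
# Minimal Liquid Democracy sketch: each voter either votes directly or
# delegates to someone they trust; delegated votes follow the chain.

def resolve_vote(voter, direct_votes, delegations, seen=None):
    """Follow a voter's delegation chain; return the effective vote or None."""
    seen = seen or set()
    if voter in direct_votes:
        return direct_votes[voter]
    delegate = delegations.get(voter)
    if delegate is None or delegate in seen:  # no delegate, or a delegation cycle
        return None
    return resolve_vote(delegate, direct_votes, delegations, seen | {voter})

def tally(voters, direct_votes, delegations):
    """Count effective votes for each choice across all voters."""
    counts = {}
    for v in voters:
        choice = resolve_vote(v, direct_votes, delegations)
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts

# alice votes directly; bob delegates to alice; carol delegates to bob.
votes = tally(
    voters=["alice", "bob", "carol"],
    direct_votes={"alice": "reverse_hack"},
    delegations={"bob": "alice", "carol": "bob"},
)
# → {'reverse_hack': 3}
```

One expert voter can thereby carry the weight of many followers, which is how an uncontroversial decision could be reached in hours rather than weeks.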

Yet with these powerful designs inevitably come many uncertainties. What kinds of proposals will be made? What if controversial proposals are mysteriously passed by the AI? Will crowd wisdom bend toward long-term optimization or short-term market gains?

For now, the DFINITY team seems to stand confidently by this new experimental philosophy, believing it introduces a level of governance desirable to mainstream enterprises. Leading organizations, including Boston Consulting Group, seem to concur, having partnered with the project.

In its recent report, the SEC deemed the sale of certain digital assets equivalent to selling securities, which means tokens may be subject to the same laws and regulations. While some say this may have a chilling effect on the industry in the short term, more regulation will benefit the industry in the long term. The market also seems to have already priced in the SEC's decision, ruling out the worst-case scenario of a complete ban on tokens.

"I see this as a positive long-term development. The SEC alluded to the same thing in 2013 and 2014. The real issue is secondary sales. Free movement of tokens is an essential part of the value of the token. So it has to qualify as not being a security, which means structuring it as such needs to be front and center from day one," said Artia Moghbel, DFINITY's Head of Operations and Communications.

Controversies and obstacles aside, what DFINITY has accomplished expands the scope of what blockchain systems are capable of and offers a glimpse of what the future of the Internet could become. It will indeed be an interesting time to watch whether decentralized computation and crowd wisdom, combined with artificial intelligence, can finally make the blockchain a challenger to Amazon Web Services and the like.


More:
Can A Blockchain Computer With Governance Be The Future of Cloud? - HuffPost

Mozilla’s new file-transfer service isn’t perfect, but it’s drop-dead easy – Ars Technica UK

Mozilla is testing a new service that makes it dead simple and quick for people to semi-securely share files with anyone on the Internet.

Send, as the service is called, allows senders to encrypt any file of 1 gigabyte or less and upload it to a Mozilla server. The service then creates a link with a long, complex string of letters in it that's required to download and decrypt the file. Mozilla will automatically delete the encrypted file as soon as it's downloaded or within 24 hours of being uploaded, even if no one has downloaded it.

Send offers reasonable security and privacy assurances. The service uses an algorithm known as AES-GCM-128 to encrypt and authenticate data on the sender's computer before uploading it to Mozilla servers. And it also uses the Web crypto programming interface, which is one of the better-tested ways Internet applications can perform cryptographic operations without having access to decryption keys. Still, Send shouldn't be trusted with the most sensitive types of data, such as files that might land a dissident or whistleblower in prison.
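The design described here, encrypting on the sender's machine and carrying the key material only in the download link, can be sketched roughly as follows. This is a hedged illustration using the third-party `cryptography` package, not Mozilla's actual implementation (Send does its AES-GCM through the browser's WebCrypto API, and the URL shape below is invented):

```python
import os
from base64 import urlsafe_b64encode, urlsafe_b64decode

# Third-party dependency: pip install cryptography
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def send_encrypt(plaintext: bytes):
    """Encrypt client-side; the server would only ever see the ciphertext."""
    key = AESGCM.generate_key(bit_length=128)  # AES-128-GCM, as the article notes
    nonce = os.urandom(12)                     # 96-bit nonce, standard for GCM
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # The secret rides in the URL fragment, which browsers never transmit
    # to the server. (Hypothetical host and link format, for illustration.)
    link = "https://send.example/#" + urlsafe_b64encode(key + nonce).decode()
    return ciphertext, link

def send_decrypt(ciphertext: bytes, link: str) -> bytes:
    """Recover key and nonce from the link fragment and decrypt."""
    secret = urlsafe_b64decode(link.split("#", 1)[1])
    key, nonce = secret[:16], secret[16:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

ct, link = send_encrypt(b"hello, world")
assert send_decrypt(ct, link) == b"hello, world"
```

This also makes concrete why the link must stay private: anyone holding it holds the decryption key, and the GCM authentication tag will reject any tampered ciphertext.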

"Of course, you'll probably hear from naysayers who say doing crypto in the browser with JavaScript is a terrible thing," Justin Troutman, a cryptography and privacy expert and program manager at the Freedom of the Press Foundation, told Ars. "But they're using the WebCrypto API, which is probably the sanest way to do it, if you're going to do it."

Another potential weak point: a quick test by researchers at antivirus provider Bitdefender found that the one-download limitation can be bypassed when two users access the link at the same time. The researchers found that there's a delay of a few seconds for servers to be notified that a download has completed. That delay, they discovered, is longer for bigger files. In certain cases, the delay might allow an attacker to download a file the legitimate parties believe was no longer available.

Another drawback: Send will store basic information on the sender's local device. This information includes the Send identifier for the file, the filename, and the unique download link for the transmitted file. The information, however, is deleted once the sender deletes the uploaded file or visits the Send service after the file has expired. Users are also subject to Mozilla's privacy policy, which, among other things, allows the service to temporarily retain IP addresses in server logs.

Send also collects performance and diagnostic information, including how often users upload files, how long the files remain before expiring, any errors related to file transfers, and what cryptographic protocols a user's browser supports.

Last, the security of the service requires the generated download link to remain private. Anyone who obtains it can download and decrypt the uploaded file.

Those weaknesses and limitations aside, Send may be a better way to transmit files than e-mail. Many e-mail services limit attachments to 100 megabytes or less. And unless the sending and receiving parties clear special hurdles, the transmitted data can sit unencrypted on e-mail servers indefinitely. Besides the crypto and self-expiration happening automatically, the service also provides an extremely simple interface.

At the moment, Mozilla is describing Send as a test-pilot experiment.

This post originated on Ars Technica

Originally posted here:
Mozilla's new file-transfer service isn't perfect, but it's drop-dead easy - Ars Technica UK


SK Telecom to Accelerate Popularization of Quantum Cryptography for IoT Security – IoT Business News (press release) (blog)

Successfully develops a quantum random number generator (QRNG) chip prototype smaller than a fingernail. Expects the price per QRNG chipset to be the world's lowest, which will propel the adoption of quantum cryptography in the areas of IoT, AI and autonomous driving.

A QRNG generates true random numbers without any kind of pattern, making it ideal for use in cryptography. So far, however, the cost and size of QRNGs on the market have prevented them from becoming widespread.

With the successful development of an ultra-small QRNG chip measuring 5mm by 5mm, SK Telecom expects that it will soon be able to embed the QRNG in a wide variety of Internet of Things (IoT) products, including autonomous vehicles, drones and smart devices, to dramatically enhance the level of security for IoT services. Although the price of each QRNG chip has not been set yet, the company said it will be the lowest ever for a QRNG.

Meanwhile, SK Telecom is also developing QRNGs in USB and PCIe form factors. While the QRNG chip has to be embedded from the beginning of product development, a QRNG in USB or PCIe form can simply be connected to any product already on the market to provide genuine randomness.
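From an application's point of view, a plug-in QRNG is just another entropy source, typically exposed as a character device. The sketch below shows the pattern under that assumption; the device path is hypothetical, and `os.urandom` (the OS's cryptographic RNG) serves as the fallback when no hardware source is present:

```python
import os

# Hypothetical device path for a plug-in hardware QRNG (assumption for
# illustration; the actual path depends on the driver and platform).
QRNG_DEVICE = "/dev/qrng0"

def random_bytes(n: int) -> bytes:
    """Read n bytes of entropy, preferring a hardware QRNG if one exists."""
    try:
        with open(QRNG_DEVICE, "rb") as dev:
            return dev.read(n)
    except OSError:
        # No QRNG attached: fall back to the OS cryptographic RNG.
        return os.urandom(n)

key = random_bytes(32)  # e.g. material for a 256-bit symmetric key
```

The value of the QRNG is in the quality of those bytes: a deterministic or biased generator can make every key derived from it predictable, whereas quantum measurement yields randomness with no exploitable pattern.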

Park Jin-hyo, Senior Vice President and Head of Network R&D Center of SK Telecom, said:

Understanding the importance of data and data security, SK Telecom has focused on developing quantum cryptography technologies to guarantee secure transmission of data in areas including artificial intelligence (AI), IoT and autonomous driving. We will continue to work with partners, both at home and abroad, to accelerate the popularization of quantum cryptography and strengthen our presence in the global market.

More here:
SK Telecom to Accelerate Popularization of Quantum Cryptography for IoT Security - IoT Business News (press release) (blog)

Allegro Software Expands FIPS 140-2 Support For IoT Applications Needing Validated Cryptography in Military … – Benzinga

Today, Allegro Software announced it has earned FIPS 140-2 Level 1 validation on four additional platforms for the Allegro Cryptography Engine (ACE) from the U.S. government's National Institute of Standards and Technology (NIST).

BOXBOROUGH, MA (PRWEB) August 01, 2017

Allegro Software, a leading supplier of Internet component software for the Internet of Things (IoT), today announced it has earned FIPS 140-2 level 1 validation on four additional platforms with the Allegro Cryptography Engine (ACE) from the U.S. government's National Institute of Standards and Technology (NIST). This marks the culmination of Allegro's largest validation effort to date with the U.S. government. Specifically engineered for the rigors of resource constrained IoT computing environments, ACE enables manufacturers to leverage standards-based cryptography in IoT environments with ease. ACE is ideally suited for use in embedded systems and IoT applications in the military, energy, medical and communications industries.

ACE AND FIPS 140-2 VALIDATION

Since the passage of the Federal Information Security Management Act (FISMA), Federal agencies and contractors have a mandate to maintain greater control over data and information systems as a whole. U.S. Federal agencies that use cryptographic-based systems to protect sensitive information in military, medical, telecommunications, IoT applications and other IT-related products must use FIPS 140-2 validated modules to meet these security requirements. FIPS 140-2 validation is also required by national agencies in Canada and is recognized in Europe and Australia.

ACE is one of the smallest, fastest and most comprehensive FIPS 140-2 validated software modules on the market for IoT applications. Specifically engineered for the critical cryptographic computing needs of IoT applications, ACE is easy to use, highly portable, and uniquely configurable to operate in the toughest resource-sensitive environments. With a rich software API, IoT developers can easily perform bulk encryption and decryption, message digests, digital signature creation and validation, and key generation and exchange. ACE also includes a platform-independent implementation of the NSA-defined Suite B cryptographic algorithms as well as other FIPS-approved algorithms. The FIPS-approved algorithms are listed on the NIST CAVP site, along with the final validation designation on the NIST CMVP site.
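To make two of the listed operations concrete, here is what a message digest and a message-authentication check look like using Python's standard library as a stand-in. This is not ACE's API (which is an ANSI-C toolkit); it is only a sketch of the same FIPS-approved primitives (SHA-256 and HMAC-SHA-256), with an invented message:

```python
import hashlib
import hmac
import os

# Hypothetical payload a device might need to verify.
message = b"firmware-update-v1.2.3"

# Message digest: a fixed-length fingerprint of the data.
digest = hashlib.sha256(message).hexdigest()

# Message authentication: a keyed tag only holders of the secret can produce.
key = os.urandom(32)
tag = hmac.new(key, message, hashlib.sha256).digest()

# Verification uses a constant-time comparison to avoid timing leaks.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
```

In a FIPS 140-2 context the point is less the algorithms themselves than the validated implementation: the module, its self-tests and its key handling are what NIST certifies.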

To further aid developers implementing IoT security, ACE is pre-integrated with the full suite of Allegro AE IoT connectivity and security toolkits including RomSTL (TLS 1.2), RomCert (SCEP and OCSP), RomSShell AE (SSH), RomPager AE (web server) and RomWebClient AE (web client).

IoT SECURITY AND HARDWARE CRYPTOGRAPHIC ACCELERATION

IoT applications are engineered from the ground up for resource-sensitive execution environments. Typically, the primary driving factor in these applications is delivering the highest-value IoT product at the lowest cost. Unfortunately, implementing cryptographic security protocols in any environment is intensive in CPU, RAM and ROM, which IoT devices often find difficult to support. To help address these needs, silicon manufacturers augment their chipsets with specifically engineered cryptographic engines to off-load resource-intensive cryptographic calculations. Two of Allegro's most recent FIPS 140-2 validated ACE modules have the flexibility to utilize on-board cryptographic acceleration when available, greatly increasing throughput while reducing demand on CPU, RAM and ROM. These validations have been configured to support on-board cryptographic acceleration from Intel (AES-NI) in addition to hardware-based entropy, to meet the latest NIST Implementation Guidance for FIPS modules.

"The need is critical for advanced security in IoT devices," says Bob Van Andel, President of Allegro. "With the culmination of Allegro's latest validations, IoT developers have access to the most essential component of the seven key elements needed for proactive IoT security: highly portable, reliable, FIPS 140-2 validated cryptography." ACE is delivered as an ANSI-C source code toolkit and is available now. To learn more about the "7 Key Elements for Proactive IoT Security" visit our website: https://www.allegrosoft.com/secure-iot. For additional information on Allegro Software and the full suite of Allegro AE IoT connectivity and security toolkits, visit our website: https://www.allegrosoft.com/iot-device-cybersecurity.

ABOUT ALLEGRO

Allegro Software Development Corporation is a premier provider of embedded Internet software components with an emphasis on industry-leading device management, embedded device security, UPnP-DLNA networking, and the Internet of Things. Since 1996, Allegro has been on the forefront of leading the evolution of secure device management solutions with its RomPager embedded web server and security toolkits. Also an active contributor to UPnP and DLNA initiatives, Allegro supplies a range of UPnP and DLNA toolkits that offer portability, easy integration, and full compliance with UPnP and DLNA specifications. Allegro is headquartered in Boxborough, MA.

For the original version on PRWeb visit: http://www.prweb.com/releases/2017/08/prweb14562144.htm

See the rest here:
Allegro Software Expands FIPS 140-2 Support For IoT Applications Needing Validated Cryptography in Military ... - Benzinga

In Other API Economy News: Apple Brings Native Cryptography to the Web Browser and More – ProgrammableWeb

We start your week off with a review of the stories we couldn't cover, with a look at what was going on in the world of APIs. Leading off on the security front, Apple recently dedicated a blog post to its implementation of the WebCrypto API in an effort to bring cryptography natively to the web browser. The API is included in WebKit, the web browser engine that powers Safari, and is used for various information security tasks such as data confidentiality, data integrity and authentication. There are a number of third-party JavaScript cryptography libraries available, but Apple's claim is that, being built on native APIs, WebCrypto is more secure and outperforms the available libraries. The blog post goes over a number of tests to back up the assertion that WebCrypto should be the standard developers use for implementing secure interfaces on the Web.

Fresh off this month's Net Neutrality Day of Action comes an interesting SDK that looks to do its part to preserve net neutrality. AnchorFree, a provider of a freemium virtual private network (VPN), has announced an SDK that it claims protects apps so that Internet service providers can't throttle their traffic. The FCC does not currently support net neutrality, the principle that Internet service providers should enable equal access to content and applications regardless of the source, without favoring or blocking particular products or websites. The SDK lets companies send their traffic through AnchorFree's servers via its Hotspot Shield VPN product, thus blocking ISPs from tracking user data or censoring or throttling traffic.

Lastly, Atlassian announced the Commit API, which lets developers integrate Bitbucket Cloud with external services and retrieve information that is normally accessible only from the command line or a commit. Features of the API include:

Go here to see the original:
In Other API Economy News: Apple Brings Native Cryptography to the Web Browser and More - ProgrammableWeb