Kubernetes and the Industrial Edge to be Researched by ARC – ARC Viewpoints

I'd like to talk about what I'll call The Cloud Native Tsunami, which is the emerging software architecture for cloud, but also for enterprise, and eventually for edge and even embedded software as well.

It has been my thesis for a couple of years that when Marc Andreessen (the co-founder of Netscape) said, "Software is eating the world," the software that's really going to eat the world is cloud software, and this is especially true for software development. My thesis is that the methods and technologies people use to develop and deploy cloud software will eventually swallow and take over the enterprise space in our corporate data centers, will take over the edge computing space, and will even threaten the embedded software space. Today each of these domains has different development tools in use, but my thesis is that cloud software tools will eventually win out, and the same common development tools and technologies will end up being used across all of these domains.

In mid-November, I attended the KubeCon America event in San Diego, California. KubeCon is an event sponsored by the Cloud Native Computing Foundation (CNCF), which is an umbrella organization under the Linux Foundation. CNCF manages a number of open source software projects critical to cloud computing. The growth of this conference has been phenomenal, and its name KubeCon stems from Kubernetes, which is the flagship software project managed by this organization.

Kubernetes is an open source software project that orchestrates large deployments of containerized software applications in a distributed system. These can be deployed across a cloud provider, across a cloud provider and an enterprise, or basically anywhere. Five years ago, Kubernetes was private software maintained within Google. Then Google released Kubernetes to open source, and since then the KubeCon event and the interest in this software have grown exponentially.

It certainly seems to me that Kubernetes represents a software inflection point similar to ones we've seen in the past. For instance, when Microsoft introduced its Office suite, it defined personal productivity applications for the PC. Or before Y2K, when enterprises rewriting their existing software to avoid Y2K bugs were generally leaping onto SAP R/3 in the process. Or maybe it's a little bit like the introduction of Java, which defined a multi-platform execution environment in a virtual machine, and perhaps also a bit like the early days of e-commerce, when for the first time the worldwide web was linked to enterprise databases, transactions, and business processes.

This rapid growth in interest in Kubernetes has been phenomenal, but exponential growth is obviously unsustainable, or the whole planet would soon be going to one software development conference. One thing that's very important to point out about this rapid growth (from basically nothing to 23,000 people attending these events) is that there is a people constraint in this ecosystem right now. There is a shortage of people who are deeply experienced, and some of the exhibitors and sponsors at KubeCon came to the event just to recruit talented software developers with Kubernetes experience. But you can see from the chart that there are not many people in the world who have more than five years of Kubernetes experience!

In addition to Kubernetes, the Cloud Native Computing Foundation curates several other open source software projects. These projects provide services or other auxiliary capabilities that are important for distributed applications. While Kubernetes is the flagship project, the others are in different stages of development. The CNCF groups projects into three tiers: graduated, for software projects that are ready to be incorporated into commercial products; incubating, for projects in a more rapid state of development and change; and sandbox, for embryonic projects that are newer, less fully developed, and still emerging. And of course, there are any number of software projects outside the CNCF ecosystem, but CNCF is a major ecosystem for open source cloud software.

From the conference, we could see the enterprise impact of Kubernetes is still relatively low. In other words, market leaders are using this technology now, but in general it's at the early stages of deployment even among the leaders, and most enterprises have not yet adopted containerized applications with Kubernetes for orchestration. But growth in this area is inevitable. This is, as I said before, like Microsoft Office, or like SAP, or like Java; it's coming to the enterprise. Even though penetration is still low, leaders are rolling out and managing distributed applications at scale, and Kubernetes is the tool people are turning to in order to do this.

The auxiliary open source projects I mentioned before will grow the capabilities of Kubernetes over time. A number of auxiliary services for data storage, for stateful behavior, for network communications, software-defined networking, etc. are going to supplement Kubernetes and make it more powerful. At the same time, other engineers are working to make this technology, as complex as it is, easier to use and to deploy.

I should mention a couple of vertical industry initiatives where Kubernetes is especially attractive. One of them is 5G telecommunications. Telecom service providers are extremely interested in digitizing their services as they move to 5G. Instead of maintaining services at the cell tower base and providing them via dedicated-function hardware/software appliances, telecom providers are now looking to virtualize these network functions and deploy them digitally. They will have a very large set of applications to manage at a huge scale, and so they have turned to Kubernetes to do this.

A second important vertical industry area is the management of new automotive products. This can mean autonomous vehicles, fleets of vehicles, or just vehicles that have much more software content than vehicles used to have. Clearly, these automakers need to manage large-scale software deployments at hundreds or thousands of endpoints, and to do so at very low cost and with very high reliability. So, there are certainly vertical industry initiatives that are driving Kubernetes from the cloud service providers through the data centers toward the edge.

But what about the industrial edge? When we turn to the industrial edge (the figure below is from Siemens Research), we can divide the compute world into four different domains. At the industrial edge, we have much more restricted capability in terms of compute power, storage, and networking than we find in a corporate data center, and much less than we find in the commercial public cloud. We can go a level further and see that automation and manufacturing devices, such as programmable logic controllers, CNC machines, and robots, are generally addressed by embedded systems that are built for purpose.

One difficulty is that deploying Kubernetes and managing containerized applications at scale requires more compute, network, and storage capacity than these edge domain and device domain systems now have. So, this is an area where there's a big challenge in adopting this new technology. Why am I so optimistic that this is going to happen? I'm very optimistic because there are very similar challenges in the two huge vertical industries that I mentioned, automotive and telecommunications. These industries also have thousands, or tens of thousands, of small systems on the edge on which they need to maintain and deploy software. That challenge is going to have to be met one way or another, and there's extensive research and development going on now to do just that.

So, in terms of its industrial and industrial IoT impact (though industrial automation is traditionally a technology laggard), industrial IoT applications are definitely a target for Kubernetes. This involves moving orchestrated, containerized software apps to the edge. As I mentioned, both automotive and industrial applications have similar kinds of constraints: low compute capability, small footprint, and generally a demand for low bill-of-material costs. This remains a challenge, but again, there are a number of venture stage companies and a lot of research working to bridge this gap, and people are going to find a way to do that effectively.

But that makes the future very difficult to map out. This ecosystem is extremely dynamic. As I mentioned, Kubernetes was not even in the public domain five years ago. Now it has, if you will, taken over mind share in terms of the technology that people are going to use to orchestrate containerized applications. But, the next five years are likely to be equally revolutionary. So, it's absurdly difficult to map out this space and say, "Here is where it's going to go in 5 years."

But I found that this little quote I saw at KubeCon was interesting and I think if you're working in manufacturing or manufacturing automation, you'll find this interesting, too. This is a description of Kubernetes by one of the co-chairs of their architecture special interest group.

The entire system [that being a Kubernetes deployment] can now be described as an unbounded number of independent, asynchronous control loops reading from and writing to a schematized resource store as the source of truth. This model has proven to be resilient, evolvable and extensible.

What he's talking about here in terms of control loops are not control loops in the automation sense. They are control loops in the enterprise software sense. These control loops are functions that Kubernetes is performing to maintain a software deployment and monitor the health of this deployment. I found this interesting in that at this level (at the deployment level) for huge distributed applications, people view Kubernetes as a driver of a large number of independent and asynchronous control loops. It points out, to me, that the same sort of technology could be used to manage other types of control loops in automation within a manufacturing operation.
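To make the control-loop idea concrete, here is a minimal, purely illustrative Python sketch of the reconciliation pattern the quote describes: a controller compares desired state (read from a resource store) with observed state, and computes the actions needed to close the gap. The function and state names here are my own invention, not Kubernetes APIs.

```python
# Illustrative sketch of a Kubernetes-style reconciliation loop.
# The resource store holds desired replica counts per application;
# the controller compares it with what is actually running.

def reconcile(store: dict, observed: dict) -> list:
    """Return the actions needed to drive observed state toward desired state."""
    actions = []
    for name, desired in store.items():
        running = observed.get(name, 0)
        if running < desired:
            actions.append(("start", name, desired - running))
        elif running > desired:
            actions.append(("stop", name, running - desired))
    return actions

# Desired state: the "source of truth" resource store.
store = {"web": 3, "worker": 2}
# Observed state: what is actually running right now.
observed = {"web": 1, "worker": 4}

print(reconcile(store, observed))  # [('start', 'web', 2), ('stop', 'worker', 2)]
```

In a real deployment, many such loops run independently and asynchronously, each watching a slice of the resource store and repeating this compare-and-act cycle forever.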

This leads to an upcoming ARC research topic. ARC Advisory Group is beginning research into industrial edge orchestration, specifically the orchestration of applications that are distributed across industrial manufacturing, the industrial internet of things, and infrastructure. Because the technology is at such an early stage (even though it's critical for the future of industrial automation and for the fourth industrial revolution, or Industry 4.0), the field is very dynamic, and it's very difficult to map out such a nascent and varied landscape of technologies for integrating and orchestrating the industrial edge. During this research, ARC will be studying the products and technologies of many venture stage firms, as well as open source projects, designed to bridge the gap between the cloud and the industrial edge. These include infrastructure for 5G telecommunications, edge networks, requirements to manage fleets of vehicles, and the networking opportunities afforded by 5G itself.

With this industry at such an early stage, any detailed market forecast would be highly speculative and very uncertain. But ARC has decided to map out this landscape and plans to provide as deliverables for this research a series of podcasts, webcasts, and reports for our ARC Advisory Service clients. So, ARC is reaching out to relevant suppliers in this space, be they hardware, software or services suppliers, to participate in this research initiative. If your firm would like to participate in this research, ARC welcomes your input. Please use this link to connect with ARC or feel free to contact me at hforbes@arcweb.com and I'll be happy to discuss this project with you.

Hash Check – How, why, and when you should hash check – proprivacy.com

Here at ProPrivacy we just love open source software. This is mainly because, despite not being perfect, open source provides the only way to know for sure that a program is on the level.

One problem, though, is how do you know that an open source program you download from a website is the program its developer(s) intended you to download? Cryptographic hashes are a partial solution to this problem.

A cryptographic hash is a checksum or digital fingerprint derived by performing a one-way hash function (a mathematical operation) on the data comprising a computer program (or other digital files).

Any change in just one byte of the data comprising the computer program will change the hash value. The hash value is, therefore, a unique fingerprint for any program or other digital files.

Ensuring that a program has not been tampered with, or just corrupted, is a fairly simple matter of calculating its hash value and then comparing it with the hash checksum provided by its developers.
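In code, that calculation and comparison is straightforward. The sketch below uses Python's standard hashlib module to compute a file's SHA-256 fingerprint and compare it against a published checksum; the function names are illustrative, not from any particular tool.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks
    so that large downloads don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, published_checksum: str) -> bool:
    """Compare against the developer's published hash (case-insensitively,
    since sites publish hex digests in either case)."""
    return sha256_of_file(path) == published_checksum.strip().lower()
```

Pass the path of the downloaded installer and the checksum string copied from the developer's site; a True result means the bytes match the published fingerprint.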

If it's the same, then you have a reasonable degree of confidence that the program you have downloaded is exactly the same as the one published by its developer. If it is not, then the program has been changed in some way.

The reasons for this are not always malicious (see below), but a failed hash check should set alarm bells ringing.

You may have detected a note of caution amid all this singing of the praises of hash checks...

Hash checks are useful for ensuring the integrity of files, but they do not provide any kind of authentication. That is, they are good for ensuring the file or program you have matches the source, but they provide no way of verifying that the source is legitimate.

Hash checks provide no guarantee as to the source of the hash checksum.

For example, fake websites exist which distribute malicious versions of popular open source software such as KeePass. Many of these websites even provide hash checksums for the programs they supply and, were you to check these against the fake program, they would match. Oops.

An additional problem is that mathematical weaknesses can mean that hashes are not as secure as they should be.

The MD5 algorithm, for example, remains a highly popular hash function, despite its known vulnerability to collision attacks. Indeed, even SHA1 is no longer considered secure in this regard.

Despite this, MD5 and SHA1 remain the most popular algorithms used to generate hash values. SHA256, however, remains secure.

Developers sometimes update their programs with bug fixes and new features, but neglect to publish an updated hash checksum. This results in a failed hash check when you download and try to verify their program.

This is, of course, nowhere near as serious as a hash check giving malicious software a pass, but it can degrade trust in the ecosystem, resulting in people not bothering to check the integrity of files they download...

Most of the problems with cryptographic hashes are fixed by the use of digital signatures, which guarantee both integrity and authentication.

Developers who are happy to use proprietary code can automatically and transparently validate signatures when their software is first installed, using mechanisms such as Microsoft, Apple, or Google PKI (public key infrastructure) technologies.

Open source developers do not have this luxury. They have to use PGP, which is not natively supported by any proprietary operating system; nor does any equivalent of the Microsoft, Apple, or Google PKIs exist in Linux.

So PGP digital signatures must be verified manually, which is not a simple process, as a quick glance at our guide to checking PGP signatures in Windows will demonstrate. PGP is a complete pig to use.

Neither is the actual signing process for developers, who are well aware that in the real world few people bother to check digital signatures manually, anyway.

Cryptographic hashes are nowhere near as secure as PGP digital signatures, but they are much easier to use, with the result that many developers simply choose to rely on them instead of digitally signing their work.

This is a less than ideal situation, and you should always check an open source program's digital signature when one is available. If one is not, however, then checking its cryptographic hash is much better than doing nothing.

As long as you are confident about the source (for example, you are sure it's from the developer's real website, which has not been hacked to display a fake cryptographic hash), then checking its hash value provides a fair degree of confidence that the software you have downloaded is the software its developer intended for you to download.

If neither a digital signature nor a checksum is available for open source software, then do not install or run it.

The basic process is as follows: generate a hash value for the file you have downloaded, then compare it against the hash checksum published by the file's developer.

If they are identical, then you have the file the developer intended you to have. If not, then it has either become corrupted or has been tampered with.

If an SHA256+ hash is available, check against that. If not, then use SHA1. Only as a last resort should you check against an MD5 hash.
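That preference order can be captured in a few lines. The helper below is hypothetical (not from any real tool); given the set of checksums a developer happens to publish, it picks the strongest algorithm to check against:

```python
# Preference order: strongest first, MD5 only as a last resort.
PREFERRED = ["sha256", "sha1", "md5"]

def best_available(published: dict) -> str:
    """Given the checksums a developer publishes, e.g.
    {"md5": "...", "sha256": "..."}, return the name of the
    strongest algorithm available to check against."""
    for algo in PREFERRED:
        if algo in published:
            return algo
    raise ValueError("no usable checksum published")
```

For a site that publishes MD5, SHA1, and SHA256 hashes, this picks "sha256", matching the advice above.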

The simplest way to generate the hash value of files is by using a website such as Online Tools. Just select the kind of hash value you need to generate, then drag-and-drop the required file into the space provided and the relevant hash value will be generated.

We want to check the integrity of the KeePass installer file that we have downloaded from the KeePass.org website (which we know to be the correct domain). The website publishes MD5, SHA1, and SHA256 hashes for all versions of KeePass, so we will check the SHA256 hash for the version we downloaded.

This method works out-of-the box in Windows 10, while Windows 7 users need to first update Windows PowerShell with Windows Management Framework 4.0.

To obtain an SHA256 hash, right-click Start -> Windows PowerShell and type:

Get-FileHash [path/to/file]

For example:

Get-FileHash C:\Users\Douglas\Downloads\KeePass-2.43-Setup.exe

MD5 and SHA1 hashes can be calculated using the syntax:

Get-FileHash [path/to/file] -Algorithm MD5

and

Get-FileHash [path/to/file] -Algorithm SHA1

For example:

Get-FileHash C:\Users\Douglas\Downloads\KeePass-2.43-Setup.exe -Algorithm MD5

On macOS, open Terminal and type:

openssl [hash type] [/path/to/file]

Hash type should be md5, sha1, or sha256.

For example, to check the SHA256 hash for the Windows KeePass installer (just to keep things simple for this tutorial), type:

openssl sha256 /Users/douglascrawford/Downloads/KeePass-2.43-Setup.exe

On Linux, open Terminal and type either:

md5sum [path/to/file]

sha1sum [path/to/file]

or

sha256sum [path/to/file]

For example:

sha256sum /home/dougie/Downloads/KeePass-2.43-Setup.exe

The Top Five Apache Software Projects in 2019: From Kafka to Zookeeper – Computer Business Review

The Apache Foundation is 20 years old this year and has grown to the point where it now supports over 350 open source projects, all maintained by a community of more than 770 individual members and 7,000 committers distributed across six continents. Here are the top five Apache software projects in 2019, as listed by the foundation.

Released in 2006, Apache Hadoop is an open source software library used to run distributed processing of large datasets on computers using simple programming models. A key feature of Hadoop is that the library will detect and handle failures at the application level. Essentially, it's a framework that facilitates distributed big data storage and big data processing.

The Java-based programming framework consists of a storage element called the Hadoop Distributed File System (HDFS). The file system splits large files into blocks, which are then spread out across different nodes in a computer cluster. Hadoop Common creates the main framework, as it holds all of the common libraries and files that support the Hadoop modules.
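The block-splitting idea is easy to illustrate. The following toy Python sketch (nothing to do with the real HDFS implementation, and ignoring replication entirely) splits a byte stream into fixed-size blocks and spreads them round-robin across cluster nodes:

```python
def split_into_blocks(data: bytes, block_size: int) -> list:
    """Split a byte stream into fixed-size blocks; the last may be short."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_blocks(blocks: list, nodes: list) -> dict:
    """Round-robin placement: map each node to the block indices it stores."""
    placement = {node: [] for node in nodes}
    for i, _ in enumerate(blocks):
        placement[nodes[i % len(nodes)]].append(i)
    return placement
```

Real HDFS uses much larger blocks (traditionally 64 or 128 MB), replicates each block on multiple nodes, and tracks placement in a dedicated metadata service, but the split-and-distribute pattern is the same.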

Since Hadoop has the most active visits and downloads out of all of Apache's software offerings, it's no surprise that a long list of companies rely on it for their data storage and processing needs.

One such user is Adobe, which notes: "We currently have about 30 nodes running HDFS, Hadoop and HBase in clusters ranging from 5 to 14 nodes on both production and development. We constantly write data to Apache HBase and run MapReduce jobs to process then store it back to Apache HBase or external systems."

Apache Kafka, developed in 2011, is a distributed streaming platform that lets developers publish and subscribe to record streams in a method similar to a message queue. Kafka is used to build data pipelines that can stream in real time; it is also used to create applications that react to, or transform, an ingested real-time data stream.

Kafka is written in the Scala and Java programming languages. It stores streams of records in a cluster in categories called topics; each record consists of a key, a value, and a timestamp. It runs using four key APIs: Producer, Consumer, Streams and Connector. Kafka is used by many companies as a fault-tolerant publish-subscribe messaging system, as well as a means to run real-time analytics on data streams.
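To illustrate the record model, here is a toy in-memory sketch (in Python rather than Kafka's Scala/Java, with invented class names, and with none of Kafka's partitioning, persistence, or networking): an append-only topic log whose records carry a key, a value and a timestamp, with consumers tracking their own read offsets in publish-subscribe style.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Record:
    """A record in the Kafka sense: key, value, and timestamp."""
    key: str
    value: str
    timestamp: float = field(default_factory=time.time)

class Topic:
    """Toy append-only log; consumers track their own read offsets."""
    def __init__(self):
        self.log = []

    def publish(self, key: str, value: str) -> None:
        self.log.append(Record(key, value))

    def consume(self, offset: int):
        """Return all records from `offset` onward, plus the new offset,
        so a consumer can resume (or reprocess) from any point."""
        return self.log[offset:], len(self.log)
```

Because the log is append-only and consumers remember their own offsets, a slow or failed consumer can simply re-read from its last offset later, which is the durability property the Twitter quote below describes.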

The open-source software is used by LinkedIn, which incidentally first developed the platform to process activity stream data and operational metrics. Twitter uses it as part of its processing and archival infrastructure: "Because Kafka writes the messages it receives to disk and supports keeping multiple copies of each message, it is a durable store. Thus, once the information is in it we know that we can tolerate downstream delays or failures by processing, or reprocessing, the messages later."

Lucene is a search engine software library that provides a Java-based search and indexing platform. The engine supports ranked searching as well as a number of query types, such as phrase queries, wildcard queries, proximity queries and range queries. Apache estimates that a Lucene index is roughly 20-30 percent the size of the text indexed.
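At the heart of a search library like Lucene is an inverted index: a map from each term to the documents that contain it. This toy Python sketch (illustrative only, and far simpler than Lucene's actual implementation, which adds scoring, tokenization, and compressed on-disk structures) shows the idea for simple AND queries:

```python
from collections import defaultdict

def build_index(docs: dict) -> dict:
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index: dict, query: str) -> set:
    """AND query: return the ids of documents containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result
```

Because lookups go term-first rather than document-first, queries touch only the postings for the words asked about, which is what makes this structure fast at scale.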

Lucene was first written in Java back in 1999 by Doug Cutting, before the platform joined the Apache Software Foundation in 2001. Users can now get versions of it written in the following programming languages: Perl, C++, Python, Object Pascal, Ruby and PHP.

Lucene is used by Benipal Technologies, which states: "We are heavy Lucene users and have forked the Lucene / SOLR source code to create a high volume, high performance search cluster with MapReduce, HBase and katta integration, achieving indexing speeds as high as 3000 Documents per second with sub 20 ms response times on 100 Million + indexed documents."

POI is an open-source API that is used by programmers to manipulate file formats related to Microsoft Office, such as the Office Open XML standards and Microsoft's OLE 2 Compound Document format. With POI, programs can create, display and modify Microsoft Office files using Java.

The German railway company Deutsche Bahn is among the major users, creating a software toolchain in order to establish a pan-European train protection system.

A part of that chain is a domain-specific specification processor which reads the relevant requirements documents using Apache POI, enhances them and ultimately stores their contents as ReqIF. Contrary to DOC, this XML-based file format allows for proper traceability and versioning in a multi-tenant environment. Thus, it lends itself much better to the management and interchange of large sets of system requirements. The resulting ReqIF files are then consumed by the various tools in the later stages of the software development process.

The name POI is an acronym for "Poor Obfuscation Implementation," a joke by the original developers suggesting that the file formats they handled appear to be deliberately obfuscated.

ZooKeeper is a centralised service that is used for maintaining configuration information. It's a service for distributed systems and acts as a hierarchical key-value store, used for storing, managing and retrieving data. Essentially, ZooKeeper is used to synchronise applications that are distributed across a cluster.

Working in conjunction with Hadoop it effectively works like a centralised repository where distributed applications can store and retrieve data.
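The hierarchical key-value idea can be sketched in a few lines of Python. The class below is a toy, single-process stand-in (no watches, sessions, ordering guarantees, or replication) whose names are invented, not the real ZooKeeper API:

```python
class ZNodeStore:
    """Toy hierarchical key-value store with slash-separated paths,
    loosely modeled on ZooKeeper's tree of znodes."""
    def __init__(self):
        self.nodes = {"/": b""}  # the root node always exists

    def create(self, path: str, data: bytes = b"") -> None:
        """Create a node; its parent must already exist (as in ZooKeeper)."""
        parent = path.rsplit("/", 1)[0] or "/"
        if parent not in self.nodes:
            raise KeyError(f"parent {parent} does not exist")
        self.nodes[path] = data

    def get(self, path: str) -> bytes:
        return self.nodes[path]

    def children(self, path: str) -> list:
        """List the names of a node's direct children, sorted."""
        prefix = path.rstrip("/") + "/"
        return sorted(p[len(prefix):] for p in self.nodes
                      if p.startswith(prefix) and "/" not in p[len(prefix):])
```

The real service adds the pieces that matter for coordination, such as replication across an ensemble of servers, ephemeral nodes tied to client sessions, and watches that notify clients when data changes, but the tree-of-small-values data model is the same.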

AdroitLogic, an enterprise integration and B2B service provider, states that it uses "ZooKeeper to implement node coordination, in clustering support. This allows the management of the complete cluster, or any specific node, from any other node connected via JMX. A cluster-wide command framework developed on top of the ZooKeeper coordination allows commands that fail on some nodes to be retried etc."

The US media is in the gutter with Trump – The Japan Times

NEW YORK - How you respond to an attack defines you. Keep your cool, remain civil, and others will respect the way you handle yourself, even if they disagree with you. Lower yourself to your assailant's level and at best spectators will dismiss your dispute as a he-said-she-said between two jerks.

So much has been written about U.S. President Donald Trump's debasement of rhetorical norms and his gleeful contempt for truth that there is no need to cite examples or quote studies that count the prolificacy of his lies. Trump's attacks on journalists ("fake news," mocking a disabled reporter's body movements) are contemptible. They undermine citizens' trust in news media, a serious menace to democracy and civil society.

Less noticed is how major news organizations, incensed by the president's trolling, have debased themselves to Trump's moral level.

American journalism used to adhere to strict standards. Though impossible to achieve, objectivity was paramount. At bare minimum, reporters were expected to project an appearance of political neutrality.

Truth only derived from facts verifiable facts. Not conjecture and never wishful thinking. Sources who wanted to be quoted had to go on the record. Anonymous sources could flesh out background but could not be the entire basis for a story.

From the start of Trump's run for president (even before the start), Democratic-leaning media outlets abandoned their own long-cherished standards to declare war on him. Every day during the 2016 campaign, The New York Times led its coverage with its forecast of Hillary Clinton's supposed odds of defeating Trump. Setting aside the Times' embarrassing wrongness (the day before Election Day, it gave Clinton an 85 percent chance of winning), it cited odds rather than polls. Maximizing a sense of Clintonian inevitability was intended to demoralize Republicans so they wouldn't turn out to vote. The two figures might mean the same thing, but 85-15 odds look worse than a 51-49 poll.

It's downright truthy. And when truthiness goes sideways, it makes you look really, really dumb. 51-49 could go either way. 85-15, not so much.

The impeachment battle marks a new low in partisanship among media outlets.

After Trump's surprise-to-those-who'd-never-been-to-the-Rust-Belt win, outlets like the Times declared themselves members of a so-called resistance. Opinion columnists like Charles M. Blow pledged never to normalize Trumpism; what this has meant, ironically, is that Blow's essays amount to rote recitations on the same topic: normally, the argument that Trump sucks. Which he does. There are, however, other issues to write about, such as the fact that we are all doomed. It would be nice to hear Blow's opinions about taxes, militarism and abortion.

Next came years (years!) of Robert Muellerpalooza. Russia, corporate media outlets said repeatedly, had meddled in the 2016 election. Russian President Vladimir Putin installed Trump; Hillary Clinton's snubbing of her party's 72 percent-progressive base had nothing to do with the loss of the most qualified person blah blah blah to an inductee in the WWE Hall of Fame.

Whatever happened to the journalistic chestnut: if your mother says she loves you, check it out? Russiagate wasn't a news report. It was religious faith. Russia fixed the election because we, the media, say so; we say so because we were told to say so by politicians, who were told to say so by CIA people, whose job is to lie and keep secrets. No one checked out anything.

What we knew and still know is that a Russia-based troll farm spent either $100,000 or $200,000 on Facebook ads to generate clickbait. Most of those ads were apolitical. Many were pro-Clinton. The company has no ties to the Russian government. It was a $6.8 billion election; $200,000 couldn't have and didn't move the needle.

Anonymous congressional sources told reporters that anonymous intelligence agents told them that there was more. The Mueller report implies as much. But no one went on the record. No original or verifiable copies of documentary evidence have been leaked. The report's numerous citations are devoid of supporting material. By pre-Trump journalistic standards, Russiagate wasn't a story any experienced editor would print.

It was barely an idea for a story.

Russiagate fell apart so decisively that Democratic impeachers now act like the Mueller report (a media obsession for three years) never even happened.

Speaking of impeachment, mainstream media gatekeepers are so eager to see Trump removed from office that they're violating another cardinal rule of journalism: if it's news, print it. The identity of the CIA "whistleblower" (scare quotes because actual whistleblowers reveal truths that hurt their bosses) who triggered impeachment over Trump's menacing phone call to the president of Ukraine has been known in Washington, and elsewhere if you know where to look, for months.

Federal law prohibits the government from revealing his identity, and rightly so. But it has leaked. It's out. It's news. Nothing in the law or journalistic custom prevents a media organization from publishing it. News outlets felt no compulsion to similarly protect the identity of Bradley Manning or Edward Snowden. So why aren't newspapers and broadcast networks talking about it?

"I'm not convinced his identity is important at this point, or at least important enough to put him at any risk, or to unmask someone who doesn't want to be identified," New York Times editor Dean Baquet said. So much for the people's right to know. Why should subscribers buy a newspaper that doesn't print the news?

There is one "because Trump" change in media ethics that I welcome. What's suspect is the timing.

Trump is the first president to get called out for his lies right in the news section. Great! Imagine how many lives could have been saved by a headline like "Bush Repeats Debunked Falsehood That Iraq Has WMDs." A headline like "Slurring Sanders' Numerous Female Supporters as 'Bros,' Hillary Clinton Lies About Medicare-for-All" could have nominated and elected Bernie and saved many Americans from medical bankruptcy.

But all presidents lie. Why pick on Trump? His lies are (perhaps) more numerous. But they're no bigger than his predecessors' (see Iraq WMDs, above). Yet discussion of former presidents remains as respectful and slavish as ever.

I say, give coverage of Obama and other ex-presidents the same tone and treatment as the current occupant of the White House gets from the news media:

Wallowing in Corrupt Wall Street Cash, Obama Drops $11.75 Million on Gaudy Martha's Vineyard Mansion Estate

Ellen DeGeneres Sucks Up to Mass Murderer George W. Bush

Jimmy Carter, First Democratic President to Not Even Bother to Propose an Anti-Poverty Program, Dead at TK

Ted Rall (Twitter: @tedrall), a political cartoonist, columnist and graphic novelist, is the author of Francis: The People's Pope.

Link:
The US media is in the gutter with Trump - The Japan Times

If You Think Encryption Back Doors Won’t Be Abused, You May Be a Member of Congress – Reason

The FBI was way too lax when it sought a secret warrant to wiretap former Trump aide Carter Page. Yet some of the very same people who have been publicly aghast at the circumstances of the Page scandal are still trying to hammer companies like Apple and Facebook into compromising everybody's data security to give law enforcement access to your stuff.

You're forgiven if you missed this news, as it happened at the exact same time last week that the impeachment counts against President Donald Trump were revealed. Our extremely tech-unsavvy lawmakers brought in a few experts to a Senate Judiciary Committee hearing and essentially ignored what they said and yelled demands at them. Virtually every tech expert and privacy advocate under the sun has warned virtually every government official in the world that "back doors" that let police bypass encryption have the potential to cause huge harms and actually make citizens even more vulnerable to crime. But the legislators want their back doors, dammit.

Here's Sen. Lindsey Graham (R-S.C.), who just a day later would express shock that the process for the FBI to get a FISA warrant was not as thorough as he believed: "My advice to you is to get on with it, because this time next year, if we haven't found a way that you can live with, we will impose our will on you." When a witness attempted to explain how complicated an issue encryption is, Graham responded, "Well, it ain't complicated for me."

The Democrats haven't been impressive on this issue either. Sen. Dianne Feinstein (D-Calif.) still holds the position that it's no big deal if tech companies just let law enforcement officials in to read encrypted material, as long as they've got a warrant. Sen. Dick Durbin (D-Ill.) thinks the debate is about whether encryption implemented by companies puts information "beyond the reach of the law." He doesn't seem to care about the arguments that weakening encryption and providing back doors will let hackers and hostile nations access the private data and communications of people around the world (including Americans).

The talking point both the Justice Department and the lawmakers have settled on is that they need to be able to demand back doors for the children. Apparently, we all need weaker protections in order to fight child sexual abuse and trafficking.

Sen. Sheldon Whitehouse (D-R.I.) asked the tech industry witnesses if they'd be willing to "take responsibility for the harm" that might be caused if law enforcement didn't have back door access. But are Congress and the Justice Department going to "take responsibility for the harm" when these vulnerabilities make it out into the wild (as they inevitably would) and are abused by criminals or by authoritarian states?

This encryption fight has been going on for years, and the back door advocates have resolutely refused to consider the possibility of abuse. Graham in particular has been unwilling to consider the possibility that FISA warrants could ever be used to secretly snoop on Americans inappropriately. But by Thursday, he had changed his tune; if nothing else, the Trump case has forced him to think about what can go wrong when the government can secretly access people's private information without their permission.

Go here to see the original:
If You Think Encryption Back Doors Won't Be Abused, You May Be a Member of Congress - Reason

Internet of crap (encryption): IoT gear is generating easy-to-crack keys – The Register

A preponderance of weak keys is leaving IoT devices at risk of being hacked, and the problem won't be an easy one to solve.

This was the conclusion reached by the team at security house Keyfactor, which analyzed a collection of 75 million RSA certificates gathered from the open internet and determined that number combinations were being repeated at a far greater rate than they should be, meaning encrypted connections could be broken by attackers who correctly guess a key.

Running the analysis on an Azure cloud instance, the team found common factors were used to generate keys at a rate of 1 in 172 (435,000 in total). By comparison, the team also analyzed 100 million certificates collected from the Certificate Transparency logs, largely from desktop and server systems, where they found common factors in just five certificates, or a rate of 1 in 20 million.

The team believes that the reason for this poor entropy is down to IoT devices. Because the embedded gear is often based on very low-power hardware, the devices are unable to properly generate random numbers.

The result is keys that could be easier for an attacker to break, leaving the device and all of its users vulnerable.
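To see why shared factors are fatal, consider a minimal sketch: if two RSA moduli happen to share a prime (because the devices drew from the same weak pool of random numbers), a single greatest-common-divisor computation recovers it, and both keys can then be fully factored. The numbers below are tiny illustrative primes; real moduli are 2048 bits or more.

```python
import math

# Toy RSA moduli built from small Mersenne primes (illustration only;
# real-world moduli are 2048+ bits).
p = 2**31 - 1      # prime factor accidentally shared by both devices
q1 = 2**61 - 1     # second prime of device 1's modulus
q2 = 2**89 - 1     # second prime of device 2's modulus

n1, n2 = p * q1, p * q2   # the two public moduli

# One gcd exposes the shared prime -- no factoring required -- after
# which both moduli split into their prime factors.
shared = math.gcd(n1, n2)
assert shared == p
assert n1 // shared == q1 and n2 // shared == q2
```

A device with a healthy entropy source would essentially never collide like this; the 1-in-172 collision rate is what makes the attack practical.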

"The widespread susceptibility of these IoT devices poses a potential risk to the public due to their presence in sensitive settings," Keyfactor researchers Jonathan Kilgallin and Ross Vasko noted.

"We conclude that device manufacturers must ensure their devices have access to sufficient entropy and adhere to best practices in cryptography to protect consumers."

The recommendation is that IoT hardware vendors step up their security efforts to improve the entropy of these devices and make sure that their hardware is able to properly set up secure connections.

If vendors don't step up and address the issue, there is a good chance that criminal hackers will. The team says its experiments showed that this sort of attack could be pulled off without much in the way of an up-front investment.

"With modest resources, we were able to obtain hundreds of millions of RSA keys used to protect real-world traffic on the internet," said Kilgallin and Vasko.

"Using a single cloud-hosted virtual machine and a well-studied algorithm, over 1 in 200 certificates using these keys can be compromised in a matter of days."


Read more:
Internet of crap (encryption): IoT gear is generating easy-to-crack keys - The Register

Facebook refuses to break end-to-end encryption – Naked Security

Congress on Tuesday told Facebook and Apple that they better put backdoors into their end-to-end encryption, or they'll pass laws that force tech companies to do so.

At a Senate Judiciary Committee hearing on Tuesday that was attended by Apple and Facebook representatives who testified about the worth of encryption that hasn't been weakened, Sen. Lindsey Graham had this to say:

"You're going to find a way to do this or we're going to do this for you.

"We're not going to live in a world where a bunch of child abusers have a safe haven to practice their craft. Period. End of discussion."

It's the latest shot fired in the ongoing war over encryption. The most recent salvos have been launched following the privacy manifesto that Facebook CEO Mark Zuckerberg published in March.

At the time, Zuckerberg framed the company's new stance as a major strategy shift that involves developing a highly secure private communications platform based on Facebook's Messenger, Instagram, and WhatsApp services.

Facebook's plan is to leave the three chat services as standalone apps but to also stitch together their technical infrastructure so that users of each app can talk to each other more easily.

The plan also includes slathering the end-to-end encryption of WhatsApp (which keeps anyone, including Facebook itself, from reading the content of messages) onto Messenger and Instagram. At this point, Facebook Messenger supports end-to-end encryption in "secure connections" mode: a mode that's off by default and has to be enabled for every chat. Instagram has no end-to-end encryption on its chats at all.

You had better end or at least pause your plan, three governments warned Facebook in October.

US Attorney General William Barr and law enforcement chiefs of the UK and Australia signed an open letter calling on Facebook to back off of its encryption-on-everything plan unless it figures out a way to give law enforcement officials backdoor access so they can read messages.

No, Facebook said, with all due respect to law enforcement and its need to keep people safe.

On Monday, Facebook released an open letter it penned in response to Barr.

In the letter, WhatsApp and Messenger heads Will Cathcart and Stan Chudnovsky said that any backdoor access into Facebook's products created for law enforcement would weaken security and let in bad actors who would exploit the access. That's why Facebook has no intention of complying with Barr's request that the company make its products more accessible, they said:

The backdoor access you are demanding for law enforcement would be a gift to criminals, hackers and repressive regimes, creating a way for them to enter our systems and leaving every person on our platforms more vulnerable to real-life harm.

People's private messages would be less secure, and the real winners would be anyone seeking to take advantage of that weakened security. That is not something we are prepared to do.

In his opening statement on Tuesday, Sen. Graham, the chairman of the Senate Judiciary Committee, told Apple and Facebook representatives that he appreciates the fact that people cannot hack into his phone, but that encrypted devices and messaging create a safe haven for criminals and child exploitation.

In Facebook's letter, Cathcart and Chudnovsky pointed out that cybersecurity experts have repeatedly shown that weakening any part of an encrypted system means that it's weakened for everyone, everywhere. It's impossible to create a backdoor just for law enforcement that others wouldn't try to open, they said.

They're not alone in that belief, they said. Over 100 organizations, including the Center for Democracy and Technology and Privacy International, responded to Barr's letter to share their views on why creating backdoors jeopardizes people's safety. Facebook's letter also quoted cryptography professor Bruce Schneier from comments he made earlier this year:

You have to make a choice. Either everyone gets to spy, or no one gets to spy. You can't have "We get to spy, you don't." That's not the way the tech works.

And as it is, Facebook is already working on making its platforms more secure, they said. It's more than doubled the number of employees who are working on safety and security, and it's using artificial intelligence (AI) to detect bad content before anyone even reports it or, sometimes, sees it. For its part, WhatsApp is detecting and banning two million accounts every month, based on abuse patterns. It also scans unencrypted information, such as profile and group information, looking for tell-tale content such as child abuse imagery.

Facebook says that it's been meeting with safety experts, victim advocates, child helplines and others to figure out how to better report harm to children, in ways that are more actionable for law enforcement. It's doing so while trying to balance the demands of other needs: as in, it's also working to collect less personal data, as governments are demanding, and to keep users' interactions private, as those users are demanding.

At a Wall Street Journal event on Tuesday, AG Barr granted that yes, there are benefits to encryption, such as to secure communications with a bank: a financial institution that will, and can, give investigators what they need when served with a warrant.

But he said that the growth of consumer apps with warrant-repellent, end-to-end encryption, like WhatsApp and Signal, has aided terrorist organizations, drug cartels, child molesting rings and kiddie porn type rings.

This war over encryption has been going on since the FBI's many attempts to backdoor Apple's iPhone encryption in the case of the San Bernardino terrorists.

Both sides are sticking to the same rationales they've espoused since the start of this debate. The only real difference in the events of this week is the renewed call for legislation to force backdoors: a threat that is apparently uniting both sides of this otherwise extremely partisan Congress, and hence carries that much more weight.

View post:
Facebook refuses to break end-to-end encryption - Naked Security

Changing the Locks: Proposed Amendments to the Australian Encryption Act – Lexology

The Australian Encryption Act was passed last year in response to the government's concern about misuse of encrypted social media platforms to advance terrorist activities. The Act extended ASIO, Federal, and State law enforcement powers to enable them to issue notices to request access to otherwise encrypted messages from designated communication providers. This was construed broadly to include social media giants such as WhatsApp, device manufacturers, and free Wi-Fi providers. Authorities were also permitted to detain people without a warrant or without allowing them to contact a lawyer.

Initial Response

Since then, the Act has been received with significant caution from the industry. The new Technical Capability Notices (TCN) enabled authorities to require communications providers to establish back doors to allow for interception and decryption of otherwise encrypted messages on specific devices without the customer's knowledge. Agencies can also circumvent encryption by installing key-logging software or by taking repeated screenshots of a customer's screen and messages. Concerns have been raised about individuals' privacy and systemic vulnerabilities caused by techniques to obtain and compromise encrypted data. Managing these concerns is important in a world increasingly concerned about misuse, control and regulation of civilian data, media and digital platforms.

Proposed Amendments

In response to bipartisan recommendations from the inquiry by the Parliamentary Joint Committee on Intelligence and Security (PJCIS), the Labor opposition has proposed amendments to the Act. The first reading of the Telecommunications Amendment (Repairing Assistance and Access) Bill 2019 noted that the legislation has been "holding the [Australian] tech sector back from achieving [its] potential." It expressed concerns that the Act undermines relationships with key international strategic partners, including by slowing discussions with the United States for a bilateral agreement under the US CLOUD Act (Clarifying Lawful Overseas Use of Data).

The Explanatory Memorandum for the Bill describes the following effects of the amendments, if passed:

Regulation plays a vital but complex role in a society increasingly reliant on technology. The Bill's objectives shed light on the government's increasing focus on the role of effective encryption in national security, the importance of strong security regulatory frameworks, and the impact these have on foreign trust in Australia's technology sector.

The rest is here:
Changing the Locks: Proposed Amendments to the Australian Encryption Act - Lexology

The Defense Department Says It Needs the Encryption the FBI Wants to Break – Free

Even the Defense Department is now pointing out that the government's quest to weaken encryption lies somewhere between counterproductive and downright harmful.

Attorney General Bill Barr and Senate Judiciary Committee Chair Lindsey Graham have been on a tear lately in a bid to undermine encryption standards. Those efforts culminated in a hearing this week whose primary purpose appears to have been to demonize encryption by falsely proclaiming it poses a risk to public safety.

Many staffers at both the Department of Justice and FBI have joined the festivities, arguing that encryption enables all manner of nefarious behavior, from human trafficking to child exploitation, as they push for the inclusion of law enforcement backdoors in everything from routers to smartphones.

Actual security experts, and tech giants like Facebook and Apple, have long highlighted the foolishness of such efforts. Encryption aids everybody, they'll note, protecting consumers, activists, and criminals alike. Embed backdoors in encryption and network gear, they've warned, and you're undermining an essential security tool, putting everybody at risk.

"We do not know of a way to deploy encryption that provides access only for the good guys without making it easier for the bad guys to break in," Apple's director of user privacy, Erik Neuenschwander, told hearing attendees.

While vast segments of government have embraced the recent war on encryption, some government officials seem to understand the benefits of retaining strong encryption. This week, Representative Ro Khanna forwarded a letter to Lindsey Graham from the Defense Department's Chief Information Officer, Dana Deasy.

In the letter, first reported by Techdirt, Deasy notes that all DOD-issued unclassified mobile devices are required to be password protected using strong passwords, and that any data-in-transit on DOD-issued mobile devices must be encrypted via VPN.

"The importance of strong encryption and VPNs for our mobile workforce is imperative," Deasy wrote.

"As the use of mobile devices continues to expand, it is imperative that innovative security techniques, such as advanced encryption algorithms, are constantly maintained and improved to protect DoD information and resources," he said. "The Department believes maintaining a domestic climate for state of the art security and encryption is critical to the protection of our national security."

There are endless examples of governments, organizations, and corporations attempting to undermine encryption standards for both surveillance and profit. Comcast, for example, has worked to undermine recent efforts to encrypt Domain Name System (DNS) traffic, because doing so would threaten the company's efforts to monetize user behavior online.

Facebook sent a letter this week to Bill Barr, in which the company made it clear that it would not backdoor its encrypted messaging apps at the government's request.

"Cybersecurity experts have repeatedly proven that when you weaken any part of an encrypted system, you weaken it for everyone, everywhere," Facebook wrote.

But while cybersecurity experts and tech giants spent the week warning that weakening encryption harms everyone, a bipartisan coalition of lawmakers remains stubbornly impervious to the argument.

Democratic Senator Dick Durbin largely mirrored Graham's rhetoric at this week's hearings, insisting the latest war on encryption was about ensuring big tech companies weren't beyond the reach of the law. "We're talking about our government protecting our citizens," he insisted, seemingly oblivious that eroding encryption would likely have the exact opposite impact.

The Justice Department has argued for years that by including strong encryption on their networks and in their products, Silicon Valley giants are undermining the government's quest to rein in criminals. But security experts, and now the DOD, have made it abundantly clear that encryption protects everybody, not just the worst segments of society.

So far, politicians like Graham have made it abundantly clear they're not listening, insisting that if tech companies don't set about backdooring their products and weakening encryption, there will soon be hell to pay.

"My advice to you is to get on with it, because this time next year, if we haven't found a way that you can live with, we will impose our will on you," Graham said.

Read the original:
The Defense Department Says It Needs the Encryption the FBI Wants to Break - Free

Inspecting TLS Web Traffic Part 1 – Security Boulevard

In this series of blogs I'm going to talk about how the continued move towards all web traffic being encrypted has impacted enterprise security. In this blog I'm going to focus on the basics: what encrypted web traffic is, and how you can proactively control it.

TLS encryption is the de-facto encryption technology for delivering secure web browsing, and the benefits it provides are driving the levels of HTTPS traffic to new heights. Every day, more HTTPS web traffic traverses the internet in a form that provides security and trust for users. This traffic is encrypted with TLS, a transport layer encryption protocol that protects data against unauthorized access and eavesdropping. Current estimates indicate that over 90% of all web traffic is now encrypted.

However, not all HTTPS traffic is benign; attackers and malware writers also leverage encryption to hide their activities. In a recent report, it was stated that 60% of malicious traffic is encrypted. Without the proper security controls, encrypted web traffic can be a blind spot in securing your network and users.

TLS Primer

Secure Sockets Layer (SSL) was originally developed by Netscape Communications in 1995 to provide security for internet communications. However, in 1999, Netscape handed over the protocol to the Internet Engineering Task Force (IETF). Later that year, the IETF released TLS 1.0, which was, in reality, SSL 3.1. Recently, TLS 1.3 was released, but most web sites still use TLS 1.2.
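As a quick way to see which of these protocol versions your own client stack will negotiate, Python's standard `ssl` module exposes the configured range. This is a small sketch; the exact output depends on the local OpenSSL build.

```python
import ssl

# Inspect the TLS versions the local OpenSSL build will negotiate.
ctx = ssl.create_default_context()
print(ssl.OPENSSL_VERSION)                     # e.g. the local OpenSSL string
print(ctx.minimum_version, ctx.maximum_version)

# Common hardening step: refuse anything older than TLS 1.2, since
# TLS 1.2 remains the most widely deployed version while TLS 1.3
# adoption grows.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```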

For clarity, in these blogs, I exclusively use TLS, but this has exactly the same meaning as SSL or SSL/TLS.

TLS provides a secure channel between two endpoints, typically a client browser and a web server, to provide protection against eavesdropping, forgery of, or tampering with, the traffic. To provide this security, TLS uses X.509 digital certificates for authentication, encryption to ensure privacy, and digital signatures to ensure integrity.

Essentially, TLS creates a secure tunnel between the two endpoints, and the web traffic is transmitted inside the tunnel. The encrypted traffic is called HTTPS and uses TCP port 443 to communicate between the client browser and the web server; unencrypted HTTP traffic uses TCP port 80.

It is worth noting that, although TLS is primarily used to secure HTTP traffic, it was designed so that it could provide security for many other application protocols that run over TCP.

HTTPS Web Traffic: An Overview

To allow proactive inspection and control of HTTPS web traffic, it is necessary to look inside the secure tunnel and examine the encrypted traffic. One effective way to deliver this capability is to deploy a Secure Internet Gateway (SIG) or Secure Web Gateway (SWG) that is able to intercept and decrypt the HTTPS traffic. This technique of intercepting and decrypting traffic is known as Man-in-the-Middle (MITM).

To achieve MITM, a secure connection is created between the client browser and the Secure Internet Gateway (SIG) or Secure Web Gateway (SWG), which decrypts the HTTPS traffic into plain text. Then, after being analyzed, the traffic is re-encrypted, and another secure connection is created between the SIG or SWG and the web server. This means that the SIG or SWG is effectively acting as a TLS proxy server and can both intercept the TLS connection and inspect the requested content.
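The two-leg flow can be sketched in a few lines. Below, a toy XOR keystream stands in for TLS record encryption (it is not real cryptography, and the function and key names are invented for illustration): the gateway decrypts the client leg, inspects the plaintext request, then re-encrypts on a separate leg to the real server, or blocks the request outright.

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a hash-derived keystream -- a stand-in for TLS record
    encryption, NOT real cryptography. XOR is its own inverse, so this
    function both encrypts and decrypts."""
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        block += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def swg_forward(client_key, server_key, ciphertext, blocklist):
    """Sketch of the gateway's MITM flow: terminate the client-side leg,
    inspect the plaintext request, then open a fresh encrypted leg to
    the real server -- or drop the request."""
    request = toy_cipher(client_key, ciphertext)    # decrypt client leg
    if any(bad in request for bad in blocklist):    # URL/payload analysis
        return None                                 # verdict: malicious
    return toy_cipher(server_key, request)          # re-encrypt server leg

# A benign request passes through; a blocklisted one is dropped.
ck, sk = b"client-leg-key", b"server-leg-key"
out = swg_forward(ck, sk, toy_cipher(ck, b"GET /news"), [b"evil.example"])
print(toy_cipher(sk, out))                          # b'GET /news'
print(swg_forward(ck, sk, toy_cipher(ck, b"GET http://evil.example/x"),
                  [b"evil.example"]))               # None
```

The key design point this illustrates is that the gateway holds two independent keys: the client never shares a secure session directly with the server, which is exactly why the gateway can read the traffic in between.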

This capability is available in Akamai's Enterprise Threat Protector (ETP) service, and it allows inspection of the requested URL to determine whether it is safe or malicious. Payloads received from the web servers are also decrypted and inspected by the ETP Payload Analysis functions to determine if the content is safe or malicious.


*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Jim Black. Read the original post at: http://feedproxy.google.com/~r/TheAkamaiBlog/~3/SmvM3N8ShWc/inspecting-tls-web-traffic---part-1.html

Follow this link:
Inspecting TLS Web Traffic Part 1 - Security Boulevard