AI, 5G, ‘ambient computing’: What to expect in tech in 2020 and beyond – USA TODAY

'Tis the end of the year, when pundits typically dust off the crystal ball and take a stab at what tech, and its impact on consumers, will look like over the next 12 months.

But we're also on the doorstep of a brand-new decade, which this time around promises further advances in 5G networks, artificial intelligence, quantum computing, self-driving vehicles and more, all of which will dramatically alter the way we live, work and play.

So what tech advances can we look forward to in the new year? Here's what we can expect to see in 2020 and, in some cases, beyond.


The next generation of wireless has shown up on lists like this for years now. But in 2020, 5G really will finally begin to make its mark in the U.S., with all four major national carriers (three, if the T-Mobile-Sprint merger finally goes through) continuing to build out their 5G networks across the country.

We've been hearing about the promise of 5G on the global stage for what seems like forever, and the carriers recently launched it in select markets. Still, the rollout in most places will continue to take time, as will the payoff: blistering fast wireless speeds and network responsiveness on our phones, improved self-driving cars and augmented reality, remote surgery, and entire smart cities.

As 2019 winds down, only a few phones can exploit the latest networks, not to mention all the remaining holes in 5G coverage. But you'll see a whole lot more 5G phone introductions in the new year, including what many of us expect will be a 5G iPhone come September.



When those holes are filled, roughly two-thirds of consumers said they'd be more willing to buy a 5G-capable smartphone, according to a mobile trends survey by Deloitte.

But Deloitte executive Kevin Westcott also said that telcos will need to manage consumer expectations about what 5G can deliver and determine what the killer apps for 5G will be.

The Deloitte survey also found that a combination of economic barriers (pricing, affordability) and a sense that current phones are good enough will continue to slow the smartphone refresh cycle.

Are you ready for all the tech around you to disappear? No, not right away. The trend towards so-called ambient computing is not going to happen overnight, nor is anyone suggesting that screens and keyboards are going to go away entirely, or that you'll stop reaching for a smartphone. But as more tiny sensors are built into walls, TVs, household appliances, fixtures, what you're wearing, and eventually even your own body, you'll be able to gesture or speak to a concealed assistant to get things done.

Steve Koenig, vice president of research at the Consumer Technology Association, likens ambient computing to Star Trek, and suggests that at some point we won't need to place Amazon Echo Dots or other smart speakers in every room of the house, since we'll just speak out loud to whatever, wherever.

Self-driving cars have been getting most of the attention. But it's not just cars that are going autonomous; try planes and boats.

Cirrus Aircraft, for example, is in the final stages of getting Federal Aviation Administration approval for a self-landing system for one of its private jets, and the tech, which I recently got to test, has real potential to save lives.

How so? If the pilot becomes incapacitated, a passenger can press a single button on the roof of the main cabin. At that moment, the plane starts acting as if the pilot were still at the controls. It factors in real-time weather, wind, the terrain, how much fuel remains, and all the nearby airports where an emergency landing is possible, including the lengths of all runways, and automatically broadcasts its whereabouts to air traffic control. From there the system safely lands the plane.

Or consider the 2020 version of the Mayflower, not a Pilgrim ship, but rather a marine research vessel from IBM and a marine exploration non-profit known as Promare. The plan is to have the unmanned ship cross the Atlantic in September from Plymouth, England, to Plymouth, Massachusetts. The ship will be powered by a hybrid propulsion system, utilizing wind, solar, state-of-the-art batteries, and a diesel generator. It plans to follow the 3,220-mile route the original Mayflower took 400 years ago.

Two of America's biggest passions come together. Esports is one of the fastest-growing spectator sports around the world, and the Supreme Court cleared a path last year for legalized gambling across the states. The betting community is licking its chops at the prospect of exploiting this mostly untapped market. You'll be able to bet on esports in more places, whether at a sportsbook inside a casino or through an app on your phone.

One of the scary prospects about artificial intelligence is that it is going to eliminate jobs. Research out of MIT and IBM Watson suggests that while AI will certainly impact the workplace, it won't lead to a huge loss of jobs.

That's a somewhat optimistic take, given an alternate view that AI-driven automation is going to displace workers. The research suggests that AI will increasingly help us with tasks that can be automated, but will have a less direct impact on jobs that require skills such as design expertise and industrial strategy. The onus will be on bosses and employees to start adapting to new roles and to try to expand their skills, efforts the researchers say will begin in the new year.

The scary signs are still out there, however. For instance, McDonald's is already testing AI-powered drive-thrus that can recognize voice, which could reduce the need for human order-takers.

Perhaps it's more wishful thinking than a flat-out prediction, but as Westcott puts it, "I'm hoping what goes away are the 17 power cords in my briefcase." Presumably a slight exaggeration.

But the things we all want to see are batteries that don't prematurely peter out, and more seamless charging solutions.

We're still far off from the day when you'll be able to get ample power to last all day on your phone or other devices just by walking into a room. But over-the-air wireless charging is slowly but surely progressing. This past June, for example, Seattle company Ossia received FCC certification for a first-of-its-kind system to deliver over-the-air power at a distance. Devices with Ossia's tech built in should start appearing in the new year.

The Samsung Galaxy Fold smartphone, featuring a foldable OLED display. (Photo: Samsung)

We know how the nascent market for foldable phones unfolded in 2019: things were kind of messy. Samsung's Galaxy Fold was delayed for months following screen problems, and even when the phone finally did arrive, it cost nearly $2,000. But that doesn't mean the idea behind flexible screen technologies goes away.

Samsung is still at it, and so is Lenovo-owned Motorola with its new retro Razr. The promise remains the same: let a device fold or bend in such a way that you can take a smartphone-like form factor and morph it into a small tablet or computer. The ultimate success of such efforts will boil down to at least three of the factors that are always critical in tech: cost, simplicity, and utility.

Data scandals and privacy breaches have placed Facebook, Google and others in the government's crosshairs, and ordinary citizens are concerned. Expect some sort of reckoning, though it isn't obvious at this stage what that reckoning will look like.

Pew recently put out a report that says roughly 6 in 10 Americans believe it is not possible to go about their daily lives without having their data collected.

"The coming decade will be a period of lots of ferment around privacy policy and also around technology related to privacy," says Lee Rainie, director of internet and technology research at Pew Research Center. He says consumers will potentially have more tools to give them a bit more control over how and what data gets shared and under whatcircumstances. "And there will be a lot of debate over what the policy should be."

Open question: Will there be national privacy regulations, perhaps ones modeled after the California law that is set to go into effect in the new year?

It isn't easy to explain quantum computing or the field it harnesses, quantum mechanics. In the simplest terms, think of something exponentially more powerful than what we consider conventional computing, which is expressed in the 1s and 0s of bits. Quantum computing takes a quantum leap with what are known as "qubits."

And while IBM, Intel, Google, Microsoft and others are all fighting for quantum supremacy, the takeaway over the next decade is that the tech may help solve problems far faster than before, from diagnosing disease to cracking forms of encryption, raising the stakes in data security.


What tech do you want or expect to see? Email: ebaig@usatoday.com; Follow @edbaig on Twitter.



What Was The Most Important Physics Of 2019? – Forbes

So, I've been doing a bunch of talking in terms of decades in the last couple of posts, about the physics defining eras in the 20th century and the physics defining the last couple of decades. I'll most likely do another decadal post in the near future, this one looking ahead to the 2020s, but the end of a decade by definition falls at the end of a year, so it's worth taking a look at physics stories on a shorter time scale, as well.


You can, as always, find a good list of important physics stories in Physics World's Breakthrough of the Year shortlist, and there are plenty of other top science stories of 2019 lists out there. Speaking for myself, this is kind of an unusual year, and it's tough to make a call as to the top story. Most of the time, these end-of-year things are either stupidly obvious because one story towers above all the others, or totally subjective because there are a whole bunch of stories of roughly equal importance, and the choice of a single one comes down to personal taste.

In 2019, though, I think there were two stories that are head-and-shoulders above everything else, but roughly equal to each other. Both are the culmination of many years of work, and both can also claim to be kicking off a new era for their respective subfields. And I'm really not sure how to choose between them.

US computer scientist Katherine Bouman speaks during a House Committee on Science, Space and Technology hearing on the "Event Horizon Telescope: The Black Hole Seen Round the World" in the Rayburn House office building in Washington, DC, on May 16, 2019. (Photo: Andrew Caballero-Reynolds/AFP via Getty Images)

The first of these is the more photogenic of the two, namely the release of the first image of a black hole by the Event Horizon Telescope collaboration back in April. This one made major news all over, and was one of the experiments that led me to call the 2010s the decade of black holes.

As I wrote around the time of the release, this was very much of a piece with the preceding hundred years of tests of general relativity: while many stories referred to the image as a "shadow" of the black hole, really it's a ring produced by light bending around the event horizon. This is the same basic phenomenon that Eddington measured in 1919 by looking at the shift in the apparent position of stars near the Sun, providing confirmation of Einstein's prediction that gravity bends light. It's just that scaling up the mass a few million times produces a far more dramatic bending of spacetime (and thus light) than the gentle curve produced by our Sun.

This Feb. 27, 2018, photo shows electronics for use in a quantum computer in the quantum computing lab at the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y. Describing the inner workings of a quantum computer isn't easy, even for top scholars. That's because the machines process information at the scale of elementary particles such as electrons and photons, where different laws of physics apply. (AP Photo/Seth Wenig)

The other story, in very 2019 fashion, first emerged via a leak: someone at NASA accidentally posted a draft of the paper in which Google's team claimed to have achieved quantum supremacy. They demonstrated reasonably convincingly that their machine took about three and a half minutes to generate a solution to a particular problem that would take vastly longer to solve with a classical computer.

The problem they were working with was very much in the quantum simulation mode that I talked about a year earlier, when I did a high-level overview of quantum computing in general, though a singularly useless version of that. Basically, they took a set of 50-odd qubits and performed a random series of operations on them to put them in a complicated state in which each qubit was in a superposition of multiple states and also entangled with other qubits in the system. Then they measured the probability of finding specific output states.
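To make that procedure a little more concrete, here is a toy version of random-circuit sampling sketched in Python with the Cirq library. This is an illustration under my own assumptions, not the circuit Google actually ran: it simulates just a handful of qubits classically, whereas the real experiment ran 50-odd qubits on hardware.

```python
# Toy random-circuit sampling (pip install cirq). Apply layers of random
# single-qubit rotations plus entangling gates, measure, and look at the
# distribution of output bitstrings -- the quantity Google's experiment
# compared against classical simulation.
import random
import cirq

qubits = cirq.LineQubit.range(4)
circuit = cirq.Circuit()

for _ in range(5):
    # Random single-qubit rotations on every qubit.
    for q in qubits:
        gate = random.choice([cirq.X, cirq.Y, cirq.Z]) ** random.uniform(0, 1)
        circuit.append(gate(q))
    # Entangle neighbouring pairs so the state becomes hard to track classically.
    for a, b in zip(qubits[::2], qubits[1::2]):
        circuit.append(cirq.CZ(a, b))

circuit.append(cirq.measure(*qubits, key="m"))

result = cirq.Simulator().run(circuit, repetitions=2000)
print(result.histogram(key="m").most_common(5))  # most frequent output bitstrings
```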

Qubit, or quantum bit, illustration. The qubit is a unit of quantum information. As a two-state system with superposition of both states at the same time, it is fundamental to quantum computing. The illustration shows the Bloch sphere. The north pole is equivalent to one, the south pole to zero. The other locations, anywhere on the surface of the sphere, are quantum superpositions of 0 and 1. When the qubit is measured, the quantum wave function collapses, resulting in an ordinary bit - a one or a zero - which effectively depends on the qubit's 'latitude'. The illustration shows the qubit 'emitting' a stream of wave functions (the Greek letter psi), representing the collapse of the wave function when measured.

Finding the exact distribution of possible outcomes for such a large and entangled system is extremely computationally intensive if you're using a classical computer to do the job, but it happens very naturally in the quantum computer. So they could get a good approximation of the distribution within minutes, while the classical version would take a lot more time, where "a lot more time" ranges from thousands of years (Google's claim) down to a few days (the claim by a rival group at IBM using a different supercomputer algorithm to run the computation). If you'd like a lot more technical detail about what this did and didn't do, see Scott Aaronson.

As with the EHT paper, this is the culmination of years of work by a large team of people. It's also very much of a piece with past work: quantum computing as a distinct field is a recent development, but really, the fundamental equations used to do the calculations were pretty well set by 1935.


Both of these projects also have a solid claim to be at the forefront of something new. The EHT image is the first to be produced, but won't be the last: they're crunching numbers on the Sgr A* black hole at the center of the Milky Way, and there's room to improve their imaging in the future. Along with the LIGO discovery from a few years ago, this is the start of a new era of looking directly at black holes, rather than just using them as a playground for theory.

Google's demonstration of quantum supremacy, meanwhile, is the first such result in a highly competitive field: IBM and Microsoft are also invested in similar machines, and there are smaller companies and academic labs exploring other technologies. The random-sampling problem they used is convenient for this sort of demonstration but not really useful for anything else, yet lots of people are hard at work on techniques to make a next generation of machines that will be able to do calculations where people care about the answer. There's a good long way to go yet, but a lot of activity in the field is driving things forward.

So, in the head-to-head matchup for Top Physics Story of 2019, these two are remarkably evenly matched, and it could really go either way. The EHT result has a slightly deeper history, the Google quantum computer arguably has a brighter future. My inclination would be to split the award between them; if you put a gun to my head and made me pick one, I'd go with quantum supremacy, but I'd seriously question the life choices that led you to this place, because they're both awesome accomplishments that deserve to be celebrated.


Is getting paid to write open source a privilege? Yes and no – TechRepublic

Commentary: While we tend to think of open source as a community endeavor, at its heart open source is inherently selfish, whether as a contributor or user.


"[O]ften [a] great deal of privilege is needed to be able to dedicate one's efforts to building Free Software full time," declared Matt Wilson, a longtime contributor to open source projects like Linux. He's right. While there are generally no legal hurdles for a would-be contributor to clear, there are much more pragmatic constraints like, for example, rent. While many developers might prefer to spend all of their time writing and releasing open source software, comparatively few can afford to do so or, at least, on a full-time basis.

And that's OK. Because maybe, just maybe, "privilege" implies the wrong thing about open source software.

Who are these developers privileged to get to write open source software? According to GitHub COO Erica Brescia, 80% of the developers actively contributing to open source GitHub repositories come from outside the US. Of course, the vast majority of these developers aren't contributing full-time. According to an IDC analysis, of the roughly 24.2 million global developers, roughly half (12.5 million) get paid to write software full-time, while another seven million get paid to write software part-time.

But this doesn't mean they're getting paid to write open source software, whether full-time or part-time.

SEE: Open source vs. proprietary software: A look at the pros and cons (TechRepublic Premium)

If there are 12.5 million paid full-time developers globally, a small percentage of that number gets paid to write open source software. There simply aren't that many companies that clearly see a return on open source investments. Using open source? Of course. Contributing to open source? Not so much. This is something Red Hat CEO Jim Whitehurst called out as a problem over 10 years ago. It remains a problem.

As for individual developers like osxfuse maintainer Benjamin Fleischer, it's a persistent struggle to figure out how to get paid for the valuable work they do. Going back to Wilson's point, most developers simply don't get to enjoy the privilege of spending their time giving away software.

Is this a bad thing?

When I asked if full-time open source is an activity only the rich (individuals or companies) can indulge, developer Henrik Ingo challenged the assumptions underlying my question. "Why should we expect anyone to contribute to open source in the first place?" he queried. Then he struck at the very core of the assumption that anyone should contribute to open source:

Some of us donate to charity, some others receive that gift. Some do both, at different phases in life, or even at the same time. Yet neither of those roles makes us a better person than the other. With open source, the idea is that we share an abundant resource. If you go back to Cathedral and Bazaar, the idea of "scratch your own itch" is prevalent. You write a tool, or fix an existing one, because you needed it. Or you write code to learn. Or just social reasons! Whatever your reasons, nobody should be expected to contribute code as some kind of tax you have to pay to justify your existence on this planet.

Open source, in other words, is inherently self-interested, and that self-interest brings its own rewards. Sometimes unpaid work becomes paid work, as was the case with Andy Oliver. Sometimes it doesn't. If the work is fulfilling in and of itself, it may not matter whether that developer ever gets the "privilege" to spend all of her time getting paid to write open source software. It also may not matter whether that software is open source or closed.

SEE: How to build a successful developer career (free PDF) (TechRepublic)

To Ingo's point, we may need to stop trying to impose ethical obligations on software developers and users, whether open source or not. I personally think more open source tends toward more good and, frankly, more realization of self-interest, because it can be a great way to share the development load. For downstream users, contributing back can be a great way to minimize the accumulation of technical debt that can collect on a fork.

But in any case, there isn't a compelling reason to browbeat others into contributing. Open source is inherently self-interested, to Ingo's point, whether as a user or contributor. When I use open source software, I benefit. When I contribute, I benefit. Either way, I (and we) am privileged.



May the Open Source Force Be with You – Enterprise License Optimization Blog


I'm giving away my affinity for Star Wars. It's true. I was there when the first movie hit the big screen (let's just say, a while ago), and I am now dreading, while at the same time wildly anticipating, the release of this last movie in the Skywalker saga, Star Wars: The Rise of Skywalker. As mawkishly sentimental as it may appear, I had to work it into a blog. Only here, in the end, will we most likely understand some of the true plot lines of this epic story.

How does this relate to my thoughts about open source software? I look at how the Skywalker story evolved over the years, how it took shape, and what and who impacted the narrative.

The same lookback can be undertaken for open source and, in fact, it has been, right here in my own blog, on the occasion of the term's 21st birthday (Happy Birthday, Open Source. The Term Turns 21). Let's take that story a little further.

There are events that have shaped the course of open source and how companies use and implement it: lawsuits such as Oracle v. Google and Versata v. Ameriprise, and, a number of years ago, the Free Software Foundation going after Cisco over some GPL code in one of its routers. The Heartbleed vulnerability, of course, has a place in history given the impact it had on companies such as Equifax.

Likewise, Software Composition Analysis (SCA) has changed the course of open source management and the perceived risks associated with open source. SCA enables companies to be more proactive in their management of open source. There was a realization after Heartbleed that just because the source code is open doesn't mean it can go without examination and oversight to mitigate risks.

SCA allows companies to better understand what open source software they are using. It allows for the discovery of that open source, and it allows for the remediation of threat issues in a way that isn't possible without the ongoing, automated and controlled monitoring of an SCA platform.

During the exposure of the Heartbleed vulnerability, development teams went on hunting missions. They scrambled to understand what version of OpenSSL they had and then conducted fire drills to figure out how to rectify and remediate quickly and efficiently. SCA is a game changer for those companies that had to crawl painstakingly through complicated processes and manual work to get a handle on the situation. With SCA there is a continuous process that allows for in-the-moment understanding of what open source libraries you're using and what versions are in use.

One of the challenges is that companies could have multiple versions of the same open source library in their product. Version control is a real issue. The ability to leverage SCA to make sure you have the latest version of a particular library and the version that is approved, safe, has the most desirable license terms according to your policies, and is used consistently across the entire product line is a huge benefit.
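To illustrate the kind of inventory-and-check step an SCA platform automates continuously, here is a minimal Python sketch. The known_bad table is hypothetical data invented for the example; a real platform pulls advisories from vulnerability databases and also covers vendored and transitive dependencies across every language in the product, not just installed Python packages.

```python
# Minimal sketch of an open source inventory check (hypothetical advisory data).
from importlib.metadata import distributions

known_bad = {
    # package name -> versions with a known advisory (placeholder values)
    "examplelib": {"1.0.1", "1.0.2"},
}

# Build an inventory of installed packages and their versions.
inventory = {dist.metadata["Name"].lower(): dist.version for dist in distributions()}

for name, version in sorted(inventory.items()):
    if version in known_bad.get(name, set()):
        print(f"FLAG: {name} {version} has a known advisory - remediation needed")
    else:
        print(f"ok:   {name} {version}")
```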

In the end, you're accountable. You're accountable from a reporting standpoint and to stakeholders about what's in your solutions. That's peace of mind.

"Your focus determines your reality." (Qui-Gon Jinn)


Ubuntu turns 15: what impact has it had and what does the future hold? – TechRadar

In 1991, Linus Torvalds created the Linux operating system just for fun and, 13 years later, Ubuntu was born, offering developers a more lightweight, user-friendly distribution.

This year marks Ubuntu's 15th birthday, with it now having established itself as the leading open source operating system across public and OpenStack clouds.

As we reflect on this milestone, those of us at Canonical are thinking about what it is that sets us apart from other Linux distributions and has driven Ubuntu to underpin so many successful projects in its time.

One thing which has really come to the foreground during this time is Ubuntu's popularity both as a desktop and as a server. There is immense appreciation and adoption from the developer community, with millions of PC users around the world running Ubuntu. As the cloud landscape has matured, so too has the popularity of this OS as a server. The majority of public cloud workloads run on Ubuntu and, with our continued promise of free access for everyone, it has helped democratise access to innovation.

We frequently hear from people who tell us that they wouldn't be where they are if they hadn't had access to Ubuntu. In fact, our CEO, Mark Shuttleworth, recently said in an interview that "Ubuntu gives millions of software innovators around the world a platform that's very cheap, if not free, with which they can innovate." It helps the future to arrive faster, no matter who or where you are. In this respect it is unique as the one Linux distribution which has made Linux accessible to - and consumable by - everyone.

It is this ideology that has resulted in the numerous Ubuntu success stories. We so often hear from people about their stories and how they have used this open source platform to build incredible businesses upon. As mentioned earlier, the majority of the public cloud workloads run on Ubuntu, and so almost any hyper-scale company today is using Ubuntu. But often the best stories are those where people found ways to be creative and build something new using Ubuntu, or when scientists have made significant breakthroughs and the accompanying photo published in the press shows an Ubuntu desktop in the background.

Paramount in this success is the community mindset. At Canonical, we talk a lot about how the open source community powers fast and efficient innovation through a collaborative approach. The next wave of innovation will be powered by software which builds upon this collaborative effort, not just from a single company, but from a community committed to improving the entire landscape. The future success of self-driving cars, medical robots or smart cities is not something which should be entrusted to a select few companies, but instead to a global community of innovators who can come together to achieve the very best outcome.

Ubuntu will continue to be a platform which serves its users, no matter what their needs. On the desktop, we will continue to focus on performance and the engineering use case. We have increasingly seen requests for Ubuntu as an enterprise desktop for development teams, and that will also play a role. The majority of our OEM partners are looking to partner around AI/ML workstations for data scientists, so again that is another focus area.

On the server side, the focus is on security, performance, and long-term support. We're looking closely at the latest innovations around performance, especially in the Linux kernel, to provide the best possible OS for high-performance workloads. Canonical already provides five years of free security updates with every LTS release. Furthering this support offering is the continued expansion of the Extended Security Maintenance program, within which our paid customer base benefits from access to even more security patches.

Ubuntu has been a springboard for many and we will continue our commitment to this mission. With the developer community at the heart of this distribution, Canonical will continue to provide the accessibility to development tools that will enable fast and reliable innovation, to power a more successful future.


Kubernetes and the Industrial Edge to be Researched by ARC – ARC Viewpoints

I'd like to talk about what I'll call The Cloud Native Tsunami, which is the emerging software architecture for cloud, but also for enterprise, and eventually for edge and even embedded software as well.

It has been my thesis for a couple of years that when Marc Andreessen (the co-founder of Netscape) said, "Software is eating the world," the software that's really going to eat the world has been cloud software, and this is especially true for software development. My thesis is that the methods and technologies people use to develop and deploy cloud software will eventually swallow and take over the enterprise space in our corporate data centers, will take over the edge computing space, and will even threaten the embedded software space. Today each of these domains has different development tools in use, but my thesis is that cloud software tools will eventually take over, and the same common development tools and technologies will end up being used across all of these domains.

In mid-November, I attended the KubeCon America event in San Diego, California. KubeCon is an event sponsored by the Cloud Native Computing Foundation (CNCF), which is an umbrella organization under the Linux Foundation. CNCF manages a number of open source software projects critical to cloud computing. The growth of this conference has been phenomenal, and its name KubeCon stems from Kubernetes, which is the flagship software project managed by this organization.

Kubernetes is an open source software project that orchestrates large deployments of containerized software applications in a distributed system. These can be deployed across a cloud provider, or across a cloud provider and an enterprise, or basically anywhere. The growth of this conference, as you can see from the chart, has been phenomenal. Five years ago, the Kubernetes software was private and maintained within Google. Then Google released Kubernetes as open source, and since then the KubeCon event and the interest in this software have grown exponentially.
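As a concrete illustration of what "orchestrating containerized applications" means in practice, here is a minimal sketch using the official Kubernetes Python client. It assumes a kubeconfig pointing at a running cluster, and the Deployment name and image are placeholders for the example: you declare the desired state, and Kubernetes' control loops keep that many replicas running wherever capacity exists.

```python
# Minimal declarative deployment via the official client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-nginx"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state: three copies of the container
        selector=client.V1LabelSelector(match_labels={"app": "demo-nginx"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-nginx"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.17")]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created; the scheduler decides where the containers actually run.")
```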

It certainly seems to me that Kubernetes represents a software inflection point similar to ones we've seen in the past. For instance, when Microsoft presented its Microsoft Office suite, it defined personal productivity applications for the PC. Or before Y2K, when enterprises were rewriting their existing software to avoid Y2K bugs, but in doing so were generally leaping onto SAP R/2 in order to avoid issues with Y2K. Or maybe it's a little bit like the introduction of Java, which defined a multi-platform execution environment in a virtual machine, and maybe also a bit like the early days of e-commerce, when for the first time the worldwide web was linked to enterprise databases, transactions, and business processes.

This rapid growth in interest in Kubernetes has been phenomenal, but exponential growth is obviously unsustainable or the whole planet would soon be going to one software development conference. One thing that's very important to point out with this rapid growth (from basically nothing to 23,000 people attending these events) is that there is a people constraint in this ecosystem right now. There is a shortage of people who are deeply experienced. Even some of the exhibitors and sponsors at KubeCon came to the event just to recruit talented software developers with Kubernetes experience. And as you can see from the chart, there are not a lot of people in the world who have more than five years of Kubernetes experience!

In addition to Kubernetes, the Cloud Native Computing Foundation curates several other open source software projects. These projects provide services or other kinds of auxiliaries that are important for distributed applications. While Kubernetes is the flagship project, there are other projects that are in different stages of development. The CNCF breaks projects into three areas: "graduated," for software projects that are ready to be incorporated into commercial products; "incubating," which refers to an open source software project that is in a more rapid state of development and change; and, finally, a third tier below this which CNCF calls "sandbox" projects, which are more embryonic projects that are newer, less fully developed, still emerging. And of course, there are any number of software projects outside of this CNCF ecosystem, but CNCF is a major ecosystem for open source software for the cloud.

From the conference, we could see the enterprise impact of Kubernetes is still relatively low. In other words, market leaders are using this technology now, but in general it's at the early stages of deployment even among the leaders, and most enterprises have not yet adopted containerized applications with Kubernetes for orchestration. But growth in this area is inevitable. This is, as I said before, like Microsoft Office, or like SAP, or like Java; it's coming to the enterprise. Even though penetration is still low, leaders are rolling out and managing distributed applications at scale, and Kubernetes is the tool that people are turning to in order to do this.

The auxiliary open source projects I mentioned before will grow the capabilities of Kubernetes over time. So, a number of auxiliary services for data storage, for stateful behavior, for network communications, software defined networking, etc. are going to supplement Kubernetes and make it more powerful. While at the same time, other engineers are working to make this kind of technology, as complex as it is, easier to use and to deploy.

I should mention a couple of vertical industry initiatives where Kubernetes is especially attractive. One of them is 5G telecommunications. Telecom service providers are extremely interested in digitizing their services as they move to 5G. Instead of maintaining services at a tower base cell and providing them via dedicated-function hardware/software appliances, telecom providers are now looking to virtualize these network functions and deploy them digitally. So they will have a very large set of applications to manage at a huge scale, and they've turned to Kubernetes to do this.

A second vertical industry area that is important is the ability to manage new automotive products. This can be autonomous vehicles, fleets of vehicles, or just vehicles that have much more software content than vehicles used to have. Clearly, there's a need for these automakers to manage large-scale software deployments at hundreds or thousands of end points and do so with very low costs and very high reliability. So, there are certainly vertical industry initiatives that are driving Kubernetes from the cloud service providers through the data centers toward the edge.

But what about the industrial edge? When we turn to the industrial edge (the figure referenced here is from Siemens Research), we can divide the compute world into four different domains. At the industrial edge we have much more restricted capability in terms of compute power, storage and networking than we find at a data center, be that a corporate data center or the commercial public cloud. And we can go a level further and see that the automation or manufacturing devices themselves, things such as programmable logic controllers, CNC machines, robotics, etc., are generally addressed by an embedded system that is built for purpose.

One difficulty is that deploying Kubernetes and managing containerized applications at scale requires larger amounts of compute, network and storage capacity than these edge-domain and device-domain systems now have. So, this is an area where there's a big challenge to adopt this new technology. Why am I so optimistic that this is going to happen? I'm very optimistic because there are very similar challenges in the two huge vertical industries that I mentioned, the automotive and telecommunications industries. These industries also have thousands, or tens of thousands, of small systems on the edge on which they need to maintain and deploy software. That challenge is going to have to be met one way or another, and there's extensive research and development going on now to do just that.

So, in terms of its industrial and industrial IoT impact (though industrial automation is traditionally a technology laggard), industrial IoT applications are definitely a target for Kubernetes. And this involves moving orchestrated, containerized software apps to the edge. As I mentioned, both automotive and industrial applications have similar kind of constraints. They have low compute capability, small footprint, and generally they also demand low bill-of-material costs for the kind of solutions that they can provide. This remains a challenge, but again, I think there are a number of venture stage companies and a lot of research going on to bridge this gap, and people are going to find a way to do that effectively.

But that makes the future very difficult to map out. This ecosystem is extremely dynamic. As I mentioned, Kubernetes was not even in the public domain five years ago. Now it has, if you will, taken over mind share in terms of the technology that people are going to use to orchestrate containerized applications. But, the next five years are likely to be equally revolutionary. So, it's absurdly difficult to map out this space and say, "Here is where it's going to go in 5 years."

But I found that this little quote I saw at KubeCon was interesting and I think if you're working in manufacturing or manufacturing automation, you'll find this interesting, too. This is a description of Kubernetes by one of the co-chairs of their architecture special interest group.

The entire system [that being a Kubernetes deployment] can now be described as an unbounded number of independent, asynchronous control loops reading from and writing to a schematized resource store as the source of truth. This model has proven to be resilient, evolvable and extensible.

What he's talking about here in terms of control loops are not control loops in the automation sense. They are control loops in the enterprise software sense. These control loops are functions that Kubernetes is performing to maintain a software deployment and monitor the health of this deployment. I found this interesting in that at this level (at the deployment level) for huge distributed applications, people view Kubernetes as a driver of a large number of independent and asynchronous control loops. It points out, to me, that the same sort of technology could be used to manage other types of control loops in automation within a manufacturing operation.

This leads to an upcoming ARC research topic. ARC Advisory Group is beginning research into industrial edge orchestration, specifically the orchestration of applications that are distributed across industrial manufacturing, the industrial internet of things and infrastructure. Because the technology is at such an early stage (even though it's critical for the future of industrial automation and for the fourth industrial revolution, or Industry 4.0), the field is very dynamic, and it's very difficult to map out such a nascent and varied landscape of technologies for integrating and orchestrating the industrial edge. During this research, ARC will be studying the products and technologies of many venture-stage firms as well as open source projects that are designed to bridge the gap between the cloud and the industrial edge; these include infrastructure for 5G telecommunications, edge networks, requirements to manage fleets of vehicles, as well as the networking opportunities that are afforded by 5G itself.

With this industry at such an early stage, any detailed market forecast would be highly speculative and very uncertain. But ARC has decided to map out this landscape and plans to provide as deliverables for this research a series of podcasts, webcasts, and reports for our ARC Advisory Service clients. So, ARC is reaching out to relevant suppliers in this space, be they hardware, software or services suppliers, to participate in this research initiative. If your firm would like to participate in this research, ARC welcomes your input. Please use this link to connect with ARC or feel free to contact me at hforbes@arcweb.com and I'll be happy to discuss this project with you.


Hash Check – How, why, and when you should hash check – proprivacy.com

Here at ProPrivacy we just love open source software. This is mainly because, despite not being perfect, open source provides the only way to know for sure that a program is on the level.

One problem, though, is how do you know that an open source program you download from a website is the program its developer(s) intended you to download? Cryptographic hashes are a partial solution to this problem.

A cryptographic hash is a checksum or digital fingerprint derived by performing a one-way hash function (a mathematical operation) on the data comprising a computer program (or other digital files).

Any change in just one byte of the data comprising the computer program will change the hash value. The hash value is, therefore, a unique fingerprint for any program or other digital files.

Ensuring that a program has not been tampered with, or just corrupted, is a fairly simple matter of calculating its hash value and then comparing it with the hash checksum provided by its developers.

If it's the same, then you have a reasonable degree of confidence that the program you have downloaded is exactly the same as the one published by its developer. If it is not, then the program has been changed in some way.

The reasons for this are not always malicious (see below), but a failed hash check should set alarm bells ringing.

You may have detected a note of caution in our singing the praises of hash checks...

Hash checks are useful for ensuring the integrity of files, but they do not provide any kind of authentication. That is, they are good for ensuring the file or program you have matches the source, but they provide no way of verifying that the source is legitimate.

Hash checks provide no guarantee as to the source of the hash checksum.

For example, fake websites exist which distribute malicious versions of popular open source software such as KeePass. Many of these websites even provide hash checksums for the programs they supply and, were you to check these against the fake program, they would match. Oops.

An additional problem is that mathematical weaknesses can mean that hashes are not as secure as they should be.

The MD5 algorithm, for example, remains a highly popular hash function, despite its known vulnerability to collision attacks. Indeed, even SHA1 is no longer considered secure in this regard.

Despite this, MD5 and SHA1 remain the most popular algorithms used to generate hash values. SHA256, however, remains secure.

Developers sometimes update their programs with bug fixes and new features, but neglect to publish an updated hash checksum. This results in a failed hash check when you download and try to verify their program.

This is, of course, nowhere near as serious as a hash check giving malicious software a pass, but it can degrade trust in the ecosystem, resulting in people not bothering to check the integrity of files they download...

Most of the problems with cryptographic hashes are fixed by the use of digital signatures, which guarantee both integrity and authentication.

Developers who are happy to use proprietary code can automatically and transparently validate signatures when their software is first installed, using mechanisms such as Microsoft, Apple, or Google PKI (public key infrastructure) technologies.

Open source developers do not have this luxury. They have to use PGP, which is not natively supported by any proprietary operating system; it is also why no equivalent of the Microsoft, Apple or Google PKIs exists in Linux.

So PGP digital signatures must be verified manually. But PGP is a complete pig to use and is not a simple process, as a quick glance at our guide to checking PGP signatures in Windows will demonstrate.

Neither is the actual signing process for developers, who are well aware that in the real world few people bother to check digital signatures manually, anyway.

Cryptographic hashes are nowhere near as secure as PGP digital signatures, but they are much easier to use, with the result that many developers simply choose to rely on them instead of digitally signing their work.

This is a less than ideal situation, and you should always check an open source program's digital signature when one is available. If one is not, however, then checking its cryptographic hash is much better than doing nothing.

As long as you are confident about the source (for example, you are sure it's from the developer's real website, which has not been hacked to display a fake cryptographic hash), then checking its hash value provides a fair degree of confidence that the software you have downloaded is the software its developer intended for you to download.

If neither a digital signature nor a checksum is available for open source software, then do not install or run it.

The basic process is as follows: find the hash checksum published by the developer, generate the same type of hash value for the file you have downloaded, and compare the two.

If they are identical, then you have the file the developer intended you to have. If not, then it has either become corrupted or has been tampered with.

If an SHA256+ hash is available, check against that. If not, then use SHA1. Only as a last resort should you check against an MD5 hash.
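If you would rather script the comparison than use the tools described below, here is a minimal sketch using Python's standard hashlib module. The file name and the published checksum are placeholders; substitute the file you actually downloaded and the checksum published by its developer.

```python
# Compute a file's SHA256 digest and compare it with the published checksum.
import hashlib

def sha256_of(path, chunk_size=65536):
    """Return the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

published = "0123...abcd"                      # checksum from the developer's site (placeholder)
actual = sha256_of("KeePass-2.43-Setup.exe")   # the file you downloaded

print("MATCH" if actual == published.lower() else "MISMATCH - do not run this file")
```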

The simplest way to generate the hash value of files is by using a website such as Online Tools. Just select the kind of hash value you need to generate, then drag-and-drop the required file into the space provided and the relevant hash value will be generated.

We want to check the integrity of the KeePass installer file that we have downloaded from the KeePass.org website (which we know to be the correct domain). The website publishes MD5, SHA1, and SHA256 hashes for all versions of KeePass, so we will check the SHA256 hash for the version we downloaded.

This method works out of the box in Windows 10, while Windows 7 users need to first update Windows PowerShell with Windows Management Framework 4.0.

To obtain an SHA256 hash, right-click Start -> Windows PowerShell and type:

Get-FileHash [path/to/file]

For example:

Get-FileHash C:\Users\Douglas\Downloads\KeePass-2.43-Setup.exe

MD5 and SHA1 hashes can be calculated using the syntax:

Get-FileHash [path/to/file] -Algorithm MD5

and

Get-FileHash [path/to/file] -Algorithm SHA1

For example:

Get-FileHash C:\Users\Douglas\Downloads\KeePass-2.43-Setup.exe -Algorithm MD5

Open Terminal and type:

openssl [hash type] [/path/to/file]

Hash type should be md5, sha1, or sha256.

For example, to check the SHA256 hash for the Windows KeePass installer (just to keep things simple for this tutorial), type:

openssl sha256 /Users/douglascrawford/Downloads/KeePass-2.43-Setup.exe

Open Terminal and type either:

md5sum [path/to/file]

sha1sum [path/to/file]

or

sha256sum [path/to/file]

For example:

sha256sum /home/dougie/Downloads/KeePass-2.43-Setup.exe


The Top Five Apache Software Projects in 2019: From Kafka to Zookeeper – Computer Business Review



The Apache Foundation is 20 years old this year and has grown to the point where it now supports over 350 open source projects, all maintained by a community of more than 770 individual members and 7,000 committers distributed across six continents. Here are the top five Apache software projects in 2019, as listed by the foundation.

Released in 2006, Apache Hadoop is an open source software library used to run distributed processing of large datasets on computers using simple programming models. A key feature of Hadoop is that the library will detect and handle failures at the application level. Essentially it's a framework that facilitates distributed big data storage and big data processing.

The Java-based programming framework consists of a storage element called the Hadoop Distributed File System (HDFS). The file system splits large files into blocks, which are then spread out across different nodes in a computer cluster. Hadoop Common provides the main framework, as it holds all of the common libraries and files that support the Hadoop modules.
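To show the MapReduce style of processing Hadoop popularized, here is a minimal word-count job sketched with the mrjob Python library rather than Hadoop's native Java APIs. Run locally it executes in-process; pointed at a Hadoop cluster (mrjob's -r hadoop runner), the same mapper and reducer run over blocks of input stored in HDFS.

```python
# Minimal MapReduce word count (pip install mrjob).
from mrjob.job import MRJob

class MRWordCount(MRJob):
    def mapper(self, _, line):
        # Emit (word, 1) for every word in the input line.
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        # Sum the counts for each word across all mappers.
        yield word, sum(counts)

if __name__ == "__main__":
    MRWordCount.run()
```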

Since Hadoop has the most active visits and downloads out of all of Apache's software offerings, it's no surprise that a long list of companies rely on it for their data storage and processing needs.

One such user is Adobe, which notes: "We currently have about 30 nodes running HDFS, Hadoop and HBase in clusters ranging from 5 to 14 nodes on both production and development. We constantly write data to Apache HBase and run MapReduce jobs to process it, then store it back to Apache HBase or external systems."

Apache Kafka, developed in 2011, is a distributed streaming platform that lets developers publish and subscribe to record streams in a method similar to a message queue. Kafka is used to build data pipelines that can stream in real time, and it is also used to create applications that react to, or transform, an ingested real-time data stream.

Kafka is written in the Scala and Java programming languages. When it stores streams of records in a cluster, it groups them into categories called topics; each record consists of a key, a value and a timestamp. It runs using four key APIs: Producer, Consumer, Streams and Connector. Kafka is used by many companies as a fault-tolerant publish-subscribe messaging system as well as a means to run real-time analytics on data streams.
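As a small illustration of that publish-subscribe model, here is a minimal producer/consumer round trip sketched with the kafka-python client. The broker address, topic name and payload are assumptions for the example, and a broker must already be running at that address.

```python
# Minimal Kafka round trip (pip install kafka-python), broker assumed on localhost:9092.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
# Each record carries a key and a value; the broker assigns the timestamp and offset.
producer.send("page-views", key=b"user-42", value=b'{"page": "/pricing"}')
producer.flush()  # block until the record is acknowledged

consumer = KafkaConsumer(
    "page-views",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating if no new records arrive
)
for record in consumer:
    print(record.key, record.value, record.timestamp)
```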

The open-source software is used by LinkedIn, which incidentally first developed the platform to handle activity stream data and operational metrics. Twitter uses it as part of its processing and archival infrastructure: "Because Kafka writes the messages it receives to disk and supports keeping multiple copies of each message, it is a durable store. Thus, once the information is in it we know that we can tolerate downstream delays or failures by processing, or reprocessing, the messages later."

Lucene is a search engine software library that provides a Java-based search and indexing platform. The engine can process ranked searching as well as a number of query types, such as phrase queries, wildcard queries, proximity queries and range queries. Apache estimates that an index built with Lucene is roughly 20-30 percent of the size of the text indexed.

Lucene was first written in Java back in 1999 by Doug Cutting, before the platform joined the Apache Software Foundation in 2001. Users can now get versions of it written in the following programming languages: Perl, C++, Python, Object Pascal, Ruby and PHP.

Lucene is used by Benipal Technologies, which states: "We are heavy Lucene users and have forked the Lucene / SOLR source code to create a high volume, high performance search cluster with MapReduce, HBase and katta integration, achieving indexing speeds as high as 3000 Documents per second with sub 20 ms response times on 100 Million + indexed documents."

POI is an open-source API that is used by programmers to manipulate file formats related to Microsoft Office, such as the Office Open XML standards and Microsoft's OLE 2 Compound Document format. With POI, programs can create, display and modify Microsoft Office files using Java.

The German railway company Deutsche Bahn is among the major users, creating a software toolchain in order to establish a pan-European train protection system.

A part of that chain is a domain-specific specification processor which reads the relevant requirements documents using Apache POI, enhances them and ultimately stores their contents as ReqIF. Contrary to DOC, this XML-based file format allows for proper traceability and versioning in a multi-tenant environment. Thus, it lends itself much better to the management and interchange of large sets of system requirements. The resulting ReqIF files are then consumed by the various tools in the later stages of the software development process.

The name POI is an acronym for "Poor Obfuscation Implementation," a joke by the original developers that the file formats they handled appeared to be deliberately obfuscated.

ZooKeeper is a centralised service that is used for maintaining configuration information. It's a service for distributed systems and acts as a hierarchical key-value store, which is used for storing, managing and retrieving data. Essentially, ZooKeeper is used to synchronise applications that are distributed across a cluster.

Working in conjunction with Hadoop, it effectively acts as a centralised repository where distributed applications can store and retrieve data.
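Here is a minimal sketch of that hierarchical key-value model using the kazoo Python client. The ensemble address and the configuration path are assumptions for the example, and a ZooKeeper server must already be reachable at that address.

```python
# Minimal ZooKeeper read/write (pip install kazoo), ensemble assumed on 127.0.0.1:2181.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Store a piece of configuration under a path, creating parent nodes as needed.
zk.create("/app/config/feature_flag", b"enabled", makepath=True)

# Any application connected to the same ensemble can read the same value back.
data, stat = zk.get("/app/config/feature_flag")
print(data, stat.version)

zk.stop()
```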

AdroitLogic, an enterprise integration and B2B service provider, states that it uses "ZooKeeper to implement node coordination, in clustering support. This allows the management of the complete cluster, or any specific node, from any other node connected via JMX. A cluster-wide command framework developed on top of the ZooKeeper coordination allows commands that fail on some nodes to be retried etc."


The US media is in the gutter with Trump – The Japan Times

NEW YORK - How you respond to an attack defines you. Keep your cool, remain civil and others will respect the way you handle yourself, even if they disagree with you. Lower yourself to your assailant's level and at best spectators will dismiss your dispute as a he-said-she-said between two jerks.

So much has been written about U.S. President Donald Trump's debasement of rhetorical norms and his gleeful contempt for truth that there is no need to cite examples or quote studies that count the prolificacy of his lies. Trump's attacks on journalists ("fake news," mocking a disabled reporter's body movements) are contemptible. They undermine citizens' trust in news media, a serious menace to democracy and civil society.

Less noticed is how major news organizations, incensed by the president's trolling, have debased themselves to Trump's moral level.

American journalism used to adhere to strict standards. Though impossible to achieve, objectivity was paramount. At bare minimum, reporters were expected to project an appearance of political neutrality.

Truth only derived from facts verifiable facts. Not conjecture and never wishful thinking. Sources who wanted to be quoted had to go on the record. Anonymous sources could flesh out background but could not be the entire basis for a story.

From the start of Trump's run for president (before the start, even), Democratic-leaning media outlets abandoned their own long-cherished standards to declare war on him. Every day during the 2016 campaign, The New York Times led its coverage with its forecast of Hillary Clinton's supposed odds of defeating Trump. Setting aside the fact of the Times' embarrassing wrongness (the day before Election Day, it gave Clinton an 85 percent chance of winning), it cited odds rather than polls. Maximizing a sense of Clintonian inevitability was intended to demoralize Republicans so they wouldn't turn out to vote. The two figures might mean the same thing. But 85-15 odds look worse than a 51-49 poll.

It's downright truthy. And when truthiness goes sideways, it makes you look really, really dumb. 51-49 could go either way. 85-15, not so much.

The impeachment battle marks a new low in partisanship among media outlets.

After Trump's surprise-to-those-who'd-never-been-to-the-Rust-Belt win, outlets like the Times declared themselves members of a so-called resistance. Opinion columnists like Charles M. Blow pledged never to normalize Trumpism; what this has meant, ironically, is that Blow's essays amount to rote recitations on the same topic: namely, that Trump sucks. Which he does. There are, however, other issues to write about, such as the fact that we are all doomed. It would be nice to hear Blow's opinions about taxes, militarism and abortion.

Next came years (years!) of Robert Muellerpalooza. Russia, corporate media outlets said repeatedly, had meddled in the 2016 election. Russian President Vladimir Putin installed Trump; Hillary Clinton's snubbing of her party's 72-percent-progressive base had nothing to do with the loss of "the most qualified person" (blah blah blah) to an inductee in the WWE Hall of Fame.

Whatever happened to the journalistic chestnut "If your mother says she loves you, check it out"? Russiagate wasn't a news report. It was religious faith. Russia fixed the election because we, the media, say so; we say so because we were told to say so by politicians, who were told to say so by CIA people, whose job is to lie and keep secrets. No one checked out anything.

What we knew and still know is that a Russia-based troll farm spent either $100,000 or $200,000 on Facebook ads to generate clickbait. Most of those ads were apolitical. Many were pro-Clinton. The company has no ties to the Russian government. It was a $6.8 billion election; $200,000 couldn't have and didn't move the needle.

Anonymous congressional sources told reporters that anonymous intelligence agents told them that there was more. The Mueller report implies as much. But no one went on the record. No original or verifiable copies of documentary evidence have been leaked. The report's numerous citations are devoid of supporting material. By pre-Trump journalistic standards, Russiagate wasn't a story any experienced editor would print.

It was barely an idea for a story.

Russiagate fell apart so decisively that Democratic impeachers now act as if the Mueller report, a media obsession for three years, never even happened.

Speaking of impeachment, mainstream media gatekeepers are so eager to see Trump removed from office that they're violating another cardinal rule of journalism: If it's news, print it. The identity of the CIA "whistleblower" (scare quotes because actual whistleblowers reveal truths that hurt their bosses) who triggered impeachment over Trump's menacing phone call to the president of Ukraine has been known in Washington, and elsewhere if you know where to look, for months.

Federal law prohibits the government from revealing his identity, and rightly so. But it has leaked. It's out. It's news. Nothing in the law or journalistic custom prevents a media organization from publishing it. News outlets felt no compulsion to similarly protect the identity of Bradley Manning or Edward Snowden. So why aren't newspapers and broadcast networks talking about it?

"I'm not convinced his identity is important at this point, or at least important enough to put him at any risk, or to unmask someone who doesn't want to be identified," New York Times editor Dean Baquet said. So much for the people's right to know. Why should subscribers buy a newspaper that doesn't print the news?

There is a "because Trump" change in media ethics that I welcome. What's suspect is the timing.

Trump is the first president to get called out for his lies right in the news section. Great! Imagine how many lives could have been saved by a headline like "Bush Repeats Debunked Falsehood That Iraq Has WMDs." A headline like "Slurring Sanders' Numerous Female Supporters as 'Bros,' Hillary Clinton Lies About Medicare-for-All" could have nominated and elected Bernie and saved many Americans from medical bankruptcy.

But all presidents lie. Why pick on Trump? His lies are (perhaps) more numerous. But they're no bigger than his predecessors' (see Iraq WMDs, above). Yet discussion of former presidents remains as respectful and slavish as ever.

I say, give coverage of Obama and other ex-presidents the same tone and treatment as the current occupant of the White House gets from the news media:

Wallowing in Corrupt Wall Street Cash, Obama Drops $11.75 Million on Gaudy Martha's Vineyard Mansion Estate

Ellen DeGeneres Sucks Up to Mass Murderer George W. Bush

Jimmy Carter, First Democratic President to Not Even Bother to Propose an Anti-Poverty Program, Dead at TK

Ted Rall (Twitter: @tedrall), a political cartoonist, columnist and graphic novelist, is the author of "Francis: The People's Pope."


If You Think Encryption Back Doors Won’t Be Abused, You May Be a Member of Congress – Reason

The FBI was way too lax when it sought a secret warrant to wiretap former Trump aide Carter Page. Yet some of the very same people who have been publicly aghast at the circumstances of the Page scandal are still trying to hammer companies like Apple and Facebook into compromising everybody's data security to give law enforcement access to your stuff.

You're forgiven if you missed this news, as it happened at the exact same time last week that the impeachment counts against President Donald Trump were revealed. Our extremely tech-unsavvy lawmakers brought a few experts in to a Senate Judiciary Committee hearing, essentially ignored what they said, and yelled demands at them. Virtually every tech expert and privacy advocate under the sun has warned virtually every government official in the world that "back doors" that let police bypass encryption have the potential to cause huge harm and actually make citizens even more vulnerable to crime. But the legislators want their back doors, dammit.

Here's Sen. Lindsey Graham (R-S.C.), who just a day later would express shock that the process for the FBI to get a FISA warrant was not as thorough as he believed: "My advice to you is to get on with it, because this time next year, if we haven't found a way that you can live with, we will impose our will on you." When a witness attempted to explain how complicated an issue encryption is, Graham responded, "Well, it ain't complicated for me."

The Democrats haven't been impressive on this issue either. Sen. Dianne Feinstein (D-Calif.) still holds the position that it's no big deal if tech companies just let law enforcement officials in to read encrypted material, as long as they've got a warrant. Sen. Dick Durbin (D-Ill.) thinks the debate is about whether encryption implemented by companies puts information "beyond the reach of the law." He doesn't seem to care about the arguments that weakening encryption and providing back doors will let hackers and hostile nations access the private data and communications of people around the world (including Americans).

The talking point both the Justice Department and the lawmakers have settled on is that they need to be able to demand back doors "for the children." Apparently, we all need weaker protections in order to fight child sexual abuse and trafficking.

Sen. Sheldon Whitehouse (D-R.I.) asked the tech industry witnesses if they'd be willing to "take responsibility for the harm" that might be caused if law enforcement didn't have back-door access. But are Congress and the Justice Department going to "take responsibility for the harm" when these vulnerabilities make it out into the wild (as they inevitably would) and are abused by criminals or by authoritarian states?

This encryption fight has been going on for years, and the back-door advocates have resolutely refused to consider the possibility of abuse. Graham in particular has been unwilling to consider the possibility that FISA warrants could ever be used to secretly snoop on Americans inappropriately. But by Thursday, he had changed his tune; if nothing else, the Trump case has forced him to think about what can go wrong when the government can secretly access people's private information without their permission.
