How 2019 Was The Tipping Point For Adoption Of Private Blockchain Solutions – Analytics India Magazine

The year 2019 saw the launch of several private blockchain rollouts in the enterprise space, both in India and across the globe. One of the most important developments here was the launch of Hyperledger Fabric 1.4 in January 2019, which was its first long-term support release. This was an important milestone in the adoption of enterprise blockchain, as maintainers of the Fabric network will now provide continuous bug fixes for each subsequent version. Also, programming model improvements in the Node.js SDK and Node.js chaincode make the development of decentralized applications more intuitive, allowing developers to focus on application logic.
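As an illustration of that programming model, here is a minimal sketch of what a Fabric 1.4-style smart contract can look like when written in TypeScript against the fabric-contract-api package; the contract name, asset shape and methods are hypothetical, and a production chaincode would add validation, events and richer data modelling.

```typescript
// Minimal sketch of a Fabric 1.4-style contract using the fabric-contract-api
// programming model. The contract and asset names here are illustrative only.
import { Context, Contract } from 'fabric-contract-api';

export class AssetContract extends Contract {

    // Record a simple key/value asset in world state.
    public async createAsset(ctx: Context, id: string, value: string): Promise<void> {
        await ctx.stub.putState(id, Buffer.from(value));
    }

    // Read an asset back from world state, failing if it does not exist.
    public async readAsset(ctx: Context, id: string): Promise<string> {
        const data = await ctx.stub.getState(id);
        if (!data || data.length === 0) {
            throw new Error(`Asset ${id} does not exist`);
        }
        return data.toString();
    }
}
```

Transaction routing, argument handling and serialization are handled by the contract API, which is what lets developers concentrate on the application logic itself.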

Hyperledger, launched in December 2015 by the Linux Foundation with contributions from IBM, Intel and SAP Ariba, has been the most prominent open source enterprise blockchain project, supporting the collaborative development of blockchain-based distributed ledgers. In 2019, apart from its Fabric blockchain product, which has been used by hundreds of companies across the globe, Hyperledger Sawtooth also saw adoption from companies such as Salesforce, Lamborghini, Target and Cargill.

Apart from using Hyperledger for specific enterprise use cases like supply chain management and distributed applications, other companies tweaked the open source software to serve their own customers. For example, in November 2019, Accenture announced that it had developed and tested a solution called the Blockchain Integration Framework, which allows two or more blockchain-enabled ecosystems to integrate and achieve interoperability as an end goal. A tutorial demonstrated sending an asset file between two enterprise blockchain networks, namely a generic deployment of Hyperledger Fabric and JP Morgan's Quorum network, using Accenture's own blockchain interoperability solution, which Accenture has open-sourced for all developers. Given the large interest among enterprises, Indian tech companies like MindTree and Tech Mahindra joined the Hyperledger Foundation in 2019 to leverage its blockchain capabilities.

As far as enterprise vendors in the blockchain space are concerned, IBM clearly won the race at the global level with innovative launches. In 2019, we saw IBM introduce the Trust Your Supplier network along with blockchain consultancy firm Chainyard. Along with IBM, Fortune 500 companies including Anheuser-Busch InBev, GlaxoSmithKline, Lenovo, Nokia, Schneider Electric and Vodafone are founding participants in the Trust Your Supplier (TYS) network.

Another IBM blockchain project, Food Trust, added big players in the food sector including Walmart, Nestle, Tyson Foods, French supermarket chain Carrefour, Dole Foods, Unilever, and US grocery giants Kroger and Albertsons. Both of these blockchain networks run on the IBM Blockchain Platform, which is built to run on-premises and in multi-cloud environments. With the platform, organisations can create, test and debug smart contracts, and also connect to Hyperledger Fabric. In 2019, IBM also launched a new supply chain service called the Sterling Supply Chain Suite, based on its blockchain platform and open-source software from recently acquired Red Hat, which allows developers and third-party apps to integrate legacy corporate data systems onto a distributed ledger.

Another large-scale private deployment of blockchain technology in 2019 came when the OOC Oil & Gas Blockchain Consortium announced it had completed a trial of blockchain-based authorization for expenditure (AFE) balloting after acquiring tech from Canadian firm GuildOne. The alliance consists of several major oil companies including Chevron, ExxonMobil and Shell. Automaker BMW and logistics provider DHL worked on a blockchain proof of concept (PoC) for the former's Asia Pacific supply chain operations to provide better visibility for parts shipped from Malaysia. These two cases made it clear that apart from open source technologies like Hyperledger or blockchain technologies from large vendors such as IBM, there are niche tech companies and consortiums working to develop in-house distributed ledgers for supply chains and trust/identity management.

In India, the enterprise adoption of blockchain is on the rise, with multiple proofs of concept happening in both public and private enterprises. In fact, blockchain developer is India's fastest-growing emerging job role, as per a LinkedIn report. Highlighting the rising trend of private blockchain solutions, major Indian IT solution providers like TCS, Infosys and Wipro launched blockchain-focused products for businesses in 2019. Software major Infosys launched blockchain-powered distributed applications for the government services, insurance and supply chain management verticals. The applications are designed for business systems to ensure speedy deployment and interoperability across the disparate frameworks of key value chain partners, with use cases including analytics and IoT (Internet of Things), Infosys said.

Services and consulting firm Tata Consultancy Services (TCS) introduced an innovative low-code development kit for organizations interested in developing and deploying blockchain technology quickly. The Quartz DevKit is a web-based development platform coupled with plug-and-play components that can be reused to help speed up the process. The company claims that these features can shave off as much as 40% of the total time required to develop and deploy the solutions. R Vivekananda, Global Head of Quartz at TCS, stated that they had received very positive feedback on the kit from pilot customers.

Unlike Infosys and TCS, Wipro made strides in the enterprise payments space in partnership with blockchain firm R3, with the duo developing a prototype in 2019 to execute digital currency payments for interbank financial settlements for a consortium consisting of the Bank of Thailand and eight commercial banks in Thailand. Built as a component of the first phase of Project Inthanon, the solution will deliver decentralized interbank real-time gross settlement (RTGS) using a wholesale Central Bank Digital Currency (CBDC) in Thailand. The solution highlights that central banks across the globe are taking an interest in hiring software companies to deploy blockchain solutions for payment and finance-related activities. It is to be noted that R3 developed similar enterprise payments solutions in 2019 with other companies too, including SAP and Accenture.

2019 saw multiple PoCs coming into action to help create enterprise blockchain networks for different purposes. The trend was clear: blockchain technologies created a trusted environment for data transmissions between virtual networks or devices while increasing the efficiency of such exchanges. According to a survey of more than 500 U.S. companies, 75% of IoT technology implementers in America have either already adopted distributed ledger technology or plan to adopt it by the end of 2020. Yet, Gartner's Hype Cycle for Blockchain Business also shows that most blockchain technologies are still 5-10 years away from transformational impact.


Read this article:
How 2019 Was The Tipping Point For Adoption Of Private Blockchain Solutions - Analytics India Magazine

National Science Foundation to fund the development of a new cloud testbed – SDTimes.com

The National Science Foundation (NSF) Division of Computer and Network Systems has announced it will fund the development of a new cloud testbed, which will be designed for the research and development of new cloud computing platforms. To do so, the foundation is awarding a grant to three universities: Boston University, Northeastern University, and the University of Massachusetts Amherst.

"Cloud computing plays an important role in supporting most software we use in our daily lives. This project will construct and support a testbed for research and experimentation into new cloud platforms - the underlying software which provides cloud services to applications. Testbeds such as this are critical for enabling research into new cloud technologies - research that requires experiments which potentially change the operation of the cloud itself. The new testbed will combine proven software technologies with a real production cloud enhanced with programmable hardware - Field Programmable Gate Array (FPGA) capabilities not present in other facilities available to researchers today," the National Science Foundation wrote.


The testbed, known as the Open Cloud Testbed, will leverage previously developed features from the CloudLab testbed with the Massachusetts Open Cloud (MOC). The MOC is a production cloud developed by government, academia, and industry.

"An important part of the MOC has always been to enable cloud computing research by the academic community," said Orran Krieger, professor of Electrical and Computer Engineering at Boston University; co-director of the Red Hat Collaboratory; and PI at the Massachusetts Open Cloud. "This project dramatically expands our ability to support researchers, both by providing much richer capabilities and by expanding from a regional to a national community of researchers."

According to Red Hat, while testbeds like this one are essential for enabling new cloud technologies, they are also important for ensuring that the services that they provide are efficient and accessible to a wide range of scientists.

The researchers will combine open-source technologies with a production cloud in order to close a gap in the computing capabilities currently available to researchers. Red Hat hopes the testbed will accelerate innovation, and it plans to actively contribute research to the project.

"This grant and the work being done by the MOC show how open source solutions can positively impact real-world challenges outside of enterprise data centers," said Chris Wright, senior vice president and chief technology officer at Red Hat. "Red Hat is no stranger to pioneering new ways in which open source software can be used for innovative research, and we are pleased to help drive this initiative in bringing open cloud technologies to a wider range of disciplines, from social sciences to physics, while also continuing our commitment to the next generation of open source practitioners."

Read more here:
National Science Foundation to fund the development of a new cloud testbed - SDTimes.com

Open source in 2020: The future looks bright – TechRepublic

Jack Wallen offers up predictions on open source, Linux, docker engines, automation, and more for the coming year.


Mirror, mirror on the wall.

Wait; wrong glass.

Crystal, crystal on my table, predict for me if you are able.

Much better.

It's all about open source this time. The year is 2020. The future looks bright and my shades are prepped and ready. Shall we prognosticate?


It's not often I predict that one Linux distribution might change the landscape of open source, but everything I've seen and heard about the upcoming release of Deepin Linux has me thinking this could be the one. The developers of Deepin 15.11 are planning to release a feature that could shift the tectonic plates of Linux distributions. That feature is Deepin Cloud Sync.

This feature will sync system settings--of your choosing--to the cloud. For instance, you could install another instance of the OS, connect it to your Deepin Cloud Sync account, and have that new instance of the OS automatically sync your settings. Imagine how much time that would save for the rollout of multiple desktop instances. Couple that with how gorgeous the Deepin desktop is, and you have something special.

Deepin Linux is going to turn heads, and many users will jump the ship of their favorite distribution.


This one has been a long time in the making. It's a slow burn, slow clap moment that will help Linux in its rise in market share--maybe even reaching near double digits for the first time. What is this remarkable moment? I believe more OEMs will start selling machines with Linux pre-installed.

We already have System76, Dell, Pine64, Lenovo, ThinkPenguin, Star LabTop, and more, and by the end of 2020, I predict we'll see not only a rise in smaller OEMs (mostly rebranding Clevo hardware) with pre-installed Linux, but some larger OEMs as well. I am expecting Acer, HP, and ASUS to jump into the fray, so when 2020 comes to a close, don't be surprised if every desktop and laptop manufacturer on the planet offers a Linux version of some of its hardware.

Let's face it, there's not much about enterprise business that open source doesn't dominate--it's everywhere. It's in the cloud, containers, big data, Internet of Things (IoT), edge computing--you name it--and open source is leading that charge. If there's one area Linux has yet to conquer in the enterprise domain, it's the desktop, but be prepared for that to change--at least on a small scale--by the end of 2020.

The cause will be security. I predict another rise in Windows ransomware attacks, which will lead some businesses to look for a more reliable alternative: Linux. Companies realizing how much of the workflow and bottom line depends upon open source may also drive this shift. I realize most of us pundits have been saying this for years, but 2020 could see a perfect storm bringing the change we've hoped for.


I'm not talking about Docker the company (although I do have my fingers crossed it will find solid footing this year). I'm talking about docker the engine that launched the popularity of the container as it is now. 2019 wasn't kind to docker--Kubernetes became the container orchestrator of choice--but I believe there are going to be developments with the docker engine that will bring it on par with Kubernetes.

These developments could consist of more powerful, user-friendly docker swarm tools or a new client tool to make orchestrating docker clusters easy. In the end, what will drive the comeback of docker is an ease of administration. As Kubernetes gets more powerful, it also becomes more complicated. If docker can reclaim simplicity--while maintaining its power and flexibility--it will regain some much-needed market share.


This prediction comes from the fiction writing side of my brain. You've been warned.

Thanks to the drive for more efficient CI/CD pipelines, we've witnessed a rise in impressive automation. With the help of Helm, Terraform, and other Kubernetes-centric tools, we can create systems that update themselves, test code and refuse to promote it to production (if there's a problem), and much more.

In 2020, open source automation will come near the realm of fiction, with systems that "think" for themselves, and for the first time we'll experience a system that optimizes itself based on experience (from AI) and prediction. The big questions are: How far will these systems go with the tasks, and will we be able to shut them down once they pass some unknown event horizon? You might scoff at the notion, but the more you look down the rabbit hole of CI/CD, the more plausible it becomes.


NVIDIA announced it has a big surprise in store for Linux in 2020. Those outside the open source loop may not understand how big this could be, but I believe NVIDIA plans on doing one of two things: contributing to the Nouveau drivers, or open sourcing its official NVIDIA drivers. Why? I think NVIDIA sees the writing on the wall, and getting on board with Linux is the only way forward.

This will be a huge win for Linux for two reasons: The Nouveau driver has never been all that great for gaming, and if NVIDIA started contributing to the Nouveau drivers or open sourced its official drivers, it could be a boon to gaming on the Linux desktop and spark a rise in Linux popularity. Gamers would love a platform that is more reliable and secure than Windows. Give them the option, and we'll see Linux not only crash through that double-digit market share but maybe overtake macOS for that coveted second place.

On a personal note, thank you for continuing to read my words and supporting open source software and TechRepublic. I hope you have a productive, outstanding, and joyous year.


Continue reading here:
Open source in 2020: The future looks bright - TechRepublic

National Science Foundation Awards Grant to Develop Next-Generation Cloud Computing Testbed Powered by Red Hat – Business Wire

RALEIGH, N.C.--(BUSINESS WIRE)--Red Hat, Inc., the world's leading provider of open source solutions, today announced that the National Science Foundation (NSF) Division of Computer and Network Systems has awarded a grant to a research team from Boston University, Northeastern University and the University of Massachusetts Amherst (UMass) to help fund the development of a national cloud testbed for research and development of new cloud computing platforms.

The testbed, known as the Open Cloud Testbed, will integrate capabilities previously developed for the CloudLab testbed into the Massachusetts Open Cloud (MOC), a production cloud developed collaboratively by academia, government, and industry through a partnership anchored at Boston University's Hariri Institute for Computing. As a founding industry partner and long-time collaborator on the MOC project, Red Hat will work with Northeastern University and UMass, as well as other government and industry collaborators, to build the national testbed on Red Hat's open hybrid cloud technologies.

Testbeds such as the one being constructed by the research team are critical for enabling new cloud technologies and making the services they provide more efficient and accessible to a wider range of scientists focusing on research in computer systems and other sciences.

By combining open source technologies and a production cloud enhanced with programmable hardware through field-programmable gate arrays (FPGAs), the project aims to close a gap in computing capabilities currently available to researchers. As a result, the testbed is expected to help accelerate innovation by enabling greater scale and increased collaboration between research teams and open source communities. Red Hat researchers plan to contribute to active research in the testbed, including a wide range of projects on FPGA hardware tools, middleware, operating systems and security.

Beyond this, the project also aims to identify, attract, educate and retain the next generation of researchers in this field and accelerate technology transfer from academic research to practical use via collaboration with industry partners such as Red Hat.

Since the MOC's launch in 2014, Red Hat has served as a core partner of the project, which brings together talent and technologies from various academic, government, non-profit, and industry organizations to collaboratively create an open, production-grade public cloud suitable for cutting-edge research and development. The MOC's open cloud stack is based on Red Hat Enterprise Linux, Red Hat OpenStack Platform and Red Hat OpenShift.

Beyond creating the national testbed, the grant will also extend Red Hat's collaboration with Boston University researchers to develop self-service capabilities for the MOC's cloud resources. For example, via contributions to the OpenStack bare metal provisioning program (Ironic), the collaboration aims to produce production-quality Elastic Secure Infrastructure (ESI) software, a key piece in enabling more flexible and secure resource sharing between different datacenter clusters. And by sharing new developments that enable moving resources between bare metal machines and Red Hat OpenStack or Kubernetes clusters in open source communities such as Ironic or Ansible, Red Hat and the MOC's researchers are helping to advance technology well beyond the Open Cloud Testbed.

Supporting Quotes

Michael Zink, associate professor, Electrical and Computer Engineering (ECE), University of Massachusetts Amherst: "This testbed will help accelerate innovation in cloud technologies, technologies affecting almost all of computing today. By providing capabilities that currently are only available to researchers within a few large commercial cloud providers, the new testbed will allow diverse communities to exploit these technologies, thus democratizing cloud-computing research and allowing increased collaboration between the research and open-source communities. We look forward to continuing the collaboration in the MOC to see what we can accomplish with the testbed."

Orran Krieger, professor of Electrical and Computer Engineering, Boston University; co-director, Red Hat Collaboratory; PI, Massachusetts Open Cloud: "An important part of the MOC has always been to enable cloud computing research by the academic community. This project dramatically expands our ability to support researchers, both by providing much richer capabilities and by expanding from a regional to a national community of researchers."

Chris Wright, senior vice president and chief technology officer, Red Hat: "This grant and the work being done by the MOC show how open source solutions can positively impact real-world challenges outside of enterprise data centers. Red Hat is no stranger to pioneering new ways in which open source software can be used for innovative research, and we are pleased to help drive this initiative in bringing open cloud technologies to a wider range of disciplines, from social sciences to physics, while also continuing our commitment to the next generation of open source practitioners."


About Red Hat, Inc.

Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

Forward-Looking Statements

Certain statements contained in this press release may constitute "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, including: risks related to the ability of the Company to compete effectively; the ability to deliver and stimulate demand for new products and technological innovations on a timely basis; delays or reductions in information technology spending; the integration of acquisitions and the ability to market successfully acquired technologies and products; risks related to errors or defects in our offerings and third-party products upon which our offerings depend; risks related to the security of our offerings and other data security vulnerabilities; fluctuations in exchange rates; changes in and a dependence on key personnel; the effects of industry consolidation; uncertainty and adverse results in litigation and related settlements; the inability to adequately protect Company intellectual property and the potential for infringement or breach of license claims of or relating to third party intellectual property; the ability to meet financial and operational challenges encountered in our international operations; and ineffective management of, and control over, the Company's growth and international operations, as well as other factors. In addition to these factors, actual future performance, outcomes, and results may differ materially because of more general factors including (without limitation) general industry and market conditions and growth rates, economic and political conditions, governmental and public policy changes and the impact of natural disasters such as earthquakes and floods. The forward-looking statements included in this press release represent the Company's views as of the date of this press release and these views could change. However, while the Company may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company's views as of any date subsequent to the date of this press release.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. The OpenStack Word Mark is either a registered trademark/service mark or trademark/service mark of the OpenStack Foundation, in the United States and other countries, and is used with the OpenStack Foundation's permission. Red Hat is not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

Continue reading here:
National Science Foundation Awards Grant to Develop Next-Generation Cloud Computing Testbed Powered by Red Hat - Business Wire

Xen Project Hypervisor 4.13 Brings Improved Security, Hardware Support and Features to Increase Embedded Use Case Adoption – PRNewswire

SAN FRANCISCO, Dec. 18, 2019 /PRNewswire/ -- The Xen Project, an open source hypervisor hosted at the Linux Foundation, today announced the release of Xen Project Hypervisor 4.13, which improves security, broadens hardware support, adds new options for embedded use cases and reflects a wide array of contributions from the community and ecosystem. This release also represents a fundamental shift in the long-term direction of Xen, one which solidifies its resilience against security threats due to side channel attacks and hardware issues.

"Xen 4.13 combines improved security, broader support for hardware platforms, an easier adoption path for embedded and safety-critical use-cases, as well as a broad representation of diverse community collaboration," said Lars Kurth, Xen Project Advisory Board Chairperson. "In addition to the significant features we are adding, including Core scheduling, late uCode loading, live-patching and added support for OP-TEE and improvements to Dom0less, our community is laying the groundwork for a fully functional and more easily safety certifiable platform for Xen."

Security

Xen 4.13 provides key updates in defence against hardware vulnerabilities, including Core scheduling, late uCode loading and branch hardening to mitigate against Spectre v1. Xen 4.13 is the first step in revamping key architectural functionality within Xen that allows users to better balance security and performance.


Embedded and Safety-Critical

Xen 4.13 brings new features that provide easier adoption for embedded and safety-critical use cases, specifically ISO 26262 and ASIL-B.


In addition, the Xen Project community has created a Functional Safety Working group supported by multiple vendors, including safety assessors. This group is working on a multi-year plan that makes it possible for vendors to consume Xen Project software in a fashion that is compatible with ASIL-B requirements. This is a significant challenge that requires code and development processes to comply with key tenets of ISO 26262, a challenge which has not yet been solved by any open source project, but which multiple projects are trying to address.

Support for new hardware platforms

Xen 4.13 brings support for a variety of hardware platforms. Most notably, Xen 4.13 introduces support for AMD 2nd Generation EPYC processors, with exceptional performance-per-dollar, connectivity options, and security features. In addition, Xen 4.13 also supports the Hygon Dhyana 18h processor family, the Raspberry Pi 4 and Intel AVX-512.

Comments from Xen Project Users and Contributors:

"AMD has been a long-time contributor to the Xen Project and we are pleased to include Xen in our growing AMD 2nd Generation EPYC ecosystem. The Xen 4.13 based hypervisors running on servers powered by AMD EPYC processors are well suited for many different workloads and help provide customers an attractive total cost of ownership. In particular, the results of VDI performance tests demonstrate the power of Xen on AMD EPYC processors," said Raghu Nambiar, Corporate Vice President and CTO of Datacenter Ecosystems & Application Engineering, AMD.

"The Xen Project Hypervisor has always focused on securely isolating VMs, enabling operators to run multi-tenant workloads with confidence. Xen 4.13 builds on this heritage by further defending against attacks which attempt to leverage hardware-based side channels," said Jacus de Beer, Director of Engineering, Hybrid Cloud Platforms, Citrix. "Xen 4.13 also helps integrators and operators to simplify system maintenance and reduce downtime using the new live-patching, and run-time microcode-loading features. This blend of security and serviceability helps Citrix Hypervisor, which uses Xen at its core, to deliver a dependable platform to our cloud, server and desktop virtualization customers."

"The Xen Project is making huge progress in functional safety compliance, which will allow OEMs and tier 1 suppliers to design mixed safety systems using an open source hypervisor," said Alex Agizim, CTO, Automotive & Embedded, EPAM Systems. "We are excited to be part of this initiative as one of the leaders in Xen's FuSa SiG and enable vehicles to be part of the connected services ecosystem."

"At SUSE we are constantly looking at the requirements of performance and security in our enterprise solutions. Xen's new scheduling option 'core scheduling' is the result of many months of work in the Xen community championed by SUSE," said Claudio Fontana, Engineering Manager, Virtualization, SUSE. "It demonstrates a new way to take advantage of hardware optimizations, without compromising on the security of our customers' systems, that should also be looked at as a successful example to spark similar work and discussions in other large open source projects."

"Xilinx sees Xen Project Hypervisor as the leader in the embedded and automotive virtualization space," said Tony McDowell, Senior Product Marketing Engineer at Xilinx. "Xilinx embraces and continues to enhance with support the Xen Project by completing our development of key features required to have usable and easily configured Dom0-less systems."


About the Xen Project

Xen Project software is an open source virtualization platform licensed under the GPLv2 with a governance structure similar to the Linux kernel's. Designed from the start for cloud computing, the project has more than a decade of development behind it and is used by more than 10 million users. A project at The Linux Foundation, the Xen Project community is focused on advancing virtualization in a number of different commercial and open source applications, including server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, and embedded and hardware appliances. It counts many industry and open source community leaders among its members, including Alibaba, Amazon Web Services, AMD, Arm, Bitdefender, Citrix, EPAM Systems, Huawei and Intel. For more information about the Xen Project software and to participate, please visit XenProject.org.

Intel and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. AMD, the AMD logo, EPYC, and combinations thereof are trademarks of Advanced Micro Devices, Inc.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world's leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation's projects are critical to the world's infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation's methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact: Rachel Romoff, rromoff@linuxfoundation.org, 210-241-8284

SOURCE Xen Project

http://www.xenproject.org

Continued here:
Xen Project Hypervisor 4.13 Brings Improved Security, Hardware Support and Features to Increase Embedded Use Case Adoption - PRNewswire

The world increasingly relies on open source - here's how to control its risks – BetaNews

Open source software's hold on the IT sector has deepened in the last five years. An estimated 96 percent of applications use open source components, and big players like Microsoft, IBM and even the U.S. government now embrace open source projects for their software needs. But while open source has transformed organizations' ability to use proven and maintained code in the development of new software, it's not untouchable in terms of security. Using code that's readable by anyone brings risks -- and issues have occurred in the past.

It's true that open source makes security efforts more transparent, since everything happens out in the open. If there are flaws in the code, they're often resolved quickly by committed members of the open source community. Additionally, many open source projects have security scans built into their build processes, so contributions that introduce vulnerabilities directly or through dependencies are few and far between. But leaving the code in the open also allows bad actors to write attacks specific to unpatched vulnerabilities or to unrealized vulnerabilities in libraries that products actively depend on. As a result, teams using open source need to take steps to remain secure.

A growing embrace of open source

It's clear why organizations use open source. Would you rather build a tool on proprietary code from the ground up, or use blocks of code that are proven effective and maintained by other trustworthy users out in the open?

Most organizations prefer the latter, and open source code delivers it to thousands of companies. According to Red Hat, 69 percent of IT leaders said open source is very or extremely important to their organization's infrastructure plans.

Over the past decade, the business world has embraced open source as the new normal for building software. At SPR, we work with more than 300 clients, and I can't name one that doesn't use open source software in some form.

Open source adoption by some of the most powerful tech players has been key to its growing influence. For example, Microsoft now offers many development tools for free and allows users to run them on all operating systems, not just Windows. In 2018, it even acquired GitHub, the well-known open source software development platform.

The ethos of the open source community is to further what's possible with software and share innovations with the rest of the world. When members develop useful code or software, they share it back to the community. But it's not a perfect system. Cybersecurity risks often accompany open source projects, and IT teams must proactively respond to those risks.

Managing open source risks

While open source projects tend to be in good hands out in the open, the right sequence of events can lead to attacks from malicious actors. They may inject harmful code into an open source package or write an attack specifically targeting applications that use certain (potentially outdated) versions of tools.

For example, in January 2018, nefarious individuals inserted malicious code into multiple core JavaScript packages that other projects depended on. After a manual operations professional accidentally deleted a certified username, several packages associated with it immediately disappeared. The mistake cleared the way for a bad actor to quickly upload malicious packages that used the same names. As a result, teams accidentally injected these packages into their applications without scanning them. In the end, hundreds of products were likely affected.

With such consequences in mind, your team must have an established policy operating in the background to protect your tools from ever-changing vulnerabilities -- or more rarely, malicious code.

Above all, you should define a policy for using open source and enforce it through automation. Automate the retrieval and scanning of dependencies so your deployments are halted or your builds are broken when new vulnerabilities are detected in packages that your software depends on. If an organization had this type of process during the JavaScript snafu, the malicious code may never have made it into the software because the scanning tool would have picked it up.

This measure goes back to continuous delivery best practices. Whenever I pull code from my repository for a build, it should be automatically tested -- both to ensure correct functionality and to check for known security vulnerabilities, whether introduced directly or through one of its dependencies. Your process should have a security scan built into the delivery pipeline so that, while your build is happening, you'll know what could affect your app. Dependency management tools like Node Package Manager (NPM) provide warnings when a project's dependencies contain vulnerabilities -- and you shouldn't ignore them.
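As a rough sketch of what such a build-breaking gate can look like, the TypeScript script below runs npm audit and fails the pipeline when high or critical findings appear; the severity threshold and the npm 6-era JSON report layout are assumptions to adapt to your own tooling and policy.

```typescript
// Sketch of a CI gate: break the build when `npm audit` reports high or
// critical vulnerabilities. Assumes the npm 6 JSON report format.
import { execSync } from 'child_process';

function countSeriousVulnerabilities(): number {
  let raw: string;
  try {
    raw = execSync('npm audit --json', { encoding: 'utf8' });
  } catch (err: any) {
    // `npm audit` exits non-zero when vulnerabilities exist, so read stdout
    // from the thrown error instead of treating it as a hard failure.
    raw = err.stdout ? err.stdout.toString() : '{}';
  }
  const report = JSON.parse(raw);
  const counts = report.metadata?.vulnerabilities ?? {};
  return (counts.high ?? 0) + (counts.critical ?? 0);
}

const serious = countSeriousVulnerabilities();
if (serious > 0) {
  console.error(`Build halted: ${serious} high/critical vulnerabilities found.`);
  process.exit(1);
}
console.log('Dependency scan passed.');
```

Wired into the delivery pipeline as a required step, a check like this turns the policy into something enforced on every build rather than remembered occasionally.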

Additionally, IT leaders should monitor vulnerabilities by subscribing to the security bulletin or mailing list that accompanies the software they license. Monitoring a security bulletin such as Common Vulnerabilities and Exposures (CVE) -- which is run by an open consortium -- keeps you informed about published vulnerabilities and scheduled fixes. Make sure you're paying attention to bulletins like CVE for all the tools you use.

For all the good open source software has brought to the world, there will always be vulnerabilities and malicious actors intent on wreaking havoc. By implementing the right measures into software delivery timelines, you can position your teams to reap the benefits that open source offers, while confidently minimizing risks.


Justin Rodenbostel is a polyglot software engineer, architect, and lead of one of the custom application development practices at SPR Consulting, a digital technology consultancy, where he focuses on building solutions for clients using open source technology. His team's broad expertise enables them to build for their customers not the most convenient solution, or the solution someone's most comfortable with, but the right solution. Throughout his career, he's worked on large, mission-critical enterprise apps, small internal apps, and everything in between -- for clients as small as a three-person startup and as large as a global Fortune 50.

See original here:
The world increasingly relies on open source - here's how to control its risks - BetaNews

Is getting paid to write open source a privilege? Yes and no – TechRepublic

Commentary: While we tend to think of open source as a community endeavor, at its heart open source is inherently selfish, whether as a contributor or user.


"[O]ften [a] great deal of privilege is needed to be able to dedicate one's efforts to building Free Software full time," declared Matt Wilson, a longtime contributor to open source projects like Linux. He's right. While there are generally no legal hurdles for a would-be contributor to clear, there are much more pragmatic constraints like, for example, rent. While many developers might prefer to spend all of their time writing and releasing open source software, comparatively few can afford to do so or, at least, on a full-time basis.

And that's OK. Because maybe, just maybe, "privilege" implies the wrong thing about open source software.

Who are these developers privileged to get to write open source software? According to GitHub COO Erica Brescia, 80% of the developers actively contributing to open source GitHub repositories come from outside the US. Of course, the vast majority of these developers aren't contributing full-time. According to an IDC analysis, of the roughly 24.2 million global developers, roughly half (12.5 million) get paid to write software full-time, while another seven million get paid to write software part-time.

But this doesn't mean they're getting paid to write open source software, whether full-time or part-time.


If there are 12.5 million paid full-time developers globally, a small percentage of that number gets paid to write open source software. There simply aren't that many companies that clearly see a return on open source investments. Using open source? Of course. Contributing to open source? Not so much. This is something Red Hat CEO Jim Whitehurst called out as a problem over 10 years ago. It remains a problem.

As for individual developers like osxfuse maintainer Benjamin Fleischer, it's a persistent struggle to figure out how to get paid for the valuable work they do. Going back to Wilson's point, most developers simply don't get to enjoy the privilege of spending their time giving away software.

Is this a bad thing?

When I asked if full-time open source is an activity only the rich (individuals or companies) can indulge, developer Henrik Ingo challenged the assumptions underlying my question. "Why should we expect anyone to contribute to open source in the first place?" he queried. Then he struck at the very core of the assumption that anyone should contribute to open source:

Some of us donate to charity, some others receive that gift. Some do both, at different phases in life, or even at the same time. Yet neither of those roles makes us a better person than the other. With open source, the idea is that we share an abundant resource. If you go back to Cathedral and Bazaar, the idea of "scratch your own itch" is prevalent. You write a tool, or fix an existing one, because you needed it. Or you write code to learn. Or just social reasons! Whatever your reasons, nobody should be expected to contribute code as some kind of tax you have to pay to justify your existence on this planet.

Open source, in other words, is inherently self-interested, and that self-interest brings its own rewards. Sometimes unpaid work becomes paid work, as was the case with Andy Oliver. Sometimes it doesn't. If the work is fulfilling in and of itself, it may not matter whether that developer ever gets the "privilege" to spend all of her time getting paid to write open source software. It also may not matter whether that software is open source or closed.


To Ingo's point, we may need to stop trying to impose ethical obligations on software developers and users, whether open source or not. I personally think more open source tends toward more good and, frankly, more realization of self-interest, because it can be a great way to share the development load. For downstream users, contributing back can be a great way to minimize the accumulation of technical debt that can collect on a fork.

But in any case, there isn't a compelling reason to browbeat others into contributing. Open source is inherently self-interested, to Ingo's point, whether as a user or contributor. When I use open source software, I benefit. When I contribute, I benefit. Either way, I (and we) am privileged.


See the original post:
Is getting paid to write open source a privilege? Yes and no - TechRepublic

May the Open Source Force Be with You – Enterprise License Optimization Blog


I'm giving away my affinity for Star Wars. It's true. I was there when the first movie hit the big screen (let's just say, a while ago), and now I'm dreading, while at the same time wildly anticipating, the release of this last movie in the Skywalker saga, Star Wars: The Rise of Skywalker. As mawkishly sentimental as it may appear, I had to work it into a blog. Only here, in the end, will we most likely understand some of the true plot lines of this epic story.

How does this relate to my thoughts about open source software? I look at how the Skywalker story evolved over the years, how it took shape, and what and who impacted the narrative.

The same lookback can be undertaken for open source and, in fact, it has been, right here in my own blog (Happy Birthday, Open Source. The Term Turns 21). Let's take that story a little further.

There are events that have shaped the course of open source and how companies use and implement it: lawsuits such as Oracle v. Google and Versata v. Ameriprise, and, a number of years ago, the Free Software Foundation's action against Cisco over GPL code in one of its routers. The Heartbleed vulnerability, of course, has a place in history given the impact it had on companies such as Equifax.

Likewise, Software Composition Analysis (SCA) has changed the course of open source management and the perceived risks associated with open source. SCA enables companies to be more proactive in their management of open source. There was a realization after Heartbleed that just because the source code is open doesn't mean it has received the examination and oversight needed to mitigate risks.

SCA allows companies to better understand what open source software they are using. It allows for the discovery of that open source, and it allows for the remediation of threat issues in a way that isn't possible without the ongoing, automated and controlled monitoring of an SCA platform.

During the exposure of the Heartbleed vulnerability, development teams went on hunting missions. They scrambled to understand what version of OpenSSL they had and then conducted fire drills to figure out how to rectify and remediate quickly and efficiently. SCA is a game changer for those companies that had to crawl painstakingly through complicated processes and manual work to get a handle on the situation. With SCA there is a continuous process that allows for in-the-moment understanding of what open source libraries you're using and what versions are in use.

One of the challenges is that companies could have multiple versions of the same open source library in their product. Version control is a real issue. The ability to leverage SCA to make sure you have the latest version of a particular library and the version that is approved, safe, has the most desirable license terms according to your policies, and is used consistently across the entire product line is a huge benefit.
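To make that concrete, here is a small TypeScript sketch of one check an SCA platform automates: walking an npm package-lock.json file and flagging libraries that appear in more than one version. The nested npm 6 lockfile layout is an assumption, and a real SCA tool would go further by mapping each version to its license terms and known vulnerabilities.

```typescript
// Sketch: report open source libraries that appear in multiple versions,
// based on the nested "dependencies" layout of an npm 6 package-lock.json.
import { readFileSync } from 'fs';

type LockNode = { version: string; dependencies?: Record<string, LockNode> };

const lock = JSON.parse(readFileSync('package-lock.json', 'utf8'));
const versions = new Map<string, Set<string>>();

function walk(deps?: Record<string, LockNode>): void {
  if (!deps) return;
  for (const [name, node] of Object.entries(deps)) {
    if (!versions.has(name)) versions.set(name, new Set());
    versions.get(name)!.add(node.version);
    walk(node.dependencies); // recurse into nested dependency trees
  }
}

walk(lock.dependencies);

for (const [name, found] of versions) {
  if (found.size > 1) {
    console.log(`${name} appears in ${found.size} versions: ${[...found].join(', ')}`);
  }
}
```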

In the end, you're accountable. You're accountable from a reporting standpoint and to stakeholders about what's in your solutions. That's peace of mind.

"Your focus determines your reality." - Qui-Gon Jinn

Follow this link:
May the Open Source Force Be with You - Enterprise License Optimization Blog

Ubuntu turns 15: what impact has it had and what does the future hold? – TechRadar

In 1991, Linus Torvalds created the Linux operating system just for fun and, 13 years later, Ubuntu was born - offering developers a more lightweight, user-friendly distribution.

This year marks Ubuntu's 15th birthday, with it now having established itself as the leading open source operating system across public and OpenStack clouds.

As we reflect on this milestone, those of us at Canonical are thinking about what it is that sets us apart from other Linux distributions and has driven Ubuntu to underpin so many successful projects in its time.

One thing which has really come to the foreground during this time is Ubuntu's popularity both as a desktop and as a server. There is immense appreciation and adoption from the developer community, with millions of PC users around the world running Ubuntu. As the cloud landscape has matured, so too has the popularity of this OS as a server. The majority of public cloud workloads across the board run on Ubuntu and, with our continued promise of free access for everyone, it has helped democratise access to innovation.

We frequently hear from people who tell us that they wouldn't be where they are if they hadn't had access to Ubuntu. In fact, our CEO, Mark Shuttleworth, recently said in an interview that "Ubuntu gives millions of software innovators around the world a platform that's very cheap, if not free, with which they can innovate." It helps the future to arrive faster, no matter who or where you are. In this respect it is unique as the one Linux distribution which has made Linux accessible to - and consumable by - everyone.

It is this ideology that has resulted in the numerous Ubuntu success stories. We so often hear from people about their stories and how they have used this open source platform to build incredible businesses upon. As mentioned earlier, the majority of the public cloud workloads run on Ubuntu, and so almost any hyper-scale company today is using Ubuntu. But often the best stories are those where people found ways to be creative and build something new using Ubuntu, or when scientists have made significant breakthroughs and the accompanying photo published in the press shows an Ubuntu desktop in the background.

Paramount in this success is the community mindset. At Canonical, we talk a lot about how the open source community powers fast and efficient innovation through a collaborative approach. The next wave of innovation will be powered by software which builds upon this collaborative effort, not just from a single company, but from a community committed to improving the entire landscape. The future success of self-driving cars, medical robots or smart cities is not something which should be entrusted to a select few companies, but instead to a global community of innovators who can come together to achieve the very best outcome.

Ubuntu will continue to be a platform which serves its users, no matter what their needs. On the desktop, we will continue to focus on performance and the engineering use case. We have increasingly seen requests for Ubuntu as an enterprise desktop for development teams, and that will also play a role. The majority of our OEM partners are looking to partner around AI/ML workstations for data scientists, so again that is another focus area.

On the server side, the focus is on security, performance, and long-term support. We're looking closely at the latest innovations around performance, especially in the Linux kernel, to provide the best possible OS for high-performance workloads. Canonical already provides five years of free security updates with every LTS release. Furthering this support offering is the continued expansion of the Extended Security Maintenance program, within which our paid customer base benefits from access to even more security patches.

Ubuntu has been a springboard for many and we will continue our commitment to this mission. With the developer community at the heart of this distribution, Canonical will continue to provide the accessibility to development tools that will enable fast and reliable innovation, to power a more successful future.

View original post here:
Ubuntu turns 15: what impact has it had and what does the future hold? - TechRadar

Kubernetes and the Industrial Edge to be Researched by ARC – ARC Viewpoints

I'd like to talk about what I'll call The Cloud Native Tsunami, which is the emerging software architecture for cloud, but also for enterprise, and eventually for edge and even embedded software as well.

It has been my thesis for a couple of years that when Marc Andreessen (the co-founder of Netscape) said, "Software is eating the world," the software that's really going to eat the world has been cloud software, and this is especially true for software development. My thesis is that the methods and technologies people use to develop and deploy cloud software will eventually swallow and take over the enterprise space in our corporate data centers, will take over the edge computing space, and will even threaten the embedded software space. Today each of these domains has different development tools in use, but my thesis is that cloud software tools will eventually take over, and the same common development tools and technologies will end up being used across all of these domains.

In mid-November, I attended the KubeCon America event in San Diego, California. KubeCon is an event sponsored by the Cloud Native Computing Foundation (CNCF), which is an umbrella organization under the Linux Foundation. CNCF manages a number of open source software projects critical to cloud computing. The growth of this conference has been phenomenal, and its name KubeCon stems from Kubernetes, which is the flagship software project managed by this organization.

Kubernetes is an open source software project that orchestrates large deployments of containerized software applications in a distributed system. These can be deployed across a cloud provider, or across a cloud provider and an enterprise, or basically anywhere. The growth of this conference, as you can see from the chart, has been phenomenal. Five years ago, the Kubernetes software was private and maintained within Google. Google then released Kubernetes as open source, and since then the KubeCon event and the interest in this software have grown exponentially.
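For readers new to the technology, the unit Kubernetes works with is a declarative description of desired state that it continuously reconciles the cluster toward. The sketch below shows such a description as a TypeScript object purely for illustration; the image, names and replica count are hypothetical, and in practice this is normally written as a YAML manifest and submitted with kubectl or a client library.

```typescript
// Sketch of a Kubernetes Deployment expressed as a plain object. Kubernetes
// compares this desired state with reality and keeps three replicas running.
const webDeployment = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: 'web', labels: { app: 'web' } },
  spec: {
    replicas: 3, // the orchestrator restarts or reschedules pods to hold this count
    selector: { matchLabels: { app: 'web' } },
    template: {
      metadata: { labels: { app: 'web' } },
      spec: {
        containers: [
          { name: 'web', image: 'nginx:1.17', ports: [{ containerPort: 80 }] },
        ],
      },
    },
  },
};

// Print the manifest as JSON; a client library or `kubectl apply -f` would
// submit an equivalent document to the cluster's API server.
console.log(JSON.stringify(webDeployment, null, 2));
```

The same declarative pattern is what lets the scheduler place those containers across a cloud provider, an enterprise data center, or both.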

It certainly seems to me that Kubernetes represents a software inflection point similar to ones we've seen in the past. For instance, when Microsoft introduced its Office suite, it defined personal productivity applications for the PC. Or before Y2K, when enterprises were rewriting their existing software to avoid Y2K bugs, but in doing so were generally leaping onto SAP R/3 in order to avoid issues with Y2K. Or maybe it's a little bit like the introduction of Java, which defined a multi-platform execution environment in a virtual machine, and maybe also a bit like the early days of e-commerce, when for the first time the worldwide web was linked to enterprise databases, transactions, and business processes.

This rapid growth in interest in Kubernetes has been phenomenal, but exponential growth is obviously unsustainable, or the whole planet would soon be going to one software development conference. One thing that's very important to point out with this rapid growth (from basically nothing to 23,000 people attending these events) is that there is a people constraint in this ecosystem right now. There is a shortage of people who are deeply experienced, and some of the exhibitors and sponsors at KubeCon came to the event just to recruit talented software developers with Kubernetes experience. But you can see from the chart that there are not a lot of people in the world who have more than five years of Kubernetes experience!

In addition to Kubernetes, the Cloud Native Computing Foundation curates several other open source software projects. These projects provide services or other kinds of auxiliary capabilities that are important for distributed applications. While Kubernetes is the flagship project, the others are in different stages of development. The CNCF breaks projects into three tiers: "graduated," for software projects that are ready to be incorporated into commercial products; "incubating," for open source projects that are in a more rapid state of development and change; and "sandbox," for more embryonic projects that are newer, less fully developed and still emerging. And of course, there are any number of software projects outside of the CNCF ecosystem, but CNCF is a major ecosystem for open source cloud software.

From the conference, we could see that the enterprise impact of Kubernetes is still relatively low. In other words, market leaders are using this technology now, but even among the leaders it is at an early stage of deployment, and most enterprises have not yet adopted containerized applications with Kubernetes for orchestration. But growth in this area is inevitable. This is, as I said before, like Microsoft Office, like SAP, or like Java; it's coming to the enterprise. Even though penetration is still low, leaders are rolling out and managing distributed applications at scale, and Kubernetes is the tool that people are turning to in order to do this.

The auxiliary open source projects I mentioned will grow the capabilities of Kubernetes over time. A number of auxiliary services for data storage, stateful behavior, network communications, software-defined networking, and so on are going to supplement Kubernetes and make it more powerful. At the same time, other engineers are working to make this technology, as complex as it is, easier to use and to deploy.

I should mention a couple of vertical industry initiatives where Kubernetes is especially attractive. One is 5G telecommunications. Telecom service providers are extremely interested in digitizing their services as they move to 5G. Instead of maintaining services at the cell tower base station through dedicated-function hardware/software appliances, telecom providers are now looking to virtualize these network functions and deploy them as software. They will have a very large set of applications to manage at huge scale, and they have turned to Kubernetes to do this.

A second important vertical industry is the management of new automotive products. These can be autonomous vehicles, fleets of vehicles, or simply vehicles that carry much more software content than they used to. Clearly, automakers need to manage large-scale software deployments across hundreds or thousands of endpoints, and do so at very low cost and with very high reliability. So there are certainly vertical industry initiatives driving Kubernetes from the cloud service providers through the data centers toward the edge.

But what about the industrial edge? Turning to the industrial edge (the figure below is from Siemens Research), we can divide the compute world into four domains. At the industrial edge we have much more restricted capability in terms of compute power, storage, and networking than we find in a data center (corporate or otherwise), and far less than within the commercial public cloud. Going a level further, within the automation or manufacturing devices themselves (programmable logic controllers, CNC machines, robotics, and so on), these functions are generally handled by purpose-built embedded systems.

One difficulty is that deploying Kubernetes and managing containerized applications at scale requires more compute, network, and storage capacity than these edge-domain and device-domain systems have today. So this is an area where there is a big challenge in adopting the new technology. Why am I so optimistic that it is going to happen anyway? Because there are very similar challenges in the two huge vertical industries I mentioned, automotive and telecommunications. These industries also have thousands, or tens of thousands, of small systems at the edge on which they need to deploy and maintain software. That challenge is going to have to be met one way or another, and there is extensive research and development going on right now to do just that.

So, in terms of industrial and industrial IoT impact (though industrial automation is traditionally a technology laggard), industrial IoT applications are definitely a target for Kubernetes. This involves moving orchestrated, containerized software applications to the edge. As I mentioned, automotive and industrial applications have similar constraints: low compute capability, a small footprint, and generally a demand for low bill-of-material costs in the solutions they can support. This remains a challenge, but again, there are a number of venture-stage companies and a lot of research working to bridge this gap, and people are going to find a way to do it effectively.
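As a rough illustration of what "small footprint" means in practice, containerized workloads can declare explicit resource requests and limits so they fit on constrained edge hardware, and can be pinned to edge nodes via node labels. The sketch below again uses the Kubernetes Python client; the container name, image, node label, and the specific CPU/memory numbers are all hypothetical examples, not recommendations.

    from kubernetes import client

    # Hypothetical container for an edge device: request a small slice of CPU/memory
    # and cap usage so the workload fits alongside other software on the device.
    edge_container = client.V1Container(
        name="edge-agent",
        image="example.com/edge-agent:0.1",  # placeholder image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "100m", "memory": "64Mi"},  # minimum the scheduler reserves
            limits={"cpu": "250m", "memory": "128Mi"},   # hard ceiling on the device
        ),
    )

    # Constrain scheduling to nodes labeled as edge hardware (label is illustrative).
    edge_pod_spec = client.V1PodSpec(
        containers=[edge_container],
        node_selector={"node-role/edge": "true"},
    )

Even with tightly specified workloads, the Kubernetes control plane itself carries overhead, which is one reason so much of the current research and venture activity is aimed at shrinking that footprint for the edge and device domains.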

But that makes the future very difficult to map out. This ecosystem is extremely dynamic. As I mentioned, Kubernetes was not even in the public domain five years ago. Now it has, if you will, taken over mind share in terms of the technology that people are going to use to orchestrate containerized applications. But, the next five years are likely to be equally revolutionary. So, it's absurdly difficult to map out this space and say, "Here is where it's going to go in 5 years."

But there was one short quote I saw at KubeCon that I found interesting, and if you work in manufacturing or manufacturing automation, I think you will find it interesting, too. It is a description of Kubernetes by one of the co-chairs of the project's architecture special interest group.

The entire system [that being a Kubernetes deployment] can now be described as an unbounded number of independent, asynchronous control loops reading from and writing to a schematized resource store as the source of truth. This model has proven to be resilient, evolvable and extensible.

The control loops he is talking about are not control loops in the automation sense; they are control loops in the enterprise software sense. These loops are functions Kubernetes performs to maintain a software deployment and monitor its health. I found it interesting that at this level (the deployment level) of huge distributed applications, people view Kubernetes as a driver of a large number of independent, asynchronous control loops. It suggests to me that the same sort of technology could be used to manage other types of control loops in automation within a manufacturing operation.
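To illustrate the pattern he describes (this is a toy sketch of the reconcile idea, not Kubernetes internals), a control loop in this sense repeatedly reads the desired state from a shared store, observes the actual state of the system, and takes a small corrective action to converge the two. All names in the Python sketch below are hypothetical.

    import time

    def reconcile(store, cluster):
        """One pass of a control loop: read desired and observed state,
        then act to move them closer together."""
        desired = store["desired_replicas"]         # desired state from the resource store
        observed = len(cluster["running_pods"])     # observed state of the system
        if observed < desired:
            cluster["running_pods"].append(f"pod-{observed}")  # start one instance
        elif observed > desired:
            cluster["running_pods"].pop()                      # stop one instance

    store = {"desired_replicas": 3}   # the "source of truth"
    cluster = {"running_pods": []}    # what is actually running

    # In Kubernetes, many such loops run continuously and independently;
    # here we simply iterate a few times until the states converge.
    for _ in range(5):
        reconcile(store, cluster)
        time.sleep(0.1)

    print(cluster["running_pods"])    # ['pod-0', 'pod-1', 'pod-2']

The loose analogy to industrial control is what makes this interesting: each loop watches a setpoint (desired state) and a measurement (observed state) and acts to close the gap.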

This leads to an upcoming ARC research topic. ARC Advisory Group is beginning research into industrial edge orchestration, specifically the orchestration of applications distributed across industrial manufacturing, the industrial internet of things, and infrastructure. Because the technology is at such an early stage (even though it is critical for the future of industrial automation and for the fourth industrial revolution, or Industry 4.0), the field is very dynamic, and it is very difficult to map out such a nascent and varied landscape of technologies for integrating and orchestrating the industrial edge. During this research, ARC will be studying the products and technologies of many venture-stage firms, as well as open source projects designed to bridge the gap between the cloud and the industrial edge. These include infrastructure for 5G telecommunications, edge networks, requirements for managing fleets of vehicles, and the networking opportunities afforded by 5G itself.

With this industry at such an early stage, any detailed market forecast would be highly speculative and very uncertain. But ARC has decided to map out this landscape and plans to provide as deliverables for this research a series of podcasts, webcasts, and reports for our ARC Advisory Service clients. So, ARC is reaching out to relevant suppliers in this space, be they hardware, software or services suppliers, to participate in this research initiative. If your firm would like to participate in this research, ARC welcomes your input. Please use this link to connect with ARC or feel free to contact me at hforbes@arcweb.com and I'll be happy to discuss this project with you.
