OpenLogic by Perforce Expands Java Support Offering with Trusted Distributions of OpenJDK – Stockhouse

MINNEAPOLIS, Aug. 4, 2020 /PRNewswire/ -- OpenLogic by Perforce, the leading provider of agnostic open source support, now provides an enterprise-class alternative to Oracle Java by offering the most widely used OpenJDK distributions, backed by OpenLogic support.

The expansion of OpenLogic's Java Support offering with OpenJDK builds follows an overall growth trend for the business of almost 40% since its acquisition by Perforce Software in March 2019. This success has been underpinned by growth in the customer base, increased services and strategic partnerships with open source industry leaders.

"With organizations deploying several open source packages in production environments, managing the technology stack with multiple support vendors has become unsustainable," said Tim Russell, Chief Product Officer at Perforce. "OpenLogic provides product-agnostic, consolidated open source support so companies can reduce to one vendor for their entire stack. This enables organizations to simplify issue resolution and receive unbiased innovation guidance, while cutting costs and risks so they can confidently deploy open source in business critical systems."

In addition to supporting its own OpenJDK builds, OpenLogic also offers commercial support for all Java distributions, including AdoptOpenJDK, IBM, and Oracle's Java. Java support from OpenLogic includes security patches and bug fixes, in addition to guidance for the usage and administration of Java and the JVM.

"The licensing changes from Oracle have left many organizations looking for guidance on Java alternatives" explained Justin Reock, Chief Architect for OpenLogic at Perforce. "Because OpenLogic supports all Java, we are uniquely positioned to help organizations better understand their Java needs today vs tomorrow, reduce their TCO, and plan their overall open source strategy moving forward."

OpenLogic's OpenJDK builds are fully compliant with the Java SE specifications. All JDKs and JREs are verified with an in-house test suite that validates execution. OpenLogic provides and supports free distributions for Linux, Windows, and macOS. These distributions will be updated quarterly, with critical security patches delivered on demand.

The OpenLogic OpenJDK distributions are available from the OpenLogic website at openlogic.com/openjdk-downloads. After downloading an OpenJDK build, teams looking for support can connect with an OpenJDK expert from OpenLogic.

In addition to supporting OpenJDK, OpenLogic provides commercial support for over 400 open source packages and common Java stack elements including Spring, ActiveMQ, Tomcat, JBoss/WildFly, Kafka, Camel, and CentOS.

About OpenLogic

OpenLogic provides enterprise-level support and services for organizations using open source software as part of their infrastructure and application stacks.

OpenLogic's team of experienced enterprise architects delivers commercial SLAs for critical open source packages including key enterprise components and platforms such as CentOS, OpenJDK, Jenkins CI, Apache, Docker, and Kubernetes.

For more information, visit http://www.openlogic.com.

About Perforce

Perforce powers innovation at unrivaled scale. With a portfolio of scalable DevOps solutions, we help modern enterprises overcome complex product development challenges by improving productivity, visibility, and security throughout the product lifecycle.

Our portfolio includes solutions for Agile planning & ALM, API management, automated mobile & web testing, embeddable analytics, open source support, repository management, static & dynamic code analysis, version control, and more.

With over 15,000 customers, Perforce is trusted by the world's leading brands to drive their business critical technology development.

For more information, visit http://www.perforce.com.

PERFORCE GLOBAL
Colleen Kulhanek
Perforce Software
Ph: +1 612 517 2069
ckulhanek@perforce.com

PERFORCE UK/EMEA
Maxine Ambrose
Ambrose Communications
Ph: +44 118 328 0180
perforcepr@ambrosecomms.com


SOURCE Perforce Software


Terabytes of data were buried in the Arctic. We ask the person in charge, why? – Mashable SE Asia

Some of you who are in the programming field know what GitHub is. For those of you who don't, GitHub is a website where developers can store and collaborate on open source code.

Companies mainly use it as a collaboration tool: any changes they make to a project's code can be pushed to GitHub, and the next person just needs to pull the updated code to make further changes.

But the company is now embarking on a unique journey to archive terabytes of code from all around the world.

Curiosity got the better of us, so we asked Thomas Dohmke, VP of Strategic Programs at GitHub, why they did such a thing.

We thought it's worth preserving that open source software for future generations to come, and we came up with the idea of the archive program. It uses different approaches to archive open-source software in various forms of media.

Not everything on your SSD is kind of like up to date, and then you might upload it into Dropbox or OneDrive every other day or so.

But then you'll do an (Apple) Time Machine backup or an external hard drive backup more rarely, right? Because you don't have the time to always connect that hardware. You might not have it with you when you go on a trip. We do a similar thing, and we have a hot layer, which is real-time backups. First, of course, in GitHub itself, our data centres have backups.

And we have multiple regions where we store our data, so if there are some thunderstorms, we can just hand over to a different region, taking over the data. But then we also stream data through our API into two partners. They're called GH Archive and GHTorrent.

If you have some cool project that has more than just one star, that was kind of like a classifier to figure out which ones to pick.

This is a real open source project and not just some student's Hello, World kind of project where you just created something to try it out. It's all the relevant open source code in the world.

That includes Linux, for example, and the Bitcoin source code, and the Ruby and JavaScript source code. It goes all the way from operating systems to cryptocurrency.

Thousands and thousands of libraries that are used in the dependency tree of basically every kind of software project, whether that's commercial or not. It also includes Microsoft's MS-DOS, because a couple of years ago they made that open source.

It depends on the layer. In the cold layer, it's more like a museum or a history lesson.

We put it into the vault about two weeks ago, so that's now code frozen in time. The value is not so much to recover this code, make it run, and found your startup in 1,000 years. The value is similar to learning about our medieval past.

We have a saying at GitHub: software is the biggest team sport on Earth. Because people all around the world, in their spare time, on weekends, and of course professional players, are working together, and they don't care about where they've come from. They don't care about languages. They don't care about cultural differences.

All they care about is collaborating on the code to make it better, and this is what the archive preserves in the first place.

That's why we also have the warm and the hot layers. They will be updated all the time, assuming that those entities, including GitHub, are there for the next 100 years.

For the cold layer, we're also thinking about going back, in the next five years or so, to do another snapshot. Maybe we put the next snapshot in a different location. We haven't figured that part out. We are thinking about new ways of archiving software all around the world.

I would say pretty strong, in the sense that it's in a coal mine, and the entrance to the mine is 100 meters above sea level, so you have to go up a hill first to get into the coal mine.

It's unlikely that the sea levels will rise 100 meters, right? What the scientists are talking about is a single-digit rise. That's already a problem for cities like Miami, or in the Netherlands, that are built at ocean level.

You go down a little bit and then up a little bit again and down a little bit, and that way it's protected from meltwater. So, if the permafrost is melting and some water is flowing into the mine, it's basically stuck in those valleys of the shafts. Unless a lot of the ice melts, it shouldn't be an issue.

And then you get into the actual archive, which looks like a metal container that's protecting the archived data from the permafrost. But it's deep, and you have to go down 300 meters into the mountain.

Then you have the code, which is stored in plastic containers, and the plastic containers are wrapped with aluminium foil to keep them in constant conditions. So I would say with high confidence that this will survive most of the likely events that are happening now.

We have not only that copy in the Arctic. We have other copies in Paris and in San Francisco on their servers. We're also putting two reels into the Library of Oxford, which has been archiving data since the 1500s.

The archive is locked down. It's kind of like a data centre or a bank vault, so you can't access it unless you have a scientific reason or you're a partner with the Norwegian government and the company that we are working with.

Occasionally they might have visitors there to do another drop, for example, if the Singapore government wants to store data. They can of course access the archive, and they do photo shoots. But normal people cannot go into the archives. They can go into the coal mine and see the coal mine.

I personally think it's fascinating what GitHub is doing. We always talk about preserving history so we can learn from it. But with how much technology has integrated into our lives, we should remember how it has helped human advancement.

These types of technology are ingrained in history. Therefore, they should be given the same treatment as any other piece of history.

Cover image sourced from GitHub.


Research: A Survey of Numerical Methods Utilizing Mixed Precision Arithmetic – HPCwire

Within the past few years, hardware vendors have started designing low precision special function units in response to the machine learning community's demand for high compute power in low precision formats. Server-line products are also increasingly featuring low-precision special function units, such as the Nvidia tensor cores in the Oak Ridge National Laboratory's Summit supercomputer, which provide more than an order of magnitude higher performance than what is available in IEEE double precision.

At the same time, the gap between compute power on the one hand and memory bandwidth on the other keeps increasing, making data access and communication prohibitively expensive compared to arithmetic operations. Faced with the choice between ignoring the hardware trends and continuing down the traditional path, or adjusting the software stack to the changing hardware designs, the Department of Energy's Exascale Computing Project decided on the aggressive step of building a multiprecision focus effort to take on the challenge of designing and engineering novel algorithms that exploit the compute power available in low precision and adjust the communication format to application-specific needs.

To start the multiprecision focus effort, we have surveyed the numerical linear algebra community and summarized all existing multiprecision knowledge, expertise, and software capabilities in this landscape analysis report. We also include current efforts and preliminary results that may not yet be considered mature technology, but have the potential to grow into production quality within the multiprecision focus effort. As we expect the reader to be familiar with the basics of numerical linear algebra, we refrain from providing a detailed background on the algorithms themselves, focusing instead on how mixed- and multiprecision technology can help improve the performance of these methods, and we present highlights of applications significantly outperforming traditional fixed precision methods.

The report covers low precision BLAS operations, the solution of systems of linear equations, least squares problems, and eigenvalue computations using mixed precision. These are demonstrated with dense and sparse matrix computations, and with direct and iterative methods. The ideas presented try to exploit low precision computations for the bulk of the compute time and then use mathematical techniques to enhance the accuracy of the solution, bringing it to full precision accuracy with less time to solution.

On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. There are two reasons for this. Firstly, the execution rate of 32-bit floating point arithmetic is usually twice that of 64-bit floating point arithmetic on most modern processors. Secondly, the number of bytes moved through the memory system is halved. It may even be possible to carry out the computation in lower precision still, say 16-bit operations.
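As a rough way to observe both effects first-hand, the snippet below is a minimal NumPy sketch (our illustration, not taken from the survey; the matrix size and repeat count are arbitrary choices) that times the same matrix multiplication in 64-, 32-, and 16-bit floating point. On typical CPUs the float32 run takes roughly half the float64 time, while float16 is often not accelerated at all without dedicated hardware, which is exactly why the special function units discussed above matter.

    import time
    import numpy as np

    def time_matmul(dtype, n=2048, repeats=3):
        # Return the best wall-clock time for an n x n matrix multiply in dtype.
        rng = np.random.default_rng(0)
        a = rng.standard_normal((n, n)).astype(dtype)
        b = rng.standard_normal((n, n)).astype(dtype)
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            _ = a @ b  # result discarded; we only time the operation
            best = min(best, time.perf_counter() - start)
        return best

    for dt in (np.float64, np.float32, np.float16):
        print(f"{np.dtype(dt).name}: {time_matmul(dt):.3f} s")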

One approach to exploiting the compute power in low precision is motivated by the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. The refinement can be accomplished, for instance, by means of Newton's method (see Equation (1)), which computes the zero of a function f(x) according to the iterative formula:

x_{n+1} = x_n - f(x_n) / f'(x_n)    (1)

In general, we would compute a starting point and f(x) in single precision arithmetic, and the refinement process would be computed in double precision arithmetic. If the refinement process is cheaper than the initial computation of the solution, then double precision accuracy can be achieved at nearly the same speed as single precision accuracy.
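To make the refinement idea concrete for linear systems, here is a minimal NumPy sketch of mixed-precision iterative refinement, written under the assumptions above. It is an illustration only, not the optimized GPU solver benchmarked in Figure 1; a production implementation would factorize the matrix once in low precision and reuse the factors rather than calling a full solve in each iteration.

    import numpy as np

    def mixed_precision_solve(A, b, iters=5):
        # Initial solution computed entirely in single precision (the cheap part).
        A32 = A.astype(np.float32)
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            # Residual evaluated in double precision.
            r = b - A @ x
            # Correction obtained from another single precision solve.
            d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
            x += d
        return x

    # Usage: on a well-conditioned system the refined solution matches a
    # direct double precision solve to near machine precision.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 500)) + 500 * np.eye(500)
    b = rng.standard_normal(500)
    print(np.max(np.abs(mixed_precision_solve(A, b) - np.linalg.solve(A, b))))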

Stunning results can be achieved. In Figure 1, we compare the solution of a general system of linear equations using a dense solver on an Nvidia V100 GPU, measuring the performance of 64-, 32-, and 16-bit floating point operations for the factorization and then using refinement techniques to bring the 32- and 16-bit solutions up to the accuracy achieved with the 64-bit factorization.

The survey report presents much more detail on the methods and approaches using these techniques; see https://www.icl.utk.edu/files/publications/2020/icl-utk-1392-2020.pdf.

Author Bio Hartwig Anzt

Hartwig Anzt is a Helmholtz Young Investigator Group leader at the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology (KIT). He obtained his PhD in Mathematics at the Karlsruhe Institute of Technology, and afterwards joined Jack Dongarra's Innovative Computing Lab at the University of Tennessee in 2013. Since 2015 he has also held a Senior Research Scientist position at the University of Tennessee. Hartwig Anzt has a strong background in numerical mathematics and specializes in iterative methods and preconditioning techniques for next generation hardware architectures. His Helmholtz group on Fixed-point methods for numerics at Exascale (FiNE) is granted funding until 2022. Hartwig Anzt has a long track record of high-quality software development. He is the author of the MAGMA-sparse open source software package, the managing lead and developer of the Ginkgo numerical linear algebra library, and part of the US Exascale Computing Project delivering production-ready numerical linear algebra libraries.

Author Bio Jack Dongarra

Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his PhD in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee, has the position of Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), is a Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and is an Adjunct Professor in the Computer Science Department at Rice University.


Microsoft Joins Tech Giants in Forming the Open Source Security Foundation – WinBuzzer

Microsoft is part of a group of tech industry heavyweights who have formed a new foundation focused on open source. Specifically, the Open Source Security Foundation wants to increase security around open source services.

Joining Microsoft in the Open Source Security Foundation (OSSF) are Red Hat, Google, IBM, NCC Group, the OWASP Foundation, and the JPMorgan Chase banking firm (JPMC). Microsoft's own GitHub is also part of the group, which was announced on Monday.

The foundation is hosted at the Linux Foundation. In the announcement, the group said the intention is to connect and secure software by leveraging the Linux Foundation. For example, the Core Infrastructure Initiative (CII) and the GitHub-initiated Open Source Security Coalition (OSSC) are part of the initiative.

In a confirmation post, Microsoft's chief technology officer Mark Russinovich says the foundation will improve the security of open source software by "building a broader community, targeted initiatives and best practices."

"Given the complexity and communal nature of open source software, building better security must also be a community-driven process," he adds.

For a list of the current projects being looked at by the Open Source Security Foundation, head to the official GitHub page.

Russinovich explains that securing open source software can benefit every company in the foundation, as well as users:

"Open source software is core to nearly every company's technology strategy and securing it is an essential part of securing the supply chain for every company, including our own. With the ubiquity of open source software, attackers are currently exploiting vulnerabilities across a wide range of critical services and infrastructure, including utilities, medical equipment, transportation, government systems, traditional software, cloud services, hardware and IoT."


Open Source Projects: Why Security Still Matters – Dice Insights

Open source has been an integral part of the enterprise for decades. With the advent of Agile development methodologies and DevOps practices, open source has never been more important to developers creating the apps that are driving digital transformation.

But where does that leave security?

For years, open source was considered a safe security bet and immune to many of the vulnerabilities associated with Windows and other closed-source software, largely due to the open nature of the community supporting it. That way of thinking, however, needs to change.

A report published in June by security firm RiskSense found that vulnerabilities in open source software nearly doubled between 2018 and 2019, with nearly 1,000 projects posting year-over-year increases. Some of the biggest offenders include Magento, GitLab, and Jenkins.

The RiskSense report notes that one reason for this increase in vulnerabilities is the growing acceptance of open source in many enterprise applications: between 80 and 90 percent of software in use has some type of open-source component. This growing popularity means errors are slipping through the cracks, and hackers are there to exploit them.

And while developers want to speed up application development, security is seen as cautious and risk-averse, which can cause friction between developer and cybersecurity teams when trying to ensure that apps are ready for production and free of potential vulnerabilities.

"When you are focused on building and shipping software, there are benefits of using open source software," Wei Lien Dang, the co-founder and chief strategy officer at StackRox, which makes security tools for containers and Kubernetes, told Dice.

"However, organizations need to be careful that they understand how to deal with vulnerabilities and licensing issues that could create exposures," Dang added. "Software development practices, regardless of the methodology, that borrow from open source need to account for product security. It's not unique to DevOps: if you overlook the [open source software] patching process, you can easily put your organization at risk."

The non-profit Information Security Forum recently published a report on the growing use of open source software in the enterprise, driven by the adoption of DevOps and Agile methodologies.

The study notes that, while open source software contains about the same number of vulnerabilities as proprietary software, there are security issues to consider as well as unique challenges.

"In some organizations OSS has been inadvertently included in the IT infrastructure, or the organisation lacks a complete view of all OSS components deployed across their environment," according to the ISF report. "If this is the case, [open source software] components may have been implemented in an uncontrolled manner and potentially left in an insecure state, outdated, unpatched and prone to vulnerability exploits. Without adequate knowledge of where and how OSS is used, organisations risk allowing vulnerabilities into their infrastructure that they are unaware of, and therefore cannot proactively address."

One of the biggest security blunders associated with open source software happened in 2017, when Equifax's IT and security teams did not respond to and patch a vulnerability in the Apache Struts open-source web application framework, which Chinese hackers allegedly exploited in order to gain access to the company's network and exfiltrate data on over 145 million U.S. citizens. It was one of the largest data breaches in history.

Thomas Hatch, CTO and co-founder of automation security firm SaltStack, believes that many security and IT professionals are focused on protecting and securing high-level components and not checking to see what open source components are finding their way into enterprise applications. This leaves a security gap.

"The duties of IT workers vary greatly from organization to organization, but a large number of organizations have very few IT resources that are focused on patching," Hatch told Dice. "Modern IT professionals spend much more time managing high-level APIs and UIs. They need to deal with a large group of systems and services and are not as focused on the system and OSS management as they were 10 years ago. The ability to take massive amounts of free, untested, unvalidated, and not necessarily secured software off-the-shelf has created a liability deeply embedded in areas that make heavy use of open source software."

While open source is considered a way to reduce costs, there is still a price to pay to ensure the security of applications that use this code, Dang said.

"This comes in the form of experience and/or training to ensure that OSS code is patched and secured," Dang said. "This is one of the reasons why organizations go with commercial software or a cloud managed service. In those cases, it's the responsibility of the software or cloud provider to make patches available. You get the added benefit of a level of outsourced support and upkeep."

For those organizations that want to use open source and still ensure the security of their applications, there are two core considerations: having the right tooling in place to ensure protection, and creating the right processes for patching.

"You need to have a way of discovering vulnerabilities, license issues and other risks associated with using open source software," Dang said. "The methodology, Agile, DevOps or otherwise, shouldn't make a difference. If you choose to use OSS, you need to understand the security risks and implications of doing so and be prepared to deal with it appropriately."

Hatch, the CTO at SaltStack, has three ways that security and development teams can ensure the integrity of their applications:

Apply Patches: If the Equifax data breach proved one thing, it's that patching matters. "In that case, the fault did not lie with Apache Struts but with Equifax not responding to the associated vulnerability alert in a timely manner," Hatch said.

Visibility: There are more rules and processes in place today for open source projects, and the most mature of these projects follow best practices for security disclosures. "When the security issues are disclosed, they are done in a way that users will be able to see exactly what the issues are and how to upgrade the software," Hatch said.

Know What You Have: As open source has proliferated, developers have more choices than ever, which means keeping a better inventory of what components are being used in applications and knowing what bugs can affect them. "Open source allows us to have hundreds of thousands of software components to use; keeping track of them all is daunting," Hatch said.
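As a trivial illustration of that last point for Python environments, the short sketch below uses only the standard library to enumerate installed packages and versions, the raw inventory a vulnerability audit would start from. It is our sketch of the idea, not a tool the article recommends, and a real audit would feed this list to a vulnerability database.

    # Requires Python 3.8+ (importlib.metadata is in the standard library).
    from importlib.metadata import distributions

    # Print every installed package with its version, sorted by name.
    for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
        print(dist.metadata["Name"], dist.version)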


Microsoft, Google, Red Hat and IBM join forces to improve the security of open source software – Explica

The Open Source Security Foundation (OpenSSF) is a new organization founded by GitHub, Google, IBM, Microsoft, Intel, GitLab, HackerOne, and Red Hat, among others. Its goal is to unite initiatives to promote open source software security and accelerate collaboration between industries in one place.

At least that's how the official website of this new project within the Linux Foundation explains it. The foundation counts among its members some of the largest technology companies in the world, some of which also leverage large-scale open source projects in their commercial products, such as Microsoft (owner of GitHub), IBM (owner of Red Hat) and Google.

This foundation seeks to be a massive collaboration between software giants to improve the response to vulnerabilities in open source software. In fact, Microsoft itself (a recent convert to open source) says it will move many of its resources to this initiative to help identify security threats, establish best practices, develop tools and improve the disclosure of vulnerabilities.

They hope that their vulnerability disclosure system will help developers fix problems with their open source software in minutes instead of months.

This last point is especially interesting given that at the OpenSSF they will collaborate with companies like Google, whose group of cybersecurity specialists (Project Zero) regularly clashes with Microsoft by revealing the company's vulnerabilities before it has been able to fix them.

OpenSSF was established on the premise that there is a need for a mechanism for security researchers to collaborate in securing the open source supply chain, and that those researchers in different organizations have common interests and concerns. OpenSSF will seek to facilitate dialogues between these organizations.

Jim Zemlin, the CEO of the Linux Foundation, explained it like this:

We believe that open source is a public good and in all industries we have a responsibility to unite to improve and support the security of open source software on which we all depend. Ensuring open source security is one of the most important things we can do and it requires everyone around the world to collaborate with the effort. OpenSSF will provide a forum for a truly collaborative effort across industries.

Other founding members of OpenSSF also include ElevenPaths, Okta, Purdue, SAFECode, StackHawk, Trail of Bits, JPMorgan Chase, NCC Group, OWASP Foundation, Uber, and VMware.


Breaking up the Patent Monopoly for the Benefit of Batteries – JD Supra

PV Magazine - July 23, 2020

The patent monopoly is at odds with the global need for battery storage technology. As the world mobilizes towards climate change solutions, companies with battery patents will face increasing pressure to share this critical intellectual property (IP). How they respond will impact our planet's future.

The effort to make battery technology more widely available could not be more vital. It currently takes 20-30 years for energy sector inventions to reach the mass market. To reach current climate goals, we need to halve the diffusion time for clean technologies globally, according to Bernice Lee et al.'s Who Owns Our Low Carbon Future? Intellectual Property and Energy Technologies.

There are a variety of steps that need to be taken, and quickly, in order for the patent industry to enable the innovation we'll need to change the world: a global licensing database, patent pools, standard setting organizations, co-assigned patents, university-to-industry technology transfer, opening to the public, and patent pledges.

Global Licensing Database

Licensing has the benefit of being familiar to most sophisticated corporate players and many major universities. Licensing alone may not sufficiently speed diffusion, particularly if the licenses are exclusive. Cross-licensing agreements between parties to license their IP to each other may address part of this problem.

Because the existence, let alone terms, of many license agreements are confidential, parties hoping to enter a license have few benchmarks for price or other important terms. One way to provide such benchmarks would be to establish a global database with licensing data and best practices.

Patent Pools

"In a patent pool, multiple patent holders assign or license their individual rights to a central entity, which in turn exploits the collective rights by licensing, manufacturing, or both," according to Robert P. Merges, Contracting into Liability Rules: Intellectual Property Rights and Collective Rights Organizations. Member patent holders generally must license all patents covering industry-relevant technology, but can use any other member's IP for a fixed fee. Patent pools can vary widely in size.

Pooling IP may be essential in industries with complex technology that requires access to many complementary patents to function meaningfully. Pools significantly reduce the transaction costs of acquiring IP. However, the technology in a pool is only open to members; a small pool may do little to diffuse clean energy technology. Additionally, potential members may not want to pool royalties, particularly companies with high-value patents.

Standard Setting Organizations

Standard setting organizations (SSOs) are governing bodies of member companies that develop technical standards. Like patent pools, member entities typically contribute IP for the group's mutual use and may pay royalties into a shared pool. But SSOs go beyond patent pools by writing a catalogue of technical standards with which industry players commit to comply. This effort encourages standardization. And standardization is key to mass production.

On the other hand, SSOs require coordinated collective action and significant up-front investment. Establishing a battery technology SSO may take so much time that it overshadows the benefits.

Co-Assigned Patents

In the United States, and other nations, more than one entity or person can own a patent. Each co-owner may exploit the patent without compensating co-owners or obtaining their consent. This shared ownership obviates the need for a license between the co-owners. This arrangement also increases enforcement power, since all co-owners must join in enforcement actions.

But co-assigned patents may not be a viable global solution. The co-ownership statute does not allow co-owners to make, use, offer to sell, or sell the patented invention outside the U.S. (unsurprisingly). Co-owners must also be prepared to incur the additional expense of joining enforcement actions.

University-To-Industry Technology Transfer

Major research universities typically have technology transfer offices responsible for licensing technology invented by students and researchers to private enterprise. These licensed technologies can sometimes spin off into corporations. Many universities also have clean technology competitions, such as the MIT Clean Energy Prize, that encourage student-led startups.

Universities attract bright minds with time and resources to experiment with novel technologies. Dr. John Goodenough, inventor of the lithium-ion battery, is a professor of mechanical engineering at the University of Texas at Austin. Universities also have clear monetary and prestige incentives to share their work with private industry. But research institutions may have existing relationships with companies, making access to this technology selective.

Open to the Public: Create a Culture of Open Innovation

The clean energy industry can also look to the example of open source software to guide its response to the need for globally-diffused battery storage technology.

Authors of open source software make the software open to the public; anyone can modify and improve on the source code. Users are still subject to a license, but the license is royalty free and contains minimal restrictions. Some open source licenses require that anyone who modifies the software must release their modifications publicly and free of charge.

These clever licensing schemes would not be possible without a programming culture that valued the free exchange of ideas. Companies with critical clean energy patents could recreate that same culture in the clean energy industry.

Patent Pledge

Patent pledges are a promise not to assert ones patents against others who use the patented technology. As remarkable as it may seem to promise something for nothing, more than 100 companies have historically taken patent pledges, including Google, IBM, Microsoft, Red Hat, Sun Microsystems, and Twitter.

This amnesty is not without exception; most pledges end if the party using the patents sues the pledging patent holder. Companies with patent pledges are, in fact, getting something in return: goodwill with the public and, more importantly, with open source developers who rely on the free flow of information to innovate. Indeed, failing to take a patent pledge may tarnish your brand.

Unfortunately, it's not clear if patent pledges are enforceable in court. Moreover, sharing one's patent is not at all the same as sharing best practices for producing the invention.

Conclusion

These solutions may seem challenging, but as the effects of climate change are more widely felt, the adoption of one or more of these strategies to make green technology more broadly available appears increasingly essential. Better to understand the available tools now with time yet to strategize for a cleaner future.


How analogue film will be the future of digital history – ComputerWeekly.com

Earlier in July, a new initiative to preserve historical open source code began, with snapshots of open source code from Facebook and Netflix, among others, archived for posterity. The open source code of these and other GitHub repositories was successfully deposited in the GitHub Arctic Code Vault. These snapshots aim to preserve the code for future generations, historians and scientists.

The storage medium GitHub is entrusting this valuable archive to is good old-fashioned film, not dissimilar to the reels that people used to put into cameras before digital camera manufacturers came along claiming SD cards were better.

The GitHub Arctic Code Vault is a data repository preserved in the Arctic World Archive (AWA). This data repository is located in a decommissioned coal mine in the Svalbard archipelago, closer to the North Pole than the Arctic Circle. The archive is stored 250 meters deep in the permafrost of an Arctic mountain. GitHub originally captured a snapshot of every active public repository on 2 February 2020.

The archive holds 6,000 of GitHub's most significant repositories in perpetuity, capturing the evolution of technology and software. This collection includes the source code for the Linux and Android operating systems; the programming languages Python, Ruby, and Rust; the web platforms Node, V8, React, and Angular; the cryptocurrencies Bitcoin and Ethereum; the AI tools TensorFlow and FastAI; and many more.

Describing why it is important to maintain such a code archive, Thomas Dohmke, vice-president of special projects at GitHub, says: "Over the past 20 years, open source software has dramatically changed our lives. For instance, the German coronavirus track and trace app, and apps for finding the status of a flight or booking a car, all rely on open source code."

"Moving forward, there will be no major invention that doesn't rely on open source software," he says. For instance, the code that Katie Bouman and the team behind the Event Horizon Telescope used to capture the first ever picture of a black hole is based on open source software. "Some 90% of all software is dependent on open source software," says Dohmke. "No one wants to reinvent the wheel. Developers pull in libraries from GitHub."

From a purely practical perspective, the dependency on open source code in modern software development means that developers may find the code repository their application depends on has been removed by its maintainer. Code gets lost because hard disk drives fail, or because an inventor intentionally deletes a repository when it becomes a burden. Dohmke says this recently happened when the inventor of a JavaScript library decided to delete it; its removal broke software that had coding dependencies based on it.

"We know knowledge gets lost," says Dohmke. "For instance, you can't find a recipe for Roman concrete or how they built with it. The original plans for the Saturn V rocket were lost." Today, this is happening as developers strive to invent new things, which means early versions of products are not only superseded, but also forgotten. "We didn't care about the early Amazon pages or the first blogs. Their creators have moved on."

From a historical perspective, he adds: "The way we do software development may become irrelevant." Without an archive, the understanding of how software development was done in the early 21st century may be lost forever.

Dohmke says the team at GitHub has put together a manual describing software development practices and how developers collaborate. Such a manual may become more important as coding becomes more automated with the advent of AI algorithms such as GPT-3, which show that an AI can be taught how to write software.

Due to the global pandemic, the original snapshot of GitHub could not be flown to the Arctic World Archive. Instead, GitHub worked with Piql to write 21TB of repository data to 186 reels of piqlFilm.

According to Piql, film is a photosensitive, chemically stable and secure medium with proven longevity of hundreds of years. Film cannot be altered, and once the data is written, it cannot be edited. The data is stored offline and will not be affected in case of an electricity shortage or if exposed to electromagnetic pulses.

The code was successfully deposited in the Arctic Code Vault on 8 July 2020.


Open Source Security Foundation Joined by Microsoft and Others To Improve Linux Software – Redmondmag.com


Microsoft on Monday announced that it has joined the newly created Open Source Security Foundation, along with other industry partners, to improve the security of open source software.

Microsoft is a founding member of the Open Source Security Foundation, along with "GitHub, Google, IBM, JPMC, NCC Group, OWASP Foundation and Red Hat," the announcement added. The JPMorgan Chase banking chain is also listed as a founding member, per the Open Source Security Foundation's FAQ.

The Open Source Security Foundation is "hosted at the Linux Foundation" and brings together various Linux Foundation-initiated efforts. Those efforts include the "Core Infrastructure Initiative (CII)" and the "GitHub-initiated Open Source Security Coalition (OSSC)," among others. The OSSC members are joining the Open Source Security Foundation and the efforts of CII likely will get dissolved into the new foundation, the FAQ explained.

The aim of the new foundation is to "improve the security of open source software by building a broader community, targeted initiatives and best practices," Microsoft indicated. A list of the current technical initiatives being overseen by the foundation can be found at this GitHub page.

The open source software community has tended to critique proprietary software companies, such as Microsoft, because proprietary code can't be independently checked. However, the announcement by Mark Russinovich, Microsoft's chief technology officer, offered an alternative view, namely:

"Given the complexity and communal nature of open source software, building better security must also be a community-driven process," Russinovich argued in explaining Microsoft's support for the new foundation.

Microsoft had already been working with the OSSC to identify security threats in open source software. It's also worked to speed the software fixing process. Additionally, Microsoft has developed security tools for open source developers, and it currently offers best-practices advice, Russinovich noted.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.
