Manage Type 1 Diabetes With Lifesaving Open Source Software – Edgy Labs (blog)

A Type 1 diabetes patient took it upon herself to push diabetes technology forward. She built an artificial pancreas system to manage her diabetes and, more impressively, gave it away for free.

Diabetes is one of the most common diseases in the world, with an 8.5% global prevalence among adults over 18, or about 422 million people (WHO, 2014).

According to the American Diabetes Association (ADA), in 2015 nearly 1 in 10 Americans had diabetes. That's 9.4% of the population, or 30.3 million people. Among those, 1.25 million have type 1 diabetes.

Type 1 diabetes is a chronic autoimmune disease that destroys the pancreatic cells that secrete insulin, the hormone that regulates blood glucose, or blood sugar, levels.

Most T1D cases occur before the age of 20, and from then on the patient must learn to manage the disease throughout adult life.

If monitoring blood sugar levels is important for type 2 diabetes patients, it's crucial for T1D patients.

T1D patients must decide several times every day on the precise dose of insulin to administer, either by injection or by an insulin pump that they have to constantly adjust.

Errors may cause hyperglycemia, which can lead to many of the health complications associated with diabetes (blindness, amputations, and kidney or cardiac failure in the most severe cases). Sometimes patients, afraid of hyperglycemia, overdose on insulin only to trigger hypoglycemia, which can lead to coma and even death.

Diabetes tech, such as continuous glucose monitors (CGMs) and insulin pumps, has made things easier for T1D patients. However, they still need to monitor their blood sugar levels throughout the day.

During the night, type 1 patients rely on alarms built into their blood sugar monitors.

Sounds tough, right?

It is. This diabetes-induced anxiety motivated one patient to take the necessary steps to get control over her own health and life.

Alabama-born Dana Lewis is one of 1.5 million T1D patients in the U.S.; she was diagnosed at the age of 14.

Dana, as well as her friends and family, especially when she was living alone in Seattle, feared that the glucose alarm would not wake her up and that she would die of overnight hypoglycemia, or what's grimly known as "dead in bed" syndrome.

First, the University of Alabama graduate used open-source code to transfer the data from her CGM to the cloud and back to her with louder alarms. Then, she made the data shareable with her mother and boyfriend so they could call her if she didn't wake up.

Then, along with her husband Scott Leibrand, she took her system a step further and created software so she no longer had to wake up several times a night to push the button on her pump.

Called the APS (Artificial Pancreas System), Dana's system pairs the algorithm with a CGM and an insulin pump to monitor blood sugar and automatically adjust insulin delivery around the clock, day or night.
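In essence, a system like this is a feedback control loop: read the latest CGM value, compare it with a target, and adjust the pump's temporary basal rate accordingly. Here is a minimal sketch of that loop in Python; the thresholds, sensitivity factor, and function names are hypothetical simplifications for illustration, not the actual OpenAPS algorithm.

```python
# Minimal sketch of one closed-loop adjustment step, loosely modeled on the
# idea behind an artificial pancreas system. All names, thresholds, and
# rates here are hypothetical illustrations, not the real OpenAPS code.

TARGET_MGDL = 110   # hypothetical target blood glucose (mg/dL)
LOW_MGDL = 80       # below this, suspend insulin delivery
ISF = 50            # hypothetical insulin sensitivity factor (mg/dL per unit)

def recommend_temp_basal(glucose_mgdl: float, scheduled_basal: float) -> float:
    """Return a temporary basal rate (units/hour) given a CGM reading."""
    if glucose_mgdl < LOW_MGDL:
        return 0.0  # suspend delivery to head off hypoglycemia
    # Estimate how far we are from target and nudge the basal rate.
    correction_units = (glucose_mgdl - TARGET_MGDL) / ISF
    return max(0.0, scheduled_basal + correction_units)

if __name__ == "__main__":
    print(recommend_temp_basal(180.0, scheduled_basal=1.0))  # raises the rate
    print(recommend_temp_basal(70.0, scheduled_basal=1.0))   # suspends
```

A real system also has to account for insulin already on board, carbohydrates, and sensor noise, which is why DIY builders follow the OpenAPS documentation rather than a toy loop like this.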

Dana also made her creation free, via the OpenAPS organization, so that diabetes patients can build their own DIY systems.

Recently, Fast Company put Dana Lewis on its 100 Most Creative People in Business list for 2017.

See the original post:
Manage Type 1 Diabetes With Lifesaving Open Source Software - Edgy Labs (blog)

Developers Petition Adobe to Open Source Flash – MakeUseOf

It turns out that not everyone hates Flash. Honestly, we thought the feelings of revulsion were fairly universal. However, one web developer has made it his mission to save Flash for future generations. How? By petitioning Adobe to open source Flash once they kill it off in 2020.

In case you hadn't heard, Adobe is finally killing Flash, after years of tech companies begging Adobe to hammer the final nail into the rotting coffin. This is it: Flash is going away for good, with no comebacks and no last-minute reprieves. It won't happen for a few more years, but Adobe has announced its intention to end-of-life Flash at some point in 2020.

Most people are pleased that Adobe is finally killing Flash. Most people, but not everyone…

A Finnish web developer by the name of Juha Lindstedt has started a petition asking Adobe to open source Flash. He accepts that Flash, as it exists at the moment, is flawed. However, he's arguing the case for Adobe to essentially hand Flash over to the internet to see what the community can do with it.

His main argument appears to be that Flash is an important piece of internet history, and killing Flash means future generations can't access the past. Much of the early web was built using Flash, which means that once it's gone, games, experiments, and websites would be forgotten.

Lindstedt even lists some of the ways open sourcing the software would keep Flash projects alive safely for archival purposes. He argues: "There might be a way to convert swf/fla to HTML5/canvas/webgl/webassembly, or some might write a standalone player for it. Another possibility would be to have a separate browser."

The idea of open sourcing Flash is currently being discussed in various places, including Hacker News and Newgrounds. Opinions are very much split down the middle, which was always going to be the case with a debate such as this one. In the end, though, it will be Adobe's decision.

If you want to sign the petition to support the idea of open sourcing Flash, you simply need to star the repository on GitHub. Lindstedt promises that he will deliver the petition to Adobe at some point in the future. But it's anyone's guess what Adobe's reaction will be.
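For the curious, starring can also be done programmatically through GitHub's REST API. Below is a minimal sketch; the OWNER/REPO values are placeholders (the article doesn't name the repository), and the personal access token is assumed to live in an environment variable.

```python
# Minimal sketch: star a GitHub repository via the REST API.
# OWNER/REPO and the token are placeholders, not the actual petition repo.
import os
import requests

def star_repo(owner: str, repo: str, token: str) -> None:
    resp = requests.put(
        f"https://api.github.com/user/starred/{owner}/{repo}",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()  # GitHub returns 204 No Content on success

if __name__ == "__main__":
    star_repo("OWNER", "REPO", os.environ["GITHUB_TOKEN"])
```

Starring through the web UI is, of course, the simpler path for signing.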

We're fans of open source software here at MakeUseOf. However, open sourcing Flash would just be keeping it around, and those gaping security holes with it. Flash IS part of internet history, but that's where it belongs: in the past, with all of the other software that's been abandoned over the years. Let Flash die with dignity, and let's all move on to a bigger and better future.

What are your views on Flash? Do you welcome Adobe's decision to kill it off in 2020? Or do you think it would be a good idea to open source Flash to preserve the past for future generations? Will you be signing the petition? The comments are open below…

Image Credit: Antonio Silveira via Flickr

Visit link:
Developers Petition Adobe to Open Source Flash - MakeUseOf

Is Wall Street Ready for Open Source Software? – FTF News

Eugene Grygo, Chief Content Editor, FTF News

Open source software and the open collaboration practices that it engenders are quietly gaining ground among software vendors, securities firms and maybe across Wall Street.

One of the biggest proponents of the OSS movement, the Symphony Software Foundation, based in Palo Alto, Calif., has been consistently pulling market participants into its camp.

Known for its support of the Symphony messaging platform, Symphony's community has grown to 200,000 licensed users across 170 companies, including 40 of the world's top asset managers and 25 of the largest global banks.

In February, the nonprofit group added charting and data visualization provider ChartIQ, data and analytics application vendor The Beast Apps, and real-time products and services provider Tick42 as new, cutting-edge members. "These members will add to the dynamic between large financial institutions and younger fintech firms to collaborate in the open and achieve true interoperability through open source," Gabriele Columbro, executive director of the Foundation, said in a prepared statement at the time.

In addition, about a year ago, OpenFin, a provider of HTML5 runtime technology, joined the foundation for a collaboration that will enable OpenFin and other foundation member organizations "to drive fintech standardization, contribute to the Symphony platform, and further drive the adoption of open source technology within financial services," officials say.

Through the foundation's Open Governance model, OpenFin will influence the overall product direction of the Symphony platform, while its participation in working groups will aim to foster container standardization and application interoperability for the financial industry, officials add.

The foundation now reports that it has more than 50 open source projects underway, and two months shy of its second anniversary, it also has more than 100 contributors, four active working groups, and 25 member organizations that are part of the Symphony ecosystem.

The foundation has also added four new Silver Members (Arcontech, BankEX, Cloud9 Technologies and FinTech Studios) that bring experience in financial market data solutions, distributed ledger technology (DLT), voice trading and artificial intelligence technologies.

"When done right, open source enables a degree of innovation which is simply not possible in proprietary development or solution-time collaboration models like open APIs [application programming interfaces] or open standards," said Columbro, executive director of the Symphony Software Foundation, in a statement.

"The growth of our community shows how these strategic benefits can outweigh legal, technical and frankly cultural aspects preventing effective innovation in financial services technology," Columbro adds. "We see our Foundation as the proven-to-be-trusted environment where fintech producers and consumers can collaborate on open source, industry-grade standard solutions, sparking innovation on common and new use cases that have the potential of reshaping Wall Street."

With Wall Street in mind, the foundation has also announced that it is hosting an inaugural Open Source Strategy Forum in New York on November 8. The one-day conference is open to executive-level decision makers and senior technologists from financial services seeking to drive industry innovation through open source, officials say.

In the meantime, the foundation offers an Open Developer Platform (ODP) to open source contributors, providing open API access to Symphony and a compliant open source development process, officials add.

More information about the technology is at: http://symphony.foundation

Read this article:
Is Wall Street Ready for Open Source Software? - FTF News

Open-source software rapidly processes spectral data, accurately identifies and quantifies lipid species – Phys.Org

July 25, 2017

Image: The LIQUID interface. Credit: Pacific Northwest National Laboratory

Lipids play a key role in many metabolic diseases, including hypertension, diabetes, and stroke. So having a complete profile of the body's lipids, its "lipidome," is important.

Lipidomics studies are often based on liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). But researchers have a hard time processing data fast enough, and they are unable to confidently identify and accurately quantify the lipid species detected.

Incorrect identifications can result in misleading biological interpretations. Yet existing tools are not designed for high-volume verification, so identifications have to be manually checked to ensure accuracy. Since scientists increasingly want larger-scale lipidomics studies, analysts need improved software for identifying lipids.

A recent paper by lead author Jennifer E. Kyle and eight co-authors at Pacific Northwest National Laboratory (PNNL) introduces open-source lipid identification software, Lipid Quantification and Identification (LIQUID). The scoring is trainable, the search database is customizable, and multiple lines of evidence are displayed, allowing for confident identifications. LIQUID also offers single- and global-target searches, as well as fragment-pattern searches. All this makes it possible for researchers to track similar and repeating patterns in MS/MS spectra.

Compared to other freely available software commonly used to identify lipids and other small molecules, LIQUID processes data rapidly and generates a higher number of validated lipid identifications. Its reference database includes more than 21,200 unique lipid targets across six lipid categories, 24 classes, and 63 subclasses.

LIQUID is able to confidently identify more lipid species with a faster combined processing and validation time than any other software in its field.

What's Next?

Developers of LIQUID will increase the reference library to include lipids that may be unique to particular disease states or to organisms from select environmental niches. This means researchers will be able to characterize a more diverse range of samples and therefore enhance the understanding of biological and environmental systems of interest.


More information: Jennifer E. Kyle et al., "LIQUID: an open-source software for identifying lipids in LC-MS/MS-based lipidomics data," Bioinformatics (2017). DOI: 10.1093/bioinformatics/btx046


Excerpt from:
Open-source software rapidly processes spectral data, accurately identifies and quantifies lipid species - Phys.Org

Federal Cloud Computing – TechTarget

The following is an excerpt from Federal Cloud Computing by author Matthew Metheny and published by Syngress. This section from chapter three explores open source software in the federal government.

Open source software (OSS) and cloud computing are distinctly different concepts that have independently grown in use, both in the public and private sectors, but have each faced adoption challenges by federal agencies. Both OSS and cloud computing individually offer potential benefits for federal agencies to improve their efficiency, agility, and innovation, by enabling them to be more responsive to new or changing requirements in their missions and business operations. OSS improves the way the federal government develops and also distributes software and provides an opportunity to reduce costs through the reuse of existing source code, whereas cloud computing improves the utilization of resources and enables a faster service delivery.

In this chapter, issues faced by OSS in the federal government will be discussed, in addition to their relationship to the federal government's adoption of cloud computing technologies. However, this chapter does not differentiate OSS from proprietary software; rather, it focuses on highlighting the importance of the federal government's experience with OSS in the adoption of cloud computing.

Over the years, the private sector has encouraged the federal government to consider OSS by making the case that it offers an acceptable alternative to proprietary commercial off-the-shelf (COTS) software. Regardless of the potential cost-saving benefits of OSS, federal agencies have historically approached it with cautious interest, and there are other potential issues in transitioning from existing proprietary software beyond cost. These issues include a limited in-house skillset for OSS development within the federal workforce, a lack of knowledge regarding procurement or licensing, and the misinterpretation of acquisition and security policies and guidance. Although some of these challenges and concerns have limited or slowed broader-scale adoption of OSS, federal agencies have become more familiar with OSS and, with the marketplace expansion of available products and services, have come to consider OSS a viable alternative to enterprise-wide COTS software. This renewed shift toward OSS is also being driven by initiatives such as 18F and the US Digital Service, and by the publication of guidance such as the Digital Services Playbook, which urges federal agencies to "consider using open source, cloud based, and commodity solutions across the technology stack".

Interoperability, portability, and security standards have already been identified as critical barriers for cloud adoption within the federal government. OSS facilitates overcoming standards obstacles through the development and implementation of open standards. OSS communities support standards development through the "shared" development and industry implementation of open standards. In some instances, the federal government's experience with standards development has enabled the acceptance and use of open standards-based, open source technologies and platforms.

The federal government's use of OSS has its beginning in the 1990s. During this period, OSS was used primarily within the research and scientific community where collaboration and information sharing was a cultural norm. However, it was not until 2000 that federal agencies began to seriously consider the use of OSS as a model for accelerating innovation within the federal government. As illustrated in Fig. 3.1, the federal government has developed a list of OSS-related studies, policies, and guidelines that have formed the basis for the policy framework that has guided the adoption of OSS. This framework tackles critical issues that have inhibited the federal government from attaining the full benefits offered by OSS. Although gaps still exist in specific guidelines relating to the evaluation, contribution, and sharing of OSS, the policy framework serves as a foundation for guiding federal agencies in the use of OSS. In this section, we will explore the policy framework with the objective of describing how the current policy framework has led to the broader use of OSS across the federal government, and more importantly how this framework has enabled the federal government's adoption of cloud computing by overcoming the challenges with acquisition and security that will be discussed in detail in the next section.

The President's Information Technology Advisory Committee (PITAC) was given the goal of examining OSS.

The PITAC published a report concluding that the use of the open source development model (also known as the Bazaar model) was a viable strategy for producing high-quality software through a mixture of public, private, and academic partnerships. In addition, as presented in Table 3.1, the report also highlighted several advantages and challenges. Some of these key issues have been at the forefront of the federal government's adoption of OSS.

Over the years since the PITAC report, the federal government has gained significant experience in both sponsoring and contributing to OSS projects. For example, one of the most widely recognized contributions by the federal government specifically related to security is the Security Enhanced Linux (SELinux) project. The SELinux project focused on improving the Linux kernel through the development of a reference implementation of the Flask security architecture for flexible mandatory access control (MAC). In 2000, the National Security Agency (NSA) made SELinux available to the Linux community under the terms of the GNU's Not Unix (GNU) General Public License (GPL).

Starting in 2001, the MITRE Corporation published a report for the US Department of Defense (DoD) that built a business case for the DoD's use of OSS. The business case discussed both the benefits and risks of considering OSS. In MITRE's conclusion, OSS offered significant benefits to the federal government, such as improved interoperability, increased support for open standards and quality, lower costs, and agility through reduced development time. In addition, MITRE highlighted issues and risks, recommending that any consideration of OSS be carefully reviewed.

Shortly after the MITRE report, the federal government began to establish specific policies and guidance to help clarify issues around OSS. The DoD Chief Information Officer (CIO) published the Department's first official DoD-wide memorandum to reiterate existing policy and to provide clarifying guidance on the acquisition, development, and the use of OSS within the DoD community. Soon after the DoD policy, the Office of Management and Budget (OMB) established a memorandum to provide government-wide policy regarding acquisition and licensing issues.

After 2003, multiple misconceptions persisted, specifically within the DoD, regarding the use of OSS. Therefore, in 2007, the US Department of the Navy (DON) CIO released a memorandum that clarified the classification of OSS and directed the Department to identify areas where OSS could be used within the DON's IT portfolio. This was followed by another DoD-wide memorandum in 2009, which provided DoD-wide guidance and clarified the use and development of OSS, including explaining the potential advantages to the DoD of reducing the development time for new software, anticipating threats, and responding to continual changes in requirements.

In 2009, OMB released the Open Government Directive, which required federal agencies to develop and publish an Open Government Plan on their websites. The Open Government Plan described how federal agencies would improve transparency and integrate public participation and collaboration. As an example response to the directive's support for openness, the National Aeronautics and Space Administration (NASA), in furtherance of its Open Government Plan, released the "open.NASA" site, built completely using OSS such as the LAMP stack and the WordPress content management system (CMS).

On May 23, 2012, the White House released the Digital Government Strategy that complements other initiatives and established principles for transforming the federal government. More specifically, the strategy outlined the need for a "Shared Platform" approach. In this approach, the federal government would need to leverage "sharing" of resources such as the "use of open source technologies that enable more sharing of data and make content more accessible".

The Second Open Government Action Plan established an action to develop an OSS policy to improve federal agencies' access to custom software to "fuel innovation, lower costs, and benefit the public". In August 2016, the White House published the Federal Source Code Policy, which is consistent with the "Shared Platform" approach in the Digital Government Strategy, requiring federal agencies to make custom code available as OSS. Further, the policy also made "custom-developed code available for Government-wide reuse and make their code inventories discoverable at https://www.code.gov ('Code.gov')".

In this section, we discussed key milestones that have impacted the federal government's cultural acceptance of OSS, as well as the policy framework that has been developed through a series of policies and guidelines to support federal agencies in adopting OSS and in establishing processes to encourage and support its development. The remainder of this chapter will examine the key issues that have impacted OSS adoption and briefly examine the role of OSS in the adoption of cloud computing within the federal government.

About the author:

Matthew Metheny, PMP, CISSP, CAP, CISA, CSSLP, CRISC, CCSK, is an information security executive and professional with twenty years of experience in the areas of finance management, information technology, information security, risk management, compliance programs, security operations and capabilities, secure software development, security assessment and auditing, security architectures, information security policies/processes, incident response and forensics, and application security and penetration testing. He currently is the Chief Information Security Officer and Director of Cyber Security Operations at the Court Services and Offender Supervision Agency (CSOSA), and is responsible for managing CSOSA's enterprise-wide information security and risk management program, and cyber security operations.

Read the original post:
Federal Cloud Computing - TechTarget

Teradata Acquires San Diego-based Start-up StackIQ – MarTech Series (press release)

Acquisition Bolsters Teradata's Build and Delivery Capability of On-Premises and Hybrid Cloud Solutions for Its Enterprise Customers

Teradata, the leading data and analytics company, announced the acquisition of StackIQ, developer of one of the industry's fastest bare metal software provisioning platforms, which has managed the deployment of cloud and analytics software on millions of servers in data centers around the globe. The deal will leverage StackIQ's expertise in open source software and large cluster provisioning to simplify and automate the deployment of Teradata Everywhere. Offering customers the speed and flexibility to deploy Teradata solutions across hybrid cloud environments allows them to innovate quickly and build new analytical applications for their business.

In addition to technology assets, the acquisition also includes StackIQ's talented team of engineers, who will join Teradata's R&D organization to help accelerate the company's ability to automate software deployment in operations, engineering and end-user customer ecosystems.

"Teradata prides itself on building and investing in solutions that make life easier for our customers," said Oliver Ratzesberger, Executive Vice President and Chief Product Officer for Teradata. "Only the best, most innovative and applicable technology is added to our ecosystem, and StackIQ delivers with products that excel in their field. Adding StackIQ technology to IntelliFlex, IntelliBase and IntelliCloud will strengthen our capabilities and enable Teradata to redefine how systems are deployed and managed globally."

"Our incredibly high standards also apply to the people we hire," continued Ratzesberger. "As Teradata continues to expand its engineering (R&D) skills to drive ongoing technology innovation, we are seeking qualified, talented individuals to join our team. Once again, StackIQ has set the bar with stellar engineers who we are honored to now call Teradata employees."

Under the terms of the deal, Teradata will now own StackIQ's unique IP that automates and accelerates software deployment across large clusters of servers (both physical and virtual/in the cloud). This increase in automation will occur across all Teradata Everywhere deployments, dramatically reducing build and delivery times for complex business analytics solutions and adding the capability to manage software-only appliances across hybrid cloud infrastructure. The speed of Teradata's new integrated solution also allows for rapid re-provisioning of internal test or benchmarking hardware, as well as swift redeployment between technologies to match a customer's changing workload requirements.

"Joining Teradata, the market leader in analytic data solutions, truly validates the importance of StackIQ's engineering and the talent we have cultivated over the years," said Tim McIntire, Co-Founder at StackIQ. "We are looking forward to bringing a bit of San Diego's start-up culture to Teradata, and working together to simplify Teradata's customer experience for system software deployment and upgrades."

Read this article:
Teradata Acquires San Diego-based Start-up StackIQ - MarTech Series (press release)

The need for open source security in medical devices – ITProPortal

Wireless and wearable technologies have brought about dramatic improvements in healthcare, allowing patients mobility while providing healthcare professionals with easier access to patient data. Many medical devices that were once tethered to patients, positioned next to hospital beds, or at a fixed location, are now transportable. Evolving from the traditional finger-prick method of glucose monitoring, wearable devices equipped with sensors and wireless connectivity now assist with monitoring blood sugar levels, connect with health-care providers, and even deliver medication. Critical life-sustaining devices, such as pacemakers, can be checked by doctors using wireless technology and reduce the time a patient needs to spend at the hospital while allowing the doctor to react more rapidly to patient problems.

A major driver of the technological revolution in medical devices is software, and that software is built on a core of open source. Black Duck's 2017 Open Source Security and Risk Analysis (OSSRA) research found that the average commercial application included almost 150 discrete open source components, and that 67 per cent of the more than 1,000 commercial applications scanned included vulnerable open source components. The analysis made evident that the use of open source components in commercial applications is pervasive across every industry vertical, including healthcare.

The arguments for using open source are straightforward: open source lowers development costs, speeds time to market, and accelerates innovation. When it comes to software, every manufacturer wants to spend less time on what are becoming commodities, such as the core operating system and connectivity, and focus on features that will differentiate their brand. The open source model supports that objective by expediting every aspect of agile product development.

But visibility and control over open source are essential to maintain the security and code quality of medical device software and platforms.

Over two million patients in the United States have implanted devices, including pacemakers and implantable cardioverter-defibrillators. More than seven million patients now benefit from remote monitoring and the use of connected medical devices as an integral part of their care routines.

While the software used in the vast majority of medical devices is closed and proprietary to prevent commercial rivals from copying each other's code, that software usually contains a wealth of open source components. The OSSRA study I cited earlier found open source in 46 per cent of the commercial applications associated with the healthcare, health tech, and life sciences sector.

Researchers Billy Rios and Jonathan Butts recently acquired hardware and supporting software for four different brands of pacemakers and looked for weaknesses in architecture and execution. One of the biggest issues noted in the paper they published was one Black Duck sees time and again: unpatched software libraries.

All four pacemakers examined contained open source components with vulnerabilities, and roughly 50 per cent of all components included vulnerabilities. Most shockingly, the pacemakers had an average of 50 vulnerabilities per vulnerable component and over 2,000 vulnerabilities per vendor.

When patient safety is a function of software, the issue of software security becomes paramount, particularly when it comes to medical devices. But secure software is an ephemeral concept. What we think of as secure today can change overnight as new vulnerabilities are discovered and disclosed, and as code ages, more vulnerabilities are likely to be disclosed. On average, 3,600 new open source vulnerabilities are discovered every year (though still far fewer than the number reported in commercial code).

Open source is neither more nor less secure than custom code. However, there are certain characteristics of open source that make vulnerabilities in popular components very attractive targets for hackers. The return on investment for an open source vulnerability is high. A single exploit can be used to compromise hundreds or thousands of applications using that vulnerable component.

Whether in open source or proprietary code, most known vulnerabilities, like Heartbleed and the SMB vulnerability exploited in the WannaCry ransomware attacks, have patches available on the date of their public disclosure. But despite the availability of patches, an alarming number of both companies and individuals simply do not apply them. Months after Microsoft issued its security patch, thousands of computers remain vulnerable to the WannaCry exploit for a variety of reasons, ranging from the use of bootleg software to simple neglect.

Patches often aren't applied because of concerns that the patch might break a currently working system. Each patch changes the system, which can impact its reliability and functionality. Healthcare organisations, for example, often put functionality and uptime ahead of security, and in doing so expose themselves to attack through unpatched, vulnerable applications.

In other cases, it's a lack of insight: organisations are simply unaware of a critical vulnerability or its patch until they're under attack. While software vendors like Microsoft can push updates and fixes out to users, open source has a "pull" support model. Unlike most proprietary software, users of open source are responsible for keeping track of vulnerabilities, fixes, and updates for the open source they use, rather than having those fixes pushed out to them. Unless a vendor is aware that a vulnerable open source component is included in its application(s), it's highly probable that that component will remain unpatched.

Rios and Butts' paper didn't state whether the researchers checked for software/firmware updates from the vendors prior to analysis. My assumption is that they did not, but whether this would have made a real-world difference is arguable: Black Duck's own research indicates that vendors are typically unaware of all of the open source they use, since it can enter the code base in so many ways. On average, prior to having a Black Duck code scan, our customers were aware of less than half of the third-party libraries they use.

To be clear, the problem isn't the use of open source. It's the fact that open source is often invisible to those using it. Vulnerabilities in open source may open up users to targeted or non-targeted attacks. Depending on the software (home monitoring, physician, programmer, etc.), an attack could affect a single patient or an entire practice. When the WannaCry ransomware spread across the world, multiple U.K. hospitals reported that their radiology departments were completely knocked out by the outbreak.

If the attack is on implantable medical devices, it could become a matter of life and death.

Unless those in the medical device software supply chain carefully track the open source they use, and map that open source to the thousands of vulnerabilities disclosed every year, they will be unable to protect their applications, and their customers, from those vulnerabilities.

To make progress in defending against open source security threats and compliance risks, both medical device manufacturers and their suppliers must adopt open source management practices that:

Fully inventory open source software: Organisations cannot defend against threats that they do not know exist. A full and accurate inventory (bill of materials) of the open source used in their applications is essential.

Map open source to known security vulnerabilities: Public sources, such as the National Vulnerability Database (NVD), provide information on publicly disclosed vulnerabilities in open source software. Organisations need to reference these sources to identify which of the open source components they use are vulnerable (see the sketch after this list).

Identify license and code quality risks: Failure to comply with open source licenses can put organisations at significant risk of litigation and compromise of IP. Likewise, use of out-of-date or poor quality components degrades the quality of applications that use them. These risks also need to be tracked and managed.

Enforce open source risk policies: Many organisations lack even basic documentation and enforcement of open source policies that would help them mitigate risks. Manual policy reviews are a minimum requirement, but as software development becomes more automated so too must management of open source policies.

Alert on new security threats: With more than 3,600 new open source vulnerabilities discovered every year, the job of tracking and monitoring vulnerabilities does not end when applications leave development. Organisations need to continuously monitor for new threats as long as their applications remain in service.
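As a concrete illustration of the mapping step above, here is a minimal sketch that queries the public NVD REST API for disclosed CVEs by keyword. Real software composition analysis tools match on precise component and version identifiers (CPEs) rather than keywords, and the component names below are hypothetical bill-of-materials entries.

```python
# Minimal sketch: look up publicly disclosed CVEs for an open source
# component via the NVD 2.0 REST API. Keyword search is a crude stand-in
# for the CPE/version matching that real SCA tools perform.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cves_for_component(name: str, limit: int = 5) -> list[str]:
    """Return up to `limit` CVE IDs whose descriptions mention `name`."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": name, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return [v["cve"]["id"] for v in resp.json().get("vulnerabilities", [])]

if __name__ == "__main__":
    # Hypothetical bill-of-materials entries
    for component in ["openssl", "glibc"]:
        print(component, cves_for_component(component))
```

In practice this lookup would run continuously against the full inventory, feeding the alerting practice described in the last item of the list.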

As open source use continues to increase, effective management of open source security and license compliance risk is becoming increasingly important. By integrating risk management processes and automated solutions into their product lifecycle, medical device manufacturers can maximise the benefits of open source use while effectively managing its risks.

Mike Pittenger, Vice President of Security Strategy, Black Duck Software

Image Credit: Photo_Concepts / iStock

Original post:
The need for open source security in medical devices - ITProPortal

Monthly quiz: Test yourself on open source development tools trends – TechTarget

The move to open source development tools -- already unstoppable -- continues to gain momentum. Years ago, open source was looked upon as a way to save money. Today, a key driver is the clear fact that, with tens of thousands of contributors sharing their expertise and the ever-widening availability of high-quality code, resistance is futile.

In this expert handbook, we explore the issues and trends in cloud development and provide tips on how developers can pick the right platform.

One gauge for measuring the growth of open source is how quickly container technology has been adopted. According to a January 2017 report from 451 Research, the global application container segment -- just one piece of the overall tools market -- reached $762 million in 2016 and is forecast to reach $2.7 billion in 2020. That's an impressive 40% compound rate over four years.
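As a quick sanity check on that figure, the implied compound annual growth rate from $762 million in 2016 to $2.7 billion in 2020 works out to roughly 37%, in line with the report's rounded number:

```python
# Quick check of the implied compound annual growth rate (CAGR)
# from $762M (2016) to $2.7B (2020), i.e., over four years.
start, end, years = 762e6, 2.7e9, 4
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 37.2%
```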

Yet, not all is well. A problem with open source components is that they are, well, open. They could come from anywhere, from anyone. How do they rate in terms of performance and security? It's the big unknown. In its 2017 open source security and risk analysis report, Black Duck Software noted that its own audit found open source components present in 96% of the applications it examined, with apps incorporating 147 unique components on average. And consider this scary finding: the financial services and financial technology sector had the highest number of vulnerabilities per application, at 52, and fully 60% of those apps harbored high-risk vulnerabilities.

How well do you know open source development tools and components trends? Take this brief quiz, and see how well you measure up.

Joel Shore is news writer for TechTarget's Business Applications and Architecture Media Group. Write to him at jshore@techtarget.com or follow @JshoreTT on Twitter.


Follow this link:
Monthly quiz: Test yourself on open source development tools trends - TechTarget

Scality Launches Zenko, Open Source Software To Assure Data Control In A Multi-Cloud World – insideBIGDATA

Scality, a leader in object and cloud storage, announced the open source launch of Scality Zenko, a Multi-Cloud Data Controller. The new solution is free to use and embed into developer applications, opening a new world of multi-cloud storage for developers.

Zenko provides a unified interface based on a proven implementation of the Amazon S3 API across clouds. This allows any cloud to be addressed with the same API and access layer, while information is stored in each cloud's native format. For example, any Amazon S3-compliant application can now support Azure Blob Storage without any application modification. Scality's vision for Zenko is to add data management controls to protect vital business assets, and metadata search to quickly subset large datasets based on simple business descriptors.
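To illustrate the idea, here is a minimal sketch using boto3, a standard S3 client; the endpoint URL, credentials, and bucket name are hypothetical placeholders for a locally running Zenko instance, not values from the announcement.

```python
# Minimal sketch: talk to a hypothetical Zenko endpoint with a standard
# S3 client. Endpoint, credentials, and bucket name are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8000",    # hypothetical Zenko S3 endpoint
    aws_access_key_id="accessKey1",           # placeholder credentials
    aws_secret_access_key="verySecretKey1",
)

# The same S3 calls work regardless of which cloud backs the bucket;
# Zenko writes the object to that cloud in its native format.
s3.create_bucket(Bucket="multi-cloud-demo")
s3.put_object(Bucket="multi-cloud-demo", Key="hello.txt", Body=b"hello, multi-cloud")
print(s3.get_object(Bucket="multi-cloud-demo", Key="hello.txt")["Body"].read())
```

Pointing the same client at a bucket backed by a different cloud requires no code change, which is the portability claim at the heart of the announcement.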

"We believe that everyone should be in control of their data," said Giorgio Regni, CTO at Scality. "Our vision for Zenko is simple: bring control and freedom to the developer to unleash a new generation of multi-cloud applications. We welcome anyone who wants to participate and contribute to this vision."

Zenko builds on the success of the company's Scality S3 Server, the open-source implementation of the Amazon S3 API, which has seen more than 600,000 DockerHub pulls since it was introduced in June 2016. Scality is releasing this new code to the open source community under an Apache 2.0 license, so that any developer can use and extend Zenko in their development.

"With Zenko, Scality makes it even easier for enterprises of all sizes to quickly and cost-effectively deploy thousands of apps within the Microsoft Azure Cloud and leverage its many advanced services," said Jurgen Willis, Head of Product for Azure Object Storage at Microsoft Corp. "Data stored with Zenko is stored in Azure Blob Storage native format, so it can easily be processed in the Azure Cloud for maximum scalability."

The Zenko Multi-Cloud Data Controller expands on the Scality S3 Server with several additions.

Application developers looking for design efficiency and rapid implementation will appreciate the productivity benefits of using Zenko. Today, applications must be rewritten to support each cloud, which reduces productivity and makes the use of multiple clouds expensive. With Zenko, applications are built once and deployed across any cloud service.

"Cityzen Data provides a data management platform for collecting, storing, and delivering value from all kinds of sensor data to help customers accelerate progress from sensors to services, primarily for health, sport, wellness, and scientific applications," said Mathias Herberts, co-founder and CTO at Cityzen Data. "Scality provides our backend storage for this and gives us a single interface for developers to code within any cloud on a common API set. With Scality, we can write an application once and deploy anywhere on any cloud."

Sign up for the free insideBIGDATA newsletter.

Link:
Scality Launches Zenko, Open Source Software To Assure Data Control In A Multi-Cloud World - insideBIGDATA

Software wet wipes, Sonatype advocates supply chain hygiene – ComputerWeekly.com (blog)

Supply chain automation company Sonatype produces what it calls its Software Supply Chain Report every year (now in its third edition) in an attempt to highlight alleged risks lurking within open source software components.


The firm gets quite puritanical and says it wants to quantify the empirical benefits of actively managing so-called software supply chain hygiene.

There's a big claim being made here, and it reads as follows: organisations that are actively managing the quality of open source components flowing into production applications are realising measurable gains in quality and productivity.

Sonatype specialises in technology areas which include automated governance tools, within the context of what we now understand to be the DevOps discipline.

With the above claim (and perhaps a pinch of salt) in mind, then, we can learn that analysis of more than 17,000 applications reveals that teams utilising automated governance tools reduced the percentage of defective components in their applications by 63%.

"Companies are no longer building software applications from scratch, they are manufacturing them as fast as they can using an infinite supply of open source component parts. However, many still rely on manual and time-consuming governance and security practices instead of embracing DevOps-native automation. Our research continues to show that development teams managing trusted software supply chains are dramatically improving quality and productivity," said Wayne Jackson, CEO, Sonatype.

The wider claims here (from Sonatype) include suggestions that even when vulnerabilities are known, open source software projects are slow to remediate, if they do so at all. Only 15.8 per cent of OSS projects actively fix vulnerabilities, and even then the mean time to remediation was 233 days.

This, says the firm, puts the onus on DevOps organisations to actively govern which OSS projects they work with, and which components they ultimately consume.

The full report is available here.

Originally posted here:
Software wet wipes, Sonatype advocates supply chain hygiene - ComputerWeekly.com (blog)