Open-source software rapidly processes spectral data, accurately identifies and quantifies lipid species – Phys.Org

July 25, 2017 The LIQUID interface. Credit: Pacific Northwest National Laboratory

Lipids play a key role in many metabolic diseases, including hypertension, diabetes, and stroke. So having a complete profile of the body's lipids, its "lipidome," is important.

Lipidomics studies are often based on liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). But researchers have a hard time processing data fast enough, and they are unable to confidently identify and accurately quantify the lipid species detected.

Incorrect identifications can result in misleading biological interpretations. Yet existing tools are not designed for high-volume verification, so identifications have to be checked manually to ensure accuracy. Since scientists increasingly want larger-scale lipidomics studies, analysts need improved software for identifying lipids.

A recent paper by lead author Jennifer E. Kyle and eight co-authors at Pacific Northwest National Laboratory (PNNL) introduces LIQUID (Lipid Quantification and Identification), open-source software for lipid identification. Its scoring is trainable, its search database is customizable, and it displays multiple lines of evidence, allowing for confident identifications. LIQUID also supports single- and global-target searches, as well as fragment-pattern searches, making it possible for researchers to track similar and repeating patterns in MS/MS spectra.
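
To make the fragment-pattern idea concrete, here is a minimal, hypothetical sketch of how an observed MS/MS spectrum can be scored against the theoretical fragment m/z values of candidate lipids in a target database. It is not LIQUID's actual scoring model; the target fragments, tolerance, and simple count-based score are invented for illustration.

```python
# Minimal sketch of fragment-pattern matching against a lipid target database.
# Illustrative only: these targets and the count-based score are hypothetical,
# not LIQUID's actual scoring model.

def match_fragments(observed_mz, theoretical_mz, tol_ppm=20.0):
    """Return the theoretical fragments found in the observed spectrum."""
    matched = []
    for frag in theoretical_mz:
        tol = frag * tol_ppm / 1e6           # ppm tolerance in Da
        if any(abs(obs - frag) <= tol for obs in observed_mz):
            matched.append(frag)
    return matched

# Hypothetical targets: a handful of diagnostic fragment m/z values per lipid.
targets = {
    "PC(16:0/18:1)": [184.0733, 478.3292, 522.3554],
    "PE(18:0/18:1)": [462.2979, 480.3085, 744.5538],
}

observed = [184.0735, 478.3290, 522.3550, 601.2100]  # made-up spectrum

for lipid, frags in targets.items():
    hits = match_fragments(observed, frags)
    score = len(hits) / len(frags)           # fraction of fragments matched
    print(f"{lipid}: {len(hits)}/{len(frags)} fragments matched (score {score:.2f})")
```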

Compared to other freely available software commonly used to identify lipids and other small molecules, LIQUID processes data more quickly and generates a higher number of validated lipid identifications. Its reference database includes more than 21,200 unique lipid targets across six lipid categories, 24 classes, and 63 subclasses.

LIQUID is able to confidently identify more lipid species with a faster combined processing and validation time than any other software in its field.

What's Next?

Developers of LIQUID will increase the reference library to include lipids that may be unique to particular disease states or to organisms from select environmental niches. This means researchers will be able to characterize a more diverse range of samples and therefore enhance the understanding of biological and environmental systems of interest.

More information: Jennifer E. Kyle et al., "LIQUID: an open-source software for identifying lipids in LC-MS/MS-based lipidomics data," Bioinformatics (2017). DOI: 10.1093/bioinformatics/btx046


Excerpt from:
Open-source software rapidly processes spectral data, accurately identifies and quantifies lipid species - Phys.Org

Federal Cloud Computing – TechTarget

The following is an excerpt from Federal Cloud Computing by author Matthew Metheny and published by Syngress. This section from chapter three explores open source software in the federal government.

Open source software (OSS) and cloud computing are distinctly different concepts that have independently grown in use, both in the public and private sectors, but have each faced adoption challenges by federal agencies. Both OSS and cloud computing individually offer potential benefits for federal agencies to improve their efficiency, agility, and innovation, by enabling them to be more responsive to new or changing requirements in their missions and business operations. OSS improves the way the federal government develops and also distributes software and provides an opportunity to reduce costs through the reuse of existing source code, whereas cloud computing improves the utilization of resources and enables a faster service delivery.

This chapter discusses the issues OSS has faced in the federal government and its relationship to the federal government's adoption of cloud computing technologies. It does not attempt to differentiate OSS from proprietary software; rather, it focuses on highlighting the importance of the federal government's experience with OSS in the adoption of cloud computing.

Over the years, the private sector has encouraged the federal government to consider OSS by making the case that it offers an acceptable alternative to proprietary commercial off-the-shelf (COTS) software. Despite the potential cost-saving benefits of OSS, federal agencies have historically approached it with cautious interest, and there are other potential issues in transitioning from existing proprietary software beyond cost. These include a limited in-house skillset for OSS development within the federal workforce, a lack of knowledge regarding procurement or licensing, and the misinterpretation of acquisition and security policies and guidance. Although some of these challenges and concerns have limited or slowed a broader-scale adoption of OSS, federal agencies have become more familiar with OSS and, as the marketplace of available products and services has expanded, have come to consider OSS a viable alternative to enterprise-wide COTS software. This shift toward OSS is also being driven by initiatives such as 18F and the US Digital Service, and by the publication of guidance such as the Digital Services Playbook, which urges federal agencies to "consider using open source, cloud based, and commodity solutions across the technology stack".

Interoperability, portability, and security standards have already been identified as critical barriers for cloud adoption within the federal government. OSS facilitates overcoming standards obstacles through the development and implementation of open standards. OSS communities support standards development through the "shared" development and industry implementation of open standards. In some instances, the federal government's experience with standards development has enabled the acceptance and use of open standards-based, open source technologies and platforms.

The federal government's use of OSS has its beginnings in the 1990s. During this period, OSS was used primarily within the research and scientific community, where collaboration and information sharing were cultural norms. However, it was not until 2000 that federal agencies began to seriously consider OSS as a model for accelerating innovation within the federal government. As illustrated in Fig. 3.1, the federal government has developed a body of OSS-related studies, policies, and guidelines that form the basis of the policy framework that has guided the adoption of OSS. This framework tackles critical issues that have inhibited the federal government from attaining the full benefits offered by OSS. Although gaps still exist in specific guidelines relating to the evaluation, contribution, and sharing of OSS, the policy framework serves as a foundation for guiding federal agencies in the use of OSS. In this section, we explore that framework with the objective of describing how it has led to the broader use of OSS across the federal government and, more importantly, how it has enabled the federal government's adoption of cloud computing by overcoming the acquisition and security challenges that will be discussed in detail in the next section.

The President's Information Technology Advisory Committee (PITAC), which examined OSS, was given the goal of:

The PITAC published a report concluding that the use of the open source development model (also known as the Bazaar model) was a viable strategy for producing high-quality software through a mixture of public, private, and academic partnerships. As presented in Table 3.1, the report also highlighted several advantages and challenges. Some of these key issues have been at the forefront of the federal government's adoption of OSS.

Over the years since the PITAC report, the federal government has gained significant experience in both sponsoring and contributing to OSS projects. For example, one of the most widely recognized contributions by the federal government specifically related to security is the Security Enhanced Linux (SELinux) project. The SELinux project focused on improving the Linux kernel through the development of a reference implementation of the Flask security architecture for flexible mandatory access control (MAC). In 2000, the National Security Agency (NSA) made SELinux available to the Linux community under the terms of the GNU's Not Unix (GNU) General Public License (GPL).

Starting in 2001, the MITRE Corporation published a report for the US Department of Defense (DoD) that built a business case for the DoD's use of OSS. The business case discussed both the benefits and risks of considering OSS. MITRE concluded that OSS offered significant benefits to the federal government, such as improved interoperability, increased support for open standards and quality, lower costs, and agility through reduced development time. In addition, MITRE highlighted issues and risks, recommending that any consideration of OSS be carefully reviewed.

Shortly after the MITRE report, the federal government began to establish specific policies and guidance to help clarify issues around OSS. The DoD Chief Information Officer (CIO) published the Department's first official DoD-wide memorandum to reiterate existing policy and to provide clarifying guidance on the acquisition, development, and use of OSS within the DoD community. Soon after the DoD policy, the Office of Management and Budget (OMB) issued a memorandum providing government-wide policy regarding acquisition and licensing issues.

After 2003, multiple misconceptions persisted, specifically within the DoD, regarding the use of OSS. Therefore, in 2007, the US Department of the Navy (DON) CIO released a memorandum that clarified the classification of OSS and directed the Department to identify areas where OSS could be used within the DON's IT portfolio. This was followed by another DoD-wide memorandum in 2009, which clarified the use and development of OSS, including explaining its potential advantages for the DoD in reducing the development time for new software, anticipating threats, and responding to continual changes in requirements.

In 2009, OMB released the Open Government Directive, which required federal agencies to develop and publish an Open Government Plan on their websites. The Open Government Plan described how federal agencies would improve transparency and integrate public participation and collaboration. As an example of a response to the directive's support for openness, the National Aeronautics and Space Administration (NASA), in furtherance of its Open Government Plan, released the "open.NASA" site, which was built completely using OSS, such as the LAMP stack and the WordPress content management system (CMS).

On May 23, 2012, the White House released the Digital Government Strategy, which complements other initiatives and establishes principles for transforming the federal government. More specifically, the strategy outlined the need for a "Shared Platform" approach, under which the federal government would need to leverage "sharing" of resources, such as the "use of open source technologies that enable more sharing of data and make content more accessible".

The Second Open Government Action Plan established an action to develop an OSS policy to improve federal agencies' access to custom software in order to "fuel innovation, lower costs, and benefit the public". In August 2016, the White House published the Federal Source Code Policy, which is consistent with the "Shared Platform" approach of the Digital Government Strategy in requiring federal agencies to make custom code available as OSS. Further, the policy requires agencies to make "custom-developed code available for Government-wide reuse and make their code inventories discoverable at https://www.code.gov ('Code.gov')".

In this section, we discussed key milestones that have shaped the federal government's cultural acceptance of OSS. We also discussed the current policy framework, developed through a series of policies and guidelines, that supports federal agencies in adopting OSS and in establishing processes and policies to encourage and support OSS development. The remainder of this chapter will examine the key issues that have affected OSS adoption and briefly examine the role of OSS in the adoption of cloud computing within the federal government.

About the author:

Matthew Metheny, PMP, CISSP, CAP, CISA, CSSLP, CRISC, CCSK, is an information security executive and professional with twenty years of experience in the areas of finance management, information technology, information security, risk management, compliance programs, security operations and capabilities, secure software development, security assessment and auditing, security architectures, information security policies/processes, incident response and forensics, and application security and penetration testing. He currently is the Chief Information Security Officer and Director of Cyber Security Operations at the Court Services and Offender Supervision Agency (CSOSA), and is responsible for managing CSOSA's enterprise-wide information security and risk management program, and cyber security operations.

Read the original post:
Federal Cloud Computing - TechTarget

Teradata Acquires San Diego-based Start-up StackIQ – MarTech Series (press release)

Acquisition Bolsters Teradata's Build and Delivery Capability of On-Premises and Hybrid Cloud Solutions for Its Enterprise Customers

Teradata, the leading data and analytics company, announced the acquisition of StackIQ, developers of one of the industry's fastest bare metal software provisioning platforms, which has managed the deployment of cloud and analytics software on millions of servers in data centers around the globe. The deal will leverage StackIQ's expertise in open source software and large cluster provisioning to simplify and automate the deployment of Teradata Everywhere. Offering customers the speed and flexibility to deploy Teradata solutions across hybrid cloud environments allows them to innovate quickly and build new analytical applications for their business.

In addition to technology assets, the acquisition also includes StackIQ's talented team of engineers, who will join Teradata's R&D organization to help accelerate the company's ability to automate software deployment in operations, engineering and end-user customer ecosystems.

"Teradata prides itself on building and investing in solutions that make life easier for our customers," said Oliver Ratzesberger, Executive Vice President and Chief Product Officer for Teradata. "Only the best, most innovative and applicable technology is added to our ecosystem, and StackIQ delivers with products that excel in their field. Adding StackIQ technology to IntelliFlex, IntelliBase and IntelliCloud will strengthen our capabilities and enable Teradata to redefine how systems are deployed and managed globally."

"Our incredibly high standards also apply to the people we hire," continued Ratzesberger. "As Teradata continues to expand its engineering (R&D) skills to drive ongoing technology innovation, we are seeking qualified, talented individuals to join our team. Once again, StackIQ has set the bar with stellar engineers who we are honored to now call Teradata employees."

Under terms of the deal, Teradata will now own StackIQ's unique IP that automates and accelerates software deployment across large clusters of servers (both physical and virtual/in the cloud). This increase in automation will occur across all Teradata Everywhere deployments, dramatically reducing build and delivery times for complex business analytics solutions and adding the capability to manage software-only appliances across hybrid cloud infrastructure. The speed of Teradata's new integrated solution also allows for rapid re-provisioning of internal test or benchmarking hardware, as well as swift redeployment between technologies to match a customer's changing workload requirements.

"Joining Teradata, the market leader in analytic data solutions, truly validates the importance of StackIQ's engineering and the talent we have cultivated over the years," said Tim McIntire, Co-Founder at StackIQ. "We are looking forward to bringing a bit of San Diego's start-up culture to Teradata, and working together to simplify Teradata's customer experience for system software deployment and upgrades."

Read this article:
Teradata Acquires San Diego-based Start-up StackIQ - MarTech Series (press release)

The need for open source security in medical devices – ITProPortal

Wireless and wearable technologies have brought about dramatic improvements in healthcare, allowing patients mobility while providing healthcare professionals with easier access to patient data. Many medical devices that were once tethered to patients, positioned next to hospital beds, or at a fixed location, are now transportable. Evolving from the traditional finger-prick method of glucose monitoring, wearable devices equipped with sensors and wireless connectivity now assist with monitoring blood sugar levels, connect with health-care providers, and even deliver medication. Critical life-sustaining devices, such as pacemakers, can be checked by doctors using wireless technology and reduce the time a patient needs to spend at the hospital while allowing the doctor to react more rapidly to patient problems.

A major driver of the technological revolution in medical devices is software, and that software is built on a core of open source. Black Duck's 2017 Open Source Security and Risk Analysis (OSSRA) research found that the average commercial application included almost 150 discrete open source components, and that 67 per cent of the over 1,000 commercial applications scanned included vulnerable open source components. The analysis made evident that the use of open source components in commercial applications is pervasive across every industry vertical, including the healthcare industry.

The arguments for using open source are straightforward: open source lowers development costs, speeds time to market, and accelerates innovation. When it comes to software, every manufacturer wants to spend less time on what are becoming commodities, such as the core operating system and connectivity, and focus on features that will differentiate their brand. The open source model supports that objective by expediting every aspect of agile product development.

But visibility and control over open source are essential to maintain the security and code quality of medical device software and platforms.

Over two million patients in the United States have implanted devices, including pacemakers and implantable cardioverter-defibrillators. More than seven million patients now benefit from remote monitoring and the use of connected medical devices as an integral part of their care routines.

While the software used in the vast majority of medical devices is closed and proprietary to prevent commercial rivals from copying each other's code, that software usually contains a wealth of open source components. The OSSRA study I cited earlier found open source in 46 per cent of the commercial applications associated with the healthcare, health tech, and life sciences sector.

Researchers Billy Rios and Jonathan Butts recently acquired hardware and supporting software for four different brands of pacemakers and looked for weaknesses in architecture and execution. One of the biggest issues noted in the paper they published was one Black Duck sees time and again: unpatched software libraries.

All four pacemakers examined contained open source components with vulnerabilities, and roughly 50 per cent of all components included vulnerabilities. Most shockingly, the pacemakers had an average of 50 vulnerabilities per vulnerable component and over 2,000 vulnerabilities per vendor.

When patient safety is a function of software, the issue of software security becomes paramount, particularly when it comes to medical devices. But secure software is an ephemeral concept: what we think of as secure today can change overnight as new vulnerabilities are discovered and disclosed. As code ages, it becomes increasingly likely that more of its vulnerabilities will have been disclosed. An average of 3,600 new open source vulnerabilities are discovered every year (though still far fewer than are reported in commercial code).

Open source is neither more nor less secure than custom code. However, there are certain characteristics of open source that make vulnerabilities in popular components very attractive targets for hackers. The return on investment for an open source vulnerability is high. A single exploit can be used to compromise hundreds or thousands of applications using that vulnerable component.

Whether open source or proprietary code, most known vulnerabilities like Heartbleed, and the SMB vulnerability exploited in the WannaCry ransomware attacks, have patches available on the date of their public disclosure. But, despite the availability of patches, an alarming number of both companies and individuals simply do not apply them. Months after Microsoft issued its security patch, thousands of computers remain vulnerable to the WannaCry exploit for a variety of reasons, ranging from the use of bootleg software to simple neglect.

Patches often aren't applied because of concerns that the patch might break a currently working system. Each time a patch is introduced, the change can affect a system's reliability and functionality. Healthcare organisations, for example, will often put functionality and uptime at a higher priority than security, and in doing so expose themselves to attack through unpatched and vulnerable applications.

In other cases, it's a lack of insight: organisations are simply unaware of a critical vulnerability or its patch until they're under attack. While software vendors like Microsoft can push updates and fixes out to users, open source has a "pull" support model. Unlike most proprietary software, users of open source are responsible for keeping track of vulnerabilities as well as fixes and updates for the open source they use, rather than having those fixes pushed out to them. Unless a vendor is aware that a vulnerable open source component is included in its application(s), it's highly probable that that component will remain unpatched.

Rios and Butts' paper didn't state whether the researchers checked for software/firmware updates from the vendors prior to analysis. My assumption is that they did not, but whether this would have made a real-world difference is arguable; Black Duck's own research indicates that vendors are typically unaware of all of the open source they use, since it can enter the code base in so many ways. On average, prior to having a Black Duck code scan, our customers were aware of less than half of the third-party libraries they use.

To be clear, the problem isn't the use of open source. It's the fact that open source is often invisible to those using it. Vulnerabilities in open source may open up users to targeted or non-targeted attacks. Depending on the software (home monitoring, physician, programmer, etc.), the attack could affect a single patient or an entire practice. When the WannaCry ransomware spread across the world, multiple U.K. hospitals reported that their radiology departments were completely knocked out by the outbreak.

If the attack is on implantable medical devices, this could become a life or death decision.

Unless those in the medical device software supply chain carefully track the open source they use, and map that open source to the thousands of vulnerabilities disclosed every year, they will be unable to protect their applications, and their customers, from those vulnerabilities.

To make progress in defending against open source security threats and compliance risks, both medical device manufacturers and their suppliers must adopt open source management practices that:

Fully inventory open source software: Organisations cannot defend against threats they do not know exist. A full and accurate inventory (bill of materials) of the open source used in their applications is essential.

Map open source to known security vulnerabilities: Public sources, such as the National Vulnerability Database, provide information on publicly disclosed vulnerabilities in open source software. Organisations need to reference these sources to identify which of the open source components they use are vulnerable (a minimal sketch of this lookup follows this list).

Identify license and code quality risks: Failure to comply with open source licenses can put organisations at significant risk of litigation and compromise of IP. Likewise, use of out-of-date or poor quality components degrades the quality of applications that use them. These risks also need to be tracked and managed.

Enforce open source risk policies: Many organisations lack even basic documentation and enforcement of open source policies that would help them mitigate risks. Manual policy reviews are a minimum requirement, but as software development becomes more automated so too must management of open source policies.

Alert on new security threats: With more than 3,600 new open source vulnerabilities discovered every year, the job of tracking and monitoring vulnerabilities does not end when applications leave development. Organisations need to continuously monitor for new threats as long as their applications remain in service.
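
As a rough illustration of the second practice above (mapping an inventory to known vulnerabilities), the sketch below looks up each component of a small, hypothetical bill of materials against the public National Vulnerability Database. It assumes the NVD 2.0 REST endpoint and its keywordSearch parameter; a production tool would match precise CPE identifiers and version ranges rather than keywords.

```python
# Minimal sketch: map a small open source inventory to NVD entries.
# Assumes the public NVD 2.0 REST API and its keywordSearch parameter;
# a real implementation would match exact CPE names and handle paging,
# rate limits, and version ranges.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

# Hypothetical bill of materials (component name, version).
inventory = [
    ("openssl", "1.0.1f"),
    ("log4j", "2.14.1"),
]

def lookup_cves(component, max_results=5):
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": component, "resultsPerPage": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

for name, version in inventory:
    cve_ids = lookup_cves(name)
    print(f"{name} {version}: possible CVEs -> {', '.join(cve_ids) or 'none found'}")
```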

As open source use continues to increase, effective management of open source security and license compliance risk is becoming increasingly important. By integrating risk management processes and automated solutions into their product lifecycle, medical device manufacturers can maximise the benefits of open source use while effectively managing its risks.

Mike Pittenger, Vice President of Security Strategy, Black Duck Software

Image Credit: Photo_Concepts / iStock

Original post:
The need for open source security in medical devices - ITProPortal

Monthly quiz: Test yourself on open source development tools trends – TechTarget

The move to open source development tools -- already unstoppable -- continues to gain momentum. Years ago, open source was looked upon as a way to save money. Today, a key driver is the clear fact that, with tens of thousands of contributors sharing their expertise and the ever-widening availability of high-quality code, resistance is futile.

In this expert handbook, we explore the issues and trends in cloud development and provide tips on how developers can pick the right platform.

One gauge for measuring the growth of open source is how quickly container technology has been adopted. According to a January 2017 report from 451 Research, the global application container segment -- just one piece of the overall tools market -- reached $762 million in 2016 and is forecast to reach $2.7 billion in 2020. That's an impressive 40% compound rate over four years.
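
For readers who want to sanity-check that growth figure, the standard compound-growth formula applied to the rounded numbers quoted here ($762 million in 2016 to $2.7 billion in 2020) gives roughly 37 percent per year; the 40% figure presumably reflects 451 Research's own, more precise inputs. A quick sketch:

```python
# Quick check of the compound annual growth rate (CAGR) using the rounded
# figures quoted in the article; 451 Research's own inputs may differ.
start, end, years = 0.762, 2.7, 4          # $bn in 2016, $bn in 2020, span
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR = {cagr:.1%}")                 # about 37%
```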

Yet, not all is well. A problem with open source components is that they are, well, open. They could come from anywhere, from anyone. How do they rate in terms of performance and security? It's the big unknown. In its 2017 open source security and risk analysis report, Black Duck Software noted its own audit found that open source components were present in 96% of applications it examined, with apps incorporating 147 unique components on average. And consider this scary finding: The financial services and financial technology sector had the highest number of vulnerabilities per application at 52. Fully 60% of those apps harbored high-risk vulnerabilities.

How well do you know open source development tools and components trends? Take this brief quiz, and see how well you measure up.

Joel Shore is news writer for TechTarget's Business Applications and Architecture Media Group. Write to him at jshore@techtarget.com or follow @JshoreTT on Twitter.


Follow this link:
Monthly quiz: Test yourself on open source development tools trends - TechTarget

SKT Develops Hacking-Proof Core Chip for Quantum Cryptography – BusinessKorea


Key equipment was developed for the popularization of quantum cryptography, which is known to be impossible to hack. SK Telecom announced on July 23 that it developed a prototype chip for generating ultra-small quantum random numbers. The product, which was ...

More here:
SKT Develops Hacking-Proof Core Chip for Quantum Cryptography - BusinessKorea

When thieves strike, cryptocurrency investors tremble – CBS News

Cryptocurrency Ethereum has emerged from the shadow of its better-known rival Bitcoin thanks to its skyrocketing price -- but that price has also made it a tempting target for hackers.

Thieves earlier this month stole $10 million from an electronic wallet provided by Coindash, a company that specializes in the kind of blockchain technology used in digital currencies. Another $32 million recently went missing after hackers exploited a vulnerability in an e-wallet from startup Parity.

The price of Ethereum slumped following news of the heists, tumbling more than 15 percent from $258.52 on July 18 to $218.82 on Friday, according to CoinMarketCap.

Coindash, which was using a so-called initial coin offering to raise funds, plans to compensate victims of the hack. To help stabilize the price of Ethereum, it will also offer bonuses to anyone who holds it for at least six months. According to Parity, three accounts were compromised in the attack, and the thief is attempting to launder the money through exchanges.

"If anything, it makes people more aware of the pitfalls of coding," said Luis Cuende, CEO of Aragon, an Ethereum-based corporate management tool, adding that the underlying code that powers the cryptocurrency wasn't affected by the attack.

The concept behind Ethereum was initially described by computer programmer Vitalik Buterin in 2013 based on his research on Bitcoin. A year later he joined forces with another programmer to create Ethereum, now the second-most popular cryptocurrency after Bitcoin.

New investors in Ethereum may not be aware of the risks of losing their funds to hackers, said Simon Yu, CEO of CakeCodes, which offers cryptocurrency rewards to computer game players. He said accounts should be secured with private keys whose combinations are known only to the account holders.

Cryptocurrencies have long been dogged by concerns about their security, particularly after the collapse of Bitcoin exchange Mt. Gox in 2014. The company's former CEO, Mark Karpeles, is currently on trial in Japan, where the corporation was based, on embezzlement and data manipulation charges. Karpeles has blamed the company's collapse on hackers.

South Korea's largest Ethereum and Bitcoin exchange was breached in late June in a theft estimated at 1.2 billion won ($1.07 million). A Pennsylvania man also recently confessed to stealing $40 million worth of Bitcoin.

Despite the risks, investors continue to have faith in digital currencies even as their prices fluctuate wildly. Ethereum, which started the year valued at $8.17, has in a matter of months soared 2,600 percent. Over the same period, Bitcoin prices have surged from $1,027 to $2,638, a gain of more than 150 percent.

The S&P 500, the stock market index most closely tracked by professional money managers, has this year posted a gain of 10.3 percent.

© 2017 CBS Interactive Inc. All Rights Reserved.

Continue reading here:
When thieves strike, cryptocurrency investors tremble - CBS News

Decentralisation mooted for African cryptocurrency – IT-Online

While some sceptics may have misgivings about the technology, cryptocurrency and blockchain have disrupted financial services and will probably be around for a lot longer. This is the view of Heinrich Springhorn, business analyst at MobileData, who says: "There is some instability due to a hearing in Japan regarding a bitcoin exchange that was shut down due to suspected embezzlement. However, this does not take away from the potential of what cryptocurrency, and essentially the blockchain, can mean to transacting worldwide."

This realisation can make a real difference for operations in Africa, he says. MobileData's standpoint is that to apply this methodology in Africa and transact more freely, companies must be willing to participate in a decentralised model of transacting.

"One of the biggest set-backs at the moment is that there are only a small number of stock and service providers worldwide that accept cryptocurrency such as Bitcoin, and it is still far away from becoming mainstream," Springhorn says. If a cryptocurrency should become mainstream, the potential exists that it could cause instability in financial enterprises. The reason for this is that banking institutions would lose their locus of control over currencies, and consumers would transact outside of their control.

The decentralised nature of cryptocurrency means the reality facing markets is that there is no intermediary with the power to limit any fraud or embezzlement. "This means there is no way for the assets to be seized in these cases," Springhorn explains.

The company's assessment of the market is that for widespread adoption of this model to occur in Africa, a mechanism will be needed for on-the-fly exchange of the cryptocurrency into an equivalent value of fiat money. This applies where services and stock providers do not accept cryptocurrencies as payment. If the service and stock providers do accept cryptocurrency as payment, then the transaction engine used will write an entry into the decentralised ledger and the transaction will go through the blockchain.

"In addition, there are socio-economic concerns with regards to cryptocurrency, as many end-users do not have access to the technology needed to transact with cryptocurrency," Springhorn adds.
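
The step Springhorn describes, where the transaction engine "will write an entry into the decentralised ledger", can be pictured with a toy example in which each new entry commits to the hash of the previous one, so past entries cannot be altered without breaking the chain. This is a conceptual sketch only, not any particular transaction engine or cryptocurrency.

```python
# Toy illustration of appending entries to a hash-linked ledger: each entry
# commits to the hash of the previous one, so tampering with history breaks
# the chain. Purely conceptual; real blockchains add consensus, signatures,
# and distribution across many nodes.
import hashlib
import json

def entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

ledger = [{"prev": None, "payload": "genesis"}]

def append_entry(payload):
    new = {"prev": entry_hash(ledger[-1]), "payload": payload}
    ledger.append(new)

append_entry({"from": "alice", "to": "merchant", "amount": 0.5})  # made-up transaction
append_entry({"from": "bob", "to": "merchant", "amount": 1.2})

for e in ledger:
    print(entry_hash(e)[:16], e["payload"])
```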

Link:
Decentralisation mooted for African cryptocurrency - IT-Online

New Zealand Reserve Bank Rejects Need for Expansive … – Bitcoin News (press release)

In recent statements addressing contemporary cyber threats, including those pertaining to cryptocurrency cyber crime such as ransomware, the New Zealand Reserve Bank has rejected calls for enhanced and intrusive regulations.

Also Read: New Zealand Exchange Bitnz Shuts Down Due to Banking Hostility

The New Zealand Reserve Bank has rejected calls for enhanced regulations designed to target contemporary cyber threats, including ransomware and other challenges associated with virtual currencies.

In a speech published on the New Zealand Reserve Bank's website, Reserve Bank representative Toby Fiennes articulates the bank's position on contemporary cyber threats: "The dynamic cyber environment means that organisations have to be nimble in their approach to cyber security, focused on outcomes, rather than prescriptive compliance exercises."

The speech indicates recognition that the challenges posed by cryptocurrency will be dynamic, and that the threats posed by online crime cannot simply be regulated out of existence: "The nature and incidence of cyber risk is unique, meaning that typical approaches to risk management and disaster recovery planning may not be appropriate. While cyber vulnerabilities can be mitigated, the potential sources of cyber threats and the attack footprint are just too broad, so they can never be eliminated."

Whilst recognizing the short-term disruptive potential of contemporary fintech innovations like bitcoin, the New Zealand Reserve Bank also believes that these new technologies are likely to bring benefits to the financial system in the long term. The bank recommends against heavy-handed, prescriptive regulations for cryptocurrency, suggesting that legal guidelines for virtual currencies should be flexible, adaptive, and not restrict innovation: "Looking forward, the Reserve Bank and other regulators will need to make sure the regulatory regime in New Zealand is adaptive should any new business models become systemic, while not unduly harming innovation."

The central bank also revealed that it is working in partnership with other government agencies including the Ministry of Business, Innovation, and Employment, and the Financial Markets Authority to ensure that New Zealand cultivates a regulatory climate that will encourage financial innovation within the digital sphere.

Do you agree with the Reserve Bank of New Zealand's opinion that regulations designed to protect against cyber crime could harm innovation in new financial industries? Share your thoughts in the comments section below!

Images courtesy of Shutterstock


See the original post:
New Zealand Reserve Bank Rejects Need for Expansive ... - Bitcoin News (press release)

ECB President: Cryptocurrency Price Boom Having Limited Effect on Economy – CoinDesk

The president of the European Central Bank (ECB) has issued remarks addressing the rising interest in cryptocurrencies as an asset class.

In a letter to members of the European Parliament this week, Mario Draghi built on statements made during a May hearing, in which he first discussed financial innovation, including the "rapid pace of development" in distributed ledger technology (DLT) and related technologies. At the time, he cautioned that care must be taken so that fintech, including blockchain and DLT, does not disrupt the financial system.

Published this week, the new letter builds on this commentary, addressing more directly the rise in cryptocurrency prices so far in 2017. Driven by big gains in bitcoin and ether, the value of the total supply of all cryptocurrencies is now $93bn, down from an all-time high of $115bn earlier this year.

Still, in the face of this increase, Draghi used the opportunity to restate his belief that cryptocurrencies still have a limited impact on the financial system.

Draghi wrote:

"Although the market capitalisation of [virtual currency schemes] has increased since the publication of these reports, there is no evidence to suggest that the connection of VCS to the real economy has strengthened significantly."

Citing past research from the ECB, Draghi indicated he still believes there could be a "build-up of risks" due to the use of cryptocurrencies, which may necessitate an international regulatory response.

Still, for now, he said the ECB would likely take steps to continue to monitor the ecosystem, tracking the "number, structure and scope" of public blockchain tokens.

"An increase in the usage of [virtual currency schemes] is conceivable. It is thus important to monitor the take-up of VCS from a financial stability perspective," he said.

For more on how the ECB is approaching blockchain and cryptocurrencies, read our most recent interview.

ECB DLT Lead: Central Banks Won't Compete on Blockchain Tech

Mario Draghi image via Shutterstock

The leader in blockchain news, CoinDesk is an independent media outlet that strives for the highest journalistic standards and abides by a strict set of editorial policies. Have breaking news or a story tip to send to our journalists? Contact us at [emailprotected].

Continue reading here:
ECB President: Cryptocurrency Price Boom Having Limited Effect on Economy - CoinDesk