Quantum Computing Market Size by Top Companies, Regions, Types and Application, End Users and Forecast to 2027 – Bulletin Line

New Jersey, United States - Verified Market Research has recently published an extensive report on the Quantum Computing Market to its ever-expanding research database. The report provides an in-depth analysis of the market size, growth, and share of the Quantum Computing Market and the leading companies associated with it. The report also discusses technologies, product developments, key trends, market drivers and restraints, challenges, and opportunities. It provides an accurate forecast until 2027. The research report is examined and validated by industry professionals and experts.

The report also explores the impact of the COVID-19 pandemic on the segments of the Quantum Computing market and its global scenario. The report analyzes the changing dynamics of the market owing to the pandemic and subsequent regulatory policies and social restrictions. The report also analyses the present and future impact of the pandemic and provides an insight into the post-COVID-19 scenario of the market.

Quantum Computing Market was valued at USD 193.68 million in 2019 and is projected to reach USD 1379.67 million by 2027, growing at a CAGR of 30.02% from 2020 to 2027.

The report further studies potential alliances such as mergers, acquisitions, joint ventures, product launches, collaborations, and partnerships of the key players and new entrants. The report also studies any development in products, R&D advancements, manufacturing updates, and product research undertaken by the companies.

Leading Key players of Quantum Computing Market are:

Competitive Landscape of the Quantum Computing Market:

The market for the Quantum Computing industry is extremely competitive, with several major players and small scale industries. Adoption of advanced technology and development in production are expected to play a vital role in the growth of the industry. The report also covers their mergers and acquisitions, collaborations, joint ventures, partnerships, product launches, and agreements undertaken in order to gain a substantial market size and a global position.

Quantum Computing Market, By Offering

Consulting Solutions, Systems

Quantum Computing Market, By Application

Machine Learning, Optimization, Material Simulation

Quantum Computing Market, By End-User

Automotive, Healthcare, Space and Defense, Banking and Finance, Others

Regional Analysis of Quantum Computing Market:

A brief overview of the regional landscape:

From a geographical perspective, the Quantum Computing Market is partitioned into

North America: U.S., Canada, Mexico
Europe: Germany, UK, France, Rest of Europe
Asia Pacific: China, Japan, India, Rest of Asia Pacific
Rest of the World

Key coverage of the report:

Other important inclusions in Quantum Computing Market:

About us:

Verified Market Research is a leading global research and consulting firm servicing over 5,000 customers. Verified Market Research provides advanced analytical research solutions while offering information-enriched research studies. We offer insight into strategic and growth analyses, the data necessary to achieve corporate goals, and critical revenue decisions.

Our 250 analysts and SMEs offer a high level of expertise in data collection and governance, and use industrial techniques to collect and analyze data on more than 15,000 high-impact and niche markets. Our analysts are trained to combine modern data collection techniques, superior research methodology, expertise, and years of collective experience to produce informative and accurate research.

Contact us:

Mr. Edwyne Fernandes

US: +1 (650)-781-4080
UK: +44 (203)-411-9686
APAC: +91 (902)-863-5784
US Toll-Free: +1 (800)-7821768

Email: [emailprotected]

View original post here:
Quantum Computing Market Size by Top Companies, Regions, Types and Application, End Users and Forecast to 2027 - Bulletin Line

Revenues from Quantum Key Distribution to Reach Almost $850 Million by 2025 – Quantaneo, the Quantum Computing Source

More details on the report, Quantum Key Distribution: The Next Generation A Ten-year Forecast and Revenue Assessment: 2020 to 2029 can be found at: https://www.insidequantumtechnology.com/product/quantum-key-distribution-the-next-generation-a-ten-year-forecast-and-revenue-assessment-2020-to-2029/

About the Report

Inside Quantum Technology has covered the Quantum Key Distribution (QKD) market since 2014. We were the first industry analysis firm to predict that quantum security in mobile phones would become a significant revenue earner in the short term. This report has been compiled from interviews with key players in the industry as well as with the assistance of government intelligence experts.

There have been some big developments in the QKD space since our previous report. The ITU-T standardization is near complete, both the US and UK governments have announced funding for large-scale quantum networks with QKD as a central component, and the QuantumCTek IPO may be the beginning of new public companies in this space.

This report contains ten-year forecasts of QKD for each of the major application sectors, including national and civil government, the financial sector, telecommunications, data centers, utilities, infrastructure, mobile communications and possible consumer markets. There are also forecasts broken out by end-user country and transmission type (satellite, fiber optic and free space). In addition, the report contains strategic profiles of the A-list of QKD companies, including ABB, Cambridge Quantum Computing, ID Quantique, KETS Quantum, MagiQ Technologies, Nokia, QuantumCTek, QuantumXChange, Qubitekk, Quintessence Labs, SK Telecom and Toshiba.

From the Report:

QKD for the data center, with us soon: Adoption of QKD for conventional business communications is a small opportunity right now and we don't expect a real take-off until 2025. By 2029 we expect the market for data center QKD to reach about $180 million. Early opportunities in data center QKD will be found in private firms that do business with governments, where QKD may actually be mandated someday. Another early target market is R&D centers, where so much high-tech data is vulnerable to theft.

QKD and the rise of China: SmarTech's estimates are that China currently accounts for about 36 percent of worldwide QKD revenues and will account for $1,329.0 million of QKD revenues by 2029. The growing political and military rivalry between China and the US is a key driver for QKD deployment in both countries, especially as China is recording significant steps forward in this area. For example, Chinese researchers have successfully performed full QKD between two ground stations located 1,200 km from each other with the aid of a satellite, and the recent QuantumCTek IPO underlines this momentum.

QKD and PQC: Together at last: Until quite recently, Post-Quantum Cryptography (PQC) was marketed as a rival to QKD. However, the two now appear synergistic; so much so that some firms are offering both. What the IT world is coming to recognize is that PQC itself could ultimately succumb to the quantum threat if powerful enough quantum computers are built, so PQC may ultimately need QKD to survive. Note that Inside Quantum Technology has also recently published an analyst report on PQC markets [https://www.insidequantumtechnology.com/product/post-quantum-cryptography-pqc-a-revenue-assessment/].

Read more here:
Revenues from Quantum Key Distribution to Reach Almost $850 Million by 2025 - Quantaneo, the Quantum Computing Source

Global Quantum Key Distribution Market 2020-2029: Ten-year Forecasts and Revenue Assessments – PRNewswire

DUBLIN, Aug. 12, 2020 /PRNewswire/ -- The "Quantum Key Distribution: The Next Generation - A Ten-year Forecast and Revenue Assessment: 2020 to 2029" report has been added to ResearchAndMarkets.com's offering.

This report provides forecasts and analysis for key QKD industry developments. The author was the first industry analysis firm to predict that quantum security in mobile phones would become a significant revenue earner in the short-term. Phones using QRNGs were announced earlier this year and this report discusses how the mobile QRNG market will evolve.

There have been some big developments in the QKD space. In particular, the regulatory and financial framework for the development of a vibrant QKD business has matured. On the standardization and funding front, the ITU-T standardization is near complete while both the US and UK governments have announced major funding for large-scale quantum networks with QKD as a central component. And the QuantumCtek IPO may just be the beginning of the new public companies in this space.

The report contains forecasts of the hardware and service revenues from QKD in all the major end-user groups. It also profiles all the leading suppliers of QKD boxes and services. These profiles are designed to provide the reader of this report with an understanding of how the major players are creating QKD products and building marketing strategies for QKD as quantum computers become more ubiquitous.

Key Topics Covered:

Executive Summary
E.1 Key Developments Since our Last Report
E.2 Specific Signs that the Market for QKD is Growing
E.3 Evolution of QKD Technology and its Impact on the Market
E.3.1 Reach (Transmission Distance)
E.3.2 Speed (Key Exchange Rate)
E.3.3 Cost (Equipment)
E.4 Summary of Ten-year Forecasts of QKD Markets
E.4.1 Forecasts by End-user Segment
E.5 Five Firms to Watch Closely in the QKD Space

Chapter One: Introduction
1.1 Why QKD is a Growing Market Opportunity
1.2 Overview of QKD Technological Challenges
1.3 Goals and Scope of this Report
1.4 Methodology of this Report
1.5 Plan of this Report

Chapter Two: Technological Assessment
2.1 Setting the Scene: QKD in Cryptography-land
2.2 Why QKD: What Exactly does QKD Bring to the Cryptography Table?
2.3 PQC's Love-Hate Relationship with QKD
2.4 QKD's Technological Challenges
2.5 QKD Transmission Infrastructure
2.6 Chip-based QKD
2.7 QKD Standardization: Together we are Stronger
2.8 Key Takeaways from this Chapter

Chapter Three: QKD Markets - Established and Emerging
3.1 QKD Markets: A Quantum Opportunity Being Driven by Quantum Threats
3.2 Government and Military Markets - Where it all Began
3.3 Civilian Markets for QKD
3.4 Key Points from this Chapter

Chapter Four: Ten-year Forecasts of QKD Markets
4.1 Forecasting Methodology
4.2 Changes in Forecast Since Our Last Report
4.2.1 The Impact of COVID-19
4.2.2 Reduction in Satellite Penetration
4.2.3 Faster Reduction in Pricing
4.2.4 Bigger Role for China?
4.2 Forecast by End-User Type
4.3 Forecast by Type of QKD Infrastructure: Terrestrial or Satellite
4.4 Forecast of Key QKD-related Equipment and Components
4.5 Forecast by Geography/Location of End Users

Chapter Five: Profiles of QKD Companies
5.1 Approach to Profiling
5.2 ABB (Switzerland/Sweden)
5.3 Cambridge Quantum Computing (United Kingdom)
5.4 ID Quantique (Switzerland)
5.5 KETS Quantum Security (United Kingdom)
5.6 MagiQ Technologies (United States)
5.7 Nokia (Finland)
5.8 QuantumCtek (China)
5.9 Quantum Xchange (United States)
5.10 Qubitekk (United States)
5.11 QuintessenceLabs (Australia)
5.12 SK Telecom (Korea)
5.13 Toshiba (Japan)

For more information about this report visit https://www.researchandmarkets.com/r/lajrvk

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907
Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Excerpt from:
Global Quantum Key Distribution Market 2020-2029: Ten-year Forecasts and Revenue Assessments - PRNewswire

Digitalisation in Reliance Jio times: IoT, mobile internet to be key drivers of $5 trillion economy dream – Financial Express

By Shailesh Haribhakti and Arumugam Govindasamy

With a vertically integrated offering traversing digital infrastructure, 5G, customer interfaces, devices and technology, the stage is set for India to exploit the full power of digitalisation.

The coming quantum computing world will bring the Internet of Things (IoT) alive in India. IoT fuels a world of connected devices to make our lives easier: smart cities, fleet tracking, temperature monitoring and the digital transformation of agriculture. IoT has the potential to disrupt both business and policy. For instance, Amazon is using connected robots to locate products on its warehouse shelves and bring them to workers, saving time and money. Similarly, the medical field is being transformed by the use of connected devices to monitor the real-time health of patients.

During the current pandemic, IoT is a need-to-have. People and businesses are relying on IoT products such as remote connected health monitoring solutions, packaging and shipping trackers, and streaming devices, the devices that are enabling remote work, telehealth, and distance learning. It also means that a tremendous amount of data is being transmitted, received, stored, and analyzed at the edge, or on devices. IoT devices are making it possible to eliminate dense gatherings of workers to avoid virus transmission.

Interwoven with the rise of IoT is an even stronger demand for processing and storage. As the pandemic built up, the need for latency-free data transmission became clear. For example, business video conferencing requires low-latency immersive HD video. The pipes simply aren't large enough to do that with acceptable performance. At the same time, very little of the data you send and receive is worth storing, and most of it only has value for a small period of time. Of course, some of that data is actually tremendously valuable, but it might require added AI software to extract it, and of course that's immensely compute-intensive.

With travel restrictions due to COVID-19, the use of virtual meetings is on the rise, as multinational companies developing new technologies have the required expertise in different locations. This makes the move towards AR/VR and virtual meetings very real.

The role of data infrastructure is important to ensure that mission-critical data can be transmitted, received, stored, and analyzed where it's needed and when. Most important is a boost in connectivity. That kind of internet connectivity and speed is becoming increasingly available, and availability has begun to accelerate as demand continues to increase. Then there's 5G.

For several years, 5G has been getting a lot of hype because users want the ability to connect anywhere and share large data files and videos, and 5G seemed set to deliver.

The pandemic has shed light on ways that 5G, were it fully deployed globally, could help home-based workers and/or workers still onsite who are focused on mission-critical manufacturing and other work. 5G is a key driving force in helping IoT move forward, enabling more reliable autonomous manufacturing processes via new standards for ultra-low latency in factories. The processing power required for 5G is tremendous, and along with that comes the requirement for data storage.

Every crisis leaves a long-lasting legacy in terms of faster innovation and a new normal. COVID-19 will accelerate the move to digital and to companies adopting IoT, AI/ML and 5G amongst other converging technologies to drive digital transformation.

According to findings by the McKinsey Global Institute, IoT combined with the mobile internet will have a substantial global economic impact of up to $20 trillion by 2025 and will be the key economic driver among disruptive technologies. For India to surpass the projected $5 trillion economy, the Digital India initiative, with its focus on IoT and the mobile internet, will be the key force. With the economy frozen by the pandemic, IoT will need to be empowered. Technological disruption based on IoT and the mobile internet will create innovation and entrepreneurship simultaneously.

IoT requires localized innovation, which requires Indian government support in terms of providing appropriate funding, mobilizing the private and public sectors for innovation, and a push for using homegrown technologies for better security, privacy and sustainability. All this and more will need to be catalysed by retraining, unlearning and relearning and a new pace of innovation. India's trained manpower alone is capable of delivering this growth. From a negative 9.5% to a positive 5% is a journey that only converging exponential technologies can deliver.

See the original post here:
Digitalisation in Reliance Jio times: IoT, mobile internet to be key drivers of $5 trillion economy dream - Financial Express

More reasons SA is making itself a basket case in manufacturing – @AuManufacturing

Comment by Peter Roberts

I have previously argued that the South Australian government, once the centre of the sector, has given up on manufacturing.

It has no manufacturing minister, no manufacturing section in any department, no bureaucrat with that responsibility and no focus on the sector other than a few sexy areas and Canberra's pet growth industries of defence and space.

At the time I had a back and forth with the Premier, Stephen Marshall, which convinced me that he didn't see the problem in his government dropping an Industry and Skills department in favour of a renamed Innovation and Skills one.

He didn't get that no specific focus on manufacturing meant, well, no focus on manufacturing.

Now the state's 300 companies in the electronics sector have been excluded from the SA State Growth Plan, a 10-year plan to be launched later this year.

The plan contains all the trendy and sexy buzzwords: Artificial Intelligence, Machine Learning, Data Analytics, Blockchain, Computer Vision, Virtual Reality, Cyber Security, Internet of Things, Quantum Computing and Photonics.

But none of this is building on what the state has got in spades, and that is electronics.

This is a sector of 300 companies that employs 11,000 staff with $4 billion annual revenue and productivity at $343,600 per person, according to Electronics Industry Development Adelaide.

This is about three times that of all other SA manufacturing at $113,600.

Not only do we have an electronics sector in Adelaide, we are also building close to $100 billion worth of high tech defence equipment such as frigates and submarines in new facilities at Port Adelaide (pictured), which one might expect could include some electronics equipment.

Adding insult to injury, my article also bemoaned SA dumping its bid for the 2026 Commonwealth Games because it was too expensive at $4 billion.

Since then a big group led by former federal minister Christopher Pyne and Olympians Anna Mears and Kyle Chalmers has been agitating for Marshall to show some vision and embrace the Games.

They presented updated figures to the government which showed it could be done for $1.1 billion.

The government has rejected their move; better, perhaps, to chase mirages of a quantum computing industry on North Terrace.

(Readers might excuse my mentioning Adelaide again, but I recently relocated to my home state.)

Picture: Australian Naval Infrastructure / Hunter class frigate construction hall, Osborne


Continue reading here:
More reasons SA is making itself a basket case in manufacturing - @AuManufacturing

news digest: Microsoft launches open source website, TensorFlow Recorder released, and Stackery brings serverless to the Jamstack – SD Times -…

Microsoft launched a new open source site, which aims to help people get involved, explore projects and join the ecosystem.

The site also offers a near real-time view of things that are happening across Microsoft's projects on GitHub.

In addition, the site highlights Microsoft's open-source projects such as Accessibility Insights, PowerToys and Windows Terminal.

More information is available here.

TensorFlow Recorder released

Google announced that it open sourced the TensorFlow Recorder last week to make it possible for data scientists, engineers, or AI/ML engineers to create image-based TFRecords with just a few lines of code.

Before TFRecorder, users would have had to write a data pipeline that parsed their structured data, loaded images from storage, and serialized the results into the TFRecord format. Now, TFRecorder allows users to write TFRecords directly from a Pandas dataframe or CSV without writing any complicated code, according to Google in a post.
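
Based on that description, a minimal usage sketch might look like the following. The CSV path, column names, and output bucket are placeholders, and the DataFrame-accessor call reflects Google's announcement rather than a verified, current API, so check the tfrecorder documentation before relying on it.

```python
# Illustrative sketch only: file paths and the output bucket are placeholders.
import pandas as pd
import tfrecorder  # registers the .tensorflow accessor on DataFrames

# Structured data describing the images, e.g. columns: split, image_uri, label
df = pd.read_csv('data/train_labels.csv')

# One call replaces the hand-written pipeline that parsed rows, loaded images
# from storage, and serialized the results into TFRecord files.
df.tensorflow.to_tfr(output_dir='gs://my-bucket/tfrecords')
```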

Data loading performance can be further improved by implementing prefetching and parallel interleave along with using the TFRecord format.
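
As a rough illustration of those two techniques, a tf.data input pipeline over the generated files could combine parallel interleave and prefetching as sketched below; the file pattern and feature specification are assumptions, not details from Google's post.

```python
import tensorflow as tf

AUTOTUNE = tf.data.experimental.AUTOTUNE  # tf.data.AUTOTUNE in newer releases

def parse_example(record):
    # Placeholder feature spec; adjust to the features actually written.
    features = {
        'image': tf.io.FixedLenFeature([], tf.string),
        'label': tf.io.FixedLenFeature([], tf.int64),
    }
    return tf.io.parse_single_example(record, features)

files = tf.data.Dataset.list_files('gs://my-bucket/tfrecords/*.tfrecord')

dataset = (
    files.interleave(                      # read several shards in parallel
        tf.data.TFRecordDataset,
        cycle_length=4,
        num_parallel_calls=AUTOTUNE,
    )
    .map(parse_example, num_parallel_calls=AUTOTUNE)
    .batch(32)
    .prefetch(AUTOTUNE)                    # overlap input prep with training
)
```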

Stackery brings serverless to the Jamstack

Stackery announced that it added the website resource to simplify the build process for static site generators like Gatsby.

This automates a lot of machinery within AWS to retrieve application source and build it with references to external sources, including: AWS Cognito Pools, GraphQL APIs, Aurora MySQL databases, and third-party SaaS services like GraphCMS.

"The combination of JAMstack and serverless allows for powerful, scalable, and relatively secure applications which require very little overhead and low initial cost to build," Stackery wrote in a post.

Visual Studio Code update

Visual Studio Code version 1.48 includes updates such as Settings Sync now available for preview in stable, an updated Extensions view menu, and a refactored overflow menu for Git in the Source Control view.

It also includes the option to publish to a public or private GitHub repository and to debug within the browser without writing a launch configuration.

"Preview features are not ready for release but are functional enough to use," Microsoft wrote in a post that contains additional details on the new release.

Source dependency reporting in Visual Studio 2019 16.7

The new /sourceDependencies switch for the compiler toolset enables the compiler to generate a source-level dependency report for any given translation unit it compiles.

Additionally, the use of /sourceDependencies is not limited to C++; it can also be used in translation units compiled as C. The switch is designed to be used with multiple files and scenarios under /MP, according to Microsoft in a post.

"C++20 demands a lot more from the ecosystem than ever before. With C++20 Modules on the horizon, the compiler needs to work closely with project systems in order to provide rich information for build dependency gathering and making iterative builds faster for inner-loop development," Microsoft stated.
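
As a hedged sketch, a build script could drive the switch and read the report back roughly as follows. The file names are placeholders, the command assumes cl.exe from VS 2019 16.7 or later on the PATH of a developer prompt, and the script makes no assumption about the report's JSON schema beyond it being valid JSON.

```python
# Hypothetical wrapper around the /sourceDependencies switch described above.
import json
import subprocess

SOURCE = 'main.cpp'        # translation unit to analyze (placeholder)
REPORT = 'main.deps.json'  # where the compiler should write the report

subprocess.run(
    ['cl', '/nologo', '/c', '/sourceDependencies', REPORT, SOURCE],
    check=True,
)

with open(REPORT, encoding='utf-8-sig') as f:  # tolerate a possible BOM
    report = json.load(f)

# Pretty-print whatever dependency information the compiler emitted.
print(json.dumps(report, indent=2))
```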

Read the original:

news digest: Microsoft launches open source website, TensorFlow Recorder released, and Stackery brings serverless to the Jamstack - SD Times -...

The Risks Associated with OSS and How to Mitigate Them – Security Boulevard

Open source has become nearly ubiquitous with Agile and DevOps. It offers development teams the ability to quickly and easily scale their software development life cycles (SDLC). At the same time, open-source software (OSS) components can introduce security vulnerabilities, licensing issues, and development workflow challenges. Open-source risks include both licensing challenges and cyber threats from poorly written code that leads to security gaps. With the number of Common Vulnerabilities and Exposures (CVE) growing rapidly, organizations must define actionable OSS policies, monitor OSS components, and institute continuous integration/continuous deployment (CI/CD) controls to improve OSS vulnerability remediation without slowing release cycles.

Due to the need for rapid development and innovation, developers are increasingly turning to open-source frameworks and libraries to accelerate software development life cycles (SDLC). Use of open-source code by developers grew 40% and is expected to expand 14% year on year through 2023.

Agile and DevOps enable development teams to release new features multiple times a day, making software development a competitive differentiator. The demand for new and innovative software is brisk: 64% of organizations report an application development backlog (19% have more than 10 applications queued).

Beyond helping to accelerate development cycles, OSS enables organizations to lower costs and reduce time to market in many ways. Rather than writing custom code for large segments of applications, developers are turning to OSS frameworks and libraries. This reduces cost while enabling much greater agility and speed.

Despite all its benefits, OSS can present an array of risks with licensing limitations as well as security risks. Following is a quick look at some of these.

An area that organizations should not overlook in terms of risk is OSS licensing. Open source can be issued under a multitude of different licenses, or under no license at all. Not knowing the obligations that fall underneath each particular license (or not abiding by those obligations) can cause an organization to lose intellectual property or experience a monetary loss. While OSS is free, this does not mean it cannot be used without complying with other obligations. Indeed, there are over 1,400 open software licenses that software can fall under with a variety of stipulations restricting and permitting use.

With shift-left methodologies gaining traction, organizations are focused on finding and preventing vulnerabilities early in the software delivery process. However, open-source licensing issues will not show up at this stage unless software composition is analyzed. Waiting until right before release cycles to check on open-source licensing issues can incur significant development delays: time spent reworking code and checking it for vulnerabilities and bugs. Additionally, as development teams are measured on the speed and frequency of releases, these delays can be particularly onerous.

With the use of OSS, there is a possibility to introduce an array of vulnerabilities into the source code. The reality is that developers are under increasing pressure to write feature-rich applications within demanding release windows. When the responsibility of managing application security workflows and vulnerability management is added, including analysis of OSS frameworks and libraries, it becomes increasingly difficult for them to efficiently and effectively ensure security remains top of mind. In addition, for legacy application security models, code scanning as well as triage, diagnosis, and remediation of vulnerabilities requires specialized skill sets that developers are not commonly trained on.

A critical part of the problem is that legacy application security uses an outside-in model where security sits outside of the software and SDLC. However, research shows that security must be built into development processes from the very start, and this includes the use of open-source frameworks and libraries.

Since OSS is publicly available, there is no central authority to ensure quality and maintenance. This makes it difficult to know what types of OSS are most widely in use. In addition, OSS has numerous versions, and thus older versions may contain vulnerabilities that were fixed in subsequent updates. Indeed, according to the Open Web Application Security Project (OWASP), using old versions of open-source components with known vulnerabilities is one of the most critical web application security risks. Since security researchers can manually review code to identify vulnerabilities, each year thousands of new vulnerabilities are discovered and disclosed publicly, often with exploits used to prove the vulnerability exists.

But Common Vulnerabilities and Exposures (CVEs) are just the tip of the iceberg. Open source contains a plethora of unknown or unreported vulnerabilities. These can pose an even greater risk to organizations. Due to its rapid adoption and use, open source has become a key target for cyber criminals.

To effectively realize the many OSS benefits, development teams must implement the right application security strategies. It all starts with setting up the right policies.

Organizations use policy and procedures to provide guidance for proper usage of OSS components. This includes which types of OSS licensing are permitted, which type of components to use, when to patch vulnerabilities, and how to prioritize them.

To minimize the risk associated with licensing, organizations need to know which licenses are acceptable by use case and environment. And when it comes to security, application security teams need policies to help disclose vulnerabilities. For example, a component with a high severity vulnerability may be acceptable in an application that manages data that is neither critical nor sensitive and that has a limited attack surface. However, according to a documented policy, that same vulnerability is unacceptable in a public-facing application that manages credit card data and should be remediated immediately.
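
A hypothetical sketch of that kind of policy, with invented thresholds and field names purely for illustration, could look like this:

```python
# Hypothetical policy check: acceptability depends on severity, the app's
# exposure, and the sensitivity of the data it handles. Not a standard.
from dataclasses import dataclass

@dataclass
class AppProfile:
    public_facing: bool
    handles_sensitive_data: bool  # e.g. credit card or personal data

def vulnerability_acceptable(severity: str, app: AppProfile) -> bool:
    """Return True if the finding may ship under the documented policy."""
    if app.public_facing and app.handles_sensitive_data:
        return False  # remediate immediately, per policy
    if severity in ('critical', 'high'):
        # Tolerated only for internal apps with a limited attack surface
        return not app.public_facing and not app.handles_sensitive_data
    return True

# A high-severity issue in a public card-processing app must be fixed...
print(vulnerability_acceptable('high', AppProfile(True, True)))    # False
# ...but may be accepted, for now, in an internal, non-sensitive app.
print(vulnerability_acceptable('high', AppProfile(False, False)))  # True
```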

According to Gartner, one of the first steps to improving software security is to ensure that a software bill of materials (SBoM) exists for every software application. An SBoM is a definitive list of all serviceable parts (including OSS) needed to maintain an application. Since software is usually built by combining components (development frameworks, libraries, and operating system features), it has a bill of materials that describes the bits that comprise it, just as much as hardware does.

A critical aspect of maintaining an effective software inventory is to ensure that it accurately and dynamically represents the relationships between components, applications, and servers, so that development teams always know what is deployed, where each component resides, and exactly what needs to be secured. Once an SBoM is built, it needs to map to a reliable base of license, quality, and security data.
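
For illustration, the sketch below loads an SBoM in a CycloneDX-style JSON layout (an assumed format and file name; the article does not prescribe one) and builds the component inventory that license, quality, and security data would then be joined against.

```python
# Assumes a CycloneDX-style JSON SBoM named sbom.json; the field layout is an
# assumption based on that format, not something taken from the article.
import json

with open('sbom.json') as f:
    sbom = json.load(f)

inventory = {}
for comp in sbom.get('components', []):
    key = (comp.get('name'), comp.get('version'))
    licenses = [
        entry.get('license', {}).get('id', 'unknown')
        for entry in comp.get('licenses', [])
    ]
    inventory[key] = {'purl': comp.get('purl'), 'licenses': licenses}

# The inventory can now be mapped to license, quality, and security data.
for (name, version), meta in inventory.items():
    print(f"{name} {version}: purl={meta['purl']} licenses={meta['licenses']}")
```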

Since cyber criminals often launch attacks on newly exposed vulnerabilities in hours or days, an application security solution is needed to immediately protect against exploitation of open-source vulnerabilities. Security instrumentation embeds sensors within applications so they can protect themselves from the most sophisticated attacks in real time. This enables an effective open-source risk management program: the ability to deliver the quickest possible turnaround for resolving issues once they emerge. This includes providing insight into which libraries are in use by the application, which helps development teams to prioritize the fixes that pose the greatest likelihood of exploitation. Security teams can also leverage this functionality to foster goodwill with developers; too often, developers are overwhelmed by the sheer volume of findings presented by legacy software composition analysis (SCA) tools.

It is no surprise that automating some application security processes improves an organization's ability to analyze and prioritize threats and vulnerabilities. Last year's Cost of a Data Breach Report from Ponemon Institute and IBM Security finds that organizations without security automation experience breach costs that are 95% higher than breaches at organizations that have fully deployed automation.

Another approach in securing the use of OSS in DevOps environments is to embed automated controls in continuous integration/continuous deployment (CI/CD) processes. OSS elements often do not pass the same quality and standards checks as proprietary code. Unless each open-source component is evaluated before implementation, it is easy to incorporate code containing vulnerabilities.

When properly operationalized, an open-source management solution can automatically analyze all dependencies in a project. If vulnerable components are detected in an application build, an automated policy check should trigger a post-build action that fails the build or marks it as unstable, based on set parameters. Regardless of the specific process and tooling an organization has in place, the goal should always be to deliver immediate and accurate feedback to developers so that they can take direct action to keep the application secure and functional.
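
A minimal sketch of such a post-build policy gate follows; the findings file, its field names, and the severity thresholds are invented for illustration rather than taken from any particular scanner or CI system.

```python
# Illustrative CI gate: read a findings report produced earlier in the
# pipeline and exit non-zero (failing the build) when policy is exceeded.
import json
import sys

FAIL_ON = {'critical', 'high'}  # severities that must fail the build
WARN_ON = {'medium'}            # severities that mark the build unstable

def gate(findings_path: str) -> int:
    with open(findings_path) as f:
        findings = json.load(f)  # e.g. [{"component": "...", "severity": "high"}]

    severities = {item.get('severity', '').lower() for item in findings}
    if severities & FAIL_ON:
        print('Build failed: vulnerable components exceed policy threshold')
        return 1
    if severities & WARN_ON:
        print('Build unstable: medium-severity findings need triage')
        return 0
    print('Policy gate passed')
    return 0

if __name__ == '__main__':
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else 'findings.json'))
```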

The many advantages of using open-source components in applications come with a cost: risk exposures in both licensing and cybersecurity. As a favorite target of cyber criminals, open-source code vulnerabilities can become a moving target requiring constant vigilance to prevent bad actors from taking advantage. Successfully managing OSS increasingly depends on automated application security processes. Automation helps organizations track all the open-source components in use, identify any associated risks, and enable effective mitigation actions so that teams can safely use open source without inhibiting development and delivery.

For more information on what organizations need to seek when securing open source, read the eBook, The DevSecOps Guide to Managing Open-Source Risk.

Go here to read the rest:

The Risks Associated with OSS and How to Mitigate Them - Security Boulevard

Open Source: What’s the delay on the former high/middle school on North Mulberry? – knoxpages.com

EDITOR'S NOTE: This story is in response to a reader-submitted question through Open Source, a platform where readers can submit questions to the staff.

MOUNT VERNON - When a reader asked through Open Source about the stoppage of demolition on the old high/middle school on North Mulberry Street, he wasn't the only one wondering what is going on. Councilmember Tammy Woods asked the same question during Monday night's city council meeting.

After years of uncertainty, promises, and unrealized plans, demolition finally began on June 19, only to come to a halt a few days later. After a seven-week hiatus, activity resumed this week.

When Safety-service Director Richard Dzik asked developer Joel Mazza early this week the reason for the delay, Mazza cited two reasons: vacation and illness.

The initial stoppage was due to the contractor, Jeff Page of Lucas-based Page Excavating, being on vacation a couple of weeks. Mazza has been on vacation the last couple of weeks, and the contractor has had employees out sick.

"I have not seen any more of the building come down, but there has been activity," Dzik told Knox Pages on Thursday. "They continue to deal with the debris."

When initially contacted, Page declined to comment other than to say crews were working at the school this week. In a series of text messages, however, he explained that his crew is separating the wood from the brick and block on the part of the building already demolished. When the current pile of rubble is sorted and removed, another section will be demolished, and the process resumed.

According to the demolition permit signed on Dec. 5, 2019, the proposed start date for demolition was Dec. 15, 2019, with a completion date of Mar. 31, 2020. Dzik said the contractor is given six months after signing the contract to start the work. An extension is possible.

"According to our code, from the time the permit is issued, the contractor has 12 months to substantially complete the project," he said. "I would hope it wouldn't drag out that long."

The current permit is only for demolition. Dzik said that to begin construction, Mazza will have to apply for a zoning permit and present plans for the project. Mazza plans to build an affordable housing option for renters that will include two-to-three-story town homes, flats, and a three-to-four-story apartment complex.


Read the original post:

Open Source: What's the delay on the former high/middle school on North Mulberry? - knoxpages.com

The state of application security: What the statistics tell us – CSO Online

The emergence of the DevOps culture over the past several years has fundamentally changed software development, allowing companies to push code faster and to automatically scale the infrastructure needed to support new features and innovations. The increased push toward DevSecOps, which bakes security into the development and operations pipelines, is now changing the state of application security, but gaps still remain according to data from new industry reports.

A new report by the Enterprise Strategy Group (ESG), which surveyed 378 application developers and application security professionals in North America, found that many organizations continue to push code with known vulnerabilities into production despite viewing their own application security programs as solid.

Releasing vulnerable code is never good but doing so knowingly is better than doing it without knowing, since the decision usually involves some risk assessment, a plan to fix, and maybe temporary mitigations. Half of respondents said their organizations do this regularly and a third said they do it occasionally. The most often cited reasons were meeting a critical deadline, the vulnerabilities being low risk or the issues being discovered too late in the release cycle (45%).

The findings highlight why integrating security testing as early in the development process as possible is important, but also that releasing vulnerable code is not necessarily a sign of not having a good security program because this can happen for different reasons and no single type of security testing will catch all bugs. However, the report also found that many organizations are still in the process of expanding their application security programs, with only a third saying their programs cover more than three quarters of their codebase and a third saying their programs cover less than half.

Who takes responsibility for the decision of pushing vulnerable code into production can vary from organization to organization, the survey found. In 28% of organizations the decision is taken by the development manager together with a security analyst, in 24% by the development manager alone and in 21% by a security analyst.

This could actually be a sign of application security programs maturing, because DevSecOps is about moving security testing as early as possible in the development pipeline, whereas in the past security testing fell solely in the sphere of security teams who used to perform it after the product was complete.

In organizations where the development team does the security testing as a result of integrations into their processes and also consumes the results, it's normal for the development manager to make decisions regarding which vulnerabilities are acceptable, either in collaboration with the security team or even inside their own organization if they have a security champion -- a developer with application security knowledge and training -- on their team. Such decisions, however, should still be taken based on policies put in place by the CISO organization, which is ultimately responsible for managing the entire company's information security risk and can, for example, decide which applications are more exposed to attacks or contain more sensitive information that hackers could target. Those applications might have stricter rules in place when it comes to patching.

If the risk is not evaluated correctly, shipping code with known vulnerabilities can have serious consequences. Sixty percent of respondents admitted that their production applications were exploited through vulnerabilities listed in the OWASP Top-10 over the past 12 months. The OWASP Top-10 contains the most critical security risks to web applications and includes problems like SQL injection, broken authentication, sensitive data exposure, broken access controls, security misconfigurations, the use of third-party components with known vulnerabilities and more. These are issues that should not generally be allowed to exist in production code.

According to ESG's report, companies use a variety of application security testing tools: API security vulnerability (ASV) scanning (56%), infrastructure-as-code security tools to protect against misconfigurations (40%), static application security testing (SAST) tools (40%), software composition analysis (SCA) testing tools (38%), interactive application security testing (IAST) tools (38%), dynamic application security testing (DAST) tools (36%), plugins for integrated development environments (IDEs) that assist with security issue identification and resolution (29%), scanning tools for images used in containers, repositories and microservices (29%), fuzzing tools (16%) and container runtime configuration security tools (15%).

However, among the top challenges in using these tools, respondents listed developers lacking the knowledge to mitigate the identified issues (29%), developers not using tools the company invested in effectively (24%), security testing tools adding friction and slowing down development cycles (26%) and lack of integration between application security tools from different vendors (26%).

While almost 80% of organizations report that their security analysts are directly engaged with their developers by working directly to review features and code, by working with developers to do threat modelling or by participating in daily development scrum meetings, developers themselves don't seem to get a lot of security training. This is why in only 19% of organizations the application security testing task is formally owned by individual developers and in 26% by development managers. A third of organizations still have this task assigned to dedicated security analysts and in another 29% it's jointly owned by the development and security teams.

In a third of organizations, less than half of developers are required to take formal security training, and in only 15% is such training required for all developers. Less than half of organizations require developers to engage in formal security training more than once a year, with 16% expecting developers to self-educate and 20% only offering training when a developer joins the team.

Furthermore, even when training is provided or required, the effectiveness of such training is not properly tracked in most organizations. Only 40% of organizations track security issue introduction and continuous improvement metrics for development teams or individual developers.

Veracode, one of the application security vendors who sponsored the ESG research, recently launched the Veracode Security Labs Community Edition, an in-browser platform where developers can get free access to dozens of application security courses and containerized apps that they can exploit and patch for practice.

Any mature application security program should also cover any open-source components and frameworks because these make up a large percentage of modern application code bases and carry risks of inherited vulnerabilities and supply chain attacks. Almost half of respondents in ESG's survey said that open-source components make up over 50% of their code base and 8% said they account for two thirds of their code. Despite that, only 48% of organizations have invested in controls to deal with open-source vulnerabilities.

In its 2020 State of the Software Supply Chain report, open-source governance company Sonatype noted a 430% year-over-year growth in attacks targeting open-source software projects. These attacks are no longer passive where attackers exploit vulnerabilities after they've been publicly disclosed, but ones where attackers try to compromise and inject malware into upstream open-source projects whose code is then pulled by developers into their own applications.

In May, the GitHub security team issued a warning about a malware campaign dubbed Octopus Scanner that was backdooring NetBeans IDE projects. Malicious or compromised components have also been regularly distributed on package repositories like npm or PyPi.

The complex web of dependencies makes dealing with this issue difficult. In 2019, researchers from Darmstadt University analyzed the npm ecosystem, which is the primary source for JavaScript components. They found that any typical package loaded an average of 79 other third-party packages from 39 different maintainers. The top five packages on npm had a reach of between 134,774 and 166,086 other packages.
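
To make that fan-out concrete, a short script can walk a project's lockfile and count every transitive package it pulls in. The sketch below assumes the nested, v1-style package-lock.json layout and is not part of the Darmstadt study.

```python
# Count direct and transitive npm dependencies from a v1-style lockfile.
import json

def collect_packages(node: dict, seen: set) -> None:
    for name, meta in node.get('dependencies', {}).items():
        seen.add((name, meta.get('version')))
        collect_packages(meta, seen)  # recurse into nested (transitive) deps

with open('package-lock.json') as f:
    lock = json.load(f)

packages = set()
collect_packages(lock, packages)
print(f'{len(packages)} third-party packages (including transitive ones)')
```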

"When malicious code is deliberately and secretly injected upstream into open source projects, it is highly likely that no one knows the malware is there, except for the person that planted it," Sonatype said in its report. "This approach allows adversaries to surreptitiously set traps upstream, and then carry out attacks downstream once the vulnerability has moved through the supply chain and into the wild."

According to the company, between February 2015 and June 2019, 216 such "next-generation" supply chain attacks were reported, but from July 2019 to May 2020 an additional 929 attacks were documented, so this has become a very popular attack vector.

In terms of traditional attacks where hackers exploit known vulnerabilities in components, companies seem unprepared to respond quickly enough. In the case of the Apache Struts2 vulnerability that ultimately led to the Equifax breach in 2017, attackers started exploiting the vulnerability within 72 hours after it became known. More recently, a vulnerability reported in SaltStack was also exploited within three days after being announced, catching many companies unprepared.

A Sonatype survey of 679 software development professionals revealed that only 17% of organizations learn about open-source vulnerabilities within a day of public disclosure. A third learn within the first week and almost half after a week's time. Furthermore, around half of organizations required more than a week to respond to a vulnerability after learning about it and half of those took more than a month.

Both the availability and consumption of open-source components are increasing with every passing year. The JavaScript community introduced over 500,000 new component releases over the past year, pushing the npm directory to 1.3 million packages. Up to May, developers downloaded packages 86 billion times from npm, with Sonatype projecting that by the end of the year the figure will reach 1 trillion downloads. It's concerning that the University of Darmstadt research published last year revealed that nearly 40% of all npm packages contain or depend on code with known vulnerabilities and that 66% of vulnerabilities in npm packages remain unpatched.

In the Java ecosystem, developers downloaded 226 billion open-source software components from the Maven Central Repository in 2019, which was a 55% increase compared to 2018. Given the statistics seen in 2020, Sonatype estimates that Java components downloads will reach 376 billion this year. The company, which maintains the Central Repository and has deep insights into the data, reports that one in ten downloads was for a component with a known vulnerability.

A further analysis of 1,700 enterprise applications revealed that on average they contained 135 third-party software components, of which 90% were open source. Eleven percent of those open-source components had at least one vulnerability, but applications had on average 38 known vulnerabilities inherited from such components. It was also not uncommon to see applications assembled from 2,000 to 4,000 open-source components, highlighting the major role the open-source ecosystem plays in modern software development.

Similar component consumption trends were observed in the .NET ecosystem and the microservice ecosystem, with DockerHub receiving 2.2 million container images over the past year and being on track to see 96 billion image pull requests by developers this year. Publicly reported supply chain attacks have involved malicious container images hosted on DockerHub, and the possibility of having images with misconfigurations or vulnerabilities is also high.

The DevOps movement has fundamentally changed software development and made possible the new microservice architecture where traditional monolith applications are broken down into individually maintained services that run in their own containers. Applications no longer contain just the code necessary for their features, but also the configuration files that dictate and automate their deployment on cloud platforms, along with the resources they need. Under DevSecOps, development teams are not only responsible for writing secure code, but also deploying secure infrastructure.

In a new report, cloud security firm Accurics, which operates a platform that can detect vulnerable configurations in infrastructure-as-code templates and cloud deployments, found that 41% of organizations had hardcoded keys with privileges in their configurations that were used to provision computing resources, 89% of deployments had resources provisioned and running with overly permissive identity and access management (IAM) policies, and nearly all of them had misconfigured routing rules.
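
As a toy illustration of the first finding, a naive scan for hardcoded keys in Terraform-style templates might look like the sketch below. The regexes and file extension are illustrative only; dedicated IaC scanners such as the platform described cover far more than this.

```python
# Minimal hardcoded-credential scan over *.tf files; patterns are examples.
import pathlib
import re

PATTERNS = {
    'aws_access_key_id': re.compile(r'AKIA[0-9A-Z]{16}'),
    'generic_secret': re.compile(r'(secret|password)\s*=\s*"[^"]+"', re.IGNORECASE),
}

def scan(root: str = '.') -> None:
    for path in pathlib.Path(root).rglob('*.tf'):
        text = path.read_text(errors='ignore')
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line = text.count('\n', 0, match.start()) + 1
                print(f'{path}:{line}: possible {label}')

if __name__ == '__main__':
    scan()
```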

View post:

The state of application security: What the statistics tell us - CSO Online

Key Considerations and Tools for IP Protection of Computer Programs in Europe and Beyond – Lexology

Software companies often are faced with the issue of how solutions relating to software, i.e. computer programs, can be protected. This brief article provides an overview on various strategies and tools that are available for protecting computer programs in Europe, in particular, elements of computer programs which may be protectable by patents, trademarks and/or design rights. In addition, this article addresses intellectual property-related questions which can be used as a framework for consideration when entering into a market with a computer product.

When a company designs and develops a computer program, it is important to answer at least the following questions.

If the answer to Question 4 is YES, then the patentability requirements should be met in Europe, and the next question is whether or not to patent the solution. This decision is based on several factors including the available budget for IP protection, the value that can be gained through a patent, the market need, the PR value and brand creation. In addition, the benefits that are reached through a granted patent, such as exclusive rights, the possibility to request licensing fees and registered ownership of a solution, have an impact on the decision.

In particular, if a granted patent includes features that will be part of a certain standard (Question 1), and thus would be considered a Standard Essential Patent (SEP), such patent may be of great value. This is especially true if the standard will be widely used commercially and the patent is a valuable contribution to the standard. This, of course, will depend on the standard, its content and its conditions.

If the answer to Question 4 is NO, or if it is otherwise decided that patenting is not an option, two alternatives to patenting are (1) to maintain the solution as a trade secret, or (2) to publish the solution. It is important to note that simply leaving the solution unpublished does not automatically turn the solution into a trade secret. For trade secrets, there are strict standards in Europe requiring, inter alia, careful limitation of the persons having access to the solution both physically and virtually. The drawback with trade secrets is that they do not provide broad rights to the owner, and therefore any third party could patent the solution for themselves, which could limit the company's freedom to operate. Publication, on the other hand, reveals the solution and therefore prevents third parties from patenting it, since the solution's novelty would be destroyed through publication. However, publication does not create any affirmative rights to the solution for the company, and therefore compensation for third-party use of the solution would not be available.

In addition to, or as an option to patenting, other forms of protection can be obtained for computer programs. For example, if a computer program includes user interface elements (Question 3), and these user interface elements are unique, a design right should be considered. Design rights in Europe protect the unique appearance of a product, and when a computer program is the target, the design right may protect individual visual elements, such as icons, a complete layout of a user interface, or a set of game characters, for example.

On the other hand, if a computer program includes elements intended for use in marketing (Question 2), then in addition to design rights, trademark protection should be considered. At a minimum, the product name and logo warrant trademark protection, but trademark rights can also be obtained for certain leading game characters. From the brand creation point of view, trademark and design rights are strong protection tools that provide exclusive rights to the impression a customer has of the company and/or the product.

Thus, all the main forms of IP protection in Europe (patents, trademarks, design rights) are available for computer programs. The decision on what protection to seek for particular aspects of a computer program depends not only on the legal requirements, but also on the company's IP strategy.

In addition to considerations relating to IP protection, it is also important for a company to identify any third-party data that is involved in the computer program. This should include not only external data, but also any open source code. For external data, the company should confirm that they own or otherwise have sufficient rights to the data, such that they are able to use it. For open source code, the company should examine the license in order to determine whether its coverage is sufficient, and whether there are limitations which affect the delivery of the computer program. In addition, the content of open source licenses should be carefully examined to identify how the license affects the company's ability to utilize and manage its patent portfolio. For example, the license may state that any patent claim that includes features from the open source code is not infringed by other parties to the same open source license. In other words, under such circumstances the company would be obliged to grant free licenses to all of their relevant patents to other open source contributors.

Thus, when developing a computer program, it is advisable to consider it from various points of view in order to identify key elements that are unique and representative of the computer program. If such elements are also technically essential and solve a technical problem, consider patenting the solution but also consider the other rights that can be used in combination and synergistically for various aspects of a computer program.

Continue reading here:

Key Considerations and Tools for IP Protection of Computer Programs in Europe and Beyond - Lexology