news digest: Microsoft launches open source website, TensorFlow Recorder released, and Stackery brings serverless to the Jamstack – SD Times

Microsoft launched a new open source site, which aims to help people get involved, explore projects, and join the ecosystem.

The site also offers a near real-time view of activity across Microsoft's projects on GitHub.

In addition, the site highlights Microsoft's open-source projects such as Accessibility Insights, PowerToys, and Windows Terminal.


TensorFlow Recorder released

Google announced last week that it open sourced TensorFlow Recorder (TFRecorder), making it possible for data scientists and AI/ML engineers to create image-based TFRecords with just a few lines of code.

Before TFRecorder, users had to write a data pipeline that parsed their structured data, loaded images from storage, and serialized the results into the TFRecord format. Now, TFRecorder allows users to write TFRecords directly from a Pandas DataFrame or CSV without writing any complicated code, Google explained in a post.
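To make the contrast concrete, here is a minimal standard-library sketch of the kind of hand-rolled pipeline described above. The length-prefixed record layout and the CSV column names are simplifications invented for illustration; the real TFRecord format is protobuf-based, and Google's post shows TFRecorder reducing the whole job to a few lines, along the lines of `df.tensorflow.to_tfr(output_dir=...)`.

```python
import csv
import struct
from pathlib import Path

def write_records(csv_path, out_path):
    """Parse a CSV of (image_path, label) rows, load each image's bytes,
    and serialize everything into one record file. This is the kind of
    boilerplate TFRecorder is meant to eliminate; the record layout here
    is a simplified stand-in, not the real TFRecord wire format."""
    count = 0
    with open(csv_path, newline="") as src, open(out_path, "wb") as out:
        for row in csv.DictReader(src):
            image_bytes = Path(row["image_path"]).read_bytes()
            label = row["label"].encode("utf-8")
            # Each record: 4-byte length + image bytes, 4-byte length + label.
            out.write(struct.pack("<I", len(image_bytes)) + image_bytes)
            out.write(struct.pack("<I", len(label)) + label)
            count += 1
    return count
```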

Data loading performance can be further improved by implementing prefetching and parallel interleave along with using the TFRecord format.

Stackery brings serverless to the Jamstack

Stackery announced that it has added a website resource to simplify the build process for static site generators like Gatsby.

This automates a lot of machinery within AWS to retrieve application source and build it with references to external sources, including: AWS Cognito Pools, GraphQL APIs, Aurora MySQL databases, and third-party SaaS services like GraphCMS.

"The combination of JAMstack and serverless allows for powerful, scalable, and relatively secure applications which require very little overhead and low initial cost to build," Stackery wrote in a post.

Visual Studio Code update

Visual Studio Code version 1.48 includes updates such as Settings Sync, now available for preview in stable; an updated Extensions view menu; and a refactored overflow menu for Git in the Source Control view.

It also includes the option to publish to a public or private GitHub repository and to debug within the browser without writing a launch configuration.

"Preview features are not ready for release but are functional enough to use," Microsoft wrote in a post that contains additional details on the new release.

Source dependency reporting in Visual Studio 2019 16.7

The new /sourceDependencies switch for the compiler toolset enables the compiler to generate a source-level dependency report for any given translation unit it compiles.

Additionally, the use of /sourceDependencies is not limited to C++; it can also be used in translation units compiled as C. The switch is designed to be used with multiple files and scenarios under /MP, according to Microsoft in a post.
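As an illustration, the switch is given an output location and emits one JSON report per translation unit. The exact flag syntax and report fields below are a sketch based on Microsoft's description, so consult the post for the authoritative usage:

```
cl /std:c++latest /MP /sourceDependencies deps\ /c main.cpp util.c
```

Each generated .json file identifies the translation unit's source file and the headers and modules it depends on, which project systems can consume for build dependency tracking.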

"C++20 demands a lot more from the ecosystem than ever before. With C++20 Modules on the horizon, the compiler needs to work closely with project systems in order to provide rich information for build dependency gathering and making iterative builds faster for inner-loop development," Microsoft stated.


The Risks Associated with OSS and How to Mitigate Them – Security Boulevard

Open source has become nearly ubiquitous in Agile and DevOps environments. It offers development teams the ability to quickly and easily scale their software development life cycles (SDLC). At the same time, open-source software (OSS) components can introduce security vulnerabilities, licensing issues, and development workflow challenges. Open-source risks include both licensing challenges and cyber threats from poorly written code that leads to security gaps. With the number of Common Vulnerabilities and Exposures (CVEs) growing rapidly, organizations must define actionable OSS policies, monitor OSS components, and institute continuous integration/continuous deployment (CI/CD) controls to improve OSS vulnerability remediation without slowing release cycles.

Due to the need for rapid development and innovation, developers are increasingly turning to open-source frameworks and libraries to accelerate software development life cycles (SDLC). Use of open-source code by developers grew 40% and is expected to expand 14% year on year through 2023.

Agile and DevOps enable development teams to release new features multiple times a day, making software development a competitive differentiator. The demand for new and innovative software is brisk: 64% of organizations report an application development backlog (19% have more than 10 applications queued).

Beyond helping to accelerate development cycles, OSS enables organizations to lower costs and reduce time to market in many ways. Rather than writing custom code for large segments of applications, developers are turning to OSS frameworks and libraries. This reduces cost while enabling much greater agility and speed.

Despite all its benefits, OSS can present an array of risks with licensing limitations as well as security risks. Following is a quick look at some of these.

An area of risk that organizations should not overlook is OSS licensing. Open source can be issued under a multitude of different licenses, or under no license at all. Not knowing the obligations that fall under each particular license (or not abiding by those obligations) can cause an organization to lose intellectual property or suffer a monetary loss. While OSS is free, this does not mean it can be used without complying with its license obligations. Indeed, there are over 1,400 open software licenses that software can fall under, with a variety of stipulations restricting and permitting use.

With shift-left methodologies gaining traction, organizations are focused on finding and preventing vulnerabilities early in the software delivery process. However, open-source licensing issues will not show up at this stage unless software composition is analyzed. Waiting until right before release cycles to check for open-source licensing issues can incur significant development delays: time spent reworking code and checking it for vulnerabilities and bugs. Additionally, as development teams are measured on the speed and frequency of releases, these delays can be particularly onerous.

The use of OSS can introduce an array of vulnerabilities into the source code. The reality is that developers are under increasing pressure to write feature-rich applications within demanding release windows. When the responsibility of managing application security workflows and vulnerability management is added, including analysis of OSS frameworks and libraries, it becomes increasingly difficult for them to ensure that security remains top of mind. In addition, in legacy application security models, code scanning as well as triage, diagnosis, and remediation of vulnerabilities require specialized skill sets that developers are not commonly trained on.

A critical part of the problem is that legacy application security uses an outside-in model where security sits outside of the software and the SDLC. However, research shows that security must be built into development processes from the very start, and this includes the use of open-source frameworks and libraries.

Since OSS is publicly available, there is no central authority to ensure quality and maintenance. This makes it difficult to know what types of OSS are most widely in use. In addition, OSS has numerous versions, and older versions may contain vulnerabilities that were fixed in subsequent updates. Indeed, according to the Open Web Application Security Project (OWASP), using old versions of open-source components with known vulnerabilities is one of the most critical web application security risks. Since security researchers can manually review code to identify vulnerabilities, each year thousands of new vulnerabilities are discovered and disclosed publicly, often with exploits used to prove the vulnerability exists.

But Common Vulnerabilities and Exposures (CVEs) are just the tip of the iceberg. Open source contains a plethora of unknown or unreported vulnerabilities, which can pose an even greater risk to organizations. Due to its rapid adoption and use, open source has become a key target for cyber criminals.

To effectively realize the many OSS benefits, development teams must implement the right application security strategies. It all starts with setting up the right policies.

Organizations use policies and procedures to provide guidance for proper usage of OSS components. This includes which types of OSS licensing are permitted, which types of components to use, when to patch vulnerabilities, and how to prioritize them.

To minimize the risk associated with licensing, organizations need to know which licenses are acceptable by use case and environment. And when it comes to security, application security teams need policies to help disclose vulnerabilities. For example, a component with a high severity vulnerability may be acceptable in an application that manages data that is neither critical nor sensitive and that has a limited attack surface. However, according to a documented policy, that same vulnerability is unacceptable in a public-facing application that manages credit card data and should be remediated immediately.

According to Gartner, one of the first steps to improving software security is to ensure that a software bill of materials (SBoM) exists for every software application. An SBoM is a definitive list of all serviceable parts (including OSS) needed to maintain an application. Since software is usually built by combining components (development frameworks, libraries, and operating system features), it has a bill of materials that describes the bits that comprise it, just as hardware does.
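A hypothetical, heavily simplified sketch of what such an inventory entry might capture (all names here are invented; real SBoM formats such as SPDX and CycloneDX define much richer schemas):

```
{
  "application": "payments-service",
  "components": [
    {
      "name": "web-fwk",
      "version": "2.1.0",
      "license": "Apache-2.0",
      "deployed_on": ["prod-web-01", "prod-web-02"]
    },
    {
      "name": "json-parse",
      "version": "1.4.2",
      "license": "MIT",
      "deployed_on": ["prod-web-01"]
    }
  ]
}
```

Recording the license and deployment location per component is what lets the inventory answer both the licensing and the security questions raised above.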

A critical aspect of maintaining an effective software inventory is to ensure that it accurately and dynamically represents the relationships between components, applications, and servers, so that development teams always know what is deployed, where each component resides, and exactly what needs to be secured. Once an SBoM is built, it needs to map to a reliable base of license, quality, and security data.

Since cyber criminals often launch attacks on newly exposed vulnerabilities within hours or days, an application security solution is needed to immediately protect against exploitation of open-source vulnerabilities. Security instrumentation embeds sensors within applications so they can protect themselves from the most sophisticated attacks in real time. This enables an effective open-source risk management program: the ability to deliver the quickest possible turnaround for resolving issues once they emerge. This includes providing insight into which libraries are in use by the application, which helps development teams prioritize the fixes that pose the greatest likelihood of exploitation. Security teams can also leverage this functionality to foster goodwill with developers; too often, developers are overwhelmed by the sheer volume of findings presented by legacy software composition analysis (SCA) tools.

It is no surprise that automating some application security processes improves an organization's ability to analyze and prioritize threats and vulnerabilities. Last year's Cost of a Data Breach Report from Ponemon Institute and IBM Security found that organizations without security automation experience breach costs that are 95% higher than those at organizations with fully deployed automation.

Another approach in securing the use of OSS in DevOps environments is to embed automated controls in continuous integration/continuous deployment (CI/CD) processes. OSS elements often do not pass the same quality and standards checks as proprietary code. Unless each open-source component is evaluated before implementation, it is easy to incorporate code containing vulnerabilities.

When properly operationalized, an open-source management solution can automatically analyze all dependencies in a project. If vulnerable components are detected in an application build, an automated policy check should trigger a post-build action that fails the build or marks it as unstable, based on set parameters. Regardless of the specific process and tooling an organization has in place, the goal should always be to deliver immediate and accurate feedback to developers so that they can take direct action to keep the application secure and functional.
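The policy check described above can be sketched in a few lines of Python. The component names, advisory data, and policy thresholds here are all hypothetical; a real pipeline would pull them from a lockfile and a vulnerability feed:

```python
# Hypothetical build inventory and advisory data, for illustration only.
build_components = {"web-fwk": "2.1.0", "json-parse": "1.4.2", "img-lib": "3.0.0"}
advisories = {
    ("json-parse", "1.4.2"): "high",
    ("img-lib", "3.0.0"): "low",
}
policy = {"fail_on": {"critical", "high"}, "warn_on": {"medium", "low"}}

def check_build(components, advisories, policy):
    """Return (failures, warnings) for the build per the severity policy."""
    failures, warnings = [], []
    for name, version in components.items():
        severity = advisories.get((name, version))
        if severity in policy["fail_on"]:
            failures.append((name, version, severity))
        elif severity in policy["warn_on"]:
            warnings.append((name, version, severity))
    return failures, warnings

failures, warnings = check_build(build_components, advisories, policy)
# A CI step would fail the build on `failures` and mark it unstable on `warnings`.
```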

The many advantages of using open-source components in applications come with a cost: risk exposures in both licensing and cybersecurity. As a favorite target of cyber criminals, open-source code vulnerabilities can become a moving target, requiring constant vigilance to prevent bad actors from taking advantage. Successfully managing OSS increasingly depends on automated application security processes. Automation helps organizations track all the open-source components in use, identify any associated risks, and enable effective mitigation actions so that teams can safely use open source without inhibiting development and delivery.

For more information on what organizations need to seek when securing open source, read the eBook, The DevSecOps Guide to Managing Open-Source Risk.


Open Source: What’s the delay on the former high/middle school on North Mulberry? – knoxpages.com

EDITOR'S NOTE: This story is in response to a reader-submitted question through Open Source, a platform where readers can submit questions to the staff.

MOUNT VERNON -- When a reader asked through Open Source about the stoppage of demolition on the old high/middle school on North Mulberry Street, he wasn't the only one wondering what was going on. Councilmember Tammy Woods asked the same question during Monday night's city council meeting.

After years of uncertainty, promises, and unrealized plans, demolition finally began on June 19, only to come to a halt a few days later. After a seven-week hiatus, activity resumed this week.

When Safety-service Director Richard Dzik asked developer Joel Mazza early this week the reason for the delay, Mazza cited two reasons: vacation and illness.

The initial stoppage was due to the contractor, Jeff Page of Lucas-based Page Excavating, being on vacation a couple of weeks. Mazza has been on vacation the last couple of weeks, and the contractor has had employees out sick.

"I have not seen any more of the building come down, but there has been activity," Dzik told Knox Pages on Thursday. "They continue to deal with the debris."

When initially contacted, Page declined to comment other than to say crews were working at the school this week. In a series of text messages, however, he explained that his crew is separating the wood from the brick and block on the part of the building already demolished. When the current pile of rubble is sorted and removed, another section will be demolished, and the process resumed.

According to the demolition permit signed on Dec. 5, 2019, the proposed start date for demolition was Dec. 15, 2019, with a completion date of Mar. 31, 2020. Dzik said the contractor is given six months after signing the contract to start the work. An extension is possible.

"According to our code, from the time the permit is issued, the contractor has 12 months to substantially complete the project," he said. "I would hope it wouldn't drag out that long."

The current permit is only for demolition. Dzik said that to begin construction, Mazza will have to apply for a zoning permit and present plans for the project. Mazza plans to build an affordable housing option for renters that will include two-to-three-story town homes, flats, and a three-to-four-story apartment complex.



The state of application security: What the statistics tell us – CSO Online

The emergence of the DevOps culture over the past several years has fundamentally changed software development, allowing companies to push code faster and to automatically scale the infrastructure needed to support new features and innovations. The increased push toward DevSecOps, which bakes security into the development and operations pipelines, is now changing the state of application security, but gaps still remain according to data from new industry reports.

A new report by the Enterprise Strategy Group (ESG), which surveyed 378 application developers and application security professionals in North America, found that many organizations continue to push code with known vulnerabilities into production despite viewing their own application security programs as solid.

Releasing vulnerable code is never good but doing so knowingly is better than doing it without knowing, since the decision usually involves some risk assessment, a plan to fix, and maybe temporary mitigations. Half of respondents said their organizations do this regularly and a third said they do it occasionally. The most often cited reasons were meeting a critical deadline, the vulnerabilities being low risk or the issues being discovered too late in the release cycle (45%).

The findings highlight why integrating security testing as early in the development process as possible is important, but also that releasing vulnerable code is not necessarily a sign of not having a good security program because this can happen for different reasons and no single type of security testing will catch all bugs. However, the report also found that many organizations are still in the process of expanding their application security programs, with only a third saying their programs cover more than three quarters of their codebase and a third saying their programs cover less than half.

Who takes responsibility for the decision of pushing vulnerable code into production can vary from organization to organization, the survey found. In 28% of organizations the decision is taken by the development manager together with a security analyst, in 24% by the development manager alone and in 21% by a security analyst.

This could actually be a sign of application security programs maturing, because DevSecOps is about moving security testing as early as possible in the development pipeline, whereas in the past security testing fell solely in the sphere of security teams who used to perform it after the product was complete.

In organizations where the development team does the security testing as a result of integrations into their processes and also consumes the results, it's normal for the development manager to make decisions regarding which vulnerabilities are acceptable, either in collaboration with the security team or even inside their own organization if they have a security champion -- a developer with application security knowledge and training -- on their team. Such decisions, however, should still be taken based on policies put in place by the CISO organization, which is ultimately responsible for managing the entire company's information security risk and can, for example, decide which applications are more exposed to attacks or contain more sensitive information that hackers could target. Those applications might have stricter rules in place when it comes to patching.

If the risk is not evaluated correctly, shipping code with known vulnerabilities can have serious consequences. Sixty percent of respondents admitted that their production applications were exploited through vulnerabilities listed in the OWASP Top 10 over the past 12 months. The OWASP Top 10 contains the most critical security risks to web applications and includes problems like SQL injection, broken authentication, sensitive data exposure, broken access controls, security misconfigurations, the use of third-party components with known vulnerabilities, and more. These are issues that should not generally be allowed to exist in production code.
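As a concrete example of one issue on that list, the snippet below contrasts an injectable query with a parameterized one, using Python's built-in sqlite3 module and a made-up users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Attacker-controlled input crafted to rewrite the query's WHERE clause.
user_input = "alice' OR '1'='1"

# Vulnerable: string interpolation lets the input become SQL syntax.
unsafe = conn.execute(
    f"SELECT count(*) FROM users WHERE name = '{user_input}'"
).fetchone()[0]

# Safe: a parameterized query treats the input as a literal value only.
safe = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (user_input,)
).fetchone()[0]

print(unsafe, safe)  # prints: 1 0
```

The interpolated query matches every row because the injected `OR '1'='1'` is always true, while the parameterized query correctly matches nothing.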

According to ESG's report, companies use a variety of application security testing tools: API security vulnerability (ASV) scanning (56%), infrastructure-as-code security tools to protect against misconfigurations (40%), static application security testing (SAST) tools (40%), software composition analysis (SCA) testing tools (38%), interactive application security testing (IAST) tools (38%), dynamic application security testing (DAST) tools (36%), plugins for integrated development environments (IDEs) that assist with security issue identification and resolution (29%), scanning tools for images used in containers, repositories and microservices (29%), fuzzing tools (16%) and container runtime configuration security tools (15%).

However, among the top challenges in using these tools, respondents listed developers lacking the knowledge to mitigate the identified issues (29%), developers not using tools the company invested in effectively (24%), security testing tools adding friction and slowing down development cycles (26%) and lack of integration between application security tools from different vendors (26%).

While almost 80% of organizations report that their security analysts are directly engaged with their developers by working directly to review features and code, by working with developers to do threat modelling or by participating in daily development scrum meetings, developers themselves don't seem to get a lot of security training. This is why in only 19% of organizations the application security testing task is formally owned by individual developers and in 26% by development managers. A third of organizations still have this task assigned to dedicated security analysts and in another 29% it's jointly owned by the development and security teams.

In a third of organizations, less than half of developers are required to take formal security training, and in only 15% is such training required for all developers. Less than half of organizations require developers to engage in formal security training more than once a year, with 16% expecting developers to self-educate and 20% offering training only when a developer joins the team.

Furthermore, even when training is provided or required, the effectiveness of such training is not properly tracked in most organizations. Only 40% of organizations track security issue introduction and continuous improvement metrics for development teams or individual developers.

Veracode, one of the application security vendors who sponsored the ESG research, recently launched the Veracode Security Labs Community Edition, an in-browser platform where developers can get free access to dozens of application security courses and containerized apps that they can exploit and patch for practice.

Any mature application security program should also cover any open-source components and frameworks because these make up a large percentage of modern application code bases and carry risks of inherited vulnerabilities and supply chain attacks. Almost half of respondents in ESG's survey said that open-source components make up over 50% of their code base and 8% said they account for two thirds of their code. Despite that, only 48% of organizations have invested in controls to deal with open-source vulnerabilities.

In its 2020 State of the Software Supply Chain report, open-source governance company Sonatype noted a 430% year-over-year growth in attacks targeting open-source software projects. These attacks are no longer passive ones, in which attackers exploit vulnerabilities after they've been publicly disclosed, but active ones, in which attackers try to compromise and inject malware into upstream open-source projects whose code is then pulled by developers into their own applications.

In May, the GitHub security team issued a warning about a malware campaign dubbed Octopus Scanner that was backdooring NetBeans IDE projects. Malicious or compromised components have also been regularly distributed on package repositories like npm or PyPi.

The complex web of dependencies makes dealing with this issue difficult. In 2019, researchers from Darmstadt University analyzed the npm ecosystem, which is the primary source for JavaScript components. They found that any typical package loaded an average of 79 other third-party packages from 39 different maintainers. The top five packages on npm had a reach of between 134,774 and 166,086 other packages.
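The fan-out effect the researchers measured, in which a handful of direct dependencies pulls in many indirect ones, can be illustrated with a toy dependency graph (all package names invented):

```python
# Toy dependency graph: package -> direct dependencies (hypothetical names).
deps = {
    "my-app": ["left-util", "http-lib"],
    "left-util": ["pad-core"],
    "http-lib": ["pad-core", "stream-x"],
    "pad-core": [],
    "stream-x": ["pad-core"],
}

def transitive_deps(pkg, graph):
    """Return every package reachable from pkg's dependencies."""
    seen, stack = set(), list(graph[pkg])
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph[dep])
    return seen

print(sorted(transitive_deps("my-app", deps)))
# prints: ['http-lib', 'left-util', 'pad-core', 'stream-x']
```

Here two direct dependencies already pull in four packages from three maintainers; at npm scale the same traversal yields the averages reported above, and a compromise anywhere in the reachable set flows downstream.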

"When malicious code is deliberately and secretly injected upstream into open source projects, it is highly likely that no one knows the malware is there, except for the person that planted it," Sonatype said in its report. "This approach allows adversaries to surreptitiously set traps upstream, and then carry out attacks downstream once the vulnerability has moved through the supply chain and into the wild."

According to the company, between February 2015 and June 2019, 216 such "next-generation" supply chain attacks were reported, but from July 2019 to May 2020 an additional 929 attacks were documented, so this has become a very popular attack vector.

In terms of traditional attacks where hackers exploit known vulnerabilities in components, companies seem unprepared to respond quickly enough. In the case of the Apache Struts2 vulnerability that ultimately led to the Equifax breach in 2017, attackers started exploiting the vulnerability within 72 hours after it became known. More recently, a vulnerability reported in SaltStack was also exploited within three days after being announced, catching many companies unprepared.

A Sonatype survey of 679 software development professionals revealed that only 17% of organizations learn about open-source vulnerabilities within a day of public disclosure. A third learn within the first week and almost half after a week's time. Furthermore, around half of organizations required more than a week to respond to a vulnerability after learning about it and half of those took more than a month.

Both the availability and consumption of open-source components are increasing every year. The JavaScript community introduced over 500,000 new component releases over the past year, pushing the npm directory to 1.3 million packages. Through May, developers had downloaded packages from npm 86 billion times, and Sonatype projects the figure will reach 1 trillion downloads by the end of the year. It's concerning that the University of Darmstadt research published last year revealed that nearly 40% of all npm packages contain or depend on code with known vulnerabilities and that 66% of vulnerabilities in npm packages remain unpatched.

In the Java ecosystem, developers downloaded 226 billion open-source software components from the Maven Central Repository in 2019, which was a 55% increase compared to 2018. Given the statistics seen in 2020, Sonatype estimates that Java components downloads will reach 376 billion this year. The company, which maintains the Central Repository and has deep insights into the data, reports that one in ten downloads was for a component with a known vulnerability.

A further analysis of 1,700 enterprise applications revealed that on average they contained 135 third-party software components, of which 90% were open source. Eleven percent of those open-source components had at least one vulnerability, but applications had on average 38 known vulnerabilities inherited from such components. It was also not uncommon to see applications assembled from 2,000 to 4,000 open-source components, highlighting the major role the open-source ecosystem plays in modern software development.

Similar component consumption trends were observed in the .NET and microservice ecosystems, with DockerHub receiving 2.2 container images over the past year and being on track to see 96 billion image pull requests by developers this year. Publicly reported supply chain attacks have involved malicious container images hosted on DockerHub, and the possibility of images with misconfigurations or vulnerabilities is also high.

The DevOps movement has fundamentally changed software development and made possible the new microservice architecture where traditional monolith applications are broken down into individually maintained services that run in their own containers. Applications no longer contain just the code necessary for their features, but also the configuration files that dictate and automate their deployment on cloud platforms, along with the resources they need. Under DevSecOps, development teams are not only responsible for writing secure code, but also deploying secure infrastructure.

In a new report, cloud security firm Accurics, which operates a platform that can detect vulnerable configurations in infrastructure-as-code templates and cloud deployments, found that 41% of organizations had hardcoded keys with privileges in their configurations that were used to provision computing resources, that 89% of deployments had resources provisioned and running with overly permissive identity and access management (IAM) policies, and that nearly all of them had misconfigured routing rules.


Key Considerations and Tools for IP Protection of Computer Programs in Europe and Beyond – Lexology

Software companies often are faced with the issue of how solutions relating to software, i.e. computer programs, can be protected. This brief article provides an overview on various strategies and tools that are available for protecting computer programs in Europe, in particular, elements of computer programs which may be protectable by patents, trademarks and/or design rights. In addition, this article addresses intellectual property-related questions which can be used as a framework for consideration when entering into a market with a computer product.

When a company designs and develops a computer program, it is important to answer at least the following questions.

If the answer to Question 4 is YES, then the patentability requirements should be met in Europe, and the next question is whether or not to patent the solution. This decision is based on several factors, including the available budget for IP protection, the value that can be gained through a patent, the market need, the PR value, and brand creation. In addition, the benefits gained through a granted patent, such as exclusive rights, the possibility to request licensing fees, and registered ownership of a solution, have an impact on the decision.

In particular, if a granted patent includes features that will be part of a certain standard (Question 1), and thus would be considered a Standard Essential Patent (SEP), such patent may be of great value. This is especially true if the standard will be widely used commercially and the patent is a valuable contribution to the standard. This, of course, will depend on the standard, its content and its conditions.

If the answer to Question 4 is NO, or if it is otherwise decided that patenting is not an option, two alternatives to patenting are (1) to maintain the solution as a trade secret, or (2) to publish the solution. It is important to note that simply leaving the solution unpublished does not automatically turn it into a trade secret. For trade secrets, there are strict standards in Europe requiring, inter alia, careful limitation of the persons having access to the solution, both physically and virtually. The drawback with trade secrets is that they do not provide broad rights to the owner, and therefore any third party could patent the solution for themselves, which could limit the company's freedom to operate. Publication, on the other hand, reveals the solution and therefore prevents third parties from patenting it, since the solution's novelty would be destroyed through publication. However, publication does not create any affirmative rights to the solution for the company, and therefore compensation for third-party use of the solution would not be available.

In addition to, or as an alternative to, patenting, other forms of protection can be obtained for computer programs. For example, if a computer program includes user interface elements (Question 3), and these user interface elements are unique, a design right should be considered. Design rights in Europe protect the unique appearance of a product, and when a computer program is the target, the design right may protect individual visual elements, such as icons, a complete layout of a user interface, or a set of game characters, for example.

On the other hand, if a computer program includes elements intended for use in marketing (Question 2), then in addition to design rights, trademark protection should be considered. At a minimum, the product name and logo warrant trademark protection, but trademark rights can also be obtained for certain leading game characters. From a brand-creation point of view, trademark and design rights are strong protection tools that provide exclusive rights to the impression a customer has of the company and/or the product.

Thus, all the main forms of IP protection in Europe (patents, trademarks, design rights) are available for computer programs. The decision on what protection to seek for particular aspects of a computer program depends not only on the legal requirements, but also on the company's IP strategy.

In addition to considerations relating to IP protection, it is also important for a company to identify any third-party data that is involved in the computer program. This should include not only external data, but also any open source code. For external data, the company should confirm that it owns or otherwise has sufficient rights to the data, such that it is able to use it. For open source code, the company should examine the license in order to determine whether its coverage is sufficient, and whether there are limitations which affect the delivery of the computer program. In addition, the content of open source licenses should be carefully examined to identify how the license affects the company's ability to utilize and manage its patent portfolio. For example, a license may provide that any patent claim covering features from the open source code cannot be asserted against other parties to the same open source license. In other words, under such circumstances the company would be obliged to grant free licenses to all of its relevant patents to other open source contributors.

Thus, when developing a computer program, it is advisable to consider it from various points of view in order to identify key elements that are unique and representative of the computer program. If such elements are also technically essential and solve a technical problem, consider patenting the solution but also consider the other rights that can be used in combination and synergistically for various aspects of a computer program.

Continue reading here:

Key Considerations and Tools for IP Protection of Computer Programs in Europe and Beyond - Lexology

GM Creates And Shares New Workplace Safety Technologies – Pulse 2.0

General Motors Company (NYSE: GM) announced it has developed new technology to aid workplace safety during the COVID-19 pandemic. And the company is sharing its innovations with the world for potential use in job sites as diverse as manufacturing plants, offices, or even schools. The new technologies include an automated kiosk for temperature scanning, software for contact tracing, and a mobile app for touchless printing.

While developed to aid the company's own workplace protocols, GM has made these tools available to the public for potential use in any workplace, school, or facility. GM's in-house software developers built solutions aimed at helping employees return to work with more confidence while streamlining and improving workplace safety protocols.

Thermal Scanning Kiosk

At GM workplaces and many other kinds of facilities today, safety protocols call for an entry process that includes a temperature evaluation, which is normally administered by another worker. GM developed new software code that integrates the operation of an infrared thermal camera with a computer and monitor, automating and streamlining the entire process.

Utilizing open-source code from the OpenCV project, the system automatically detects when someone has stepped in front of the camera and checks for an elevated skin temperature, indicating it is either safe to proceed or the entrant needs further evaluation. And the process can take as little as 1-2 seconds, thus relieving bottlenecks at workplace entry points and reducing physical contact between workers. The scanning is also effective when the worker is wearing a mask or face covering.
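GM has not published the kiosk's code in this article; the following is only an illustrative sketch of the screening decision described above. The threshold value and the median-smoothing step are assumptions for illustration, not GM's actual parameters.

```python
# Illustrative sketch (not GM's actual code): decision logic for an
# automated temperature-screening kiosk. The threshold is a hypothetical
# example; real deployments calibrate per camera and environment.

FEVER_THRESHOLD_C = 37.5  # assumed cutoff, for illustration only

def screen_entrant(skin_temps_c):
    """Given per-frame skin-temperature readings captured while a person
    stands in front of the thermal camera, decide whether they may proceed
    or need further evaluation."""
    if not skin_temps_c:
        return "NO_READING"
    # Use the median of the recent frames to smooth out sensor noise.
    ordered = sorted(skin_temps_c)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        median = ordered[mid]
    else:
        median = (ordered[mid - 1] + ordered[mid]) / 2
    return "PROCEED" if median < FEVER_THRESHOLD_C else "FURTHER_EVALUATION"
```

In a real kiosk, the readings would come from the OpenCV-driven camera pipeline mentioned above; here they are passed in as plain numbers so the decision step stands alone.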

Kiosks are in use at a number of GM offices and plants across the United States and will expand to other global locations soon. Even though the actual kiosk GM built is for its own facilities, the innovation that makes it work can be applied in many other kinds of workplaces. And the software is available to help any workplace or facility with similar entry scanning processes.

Workplace Contact Tracing

Developers around the world are designing technology solutions to improve contact tracing. GM has made significant improvements to Covid Watch, an open-source contact tracing application, by adding real-time social-distance alerts, boosting performance on both iOS and Android devices, and adding support for Bluetooth beacons. And GM will release its open-source software soon, helping developers worldwide who are collaborating on open-source solutions to aid contact tracing.

GM is testing a mobile app that would create a record for the employee, listing other users with whom he or she has been in contact. And it can help medical staff reach employees who had contact with a worker testing positive for COVID-19 while maintaining privacy and security. The app also constantly computes the physical distance between users and can send an alert to help encourage safe behavior. GM is planning a pilot to test the application soon.
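The article does not say how the app computes distance between users; one common approach in Bluetooth-based contact tracing, shown here purely as an illustration, estimates distance from received signal strength (RSSI) using a log-distance path-loss model. All constants below are assumed values, not GM's.

```python
# Illustrative sketch: estimating phone-to-phone distance from Bluetooth
# RSSI with a log-distance path-loss model, then flagging close contact.
# tx_power_dbm is the expected RSSI at 1 metre; both constants are
# assumptions chosen for illustration and would be calibrated in practice.

def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Rough distance estimate in metres from a single RSSI reading."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def social_distance_alert(rssi_dbm, threshold_m=2.0):
    """True when the estimated separation falls below the safe threshold."""
    return estimate_distance_m(rssi_dbm) < threshold_m
```

Real apps smooth RSSI over time and account for body attenuation; a single reading is noisy, which is why this is only a sketch of the idea.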

Touchless Print

The new Touchless Print mobile web application enables employees to print documents without touching the printer's control panel, instead using a QR code scanned with the employee's mobile phone. This process is not only safer, but also quick and simple.
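The article describes the flow but not the implementation; the sketch below illustrates one plausible shape for such a handshake (all class and method names are hypothetical): the printer displays a short-lived token as a QR code, and the phone submits the print job against that token, so the control panel is never touched.

```python
import secrets

# Hypothetical sketch (not GM's actual app) of a touchless-print handshake.

class TouchlessPrintServer:
    def __init__(self):
        self._pending = {}  # token -> printer_id

    def issue_qr_token(self, printer_id):
        """Create a one-time token; the printer renders it as a QR code."""
        token = secrets.token_urlsafe(16)
        self._pending[token] = printer_id
        return token

    def submit_job(self, token, document):
        """Called by the phone after scanning the QR code."""
        printer_id = self._pending.pop(token, None)  # tokens are single-use
        if printer_id is None:
            raise ValueError("unknown or already-used token")
        return {"printer": printer_id, "document": document, "status": "queued"}
```

Making tokens single-use and short-lived prevents a stale QR code photographed earlier from releasing someone else's documents.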

This app is now in use at GM facilities worldwide on iOS and Android devices, and is currently exclusive to HP printers. The open-source software for the app was released earlier this month.

These innovations are considered the latest in a series of pandemic-response actions from the company, including production of critical care ventilators and personal protective equipment like masks, face shields and N95 respirators.

KEY QUOTES:

"We developed an extensive playbook for a safe return to work for our employees, and we're seeing very good success. As we implemented the protocols, GM software developers started to work on how technology could make the process smoother and more precise."

GM medical director Dr. Jeffery Hess

"We had to respond quickly to the challenges the COVID-19 pandemic created for our workforce. Our teams collaborated online with experts around the world to quickly innovate and support the safe return of our employees to the workplace. We know many of these challenges affect others globally. We felt it important to share our innovation so other companies, organizations, and institutions could benefit from our experience."

GM executive vice president and chief information officer Randy Mott

"We believe our application advances the state of the art when it comes to mobile apps for contact tracing, which is the subject of massive software development efforts across multiple industries today."

Tony Bolton, GM chief information officer of Global Telecommunications and End-User Services

Read the original here:

GM Creates And Shares New Workplace Safety Technologies - Pulse 2.0

Aspire Technology Launches First Truly Secure Public Blockchain for Creation of Digital Assets – GlobeNewswire

Las Vegas, Aug. 13, 2020 (GLOBE NEWSWIRE) -- (via Blockchain Wire) -- Aspire Technology, developer of digital asset creation technologies, has launched the mainnet of its first-of-a-kind digital asset creation technology, Aspire (www.aspirewallet.com). Digital assets have been a key part of the growth of the blockchain, including the rapid growth of digital collectibles and new public blockchains during the pandemic.

Simply put, the Aspire platform consists of Aspire (ASP), which is currently only available through its airdrop campaign, with the first 10,000 users receiving sufficient Aspire and Aspire Gas (GASP) to create one full-fledged digital asset, each of which can comprise up to 92 billion tokens. Aspire Gas acts like bitcoin does for Counterparty, which was the platform for 30 of the top 100 tokens by market capitalization in 2014-15.

The Aspire platform improves upon the standard Counterparty open-source code and grafts in an automated checkpoint server to prevent the 51 percent attacks that have caused many other blockchains to lose funds, including two successful attacks this month on top-25 token Ethereum Classic, resulting in the loss of nearly $2 million. Aspire is also not subject to miner attacks, as recently happened to top-100 token Ravencoin, where an attack that exploited a weakness in the mining algorithm allowed hackers to create and then steal nearly $6 million of Ravencoin by artificially expanding the block reward. Many other platforms have suffered one of these two common exploits that Aspire is immune to; even bitcoin can theoretically be 51 percent attacked, but it would cost an extraordinary amount of energy to pull off.
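Aspire's checkpoint server is not described in detail here; the toy model below illustrates the general checkpointing idea behind such designs (it is not Aspire's code). Nodes pin a block hash at a given height and refuse any competing chain that disagrees with a pinned checkpoint, even if the competing chain is longer, which blunts 51 percent reorganization attacks.

```python
# Toy sketch of checkpoint-protected fork choice (illustration only).
# A chain is modeled as a list of block hashes indexed by height.

def chain_respects_checkpoints(chain, checkpoints):
    """checkpoints maps height -> required block hash at that height."""
    for height, required_hash in checkpoints.items():
        if height < len(chain) and chain[height] != required_hash:
            return False
    return True

def choose_best_chain(current, candidate, checkpoints):
    """Prefer the longer chain, but never accept one that violates a
    checkpoint -- this is what defeats a majority-hashrate reorg."""
    if not chain_respects_checkpoints(candidate, checkpoints):
        return current
    return candidate if len(candidate) > len(current) else current
```

The trade-off is that whoever publishes checkpoints becomes a trust anchor, which is why checkpointing is usually combined with, rather than substituted for, proof-of-work consensus.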

"As we move into the second decade of cryptocurrency, there is still no easy solution for non-technical or technical users to create and customize a digital asset quickly and put it on a highly secure blockchain," said Jim Blasko, CEO and co-founder of Aspire Technology and core developer of the Aspire platform. "Aspire solves this problem, and the Aspire Gas blockchain that it is teamed with solves the problem of excessive fees and slow throughput on most major blockchains. We believe Aspire is poised to be a leading creator of digital assets globally."

About Aspire Technology and the Aspire platform

Aspire Technology is a leading developer of digital asset creation technologies. It was incubated from the bCommerce Labs accelerator fund and other angel investors. The Aspire platform, which consists of the Aspire (ASP) digital asset creation platform and the Aspire Gas (GASP) blockchain, is the first digital asset creation platform to resist both the mining exploits and 51 percent attacks that are common to proof-of-work blockchains. For more information, contact info@aspirecrypto.com.

Read more from the original source:

Aspire Technology Launches First Truly Secure Public Blockchain for Creation of Digital Assets - GlobeNewswire

IBM asked software developers to take on the wrath of Mother Nature – The Drum

IBM was highly commended in the Best Social Good Campaign category at The Drum B2B Awards 2019. Here, the team behind the campaign reveals the challenges faced and the strategies used to deliver this successful project.

The challenge

IBM had a problem. Research showed we were losing the confidence of developers.

They were consistently ranking us in the bottom 10, if at all, behind our competitors in:

Innovation

Open Source Support

Community Advocacy

Technical Prowess

Humanity had a bigger problem. Natural disasters of apocalyptic proportion are occurring at a more rapid rate than ever before.

The strategy

We decided to hand over our code patterns (AI, IoT, blockchain, cloud) and unleash the creativity of developers from around the world to take on humanity's greatest challenge: the wrath of Mother Nature.

With key partners, we launched a global challenge that asked developers to use technology to address natural disaster preparedness and recovery, and ultimately help save lives. Hosting the biggest hackathon in the world would put our technology in the hands of new audiences and help IBM's pioneering capabilities and ethos shine through.

We needed to connect with developers, but do so while avoiding the traditional marketing efforts they would ignore. Putting our code into their hands gave us an opportunity to celebrate their creativity, compassion and inherent need to solve problems. We could bring attention to them with authenticity by putting them front and center rather than ourselves.

In addition, a documentary would turn the spotlight on what ingenuity can achieve and reveal the hard work and unique characters of coders from around the world. It would also be a way to inspire even more engagement and involvement from developers, employees, students, volunteers and partners in 2019.

The campaign

The best way to connect with our audience (18-34 year-old coders) was to give them our code and a problem they cared about to solve. Instead of advertising our story, we told theirs. We used the entries and hackathon events as a source for casting the documentary, and along the way we captured footage of disasters as they happened, making sure that we presented the challenge as a global problem that had no regard for who, what or where: Mother Nature does not discriminate.

After hundreds of hours of interviews and thousands of miles traveling to events, we found a diverse and global cast. The movie had to be cinema-worthy. We did not want to shoot a corporate brand video... so we removed the brand. It was a risk, but we knew it was true to our original strategy: this is about developers, not IBM.

We wanted to raise their profiles, so we needed to introduce them to new audiences, which meant filming a movie worthy of that larger audience's time, with beautiful photography and superior production design. These were a new set of first responders, and we wanted to introduce them to audiences by creating a story worthy of theatrical release and eventual distribution.

The results

The film has won awards at several film festivals around the country and is now in negotiations for formal distribution.

We screened it for over 300 IT professionals at our largest annual event called Think and the film was rated 4.9 out of 5.0 (our highest-rated session ever).

In the process of making the film, we established a global platform for developers to drive positive change through the code they create for the greater good of all.

We created enough momentum that the movie's name, Code & Response, became the name of the overall initiative IBM launched at CES in 2019: a new $25 million, four-year initiative to fortify, test and launch open source technology to help communities needing critical aid.

From Think event feedback:

"Moved me to tears! So awesome! Thanks all!"

"What an inspiring film."

"Fantastic story and a great piece of community involvement and support."

"Very inspiring stories, and awesome that IBM is supporting the execution of these life-saving solutions."

The project was a winner at The Drum B2B Awards 2019. To find out which Drum Awards are currently open for entries, click here.

See original here:

IBM asked software developers to take on the wrath of Mother Nature - The Drum

Introducing the CDK construct library for the serverless LAMP stack – idk.dev

In this post, you learn how the new CDK construct library for the serverless LAMP stack is helping developers build serverless PHP applications.

The AWS Cloud Development Kit (AWS CDK) is an open source software development framework for defining cloud application resources in code. It allows developers to define their infrastructure in familiar programming languages such as TypeScript, Python, C# or Java. Developers benefit from the features those languages provide such as Interfaces, Generics, Inheritance, and Method Access Modifiers. The AWS Construct Library provides a broad set of modules that expose APIs for defining AWS resources in CDK applications.

The Serverless LAMP stack blog series provides best practices, code examples and deep dives into many serverless concepts and demonstrates how these are applied to PHP applications. It also highlights valuable contributions from the community to help spark inspiration for PHP developers.

Each component of this serverless LAMP stack is explained in detail in the blog post series:

The CDK construct library for the serverless LAMP stack is an abstraction created by AWS Developer Advocate, Pahud Hsieh. It offers a single high-level component for defining all resources that make up the serverless LAMP stack.

Building complex web applications from scratch is a time-consuming process. PHP frameworks such as Laravel and Symfony provide a structured and standardized way to build web applications. Using templates and generic components helps reduce overall development effort. Using a serverless approach helps to address some of the traditional LAMP stack challenges of scalability and infrastructure management. Defining these resources with the AWS CDK construct library allows developers to apply the same framework principles to infrastructure as code.

The AWS CDK enables fast and easy onboarding for new developers. In addition to improved readability through reduced codebase size, PHP developers can use their existing skills and tools to build cloud infrastructure. Familiar concepts such as objects, loops, and conditions help to reduce cognitive overhead. Defining the LAMP stack infrastructure for your PHP application within the same codebase reduces context switching and streamlines the provisioning process. Connect CDK constructs to deploy a serverless LAMP infrastructure quickly with minimal code.

"Code is a liability, and with the AWS CDK you are applying the serverless-first mindset to infra code by allowing others to create abstractions they maintain so you don't need to. I always love deleting code," says Matt Coulter, creator of CDK Patterns, an open source resource for CDK-based architecture patterns.

The cdk-serverless-lamp construct library is built with aws/jsii and published as npm and Python modules. The stack is deployed in either TypeScript or Python and includes the ServerlessLaravel construct. This makes it easier for PHP developers to deploy a serverless Laravel application.

First, follow the Working with the AWS CDK in TypeScript steps to prepare the AWS CDK environment for TypeScript.

Deploy the serverless LAMP stack with the following steps:

The cdk-serverless-lamp construct library uses the bref-FPM custom runtime to run PHP code in a Lambda function. The bref runtime performs a similar function to Apache or NGINX by forwarding HTTP requests through the FastCGI protocol. This process is explained in detail in The Serverless LAMP stack part 3: Replacing the web server. In addition to this, a bref package named laravel-bridge automatically configures Laravel to work on Lambda. This saves the developer from having to manually implement some of the configurations detailed in The serverless LAMP stack part 4: Building a serverless Laravel application.

This creates the following directory structure:

The cdk directory contains the AWS CDK resource definitions. The codebase directory contains the Laravel project.

Replace the contents of ./lib/cdk-stack.ts with:

The brefLayerVersion argument refers to the AWS Lambda layer version ARN of the Bref PHP runtime. Select the correct ARN and corresponding Region from the bref website. This example deploys the stack into the us-east-1 Region with the corresponding Lambda layer version ARN for the Region.
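The TypeScript snippet itself is not reproduced in this excerpt. As a rough sketch only, a minimal stack definition using the library's published Python module might look like the following; the property names mirror the construct's documented brefLayerVersion and laravelPath options, and the layer ARN is a placeholder to be looked up on the bref website, not a real value.

```python
# Rough sketch (assumptions noted above): a CDK app wiring up the
# ServerlessLaravel construct from cdk-serverless-lamp. Requires the
# aws-cdk and cdk-serverless-lamp Python packages to be installed.
import os

from aws_cdk import core
from cdk_serverless_lamp import ServerlessLaravel

app = core.App()
stack = core.Stack(app, "ServerlessLaravelStack",
                   env={"region": "us-east-1"})

ServerlessLaravel(
    stack, "ServerlessLaravel",
    # Placeholder ARN: substitute the bref PHP-FPM layer ARN for your Region.
    bref_layer_version="arn:aws:lambda:us-east-1:<ACCOUNT>:layer:php-74-fpm:<VERSION>",
    # Path to the Laravel project created in the codebase directory.
    laravel_path=os.path.join(os.path.dirname(__file__), "..", "codebase"),
)

app.synth()
```

Running `cdk deploy` against a stack like this provisions the Lambda function and HTTP API described below.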

Once the deployment is complete, an Amazon API Gateway HTTP API endpoint is returned in the CDK output. This URL serves the Laravel application.

The application is running PHP on Lambda using bref's FPM custom runtime. This entire stack is deployed by a single instantiation of the ServerlessLaravel construct class with its required properties.

The ServerlessLaravel stack is extended with the DatabaseCluster construct class to provision an Amazon Aurora database. Pass an Amazon RDS Proxy instance for this cluster to the ServerlessLaravel construct:
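The original code listing is not reproduced in this excerpt. As a heavily hedged sketch of the shape such a definition takes (the property names below are assumptions based on the post's description, not verified against the library), the cluster and the Laravel construct share a VPC and the proxy is handed from one to the other:

```python
# Rough sketch only -- property names are assumptions, not the library's
# verified API. Illustrates the relationship described in the post: an
# Aurora cluster with RDS Proxy, shared with the ServerlessLaravel stack.
from aws_cdk import core
from aws_cdk import aws_ec2 as ec2
from cdk_serverless_lamp import DatabaseCluster, ServerlessLaravel

app = core.App()
stack = core.Stack(app, "ServerlessLaravelWithDb",
                   env={"region": "us-east-1"})

# One VPC shared by the database cluster and the Lambda function.
vpc = ec2.Vpc(stack, "Vpc", max_azs=3, nat_gateways=1)

db = DatabaseCluster(
    stack, "DatabaseCluster",
    vpc=vpc,
    instance_type=ec2.InstanceType("t3.medium"),
    rds_proxy=True,  # assumed flag enabling the proxy described in the post
)

ServerlessLaravel(
    stack, "ServerlessLaravel",
    bref_layer_version="arn:aws:lambda:us-east-1:<ACCOUNT>:layer:php-74-fpm:<VERSION>",
    laravel_path="../codebase",
    vpc=vpc,
    rds_proxy=db.rds_proxy,  # assumed attribute exposing the proxy
)

app.synth()
```

Consult the cdk-serverless-lamp documentation for the construct's actual property names before using a pattern like this.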

The output shows that a shared VPC is created for the ServerlessLaravel stack and the DatabaseCluster stack. An Amazon Aurora DB cluster with a single DB instance and a default secret from AWS Secrets Manager is also created. The cdk-serverless-lamp construct library configures Amazon RDS Proxy automatically with the required AWS IAM policies and connection rules.

The ServerlessLaravel stack is running with DatabaseCluster in a single VPC. A single Lambda function is automatically configured with the RDS Proxy DB_WRITER and DB_READER stored as Lambda environment variables.

The Lambda function authenticates to RDS Proxy with the execution IAM role. RDS Proxy authenticates to the Aurora DB cluster using the credentials stored in the AWS Secrets Manager. This is a more secure alternative to embedding database credentials in the application code base. Read Introducing the serverless LAMP stack part 2 relational databases for more information on connecting to an Aurora DB cluster with Lambda using RDS Proxy.

To remove the stack, run:

$ cdk destroy

The video below demonstrates a deployment with the CDK construct for the serverless LAMP stack.

This post introduces the new CDK construct library for the serverless LAMP stack. It explains how to use it to deploy a serverless Laravel application. Combining this with other CDK constructs such as DatabaseCluster gives PHP developers the building blocks to create scalable, repeatable patterns at speed with minimal coding.

With the CDK construct library for the serverless LAMP stack, PHP development teams can focus on shipping code without changing the way they build.

Start building serverless applications with PHP.

See the original post here:

Introducing the CDK construct library for the serverless LAMP stack - idk.dev

What developers need to know about inter-blockchain communication – ComputerWeekly.com

This is a guest post for Computer Weekly Open Source Insider written by Christopher Goes in his roles as lead developer of IBC at Interchain GmbH.

Interchain GmbH is a wholly-owned subsidiary of the Interchain Foundation. As maintainers of the Tendermint project, lead architects of the Inter-Blockchain Communication (IBC) Protocol and contributors to the Cosmos SDK, the organisation envisions a more connected, open and self-sovereign world made possible through the Cosmos Network.

Goes writes as follows:

While technological change has moved rapidly in the information era, the system of international payments has lagged behind. In a world where a user can order organic beetroot on AmazonFresh to be delivered to their doorstep overnight, we are still trapped by the network effects of archaic monetary systems subject to the political vagaries of fiat powers.

The protocols that power the Internet are capable of efficient information transfer but they were not designed for monetary exchange or the similar exchange of digital assets.

Enter Inter-Blockchain Communication (IBC): This is a protocol designed to enable an open, sovereign, secure network of interconnected blockchains or Internet of Blockchains if you will.

IBC is the connective substrate of the distributed interchain future the cornerstone upon which the entire structure rests. IBC enables blockchains to send & receive messages to & from other blockchains, just as computers send messages to & from each other over TCP/IP. IBC facilitates interoperability between blockchains with different consensus algorithms, state machines and design philosophies, allowing them to selectively interoperate while retaining their diversity. IBC is blockchain agnostic and does not need to be hosted on or controlled by any single project.

Any distributed ledger which supports IBC can permissionlessly initiate a handshake to another IBC-supporting ledger. Once a handshake is complete, it opens up a channel of communication between the two ledgers. IBC packets, containing arbitrary data, can then start traveling back and forth. Should the protocol gain wide adoption, we'll see an organic explosion of IBC packets, and the concept of an Internet of Blockchains can become a practical reality.
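The handshake-then-packets flow just described can be illustrated with a toy model. This is emphatically not the real IBC implementation (which involves light-client verification and multi-step connection and channel handshakes); it only shows the ordering constraint: no packets until the handshake has opened a channel.

```python
# Toy model of the IBC flow described above (illustration only).

class Ledger:
    def __init__(self, name):
        self.name = name
        self.channels = set()   # open channels, keyed by the ledger pair
        self.received = []      # delivered packets: (sender, data)

    def open_handshake(self, other):
        """Simplified stand-in for IBC's multi-step handshake: both
        ledgers record that a channel between them is now open."""
        channel = frozenset((self.name, other.name))
        self.channels.add(channel)
        other.channels.add(channel)
        return channel

    def send_packet(self, other, data):
        """Packets carrying arbitrary data may flow only over an open
        channel, in either direction."""
        channel = frozenset((self.name, other.name))
        if channel not in self.channels or channel not in other.channels:
            raise RuntimeError("no open channel; complete the handshake first")
        other.received.append((self.name, data))
```

In the real protocol, each step is proven to the counterparty chain via light-client verification rather than trusted method calls, but the lifecycle (handshake first, packets second, both directions) is the same.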

Currently, there are a lot of different blockchains in the world that perform different functions, for instance: providing currency in the case of Bitcoin, localised application platforms in the case of Ethereum and efficient light client verification in the case of Coda.

These blockchains are highly specialized in different tasks, but right now they are isolated from one another.

Specialisation of blockchains is beneficial because it allows these blockchains to perform a specific function expertly, instead of trying to do everything all at once. However, synergies & seamless user experiences are lost when there is no communication between them. Interoperability between these chains would allow for the benefits of specialisation without the drawbacks. For example, one could send Bitcoin over to Ethereum and use it in a smart contract or vice versa. Two pieces of software designed totally independently can now be connected and introduced to each other, without the thousands of coding hours required to build a custom interop system or bridge.

So here is what developers need to know about the upcoming release of IBC:

Interoperability is essential for decentralised networks to compete with the status quo. An ecosystem of politically independent, sovereign chains economically interacting will be able to knit all the different specialised blockchains into a single interoperable economy, pulling in mainstream users and forming the foundations of a new crypto-economic system.

Read more here:

What developers need to know about inter-blockchain communication - ComputerWeekly.com