China Censored These 5 American Movies, Offering an Alternative Ending – Al-Bawaba

Censorship in China has made it to American-made movies, including the 1999 Brad Pitt film, Fight Club, stirring wide reactions.

Despite the shocked reactions of online users, commentators who are familiar with Chinese policies highlighted a routine practice by censors in the country, where films that involve law enforcement figures are always modified to avoid any hint that they could be on any side but the right one.

According to Vice, "It's unclear if the ending was altered out of self-censorship or by government order."

On Twitter, people shared a screenshot from the alternative ending offered by China to Fight Club, inspiring many users to take similar screenshots from other films, where censorship served the same purpose.

Snippets from censored films have been posted on Twitter under the hashtag #ChinaEditChallenge, with a number of well-known people making their own contributions, including former American computer intelligence consultant Edward Snowden.

Amongst the films that have been highlighted for changes made to their endings by China are Lord of War, Star Wars, Falling Down, Mission Impossible, and others.

Read the rest here:
China Censored These 5 American Movies, Offering an Alternative Ending - Al-Bawaba

Posted in Uncategorized

ActiveState Trusted Artifacts Secures the Open Source Supply Chain – PRNewswire

ActiveState's Trusted Artifacts eliminate the risk of developers using insecure artifacts from open source repositories.

JFrog Artifactory is a popular language-agnostic repository that provides enterprise developers with a central location from which to retrieve the open source packages their software development projects require. Enterprises take two main approaches to populating JFrog Artifactory:

We also know from our Secure Supply Chain Survey that ~80% of organizations that build from source code struggle with creating reproducible builds, meaning the open source artifacts they create are insecure since there is no way to verify if the source code was compromised when the original build was produced.

The ActiveState Platform features a secure build service that delivers reproducible builds whose provenance can be verified by tracing each component back to its original source. Scripted builds from vetted source code occur inside of ephemeral, isolated and hermetically sealed (i.e., no internet access) containers purpose-built to perform a single function, reducing the potential for compromise. As a result, ActiveState can help enterprises ensure the security and integrity of their open source supply chain by populating their JFrog Artifactory with secure Java, JavaScript, .Net, Python, Ruby, PHP, and other open source language artifacts.
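The provenance idea can be illustrated with a simple sketch: a reproducible build lets you hash an artifact and compare it against the digest recorded when the original build was produced. The functions below are invented for illustration and are not ActiveState's implementation:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(artifact: bytes, recorded_digest: str) -> bool:
    """With a reproducible build, an artifact can be checked against the
    digest recorded at build time; a mismatch means the source or the
    build environment changed."""
    return sha256_of(artifact) == recorded_digest

# Two builds of the same vetted source should be byte-identical:
build_a = b"compiled-package-contents"
build_b = b"compiled-package-contents"
assert sha256_of(build_a) == sha256_of(build_b)
```

Without reproducibility, the two digests differ from run to run and there is nothing stable to verify, which is the gap the survey above describes.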

Loreli Cadapan, Vice President, Product Management, ActiveState, said: "Open source organizations are making great strides to improve the security of their public repositories, but the reality is that they are still the Wild West where anything goes. Our recent Supply Chain Security Survey results indicate that a worryingly high proportion of organizations continue to implicitly trust these open source repositories. Starting with our Artifactory offering, ActiveState is looking to help enterprises overcome these limitations in order to improve the security and integrity of their software development processes."

ActiveState Trusted Artifacts is now generally available. Talk to our product experts to understand how ActiveState can help you decrease the risk and overhead of managing open source language packages in Artifactory.

About ActiveState

ActiveState has a 20+ year history of providing secure, scalable open source language solutions to more than 2 million developers and 97% of Fortune 1,000 enterprises. Enterprises choose ActiveState to support mission-critical systems and speed up software development while enhancing the security and integrity of their open source supply chain. Visit http://www.activestate.com/ for more information.

Related links
ActiveState Blog: Introducing trusted open source artifact subscription for Artifactory users
Solution Sheet: Automate, secure, and streamline artifact maintenance in JFrog Artifactory

SOURCE ActiveState

Here is the original post:

ActiveState Trusted Artifacts Secures the Open Source Supply Chain - PRNewswire

Posted in Uncategorized

Popular Open Source Low Code Software Appsmith Delivered 184 New Features Last Year, Providing a Reliable and Mature Platform – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Appsmith, the first open-source low code software helping developers to build internal tools, reported impressive progress in 2021. The company was founded in mid-2019 and its open source software has been downloaded more than 5 million times with more than 11,000 stars on GitHub (100% growth since September). That ranks third on this list of open source projects compiled by Runa Capital in terms of percentage increase in GitHub stars over the fourth quarter.

Highlights from 2021 include introducing more than 150 enhancements, including major features like JS Editor, Gitsync, and 30-plus new widgets and variations. In total, 184 features were released. A full list can be viewed here.

Appsmith now has more than 2,000 community members with 168 contributors -- 100 of those from outside the company.

Appsmith is the first open-source low code software that helps developers build custom (often critical yet tedious) internal and CRUD (create, read, update and delete) type applications quickly, usually within only hours.

"It's so easy to build an app with Appsmith that it took less than 10 minutes," says Jatin Sharma, operations engineer at Fyle, a financial technology firm that has used Appsmith to increase its customer success department's productivity by 30%.

"2021 was the year we grew our community and created a framework used by thousands of developers worldwide," said Abhishek Nayak, co-founder and CEO, Appsmith. "With a much larger community, contributors, and core team, we're excited for what lies ahead in 2022. We're doubling down on the commitment and enthusiasm of our amazing contributors from all over the world with new widgets, custom theming, enhanced mobile support, additional security features and much more."

A list of more than a dozen areas targeted for new features in 2022 can be viewed here.

Every enterprise needs to create custom applications -- a slow, repetitive, expensive process -- that requires work to build the user interface, write integrations, code the business logic, manage access controls and ultimately deploy the app. Appsmith is 10 times faster by enabling software engineers to build the user interface with pre-built components, code the business logic by connecting application programming interfaces (APIs) along with any database, then test and deploy a web application where users are authenticated using a dashboard.

To learn more, check out the Getting Started information.

About Appsmith

Appsmith was founded in 2019 with the mission to enable backend engineers to build internal web apps quickly with a low code approach. Taking an open source software approach provides anyone with access to the software and the opportunity to get involved in the community. The company has offices in San Francisco and Bengaluru, India. For more information visit https://www.appsmith.com

Continue reading here:

Popular Open Source Low Code Software Appsmith Delivered 184 New Features Last Year, Providing a Reliable and Mature Platform - Business Wire

Posted in Uncategorized

The O-RAN Alliance announced the 5th release of its open source software – iTWire

Security tool vendor Snyk recently added code scanning to its range of tools for DevSecOps practitioners.

Snyk (pronounced "sneak") was founded in 2015 by Guy Podjarny, and offered an open source dependency scanner so developers could easily see if there were any known vulnerabilities in the open source code they used in their software. Importantly, the scan is recursive, so it not only checks the libraries used by the developer, but the libraries used by those libraries, and so on.
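That recursive scan can be pictured as a walk over the dependency graph. The graph, package names, and vulnerability database below are all invented for illustration; Snyk's actual resolution works from lockfiles and its own advisory feed:

```python
# Hypothetical dependency graph and vulnerability list for illustration.
DEPENDENCIES = {
    "my-app": ["web-framework", "logger"],
    "web-framework": ["http-parser"],
    "logger": [],
    "http-parser": ["string-utils"],
    "string-utils": [],
}
KNOWN_VULNERABLE = {"string-utils"}

def find_vulnerabilities(package: str, seen=None) -> set:
    """Walk the dependency tree recursively so vulnerabilities in
    transitive dependencies (libraries used by libraries) are found too."""
    if seen is None:
        seen = set()
    if package in seen:
        return set()
    seen.add(package)
    found = {package} & KNOWN_VULNERABLE
    for dep in DEPENDENCIES.get(package, []):
        found |= find_vulnerabilities(dep, seen)
    return found

print(find_vulnerabilities("my-app"))  # string-utils is three levels down
```

A one-level check of "my-app" would see only web-framework and logger and miss the vulnerable package entirely, which is the point of recursing.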

Since then, Snyk Open Source has been joined by three other tools: Snyk Container (to find and fix vulnerabilities in container images and Kubernetes applications), Snyk Infrastructure as Code (to find and fix insecure configurations in Terraform and Kubernetes code), and most recently Snyk Code (to find and fix vulnerabilities in the developer's own code).

All four run on a single platform, explained Snyk APJ head of solutions engineering Lawrence Crowther, so it is possible, for example, to apply global policies across the software development lifecycle.

Furthermore, Snyk is developer focussed, he said, so the platform integrates with common developer tools (IDEs, source control, CI/CD, and so on) so "the tool does the heavy lifting for them" and the developer can then concentrate on fixing the problem rather than finding it.

"We started with the digital natives" because for them, DevSecOps is a natural extension of DevOps, but now the company is addressing the enterprise market including the financial services sector. Local Snyk customers include Afterpay and Australia Post.

"DevSecOps is a bit of a buzzword," Crowther admitted, but one of the company's goals is to bake security into DevOps so that in a few years the security part will be a first class citizen of every project.

But "you need to do DevOps right before you do DevSecOps," he warned.

The broad adoption of cloud has led to the adoption of different architectures (vs traditional monolithic applications), and this means the security of all the components must be properly addressed.

For example, it's easy to get started with Kubernetes, he said, but it has a range of security implications and so DevOps teams need to step back and think about issues such as ensuring only the correct ports are open, that files aren't inappropriately exposed, and that the exactly correct privileges are assigned.

There's a cultural issue here, Crowther suggests, because developers need to take ownership of security not only of the code they write, but right down to the infrastructure level.

One way this can be addressed is by moving security specialists into application security roles, but this means they will need to understand engineering practices, DevOps workflows, and so on. Consequently, there aren't many people who can be slotted immediately into such roles.

So organisations need to find ways to provide developers with security guidance (e.g., "how to avoid SQL injection flaws") and should invest in reskilling, including giving developers sufficient time to learn and absorb this knowledge.
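As a concrete example of that kind of guidance, the standard way to avoid SQL injection is to bind user input as a query parameter instead of splicing it into the SQL text. A minimal sketch using Python's built-in sqlite3 module (the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input becomes part of the SQL text.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Safe: a parameterized query keeps the input as data, never as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# The classic payload returns every row through the unsafe path...
assert find_user_unsafe("' OR '1'='1") == [("admin",)]
# ...but matches nothing when bound as a plain parameter.
assert find_user_safe("' OR '1'='1") == []
```

The same placeholder pattern exists in every mainstream database driver, which is why it is a staple of developer security training.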

Australian organisations are behind the US, but ahead of most of the APAC region, said Crowther. However, they are generally not getting to grips with the proper checking of open source code.

A typical project now contains around 10% locally developed code, with the other 90% being open source, he said. But that 90% depends on other open source libraries, and if a project explicitly uses 10 libraries it could be implicitly using another 1000.

Without proper checking, you're "just trusting the internet," he said.

A further problem is that most tools for checking open source libraries only go one level down. In contrast, Snyk Open Source traverses the entire dependency tree, according to Crowther.

Similarly, downloading a container image from Docker Hub is a risky business without due diligence. It might purport to contain something simple, such as a Linux distribution and Node, but have other libraries been planted in it, and are there any old libraries that should have been updated?

Lots of blind spots exist, he warned, so it is important to check all dependencies.

If you're not sure whether Snyk's products are right for you, or if you only work on a small project, the company offers a 'free forever' plan that has a monthly limit of 200 Open Source tests, 100 Container tests, 300 Infrastructure as Code tests, and 100 Code tests.

Otherwise, prices start at $115 a month for five developers using Snyk Open Source.

And if you have your eye on job opportunities, Crowther said Snyk will be hiring sales, solutions engineering and support staff in 2022, in part to staff a planned Canberra office that will augment the existing operations in Sydney and Melbourne.

Read more here:

The O-RAN Alliance announced the 5th release of its open source software - iTWire

Posted in Uncategorized

Speeding up open-source GPU driver development with unit tests, drm-shim, and code reuse – CNX Software

Getting an Arm platform that works with mainline Linux may take several years as the work is often done by third parties, and the silicon vendor has its own Linux tree. That means in many cases, the software is ready when the platform is obsolete or soon will be. It would be nice to start software development before the hardware is ready. It may seem like a crazy idea, but that's what the team at Collabora has done to add support for Arm Valhall GPUs (Mali-G57, Mali-G78) to the Panfrost open-source GPU driver.

The result: thanks to the work done over the previous six months, it took the team only a few days after receiving the actual hardware to successfully pass tests using data structures prepared by their Mesa driver and shaders compiled by their Valhall compiler. So how did they achieve this feat exactly?

We have to go back in time by a few months first. Last July, Collabora announced they had reverse-engineered the Mali-G78 GPU's Valhall instruction set using a Samsung Galaxy S21 smartphone. Wait, didn't I just say they worked without Mali-G78 hardware? Correct, but they could not install mainline Linux and their GPU driver on the device as it was not rooted. They just used it to reverse-engineer the instructions and perform some testing by modifying compiled shaders and GPU data structures to experiment with individual bits. That step could have been avoided if Mali-G78 documentation had been available.

Alyssa Rosenzweig, a graphics software engineer for Collabora, continued her software development work, and by November 2021 she had written a Valhall compiler and reverse-engineered enough to write a driver, but still had no hardware running Linux on which to test the code. So she wrote unit tests for everything from instruction packing to optimization, and managed to solve a few bugs in the process using nothing more than her development machine running Linux.
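Instruction-packing tests of this kind can be pictured with a toy encoder: pack fields into a fixed-width word, unpack them, and assert a round trip, all without any GPU present. The field layout below is invented for illustration and is not the real Valhall encoding:

```python
# Toy instruction encoder: opcode (8 bits), dest (6 bits), src (6 bits).
def pack(opcode: int, dest: int, src: int) -> int:
    assert 0 <= opcode < 256 and 0 <= dest < 64 and 0 <= src < 64
    return (opcode << 12) | (dest << 6) | src

def unpack(word: int):
    # Recover the three fields from the packed word.
    return (word >> 12) & 0xFF, (word >> 6) & 0x3F, word & 0x3F

# Unit test: every instruction must survive a pack/unpack round trip.
for opcode in (0, 1, 0x2A, 0xFF):
    for dest in (0, 31, 63):
        for src in (0, 17, 63):
            assert unpack(pack(opcode, dest, src)) == (opcode, dest, src)
```

Tests like these catch field-overlap and shift bugs on an ordinary development machine, long before real hardware is available.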

The next step was to use the drm-shim library, with fake GEM kernel drivers in userspace, for CI (continuous integration) as used in the Mesa project. A drm-shim driver makes the system think it features an actual GPU, but does nothing apart from receiving system calls from userspace graphics drivers. This is not an emulator and cannot be used to test functionality, but it can help find flaws in the program flow. She was able to run a large number of tests on an Apple M1 running Linux after fixing a bug (hint: the page size is 16K instead of 4K), including compiling thousands of shaders per second with the Valhall compiler, and running Khronos's OpenGL ES Conformance Test Suite to identify any issues.

Collabora also attempted to identify differences between Valhall and earlier Arm Mali GPUs such as Bifrost, reusing a large part of the Panfrost driver code and changing only the parts where they detected differences. For instance, the Valhall instruction set is quite similar to the older Bifrost instruction set, so they embedded the Valhall compiler as an additional backend in the existing Bifrost compiler. Alyssa explains:

Shared compiler passes like instruction selection and register allocation just work on Valhall, even though they were developed and debugged for Bifrost.

Earlier this month (January 2022), Collabora finally received a Chromebook with a MediaTek MT8192 (Kompanio 820) system-on-chip (with a Mali-G57 MC5 GPU) and a serial cable, and they managed to run mainline Linux on the board after fixing USB, although the display is not working yet. The GPU is automatically disabled in the MT8192, apparently due to a silicon bug, but can be enabled after disabling the Accelerator Coherency Port (ACP). As discussed above, it then only took a few days to successfully pass hundreds of tests on the actual hardware thanks to their preparation work. Collabora now expects Panfrost to support Valhall GPUs in time for end-users. You can read the full story on the Collabora blog.

Jean-Luc started CNX Software in 2010 as a part-time endeavor, before quitting his job as a software engineering manager, and starting to write daily news, and reviews full time later in 2011.

Here is the original post:

Speeding up open-source GPU driver development with unit tests, drm-shim, and code reuse - CNX Software

Posted in Uncategorized

The road less travelled paves the path to stealth disruption – ComputerWeekly.com

During the recent Ocado Re:Imagined event, the company's founder and CEO, Tim Steiner, discussed how stealth disruptors were used to invent something he described as really radical. Really radical, in Ocado's case, is embodied in its 600 series grocery fulfilment bot. The headline figure is that half of the parts it uses are 3D printed.

This is not about making 3D printed prototypes. It is about manufacturing production grade robots that are destined to be deployed in warehouses to fulfil customer grocery orders. Ocado has the ability to print bot parts on site, or at manufacturing sites equipped with 3D printers.

As James Donkin, the company's CTO, pointed out in Computer Weekly, it is very important to lower the level of specialist skills required in Ocado warehouses. This is why the company has tried to drive down the maintenance costs of operating these bots.

Exploring the idea of stealth disruptors further, Computer Weekly recently spoke to Nicolas Forgues, former CTO at the supermarket chain Carrefour. When the company found that the maintenance cost of one of the open source components it needed would become prohibitive, it formed Tosit, the Open I Trust alliance, with several other French businesses, including EDF and SNCF. Their objective was to maintain the open source code themselves. In doing so, alliance members have been able to free themselves from the costly support and maintenance fees normally associated with open source software. Support is effectively at no cost. Instead, each organisation agrees to commit a certain number of people to work on the open source code.

On its own, supporting open source code in-house is not disruptive. Some may argue that it defies logic, since experts in the open source community are primed and ready to offer such services. But sometimes, as Forgues and the members of Open I Trust realised, the pricing of open source support contracts is not necessarily a good match for the business model of the customer. When the cost of support is growing faster than the business benefit derived from the open source software, it is time to take a look at a different approach. By sharing best practices and contributing code and human resources, alliance members are able to maintain the open source code among themselves. This is stealth disruption.

What these examples from the CEO of Ocado and the former CTO of Carrefour illustrate is that stealth disruption is not about evolving an existing good idea. It's about difficult choices and taking the road less travelled.

View original post here:

The road less travelled paves the path to stealth disruption - ComputerWeekly.com

Posted in Uncategorized

Project EVerest is on a mission to standardize EV charging protocols – The Next Web

The development and expansion of the EV charging software ecosystem is a critical component to the mainstream adoption of electric vehicles. However, the industry has become complex and fragmented, with multiple isolated solutions and inconsistent technology standards. This slows and threatens the adoption of EVs.

In response, PIONIX has developed a project called EVerest, an open-source software stack designed to establish a common base layer for a unified EV charging ecosystem.

EVerest has gained some serious cred in the developer world, with its biggest supporter being LF Energy (the Linux Foundation's open-source foundation for the power systems sector). I spoke to the project's creator, Dr. Marco Möller, managing director of PIONIX, to find out more.

The idea for EVerest came from Marco Möller's previous startup, the German commercial drone software firm MAVinci (Intel acquired the company in 2016). Möller shared:

"We saw over our 10 years in the drone industry that open source software benefited from the power of so many developers, including those from universities, and their development speed finally outpaced even the largest players in the market."

After the initial founder team left Intel in 2020, they did consulting work with charging manufacturers and found that many engineers were involved in "me too" projects, effectively replicating the same code, even though 99% of the code had already been engineered.

Shared open source implementations are vital to mitigate the differences in charging standards globally, from CHAdeMO, commonly used by Japanese automakers, to China's ChaoJi, and CCS, popular in Europe and the US.

Further, every car is slightly different, making it even more urgent to have an open-source solution for the chargers. An open tech stack means developers can focus on more exciting work and bring products to market faster.

Möller explained that the project had received interest from various players in the EV charging space, except for a few big proprietary players, noting:

"Cloud operators were particularly keen on standardization. I was told that the market leader for charge point clouds has more than 200 different dialects of the same protocol implemented. This is because everyone is doing slightly different charge point protocols."

The EVerest software platform runs on a lightweight Linux system inside the charging point. It manages communication around energy between different players:

EVerest digitally abstracts the complexity of multiple standards and use cases. This means it can run on any device, from AC home chargers to public DC charging stations. This helps facilitate new features such as local energy management, PV integration, and initiatives like wind- and solar-powered EV stations and bi-directional charging.

There's been a lot of research about the challenges of securing EV chargers, especially preventing hackers from using them for over- or undercharging, committing identity fraud, or damaging the grid.

Vulnerabilities in open-source libraries have increasingly become an attack vector. The insertion of malicious code into packages in repositories (for example, npm and PyPI) is rising, and the recent Log4j zero-day exploits demonstrate the challenge of code maintenance in open source projects.

However, having a project under the auspices of The Linux Foundation is an advantage: it lends credibility and increases the number of people working on the code. Möller noted:

"We follow the security standards as well as we can. We have quite an advanced team. They know what they're doing. We try to do extra security layers. Open source still has the benefit that a lot of people are looking at it."

The project is also liaising with the hacker enthusiast community, hopefully to participate in hackathons over the summer months. As Möller asserts, these are people who break systems for fun. The team is also raising venture capital, which will go into developing and improving the open-source codebase.

In short, EVerest welcomes the global community to contribute to this project. Visit the project on GitHub and subscribe to the EVerest general mailing list.

Read more:

Project EVerest is on a mission to standardize EV charging protocols - The Next Web

Posted in Uncategorized

Software is crammed full of bugs. This ‘exciting’ project could banish most of them – ZDNet

Chip designer Arm has released a prototype of its Morello development board for researchers at Google, Microsoft and industry to test its goal for a CPU design that wipes out a chunk of memory-related security flaws in code.

The Morello board is the product of a collaboration between Arm, Cambridge University, Microsoft and others based on the Capability Hardware Enhanced RISC Instructions (CHERI) architecture. Microsoft says the board and system on chip (SoC) is the first high-performance implementation of CHERI, which provides "fine-grained spatial memory safety at a hardware level". If it proves successful after testing with legacy software, it could pave the way for future CPU designs.

CHERI architectural extensions are designed to mitigate memory safety vulnerabilities. CHERI augments pointers (the variables in computer code that reference where data is stored in memory) with limits on how those references can be used, the address ranges they can access, and the functionality they can invoke. "Once baked into silicon, they cannot be forged in software," Arm explained. CHERI was developed by the University of Cambridge and SRI International after it received funding from DARPA's Clean-slate design of Resilient, Adaptive, Secure Hosts (CRASH) program.
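The capability idea can be modeled in software: a pointer that carries its bounds and permissions and checks them on every access. The sketch below is purely conceptual; real CHERI enforces these checks in silicon, with tags that software cannot forge:

```python
# Software model of a CHERI-style capability, for illustration only.
class Capability:
    def __init__(self, memory, base, length, writable=False):
        self.memory, self.base = memory, base
        self.length, self.writable = length, writable

    def load(self, offset):
        # Every dereference is checked against the capability's bounds.
        if not 0 <= offset < self.length:
            raise MemoryError("capability bounds violation")
        return self.memory[self.base + offset]

    def store(self, offset, value):
        # Writes additionally require the write permission.
        if not self.writable:
            raise PermissionError("capability lacks write permission")
        if not 0 <= offset < self.length:
            raise MemoryError("capability bounds violation")
        self.memory[self.base + offset] = value

memory = bytearray(64)
cap = Capability(memory, base=16, length=8, writable=True)
cap.store(0, 0x7F)        # in bounds: allowed
assert cap.load(0) == 0x7F
try:
    cap.load(8)           # one past the end: trapped, unlike a raw C pointer
except MemoryError:
    pass
```

An out-of-bounds access that a raw C pointer would silently perform becomes a trap here, which is the class of bug CHERI aims to stop.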

The Morello architecture is based on CHERI. Arm kicked off work on hardware for the Morello program in 2019 with backing from the UK government's Digital Security by Design (DSbD) program and UK Research and Innovation (UKRI).

The Morello demonstrator board is a tweaked Arm Neoverse N1, a 2.5GHz quad-core server CPU supporting the Armv8.2a 64-bit architecture, with extra features to enable CHERI-based "compartmentalization" to counter exploits against memory-related security flaws.

"For any research project, this phase is both exciting and critical. There has never been a silicon implementation of this hardware capability technology in a high-performance CPU," said Arm.

The Morello board is a significant advancement for CHERI, which has been in development for over a decade. Saar Amar, of Microsoft's Security Research and Defense team, notes that the top existing implementation of CHERI was Toooba, which, while a "significant achievement", could only run in an FPGA at 50MHz in a dual-core configuration. It was "roughly equivalent in microarchitecture to a mid-'90s CPU", which wasn't good enough for testing complex software stacks at scale.

The CHERI and Morello architectures may be one way of tackling memory-related security flaws that stem from code written in programming languages like C and C++. Microsoft and Google say the majority of security bugs are memory safety issues, often due to code written in these languages.

The volume of these bugs and the patches they require has prompted major software firms like Microsoft, Google and Amazon to explore 'type safe' languages like Rust for systems programming. However, Rust is generally used only to write new components, while vast existing code bases written in C or C++ are left in place, as Google is doing for Android's code base.

The Morello boards are being shared with researchers to test the hypothesis of CHERI's compartmentalization approach and whether it is a viable security architecture for businesses and consumers in the future.

As detailed in a paper about CHERI by Google researcher Ben Laurie and peers, various CHERI modes can be more effective and efficient than mitigations in conventional memory management unit (MMU) hardware, which are used to translate virtual memory addresses to physical addresses.

CHERI allows for software compartmentalization in a similar way to process isolation in software for today's operating systems, notes Laurie. It also includes an in-process memory safety mechanism that avoids the need to make major changes to source code, a potentially major benefit for existing code bases.

"Contemporary type-safe languages prevent bug classes by construction, whereas CHERI memory protection prevents the exploitation of some of these bug classes," writes Microsoft's Amar.

"There are billions of lines of C and C++ code in widespread use, and CHERI's strong source-level compatibility provides a path to achieving the goals of high-performance memory safety without requiring a ground-up rewrite."

The rest is here:

Software is crammed full of bugs. This 'exciting' project could banish most of them - ZDNet

Posted in Uncategorized

Best Practices for Application Security in the Cloud – Security Boulevard

An overview of threats and best practices in all stages of software development in the cloud.

The future of application security is in the cloud. Software development and application deployment continue to move from on-premise to various types of cloud environments. While the basics of application security (AppSec) carry over from on-premise, the cloud introduces new areas of complexity and a new set of requirements.

AppSec best practices for the cloud are somewhat different from standard AppSec best practices. Cloud applications tend to be more segmented into different services and are more likely to use other cloud services, delivered via API, to compose application functionality. AppSec teams may need to coordinate with security and ops teams from cloud service providers (CSPs) to ensure proper coverage and to adapt cloud-specific best practices. This blog covers AppSec cloud best practices and offers a basic framework on how to think about cloud AppSec.

Cloud application security is the discipline of securing application code running in public, private, or hybrid cloud environments. Logically, this means threat modeling for cloud environments and deploying tools and controls to protect applications running in the cloud.

It also involves creating policies and compliance processes that may be different from traditional application security practices used for legacy on-premise application deployments. More specifically, traditional security for applications has focused on the network and infrastructure layer. In the cloud, because applications tend to be more accessible to third-parties via API and incorporate third-party code and services, more care must be taken to secure the application code and application environment itself.

For cloud applications, software development is more likely to involve rapid iterations pushed through Continuous Integration / Continuous Deployment (CI/CD) pipelines. This dynamic is causing security to shift left, with developers increasingly responsible for writing secure code and DevOps teams responsible for testing code with security tooling prior to code submission. For this reason, the AppSec team has an expanded role in defining cloud security best practices, and also in teaching developers and DevOps teams how to better secure applications at the code and CI/CD pipeline stages.

It is critical that AppSec teams understand and plan for their level of responsibility in guarding applications. The different types of cloud environments determine who is responsible for security. In a private cloud, the organization owns full responsibility for the full stack.

For applications running in public cloud service provider (CSP) environments like Amazon Web Services, Microsoft Azure, and Google Cloud, responsibility for application security starts at the operating system layer. That said, AppSec teams should still factor in the risk of compromise of lower layers of the CSP's multi-tenant environment.

For Platform-as-a-Service offerings like RedHat OpenShift or Heroku, security teams are primarily responsible for security of the application code and data.

For SaaS applications, AppSec teams do not need to be involved as full responsibility is on the vendor. The only exception is if a SaaS application integrates directly into a cloud application, in which case the AppSec team must be mindful of the risks of this integration and apply controls against those risks, e.g., data loss protection or payment gateway abuse. The reality is that in an era of microservices and APIs, application security rarely stops at the application or cloud edge.

Cloud applications face the same threats as on-premise applications plus several additional risk types. The list of threats that AppSec teams must guard against includes:

For best results, think about your cloud AppSec practice as segmented into stages. The first stage, application development, requires a certain set of best practices. The second stage, formal application security, requires an overlapping but slightly different set of practices. The third stage, DevOps and production, requires yet another overlapping set of practices. The three stages do tend to blend together in rapidly iterating application development organizations but this remains a useful guide to building a cloud AppSec best practices playbook.

For developers responsible for shifting application security left, key considerations and best practices include:

AppSec teams often conduct their own security reviews on top of existing efforts by development teams. As advanced security practitioners, AppSec teams should apply a broad range of security measures and best practices more appropriate to a discrete security discipline. Specifically, AppSec, working with the network security and operations teams, should put in place, or at least verify and help configure, solutions for the following:

DevOps manages CI/CD solutions and controls application code deployment and lifecycle. DevOps is responsible for implementing any of the elements of AppSec practices that work at the CI/CD level. This may include:

Cloud AppSec practices will continue to evolve. What we have detailed here is a starting point. Because cloud and cloud services are changing so rapidly, it is important to review cloud AppSec best practices and playbooks frequently. Just as the lines of responsibility between networking, development, and operations have blurred, so have the lines in cloud AppSec. Throughout, cooperation among all stakeholders is essential.

Responsibility for security is shifting left but the AppSec team remains the quarterback and the ultimate accountable party for ensuring that cloud applications remain safe and performant. Creating a detailed runbook for cloud AppSec and the responsibilities of the different stakeholders will help clarify your cloud AppSec approach and create a practice guide you can follow to continuously evolve and improve your cloud security.

To shift left and get started with cloud AppSec in the development stage, create a free account with a modern static analysis tool. A single scan from ShiftLeft CORE finds vulnerabilities in custom code, CVEs in open-source code, and hard-coded secrets. It is delivered as SaaS, so it is easy for DevOps to integrate into your CI/CD, and because it never takes source code off your servers, it is a safe alternative to on-prem tools.

Best Practices for Application Security in the Cloud was originally published in ShiftLeft Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

*** This is a Security Bloggers Network syndicated blog from ShiftLeft Blog - Medium authored by The ShiftLeft Team. Read the original post at: https://blog.shiftleft.io/best-practices-for-application-security-in-the-cloud-dd1ce72cca26?source=rss----86a4f941c7da---4


What are the main advantages and disadvantages of PaaS? – TechTarget

A decade ago, everyone was talking about moving applications to the cloud, meaning uprooting something running on a private server and taking it to a cloud provider. The original models of cloud computing -- IaaS, PaaS and SaaS -- reflect three ways of doing that. What's happened instead is that the cloud has become more of a universal front end to legacy data center applications.

Little of what runs in the cloud ever ran elsewhere; it was developed for the cloud, and cloud providers quickly realized that. They created web services or hosted features that developers could use to build applications. These services created the successor to the old PaaS cloud model, and when people talk about PaaS today, they're referring to these services.

There are four major advantages to modern PaaS:

Most enterprises that adopt a PaaS cloud model today do so because of one or more of these benefits. And the majority say that the greatest benefits of PaaS are accrued during project development and maintenance, where cloud provider tools improve project quality and accelerate the delivery of results.

For all the positives of PaaS, there are three significant negatives as well. Enterprises agree that the upsides of PaaS are most visible to development teams, and the downsides most visible to CFOs. The most significant are the following:

The best way to get the most out of PaaS is to plan accordingly. The risks of PaaS can be minimized by fully assessing the costs of using PaaS tools for application development and deployment. Enterprises can sometimes reduce costs through careful feature selection, and all cloud providers offer tools to estimate costs. If an enterprise has good data on application usage, it can avoid cost surprises that would incur the wrath of senior management.
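A cost assessment like the one described above can start from a simple model that multiplies expected usage by unit prices. The sketch below is a rough illustration, not a real pricing calculator: the unit prices are hypothetical placeholders, since actual CSP pricing varies by provider, region, and tier.

```python
# Hypothetical unit prices for a serverless PaaS workload; real CSP
# pricing varies by provider, region, and tier.
PRICING = {
    "invocations_per_million": 0.20,   # per million function invocations
    "gb_second": 0.0000166667,         # per GB-second of compute
    "api_gateway_per_million": 3.50,   # per million API gateway requests
}

def monthly_estimate(invocations: int, avg_ms: float, mem_gb: float) -> float:
    """Rough monthly cost estimate from expected application usage."""
    # Compute charge is billed on memory-seconds consumed.
    gb_seconds = invocations * (avg_ms / 1000.0) * mem_gb
    cost = (
        invocations / 1_000_000 * PRICING["invocations_per_million"]
        + gb_seconds * PRICING["gb_second"]
        + invocations / 1_000_000 * PRICING["api_gateway_per_million"]
    )
    return round(cost, 2)

if __name__ == "__main__":
    # 10M requests/month, 100 ms average duration, 512 MB memory.
    print(monthly_estimate(10_000_000, 100, 0.5))
```

Running the estimate across realistic low, expected, and peak usage figures is one way to surface the cost surprises the article warns about before they reach the CFO.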

PaaS benefits can also be optimized. Planning is the key to this as well. Cloud providers often offer multiple ways of doing essentially the same thing -- high-level PaaS features aimed at IoT, for example, that are really wrappers around lower-level features such as event handling. You might not need all the high-level features, and if that's the case, the benefits won't offset the costs.

The most difficult problem to address in PaaS is portability. Tools are likely to be implemented differently across cloud providers, and that increases the cost of sustaining a multi-cloud deployment or changing cloud providers. One way to address this is to design applications so that cloud provider-specific features are contained in small software modules that can be changed easily or switched out for multi-cloud deployment, or if another cloud provider offers a better deal.
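The module-isolation approach described above is essentially the adapter pattern: the application depends on a small interface, and the provider-specific code lives behind it in one swappable module. The sketch below is a minimal illustration under that assumption; the `EventBus` interface and class names are hypothetical, and a real deployment would add an implementation per provider SDK.

```python
from abc import ABC, abstractmethod

class EventBus(ABC):
    """Small interface the application depends on; provider SDKs stay
    hidden behind concrete implementations of this class."""

    @abstractmethod
    def publish(self, topic: str, payload: dict) -> None:
        ...

class InMemoryEventBus(EventBus):
    """Local/test implementation. A real deployment would supply an
    AwsEventBus, AzureEventBus, etc. in this one module, so switching
    providers touches only that module."""

    def __init__(self) -> None:
        self.events: list[tuple[str, dict]] = []

    def publish(self, topic: str, payload: dict) -> None:
        self.events.append((topic, payload))

def order_placed(bus: EventBus, order_id: str) -> None:
    # Application code imports no cloud SDK; it only sees EventBus.
    bus.publish("orders", {"id": order_id, "status": "placed"})
```

Because only the concrete `EventBus` implementations know about a provider, moving to another cloud (or to multi-cloud) means writing one new adapter rather than auditing every call site.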

These measures work where there's a modest number of specialized PaaS tools involved, but they can be difficult to apply when there's a lot of software and a lot of PaaS tools associated with the software. In that case, it's wise to look at the idea of separating PaaS tools from the cloud provider completely.

Enterprises generally agree that the best alternative to cloud provider PaaS is what can be called private PaaS, which means building applications on middleware tools designed to be portable across cloud providers and hosted directly via IaaS VM or containers. This, if done properly, can eliminate most of the risks of PaaS while retaining the main benefits.

The key to success with this approach is minimizing the number of software sources required to create the private PaaS. Try to lay out all PaaS requirements for current and future applications, and then use that list to find software sources, starting with software providers that can fulfill the largest number of PaaS needs. Enterprises that acquire their private PaaS software from an open source supplier rather than building their own tools from source code generally report having fewer issues with managing compatibility across tools and libraries.

Private PaaS is more work, and the acquired PaaS tools likely won't be free, so it's essential to compare the costs and benefits of private PaaS with those of traditional public cloud PaaS. Enterprises should also look at how well private PaaS tools perform compared with public PaaS. Cloud providers' implementations of PaaS tools can take advantage of relationships with cloud provider infrastructure that aren't exposed to users, and thus aren't available to private PaaS implementations.

Cloud provider relationships with software vendors, increasingly common in the cloud market, can offer an easier pathway to private PaaS. Look first at the tools available from a source that's affiliated with all your cloud options, and then compare them with the costs and benefits of others, as you would with public cloud PaaS tools.

There's no easy way to tell how to balance the pluses and minuses of PaaS. Every enterprise must look at each benefit and risk and assign a value to it based on their own operations. It's also important to track any shifts in those values created by changes in cloud provider services and pricing, company application usage and traffic, and expenses and capital costs. Keeping careful notes on how each plus and minus is assessed -- each time an assessment is made -- is essential to getting the best results over time.
