Boycott 7-Zip Because It’s Not On Github; Seriously? – PC Perspective

There is a campaign on Reddit that is gaining some traction calling for a boycott of the software because, in true "no true Scotsman" fashion, it is not truly Open Source. The objection raised is that 7-Zip is not present on Github, Gitlab, nor any public code hosting and therefore is not actually Open Source. The fact that those sites do not appear at all in the Open Source Initiative's official definition of open source software doesn't seem to dissuade those calling for the boycott whatsoever.

You can indeed find the source code for 7-Zip on Sourceforge, an arguably much easier site to deal with than the Gits, and it is licensed under the GNU Lesser GPL. That would qualify it as open source software; the use of the LGPL is likely because 7-Zip includes the unRAR library to be able to extract RAR files, and that requires a license from RARLAB.

Their evidence of the lack of 7-Zip's openness is based on comments from a 12-year-old Reddit thread and the fact that sometimes there are security vulnerabilities in the software. As The Register points out, the existence of the Nanazip fork of 7-Zip, and the fact that 7-Zip has no problems with it, is much stronger evidence that the software is indeed open source.

You can find a link to the thread in the article, if you want to participate in one of the internet's current pointless arguments.

Read the original post:

Boycott 7-Zip Because It's Not On Github; Seriously? - PC Perspective

It's a Race to Secure the Software Supply Chain: Have You Already Stumbled? – DARKReading

The digital world is ever-increasing in complexity and interconnectedness, and that's nowhere more apparent than in software supply chains. Our ability to build upon other software components means we innovate faster and build better products and services for everyone. But our dependence on third-party software and open source increases the complexity of how we must defend digital infrastructure.

Our recent survey of cybersecurity professionals found one-third of respondents monitor less than 75% of their attack surface, and almost 20% believe that over half of their attack surface is unknown or not observable. Log4Shell, Kaseya, and SolarWinds exposed how these statistics can manifest as devastating breaches with wide-reaching consequences. Cybercriminals already know supply chains are highly vulnerable to exploitation.

Last year, a threat actor exploited a vulnerability in Virtual System Administrator (VSA) provider Kaseya to inject REvil ransomware into code for VSA. Kaseya supported thousands of managed service providers (MSPs) and enterprises, and its breach compromised a critical network within thousands of organizations. Consequently, these organizations' internal systems were also compromised.

The ripple effect that Kaseya had on its customers can happen to any organization that uses a third-party software vendor. The European Union Agency for Cybersecurity (ENISA) analyzed 24 recent software supply chain attacks and concluded that strong security protection is no longer enough. The report found supply chain attacks increased in number and sophistication in 2020, continued in 2021, and, based on recent attacks by Lapsus$, are likely to carry over through 2022.

Similar to third-party software vendors, but at an even greater magnitude, open source code has a devastating impact on digital function if left insecure; the havoc wreaked by Log4Shell illustrates this. These consequences are partly because open source software remains foundational to nearly all modern digital infrastructure and every software supply chain. The average application uses more than 500 open source components. Yet the limited resources, training, and time available to the maintainers who voluntarily support projects mean they struggle to remediate the vulnerabilities. These factors have likely contributed to high-risk open source vulnerabilities remaining in code for years.

This issue demands immediate action. That's why the National Institute of Standards and Technology (NIST) released its security guidelines in February. But why are we still so slow to try and secure the software supply chain effectively? Because it's tough to know where to start. It's challenging to keep up with security updates for your own software and new products, let alone police other vendors to ensure they match your organization's standards. To add more complexity, many of the open source components that underpin digital infrastructure lack the proper resources for project maintainers to keep these components fully secure.

So, how do we secure it? It all looks pretty daunting, but here's where you can start.

First, get your house in order and identify your attack resistance gap: the space between what organizations can defend and what they need to defend. Know your supply chain and implement strategies that set teams up for success:

Then, enforce your strategies and standards to maintain security for your organization and the collective security of the Internet:

Most in the cybersecurity community are familiar with Murphy's Law: "Everything that can go wrong, will." It defines the mindset of anyone working in this field. And if my experience in this industry has taught me anything, it is that you just have to do your best to keep up with the inevitable increase in challenges, risks, and complexity of securing digital assets. Part of staying ahead of these challenges is remaining highly proactive about your security best practices, and if you haven't properly secured your software supply chain yet, you're already behind. But even if you've had a false start, the good news is that it's never too late to get back up.

Read more here:

It's a Race to Secure the Software Supply Chain: Have You Already Stumbled? - DARKReading

Developing a Cloud-Native Application on Microsoft Azure Using Open Source Technologies – InfoQ.com

Key Takeaways

Cloud native is a development approach that improves the building, maintainability, scalability, and deployment of applications. My intention with this article is to explain, in a pragmatic way, how to build, deploy, run, and monitor a simple cloud-native application on Microsoft Azure using open-source technologies.

This article demonstrates how to build a cloud-native application replicating real-case scenarios through a demo application, guiding the reader step by step.

Without any doubt, one of the latest trends in software development is the term cloud native. But what exactly is a cloud-native application?

Cloud-native applications are applications built around various cloud technologies or services hosted in the (private/public) cloud. Cloud native is a development approach that improves the building, maintainability, scalability, and deployment of applications. Often they are distributed systems (commonly designed as microservices), and they also use DevOps methodologies to automate application building and deployment, which can be done at any time on demand. Usually these applications provide APIs through standard protocols such as REST or gRPC, and they are interoperable through standard tools such as Swagger (OpenAPI).

The demo app is quite simple, but at the same time it involves a number of factors and technologies combined to replicate the fundamentals of a real-case scenario. This demo doesn't include anything about authentication and authorization because, in my opinion, it would add complexity that is not required to explain the topic of this article.

Simple Second-Hand Store application

Simple Second-Hand Store (SSHS) is the name of the demo app described in this article.

SSHS system design overview.

SSHS is a simple cloud-native application that sells second-hand products. Users can create, read, update, and delete products. When a product is added to the platform, the owner of that product receives an email confirming the operation's success.

When developing a microservice architecture, decomposing the business requirements into a set of services is actually the first step. A few of the principles to follow are:

These simple principles help to build consistent and robust applications, gaining all the advantages that a distributed system can provide. Keep in mind that designing and developing distributed applications is not an easy task, and ignoring a few rules could lead to both monolithic and microservices issues. The next section explains by example how to put them into practice.

It is easy to identify two contexts in the Simple Second-Hand Store (SSHS) application: the first one is in charge of handling product creation and persistence. The second context is all about notification and is actually stateless.

Coming down to the application design, there are two microservices:

At a higher level, microservices can be considered a group of subsystems that compose a single application. And, as in traditional applications, components need to communicate with other components. In a monolithic application you can do this by adding some abstraction between different layers, but of course that is not possible in a microservice architecture since the code base is not the same. So how can microservices communicate? The easiest way is through the HTTP protocol: each service exposes some REST APIs for the other one and they can easily communicate. Although this may sound good at first, it introduces dependencies into the system. For example, if service A needs to call service B to reply to the client, what happens if service B is down or just slow? Why should service B's performance affect service A, spreading the outage to the entire application?

This is where asynchronous communication patterns come into play, to help keep components loosely coupled. Using asynchronous patterns, the caller doesn't need to wait for a response from the receiver; instead, it throws a fire-and-forget event, and then someone will catch this event to perform some action. I used the word someone because the caller has no idea who is going to read the event; maybe no one will catch it.

This pattern is generally called pub/sub, where a service publishes events and others may subscribe to them. Events are generally published on another component called an event bus that works like a FIFO (first in, first out) queue.

Of course, there are more sophisticated patterns than the FIFO queue, even if it is still used a lot in real environments. For example, an alternative scenario may have consumers subscribing to a topic rather than a queue, copying and consuming messages belonging to that topic and ignoring the rest. A topic is, generally speaking, a property of the message, such as the subject in AMQP (Advanced Message Queuing Protocol) terms.

Using asynchronous patterns, service B can react to events occurring in service A, but service A doesn't know anything about who the consumers are and what they are doing. And obviously its performance is not affected by other services. They are completely independent from each other.

NOTE: Unfortunately, sometimes using an asynchronous pattern is not possible, and even though synchronous communication is an antipattern, there is no alternative. This shall not become an excuse to build things quicker, but keep in mind that in some specific scenarios it may happen. Do not feel too guilty if you have no alternatives.

In the SSHS application, microservices don't need direct communication, since the Nts service must react to events that occur on the Pc service. This can clearly be done as an asynchronous operation, through a message on a queue.
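To make the idea concrete, a fire-and-forget publish with the Azure Service Bus SDK only takes a few lines. This is an illustrative sketch, not code from the SSHS repository: the queue name, connection string variable, and payload shape are assumptions.

```csharp
using System;
using System.Text.Json;
using Azure.Messaging.ServiceBus;

// Publisher side (Pc service): announce that a product was added, then move on.
// The consumer (Nts service) picks the message up whenever it is ready.
var connectionString = Environment.GetEnvironmentVariable("SERVICEBUS_CONNECTION");
await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("product-added");

var body = JsonSerializer.Serialize(new { Owner = "owner@example.com", ProductName = "Vintage lamp" });
await sender.SendMessageAsync(new ServiceBusMessage(body));
```

The caller never waits for the Nts service; if no consumer is listening, the message simply sits in the queue until one subscribes.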

For the same reasons exposed in the Communication between microservices paragraph, to keep services independent from each other, a different storage for each service is required. It doesn't matter if a service has one or more storages using a multitude of technologies (often they have both SQL and NoSQL); each service must have exclusive access to its repository, not only for performance reasons, but also for data integrity and normalization. The business domain can differ significantly between services, and each service needs its own database schema, which may look very different from one microservice to another. On the other hand, the application is usually decomposed following business bounded contexts, and it is quite normal to see schemas diverge over time, even if at the beginning they may look the same. Summarizing, merging everything together leads back to monolithic application issues, so why use a distributed system?

The Notifications service doesn't have any data to persist, while the ProductCatalog service offers some CRUD APIs to manage uploaded products. These are persisted in a SQL database, since the schema is well defined and the flexibility given by NoSQL storage is not needed in this scenario.

Both services are ASP.NET applications running on .NET 6 that can be built and deployed using Continuous Integration (CI) and deployment techniques. In fact, GitHub hosts the repository, and the build and deployment pipelines are scripted on top of GitHub Actions. Cloud infrastructure is scripted using a declarative approach, to provide a full IaC (Infrastructure as Code) experience using Terraform. The Pc service stores data in a Postgresql database and communicates with the Nts service using a queue on an event bus. Sensitive data such as connection strings are stored in a secure place on Azure and not checked into the code repository.

Before starting: the following sections don't explain each step in detail (such as creating solutions and projects) and are aimed at developers who are familiar with Visual Studio or similar tools. However, the GitHub repository link is at the end of this post.

To get started with the SSHS development, first create the repository and define the folder structure. The SSHS repository is defined as:

Just focus on a few things for now:

NOTE: Disable the nullable flag in the csproj file; it is usually enabled by default in .NET 6 project templates.

The ProductCatalog service needs to provide APIs to manage products, and to better support this scenario we use Swagger (Open API) to give some documentation to consumers and make the development experience easier.

Then there are dependencies: database and event bus. To get access to the database, it is going to use Entity Framework.

Finally, a secure storage service, Azure KeyVault, is required to safely store connection strings.

The new ASP.NET Core 6 application templates in Visual Studio don't provide a Startup class anymore; instead, everything is in the Program class. Well, as discussed in the ProductCatalog deployment paragraph, there is a bug related to this approach, so let's create a Startup class:
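As a minimal sketch (the exact contents grow in the following sections; only controller support is registered here), such a Startup class can look like this:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace ProductCatalog
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // Registers application services in the IoC container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();
        }

        // Configures the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.UseRouting();
            app.UseEndpoints(endpoints => endpoints.MapControllers());
        }
    }
}
```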

Then replace the Program.cs content with the following code:
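One possible Program.cs that delegates to the Startup class uses the generic host, the same shape as the pre-.NET 6 templates; consider it a sketch rather than the repository's exact code:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using ProductCatalog;

// Build the generic host and hand configuration over to the Startup class.
Host.CreateDefaultBuilder(args)
    .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>())
    .Build()
    .Run();
```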

The next step is about writing some simple CRUD APIs to manage the products. Here's the controller definition:
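A sketch of such a controller, delegating all the work to a product service; the route, DTO, and service names are illustrative assumptions defined right below:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using ProductCatalog.Dtos;
using ProductCatalog.Services;

namespace ProductCatalog.Controllers
{
    [ApiController]
    [Route("api/products")]
    public class ProductsController : ControllerBase
    {
        private readonly IProductService _productService;

        public ProductsController(IProductService productService)
        {
            _productService = productService;
        }

        [HttpGet]
        public async Task<IActionResult> GetAll()
            => Ok(await _productService.GetAllAsync());

        [HttpGet("{id:guid}")]
        public async Task<IActionResult> GetById(Guid id)
        {
            var product = await _productService.GetByIdAsync(id);
            if (product is null)
                return NotFound();

            return Ok(product);
        }

        [HttpPost]
        public async Task<IActionResult> Create(ProductRequest request)
        {
            var created = await _productService.CreateAsync(request);
            return CreatedAtAction(nameof(GetById), new { id = created.Id }, created);
        }

        [HttpPut("{id:guid}")]
        public async Task<IActionResult> Update(Guid id, ProductRequest request)
        {
            await _productService.UpdateAsync(id, request);
            return NoContent();
        }

        [HttpDelete("{id:guid}")]
        public async Task<IActionResult> Delete(Guid id)
        {
            await _productService.DeleteAsync(id);
            return NoContent();
        }
    }
}
```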

The ProductService definition is:
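As a sketch, the contract consumed by the controller above could be an interface like this; the method names and DTO types are assumptions that the rest of the examples reuse:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using ProductCatalog.Dtos;

namespace ProductCatalog.Services
{
    public interface IProductService
    {
        Task<IEnumerable<ProductResponse>> GetAllAsync();
        Task<ProductResponse> GetByIdAsync(Guid id);
        Task<ProductResponse> CreateAsync(ProductRequest request);
        Task UpdateAsync(Guid id, ProductRequest request);
        Task DeleteAsync(Guid id);
    }
}
```

The concrete ProductService class that implements it appears in the persistence section, once Entity Framework is in place.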

And finally, define the (very simple) DTO classes:
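For example, a request/response pair along these lines would do; apart from Owner, which is called out explicitly below, the properties are illustrative assumptions:

```csharp
using System;

namespace ProductCatalog.Dtos
{
    // Incoming payload used to create or update a product.
    public class ProductRequest
    {
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        // Email address of the product owner, notified when the product is added.
        public string Owner { get; set; }
    }

    // Outgoing representation of a persisted product.
    public class ProductResponse
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        public string Owner { get; set; }
    }
}
```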

The Owner property should contain the email address to notify when a product is added to the system. I haven't added any kind of validation since it is a huge topic not covered in this post.

Then, register the ProductService in the IoC container in the Startup class, for example with services.AddScoped<IProductService, ProductService>();.

Often cloud-native applications use Open API to make API testing and documentation easier. The official definition is:

The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic.

Long story short: OpenAPI is a nice UI to quickly consume APIs and read their documentation, perfect for development and testing environments, NOT for production. However, since this is a demo app, I kept it enabled in all environments. As an attention flag, I included some commented-out code that excludes it from the Production environment.

To add Open API support, install the Swashbuckle.AspNetCore NuGet package in the Pc project and update the Startup class:
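A possible wiring of Swashbuckle inside the Startup class, assuming the SwaggerApiInfo configuration section described in the note below; the exact options are a sketch:

```csharp
// Required namespaces: System, System.IO, System.Reflection, Microsoft.OpenApi.Models

// Startup.ConfigureServices
services.AddSwaggerGen(options =>
{
    var apiInfo = Configuration.GetSection("SwaggerApiInfo");
    options.SwaggerDoc("v1", new OpenApiInfo { Title = apiInfo["Title"], Version = "v1" });

    // Feed the XML documentation file generated at build time to Swagger.
    var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
    options.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, xmlFile));
});

// Startup.Configure
app.UseSwagger();
app.UseSwaggerUI(options =>
    options.SwaggerEndpoint("/swagger/v1/swagger.json", Configuration["SwaggerApiInfo:Name"]));
```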

Enable the XML documentation file generation in the csproj. These documentation files are read by Swagger and shown in the UI:

NOTE: Add a section named SwaggerApiInfo to the appsettings.json file, with two inner properties whose values are of your choice: Name and Title.

Add some documentation to the APIs, just like in the following example:
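As an illustration, XML comments combined with ProducesResponseType attributes on a controller action are what ends up rendered in the Swagger UI; this is a documented variant of the GetById action sketched earlier:

```csharp
// Requires Microsoft.AspNetCore.Http for StatusCodes.

/// <summary>
/// Returns the product with the given identifier.
/// </summary>
/// <param name="id">The product identifier.</param>
/// <response code="200">The product was found.</response>
/// <response code="404">No product exists with the given identifier.</response>
[HttpGet("{id:guid}")]
[ProducesResponseType(typeof(ProductResponse), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
public async Task<IActionResult> GetById(Guid id)
{
    var product = await _productService.GetByIdAsync(id);
    if (product is null)
        return NotFound();

    return Ok(product);
}
```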

Now, run the application and navigate to localhost:<port>/index.html. Here you can see how Swagger UI shows all the details specified in the C# code documentation: API descriptions, schemas of accepted types, status codes, supported media types, a sample request, and so on. This is extremely useful when working in a team.

Even though this is just an example, it is a good practice to add GZip compression to API responses in order to improve performance. Open the Startup class and add the following lines:
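A sketch of the response-compression setup, using the GZip provider that ships with ASP.NET Core:

```csharp
// Required namespace: Microsoft.AspNetCore.ResponseCompression

// Startup.ConfigureServices
services.AddResponseCompression(options =>
{
    options.EnableForHttps = true;
    options.Providers.Add<GzipCompressionProvider>();
});

// Startup.Configure (place it early in the pipeline)
app.UseResponseCompression();
```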

To handle errors, custom exceptions and a custom middleware are used:
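One common shape for this is a middleware that catches exceptions and maps them to HTTP status codes; the NotFoundException type below is a hypothetical custom exception thrown by the service layer:

```csharp
using System;
using System.Net;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

namespace ProductCatalog.Middlewares
{
    // Hypothetical custom exception thrown when an entity does not exist.
    public class NotFoundException : Exception
    {
        public NotFoundException(string message) : base(message) { }
    }

    // Catches unhandled exceptions and turns them into JSON error responses.
    public class ExceptionHandlingMiddleware
    {
        private readonly RequestDelegate _next;

        public ExceptionHandlingMiddleware(RequestDelegate next) => _next = next;

        public async Task InvokeAsync(HttpContext context)
        {
            try
            {
                await _next(context);
            }
            catch (Exception ex)
            {
                context.Response.ContentType = "application/json";
                context.Response.StatusCode = ex switch
                {
                    NotFoundException => (int)HttpStatusCode.NotFound,
                    _ => (int)HttpStatusCode.InternalServerError
                };

                await context.Response.WriteAsync(JsonSerializer.Serialize(new { error = ex.Message }));
            }
        }
    }
}
```

It is registered in Startup.Configure with app.UseMiddleware<ExceptionHandlingMiddleware>(); before app.UseRouting().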

The Pc application needs to persist data, the products, in storage. Since the Product entity has a specific schema, a SQL database suits this scenario. In particular, Postgresql is an open-source transactional database offered as a PaaS service on Azure.

Entity Framework is an ORM, a tool that makes the object translation between SQL and the OOP language easier. Even though SSHS performs very simple queries, the goal is to simulate a real scenario where ORMs (and eventually micro-ORMs, such as Dapper) are heavily used.

Before starting, run a local Postgresql instance for the development environment. My advice is to use Docker, especially for Windows users. Now, install Docker if you don't have it yet, and run docker run -p 127.0.0.1:5432:5432/tcp --name postgres -e POSTGRES_DB=product_catalog -e POSTGRES_USER=sqladmin -e POSTGRES_PASSWORD=Password1! -d postgres.

For more information, you can refer to the official documentation.

Once the local database is running properly, it is time to get started with Entity Framework for Postgresql. Let's install these NuGet packages:

Define the entities, starting with the Product class:
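A sketch of the entity; the column set mirrors the DTOs used earlier and is an assumption:

```csharp
using System;

namespace ProductCatalog.Entities
{
    // Row persisted in the product_catalog Postgresql database.
    public class Product
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public decimal Price { get; set; }
        // Email address of the owner, later used to send the notification.
        public string Owner { get; set; }
        public DateTime CreatedAt { get; set; }
    }
}
```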

Create a DbContext class, which will be the gateway to the database, and define the mapping rules between the SQL objects and the CLR objects:
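A sketch of such a DbContext, matching the behaviour described in the next paragraph (assembly scanning for configurations plus lazy-loading proxies):

```csharp
using Microsoft.EntityFrameworkCore;
using ProductCatalog.Entities;

namespace ProductCatalog.Data
{
    public class ProductCatalogDbContext : DbContext
    {
        public ProductCatalogDbContext(DbContextOptions<ProductCatalogDbContext> options)
            : base(options)
        {
        }

        // Exposes the products table as a collection-like property.
        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // Pick up every IEntityTypeConfiguration<T> defined in this assembly.
            modelBuilder.ApplyConfigurationsFromAssembly(typeof(ProductCatalogDbContext).Assembly);
        }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            // Requires the Microsoft.EntityFrameworkCore.Proxies package.
            optionsBuilder.UseLazyLoadingProxies();
        }
    }
}
```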

The DbSet property represents the data persisted in storage as an in-memory collection; the OnModelCreating override scans the running assembly looking for all the classes that implement the IEntityTypeConfiguration interface in order to apply custom mappings. The OnConfiguring override, instead, enables the Entity Framework proxies to lazy load relationships between tables. This isn't strictly needed here since we have a single table, but it is a nice tip to improve performance in a real scenario. The feature is provided by the NuGet package Microsoft.EntityFrameworkCore.Proxies.

Finally, the ProductEntityConfiguration class defines some mapping rules:
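For instance (table name and column constraints are assumptions; the Id rule matches the note that follows):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;
using ProductCatalog.Entities;

namespace ProductCatalog.Data
{
    public class ProductEntityConfiguration : IEntityTypeConfiguration<Product>
    {
        public void Configure(EntityTypeBuilder<Product> builder)
        {
            builder.ToTable("products");

            builder.HasKey(p => p.Id);
            // The Guid is generated by the database when the row is inserted.
            builder.Property(p => p.Id).ValueGeneratedOnAdd();

            builder.Property(p => p.Name).IsRequired().HasMaxLength(256);
            builder.Property(p => p.Owner).IsRequired().HasMaxLength(256);
            builder.Property(p => p.Price).IsRequired();
        }
    }
}
```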

It is important to remember that the Guid is generated after the creation of the SQL object. If you need to generate the Guid before the SQL object is created, you can use HiLo (more info here).

Finally, update the Startup class with the latest changes:
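That essentially means registering the DbContext against the Postgresql provider; a sketch, assuming the Npgsql.EntityFrameworkCore.PostgreSQL provider package and the connection string name used later in the docker run command:

```csharp
// Required namespaces: Microsoft.EntityFrameworkCore, ProductCatalog.Data

// Startup.ConfigureServices
services.AddDbContext<ProductCatalogDbContext>(options =>
    options.UseNpgsql(Configuration.GetConnectionString("ProductCatalogDbPgSqlConnection")));
```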

The database connection string is sensitive information, so it shouldn't be stored in the appsettings.json file. For debugging purposes, UserSecrets can be used. It is a feature provided by the .NET tooling to store sensitive information that shouldn't be checked into the code repository. If you are using Visual Studio, right-click on the project and select Manage user secrets; if you are using any other editor, open the terminal, navigate to the csproj file location, and type dotnet user-secrets init. The csproj file now contains a UserSecretsId node with a Guid to identify the project secrets.

There are three different ways to set a secret now:

The secrets.json file should look as follows:

Let's get on with the ProductService implementation:
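A sketch of an Entity Framework-backed implementation of the IProductService contract introduced earlier; the mapping and exception choices are assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using ProductCatalog.Data;
using ProductCatalog.Dtos;
using ProductCatalog.Entities;
using ProductCatalog.Middlewares;

namespace ProductCatalog.Services
{
    public class ProductService : IProductService
    {
        private readonly ProductCatalogDbContext _dbContext;

        public ProductService(ProductCatalogDbContext dbContext) => _dbContext = dbContext;

        public async Task<IEnumerable<ProductResponse>> GetAllAsync()
        {
            var products = await _dbContext.Products.AsNoTracking().ToListAsync();
            return products.Select(ToResponse);
        }

        public async Task<ProductResponse> GetByIdAsync(Guid id)
        {
            var product = await _dbContext.Products.FindAsync(id);
            return product is null ? null : ToResponse(product);
        }

        public async Task<ProductResponse> CreateAsync(ProductRequest request)
        {
            var product = new Product
            {
                Name = request.Name,
                Description = request.Description,
                Price = request.Price,
                Owner = request.Owner,
                CreatedAt = DateTime.UtcNow
            };

            _dbContext.Products.Add(product);
            await _dbContext.SaveChangesAsync();
            // The "product added" event for the Nts service would be published here.
            return ToResponse(product);
        }

        public async Task UpdateAsync(Guid id, ProductRequest request)
        {
            var product = await _dbContext.Products.FindAsync(id)
                          ?? throw new NotFoundException($"Product {id} not found");

            product.Name = request.Name;
            product.Description = request.Description;
            product.Price = request.Price;
            product.Owner = request.Owner;
            await _dbContext.SaveChangesAsync();
        }

        public async Task DeleteAsync(Guid id)
        {
            var product = await _dbContext.Products.FindAsync(id)
                          ?? throw new NotFoundException($"Product {id} not found");

            _dbContext.Products.Remove(product);
            await _dbContext.SaveChangesAsync();
        }

        private static ProductResponse ToResponse(Product product) => new ProductResponse
        {
            Id = product.Id,
            Name = product.Name,
            Description = product.Description,
            Price = product.Price,
            Owner = product.Owner
        };
    }
}
```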

The next step is about creating the database schema through migrations. The Migrations tool incrementally updates the database schema to keep it in sync with the application data model while preserving existing data. The details of the migrations applied to the database are stored in a table called "__EFMigrationsHistory". This information is then used to apply only the pending migrations to the database specified in the connection string.

To define the first migration, open the CLI in the csproj folder and run

dotnet-ef migrations add "InitialMigration" (the migration is stored in the Migrations folder). Then update the database with the migration just created: dotnet-ef database update.

NOTE: If this is the first time you are going to run migrations, install the CLI tool first using dotnet tool install --global dotnet-ef.

As I've said, user secrets only work in the Development environment, so Azure KeyVault support must be added. Install the Azure.Identity package and edit Program.cs:
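A sketch of what that edit can look like; the AddAzureKeyVault configuration extension used here comes from the Azure.Extensions.AspNetCore.Configuration.Secrets package (an assumption on my part), while DefaultAzureCredential comes from Azure.Identity:

```csharp
using System;
using Azure.Identity;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using ProductCatalog;

Host.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration((context, config) =>
    {
        // Locally, user secrets are used instead of KeyVault.
        if (!context.HostingEnvironment.IsDevelopment())
        {
            // Replace <keyvault-name> with the vault created by the Terraform scripts.
            config.AddAzureKeyVault(
                new Uri("https://<keyvault-name>.vault.azure.net/"),
                new DefaultAzureCredential());
        }
    })
    .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>())
    .Build()
    .Run();
```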

where <keyvault-name> is the KeyVault name that will be declared in the Terraform scripts later.

The ASP.NET Core SDK offers libraries for reporting application health through REST endpoints. Install the Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore NuGet package and configure the endpoints in the Startup class:
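A sketch of the registration; the endpoint paths follow the description below, while the "dbcontext" check name is an assumption:

```csharp
// Required namespaces: Microsoft.AspNetCore.Diagnostics.HealthChecks, ProductCatalog.Data

// Startup.ConfigureServices
services.AddHealthChecks()
    .AddDbContextCheck<ProductCatalogDbContext>("dbcontext");

// Startup.Configure
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();

    // Overall health status: Healthy, Degraded or Unhealthy.
    endpoints.MapHealthChecks("/health/ping");

    // Only the Entity Framework DbContext check.
    endpoints.MapHealthChecks("/health/dbcontext", new HealthCheckOptions
    {
        Predicate = registration => registration.Name == "dbcontext"
    });
});
```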

The code above adds two endpoints: at the /health/ping endpoint the application responds with the health status of the system. Default values are Healthy, Unhealthy, or Degraded, but they can be customized. The /health/dbcontext endpoint, instead, gives back the current Entity Framework DbContext status, so basically whether the app can communicate with the database. Note that the NuGet package mentioned above is the one specific to Entity Framework, which internally references Microsoft.Extensions.Diagnostics.HealthChecks. If you don't use EF, you can use the latter only.

You can get more info in the official documentation.

The last step to complete the Pc project is to add a Dockerfile. Since Pc and Nts are independent projects, it is important to have a single Dockerfile per project. Create a Docker folder in the ProductCatalog project, then define a .dockerignore file and the Dockerfile:

NOTE: Don't forget to add a .dockerignore file as well. On the internet there are plenty of examples based on specific technologies, .NET Core in this case.

NOTE: If your Docker build gets stuck on the dotnet restore command, you have hit a bug documented here. To fix it, add this node to the csproj:

and add /p:IsDockerBuild=true to both restore and publish commands in the Dockerfile as explained in this comment.

To try this Dockerfile locally, navigate with your CLI to the project folder and run:

docker build -t productcatalog -f Docker/Dockerfile ., where:

Then run the image using:

docker run --name productcatalogapp -p 8080:80 -e ConnectionStrings:ProductCatalogDbPgSqlConnection="Host=localhost;Port=5432;Username=sqladmin;Password=Password1!;Database=product_catalog;Include Error Detail=true" -it productcatalog

NOTE: The docker run command starts your app, but it won't work correctly unless you create a Docker network between the ProductCatalog and Postgresql containers. However, you can try to load the Swagger web page to see if the app has at least started. More info here.

Go to http://localhost:8080/index.html and, if everything is working locally, move forward to the next step: the infrastructure definition.

Now that the code is written and properly running in a local environment, it can be deployed in a cloud environment. As mentioned earlier, the public cloud we are going to use is Microsoft Azure.

Azure App Service is a PaaS service able to run Docker containers, which suits this scenario best. Azure Container Registry holds the Docker images, ready to be pulled by the App Service. Then, an Azure KeyVault instance can store application secrets such as connection strings.

Other important resources are the database server, Azure Database for PostgreSQL, and the Service Bus that allows asynchronous communication between the services.

To deploy the Azure resources, no operation needs to be executed manually. Everything is written, and versioned, as a Terraform script, using declarative configuration files. The language used is the HashiCorp Configuration Language (HCL), a cloud-agnostic language that allows you to work with different cloud providers using the same tool. No Azure Portal, CLI, ARM, or Bicep files are used.

Before working with Terraform, just a couple of notes:

Terraform needs to store the state of the deployed resources in order to understand whether a resource has been added, changed, or deleted. This state is saved in a file stored in the cloud (a storage account for Azure, S3 for AWS, etc.). Since this is part of the Terraform configuration itself, it cannot be done through the script; it is the only operation that must be done using other tools. The next sections explain how to set up the environment using the az CLI to create the storage account and the IAM identity that actually runs the code.

NOTE: You cannot use the same names I used, because some of the resources require a name that is unique across the entire Azure cloud.

Create a resource group for the Terraform state

Every Azure resource must be in a resource group, and it is a good practice to have a different resource group for each application/environment. In this case, I created a resource group to hold the Terraform state and another one to host all the production resources.

To create the resource group, open the CLI (Cloud Shell, PowerShell, it is up to you) and type:

az group create --location <location> --name sshsstates

Create a storage account for the Terraform state

A storage account is a resource that holds Azure Storage objects such as blobs, file shares, queues, tables, and so on. In this case it will hold a blob container with the Terraform state file.

Create one by running:

where <location> is the location of the resource group created in the previous step.

Then, create the blob container in this storage account:

where sshsstg01 is the name of the storage account created in the previous step.

See more here:

Developing a Cloud-Native Application on Microsoft Azure Using Open Source Technologies - InfoQ.com

Timecho, Founded by the Creators of Apache IoTDB, Raises Over US$10 Million First Fund for Open-Source Software R&D and Time-Series Database Solutions…

STUTTGART, Germany and BEIJING, June 28, 2022 /PRNewswire/ -- Timecho, founded by the creators of Apache IoTDB, an IoT-native time series database, has announced a first round of funding of more than US$10 million led by Sequoia China and joined by Koalafund, Gobi China, and Cloudwise.

Time series databases have thrived in recent years to meet new data management requirements for the Internet of Things (IoT) era. Developers are seeking a database that can optimize data ingestion rates, storage compression, as well as query and analytics features that match the needs of their time-stamped data applications. Based on Apache IoTDB, a top-level open-source project ranked seventh by commits (individual changes to files) in 2021 among all Apache projects, Timecho is technically capable of delivering foundational infrastructure that powers demanding industrial use cases. Moreover, the Timecho team has over 40 patents and papers in industry-leading time-series management venues, including ICDE, SIGMOD, and VLDB.

"Apart from its outstanding performance, Apache IoTDB is lightweight, easy-to-use, and deeply integrated with software in the big data ecosystem, including Apache PLC4X, Kafka, Spark, and Grafana," said Dr. Xiangdong Huang, Founder of Timecho and PMC Chair of Apache IoTDB. "It has been widely used by multiple enterprises in the power industry, public transportation, intelligent factories and other scenarios. To expand its application in the IoT area and strengthen its performance to meet the specific requirements of enterprises, Timecho will continue investing aggressively in R&D to enhance features such as end-edge-cloud collaboration."

"Timecho maintains intensive interaction with the open-source community," said Dr. Julian Feinauer, Timecho Technical Director for the European Market and PMC member of Apache IoTDB. "With the funding, we will be able to accelerate R&D to help developers and companies best leverage the power of Apache IoTDB."

Timecho will focus on providing solutions that will be more adaptable with high-end equipment, mass devices, computational-resource-limited platforms, and further scenarios.

About Timecho

Timecho combines "time" with "echo", which means to make time-series data become more valuable. Founded by the creators of Apache IoTDB, Timecho delivers IoT-native time-series solutions to satisfy the demands of mass data storage, fast reads and complex data analytics, enabling companies to leverage the value of time-series data with higher reliability at lower costs.

For more information, visit www.timecho.com or its LinkedIn page.

Contact: Timecho Limited, [email protected]

SOURCE Timecho Limited

Here is the original post:

Timecho, Founded by the Creators of Apache IoTDB, Raises Over US$10 Million First Fund for Open-Source Software R&D and Time-Series Database Solutions...

OpenReplay raises $4.7M for its open source tool to find the bugs in sites – TechCrunch

When users on websites and apps find they have problems, developers have various tools to record what they are doing in order to find out what went wrong. But these can involve cumbersome methods like screenshots, and back-and-forth emails with customers.

ContentSquare and Medallia are products that primarily target marketers and product managers rather than developers, who need to know where apps are going wrong. Meanwhile, developers are using open source solutions, but these have their drawbacks.

OpenReplay is similar to nginx in the sense that the software is available for free for developers and is self-hosted; this means data can't leave a company's infrastructure, while extra services are paid for.

It has now raised $4.7 million in a seed funding round led by Runa Capital with the participation of Expa, 468 Capital, Rheingau Founders and co-founders of Tekion.

OpenReplay provides developers with a session replay stack that helps them troubleshoot issues by making debugging visual, allowing developers to replay everything users do on their web app and potentially understand where, and why, they got stuck, says the company.

Mehdi Osman, CEO and founder of OpenReplay, said in a statement: "Whilst other session replay tools are targeting marketers and product managers, we focused on those who actually build the product, developers. Enabling developers to self-host it on their premises and without involving any 3rd-party to handle their user data, is a game changer."

OpenReplay will use the new funding to grow its community, accelerate deployment and improve user experience.

Konstantin Vinogradov, principal at Runa Capital, based in Palo Alto, California, which also invested in nginx, added: "We actively invest in companies building open-source projects, especially when the open-source model enables better products. OpenReplay is a great example of such an approach."

See the original post:

OpenReplay raises $4.7M for its open source tool to find the bugs in sites - TechCrunch

Open Source Video Editing Software to Witness Huge Growth by 2031 Designer Women – Designer Women

marketreports.info delivers well-researched industry-wide information on the Open Source Video Editing Software market. It provides information on the market's essential aspects such as top participants, factors driving Open Source Video Editing Software market growth, precise estimation of the Open Source Video Editing Software market size, upcoming trends, changes in consumer behavioral patterns, the market's competitive landscape, key market vendors, and other market features, to give an in-depth analysis of the Open Source Video Editing Software market. Additionally, the report is a compilation of both qualitative and quantitative assessment by industry experts, as well as industry participants across the value chain. The Open Source Video Editing Software report also focuses on the latest developments that can enhance the performance of various market segments.

This Open Source Video Editing Software report strategically examines the micro-markets and sheds light on the impact of technology upgrades on the performance of the Open Source Video Editing Software market. The Open Source Video Editing Software report presents a broad assessment of the market and contains solicitous insights, historical data, and statistically supported and industry-validated market data. The Open Source Video Editing Software report offers market projections with the help of appropriate assumptions and methodologies. The Open Source Video Editing Software research report provides information as per the market segments such as geographies, products, technologies, applications, and industries.

To get sample Copy of the Open Source Video Editing Software report, along with the TOC, Statistics, and Tables please visit @ marketreports.info/sample/65183/Open-Source-Video-Editing-Software

Key vendors engaged in the Open Source Video Editing Software market and covered in this report: Meltytech, LLC, OpenShot Studios, LLC, Blender Manual, KDE, Flowblade, Avidemux, Gabriel Finch (Salsaman), Natron, Pitivi, Heroine Virtual, Blender, EditShare, LLC

Segment by Type: Linux, macOS, Windows, Others. Segment by Application: Video Engineers and Editors, Freelancers, Artists, Hobbyists, Others.

The Open Source Video Editing Software study conducts SWOT analysis to evaluate strengths and weaknesses of the key players in the Open Source Video Editing Software market. Further, the report conducts an intricate examination of drivers and restraints operating in the Open Source Video Editing Software market. The Open Source Video Editing Software report also evaluates the trends observed in the parent Open Source Video Editing Software market, along with the macro-economic indicators, prevailing factors, and market appeal according to different segments. The Open Source Video Editing Software report also predicts the influence of different industry aspects on the Open Source Video Editing Software market segments and regions.

Researchers also carry out a comprehensive analysis of the recent regulatory changes and their impact on the competitive landscape of the Open Source Video Editing Software industry. The Open Source Video Editing Software research assesses the recent progress in the competitive landscape including collaborations, joint ventures, product launches, acquisitions, and mergers, as well as investments in the sector for research and development.

Open Source Video Editing Software Key points from Table of Content:

Scope of the study:

The research on the Open Source Video Editing Software market focuses on mining out valuable data on investment pockets, growth opportunities, and major market vendors to help clients understand their competitors' methodologies. The Open Source Video Editing Software research also segments the Open Source Video Editing Software market on the basis of end user, product type, application, and demography for the forecast period 2022-2030. Comprehensive analysis of critical aspects such as impacting factors and competitive landscape is showcased with the help of vital resources, such as charts, tables, and infographics.

This Open Source Video Editing Software report strategically examines the micro-markets and sheds light on the impact of technology upgrades on the performance of the Open Source Video Editing Software market.

Open Source Video Editing Software Market Segmented by Region/Country: North America, Europe, Asia Pacific, Middle East & Africa, and Central & South America

Major highlights of the Open Source Video Editing Software report:

Interested in purchasing Open Source Video Editing Software full Report? Get instant copy @ marketreports.info/checkout?buynow=65183/Open-Source-Video-Editing-Software

Thanks for reading this article; you can also customize this report to get select chapters or region-wise coverage with regions such as Asia, North America, and Europe.

About Us

Marketreports.info is a global market research and consulting service provider specialized in offering a wide range of business solutions to its clients, including market research reports, primary and secondary research, demand forecasting services, focus group analysis, and other services. We understand how important data is in today's competitive environment and thus have collaborated with the industry's leading research providers, who work continuously to meet the ever-growing demand for market research reports throughout the year.

Contact Us:

Carl Allison (Head of Business Development)

Tiensestraat 32/0302, 3000 Leuven, Belgium.

Market Reports

Phone: +44 141 628 5998

Email: sales@marketreports.info

Website: http://www.marketreports.info

See more here:

Open Source Video Editing Software to Witness Huge Growth by 2031 Designer Women - Designer Women

AlmaLinux Build System: What You Need to Know – ITPro Today

AlmaLinux announced on Monday that it has released ALBS (AlmaLinux Build System), which is the platform that the distribution uses to build the AlmaLinux Red Hat Enterprise Linux clone from the corresponding version of CentOS Stream.

Naturally, AlmaLinux's software will be available under an open-source license, in this case GPLv3. The source code has been available for a while, but now users can access an online software-as-a-service version. However, until role-based access control (RBAC) is in place, access is read-only, meaning you can view the source code, see the build system, and look at the build logs for all the packages AlmaLinux has built, but you can't actually use it.

Related: CentOS Stream vs. CentOS Linux: Red Hat Explains the Differences

"Hopefully, in the next month or so, you'll be able to not only view what we did but use the build system to build your own packages," said Jack Aboutboul, community manager at AlmaLinux. "We decided to release it now because there was a CentOS Dojo [event]. We wanted to show it at the Dojo and explain the architecture and how it works."

AlmaLinux Build System in its current read-only mode

The software can also be downloaded and installed locally. While the latter does open the door for organizations or individuals to roll their own RHEL-based distro (or, theoretically, distros based on any other Linux distribution), it's mainly being released to enable DevOps teams to build software for their AlmaLinux installations.

"We built the distro with the system, and we built the updates with it," Aboutboul said. "It's fully operational."

When the RBAC system is ready, he said, ALBS will let users grab packages from their GitHub repositories to build packages within their CI/CD pipelines.

The release of the source code and access to a working model aims to help potential AlmaLinux users understand how the distribution is built, Aboutboul said. That is especially important now given the rising threat of software supply-chain attacks.

"People want transparency into what's being built, so we want to make sure that people see that," Aboutboul explained. "Then the next step, which is also something that's coming in the next couple of months, is we also want to implement software bill of materials [SBOM] within the build system. The whole distribution will come with a full software bill of materials, so you'll know exactly what's in where, what's being used, where it came from -- all of that information."

ALBS SBOM support, which is slated to be added by the end of July, will integrate with CodeNotary's Community Attestation Service, which is open-source software designed to help organizations comply with the Biden administration's cybersecurity executive order.

Although ALBS was custom built for AlmaLinux, it was "inspired" by software that CloudLinux (the company that founded AlmaLinux) developed for its eponymous commercial Linux distribution, CloudLinux OS. CloudLinux was initially a hardened version of CentOS that's now based on AlmaLinux and widely used by hosting companies.

"The principles behind [ALBS]were inspired by the way the CloudLinux build system works, but it's not the CloudLinux build system," Aboutboul said. "This is kind of the next generation, which was inspired by the work that they did."

Visit link:

AlmaLinux Build System: What You Need to Know - ITPro Today

Russians are searching for pirated Microsoft products and switching to Linux as the Western corporate exodus hits software updates and services:…

Many foreign tech firms are ending software updates and services in Russia. Vlad Karkov/SOPA Images/LightRocket/Getty Images

Russia-based web searches for pirated Microsoft software have surged after it halted sales in March.

Some Russians are turning to Linux from Microsoft's Windows operating system.

Russia is reliant on foreign software to power manufacturing and engineering systems.

Russians are searching for pirated Microsoft software online after the US tech giant halted sales in the country over its invasion of Ukraine, the Kommersant newspaper reported on Monday.

Russia-based web searches for pirated Microsoft software have surged by as much as 250% after the company suspended new sales on March 4, according to Kommersant. In June so far, there's been a 650% surge in searches for Excel downloads, the media outlet added.

Microsoft said earlier this month it's significantly scaling down business in Russia, joining a long list of companies winding down businesses in the country amid sweeping sanctions over the war in Ukraine. The move hits Russia hard because the country relies on foreign software to power many of its manufacturing and engineering tech systems, Bloomberg reported on Tuesday.

Russian government agencies, too, are switching from Microsoft's Windows to the Linux operating system, the Moscow Times reported last Friday. Developers of Russian systems based on the Linux open source operating system are also seeing more demand, Kommersant reported.

Not all sectors are able to swap out their systems easily.

In the case of industries, software is generally embedded into machinery and providers typically don't give clients access to the code, Sergey Dunaev, the chief information officer of steel giant Severstal, told Bloomberg.

"All industries are facing the same problems," Dunaev told the news outlet. "Many processes in modern units are controlled by software."

There are few alternatives available in the short term.

"Russian analogues in this area are much weaker and the need is high," Elena Semenovskaya, a Russia-focused analyst at IDC told Bloomberg. "But for now the approach is to rely on piracy and outdated copies, which is a dead-end and not sustainable."

Read the original article on Business Insider

Originally posted here:

Russians are searching for pirated Microsoft products and switching to Linux as the Western corporate exodus hits software updates and services:...

Nintex Earns a 2022 Top Rated Award from TrustRadius for Fourth Consecutive Year – Yahoo Finance

Customers rank the Nintex Process Platform for speed, ease of use, and helping organisations rapidly automate work with business process management, low-code development, and workflow

AUCKLAND, New Zealand, June 29, 2022 /PRNewswire/ -- Nintex, the global standard for process intelligence and automation, today announced that TrustRadius has recognised the Nintex Process Platform with a 2022 Top Rated Award. Nintex received a score of 8.6 based on 351 reviews and ratings from customers, who have shared how they are leveraging the platform to accelerate digital transformation by quickly and easily deploying workflow apps and automating business processes.

The TrustRadius Top Rated Awards, based on customer feedback, have become an industry standard for unbiased recognition of the best B2B technology products.

"Our commitment to the success of every customer focused on accelerating their digital transformationwhether in commercial or government sectorsis unwavering," said Nintex CEO Eric Johnson. "We are honoured to receive this ranking and are proud to support the critical and evolving business needs of more than 10,000 customers worldwide."

The TrustRadius Top Rated Awards, based on customer feedback and established in 2016, have become an industry standard for unbiased recognition of the best B2B technology products. To receive the Top Rated designation products must have at least 10 new or updated reviews in the last 12 months, earn at least 1.5% of traffic for that category, and have a trScore of at least 7.5.

Every day IT teams, operations professionals, business analysts and app developers across departments like HR, finance, customer service, legal, sales and marketing use the Nintex Process Platform to manage, automate, and optimise enterprise-wide business processes using an intuitive design canvas with clicks, not code.

Nintex customer review highlights on TrustRadius include the following feedback:

Eliminate paper-based processes and manual workflows. "We use this platform for internal digital forms and workflows. We have converted from very paper-based manual processes. The idea is to build the capability internally and empower teams to automate and build their own forms and workflows. We have started with Promapp as our documentation tool which is easy and intuitive for users."

Supports process optimisation and improvement for all your employees. "The Nintex Process Platform [is] utilised throughout the company, from procurement and production to sales and services. This tool makes it simple to improve processes and the performance of company operations. It makes work easier and faster, saving a significant amount of time. This product includes several useful features for simplifying chores. The Nintex Process Platform is incredibly simple to use for first-time users since it has drag-and-drop capability. Excellent file storage is a lifesaver."

Build and automate workflows without extensive training or complex coding. "Nintex Forms are the best way to automate the business processes both administratively and for the end-users. The automated workflows help the supervisor to get requests like leave requests or time off requests from employees through the forms created with Nintex. Manual intervention is not required. Nintex doesn't involve complex coding, therefore, anyone with basic computer skills can create forms on it."


"Buyers have many Business Process Management (BPM) software options to choose from," said Megan Headly, VP of Research at TrustRadius. "Nintex Process Platform has earned a Top Rated Award in the Business Process Management (BPM) software category based entirely on customer feedback. Reviewers on TrustRadius highlight the Nintex Process Platform's high performance and availability along with its process designer and process simulation features."

Business process intelligence and automation capabilities included in the Nintex Process Platform include: process discovery (Nintex Process Discovery), visual process mapping (Nintex Promapp), workflow automation (Nintex Workflow, Nintex Forms and Mobile Apps), robotic process automation (Nintex Kryon RPA), document automation (Nintex Drawloop DocGen), eSignature (Nintex AssureSign), low-code process automation (Nintex K2 Cloud and Nintex K2 Five), and process intelligence (Nintex Analytics).

Request a live demo of the Nintex Process Platform by visiting https://www.nintex.com/request-demo/

Media Contact: Laetitia Smith, laetitia.smith@nintex.com, cell: +64 21154 7114

About Nintex: Nintex is the global standard for process intelligence and automation. Today more than 10,000 public and private sector organisations across 90 countries turn to the Nintex Process Platform to accelerate progress on their digital transformation journeys by quickly and easily managing, automating and optimising business processes. Learn more by visiting http://www.nintex.com and experience how Nintex and its global partner network are shaping the future of Intelligent Process Automation (IPA).

Product or service names mentioned herein may be the trademarks of their respective owners.


Photo - https://mma.prnewswire.com/media/1847632/Nintex_Earns_Trust_Radius_Award_NZ.jpg
Logo - https://mma.prnewswire.com/media/700078/Nintex_Logo.jpg

SOURCE Nintex

Read the original post:

Nintex Earns a 2022 Top Rated Award from TrustRadius for Fourth Consecutive Year - Yahoo Finance

Beyond Identity Joins GitLab Inc.'s Alliance Partner Program to Secure Software Supply Chains From Malicious Attacks – Yahoo Finance

Beyond Identity Inc.

New Integration Cryptographically Binds Access and Code Signing Keys to Valid Corporate Identities and Authorized Devices to Dramatically Reduce Critical Vulnerabilities

NEW YORK and SAN FRANCISCO, June 27, 2022 (GLOBE NEWSWIRE) -- Today, Beyond Identity, the leading provider of unphishable MFA, and GitLab Inc., the provider of The One DevOps Platform, announced a new partnership and integration that enables customers to prevent intentional vulnerabilities from being introduced into DevOps environments and to dramatically reduce the risk of supply chain attacks. The integration between Beyond Identity and GitLab enables companies to ensure that only authorized users working from company-approved and secure computers can access code repositories or sign source code during commit activities. Beyond Identity extends the continued security enhancements and API hooks the GitLab team has released by adding the unique capability of associating an SSH or GPG key with a known corporate identity. These capabilities are available today.

GitLab's One DevOps Platform supports essential security capabilities, including the ability to use cryptographic keys to control access and sign source code entering the repository. These advanced capabilities are critical to reducing vulnerabilities that most organizations, even advanced shops, currently have in their DevOps environments. This enables organizations to tightly control access to the source and infrastructure code in repositories and gain visibility into exactly who is committing code. In the past, DevOps teams have typically not required this, and in the rare cases where they have, the SSH and GPG keys used to access repos and sign commits are not bound to an authorized corporate identity. Further, there is no way to ensure that engineers work from an authorized and appropriately secure computer. These issues leave the door wide open to malicious code injection attacks.

"GitLab continues to stress capabilities and partnerships that help joint customers raise the security of their DevOps tooling as attackers continue to prey on lax security in these environments," said Johnathan Hunt, Vice President of Security at GitLab. "We are very excited by the Beyond Identity partnership and the impact their integration can have on enhancing security for GitLab customers."


Beyond Identity's Secure DevOps solution is designed to prevent credential-based breaches by automating and securing digital access for developers, enabling secure repository access and check-ins. GitLab's focus on security and essential integration hooks enables Beyond Identity to mint SSH and GPG keys that are cryptographically tied to a known and authorized corporate identity and to an authorized computer. This integration enables DevSecOps teams to lock down the repo and ensure that a valid corporate identity signs every piece of code committed to the repo. The integration also allows DevSecOps teams to validate that each piece of code entering the CI/CD pipeline is checked to ensure authorized users signed it, typically as the first step in the CI pipeline.

The Secure DevOps integration with GitLab can help with the following:

Stop malicious actors or rogue insiders from injecting malware into source code and protect SaaS, PaaS, and IaaS services and apps from backdoors.

Control repository access and stop introducing unauthorized malicious code to customers (e.g., SolarWinds).

Prevent bad actors and insiders from changing network/system infrastructure settings and introducing hard-to-detect vulnerabilities and backdoors by manipulating infrastructure-as-code now stored in repositories.

Confirm that every piece of source or infrastructure code is signed and cryptographically bound to an authorized user, so that organizations have perfect visibility into who contributed to every commit, ensuring that issues found by code-scanning tools can be immutably tracked to a specific identity.

Ensure that engineers and contractors are using authorized and proven secure computers to access or commit code, thwarting attacks by adversaries that prey on poorly secured endpoints.

"After SolarWinds, Heroku, and Kaseya, organizations worldwide are digging into how to protect their code better," said Dr. Jasson Casey, CTO of Beyond Identity. "This is more important than ever as modern DevOps supports tooling needed to protect both source and infrastructure code. While code scanning tools are an important part of the equation, they don't uncover every vulnerability, and when they do find an issue, organizations have no clear visibility into who contributed the malicious artifact. This partnership enables organizations to shift left and protect access to repositories and provide cryptographic visibility into who makes each change."

About Beyond Identity: Beyond Identity is fundamentally changing how the world logs in with a groundbreaking invisible, unphishable MFA platform that provides the most secure and frictionless authentication on the planet. We stop ransomware and account takeover attacks in their tracks and dramatically improve the user experience. Beyond Identity's state-of-the-art platform eliminates passwords and other phishable factors, enabling organizations to confidently validate users' identities. The solution ensures users log in from authorized devices, and that every device meets the security policy requirements during login and continuously after that. Our revolutionary approach empowers zero trust by cryptographically binding the user's identity to their devices and analyzing hundreds of risk signals on an ongoing basis. The company's advanced risk policy engine enables organizations to implement foundationally secure authentication and utilize risk signals for protection, rather than just for detection and response. For more information on why Unqork, Snowflake, and Roblox use Beyond Identity, please visit http://www.beyondidentity.com.

All product and company names herein may be trademarks of their respective owners.

Contact: Carol Volk, Beyond Identity, carol.volk@beyondidentity.com, (212) 653-0847

View original post here:

Beyond Identity Joins GitLab Inc.'s Alliance Partner Program to Secure Software Supply Chains From Malicious Attacks - Yahoo Finance