Google Cloud to Offer Security-Vetted Open Source Software – InformationWeek

Looking to help cut the risk of software supply chain vulnerabilities in open source software, Google says it will release its own packages and libraries of vetted open source for other organizations to use.

The company made the announcement in its Google Cloud blog, saying that its new Assured Open Source Software service (Assured OSS) will enable enterprise and public sector users to incorporate the same open source software packages that Google uses in their own developer workflows.

The new cloud service from Google, due in preview in Q3 2022, comes amid a huge increase in cyberattacks targeting open source. Recent examples include attacks exploiting the Log4j2 vulnerability in that open source Java-based logging framework, which is common on Apache web servers. And that's not the only one: software supply chain management vendor Sonatype said in its State of the Software Supply Chain Report that cyberattacks aimed at open source suppliers increased by 650% year-over-year in 2021.

What's more, enterprise organizations today are increasingly using open source software, a trend that accelerated during the pandemic, according to Red Hat's State of Enterprise Open Source Report 2022 and a blog post by Red Hat president and CEO Paul Cormier. Indeed, the survey found that 80% of IT leaders expect to increase their use of enterprise open source software for emerging technologies.

Google's certainly not alone in its effort to address open source vulnerabilities. The Linux Foundation and the Open Source Security Foundation, with support from 37 companies including Amazon, Google, and Microsoft, recently released a plan for securing open source software.

In its blog post announcing the release of Assured OSS, group product manager for security and privacy Andy Chang wrote: "Google continues to be one of the largest maintainers, contributors, and users of open source and is deeply involved in helping make the open source ecosystem more secure through efforts including the Open Source Security Foundation (OpenSSF), Open Source Vulnerabilities (OSV) database, and OSS-Fuzz."

Chang noted that Google's release of Assured OSS followed other open source security initiatives that the company discussed at a January White House Summit on Open Source Security.

"Open source software code is available to the public, free for anyone to use, modify, or inspect," Google and parent company Alphabet President of Global Affairs Kent Walker wrote in a blog post in January. "Because it is freely available, open source facilitates collaborative innovation and the development of new technologies to help solve shared problems. That's why many aspects of critical infrastructure and national security systems incorporate it."

But there can be issues with that approach, too, as Walker noted.

"There's no official resource allocation and few formal requirements or standards for maintaining the security of that critical code," he wrote. "In fact, most of the work to maintain and enhance the security of open source, including fixing known vulnerabilities, is done on an ad hoc, volunteer basis."

That opens up a big area of concern about the introduction of vulnerabilities that could be exploited. While some open source projects have many eyes working on them and looking for issues, some projects don't, Walker noted.

In conjunction with its Assured OSS announcement, Google Cloud also announced a collaboration with Snyk, a developer security platform. Google said that Assured OSS will be natively integrated into Snyk solutions for joint customers to use when developing code. In addition, Snyk vulnerabilities, triggering actions, and remediation recommendations will become available to joint customers within Google Cloud security and software development life cycle tools to enhance the developer experience, according to Google.

The collaboration addresses one of the major concerns that surfaced during the White House meeting in January -- preventing security defects and vulnerabilities in code and open source packages, improving the process for finding defects and fixing them, and shortening the response time for distributing and implementing fixes.


Everyone uses open source, so the Israeli startup also gives back a little to the developers – Geektime

Quite a few independent developers build open-source products or projects, and their reasons for doing so are varied. Such open-source ventures create a community of users who help develop the product, ensure transparency, and lead to safer software, because more people can inspect the code and raise red flags. Monetary income, however, is rarely one of those reasons. And with all due respect to GitHub stars, developers who have invested tens or hundreds of hours in a project would not mind some recognition from time to time, such as small financial grants.

Invested in 20 projects

The Israeli startup Appwrite, which develops an open-source microservices platform for web, mobile, and Flutter developers, has announced the establishment of a new fund, the Appwrite OSS Fund, which aims to invest in open-source developers.

The new Appwrite fund will stand at $50,000 in its first year, money that will be distributed to various open-source software developers rather than the kind of funding granted by venture capital firms. Appwrite has launched a page on its website where open-source developers can apply, or recommend other developers, for funding from the OSS Fund. Applicant projects will be selected by a committee set up by Appwrite, which will include company representatives alongside prominent members of the open-source community.

The Israeli startup notes that the open-source community is essential to the continued functioning of the internet, with an estimated 70% to 90% of all software today being open source. On the other hand, many developers who maintain the most critical projects embedded in the systems and products we use get very little pay, or none at all.

Eldad Fox, CEO and co-founder of Appwrite, told Geektime that the fund will grant $2,500 each to 20 open-source contributors and projects annually. "Our plan is to grant money to a multitude of different projects, on the condition that they don't have any other funding sources, such as a venture capital firm, and aren't already benefiting from a business model. When this fund completes its mission, we will consider establishing another one with additional operations to help the open-source world," Fox added.

Last year, American developer Marak Squires deliberately pushed commits that sabotaged two of his JavaScript libraries and broke thousands of dependent projects. Even before that incident, the developer had expressed displeasure that work he made freely available online was used by large corporations and commercial companies that profited from it yet did not pay it forward to the community. What do you think about this case? Do you think initiatives like Appwrite's can prevent such cases in the future?

"You don't have to agree with Marak's actions to understand his frustration. Almost every company today benefits greatly from open source, and only a few of them return anything to the community. While our initiative is modest and will not solve the industry's problems, I think it sends a very strong message. We hope to see more companies, large and small, come together to help open-source developers and build a healthier ecosystem around their technology."

Fox says the money is coming straight out of the startup's budget. But even though Appwrite is not a cash-rich unicorn, the move was still welcomed by investors. "It was clear from day one that we were doing everything we could to stay true to our roots as an open-source company. Without the open-source community, we would not have reached where we are today. It only makes sense to do everything we can to make sure the ecosystem around our technology stays healthy. And we are always looking for other ways in which we can help open-source projects thrive."

Appwrite was founded in 2019 by Eldad Fox, and it maintains open-source products on the SaaS side of things. Last month, the Israeli startup raised $27 million from Tiger Global and other firms. Upon announcing that round of funding, Fox told us that 95% of the company's developers are people who have previously contributed to the company's open-source product. Open-source developers interested in a grant can apply through the fund's page on Appwrite's website.


How to counter insider threats in the software supply chain – TechTarget

Recent events, including the SolarWinds hack and President Biden's cybersecurity executive order, have sparked investment in software supply chain security. Established vendors and startups alike are joining the fight as organizations start to think about the technologies needed to combat this security challenge. One particular risk that is often overlooked, however, is the people part of the equation: insider threats.

The risks associated with insider threats grow as the software supply chain extends to partners, third-party contractors and freelancers. Once data and users move past an organization's development and support teams, they become harder to control. Vetting third parties' security measures is therefore critical, though not always easy.

Many large outsourcing firms have insider threat programs because they work at a scale where such countermeasures are a cost of doing business. Details about the firm's insider threat program and other internal security measures must be part of the client's due diligence and contractual negotiations. Yet even with contractual stipulations in place, it is difficult for clients to verify a vendor's security practices in action.

Vetting smaller firms, individual contractors and freelancers comes with challenges, too. If a company sources contractors through a third-party staffing firm, it can always ask for background investigations, but those only turn up crimes already in a company's or person's past. Organizations should do client reference checks on their suppliers, but these may not turn up much either. Don't hesitate to use back-channel references to verify the quality of a firm.

Preventing insider threats across a software supply chain takes more than partner agreements and employee training, however. One option is Supply Chain Levels for Software Artifacts (SLSA), a framework for protecting software supply chain integrity based on Google's Binary Authorization for Borg (BAB) platform. Google uses BAB as an internal deployment-time enforcement check to review software authorization and configurations. Organizations can likewise use this approach to reduce insider risk.
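The BAB-style check described above boils down to refusing to deploy anything whose build-time provenance can't be verified. Here is a minimal sketch of that idea in Python, using a plain digest allowlist as a stand-in for the cryptographically signed provenance a real SLSA/Binary Authorization setup would require; the service name and digests are hypothetical:

```python
import hashlib

# Hypothetical allowlist mapping artifact names to digests recorded by a
# trusted CI pipeline. In a real deployment this would be a signed
# provenance document, not a plain dict.
TRUSTED_DIGESTS = {
    "payments-service": "sha256:" + hashlib.sha256(b"release-build-1.4.2").hexdigest(),
}

def authorize_deploy(name: str, artifact: bytes) -> bool:
    """Deploy-time enforcement: reject artifacts whose digest does not
    match the digest recorded at build time."""
    expected = TRUSTED_DIGESTS.get(name)
    actual = "sha256:" + hashlib.sha256(artifact).hexdigest()
    return expected is not None and expected == actual

# A build modified after CI (e.g. by an insider) fails the check.
print(authorize_deploy("payments-service", b"release-build-1.4.2"))  # True
print(authorize_deploy("payments-service", b"tampered-build"))       # False
```

The point of checking at deploy time rather than build time is that an artifact swapped anywhere between the build system and production is caught at the last gate.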

Events in Ukraine -- a hotbed of innovative engineering talent -- remind us that geopolitical conflicts also affect software supply chain security. Suppose a company's or suppliers' employees and their families are in harm's way. Concern about employees in the affected area is paramount -- it's only human. However, it's also important to implement measures to secure data and intellectual property, such as software code and documentation that might be accessible from the area of conflict.

Organizations should ask partners or vendors about their continuity of operations (COOP) plans for such black swan events. Suppose, for example, an organization relies on GitHub or GitLab for code repositories and collaboration. Processes should be in place to secure accounts of users in affected areas. Vendors and suppliers should also run COOP drills to keep employees and partners up to date on the process.

Debates abound around open source software (OSS) security -- especially with recent events in Ukraine. If an organization's enterprise software has OSS dependencies, it must be conscious of the people who contribute to those projects. Recently, pro-Ukrainian sentiments were behind the sabotage of an npm package, adding to software supply chain security threats.

While OSS facilitates a collaborative community, there are signs of an inflection point between OSS maintainers and for-profit corporations that use OSS as a foundation for critical internal software and software they sell to customers. Organizations should bolster their open source programs, as well as teams that provide governance and support to open source tools. To do so, organizations should, for example, dedicate staff to OSS community outreach and treat OSS onboarding as a software supply chain security best practice.

Many third-party suppliers are feeling the effect of the "Great Resignation". Organizations must ensure their suppliers have processes that prevent departing employees from taking source code or documents with them to their next job.

Ensure partners have a documented and auditable offboarding process for their developers and other technical staff. Likewise, ask about the training, such as secure coding and other security practices, that they give employees during onboarding. Large outsourcing firms have the resources to govern onboarding and offboarding of programmers and engineers, but smaller vendors -- such as regional professional services firms -- may not have formal processes.

As software supply chain security draws attention from the cybersecurity and investment communities, enterprises must not lose sight of the main rule of security: People are the weakest link. While new technologies will garner market attention, organizations must consider the risk of insider threats and keep people at the center of their software supply chain strategies.


Microsoft shows off Windows updates at Build dev event – The Register

Microsoft Build Windows still rules the enterprise, and among all the Azure and Power Platform action during Microsoft's annual Build event for developers, the company had news for users of its flagship operating system.

The first followed this week's revelation that Windows Subsystem for Android is now running on Android Open Source Project (AOSP) 12.1, and concerns the Amazon Appstore preview.

After an inexplicable delay, Microsoft is finally adding countries on top of the US. Users in France, Germany, Italy, Japan and the UK will by the end of the year be able to join in previewing the Amazon Appstore, although there appears to still be no official way to get access to apps outside of those brought to Windows 11 via Amazon.

Ever keen to get developers on-side when it comes to Microsoft Store, the Windows giant also announced the removal of the waitlist program for Win32 apps. "Any app," it said, "that runs on Windows, including C++, WinForms, WPF, MAUI, React, Rust, Flutter and Java, is welcome in the Microsoft Store."

(The Store is less popular than its rivals, but nonetheless Microsoft boasted of a 50 percent year-on-year growth in desktop apps and games for the first quarter of this year. It would not, however, confirm the number those apps have grown to.)

While the Microsoft Ad Monetization platform for Windows UWP apps was shut down in 2020, at Build 2022 Microsoft announced "Microsoft Store Ads." Flagged as "coming soon" the tech, powered by Microsoft Advertising, will "help developers surface their apps to the right user at the right time, and to help users discover new experiences."

Also demonstrating that there remains life in Windows on Arm, Microsoft announced Project Volterra, hardware for programmers that's powered by the Qualcomm Snapdragon compute platform; it's meant to enable the development of local AI-accelerated workloads on Arm-compatible devices. The platform's integrated Neural Processing Units (NPUs) are all the rage, and Microsoft reckons the tech will turn up in pretty much every computing device in the future.

Microsoft didn't want to reveal too much information about Volterra (it "will share more details at a later date" was the boilerplate comment), so we can but hope it has more horsepower than the Snapdragon 7c-powered QC710 Arm desktop of 2021.

More interesting is the "end to end Arm-native toolchain for Arm native apps" also announced. Visual Studio Code and Windows Terminal are cross-platform by design; Visual Studio 2022 running natively on Arm, however, is an altogether more intriguing prospect, particularly considering how long it took to arrive in 64-bit guise.

A preview of it, and other eyebrow-raising components, such as the "classic" .NET Framework, are due "in the next few weeks."


DigitalOcean sets sail for serverless seas with Functions feature – The Register

DigitalOcean dipped its toes in the serverless seas Tuesday with the launch of a Functions service it's positioning as a developer-friendly alternative to Amazon Web Services Lambda, Microsoft Azure Functions, and Google Cloud Functions.

The platform enables developers to deploy blocks or snippets of code without concern for the underlying infrastructure, hence the name serverless. However, according to DigitalOcean Chief Product Officer Gabe Monroy, most serverless platforms are challenging to use and require developers to rewrite their apps for the new architecture. The ultimate goal is to structure, or restructure, an application into bits of code that run only when events occur, without having to provision servers or stand up and leave running a full stack.

"Competing solutions are not doing a great job at meeting developers where they are with workloads that are already running today," Monroy told The Register.

For this reason, Monroy, who previously worked on Microsoft Azure Functions before joining DigitalOcean, says ease of use, pricing predictability, and the ability to integrate serverless functions into existing applications were major considerations when bringing DigitalOcean Functions to market.

The service is built on Nimbella's serverless tech, which DigitalOcean acquired last year. The platform is optimized for a variety of Jamstack and API workloads, though we're told additional functionality, including scheduled functions, is planned for a future release.

"If you want to build a static website that is powered by some functions on the backend, DigitalOcean Functions is a great product for that specific use case," Monroy said. While there are compelling opportunities for running serverless workloads at the edge, Monroy argues those functions still need to work with existing datacenter infrastructure.

"Even the serverless edge components require workloads that are running in a traditional data center," he said. "This idea that everything is just going to run on the edge is not rooted in the applications of the real world."

In addition to running standalone serverless apps, DigitalOcean is also positioning the service as a way to augment existing workloads.

According to Monroy, many fall into the trap of believing new technology will supplant the old. "Containers replacing virtual machines, virtual machines replacing on-premise servers, serverless replacing containers. In reality, the new technology just gets added to the old."

DigitalOcean Functions provides customers with a way to extend new functionality to their existing applications without having to rebuild them, he claimed.

"Let's say that you're running a Ruby on Rails or Django application, and you want to add a new API to the application. You can just write some serverless functions and publish those serverless functions as an API," running alongside the existing container and/or managed database.
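Monroy's example can be sketched as a single Python function. The `main(args)` handler returning a response dict is the OpenWhisk-style convention that DigitalOcean Functions inherited from Nimbella; treat the exact shape as an assumption rather than official documentation:

```python
# A small HTTP API implemented as a standalone function rather than a new
# route in the existing Rails/Django app. The platform passes request
# parameters in as a dict and serializes the returned dict as the response.

def main(args: dict) -> dict:
    name = args.get("name", "world")
    return {
        "statusCode": 200,
        "body": {"greeting": f"Hello, {name}!"},
    }

# Invoked over HTTP once published (or called locally for testing), it
# behaves like any other API endpoint the existing application can call:
print(main({"name": "Rails app"}))
```

Because the function is deployed and scaled independently, the existing monolith needs no changes beyond calling the new endpoint.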

"Asking developers to incrementally add value to existing applications using new technology in a serverless vein is a much easier sell," he added.

DigitalOcean Functions is available now on a consumption-based pricing model, with the first 90,000 GB-seconds of memory use provided at no cost.
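To put that free tier in perspective, GB-seconds multiply allocated memory by execution time, so a small function goes a long way. A quick back-of-the-envelope calculation with assumed numbers (the memory size and duration below are illustrative, not DigitalOcean defaults):

```python
def gb_seconds(memory_mb: float, duration_s: float) -> float:
    """Serverless billing is typically metered in GB-seconds:
    allocated memory (in GB) multiplied by execution time (in seconds)."""
    return (memory_mb / 1024) * duration_s

FREE_TIER = 90_000  # GB-seconds included at no cost

# A 128 MB function running for 250 ms per invocation:
per_call = gb_seconds(128, 0.25)
print(per_call)                   # 0.03125 GB-seconds per invocation
print(int(FREE_TIER / per_call))  # 2880000 invocations before billing starts
```

Under those assumptions, nearly three million invocations fit inside the free allowance each month.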


Version 251 of systemd coming soon to a Linux distro near you – The Register

Version 251 of the controversial systemd Linux init system is here, and you can expect it to feature in the next version of your preferred distro.

The unified system and service manager for Linux continues to grow and develop, as does Linux itself. There is a comprehensive changelog on GitHub, so we will just try to pick out a few of the highlights.

New releases of systemd appear roughly twice a year, so the chances are that this will appear in the fall releases of Ubuntu and Fedora.

The new version now uses the GCC compiler's C11-with-GNU-extensions standard, nicknamed gnu11.

This brings it into line with the Linux kernel itself, which uses the same standard as of version 5.18, a change in turn facilitated by kernel 5.15 raising the minimum required GCC version to 5.1.

Were we betting types, we'd wager that probably the most controversial changes in the new release revolve around the new systemd-sysupdate and kernel-install features. The former is still described as an experimental feature, so relax for now.

No, this does not mean that systemd is becoming a package manager. Like it or not, though, the nature of operating systems is changing. Modern ones are large, complex, and need regular updates, and as The Register has examined in depth recently, this means that the design of Linux distributions is changing radically.

The prime example is the ever more mature ChromeOS, including Google's new move into the mass-market hardware space, ChromeOS Flex.

What that means (in brief) is that the nature and use of package managers is changing. What is disappearing is their role as the tool that allows end users to customize and update their OS. Instead, they are becoming the tools that vendors use to build the distributions.

ChromeOS doesn't have a package manager; neither do Fedora's Silverblue and Kinoite versions. You get a tested, known-good image of the OS. Updates are distributed as a complete image, like they are today with Android or iOS.

ChromeOS has two root partitions: one live and one spare. The currently running OS updates the spare partition, then you reboot into that one. If everything works, it updates the now-idle second root partition. If it doesn't all work perfectly, then you still have the previous version available to use, and you can just reboot into that again.

When a fixed image becomes available, the OS automatically tries again on the spare instance. The idea is that you always have a known-good OS partition available, which sounds like a benefit to us.
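The dual-root-partition scheme described above can be sketched as a small state machine. The class and version strings below are illustrative, not ChromeOS's actual updater:

```python
# A/B update sketch: updates are written to the idle slot, and the active
# slot only changes once the new image has booted successfully. A failed
# boot leaves the previous known-good slot in charge.

class ABUpdater:
    def __init__(self):
        self.slots = {"A": "v1", "B": "v1"}  # OS image in each root partition
        self.active = "A"                    # slot currently booted
        self.pending = None                  # slot awaiting verification

    def idle(self) -> str:
        return "B" if self.active == "A" else "A"

    def apply_update(self, image: str) -> None:
        """Write the new image to the idle slot; the running OS is untouched."""
        target = self.idle()
        self.slots[target] = image
        self.pending = target

    def reboot(self, boot_ok: bool) -> None:
        """Try booting the pending slot; fall back if it fails to come up."""
        if self.pending is None:
            return
        if boot_ok:
            self.active = self.pending  # commit: new image is now live
        # on failure, simply keep booting the previous known-good slot
        self.pending = None

u = ABUpdater()
u.apply_update("v2")
u.reboot(boot_ok=True)
print(u.active, u.slots[u.active])  # B v2
```

The key property is visible in the failure path: rolling back costs nothing, because the old image was never overwritten.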

Presumably the users are happy too: Chromebook sales may be down, and they only have a fixed lifespan, but there are still well over a hundred million of them out there.

So, no, systemd is not going to become a package manager, because ordinary distros won't have a package manager at all, except maybe Flatpak, or Snap or something similar. The new functionality, including managing installed kernels, is to facilitate A/B type dual-live-system partitions.

For some insight into this vision, Lennart Poettering, lead architect of systemd, has described it in a blog post titled "Fitting Everything Together."

Version 251 has other features, of course. It requires a minimum of kernel 4.15, which dates back to January 2018. There are some changes to systemd-networkd, such as systemd-resolved starting earlier in the boot sequence, and more cautious allocation of default routes.

The busctl tool for monitoring DBUS has changed its output format from the old PCap format to the newer PCapNG.

Handling of environment variables and unit statuses is improved, including setting a status for processes killed by the systemd-oomd out-of-memory killer.

If you still prefer to avoid systemd, don't despair. There is still a selection of distros that eschew it altogether, including Devuan GNU+Linux, Alpine Linux, and Void Linux.


Clearview AI wants its facial-recognition tech in banks, schools, etc – The Register

Clearview AI is reportedly expanding its facial-recognition services beyond law enforcement to include private industries, such as banking and education, amid mounting pressure from regulators, Big Tech, and privacy campaigners.

The New York-based startup's gigantic database contains more than 20 billion photos scraped from public social media accounts and websites. The database was used to train Clearview's software, which identifies individuals by matching input images against those stored in its database.
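Face-matching systems of this kind generally compare fixed-length face embeddings rather than raw pixels, accepting a match when an input embedding is close enough to a stored one. A toy sketch of that idea, with made-up vectors and threshold that stand in for a real model's output (this is not Clearview's actual algorithm):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query, database, threshold=0.9):
    """Return the identity whose stored embedding is most similar to the
    query, or None if nothing clears the threshold."""
    name, stored = max(database.items(),
                       key=lambda kv: cosine_similarity(query, kv[1]))
    return name if cosine_similarity(query, stored) >= threshold else None

# Hypothetical 3-dimensional embeddings; real systems use hundreds of dims.
db = {"alice": [0.9, 0.1, 0.3], "bob": [0.1, 0.8, 0.5]}
print(best_match([0.88, 0.12, 0.31], db))  # alice
print(best_match([0.0, 0.0, 1.0], db))     # None (below threshold)
```

The threshold is where misidentification risk lives: set it too low and dissimilar faces match, too high and genuine matches are missed.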

These images were downloaded without explicit permission from netizens or companies. Although Clearview has been sent numerous cease-and-desist letters from Twitter, YouTube, Google, Facebook and more, it continued to collect more images and grow its database. The demands to stop scraping public-facing webpages, however, were not legally binding, unlike the settlement agreement Clearview entered into to end the lawsuit brought against it by the American Civil Liberties Union.

Clearview promised to stop giving or selling access to its database system to most private companies and organizations across the US. Public agencies and law enforcement, however, can still use its large database. Private sector businesses, instead, can only use data they provide to the company's facial-recognition software; ie, they have to provide their own database of photos. Clearview is also not allowed to use that data to add to its database.

"Clearview AI doesn't use any private images from its customers or anywhere else to train its bias-free facial recognition algorithm," its CEO Hoan Ton-That confirmed to The Register. "Clearview AI only uses public images from the open internet to train its bias-free facial recognition algorithm."

Ton-That had claimed his company's software was only being used by law enforcement to help identify suspects in criminal cases. But Clearview has ambitions to expand beyond those capabilities, and is hoping to provide facial-recognition technology to banking apps and schools, according to Reuters.

"Clearview AI is interested in using facial recognition as a way to help prevent crime and financial fraud. Today facial recognition is already being used to unlock your phone, provide access to buildings, identity checks and even for payments," Ton-That told us.

"Clearview AI provides its facial recognition technology, without the large database of 20B+ images, through Clearview Consent to a visitor management software provider, who provides visitor management services to customers, some of which are schools," he added.

Italian regulators have fined the biz millions of dollars, and Canadian watchdogs have banned the country's public agencies from contracting with the company.

This week, the UK's Information Commissioner's Office issued a £7.5 million ($9.43 million) fine for violating the country's data privacy laws, and ordered Clearview to stop scraping photos of its residents and delete existing images.

Still, the company believes its technology is beneficial despite risks of misidentification or issues of data privacy and security. "Facial recognition can be used to help prevent identity theft and fraud. For example, before a large bank transaction, it may be useful to make a facial recognition check with the owner of the account, to ensure that money is not stolen," Ton-That said.

"The potential of facial recognition technology to make our communities safer and commerce secure is just beginning to be realized," he added.


Predator spyware sold with Chrome, Android zero-day exploits to monitor targets – The Register

Spyware vendor Cytrox sold zero-day exploits to government-backed snoops who used them to deploy the firm's Predator spyware in at least three campaigns in 2021, according to Google's Threat Analysis Group (TAG).

The Predator campaigns relied on four vulnerabilities in Chrome (CVE-2021-37973, CVE-2021-37976, CVE-2021-38000 and CVE-2021-38003) and one in Android (CVE-2021-1048) to infect devices with the surveillance-ware.

Based on CitizenLab's analysis of Predator spyware, Google's bug hunters believe that the buyers of these exploits operate in Egypt, Armenia, Greece, Madagascar, Côte d'Ivoire, Serbia, Spain, Indonesia, and possibly other countries.

"We assess with high confidence that these exploits were packaged by a single commercial surveillance company, Cytrox, and sold to different government-backed actors who used them in at least the three campaigns," Google security researchers Clement Lecigne and Christian Resell wrote in a TAG update this month.

Cytrox, which is based in the Balkan state of North Macedonia, did not respond to The Register's request for comment.

"Our findings underscore the extent to which commercial surveillance vendors have proliferated capabilities historically only used by governments with the technical expertise to develop and operationalize exploits," the researchers wrote, adding that seven of the nine zero-day exploits that TAG discovered last year were developed by commercial vendors and sold to government-backed operators.

While NSO Group and its Pegasus spyware is perhaps the most notorious of these commercial providers, we're told that TAG is tracking more than 30 such software providers that possess "varying levels of sophistication." All of them are selling exploits or surveillance malware to governments for supposedly legitimate purposes.

The Predator campaigns were highly targeted, with just tens of users hit, according to the Googlers. While the researchers didn't provide specifics about who these campaigns targeted, they do note that they've seen this sort of tech used against journalists in the past. Similarly, CitizenLab's analysis details Predator spyware being used against an exiled Egyptian politician and an Egyptian journalist.

Each of the TAG-discovered campaigns delivered a one-time link via email that spoofed URL shortening services. Once clicked, these URLs directed the victims to an attacker-owned domain that delivered Alien, Android malware that loads the Predator spyware and performs operations for it.

"Alien lives inside multiple privileged processes and receives commands from Predator over IPC," Lecigne and Resell noted. "These commands include recording audio, adding CA certificates, and hiding apps."

The first campaign, which TAG detected in August 2021, used a Chrome vuln on Samsung Galaxy S21 devices. Opening the emailed link in Chrome triggered a logic flaw in the browser that forced the Samsung-supplied browser to open another URL. The content at that other URL likely exploited flaws in the Samsung browser to fetch and run Alien.

The security researchers surmise that the attackers didn't have exploits for the then-current version of Chrome (91.0.4472) and instead used n-day exploits against Samsung Browser, which was running an older version of Chromium.

"We assess with high confidence this vulnerability was sold by an exploit broker and probably abused by more than one surveillance vendor," they wrote.

The second campaign, which TAG observed in September 2021, chained two exploits: an initial remote code execution and then a sandbox escape. It targeted an up-to-date Samsung Galaxy S10 running the latest version of Chrome.

"After escaping the sandbox, the exploit downloaded another exploit in /data/data/com.android.chrome/p.so to elevate privileges and install the Alien implant," according to Lecigne and Resell, adding that they haven't retrieved a copy of the exploit.

TAG analyzed one other campaign, a full Android exploit chain, targeting an up-to-date Samsung phone running the latest version of Chrome. It included a zero-day in JSON.stringify and a sandbox escape, which used a Linux kernel bug in the epoll() system call to gain sufficient privileges to hijack the device.

This particular Linux kernel bug, CVE-2021-1048, was fixed more than a year before the campaign. However, the commit was not flagged as a security issue, so the update wasn't backported to most Android kernels. All Samsung kernels remained vulnerable when the nation-state-backed gangs carried out this exploit.

The rest is here:

Predator spyware sold with Chrome, Android zero-day exploits to monitor targets - The Register

New audio server Pipewire coming to next version of Ubuntu – The Register

The next release of Ubuntu, version 22.10 and codenamed Kinetic Kudu, will switch audio servers to the relatively new PipeWire.

Don't panic. As J M Barrie said: "All of this has happened before, and it will all happen again." Fedora switched to PipeWire in version 34, over a year ago now. Users who aren't pro-level creators or editors of sound and music on Ubuntu may not notice the planned change.

Currently, most editions of Ubuntu use the PulseAudio server, which it adopted in version 8.04 Hardy Heron, the company's second LTS release. (The Ubuntu Studio edition uses JACK instead.) Fedora 8 also switched to PulseAudio. Before PulseAudio became the standard, many distros used ESD, the Enlightened Sound Daemon, which came out of the Enlightenment project, best known for its desktop.

PulseAudio's development began in 2004 (originally under the name Polypaudio); it hit version 1.0 in 2011 and is currently on version 15. One of PulseAudio's lead developers was Lennart Poettering, who is now best known as the project lead of the famed and controversial systemd, so perhaps it's reasonable to think he's busy with other things these days.

PipeWire also handles video streams, so it does a little more than the outgoing PulseAudio, which, as its name suggests, only handles audio. To explain what this change means, let's clarify what an audio server is and does.

The sound playback software system in Linux is a stack, and like the network stack, it has multiple layers that do different things. At the bottom are sound drivers, which are intimately connected with the Linux kernel. Above them sits a sound server, and above that, your apps playing sounds.

PulseAudio (and part of the functionality of PipeWire) is a sound server. Sound servers manage access from different apps to the underlying sound hardware, mixing their audio streams before playback. You can play, or record, sound without a sound server, but then whichever program is currently playing sound owns the audio device: it has complete and exclusive control over it, meaning that the operating system can't mix sources.

So, for example, it's a good thing to have a sound server managing your sound devices if you want to be able to hear a new-message notification while you're listening to music. The sound server manages the inputs and can mute, or better still fade out, the music player, fade in the source of the notification, and then fade the music player back in again.
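At its simplest, the mixing a sound server performs is just summing the per-sample values of each stream and clipping the result to the output range. A toy Python sketch of that idea (16-bit signed PCM samples; real servers such as PulseAudio and PipeWire also resample, convert formats, and apply per-stream volumes, none of which is modelled here):

```python
# Toy illustration of a sound server's core mixing job: sum samples from
# multiple streams and hard-clip to the 16-bit signed PCM range.
from itertools import zip_longest

INT16_MIN, INT16_MAX = -32768, 32767

def mix(*streams):
    """Mix several lists of 16-bit PCM samples into one output stream."""
    mixed = []
    for frame in zip_longest(*streams, fillvalue=0):  # shorter streams pad with silence
        sample = sum(frame)
        mixed.append(max(INT16_MIN, min(INT16_MAX, sample)))  # hard clip
    return mixed

music = [1000, 20000, -15000, 30000]
notification = [500, 20000, -20000, 0]
print(mix(music, notification))  # -> [1500, 32767, -32768, 30000]
```

The hard clipping here is the crudest possible strategy; production mixers attenuate streams or use soft limiting to avoid the audible distortion clipping causes.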

If you plug headphones into an ordinary headphone socket, that's driven by your onboard sound card, or maybe by a better one in an expansion slot. But a USB headset is in effect a separate sound card, attached over USB instead of PCI, so the sound subsystem also has to manage sound devices coming and going as they are attached and removed, and to switch between whichever attached device is currently preferred. It's a complicated job.

The sound server sits on top of the layer that drives the sound cards or chips in your computer.

The original Linux sound system (as in, the low-level hardware drivers) was the Open Sound System, also known as OpenSound or OSS for short. OpenSound is a cross-platform tool that also runs on other operating systems such as FreeBSD and OpenSolaris. OpenSound was widely adopted and prospered so much that its programmer got hired by a commercial company, 4Front, which was later acquired by NCR.

Subsequently, many distros, including Ubuntu, removed it and switched to ALSA instead. ALSA also supports the OpenSound APIs, so things still worked. ALSA itself was mainlined into the Linux kernel in version 2.5.5, and from kernel 2.6 it replaced OpenSound.

PulseAudio, which is also cross-platform, is a FreeDesktop.org project. On Linux, it sits on top of ALSA.

PulseAudio too was controversial in its time, but in fairness, Linux audio was a mess. PulseAudio does work, and it resolved many issues, but it can have high latency and can use a lot of CPU. Some audio professionals favored a rival audio server called JACK, which provides lower-latency sound handling. Indeed, the development of JACK drove work to reduce audio latency in kernel 2.6.

The plan is for PipeWire to further simplify media handling on Linux. It's not just a sound server; it also handles video. The project lead, Wim Taymans, was one of the co-founders of the GStreamer framework back in 1999. Although PipeWire is quite new, currently only on version 0.3, it aims to replace both PulseAudio and JACK. Just as ALSA supported the OpenSound APIs so that existing code kept working, PipeWire supports the JACK interfaces, so, at least in theory, people who were using JACK can keep the same software and it should still work with PipeWire.

PipeWire also aims to work with GNOME, the Wayland display server, and Flatpak apps, while using less CPU and offering better latency than PulseAudio so that it can also replace JACK.

Go here to see the original:

New audio server Pipewire coming to next version of Ubuntu - The Register

CockroachDB adds command line tool as database hits version 22.1 – The Register

Cockroach Labs has finally added a new command line tool with the release of version 22.1 of its eponymous database, out today.

Although it was possible to deploy CockroachDB using something like Terraform (for example, for deployment on Oracle Cloud Infrastructure), the process was often not particularly elegant.

"Until this release we didn't have an API to control the database," Jim Walker, recovering developer and product evangelist at Cockroach Labs, told The Register during 2022's EU KubeCon in Valencia, Spain.

"It's really around control of environment: it's removing nodes, it's adding nodes, it's starting the cluster, it's stopping the cluster, the basic stuff.

"And so it's really as simple as kind of building it out so that we can actually integrate with the workflows that people have, or the way that they're delivering software in their organization. Like how we work in the context of the CI/CD flow where you're provisioning hardware on Terraform, you're setting up security over here, the database has got to get up and running.

"That step with the database [for CockroachDB at least] was kind of a manual thing for a while there. And so we have an API."

The arrival of the API marks a step toward maturity for the database. Designed from the outset to be distributed, to run across clouds, and to be pretty much unkillable, it has proven attractive to investors (the company recently took $278 million in Series F funding, giving it a $5 billion valuation). However, as with much of the cloud-native landscape, the next challenge is integrating seamlessly into automated workflows.

"It's been a matter of 'we had to build this awesome database' and now it's like 'how does it work with all the other things?'" said Walker.

As well as the command line tooling, Cockroach Labs has applied the API to its CockroachDB Serverless product, aimed at luring developers to its world via a horizontally scalable (Postgres-compatible SQL) relational database. Pricing is based on how much is stored and the work done by queries, although small loads and data sizes of less than 5GB won't attract a charge.

"I am fascinated by the serverless thing," said Walker. "I think it should be called Infrastructureless; eventually, I think that's really what this thing becomes. What we're doing is we're preparing it so that it can actually work for huge, massive applications."

Other updates in this release include support for time-to-live (TTL) data: "Customers had requested this," said Walker, "to both optimize the db but also for use cases where they do not want data to live forever because of risk and other concerns." Automated expiration has been a feature of other databases, such as Oracle's, so its arrival in CockroachDB is a further sign of the product maturing.
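In CockroachDB 22.1, row-level TTL is set as a storage parameter on the table. A minimal sketch of the DDL, built as a string in Python for illustration (the table and column names are hypothetical, and the `ttl_expire_after` option name should be verified against the 22.1 docs for your version):

```python
# Sketch of CockroachDB 22.1 row-level TTL DDL. Table/column names are
# hypothetical; the storage parameter name matches the 22.1 documentation
# but should be checked against the version you run.

def ttl_table_ddl(table: str, interval: str) -> str:
    """Build a CREATE TABLE statement whose rows expire automatically."""
    return (
        f"CREATE TABLE {table} ("
        "  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),"
        "  payload JSONB,"
        "  created_at TIMESTAMPTZ NOT NULL DEFAULT now()"
        f") WITH (ttl_expire_after = '{interval}');"
    )

# Rows older than 30 days would be deleted by a background job.
print(ttl_table_ddl("session_events", "30 days"))
```

The statement would then be executed against a cluster with any Postgres-compatible driver; the background deletion job runs inside CockroachDB itself, so no application-side cleanup is needed.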

The update also includes Quality of Service prioritization and data domiciling features, which will be handy for a potentially massively distributed database with nodes that are not necessarily where lawmakers would like them. A Super Region groups multiple cloud regions into a larger geographical area.

It has been possible to simply add something like a country code into a table and instruct CockroachDB to filter accordingly, dictating where data should physically reside. "In this release, as people have become more mature," said Walker, "and they're getting more comfortable with that type of capability they wanted another level. So we introduced the concept of a Super Region."

"So okay, there's Germany here. There's Ireland. There's the UK, there's Portugal. Maybe I just want data to just live on any European server. But certain tables I want in Germany. So we have this concept of a Super Region, which basically collapses up a bunch of different regions into one thing."
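The Super Region described above is declared with an ALTER DATABASE statement. A sketch of the DDL, generated as a string in Python for illustration (the database and region names are hypothetical, and Super Regions were a preview feature in 22.1, so the exact syntax and any required cluster settings should be confirmed in the docs):

```python
# Sketch of CockroachDB 22.1 Super Region DDL. Database and region names
# below are hypothetical; the statement shape follows the 22.1 docs but
# the feature was in preview, so verify syntax before relying on it.

def super_region_ddl(database: str, name: str, regions: list[str]) -> str:
    """Build an ALTER DATABASE statement grouping cloud regions into a super region."""
    region_list = ", ".join(f'"{r}"' for r in regions)
    return (
        f'ALTER DATABASE {database} '
        f'ADD SUPER REGION "{name}" VALUES {region_list};'
    )

# Tables pinned to the "europe" super region can live on any of its regions,
# while tables pinned to a single region stay within that region alone.
print(super_region_ddl("orders", "europe",
                       ["europe-west1", "europe-west2", "europe-west4"]))
```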

Good for data, but not so much for non-data workloads (which are currently not Cockroach Labs' problem; not yet, anyway).

Overall, the latest update is, as Walker suggests, more aimed at maturity than the whizzbang features of old. Tools were added to optimize query performance, and integration with Datadog makes for "single pane of glass" monitoring.

Read more:

CockroachDB adds command line tool as database hits version 22.1 - The Register