Building a Retro Linux Gaming Computer – Part 18: Run Away and Join the Circus – GamingOnLinux

Continued from Part 17: The Llama Master

In writing this series I have spent a great deal of time searching eBay for older Linux games to cover, and one night I came across a curious sight. Although being sold for Windows, I found a listing for a physical copy of the free game Circus Linux! as published by Alten8. At first I figured it would just be another keep case in my collection with "Linux" on the cover, but upon inspecting the contents of the disc, it soon became apparent just how cheap this retail release was.

All that Alten8 seems to have done was package the source directory with a Windows binary already built, with the install instructions urging you to "copy and paste the folder CIRCUS from the CD" and then click on the circuslinux.exe file. With the source code included I decided it would be trivial to also build the game for Linux, and in fact the included INSTALL.txt file even tells you how to compile and install the game on Linux with GNU Automake.

You do need the relevant SDL development libraries as packaged by your distribution, and unfortunately Alten8 did seem to strip away some of the game's documentation files, meaning that the build will fail at first. To get around this I just used the "touch AUTHORS.txt COPYING.txt CHANGES.txt README-SDL.txt" command to create blank placeholders, but apart from the novelty you really are better off grabbing the source code yourself online.
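The whole workaround fits in a few lines of shell. A sketch, assuming you run it from inside the CIRCUS directory copied off the CD and have your distribution's SDL development packages installed:

```shell
#!/bin/sh
set -e
# Recreate the documentation files Alten8 stripped out; Automake's
# checks only need them to exist, so empty placeholders are enough.
touch AUTHORS.txt COPYING.txt CHANGES.txt README-SDL.txt

# The standard GNU Automake build described in the bundled INSTALL.txt
# (skipped automatically if no configure script is present):
if [ -x ./configure ]; then
    ./configure && make && sudo make install
fi
```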

Circus Linux! itself is a remake of the older Circus Atari, which was itself a home console version of the even older Circus arcade cabinet by Taito. Circus was a block breaker game inspired by Breakout, with the main change being that the game simulates a teeterboard act, with the blocks becoming balloons and your paddle a seesaw. This makes a marked difference to the gameplay, as you need to ensure your clown lands on the correct end of the teeterboard.

Circus Linux! goes all in on the theme in a way that the original Atari version never could, sporting bright colourful animated graphics and fun upbeat music and sound effects, showing off the power of the then still fresh Simple DirectMedia Layer. One aggravation is that the mouse can leave the window when not playing full screen, but the game does at least support a number of screen modes, including a lower graphics setting for less powerful computers.

Needless to say even on the full setting the game did not cause my Pentium III 500 MHz to break a sweat, but I appreciate the option. Beyond this the game features a number of gameplay modifiers: "Barriers" which can block your shots, "Bouncy Balloons" that can cause the clown to careen back down on contact, and "Clear All" which demands every balloon be popped on a stage before proceeding to the next screen.

Like most arcade games Circus Linux! is a test of both your dexterity and endurance, challenging you to hold on to your lives for as long as possible while racking up the highest possible score. The game also has support for local hot seat multiplayer, either in a cooperative mode where you both get the chance to help one another pop balloons, or an adversarial mode where you compete to earn the highest possible score.

Perhaps more compelling than Circus Linux! on its own is the legacy of its creator Bill Kendrick and his development house New Breed Software; Kendrick is a prolific figure in the free and open source gaming scene. He is most famous for starting work on the platformer SuperTux and crafting the drawing program Tux Paint, helping to popularize Tux as a gaming icon with others in the Tux4Kids initiative, all alongside the work of people like Steve Baker and Ingo Ruhnke.

Bill Kendrick has also created a number of other arcade conversions, edutainment, and experimental software toys which he ports to the widest possible range of platforms, all of which can still be found on the New Breed Software website. Five of them, X-Bomber, Mad Bomber, 3D Pong, ICBM3D, and Gem Drop X, were included on 100 Great Linux Games. He even made a chat bot called Virtual Kendrick, inspired by a comment that he should port himself to the Zaurus handheld.

I have avoided it long enough, but I am feeling the itch to play a first person shooter again. As has already been made clear, Linux has never had a shortage of them, but some are a lot harder to find today than others. The next game I am to cover is one of the rarest of them all, due to its limited physical distribution and an attachment to a Belgian company now better known for maintaining an operating system than for porting games.

Carrying on in Part 19: Sinsational

Return to Part 1: Dumpster Diving


Want to know the future of FOSS? You can look it up in a database – The Register

Opinion In IT, there is sexy tech, there is fashionable tech, and there are databases. Your average database has very little charisma, however. Nobody's ever made a movie about one.

They should. They should make lots of movies. (The Reg must note at this point that we're not counting the vendors in this. Some of them have, indeed, spent a bit of money on just such a project.)

You don't have to spend long in any aspect of IT to discover that databases are the soul of IT, its constant animating force. From one perspective, everything digital is a specialized database: word processors, spreadsheets, shoot-em-ups, streaming services, from Google to your disk filing system. The storing, sorting and retrieval of data? That's it. That's the whole game. It has been the case ever since Herman Hollerith designed the punched card tabulating machine in the late nineteenth century.

As for databases that call themselves that, they're the engine of corporate computing. Their capability, reliability and maintainability are essential, and the metrics of performance and expense are unambiguous. Corporate decisions about databases are one of the purest indicators of how IT is sourced and deployed. Hype is quickly exposed, as is the good stuff.

So when you look at the databases developers actually choose, you're seeing a market model with wider implications. Open source versus proprietary, hosted versus on-prem, innovation versus maturity: all primary concerns across IT, all crystallized in DB decisions.

But there's an equally important flip side: how the developers and suppliers of DB software manage to stay in business themselves. That's the other great question of IT in the 2020s: how do you make money either fighting or flaunting FOSS?

That's the first lesson from a feature discussing today's FOSS databases and their respective licensing terms: open source has won. It's about time too. Before FOSS was a corporate option, the big guys were ruthless at monetizing their position in the heart of IT.

Licensing models were set at what clients could bear, not what was equitable. Random audits could turn accidental license breaches into very expensive mistakes, and it could be very hard to manage those licenses if you were trying to scale, or if license management was made curiously difficult.

Why did anyone put up with this? They had to: these were the costs of mitigating the risk of ushering in unknowns to the galactic center of your company business model.

Times change, but memories abide. It is hard to overstate the organizational resentment towards what looks, feels and costs like extortion, or the readiness to explore options that do not come with that particular pistol pointed at you. Momentum has built for FOSS as more people use and develop it. The quality and variety of the code has increased, and deployment has edged deeper into the risk-averse, and rich, areas of the market.

There is a lag between what developers choose and what is actually deployed, but the trend is unambiguous and continuous. Proprietary has lost market share and is still losing it; open source has gained and is still gaining. By some measures Oracle was just about equal to MySQL in 2021. Guess which is sliding down the snake, and which is climbing the ladder.

This is it. This is the canonical proof that open source can achieve everything needed for corporate software, when there's a big enough community of motivated developers. Can it in turn support that community?

Again, looking at databases in particular gives a good lens for the bigger picture. FOSS was born of idealism, frustration, opportunity and optimism. It recognized the inequity of centralized control of software, born of a time when entry costs to making software were very high and distribution very difficult, in an environment when neither situation was still the case. Like so many successful revolutions, the very act of winning changed the dynamics that made the win possible.

The ideal FOSS license is completely unrestrained: take the code, do what you like, just ask those who come after to do the same. That works in many cases, where those who do most of the work can parlay their expertise into business relationships.

However, it doesn't work so well in the age of hyperscalers, where hosted services can craft deals that require minimal interaction and risk for clients, based on FOSS running behind an API. Hence the advent of ideas like BSL, the Business Source License, that fulfils part of the FOSS ideal by making source open, but restricts commercial use. That can be any commercial use, or specific cases like selling a hosted service - something that databases are very well suited for.

Is this a betrayal of FOSS? Many think so, and in a model that relies on community as a proxy for closed-door development, that could be fatal. Or is it a sensible evolution, absorbing a very well-tested model of free for non-commercial use, subsidized by production use, that has been part of proprietary software for decades?

The real danger isn't some dilution of FOSS ethics, but the resurgence of lock-in. While BSL and its ilk carry that danger, so does any FOSS project too dependent on a powerful sponsor. The fact that the code is open is a strong safeguard: that which can be rewritten cannot be constrained. Ask IBM about its proprietary but visible PC BIOS.

This is an evolving market, but it's evolving into a more just, more sustainable and more flexible one as FOSS ideas change the landscape.

You'll have problems if you change your model in ways your initial supporters didn't expect, so be aware of how the evolution is progressing and build in your long-term options at the start. If you're as open about your plans as you are about your software, that's good enough.

The evolution of the dull old database not only predicts the future, it's helping to define it. And that's as sexy as any tech gets.


Advantages and Disadvantages of Using Linux – It’s FOSS

Linux is a buzzword and you keep hearing about Linux here and there. People discuss it in tech forums, it is part of course curricula, and your favorite tech YouTubers get excited while showing off their Linux builds. The 10x developers you follow on Twitter are all Linux fans.

Basically, Linux is everywhere and everyone keeps talking about it. And that gives you FOMO.

So, you wonder about the advantages of Linux and whether it is really worth trying.

I have compiled various possible advantages and disadvantages of Linux in this article.

If you are on the fence about choosing Linux over your preferred operating system, we would like to help you out.

Before you start, you should know that Linux is not an operating system on its own. The operating systems are called Linux distributions and there are hundreds of them. For simplicity, I'll address it as Linux instead of naming a specific Linux distribution. This article explains things better.

Considering you are curious about Linux as an alternative operating system choice, it only makes sense that you know its advantages.

You might never regret your decision if it excels at what you want it to do.

You need to own an Apple device to use macOS as your daily driver, and a Windows license to use Microsoft's Windows.

Therefore, you need a bit of investment with these options. But with Linux? It's entirely free.

And it is not just the OS: compared to Windows and macOS, many more software packages are available for free on Linux.

You can try every mainstream Linux distribution without paying for a license. Of course, you get the option to donate to support the project, but that is up to you if you really like it.

Additionally, Linux is totally open-source, meaning anyone can inspect the source code for transparency.

Typically, when users think of trying another operating system, it is because they are frustrated with the performance of their system.

This is from my personal experience. I have had friends willing to try Linux to revive their old laptop or a system that constantly lags.

And, when it comes to Linux distributions, they are capable of running on decent hardware configurations. You do not need to have the latest and greatest. Moreover, there are specialized lightweight Linux distributions that are tailored to run on older hardware with no hiccups.

So, you have more chances to revive your old system or get a fast-performing computer in no time with Linux.

No operating system is safe from malicious files or scripts. If you download and run something from an unknown source, you cannot guarantee its safety.

However, things are better for Linux. Yes, researchers have found attackers targeting Linux IoT devices. But, for desktop Linux, it is not yet something to worry about.

Malicious actors target platforms that are more popular among households, and Linux does not have a big market share in the desktop space to attract that kind of attention. In a way, it can be a good thing.

All you have to do is just stick to the official software packages, and read instructions before you do anything.

As an extra plus, you do not necessarily need an antivirus program to get protection from malware.

With an open-source code, you get the freedom to customize your Linux experience as much as you want.

Of course, you require a bit of technical expertise to utilize the best of it. But even without any experience, you get more customization features in your operating system when compared to macOS and Windows.

If you are into personalizing your experience and willing to put in extra effort, Linux is for you. As an example, refer to the KDE customization guide and dock options to get basic ideas.

With macOS or Windows, you are limited to the design/preference choices finalized by Microsoft or Apple.

But, with Linux, you will find several Linux distributions that try to focus on various things.

For instance, you can opt for a Linux distribution that focuses on getting the latest features all the time, or you can opt for something that only gives you security/maintenance updates.

You can get something that looks beautiful out of the box, or something that provides crazy customization options. You will not run out of options with Linux.

I recommend starting with options that give you the best user experience.

If you are a software developer or a student learning to code, Linux definitely has an edge. A lot of your build tools are available and integrated into Linux. With Docker, you can create specialized test environments easily.
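As an illustration, a minimal Dockerfile for a disposable test environment might look like the following sketch; the base image, file names and test runner are illustrative assumptions, not something this article prescribes:

```dockerfile
# Hypothetical sketch: a reproducible, disposable test environment.
FROM python:3.12-slim

WORKDIR /app

# Install pinned dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the project in and run its test suite by default.
COPY . .
CMD ["pytest", "-q"]
```

Built with `docker build -t myapp-tests .` and run with `docker run --rm myapp-tests`, the environment behaves the same on every machine.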

Microsoft knows about this and that is why it created WSL to give developers access to Linux environments inside Windows. Still, WSL doesn't come close to the real Linux experience. The same goes for using Docker on Windows.

I know the same cannot be said about web design, because the coveted Adobe tools are not available on Linux yet. But if you don't need Adobe for your work, Linux is a pretty good choice.

There is a learning curve to using Linux, but it provides you with insights on various things.

You get to learn how things work in an operating system by exploring and customizing it, or even just by using it.

Not everyone knows how to use Linux.

So, it can be a great skill to gain and expand your knowledge of software and computers.

As I mentioned above, it is a great skill to have. And it is not just about expanding your knowledge; it is also useful professionally.

You can work your way to become a Linux system administrator or a security expert and fill several other job roles by learning the fundamentals of Linux.

So, learning Linux opens up a whole range of opportunities!

These days you cannot use Windows without a Microsoft account. And when you set up Windows, you'll find that it tries to track your data from a number of services and applications.

While you can find such settings and disable them, it is clear that Windows is configured to disregard your privacy by default.

That's not the case with Linux. While some applications/distributions may have an optional feature to let you share useful insights with them, it has never been a big deal. Most things on Linux are tailored to give you maximum privacy by default, without needing to configure anything.

Apple and Microsoft, on the other hand, have clever tactics to collect anonymous usage data from your computer. Occasionally, they log your activity on their app stores and while you are signed in to your account.

Got a tinkerer in you? If you like to make electronics or software projects, Linux is your paradise.

You can use Linux on single-board computers like Raspberry Pi and create cool things like retro gaming consoles, home automation systems, etc.

You can also deploy open source software on your own server and maintain it yourself. This is called self-hosting, and it has its own advantages.

Clearly, you'll be doing all this either directly with Linux or with tools built on top of it.

Linux is not a flawless choice. Just like everything, there are some downsides to Linux as well. Those include:

Often it is not just about learning a new skill; it is more about getting comfortable as quickly as possible.

If a user cannot get their way around the task they intend to do, it is not for them. That is true for every operating system. For instance, a user who uses Windows/macOS may not get comfortable with Linux as quickly.

You can read our comparison article to know the difference between macOS and Linux.

I agree that some users catch on quicker than others. But, in general, when you step into the Linux world, you need to be willing to put a bit of effort into learning the things that are not obvious.

While we recommend using the best Linux distributions tailored for beginners, choosing what you like at first can be overwhelming.

You might want to try several of them to see what works best for you, which can be time-consuming and confusing.

It's best to settle on one of the Linux distributions. But if you remain confused, you can stick to Windows/macOS.

Linux is not a popular desktop operating system.

This should not be a concern for an individual user on its own. However, without a significant market presence, you cannot expect app developers to make and maintain tools for Linux.

Sure, there are lots of essential and popular tools available for Linux, more than ever. But it remains the case that not all good tools/services work on Linux.

Refer to our regularly updated article on Linux's market share to get an idea.

As I mentioned above, not everyone is interested in bringing their tools/apps to Linux.

Hence, you may not find all the good proprietary offerings for Windows/macOS. Sure, you can use a compatibility layer to run Windows/macOS programs on Linux.

But that doesn't work all the time. For instance, there is no official Microsoft 365 support for Linux, nor for tools like Wallpaper Engine.

If you want to game on your computer, Windows remains the best option for its support for the newest hardware and technologies.

When it comes to Linux, there are a lot of ifs and buts for a clear answer.

Note that you can play a lot of modern games on Linux, but it may not be a consistent experience across a range of hardware. As one of our readers suggested in the comments, you can use Steam Play to try many of the Windows-exclusive games on Linux, though with potential hiccups.

Steam Deck is encouraging more game developers to make their games run better on Linux, and this will only improve in the near future. So, if you can put in a little effort to try your favorite games on Linux, it may not be disappointing.

That being said, it may not be a seamless experience for everyone. You can refer to our gaming guide for Linux to explore more if interested.

I know not everyone needs it. But, there are tech support options that can guide users/fix issues remotely on their laptop or computer.

With Linux, you can seek help from the community, but it may not be as seamless as some professional tech support services.

You'll still have to do most of the trial and error on your own, and not everyone would like that.

I am primarily a Linux user but I use Windows when I have to play games. Though my preference is Linux, I have tried to be unbiased and give you enough pointers so that you can make up your mind if Linux is for you or not.

If you are going for Linux and have never used it, take baby steps and use Linux in a virtual machine first. You can also use WSL2 if you have Windows 11.

I welcome your comments and suggestions.


OpenAPIs and Third-Party Risks – Security Boulevard

With APIs, details and specifics are vital. "Each API usually takes in very specific requests in a very specific format and returns very specific information," explained Sammy Migues, principal scientist at Synopsys Software Integrity Group. "You make the request and you get the information." APIs can be constructed in different ways, but one of the most common forms of web-based APIs is REST.

"OpenAPI is a standardization of formats for REST APIs, a way for all people working on any REST APIs anywhere to have a common way to describe those APIs," said Migues. "This includes the API endpoints, authentication methods, parameters for each operation the API supports and then contact information, terms of use, licensing and other general information."
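As a sketch, a minimal OpenAPI document covering several of the elements Migues lists (an endpoint, per-operation parameters, and general contact and licensing information) might look like the following; the API itself is hypothetical:

```yaml
openapi: "3.0.3"
info:
  title: Example Widget API          # hypothetical service
  version: "1.0.0"
  contact:
    email: api-support@example.com   # contact information
  license:
    name: Apache 2.0                 # licensing
paths:
  /widgets/{id}:                     # an API endpoint
    get:
      summary: Fetch a single widget
      parameters:                    # parameters for this operation
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested widget
```

Anyone reading this document knows exactly what the operation accepts and returns, which is the point of the standardization.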

By standardizing this collective documentation, it is easier for developers to understand the software and know exactly how it will behave in different circumstances.

Developers turn to OpenAPI, like they do with any open source software or component, as a way to use code that's already out there and has already been proven to work. It saves time, gets the software into production faster, is cost-efficient, integrates workflows and is easy to implement.

OpenAPI may also improve applications' security posture by using the documentation format, according to Gabe Rust, cybersecurity consultant at nVisium.

"Using standardized documentation allows security testers to more easily understand and test APIs," said Rust. Because formats like OpenAPI provide more transparency to users and testers, they prevent the pitfall of a big security mistake: security through obscurity.

"This allows security testers to provide more comprehensive coverage of applications," Rust added. "Potentially serious security issues are more likely to be discovered and patched before damage is done."

You could say that security is a feature of OpenAPI, but that's not to say that it comes without risks.

Any time you introduce third-party software into architecture, you also introduce risk.

"Third-party web APIs can access sensitive data/information, which can increase security risks such as data breaches," Deepak Gupta wrote in a blog post.

Like any software or application, APIs can be infected with malware, and that can create a lot of damage for a web project, the organization and consumers.

OpenAPIs aren't immune to security risks. They can be hacked, of course (nothing is totally immune from being attacked), but the most serious threats come from third parties. With OpenAPIs comes data sharing, and the data shared can include personal information or corporate intellectual property, unwittingly made available to third parties.

"OpenAPI security is fairly limited," said Jeff Williams, CTO and co-founder at Contrast Security. "It simply allows development teams to define the authentication scheme to be used with each API. This is useful to help prevent unauthenticated endpoints from exposing critical data and functionality."
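Defining such a scheme takes only a few lines in the OpenAPI document itself. As a hedged sketch (the scheme name and header name here are illustrative assumptions), an API-key requirement looks like:

```yaml
components:
  securitySchemes:
    apiKeyAuth:          # arbitrary name, referenced below
      type: apiKey
      in: header
      name: X-API-Key    # hypothetical header carrying the key
security:
  - apiKeyAuth: []       # require the key on every operation by default
```

Note that this only declares who may call the API at all; it says nothing about what an authenticated caller is then allowed to do.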

"Unfortunately, it doesn't protect APIs against attacks from authenticated users. Unless you fully trust all of your users, you should be very concerned about the long list of vulnerabilities that APIs can have, such as, for example, various types of injection, unsafe deserialization, server-side request forgery and libraries with known vulnerabilities," said Williams.

In OpenAPI, it is impossible to know, let alone trust, all the users. To protect sensitive data from third-party risks, it may be necessary to evaluate the use of OpenAPIs and the type of information they have access to. Protecting sensitive data and preventing data breaches from third-party intrusion should be of the highest priority when using OpenAPIs.


Open Source Software – W3

About W3C Software

The natural complement to W3C specifications is running code. Implementation and testing is an essential part of specification development, and releasing the code promotes exchange of ideas in the developer community.

All W3C software is certified Open Source/Free Software (see the license).

2022-04-25 Version 3.0 of Ical2html includes the changes by Johannes Weil: command line options to set a title on the generated page, to highlight the current day, and to start the week on Monday; and an update to libical version 3.

ical2html now also recognizes text in descriptions, summaries and locations that looks like a URL and turns it into a hyperlink.

(News Archive)

2022-04-15 The slide framework b6+ can now show a second window with a preview of the current and next slides and speaker notes. During a presentation, you could thus show the slides while looking at the preview on a second screen.


2022-04-01 Version 8.4 of the HTML-XML-utils fixes a bug with ::attr() selectors. If hxselect was given multiple, comma-separated selectors, the ::attr() selector only worked on the first selector. (Thanks to Bas Ploeger for the patch!)


2021-11-28 The slide framework b6+ has a couple of new features: 1) When slides are embedded in an iframe or object, links in the slide replace the parent document, rather than open inside the iframe. 2) It is possible to embed a slide as a static page, disabling the navigation to other slides. 3) Accessibility has improved: when switching slides, the new slide is made available to screen readers. See an explanation of ARIA role=application and aria-live by Léonie Watson. The explanation talks about Shower, but b6+ is similar. 4) When slides do not have ID attributes, you can still start at a specific slide by giving its number as fragment ID. E.g., to open a presentation with slide 25, end the URL with ?full#25. 5) The F1 key switches to full screen, because not all browsers provide a command for that. 6) Pressing the ? key in slide mode pops up a brief overview of available commands. 7) It is now possible to disable the use of a left mouse click to advance slides. 8) Another option hides the mouse pointer when it doesn't move for some seconds. 9) Various small bug fixes and improvements.

You can read the manual or download a zip file containing the JavaScript file (b6plus.js), a style sheet (simple.css), the manual (Overview.html) and some images used in the manual.



Here is the list of Past Open Source Projects developed at W3C.

W3C software is free and open source: the software is made primarily by people of the Web community, for the Web community.

There are many ways to get involved:

Great communities make great tools, and with only a few minutes of your time you can join the mailing lists associated with W3C open source projects (such as www-validator for the markup validator or www-validator-css for the CSS validator) and participate in discussions and user support.

A lot of W3C software has a specific user discussion mailing list (see each project for details), and some also have IRC (chat) channels, such as the #validator channel on irc.freenode.net for discussions on W3C validation services.

Developers are welcome to get involved by contributing code, either to existing projects (see the list above and check each project's documentation for contact e-mail information) or to proposed future software. Patches and bug fixes are always welcome, and developers willing to get seriously involved will generally get commit access after a proving period.

As explained below, all of the W3C software source is freely available; developers are encouraged to get the source for the projects they care about and start hacking right away.

Read the IPR FAQ on software contribution if you intend to contribute code. Note that as this license is GPL-compatible, it is possible to redistribute software based on W3C sources under a GPL license.

Code is not the only way to get involved in making W3C software better. Testing, bug reports, suggestions, or help in creating good documentation are equally important! Most projects will have a Feedback page, and you can report bugs, test cases and patches on our Bugzilla.

All the tools listed on this page are free and open source, but hosting, maintaining and developing them often costs a lot. With your support through the Validator Donation Program or the W3C Supporters Program, we can build even better tools.

Most W3C software is available directly from our CVS base or in our Mercurial repository. You can browse the content and history of either through their respective web interfaces.

See the documentation of each software for specific instructions fordownload and installation.

Some software that was formerly available via FTP at ftp.w3.org has been moved to our web site.


Understanding the hows and whys of open source audits – Security Boulevard

Learn who needs open source audits, why you might need one, who and what is involved, and how an open source audit can help you in an M&A.

If you're part of a modern business that does any software development, your dev teams are using open source components to move quickly, save money, and leverage community innovation. If you're a law firm or a consultant, your clients use open source. And if you're on the lookout for your next acquisition, you'll be evaluating targets replete with open source. In the most recent Synopsys Open Source Security and Risk Analysis report, we found that 78% of all code analyzed was entirely open source.

While the prevalence of open source components is now widely understood, the implications of software license conflicts, unknown dependencies, and vulnerable components are often underestimated or overlooked. Unresolved issues consequent to open source in digital assets can negatively influence mergers and acquisitions (M&A). It's the responsibility of those involved in these engagements to adequately scope this influence and mitigate the issues that can spoil a deal.

The first step toward an effective and actionable audit is to consider why you're doing an audit. Are you doing it for internal purposes, or are you doing it to prove your resources are assets rather than liabilities?

For many, impending M&A activity drives an audit. After all, when buying, you want to acquire high-quality assets free of legal, security, and quality issues. When selling, you want to be a high-quality asset. Buyers want to have a good handle on the risks they are taking on so they can value and structure the deal appropriately. Those buyers want to know that their target does not bring with it baggage that is unaccounted for. They'd like to know the company is using open source components within the bounds of their licenses, that it is minimizing potential cyberattack vectors, that it can ensure consistent uptime, and that its data, and its customers' data, will be secure.

Some organizations opt for an internal open source audit because the leadership team has been reading news about open source vulnerabilities, exploits, and possible breaches. Some teams may be concerned about the intellectual property risks due to noncompliance with open source licenses. What's driving your organization's choice? Your reason makes a difference in who you involve and your goals.

As the focus on digital transformation heightens, development and release velocity expectations rise, placing a heavy burden on developers. As a result, they depend more and more on open source for foundational functionality so they can spend more time on innovation.

When preparing for a code audit, understand that developers are focused on producing the highest-quality code possible given tight deadlines. It's important not to assume that developers understand the complex license terms often associated with the open source components they leverage. The same often goes for security vulnerabilities. Regardless, the scale of open source usage has far outpaced the ability to manually track these types of risks.

Senior leadership, legal departments, and senior technical managers are usually the ones charged with identifying the strategy, policies, and processes associated with open source risk management. Unfortunately, this does not always prescribe clear mechanisms to manage developers' consumption of open source libraries. Developers often favor whatever solution gets the task done when the alternative means missing a shipping deadline.

Software audits come in many different shapes and sizes. There are, however, several areas of consideration that should be addressed to make the audit insightful and actionable.

An audit report should focus on these areas. And the parties should review these topics with the auditor, whose experience can provide clarity and answer specific questions. This is a critical step, because what the audit uncovers may have a material impact on the valuation of a business and the deal terms during an M&A. For example, different licenses pose different levels of risk, depending on the industry in which a business operates, the sensitivity of data it touches, the external/internal orientation of the software, and more. The same goes for security vulnerabilities; they may affect web-based applications differently than they do embedded applications. These are the types of considerations that an expert audit group can advise on.

Maybe something needs to change, maybe it doesn't; the results of your audit will help you answer that question. If your audit showed exactly what you expected, you're in the minority. When we did an analysis of our security audits from 2021, we found that 97% of applications scanned used open source, and companies were only aware of about half of the open source in use. The majority of codebases we analyze have license and security issues.

The output of an open source audit provides clear information about not only the open source code in use, but also the known vulnerabilities in the code and the license compliance risks. This information gives you a clear picture of what's in the target's code, and it can help you be better prepared moving forward.
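The shape of that output can be illustrated with a small sketch. Everything below is hypothetical: the component names, license identifiers, and advisory IDs are invented placeholders, and a real audit draws on scanners and curated vulnerability feeds rather than a hard-coded table.

```python
# Illustrative sketch of an open source audit report.
# All component names, licenses, and advisory IDs here are hypothetical.

COPYLEFT = {"GPL-3.0-only", "AGPL-3.0-only"}  # license families often flagged in M&A reviews

# Hypothetical audit knowledge base: component -> (license, known advisory IDs)
AUDIT_DB = {
    "libfoo": ("MIT", []),
    "libbar": ("GPL-3.0-only", ["ADV-2022-0001"]),
    "libbaz": ("Apache-2.0", ["ADV-2021-0042"]),
}

def audit(components):
    """Return per-component findings: license risk and open advisories."""
    report = []
    for name in components:
        license_id, advisories = AUDIT_DB.get(name, ("UNKNOWN", []))
        report.append({
            "component": name,
            "license": license_id,
            "copyleft": license_id in COPYLEFT,
            "advisories": advisories,
        })
    return report

findings = audit(["libfoo", "libbar", "libbaz"])
flagged = [f["component"] for f in findings if f["copyleft"] or f["advisories"]]
print(flagged)  # components needing legal or security follow-up
```

The point is not the toy data but the structure: each finding pairs a component with its license posture and known vulnerabilities, which is exactly the information a buyer or seller needs to price and mitigate risk.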

If your goal is to assess your own code for internal purposes, audit results arm you with the information to create open source risk management policies for future development efforts. If your audit is for an M&A or due diligence situation, the results provide invaluable information necessary for determining deal value and risk.

The most common reason for an open source audit among our customers is for merger and acquisition events. A snapshot of the open source use and risk exposure of the code in question provides much-needed information to help you move forward as a buyer or a seller. Buyers get visibility into risks they may be taking on; sellers have the opportunity to address such risks in advance of due diligence. If you anticipate being on either side of a transaction, the Black Duck Audit Services team can help you decide how to proceed.


Understanding the hows and whys of open source audits - Security Boulevard

TechOps is a mess: Open source is the solution – BetaNews

Building software is hard. Building cloud software is even harder because things move much faster -- and require mission-critical reliability and availability. To effectively build software in the cloud, engineering teams need observability, CI/CD, reporting, and lots of tooling. But all of the tools available to engineering teams never quite fit together in a way that provides visibility and consistency. When things go wrong, developers scramble to troubleshoot with data scattered across disparate systems.

TechOps teams are in charge of keeping everything running. But poorly integrated toolsets create an environment where teams have several interfaces and data sets to wrangle when operating critical services. Teams often try to solve this problem by creating one-off integrations of out-of-the-box tools with internally developed tooling and process. These integrations are generally very shallow, and create a significant maintenance burden and reliability gaps.

Custom integrations provide more places to store data and a wider pool to search, resulting in a decentralized view of the data sources and no easy way for developer collaboration. What's needed is an open source-based control center for collaboration and proper integration with current systems -- no more copying and pasting. But it's important to make the centralized command hub work for everyone at the organization, not just front-line developers and SREs.

Challenges at every level

Challenges for operating, monitoring, and incident response exist at all levels of our organizations. TechOps teams are focused on hosting, deployment, and reliability of services. These teams have specific concerns to address before, during, and after a potential incident. How can developers get early warning of a service outage? How do we sort through large volumes of monitoring data to troubleshoot failures? How do we track the status and progress during an incident? How do we document the work that was done to restore the service? How do we gather all of the relevant incident information for the retrospective and RCA documents?

Let's say there's a service-interrupting issue. At the developer level, the teams need detailed monitoring and log data. Having a centralized control center provides easier access to this data, improving efficiency and offering perspectives on how to solve future problems.

Engineering leads have roughly the same goals as developers on the frontlines of the issue, but they are more focused on high-level, business-oriented trends. This broader perspective means that they primarily want a less granular view of outage data. These users will spend more of their time focused on analyzing trends in outages over time, understanding the current status and next steps for an ongoing incident, and ensuring proper communication with internal and external stakeholders.

At the Senior Management level, executives need high-level answers to explain problems to their customers. During major service disruptions, CEOs are often in constant communication with their major stakeholders providing status about why services went down. Rather than granular outage data, these discussions rely on high-level but informed and actionable business insights.

Addressing the disconnect with open source

Clear data and collaborative workflows are critical at every level of an organization. But the real power lies in integration -- not standalone solutions. By leveraging the flexibility of open source software, teams can create collaboration systems that reduce downtime, avoid confusion, enable speed, and increase efficiency.

When compared to internally developed one-off systems, open source solutions typically scale better, provide higher quality and reliability, and lower the overall maintenance burden for TechOps teams. Creating a streamlined Ops process with proper visibility and integrations improves developer productivity. It also boosts workplace satisfaction and helps reduce developer burnout.

One of the major problems with custom in-house tooling for TechOps is maintenance. This tooling may work great when it's first built. But over time, requirements shift, and maintenance work for internal tooling often falls to the bottom of the priority list. Meanwhile, new tools are inserted into the tech stack, and common dependencies aren't always updated and managed appropriately. The result? The tooling we all rely on breaks in an ugly way as soon as we have an incident or outage. This leaves teams scrambling to restore critical services without proper visibility into, and control of, their systems.

Implementing an open source solution also improves a team's ability to maintain the software needed to solve future problems. When organizations adopt open source, they're gaining access to the underlying source, backed by a community of independent contributors, with flexible, layered extensibility. This allows the team to speed up maintenance and deployment of the software so they can focus on solving issues quickly and improving systems for better operations in the future.

Flexibility is one of the top traits organizations look for in developers. But to achieve complete flexibility, organizational software needs to match these human expectations. Without open source enabling this flexibility, TechOps is a mess. On the other hand, integrating tools into a centralized view makes cross-organizational collaboration easier and addresses diverse challenges at every level of a modern organization.


Chris Overton is Vice President of Engineering at Mattermost, Inc. Previously, Chris led engineering at Elastic, where he was also responsible for the Cloud product division. Chris is an expert in building and operating public and hybrid SaaS services, distributed systems, analytics/processing of large data sets, and search.


New Metaverse Track at O3DCon to Tackle Big Questions and Practical Applications of Emerging Graphical Technology – PR Web

Sessions will explore where we are today in metaverse technology and applications, what's lacking, and how open source software and standards communities can take a leadership role in bridging the gaps.

SAN FRANCISCO (PRWEB) September 12, 2022

A new metaverse track hosted by the Linux Foundation is being offered at next month's O3DCon event, taking place October 17-19 in Austin, Texas. The track will be presented by thought leaders from a range of open source projects. Sessions will explore where we are today in metaverse technology and applications, what's lacking, and how open source software and standards communities can take a leadership role in bridging the gaps. The event will also host open floor discussions each day for event attendees to share thoughts and ideas about the presentations delivered in the metaverse track.

The metaverse track schedule can be found at: https://bit.ly/3L3IrLG

Session topics in the metaverse track include:

This year's event will convene a vibrant, diverse community focused on building an unencumbered, first-class, 3D engine poised to revolutionize real-time 3D development across a variety of applications, from game development, metaverse, digital twin, and AI to automotive, healthcare, robotics, and more.

Early bird pricing for O3DCon expires September 16.

The event is produced by the Open 3D Foundation (O3DF), home of the open-source Open 3D Engine (O3DE) project. O3DE is a modular, cross-platform 3D engine built to power anything from AAA games to cinema-quality 3D worlds to high-fidelity simulations. The code is hosted on GitHub under the Apache 2.0 license. Connect with the community on Discord.com/invite/o3de and GitHub.com/o3de.

About the Open 3D Foundation
Established in July 2021, the mission of the Open 3D Foundation (O3DF) is to make an open-source, fully-featured, high-fidelity, real-time 3D engine for building games and simulations available to every industry. The Open 3D Foundation is home to the O3D Engine project. Since its launch in 2021, more than 25 member companies have joined the O3DF. Newest members include OPPO and Heroic Labs, as well as Microsoft, LightSpeed Studios and Epic Games. Other Premier members include Adobe, Amazon Web Services (AWS), Huawei, Intel and Niantic. In May, O3DE announced its latest release, focused on performance, stability and usability enhancements. The O3D Engine community is very active, averaging up to 2 million line changes and 350-450 commits monthly from 60-100 authors across 41 repos.

About the Linux Foundation
Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world's leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world's infrastructure, including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation's methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Inquiries: pr@o3d.foundation



Rezilion Recognized as SBOM Tool Provider in Gartner Emerging Technologies Trend Report on Software Bills of Materials (SBOM) – PRNewswire

BE'ER SHEVA, Israel, Sept. 9, 2022 /PRNewswire/ -- Rezilion, an automated software vulnerability management platform, announced today that it has been named a vendor providing Innovative tools for SBOM management in Gartner's new report, titled Emerging Tech: A Software Bill of Materials Is Critical to Software Supply Chain Management.

The report highlights the growing importance of SBOMs in managing software supply chain risk at a time when the software industry increases its reliance on third-party and/or open-source code. Unlike internally-developed components, which adhere to rigorous security and quality guidelines, open-source software (OSS) can come from many sources and is far more prone to risk. These security and compliance risks are exacerbated by a lack of visibility and understanding of open-source dependencies within the software supply chain. SBOMs answer that challenge by providing a much-needed view into an organization's inventory of software, as well as the dependencies, licenses, compliance posture and provenance information.
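A minimal SBOM entry makes the "view into an organization's inventory" concrete. The fragment below loosely follows the CycloneDX JSON shape (a list of components, each with a type, name, version, and license), but the component names and versions are invented for illustration and the fragment omits most optional fields a real SBOM would carry.

```python
import json

# A minimal, CycloneDX-flavoured SBOM fragment. Component names and
# versions are invented; real SBOMs are generated by tooling, not by hand.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "example-parser", "version": "2.1.0",
         "licenses": [{"license": {"id": "MIT"}}]},
        {"type": "library", "name": "example-crypto", "version": "0.9.3",
         "licenses": [{"license": {"id": "Apache-2.0"}}]},
    ],
}

# An inventory like this is what lets a consumer answer "what is in this
# software, at which versions, and under which licenses?" without source access.
names = [c["name"] for c in sbom["components"]]
print(json.dumps(names))
```

Given such an inventory, checking a newly disclosed vulnerability becomes a lookup against name/version pairs rather than a code archaeology exercise.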

The software supply chain has become a target and is under constant attack, with high-profile breaches, such as the ones impacting SolarWinds and Kaseya. An SBOM is critical because it offers visibility, and also allows users to monitor vulnerabilities in parallel with whatever vulnerability management is conducted by the supplier. But having visibility isn't enough - organizations also need to be able to identify new software vulnerabilities. To meet this need, the report recommends that static SBOMs evolve to include dynamic and real time capabilities. Furthermore, the report highlights the need to go beyond identification of software vulnerabilities and leverage SBOMs to drive efficient remediation.

Using the Rezilion platform, customers can identify, prioritize, and remediate software vulnerabilities using a first-of-its-kind Dynamic SBOM. Unlike static SBOMs, which traditionally provide visibility into a single software environment at a specific point in time, Rezilion's Dynamic SBOM seamlessly plugs into all software environments, from development to production, and provides real-time visibility to all software components. Rezilion's Dynamic SBOM then does more than just uncover what software components are there: it reveals if and how they're being executed in runtime, providing organizations with an unparalleled solution to understand where bugs exist but also whether or not they could be exploited by attackers.
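The general idea of filtering an SBOM by runtime activity can be sketched generically. This is not Rezilion's implementation, just a toy illustration of the principle: cross-reference an SBOM's component list (hypothetical names here) against what a running process has actually loaded, so that vulnerabilities in never-executed components can be deprioritized.

```python
import sys

# Hypothetical SBOM component list for a Python service. "json" gets
# imported below; "hypothetical_component" never loads.
sbom_components = ["json", "hypothetical_component"]

import json  # noqa: F401 -- simulate the application exercising this component

# A "dynamic" view: which SBOM components are present in the live process?
loaded = set(sys.modules)
active = [c for c in sbom_components if c in loaded]
dormant = [c for c in sbom_components if c not in loaded]
print("active:", active, "dormant:", dormant)
```

Real runtime-reachability tooling works at a much finer grain (loaded libraries, executed functions) and across languages, but the prioritization logic is the same: a vulnerability in a dormant component is less urgent than one in an active component.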

Through Rezilion's Dynamic SBOM, customers benefit from:

"Gartner's analysis and outlook on SBOMs arrives at a critical time," said Liran Tancman, Co-Founder and CEO of Rezilion. "As more organizations embrace SBOMs as a vital component of their software security tooling, we're thrilled to be among the named providers. Our Dynamic SBOM gives organizations the ability to know how their dependencies are being exploited, which solidifies how well-aligned our current capabilities are with the evolution of SBOMs in the future."

Rezilion was named a vendor in the Software Bill of Materials (SBOM) category in the Gartner Hype Cycle for Open Source Software, 2022, and the SBOM and ASOC categories in the Gartner Hype Cycle for Application Security, 2022, in July of this year.

Rezilion's Dynamic SBOM is available now across CI and on-prem and cloud environments. A basic, free-of-charge version is available for use in CI through Rezilion's website. Get started today at http://www.rezilion.com/get-started.

Rezilion's platform automatically secures the software you deliver to customers. Rezilion's continuous runtime analysis detects vulnerable software components on any layer of the software stack and determines their exploitability, filtering out up to 95% of identified vulnerabilities. Rezilion then automatically mitigates exploitable vulnerabilities across the SDLC, reducing vulnerability backlogs and remediation timelines from months to hours, while giving DevOps teams time back to build.

Learn more about Rezilion's software attack surface management platform at http://www.rezilion.com and get a 30-day free trial.

Disclaimer: GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Media Contact: Danielle Ostrovsky, Hi-Touch PR, 410-302-9459, [emailprotected]

SOURCE Rezilion


Open Security: The next step in the evolution of cybersecurity – SC Media

When it comes to openness in technology, people first think of open source software. But IT professionals can (and should) explore another avenue of openness: open security.

Open security may sound like an oxymoron for many in the cybersecurity field. After all, many security vendors today employ secrecy to guard their threat detection and response methods. But the consequence of this secrecy has created a dangerous monoculture in security, characterized by a general lack of transparency, black-box products, and poor integrations. The prioritization of vendor competition over collaboration to safeguard users further supports the asymmetric advantage held by attackers and ensures one breach can take down an entire ecosystem.

Closed security, while good in the short-term for vendors, has not been good for users, customers, or organizations seeking better security.

As a CISO with more than two decades of experience leading tech and financial service organizations, I believe that open security, offering open detection rules, open artifacts, and open code, holds significant promise in making for transparent, interoperable, and accessible cybersecurity for all companies.

Open security ≠ open source

Think of open security as a philosophy, methodology, and way of doing business that shifts the dynamic of a security company's relationship with its users toward transparency. Open security encourages community engagement to further strengthen the security posture of vendors, their customers, and users.

By developing security in the open, vendors let security practitioners see the underlying code of a product and run tests before implementing it in their environment.

Open security also offers practitioners a better understanding of how threat detections work and how security technology operates within a given environment, allowing organizations to simplify their cybersecurity processes.

Most important, it helps information security professionals identify potential blind spots or known gaps in a product's code, and that's especially crucial given that no single security solution can protect against every known and unknown cyber threat.

Instead of spending time and resources verifying a chosen security vendors protection claims, open security lets companies focus on addressing gaps in their security technology stack and developing risk profiles for new and emerging threats. Similar to open source collaboration, security teams can leverage the cybersecurity community to identify security gaps faster than any security operations center can on its own.

In reality, security professionals have been playing defense with limited information thus far. When companies employ open security to look at their defense-in-depth, it offers a deeper understanding of how their organizations are protected.

Expand the talent pool with open security

The same information silos that lead to thousands of data breaches every year also contribute to the ever-widening cyber skills gap. By making security closed and proprietary, security vendors increase the barrier to entry for new security professionals.

As any security practitioner will admit, it's hard to break into the industry absent the ability to tinker with the tools to understand how they work. Security has wrapped itself in a dark-arts culture that reduces the diversity of its talent pool, deters new entrants, and encourages tolerance for complex and hard-to-use tools.

While many security practitioners get their start in the public sector, there are not enough of these hyper-skilled defenders to fill the ranks of organizations facing increasingly frequent and sophisticated attacks.

Developing security in the open lowers the barrier to entry for new cybersecurity professionals by making security accessible to a wider range of people. It encourages them to seize the opportunity to learn by letting them study the technology on a deeper level than what's available in the current market.

Cyber maturity requires transparency

While open security may sound radical, relying on security through obscurity as the primary form of protection against cyber threats does not work as an effective strategy for long-term success. The cybersecurity industry has transformed significantly in the past decade; now, it's time for the next phase of growth, and an open security model unlocks new opportunities to educate and empower users.

Ultimately, customer demand will determine whether vendors adopt open security. Today, security providers may not want to open the black box of security because they know too many bypasses and questionable coding choices exist because of balancing performance and security or developing in a closed environment with minimal accountability. Open security can help right that wrong. And if customers demand that transparency, security providers will oblige.

By adopting an open approach to security, providers can invest the time to improve their products and practices while encouraging a new and diverse talent pool to join their ranks. Doing so can strengthen the security industry and better equip organizations to tackle tomorrows threats.

Mandy Andress, chief information security officer, Elastic
