This is how I have studied the computer science career from home and for free: follow this complete agenda – Gearrice

Programming

Once we have covered the basics of programming from scratch, we can move forward with these other resources. To start coding our own projects, we should follow these materials.

Perhaps those not yet initiated into this sector of software do not know it, but everything related to math and calculus is fundamental to studying programming. Here we find a series of courses that will allow us to learn mathematics and calculus applied to this type of environment.

Now is when we get into computer systems as such with these courses. Here we will learn the basic principles of computers and enter the world of operating systems. We will begin to work with networks and learn the different structures that we are going to find in them.

Everything related to the world of programming and coding in different languages covers many environments and sectors. Here we find a series of resources focused on learning data structures, variables, algorithms, dynamic programming and more.

It goes without saying that security in our software developments is increasingly important. That is precisely why, among these courses, there is a section especially dedicated to it. We will learn the fundamentals of security in programming, as well as how to identify possible vulnerabilities.

Obviously when we enter this sector, one of our objectives is to develop our own projects and programs. The courses that we will show you below focus on this.

When creating our own projects, we must maintain certain rules and respect certain conventions. This is precisely what the materials we are discussing now cover.

Once we are clear about the basic concepts of programming, we will continue to delve into more advanced coding.

In this case, once we are clear about everything else, we will delve into more advanced programming structures.

When creating our own more professional projects, we must also pay special attention to security.


The 25 most popular programming languages and trends – Help Net Security

CircleCI released the 2022 State of Software Delivery report, which examines two years of data from more than a quarter billion workflows and nearly 50,000 organizations around the world, and provides insight for engineering teams to understand how they can better succeed.

"Our findings show that elite software delivery teams are adopting developer-friendly tools and practices that allow them to automate, scale, and successfully embrace change when necessary. The ability to move quickly is crucial in today's competitive ecosystem, but just as important is an organization's ability to attract and retain talent, and eliminate obstacles for team success," said Michael Stahnke, VP of Platform at CircleCI. "From development languages to testing frameworks to deployment scenarios, high performers are gravitating toward tools that encourage collaboration, repeatability, and productivity."

TypeScript overtakes JavaScript as the most popular language due to its developer-friendly features.

TypeScript projects rank higher than JavaScript projects in success rate and throughput, suggesting that TypeScript helps developers catch smaller errors locally, allowing them to commit working code more frequently and reliably than JavaScript developers.
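As a hedged illustration of that claim, here is a minimal sketch (the function name and values are invented for this example) of the kind of small error TypeScript surfaces at compile time, before the code is ever committed:

```typescript
// A type annotation lets the compiler reject a bad call before the code runs;
// plain JavaScript would only fail, or silently misbehave, at runtime.
function totalPrice(prices: number[], taxRate: number): number {
  const subtotal = prices.reduce((sum, p) => sum + p, 0);
  return subtotal * (1 + taxRate);
}

console.log(totalPrice([10, 20], 0.1).toFixed(2)); // "33.00"

// totalPrice([10, 20], "10%");
// ^ TypeScript compile error: Argument of type 'string' is not
//   assignable to parameter of type 'number'. In JavaScript the same
//   call would run and quietly produce NaN.
```

Catching the bad argument locally, rather than in CI or production, is the mechanism behind the higher commit reliability the report describes.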

Productivity and confidence-boosting benefits are a key driver of TypeScript's adoption at the enterprise level and are a natural complement to the developer experience improvements that continuous integration provides.

Usage of HashiCorp Configuration Language (HCL) on CircleCI has grown steadily over the past several years, climbing three spots since 2019 to become the ninth-most popular language used on the platform.

HCL also appeared on the list of fastest-growing languages in GitHub's 2018 and 2019 State of the Octoverse reports, suggesting that infrastructure as code (IaC) has crossed the chasm from individual practitioners to widespread adoption among organizations delivering software at scale.

"Infrastructure as code is increasing the speed with which IT can respond to changing business needs," said Rob Zuber, CTO of CircleCI. "Engineering teams that leverage popular programming languages like HCL when deploying IaC are able to make the DevOps process more legible by recording manual processes in a clear and precise way, resulting in shorter lead times for developing features and bug fixes, as well as greater agility concerning changes in development priorities."

Gherkin projects had the fastest mean time to recovery of all languages measured in CircleCI's report, implying that Cucumber's detailed error reporting gives developers highly actionable information on which to focus their debugging efforts.

The report also shows that the most successful engineering teams routinely meet four key benchmarks. By hitting these benchmarks, high-achieving teams are getting the maximum value from their software delivery pipelines:


Open source provides the building blocks for future technology innovation – Bizcommunity.com

Open source is by no means a new technology, but it has matured over the past 30-plus years and with the latest wave of digital transformation, its role has become increasingly prominent. In the everyday business sense, it is still a cost-effective option for operating systems, databases, and analytical tools. However, where the nature and benefits of open source are truly being realised is in the artificial intelligence (AI) and machine learning spaces.

Sumit Sharma, enterprise architect and head of advisory services at In2IT

The open, community-driven, and collaborative environment created by open source is the perfect ground for these new-age initiatives, fostering innovation while ensuring that there is no single entity driving the agenda around the technology that will shape our future.

Open-source solutions remain a viable option for organisations looking to reduce capital and operational expenses and work within increasingly tight budget constraints. The Covid economy has driven more businesses than before to consider these types of solutions.

The speed of innovation is also one of the appeals of open source, and the platform lends itself well to projects that require high levels of customisation. However, while the barrier to entry for proprietary software and systems is financial capital, there remains a barrier to entry around open source in the form of human capital.

While licensing costs are reduced and the source code is open so it can more easily be tailored and customised, open-source solutions are only free to a certain extent, and there are certain trade-offs. The skills required to develop and maintain software and systems using open source are not something that every business has or can afford to keep in-house.

For this reason, although the energy and technology sectors in Southern Africa are adopting open source, it is not the preferred solution in this region, or up into Africa, due to the well-known skills shortage and still-maturing market. There remains a lot of room for growth.

The high barrier to entry in terms of skills and knowledge is one of the main detractors for shifting to open-source solutions, especially in the South African context and through Africa in general. The decision of whether or not open source is a viable platform for any business depends on the needs and personality of the organisation.

A conservative organisation that appreciates the presence of the Original Equipment Manufacturer (OEM) is not well suited to open source, nor is an enterprise that is looking to outsource the majority of the IT function. Your IT partner can assist with ensuring the correct decision is made around the right business solutions.

However, open source is providing the building blocks for future technology innovation. It is seeing a lot of development, major proprietary vendors are acquiring open source houses, and new technologies including AI, machine learning and blockchain innovations are being driven by open source.

The futuristic concept of Artificial General Intelligence or AGI is something that can only come from the collaborative environment of open source, as is the evolution of natural language processing to natural language understanding.

Open source is driving the future of technology. It is therefore an option that is worth exploring for many businesses. To really exploit the benefits, skills and experience in the code are critical, and where businesses cannot maintain these in-house, the right IT partner is the key. A specialised IT partner can also provide maintenance and support to ensure solutions continue to deliver as required.


Why You Need to Empower Your Developers – DevOps.com

In recent years, the demand for software developers worldwide has increased. However, in 2020, when the pandemic hit, many CIOs pulled back considerably on their IT spending. This trend has quickly reversed. In 2022, worldwide IT spending is expected to reach $4.4 trillion. Additionally, according to the U.S. Bureau of Labor Statistics, employment in software development is projected to rise by 22%.

Although organizations across several industries are increasing investment in IT, the supply of technology talent has struggled to keep pace. With their skills in demand, developers understand their importance and can better dictate their terms, including higher salary compensation and enhanced benefits. Organizations that don't meet these expectations risk losing coveted talent.

Remote work continues to be a top priority for many developers, along with great company culture. To create a positive work environment for developers, business leaders must offer a culture that makes tech employees feel a sense of belonging and purpose while working remotely. Many developers also want opportunities for growth and the ability to provide agile and innovative toolsets to help increase productivity and collaboration.

Let's discuss some of the ways modern organizations can empower developers and how business leaders can build a workplace culture that attracts some of the best developers in the business.

Open source usage has increased significantly in the workplace and continues to be a formative resource for developers. The open source community provides many of the new technologies that developers seek, and even require, in order to work at the top of their abilities. For example, open source provides some of the most popular programming languages available, including JavaScript, C++, and Python. The availability of multiple languages makes it easier for developers to write code and produce high-quality work efficiently.

Open source communities help developers build their skills and allow them to customize their technology stacks, as a result allowing them to optimize their productivity and learn different techniques from other open source projects. There is little to no debate that open source is critical to the tech industry and developers. Embracing open source can increase satisfaction and create a better workforce overall.

In the past, office perks such as free food and collaborative social areas were used to attract new employees and retain current employees. However, in the remote work world, these can no longer remain a priority. Instead, business leaders should prioritize the holistic professional experience that their organization provides its employees. For developers, in particular, building this experience falls on CIOs and IT leaders to ensure they are providing workplace satisfaction, modern technology and flexible processes. To do so, they need to focus on tools that promote productivity and communication.

Most employees are familiar with the workflow software and collaboration tools that many workplaces provide. Without these offerings, remote workers would feel confused by their workload at worst and adrift from their co-workers at best. For developers, however, these general collaboration tools won't cut it, as they are poorly suited for engineering workflows. General collaborative software doesn't offer deep customization, nor does it adapt natively to on-prem or private cloud deployment. As a result, developers have little to no use for these tools.

And that lacking sense of utility makes a crucial difference. Developer burnout is at an all-time high and inefficient workflows can keep teams from performing at their best. Or, worse, it can frustrate individual employees, leading to lower retention rates and less attractive offerings for prospective employees. Business leaders looking to offer developers an improved working environment should provide a more integrated, developer-oriented tech stack.

While the market has changed, for many developers, fragmented tools remain a top productivity challenge. Investing in modern tools and programming languages while also removing tech complexities are the keys to improving productivity and simultaneously creating a better developer experience. This is why organizations must implement open source applications so developers can have complete visibility and control over the entire stack.

With tool fragmentation causing a number of distractions, developers can become frustrated and spend time responding to every individual incident. This poses the threat of a developer losing their love of the work. By doing away with fragmentation from old technology and selecting relevant solutions, business leaders allow developers to do what they do best: Develop software.

When organizational leaders empower their developers, they in turn find their product teams are more productive and feel more tied to the organization as well as their customers and each other. Thus, what is best for developers is best for the organization at large.


Everything You Need to Know About Version Control – Spiceworks News and Insights

Version control is a system that tracks the progress of code across the software development lifecycle and its multiple iterations, maintaining a record of every change, complete with authorship, timestamp, and other details, and aiding in change management. This article details how version control in DevOps works, the best tools, and its various advantages.


The process of monitoring and managing changes to software code is known as version control, also sometimes referred to as revision control or source control. Version control systems are software tools that help development teams track changes to source code over time.

Version control systems enable software teams to operate more swiftly and intelligently as development environments have grown more complex. They are beneficial for DevOps teams in particular because they speed up successful deployments and reduce development time.

Version control pinpoints the trouble spots when developers and DevOps teams work concurrently and produce incompatible changes so that team members can compare differences or quickly determine who committed the problematic code by looking at the revision history. Before moving on with a project, a software team can use version control systems to resolve a problem.

Software teams can understand the evolution of a solution by examining prior versions through code reviews. Every alteration to the code is recorded by version control software in a particular type of database. If an error is made, developers can go back in time and review prior iterations of the code to remedy the mistake while minimizing disturbance for all team members.
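To make the idea concrete, here is a minimal, hypothetical TypeScript sketch of the per-change record described above (authorship, timestamp, content); the `History` class and its methods are invented for illustration and do not correspond to any real VCS API:

```typescript
// Each recorded change carries who made it, when, why, and the file state.
interface Commit {
  id: number;
  author: string;
  timestamp: Date;
  message: string;
  content: string; // snapshot of the tracked file at this revision
}

class History {
  private commits: Commit[] = [];

  // Record a new revision and return its identifier.
  commit(author: string, message: string, content: string): number {
    const id = this.commits.length + 1;
    this.commits.push({ id, author, timestamp: new Date(), message, content });
    return id;
  }

  // "Go back in time": return the content of any earlier revision unchanged.
  checkout(id: number): string {
    const found = this.commits.find(c => c.id === id);
    if (!found) throw new Error(`no revision ${id}`);
    return found.content;
  }

  // The revision history a reviewer would inspect to see who changed what.
  log(): string[] {
    return this.commits.map(c => `${c.id} ${c.author}: ${c.message}`);
  }
}
```

Because every revision remains retrievable, reverting a mistake is a lookup rather than a reconstruction, which is the property the paragraph above relies on.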

Collaboration among employees, keeping several iterations of created information, and data backup are just a few of the issues that any global organization may encounter. For a business to succeed, developers must overcome each of these issues, and this is where a version control system becomes necessary.

The first version control system was mainframe-based, and each programmer used a terminal to connect to the network. The first server-based, or centralized, version control systems that utilized a single, shared repository were introduced on UNIX systems; later, these systems were made accessible on MS-DOS and Windows.

Versions can be recognized by labels or tags, and baselines can be used to mark approved versions or versions that are particularly important. Versions that have been checked out can be used as a branching point for code from the main trunk by various teams or individuals. The first version to check in will always win when versions are checked out and checked in.

Some systems may offer version merging if other versions are checked out so that one can upload new modifications to the central repository. Branching is a distinct approach to version control where development programs are duplicated for parallel versions of development while keeping the original and working on the branch or making separate modifications to each.

Each copy is called a branch, and the original program from where it was derived is known as the trunk, the baseline, the mainline, or the master. Client-server architecture is the standard model for version control. Another technique is distributed version control, where all copies are kept in a codebase repository, and updates are made by sharing patches or modifications across peers. Version control allows teams to work together, accelerate development, settle issues, and organize code in one place.
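The trunk-and-branch model described above can be sketched as a toy data structure. The names here (`Repo`, `branch`, `merge`) are invented for illustration, and the merge is deliberately naive (changes on the source branch simply overwrite the target), not how a real VCS detects or resolves conflicts:

```typescript
// A branch starts life as a copy of another line of development (the trunk),
// diverges independently, and merging folds its changes back in.
type Files = Map<string, string>;

class Repo {
  branches = new Map<string, Files>([["trunk", new Map()]]);

  // Duplicate an existing line of development under a new name.
  branch(from: string, name: string): void {
    this.branches.set(name, new Map(this.branches.get(from)!));
  }

  write(branch: string, file: string, content: string): void {
    this.branches.get(branch)!.set(file, content);
  }

  // Naive merge: every file changed on `from` overwrites the copy on `into`.
  merge(from: string, into: string): void {
    const target = this.branches.get(into)!;
    for (const [file, content] of this.branches.get(from)!) {
      target.set(file, content);
    }
  }
}
```

The key property the sketch shows is isolation: edits on the feature branch leave the trunk untouched until a merge is explicitly requested.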

See More: What Is Jenkins? Working, Uses, Pipelines, and Features

Globally, version control systems comprise a sizeable industry, poised to reach $716.1 million by 2023 (as per MarketsandMarkets research). In this market, 13 tools stand out. They are:

Software that carries out software version control, configuration management, and change management tasks is known as Configuration Management Version Control (CMVC). The system is client-server based, with servers for several Unix flavors and command-line and graphical clients for many platforms. It can track file history even after a file is renamed, because the filename on disk is a number while the human-readable name is stored in the database, where developers may change it. Its decentralized administration makes delegating authority possible.

Git is among the most powerful version control programs on the market. Linus Torvalds, the creator of Linux, created this distributed version control system. It has a minimal memory footprint and can track changes in any file. Add this to its extensive feature set and you get a full-featured version control system that can handle any project. Due to its straightforward workflow, it is employed by Google, Facebook, and Microsoft.

Apache Subversion, a free and open-source version control system, enables programmers to manage both the most recent and previous iterations of crucial files. It can track modifications to source code, web pages, and documentation for large-scale projects. Subversion's main features are workflow management, user access limits, and cheap local branching. Both commercial products and individual projects can be managed using Subversion, a centralized system with many powerful features. It is one of Apache's many open-source solutions, like Apache Cassandra.

Azure DevOps Server, formerly Team Foundation Server (TFS), is a group of software development technologies you can use in conjunction; you can utilize all Azure DevOps services or just the ones you require to improve your current workflow. In addition to access controls and permissions, bug tracking, build automation, change management, collaboration, continuous integration, and version control are all elements of this source code management program.

One of the first version control systems developed, CVS is a well-known tool for open-source and commercial developers. You can use it to check in and out the code you intend to work on. Teams can integrate their code modifications and add distinctive features to the project. CVS uses delta compression to effectively compress version differences and a client-server architecture to manage change data. In larger projects, it saves a lot of disk space.
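The delta idea behind that disk saving can be illustrated with a deliberately simplified sketch: store the first revision in full, then keep only the changed lines for later revisions. Real CVS/RCS deltas also handle inserted and deleted lines; this toy version assumes line-for-line replacement only:

```typescript
// A delta records a revision as [lineNumber, newLine] pairs relative to the
// previous revision, instead of storing the whole file again.
// Simplifying assumption: lines are replaced in place (no insert/delete).
type Delta = Array<[number, string]>;

function diffLines(oldText: string, newText: string): Delta {
  const oldLines = oldText.split("\n");
  const newLines = newText.split("\n");
  const delta: Delta = [];
  for (let i = 0; i < newLines.length; i++) {
    if (newLines[i] !== oldLines[i]) delta.push([i, newLines[i]]);
  }
  return delta;
}

// Reconstruct the newer revision from the older one plus the stored delta.
function applyDelta(oldText: string, delta: Delta): string {
  const lines = oldText.split("\n");
  for (const [i, line] of delta) lines[i] = line;
  return lines.join("\n");
}
```

Storing a one-line delta instead of a second full copy of the file is where the savings accumulate across the thousands of revisions in a large project.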

See More: What Is Serverless? Definition, Architecture, Examples, and Applications

Developers and businesses adore Mercurial for its search capabilities, backup system, data import and export, project tracking and management, and data migration tool. The free source control management program Mercurial supports all popular operating systems. It is a distributed versioning solution and can easily manage projects of any size. Through extensions, programmers can quickly expand the built-in functionality. For software engineers, source revisioning is made simpler by its user-friendly and intuitive interface.

Software development teams can collaborate and keep track of all code changes using GitHub. You can track code modifications, go back in time to correct mistakes, and work with other team members. GitHub bills itself as the most reliable, secure, and scalable developer platform in the world, offering resources and services to help you build cutting-edge developer communities.

Private Git repositories are hosted by AWS CodeCommit, a managed version control system. It integrates smoothly with other Amazon Web Services (AWS) products, and the code is hosted in secure AWS settings, so it's a suitable fit for current AWS users. AWS integration also provides access to various helpful plugins from AWS partners, aiding program development. You don't have to worry about maintaining or scaling your source control system when you use CodeCommit.

As a component of the Atlassian software family, Bitbucket can be connected with other Atlassian products like HipChat, Jira, and Bamboo. Some of Bitbucket's key features are code branches, in-line comments and discussion, and pull requests. It can be deployed in the company's data center, on a local server, or in the cloud. With Bitbucket, you can collaborate with up to five people for free, which means you can try the platform without spending any money.

RhodeCode is a platform for managing public repositories. RhodeCode offers a contemporary platform with unified security and tools for any version control system, in contrast to old-fashioned source code management systems or Git-only tools.

The platform is designed for behind-the-firewall enterprise systems that require high levels of security, sophisticated user management, and standard authentication. RhodeCode has a convenient installer, it may be used as a standalone hosted program on your server, and its Community Edition is free without restriction.

CA Panvalet establishes and maintains a control library of source programs, centralizes the storage of the source, and offers quick access for maintenance, control, and protection against loss, theft, and other perils. Like Microsoft Visual SourceSafe for personal computers, Panvalet is a closed-source, proprietary system for controlling and versioning source code. Users check out files to edit and then check them back into the repository using a client-server architecture.

Perforce offers a single source of truth for all development. The company behind it is Perforce Software Inc. It is a networked client-server revision control tool that supports several operating systems, including OS X, Windows, and Unix-like platforms, and is primarily used in large-scale development setups. Through the tracking and management of changes to source code and other data, it streamlines the development of complicated products. Configuration changes are branched and merged using its Streams feature.

GNU Bazaar (formerly Bazaar-NG) is a command-line utility from Canonical, the company that created Ubuntu, and is a distributed and client-server revision control system. Numerous projects have used it, including MySQL, Ubuntu, Debian, and the Linux Foundation. GNU Bazaar is truly cross-platform, running on Linux, Windows, and OS X. High storage efficiency, offline mode support, and external plugin support are some of GNU Bazaar's finest qualities. Additionally, it enables a wide range of development workflows.

Using a version control system, one can obtain the following benefits:

It goes without saying that team members should be able to work simultaneously, but even individuals working alone can profit from the ability to focus on separate streams of change. By designating a branch in VCS tools, developers and DevOps engineers can keep several streams of work separate while still having the option to merge them back together and ensure that their changes don't conflict.

Many software development teams use the branching strategy for every feature, every release, or both. Teams have various workflow options to select from when deciding how to use the branching and merging features in a VCS.

The development of any source code is continuous in the modern world. There are always more features to add, more people to target, and more applications to create. When working on a software project, teams frequently maintain multiple clones of the main project to build new features, test them, and ensure they work before merging them into the main project. The ability to develop several sections of the code concurrently can save considerable time.

The team tasked with the project consistently generates new source codes and makes changes to the already existing code. These modifications are kept on file for future use and can be consulted if necessary to determine the true source of a given issue. If you have a record of the changes made in a particular code file, you and new contributors may find it easier to comprehend how a specific code section came to be. This is vital for working efficiently with historical code and allowing developers to predict future work with accuracy.

This refers to every modification made over time by numerous people. File addition, deletion, and content modification are all examples of changes. The ease with which various VCS programs handle file renaming and movement varies. This history should also include the author, the date, and written comments outlining the rationale behind each change.

The ability to go back to earlier iterations enables root cause analysis of faults, which is essential when fixing issues in software that is more than a few years old. And while software is still being actively developed, nearly every state of the code counts as an earlier version.

With a distributed version control system, everything except pushing and pulling can be done without an internet connection, so most development can happen on the go, away from home, or in an office. Contributors make changes to the repository and can view its running history on their own hard drives.

With more flexibility, the team can resolve bugs with a single change-set, increasing developers' productivity. Developers can perform routine development tasks quickly with a local copy. With a DVCS, developers avoid waiting on a server for everyday activities, which can impede delivery and be inconvenient.

See More: Top 10 DevOps Automation Tools in 2021

Whenever a contributor clones a repository using a version control system, they are essentially making a backup of its most recent version; this is probably version control's most significant advantage. Having numerous backups on various workstations protects the data from loss in the event of a server failure.

Unlike a centralized version control system, a distributed version control system does not rely on a single backup, increasing the reliability of development. Although it's a widespread misconception, having numerous copies won't take up much space on your hard drive, because most development involves plain text files and most systems compress data.

An open line of communication between coworkers and teams results from version control, because sharing code and being able to track past work creates transparency and consistency. It makes it possible for different team members to coordinate workflow more straightforwardly, and this improved communication has knock-on benefits.

Team members can operate more productively as a result of effective workflow coordination. They can more easily manage changes and work in harmony and rhythm. This presents the many team members as a single entity that collaborates to achieve a particular objective.

Management can get a thorough picture of how the project is progressing thanks to version control. They know who is responsible for the modifications, what the modifications are intended to accomplish, when they will be completed, and how the changes will affect the document's long-term objective. It also helps management spot persistent issues that particular team members could be causing.

The accurate change tracking provided by version control is a great way to get your records, files, datasets, and/or documents ready for compliance. To manage risk successfully, keeping a complete audit trail is essential. Regulatory compliance must permeate every aspect of a project. It requires identifying team members who had access to the database and accepting accountability for any changes.

The seamless progress of the project is ensured by version management. Teams can collaborate to simplify complex processes, enabling increased automation and consistency and progressive implementation of updated versions of these complex procedures. The updated versions allow programmers to revert to a previous version when errors are found. Testing is simpler if you go back to an earlier version because bugs are caught sooner and with less user impact.

Version management prevents many outdated versions of the same document from circulating, which reduces errors caused by information displayed inconsistently across different documents. Obsolete versions of documents should be converted to a read-only state once evaluation is complete; this restricts further modification and leaves little room for future mistakes.

See More: DevOps vs. Agile Methodology: Key Differences and Similarities

Version control systems are a vital component of modern-day software development. They help maintain a reliable source code repository and ensure accountability no matter who works on the code. They also make finding and addressing bottlenecks easier by simplifying root cause analysis. Ultimately, version control provides a single pane of glass for collaborative and iterative application development in short release cycles.



Organizations Turn to Open-Source Software to Improve HPC and AI Applications – CIO

High performance computing (HPC) is becoming mainstream for organizations, spurred on by their increasing use of artificial intelligence (AI) and data analytics. A 2021 study by Intersect360 Research found that 81% of organizations that use HPC reported they are running AI and machine learning or are planning to implement them soon. It's happening globally and contributing to worldwide spending on HPC that is poised to exceed $59.65 billion in 2025, according to Grand View Research.

Simultaneously, the intersection of HPC, AI, and analytics workflows is putting pressure on systems administrators to support ever more complex environments. Admins are being asked to complete time-consuming manual configurations and reconfigurations of servers, storage and networking as they move nodes between clusters to provide the resources required for different workload demands. The resulting cluster sprawl consumes inordinate amounts of information technology (IT) resources.

The answer? For many organizations, it's a greater reliance on open-source software.

Reaping the Benefits of Open-Source Software & Communities

Developers at some organizations have found that open-source software is an effective way to advance the HPC software stack beyond the limitations of any one vendor. Examples of open-source software used for HPC include Apache Ignite, Open MPI, OpenSFS, OpenFOAM, and OpenStack. Almost all major original equipment manufacturers (OEMs) participate in the OpenHPC community, along with key HPC independent software vendors (ISVs) and top HPC sites.

Organizations like Arizona State University Research Computing have turned to open-source software like Omnia, a set of tools for automating the deployment of open source or publicly available Slurm and Kubernetes workload management along with libraries, frameworks, operators, services, platforms and applications.

The Omnia software stack was created to simplify and speed the process of deploying and managing environments for mixed workloads, including simulation, high throughput computing, machine learning, deep learning and data analytics, by abstracting away the manual steps that can slow provisioning and lead to configuration errors.

Members of the open-source software community contribute code and documentation updates in response to feature requests and bug reports. They also provide open forums for conversations about feature ideas and potential implementation solutions. As the open-source project grows and expands, so does the technical governance committee, with representation from top contributors and stakeholders.

"We have ASU engineers on my team working directly with the Dell engineers on the Omnia team," said Douglas Jennewein, senior director of Arizona State University (ASU) Research Computing. "We're working on code and providing feedback and direction on what we should look at next. It's been a very rewarding effort. We're paving not just the path for ASU but the path for advanced computing."

ASU teams also use Open OnDemand, an open source HPC portal that allows users to log in to an HPC cluster via a traditional Secure Shell Protocol (SSH) terminal or via a web-based interface that uses Open OnDemand. Once connected, they can upload and download files; create, edit, submit and monitor jobs; run applications; and more via a web browser, in a cloud-like experience with no client software to install and configure.

Some Hot New Features of Open-Source Software for HPC

Here is a sampling of some of the latest features in open-source software available to HPC application developers.

The benefits of open-source software for HPC are significant. They include the ability to deploy faster, leverage fluid pools of resources, and integrate complete lifecycle management for unified data analytics, AI and HPC clusters.

For more information on, and to contribute to, the Omnia community, which includes Dell, Intel, university research environments, and many others, visit the Omnia GitHub.



Want to know the future of FOSS? You can look it up in a database – The Register

Opinion In IT, there is sexy tech, there is fashionable tech, and there are databases. Your average database has very little charisma, however. Nobody's ever made a movie about one.

They should. They should make lots of movies. (The Reg must note at this point that we're not counting the vendors in this. Some of them have, indeed, spent a bit of money on just such a project.)

You don't have to spend long in any aspect of IT to discover that databases are the soul of IT, its constant animating force. From one perspective, everything digital is a specialized database: word processors, spreadsheets, shoot-em-ups, streaming services, from Google to your disk filing system. The storing, sorting and retrieval of data? That's it. That's the whole game. It has been the case ever since Herman Hollerith designed the punched card tabulating machine in the late nineteenth century.

As for databases that call themselves that, they're the engine of corporate computing. Their capability, reliability and maintainability are essential, and the metrics of performance and expense are unambiguous. Corporate decisions about databases are one of the purest indicators of how IT is sourced and deployed. Hype is quickly exposed, as is the good stuff.

So when you look at the databases developers actually choose, you're seeing a market model with wider implications. Open source versus proprietary, hosted versus on-prem, innovation versus maturity: all primary concerns across IT, all crystallized in DB decisions.

But there's an equally important flip side: how the developers and suppliers of DB software manage to stay in business themselves. That's the other great question of IT in the 2020s: how do you make money either fighting or flaunting FOSS?

That's the first lesson from a feature discussing today's FOSS databases and their respective licensing terms: open source has won. It's about time too. Before FOSS was a corporate option, the big guys were ruthless at monetizing their position in the heart of IT.

Licensing models were set at what clients could bear, not what was equitable. Random audits could turn accidental license breaches into very expensive mistakes, and it could be very hard to manage those licenses if you were trying to scale. Or if license management was curiously difficult.

Why did anyone put up with this? They had to: these were the costs of mitigating the risk of ushering in unknowns to the galactic center of your company's business model.

Times change, but memories abide. It is hard to overstate the organizational resentment towards what looks, feels and costs like extortion, or the readiness to explore options that do not have that particular pistol to point. Momentum has built for FOSS, as more people use and develop it. The quality and variety of the code have increased, and deployment has edged deeper into risk-averse, and rich, areas of the market.

There is a lag between what developers choose and what is actually deployed, but the trend is unambiguous and continuous. Proprietary software has lost and is losing market share; open source has gained and is gaining. By some measures Oracle was just about equal to MySQL in 2021. Guess which is sliding down the snake, and which is climbing the ladder.

This is it. This is the canonical proof that open source can achieve everything needed for corporate software, when there's a big enough community of motivated developers. Can it in turn support that community?

Again, looking at databases in particular gives a good lens for the bigger picture. FOSS was born of idealism, frustration, opportunity and optimism. It recognized the inequity of centralized control of software, born of a time when entry costs to making software were very high and distribution very difficult, in an environment where neither was any longer the case. Like so many successful revolutions, the very act of winning changed the dynamics that made the win possible.

The ideal FOSS license is completely unrestrained: take the code, do what you like, just ask those who come after to do the same. That works in many cases, where those who do most of the work can parlay their expertise into business relationships.

However, it doesn't work so well in the age of hyperscalers, where hosted services can craft deals that require minimal interaction and risk for clients, based on FOSS running behind an API. Hence the advent of ideas like BSL, the Business Source License, that fulfils part of the FOSS ideal by making source open, but restricts commercial use. That can be any commercial use, or specific cases like selling a hosted service - something that databases are very well suited for.

Is this a betrayal of FOSS? Many think so, and in a model that relies on community as a proxy for closed-door development, that could be fatal. Or is it a sensible evolution, absorbing a very well-tested model of free for non-commercial use, subsidized by production use, that has been part of proprietary software for decades?

The real danger isn't some dilution of FOSS ethics, but the resurgence of lock-in. While BSL and its ilk have that danger, so does any FOSS project too dependent on a powerful sponsor. The fact that the code is open is a strong safeguard: that which can be rewritten cannot be constrained. Ask IBM about its proprietary but visible PC BIOS.

This is an evolving market, but it's evolving into a more just, more sustainable and more flexible one as FOSS ideas change the landscape.

You'll have problems if you change your model in ways your initial supporters didn't expect, so be aware of how the evolution is progressing and build in your long-term options at the start. If you're as open about your plans as you are about your software, that's good enough.

The evolution of the dull old database not only predicts the future, it's helping to define it. And that's as sexy as any tech gets.


Building a Retro Linux Gaming Computer – Part 18: Run Away and Join the Circus – GamingOnLinux

Continued from Part 17: The Llama Master

In writing this series I have spent a great deal of time searching eBay for older Linux games to cover, and one night I came across a curious sight. Although it was being sold for Windows, I found a listing for a physical copy of the free game Circus Linux! as published by Alten8. At first I figured it would just be another keep case in my collection with "Linux" on the cover, but upon inspecting the contents of the disc, it soon became apparent just how cheap this retail release was.

All that Alten8 seems to have done was package the source directory with a Windows binary already built, with the install instructions urging you to "copy and paste the folder CIRCUS from the CD" and then click on the circuslinux.exe file. With the source code included I decided it would be trivial to also build the game for Linux, and in fact the included INSTALL.txt file even tells you how to compile and install the game on Linux with GNU Automake.

You do need the relevant SDL development libraries as packaged by your distribution, and unfortunately Alten8 did seem to strip away some of the game's documentation files, meaning that the build will fail at first. To get around this I just used the "touch AUTHORS.txt COPYING.txt CHANGES.txt README-SDL.txt" command to create blank placeholders, but apart from the novelty you really are just better off grabbing the source code yourself online.
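The placeholder workaround can be reproduced as follows (a sketch: the four filenames come from the article, the directory name is invented, and the Automake steps are left commented out because they require the CD's actual source tree and the SDL development libraries):

```shell
# Stand-in for the CIRCUS source directory copied from the CD.
mkdir -p circuslinux-src && cd circuslinux-src

# Recreate the documentation files Alten8 stripped, as empty placeholders,
# so that Automake's file checks no longer fail.
touch AUTHORS.txt COPYING.txt CHANGES.txt README-SDL.txt
ls

# With the real source tree in place, the usual GNU Automake sequence follows:
#   ./configure
#   make
#   sudo make install
```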

Circus Linux! itself is a remake of the older Circus Atari, which was itself a home console version of the even older Circus arcade cabinet by Taito. Circus was a block breaker game inspired by Breakout, with the main change being that the game simulates a teeterboard act, with the blocks becoming balloons and your paddle a seesaw. This has a marked effect on the gameplay, as you need to ensure your clown lands on the correct end of the teeterboard.

Circus Linux! goes all in on the theme in a way that the original Atari version never could, sporting bright, colourful animated graphics and fun upbeat music and sound effects, showing off the power of the then still fresh Simple DirectMedia Layer. One aggravation is that the mouse can leave the window when not playing full screen, but the game does at least support a number of screen modes, including a lower graphics setting for less powerful computers.

Needless to say, even on full the game did not cause my Pentium III 500 MHz to break a sweat, but I appreciate the option. Beyond this the game features a number of gameplay modifiers: "Barriers" which can block your shots, "Bouncy Balloons" that can cause the clown to careen back down on contact, and "Clear All" that demands every balloon be popped on a stage before proceeding to the next screen.

Like most arcade games, Circus Linux! is a test of both your dexterity and endurance, challenging you to hold on to your lives for as long as possible while racking up the highest possible score. The game also has support for local hot seat multiplayer, either in a cooperative mode where you both get the chance to help one another pop balloons, or an adversarial mode where you compete to earn the highest possible score.

Perhaps more compelling than Circus Linux! on its own is the legacy of its creator Bill Kendrick, a prolific figure in the free and open source gaming scene, and his development house New Breed Software. He is most famous for starting work on the platformer SuperTux and crafting the drawing program Tux Paint, helping to popularize Tux as a gaming icon with others in the Tux4Kids initiative, all alongside the work of people like Steve Baker and Ingo Ruhnke.

Bill Kendrick has also created a number of other arcade conversions, edutainment titles, and experimental software toys which he ports to the widest possible range of platforms, all of which can still be found on the New Breed Software website. Five of them, X-Bomber, Mad Bomber, 3D Pong, ICBM3D, and Gem Drop X, were included on 100 Great Linux Games. He even made a chat bot called Virtual Kendrick, inspired by a comment that he should port himself to the Zaurus handheld.

I have avoided it long enough, but I am feeling the itch to play a first person shooter again. As has already been made clear, Linux has never had a shortage of them, but some are a lot harder to find today than others. The next game I am to cover is one of the rarest of them all, due to its limited physical distribution and an attachment to a Belgian company now better known for maintaining an operating system than for porting games.

Carrying on in Part 19: Sinsational

Return to Part 1: Dumpster Diving


OpenAPIs and Third-Party Risks – Security Boulevard

With APIs, details and specifics are vital. "Each API usually takes in very specific requests in a very specific format and returns very specific information," Sammy Migues, principal scientist at Synopsys Software Integrity Group, explained. "You make the request and you get the information." APIs can be constructed in different ways, but one of the most common forms of web-based APIs is REST.

"OpenAPI is a standardization of formats for REST APIs, a way for all people working on any REST APIs anywhere to have a common way to describe those APIs," said Migues. "This includes the API endpoints, authentication methods, parameters for each operation the API supports, and then contact information, terms of use, licensing and other general information."

By standardizing this collective documentation, it is easier for developers to understand the software and know exactly how it will behave in different circumstances.
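A minimal OpenAPI document makes the pieces Migues describes concrete: endpoints, parameters, an authentication scheme, and contact and license metadata. This is a hypothetical sketch (every name in it is invented), written out here with a shell here-document:

```shell
# Write a minimal, hypothetical OpenAPI 3.0 description to a file.
cat > example-api.yaml <<'EOF'
openapi: 3.0.3
info:
  title: Example Inventory API      # invented service name
  version: "1.0.0"
  contact:
    email: api-team@example.com
  license:
    name: Apache 2.0
paths:
  /items:
    get:
      summary: List inventory items
      parameters:
        - name: limit
          in: query
          schema: { type: integer }
      security:
        - apiKey: []                 # operation requires the key below
      responses:
        "200":
          description: A JSON array of items
components:
  securitySchemes:
    apiKey:
      type: apiKey
      in: header
      name: X-API-Key
EOF

grep -c "openapi:" example-api.yaml   # the file declares its spec version once
```

Because the format is standardized, any OpenAPI-aware tool (documentation generators, client generators, security scanners) can consume a file like this without custom parsing.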

Developers turn to OpenAPI, like they do with any open source software or component, as a way to use code that's already out there and has already been proven to work. It saves time, gets the software into production faster, is cost-efficient, integrates workflows and is easy to implement.

OpenAPI may also improve an application's security posture by using the documentation format, according to Gabe Rust, cybersecurity consultant at nVisium.

"Using standardized documentation allows security testers to more easily understand and test APIs," said Rust. Because using formats like OpenAPI provides more transparency to users and testers, it prevents the pitfall of a big security mistake: security through obscurity.

"This allows security testers to provide more comprehensive coverage of applications," Rust added. "Potentially serious security issues are more likely to be discovered and patched before damage is done."

You could say that security is a feature of OpenAPI, but that's not to say that it comes without risks.

Any time you introduce third-party software into architecture, you also introduce risk.

"Third-party web APIs can access sensitive data/information which can increase security risks such as data breaches," Deepak Gupta wrote in a blog post.

Like any software or application, APIs can be infected with malware, and that can create a lot of damage for a web project, the organization and consumers.

OpenAPIs aren't immune to security risks. They can be hacked, of course (nothing is totally immune from being attacked), but the most serious threats come from third parties. With OpenAPIs comes data sharing, and the data shared can include personal information or corporate intellectual property, unwittingly made available to third parties.

"OpenAPI security is fairly limited," said Jeff Williams, CTO and co-founder at Contrast Security. "It simply allows development teams to define the authentication scheme to be used with each API. This is useful to help prevent unauthenticated endpoints from exposing critical data and functionality."

"Unfortunately, it doesn't protect APIs against attacks from authenticated users. Unless you fully trust all of your users, you should be very concerned about the long list of vulnerabilities that APIs can have, such as, for example, various types of injection, unsafe deserialization, server-side request forgery and libraries with known vulnerabilities," said Williams.

In OpenAPI, it is impossible to know, let alone trust, all the users. To protect sensitive data from third-party risks, it may be necessary to evaluate the use of OpenAPIs and the type of information they have access to. Protecting sensitive data and preventing data breaches from third party intrusion should be of the highest priority when using OpenAPIs.


Advantages and Disadvantages of Using Linux – It’s FOSS

Linux is a buzzword and you keep hearing about Linux here and there. People discuss it in the tech forum, it is part of the course curriculum and your favorite tech YouTubers get excited while showing their Linux build. The 10x developers you follow on Twitter are all Linux fans.

Basically, Linux is everywhere and everyone keeps talking about it. And that gives you FOMO.

So, you wonder about the advantages of Linux and whether it is really worth trying.

I have compiled various possible advantages and disadvantages of Linux in this article.

If you are on the fence about choosing Linux over your preferred operating system, we would like to help you out.

Before you start, you should know that Linux is not an operating system on its own. The operating systems are called Linux distributions and there are hundreds of them. For simplicity, I'll address it as Linux OS instead of a specific Linux distribution. This article explains things better.

Considering you are curious about Linux as an alternative operating system choice, it only makes sense that you know its advantages.

You might never regret your decision if it excels at what you want it to do.

You need to own an Apple device to use macOS as your daily driver and a Windows license to use Microsoft's Windows.

Therefore, you need a bit of investment with these options. But, with Linux? It's entirely free.

Not just the OS: many software packages are available for free on Linux compared to Windows and macOS.

You can try every mainstream Linux distribution without paying for a license. Of course, you get the option to donate to support the project, but that is up to you if you really like it.

Additionally, Linux is totally open-source, meaning anyone can inspect the source code for transparency.

Typically, when users think of trying another operating system, it is because they are frustrated with the performance of their system.

This is from my personal experience. I have had friends willing to try Linux to revive their old laptop or a system that constantly lags.

And, when it comes to Linux distributions, they are capable of running on decent hardware configurations. You do not need to have the latest and greatest. Moreover, there are specialized lightweight Linux distributions that are tailored to run on older hardware with no hiccups.

So, you have more chances to revive your old system or get a fast-performing computer in no time with Linux.

No operating system is safe from malicious files or scripts. If you download and run something from an unknown source, you cannot guarantee its safety.

However, things are better for Linux. Yes, researchers have found attackers targeting Linux IoT devices. But, for desktop Linux, it is not yet something to worry about.

Malicious actors target platforms that are more popular among households, and Linux does not have a big market share in the desktop space to attract that kind of attention. In a way, it can be a good thing.

All you have to do is just stick to the official software packages, and read instructions before you do anything.

As an extra plus, you do not necessarily need an antivirus program to get protection from malware.

With an open-source code, you get the freedom to customize your Linux experience as much as you want.

Of course, you require a bit of technical expertise to make the best of it. Even without any experience, you get more customization features in your operating system when compared to macOS and Windows.

If you are into personalizing your experience and willing to put in extra effort, Linux is for you. As an example, refer to the KDE customization guide and dock options to get basic ideas.

With macOS or Windows, you are limited to the design/preference choices finalized by Microsoft or Apple.

But, with Linux, you will find several Linux distributions that try to focus on various things.

For instance, you can opt for a Linux distribution that focuses on getting the latest features all the time, or you can opt for something that only gives you security/maintenance updates.

You can get something that looks beautiful out of the box or something that provides crazy customization options. You will not run out of options with Linux.

I recommend starting with options that give you the best user experience.

If you are a software developer or a student learning to code, Linux definitely has an edge. A lot of your build tools are available and integrated into Linux. With Docker, you can create specialized test environments easily.
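As a hypothetical sketch of such a disposable test environment, a Dockerfile for a Python project might look like this (the image tag, file names, and pytest command are all assumptions; the build commands are commented out because they require the Docker daemon):

```shell
# Write a hypothetical Dockerfile for an isolated, throwaway test environment.
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["pytest", "-q"]
EOF

# With Docker installed, build the image and run the tests in isolation:
#   docker build -t myproject-tests .
#   docker run --rm myproject-tests
```

The point of the container is reproducibility: every run starts from the same image, so "works on my machine" problems largely disappear.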

Microsoft knows about this part and this is why it created WSL to give developers access to Linux environments inside Windows. Still, WSL doesn't come close to the real Linux experience. The same goes for using Docker on Windows.

I know the same cannot be said about web designing because the coveted Adobe tools are not available on Linux yet. But if you dont need Adobe for your work, Linux is a pretty good choice.

There is a learning curve to using Linux, but it provides you with insights on various things.

You get to learn how things work in an operating system by exploring and customizing it, or even just by using it.

Not everyone knows how to use Linux.

So, it can be a great skill to gain and expand your knowledge of software and computers.

As I mentioned above, it is a great skill to have. But it is not just about expanding your knowledge; learning Linux is also useful professionally.

You can work your way to become a Linux system administrator or a security expert and fill several other job roles by learning the fundamentals of Linux.

So, learning Linux opens up a whole range of opportunities!

These days you cannot use Windows without a Microsoft account. And when you set up Windows, you'll find that it tries to track your data from a number of services and applications.

While you can find such settings and disable them, it is clear that Windows is configured to disregard your privacy by default.

That's not the case in Linux. While some applications/distributions may have an optional feature to let you share useful insights with them, it has never been a big deal. Most of the things on Linux are tailored to give you maximum privacy by default without needing to configure anything.

Apple and Microsoft on the other hand have clever tactics to collect anonymous usage data from your computer. Occasionally, they log your activity on their app store and while you are signed in through your account.

Got a tinkerer in you? If you like to make electronics or software projects, Linux is your paradise.

You can use Linux on single-board computers like Raspberry Pi and create cool things like retro gaming consoles, home automation systems, etc.

You can also deploy open source software on your own server and maintain it yourself. This is called self-hosting, and it comes with its own advantages.

Clearly, you'll be doing all this either directly with Linux or with tools built on top of it.

Linux is not a flawless choice. Just like everything, there are some downsides to Linux as well. Those include:

Often, it is not just about learning a new skill; it is more about getting comfortable as quickly as possible.

If a user cannot get their way around the task they intend to do, it is not for them. That is true for every operating system. For instance, a user who is used to Windows/macOS may not get comfortable with Linux as quickly.

You can read our comparison article to know the difference between macOS and Linux.

I agree that some users catch on quicker than others. But, in general, when you step into the Linux world, you need to be willing to put a bit of effort into learning the things that are not obvious.

While we recommend using the best Linux distributions tailored for beginners, choosing what you like at first can be overwhelming.

You might want to try several of them to see what works best for you, which can be time-consuming and confusing.

It's best to settle on one of the Linux distributions. But, if you remain confused, you can stick to Windows/macOS.

Linux is not a popular desktop operating system.

This should not be of concern to a user. However, without having a significant market presence, you cannot expect app developers to make/maintain tools for Linux.

Sure, there are lots of essential and popular tools available for Linux, more than ever. But, it remains a factor that may mean that not all good tools/services work on Linux.

Refer to our regularly updated article on Linux's market share to get an idea.

As I mentioned above, not everyone is interested in bringing their tools/apps to Linux.

Hence, you may not find all the good proprietary offerings for Windows/macOS. Sure, you can use a compatibility layer to run Windows/macOS programs on Linux.

But that doesn't work all the time. For instance, you do not have official Microsoft 365 support for Linux, or tools like Wallpaper Engine.

If you want to game on your computer, Windows remains the best option for its support for the newest hardware and technologies.

When it comes to Linux, there are a lot of ifs and buts for a clear answer.

Note that you can play a lot of modern games on Linux, but it may not be a consistent experience across a range of hardware. As one of our readers suggested in the comments, you can use Steam Play to try many of the Windows-exclusive games on Linux without potential hiccups.

Steam Deck is encouraging more game developers to make their games run better on Linux. And, this will only improve in the near future. So, if you can take a little effort to try your favorite games on Linux, it may not be disappointing.

That being said, it may not be a seamless experience for everyone. You can refer to our gaming guide for Linux to explore more if interested.

I know not everyone needs it. But, there are tech support options that can guide users/fix issues remotely on their laptop or computer.

With Linux, you can seek help from the community, but it may not be as seamless as some professional tech support services.

You'll still have to do most of the trial and error on your own, and not everyone would like that.

I am primarily a Linux user but I use Windows when I have to play games. Though my preference is Linux, I have tried to be unbiased and give you enough pointers so that you can make up your mind if Linux is for you or not.

If you are going for Linux and have never used it, take the baby step and use Linux in a virtual machine first. You can also use WSL2 if you have Windows 11.

I welcome your comments and suggestions.
