GitHub Universe: GitHub Sponsors, Dark Mode, and auto-merge for PRs – SDTimes.com

GitHub wants to improve the daily lives of developers as well as make it easier for companies to invest in open source with new updates and features announced at this week's GitHub Universe conference. The conference runs until December 10, and sessions can be viewed on demand after the event.

Here are a number of announcements that the company has made during GitHub Universe so far:

GitHub Sponsors
This feature was launched last year to allow individuals to support open-source developers. Now the program has expanded to allow companies to invest in their critical open-source dependencies as well.

For many businesses, open source provides critical components of their software and services, and they would like to support the maintainers of those projects so they can continue to thrive. However, setting up individual procurement agreements can be a complex task for both the company and the recipient of the funds, Shanku Niyogi, senior vice president of product at GitHub, wrote in a post.

Companies can use either PayPal or a credit card to make payments to open source developers.

Improved daily experience
A number of new features are being released that are aimed at improving the GitHub user experience, such as the addition of dark mode, which is now available as a public beta.

It is introducing auto-merge for pull requests, which lets pull request authors opt in to having a pull request merged automatically once it has passed all required reviews and checks, rather than having to monitor requests and merge them manually.
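Auto-merge can be toggled from the pull request page itself; teams that want it switched on for every new pull request can also script it with the GitHub CLI. The sketch below is a hypothetical example, not GitHub's documented setup: the workflow name and squash strategy are assumptions, and the repository must have auto-merge enabled in its settings.

```yaml
# Hypothetical workflow: opt every newly opened PR into auto-merge via the gh CLI.
name: enable-auto-merge
on:
  pull_request:
    types: [opened]
jobs:
  enable:
    runs-on: ubuntu-latest
    steps:
      # --auto defers the merge until all required reviews and checks pass
      - run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```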

GitHub is also making Discussions, which was introduced as a limited beta last year, available in all public repositories. Discussions allows open-source projects to have a dedicated space to connect, ask and answer questions, and have open-ended conversations, the company explained.

GitHub Actions also got a few new updates, such as environments, required reviewers, deployments and deployment logs, and a workflow visualizer. Environments enable separation of deployment and development spaces. Required reviewers is a feature that pauses attempted deployments until a reviewer approves them. Deployment logs show which version of code is running in an environment, when it was deployed, why it was deployed, and past versions. Workflow visualization maps workflows and tracks their progression in real time, which helps teams break down and understand complex workflows and communicate status.
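As a rough sketch of how these pieces fit together in a workflow file, the fragment below deploys to an environment named "production". The environment name, URL, and deploy script are hypothetical; required reviewers are configured on the environment in the repository's settings, which pauses the job until someone approves.

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    # Referencing an environment ties this job to its protection rules
    # (e.g. required reviewers) and records the run in the deployment log.
    environment:
      name: production
      url: https://example.com
    steps:
      - uses: actions/checkout@v2
      - run: ./deploy.sh   # hypothetical deployment script
```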

Another new feature is dependency review, which shows which dependencies were added, removed, or updated. It also shows release dates, how many projects those components are used in, and vulnerability information. This feature helps reviewers and contributors understand their dependencies and possible vulnerabilities in them.

GitHub Enterprise Server 3.0 RC1
This will ship to enterprise customers on December 16. GitHub Enterprise Server 3.0 introduces new CI/CD and automation capabilities, including the ability to automate Advanced Security with the new secret scanning and code scanning features. Secret scanning, introduced as a beta feature, lets developers scan for hard-coded credentials in a repository. Code scanning, already available on GitHub.com and now being added to Enterprise Server, helps developers find and prevent security vulnerabilities.

This release also adds support for GitHub for mobile, enabling developers to work in GitHub Enterprise Server instances wherever they want.

What Are Android Skins? – How-To Geek

Have you ever noticed that Android on a Samsung phone doesn't look like Android on a Google Pixel phone? Both use the same operating system, but they look completely different. What's the deal with that?

Not all Android devices look the same, and we're not just talking about the physical appearance of the hardware. Many manufacturers that produce Android devices use their own custom skins to make the operating system look unique.

There are a few things you'll have to understand about Android before we dive into skins specifically. We'll explain what exactly skins are, why manufacturers are allowed to modify Android, and what it all means for the Android ecosystem as a whole.

Before we get to the skins, it's important to understand the OS at its core. Android is an open-source operating system developed by Google. The open-source part is what makes Android skins possible.

Google makes changes and updates to Android, and then releases the source code to the Android Open Source Project (AOSP). This original code is what many refer to as "stock" or "vanilla" Android because it's a very bare-bones version.

Manufacturers like Samsung, LG, OnePlus, and others start with stock Android. However, because Android's code is open source, they're free to modify it to their liking. If they want to include Google apps and services on their devices, though, they must first meet a few requirements.

When a new version of Android is released, it's up to manufacturers to customize it and send it to their own devices. Google isn't responsible for updating all Android devices. Stock Android is simply the starting point on which other companies can build.

An Android skin is most easily described as a modified version of stock Android.

There are varying levels of modification when it comes to Android skins. For example, Google Pixel devices don't run stock Android, but Google's user interface (UI) customizations are fairly minimal. Samsung Galaxy devices, on the other hand, run One UI, and they look quite a bit different from stock Android.

Here's the thing, though: Android skins are really much more than just skins. Each one is actually a unique version of the Android operating system.

Samsung's One UI is probably the most widely used Android skin. Everything from the Settings menu and the lock screen to the notification shade has been customized in some way. This is the case with most Android skins: the most noticeable customizations are on the surface.

However, skins are about more than just aesthetics. Samsung phones have many software features you won't find on other devices. For example, the Samsung Galaxy Fold has tons of custom features for its folding display. Skins allow a manufacturer not only to customize the look, but also to throw in special features to differentiate its devices.

As we mentioned above, manufacturers must meet certain requirements if they want to include the Google Play Store and other Google services on their devices. Google sets these requirements so Android apps will work consistently across different skins.

This is why Android devices that ship with Google services generally work the same. They might look very different, but, for the most part, everything will be where you expect it to be. This also means if you switch from a Samsung Galaxy phone with One UI to a OnePlus with OxygenOS, all your apps will still work.

The main takeaway here is that an Android skin is simply a modified version of the Android operating system. Still, if an Android device is going to include Google services, those modifications can only go so far.

Skins are often a subject of debate when it comes to timely updates. Many Android devices don't receive the latest updates until several months after Google releases them. But are skins to blame for this problem? Well, kind of.

As we explained above, when Google releases an Android update, the company shares the source code with the Android Open Source Project. Its then up to the device manufacturers to make their custom modifications and send it off to their devices.

Google has an advantage here: it makes Pixel devices, and its software changes are minimal. It's easy for Google to send the latest updates to Pixel devices as soon as they're available. Manufacturers like Samsung, however, have more work to do.

Android skins are more than just skins. Try not to think about the Android version number as much as the version of the skin you're using. Perhaps your Samsung device isn't on the latest version of Android, but there's a good chance it has the latest version of Samsung's One UI.

For example, Amazon devices are many Android versions behind, but no one cares. People care more about being on the latest version of Fire OS than the latest version of Android. It's helpful to think of One UI, OxygenOS, and other skins in the same way.

If you'll always need the latest Android release as soon as possible, a Google Pixel phone is the way to go. All other devices will always lag behind a bit, but, as we covered above, for most folks that won't be a big deal.

Red Hat resets CentOS Linux and users are angry – ZDNet

Red Hat, CentOS's parent company, announced it was "shifting focus from CentOS Linux, the rebuild of Red Hat Enterprise Linux (RHEL), to CentOS Stream, which tracks just ahead of a current RHEL release." In other words, CentOS will no longer be a stable point distribution but a rolling release Linux distribution. CentOS users are ticked off.

Why? First, you need to understand what's going on. A rolling-release Linux is one that's constantly being updated. Examples of these include Arch, Manjaro, and openSUSE Tumbleweed. Here, CentOS Stream will be RHEL's upstream (development) branch. This may sound like CentOS will be RHEL's beta, but CentOS denies this.

In the CentOS FAQ, the project states: "CentOS Stream will be getting fixes and features ahead of RHEL. Generally speaking, we expect CentOS Stream to have fewer bugs and more runtime features than RHEL until those packages make it into the RHEL release."

By contrast, the fixed-release model is the one most server Linux distributions have historically used. For example, besides Red Hat using it for RHEL, Canonical uses it for its mainstream Ubuntu Linux release and SUSE uses it for SUSE Linux Enterprise Server (SLES). In fixed releases, major distributions are made on a schedule, with security patches and minor updates made as needed.

Each approach has its advantages and disadvantages. For instance, with a rolling release, major bugs might appear in a production system. On the other hand, in a fixed-release Linux, major improvements may take months, or even years, to appear.

Some rolling release Linux distributions are used in production. These tend to be Internet of Things (IoT) Linux operating systems such as Fedora IoT, Clear Linux, and Ubuntu Core. They're not used for servers, where stability and a wide variety of programs are valued more highly than running the latest, bleeding-edge software.

In any case, it's very clear that Red Hat doesn't see CentOS Stream as a production server. As a server for RHEL customers to use to see what the next version of RHEL will bring to them, yes, but for day-to-day work? No.

As Chris Wright, Red Hat's CTO, said when CentOS Stream was introduced, "developers require earlier access to code, improved and more transparent collaboration with the broader partner community, and the ability to influence the direction of new RHEL versions. It is these opportunities that CentOS Stream is intended to address."

Then, however, it was to be, as Wright said, "a parallel distribution to existing CentOS." You see, CentOS is an extremely popular server operating system in its own right. I run it myself, both on servers in my home office and at TMDHosting.

I'm far from alone. By W3Techs' count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third with 17.5%. RHEL? It's a distant fourth with 1.8%.

If you think you just realized why Red Hat might want to remove CentOS from the server playing field, you're far from the first to think that. For years, CentOS has been the choice of experienced Linux administrators who felt little need for support, while RHEL was what companies chose who wanted the belts and suspenders of full support.

Now, with this move, thousands of companies will need to move to a different Linux variant. They're not happy.

Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle. That means if you're using CentOS 7, you'll see support through June 30, 2024. Red Hat may also offer extended life cycle support for RHEL and CentOS 7, but that hasn't been decided yet.

As for CentOS 8, that's another story. Red Hat will only continue to update it until the end of 2021. CentOS 8 users had expected support until 2029. They're livid.

On Hacker News, the leading comment is: "Imagine if you were running a business, and deployed CentOS 8 based on the 10-year lifespan promise. You're totally screwed now, and Red Hat knows it. Why on earth didn't they make this switch starting with CentOS 9???? Let's not sugar coat this. They've betrayed us."

Over at Reddit/Linux, one person wrote, "The use case for CentOS, is completely different than CentOS Stream, many many people use CentOS for production enterprise workloads not for dev, CentOS Stream may be ok for dev/test but it is unlikely people are going to adopt CentOS Stream for prod."

Another Redditor wrote, "We based our Open Source project on the latest CentOS releases since CentOS 4. Our flagship product is running on CentOS 8 and we *sure* did bet the farm on the promised EOL of 31st May 2029."

He continued, "'CentOS Stream' is supposedly now the new answer, but the obvious downside is that stability and dependability get sacrificed on the altar of bleeding edge. In the past, we could bet even money on the fact that something built in the X.0 release of the OS would still run fine when the OS went EOL. The deviations from this were few and usually happened for good reasons." He concluded, "I'm not happy. But hey, cool. If Red Hat is butchering the horse we bet our livelihood on, then we'll move elsewhere and take a couple of thousand clients with us. /shrug."

Not everyone hates this move. Jim Perrin, now a Microsoft Principal Program Manager and former Red Hat developer and CentOS Board member, wrote that this new CentOS approach has three advantages:

It makes RHEL development more transparent and reliable.

It provides a way for ISVs and developers to contribute fixes and features.

It provides a way for the community to provide feedback.

For Perrin, "CentOS Stream provides a way for users to submit pull requests and to make their case for why it should be included. This obviously doesn't mean everyone will get their way, but it's a stark improvement from the past."

Wright, in a blog post, argues that CentOS Stream is stable enough for production. CentOS Stream is a "rolling preview" of what's next in RHEL, both in terms of kernels and features. Facebook, he notes, runs millions of servers supporting its vast global social network, all of which have been migrated (or are migrating) to an operating system it derives from CentOS Stream.

From where Wright sits, "CentOS Stream isn't a replacement for CentOS Linux; rather, it's a natural, inevitable next step intended to fulfill the project's goal of furthering enterprise Linux innovation."

Wright explained, "The technology world we face today isn't as simple as what we faced even a year ago, let alone five years ago. From containerized applications and cloud-native services to rapid hardware innovations and ecosystems shifting to Software-as-a-Service (SaaS), the operating system can be hard-pressed to answer even one of these needs, especially at scale and in a responsive manner. This is where we see CentOS Stream fitting in. It provides a platform for rapid innovation at the community level but with a stable enough base to understand production dynamics."

In other words, Red Hat and CentOS see a world where the best features of the rolling-release and point-release methods are combined. They may be right. But many users and businesses would have appreciated more time and warning that the way they'd been using CentOS for years was going to be pulled out from under them.

Checkmarx makes its automated AST solution available to all DoD agencies – Help Net Security

Checkmarx announced that it has been accepted into the U.S. Department of Defense's (DoD) Iron Bank repository and is now available through the U.S. Air Force Platform One application portal.

With this, Checkmarx furthers its commitment to supporting the public sector by making its automated application security testing (AST) solution available to all DoD agencies in the form of a hardened container, helping them to confidently build and release secure software while meeting the strict security and compliance requirements of the U.S. military.

A project of the U.S. Air Force designed to deliver the benefits of DevSecOps across the entire DoD, Platform One provides Iron Bank, a centralized artifact repository with a pre-approved collection of solutions that have undergone extensive auditing and approval steps to streamline Authority to Operate (ATO) processes.

Checkmarxs hardened container instance was developed in coordination with the DoD and has achieved a Certificate to Field (CtF) from the USAF Platform One team. This enables all DoD agencies and developers to easily acquire and integrate the Checkmarx solution into their DevOps environments and automatically insert security into the entire SDLC, while also avoiding lengthy ATO timelines.

Notably, this expands Checkmarx's long-standing partnership with the DoD; the company already supports the U.S. Navy's Naval Information Warfare Center Pacific (NIWC PAC) division and the U.S. Air Force Business & Enterprise Systems Directorate (USAF BES), among other agencies, in their DevSecOps initiatives.

"As software becomes more complex, bringing with it a vast attack surface, the DoD has made it a priority to arm development teams with best-in-class solutions to build and deploy applications in a more secure manner," said Nicolas Chaillan, Chief Software Officer and Co-Lead for the DoD Enterprise DevSecOps Initiative, U.S. Air Force.

"Checkmarx has been a valued partner to the USAF and DoD for years, and this latest step in bringing a hardened version of their solution to Iron Bank and Platform One will be invaluable as we execute on our mission to shift to a DevSecOps model across our entire branch."

Checkmarx offers automated solutions that simplify and speed up the process of security testing throughout software development. The company's solutions integrate seamlessly with developer workflows and tools to quickly find and remediate vulnerabilities in both custom and open source code before software is released.

Public sector agencies that leverage Checkmarx are able to integrate enterprise-grade security testing into their DevOps environments, while meeting compliance requirements for FISMA, NIST, and STIG, among others, and decreasing time to ATO.

"The Iron Bank and Platform One synergy is a truly logical way of bringing the benefits of faster time-to-market and lower development costs to a complex enterprise like the DoD," said Peter Archibald, Federal Systems Manager, Checkmarx.

"The genius and simplicity of this approach lies within the hardened containers. This is a significant evolution in how the DoD is innovating secure development, and we're thrilled to be a part of the movement as we elevate their approach to modern DevOps and software resiliency."

Red Hat and GitHub Collaborate to Expand the Developer Experience on Red Hat OpenShift with GitHub Actions – Business Wire

RALEIGH, N.C. & SAN FRANCISCO--GITHUB UNIVERSE--(BUSINESS WIRE)--Red Hat, Inc., the world's leading provider of open source solutions, and GitHub, the software collaboration platform home to more than 50 million developers, today announced extended collaboration between the two companies, emphasizing Red Hat OpenShift through GitHub Actions and more. Red Hat is adding Red Hat GitHub Actions to the GitHub Marketplace, bringing GitHub's DevOps, continuous integration/continuous delivery (CI/CD) and developer workflow automation tools to Red Hat OpenShift. This further refines the application development capabilities of the world's leading enterprise Kubernetes platform with GitHub Actions, adding greater freedom to how developers can build and deploy applications on Red Hat OpenShift across the open hybrid cloud.

GitHub has become nearly synonymous with developer tools and serves as the home for many popular software projects, including open source communities like the Linux kernel and Kubernetes. GitHub Actions, GitHub's built-in, flexible CI/CD solution, puts automation directly in the developer's path, making it possible for nearly any event in a GitHub repository, like a pull request or issue comment, to trigger workflows that can build and deploy applications across an IT environment and automate nearly any process in the software lifecycle. Organizations often want to bring the familiar, collaborative experience of GitHub to their developers, as well as provide a more secure, common platform for working with open source communities. This makes the availability of tooling like Actions on enterprise Kubernetes platforms a crucial component for the future of IT.

Red Hat OpenShift now supports GitHub Actions, enabling organizations to standardize and scale their use of open, standardized developer toolchain components like Quay, Buildah, or Source-to-Image (s2i). This helps to meet developers where they are and provides greater choice and flexibility to OpenShift customers in how they build and deploy applications. The new GitHub Actions for Red Hat OpenShift, along with existing actions on GitHub Marketplace and action workflows, make it possible to achieve simple as well as complex application workflows on Red Hat's enterprise Kubernetes platform using an extensive array of standards-based tools.
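As an illustration, a workflow using Red Hat's actions might look roughly like the following. The action names come from the redhat-actions organization on GitHub Marketplace, but treat the exact versions and input parameters as assumptions rather than a verified reference:

```yaml
name: build-and-push-to-openshift
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Log in to the OpenShift cluster (server URL and token held as secrets)
      - uses: redhat-actions/oc-login@v1
        with:
          openshift_server_url: ${{ secrets.OPENSHIFT_SERVER }}
          openshift_token: ${{ secrets.OPENSHIFT_TOKEN }}
      # Build a container image with Buildah
      - uses: redhat-actions/buildah-build@v2
        with:
          image: my-app
          tags: latest
          containerfiles: ./Containerfile
      # Push the image to a registry the cluster can pull from
      - uses: redhat-actions/push-to-registry@v2
        with:
          image: my-app
          tags: latest
          registry: ${{ secrets.IMAGE_REGISTRY }}
```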

As part of the collaboration, GitHub has also joined OpenShift Commons, a community that helps drive connections and collaboration across the OpenShift ecosystem. Beyond Actions and GitHub Marketplace, Red Hat and GitHub are also exploring self-hosted GitHub runners for OpenShift. A runner is the combined application and server that hosts a job and carries out the steps for an Action workflow. Self-hosted runners give IT teams more control and flexibility over the hardware and software included as part of their environment. This means that end users can increase memory size, enable GPUs, or install software that may only be available locally as part of a tailored application development experience.

The addition of GitHub Actions builds on Red Hat OpenShift's robust developer experience, which includes OpenShift GitOps (based on ArgoCD) and OpenShift Pipelines (based on Tekton). OpenShift is now able to provide a complete solution for DevOps and GitOps practitioners as they seek to build, deploy, and maintain cloud-native applications.

Availability

GitHub Actions on Red Hat OpenShift are available now via GitHub Marketplace.

Supporting Quotes

Joe Fernandes, vice president, Products, Cloud Platforms, Red Hat: "Red Hat OpenShift is more than a Kubernetes platform for deploying cloud-native applications; it's a powerful, flexible foundation for developers to build the latest and greatest applications. By adding GitHub Actions to our existing set of DevOps and GitOps capabilities and by working with GitHub to further refine and expand the developer experience, we aim to make Red Hat OpenShift the most complete cloud-native development platform available, one built on the open standards of Kubernetes and Linux containers and backed by the vast expertise of Red Hat."

Jeremy Epling, vice president of Product Management, GitHub: "GitHub is the home for all developers, and we're excited to expand our collaboration with Red Hat to accelerate software development within the enterprise. Combining Red Hat OpenShift with GitHub Actions will help our customers more securely automate nearly all their cloud-native development and DevOps workflows, providing a unified experience across the hybrid cloud that is exceptionally friendly to developers, security, and operations teams. We're looking forward to working more closely with Red Hat, and helping our customers deliver better and faster with open source software and standards."

Additional Resources

Connect with Red Hat

About Red Hat, Inc.

Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

About GitHub

GitHub is the developer company. As the home to more than 50 million developers from across the globe, GitHub is where developers can create, share, and ship the best code possible. GitHub makes it easier to work together, solve challenging problems, and create the world's most important technologies.

Red Hats Forward-Looking Statements

Certain statements contained in this press release may constitute "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements. The forward-looking statements included in this press release represent the Company's views as of the date of this press release and these views could change. However, while the Company or its parent International Business Machines Corporation (NYSE:IBM) may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company's views as of any date subsequent to the date of this press release.

Red Hat, the Red Hat logo and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries.

Business Talk In conversation with Muggie van Staden – BusinessTech

Muggie van Staden is the managing director and chief executive officer of Obsidian Systems, an open source solutions company that he has helmed for 21 years.

Leveraging the Linux open source way as a driving force, van Staden has embedded a culture of innovation, relevance, dedication and collaboration in this niche IT service provider. As an engineer, van Staden's inherent nature is to solve problems in unique and effective ways.

He says that Obsidian is the name that has become synonymous with providing peace of mind when it comes to open solutions, putting the needs of customers first and building future-facing technology solutions together.

Obsidian Systems has a culture of dynamic innovation coupled with a strong tendency towards trying new things (among many other attributes) and an acceptance that mistakes will be made. Taking full accountability and responsibility for one's actions is also encouraged.

Our people are encouraged to learn and grow themselves to their full potential as managers, innovators, problem solvers, and citizens of South Africa.

Obsidian Systems, along with its subsidiaries GuruHut, Autumn Leaf, and RadixTrie, is an established supplier of enterprise-ready open source software solutions.

"We focus on providing the South African market with vendor-certified products; local expertise to provide consulting, development, and support; and vendor-certified training. We help teams to get their code to the best compute and the correct data," says van Staden.

Obsidian Systems is continually leading the charge in open source with the likes of Linux and Hadoop, he says. With Covid-19 highlighting the importance of digital transformation, the pandemic has created an opportunity for an innovative company such as Obsidian Systems to compete globally in the open source market.

In this episode of Business Talk with Michael Avery, van Staden talks through how businesses, including Obsidian Systems, have adapted in South Africa off the back of Covid-19.

He talks about a new way of operating post-lockdown, and how in a state of economic uncertainty many look to open source where they may not have before. The economics of it become appealing, he says.

He highlights some of the trends around what is essentially free software, and how open source code can help businesses thrive in the future.

The full interview is embedded below. You can find all the Business Talk with Michael Avery interviews here.

AWS Responds To Anthos And Azure Arc With Amazon EKS Anywhere – Forbes

Amazon made strategic announcements related to container services at the re:Invent 2020 virtual event. Here is an attempt to deconstruct the container strategy of AWS.

The cloud native ecosystem is crowded and even fragmented with various distributions of Kubernetes. Customers can choose from upstream Kubernetes distribution available for free or choose a commercial offering such as Charmed Kubernetes from Canonical, Mirantis Container Cloud, Rancher Kubernetes Engine, Red Hat OpenShift and VMware Tanzu Kubernetes Grid.

Amazon has decided to jump on the Kubernetes distribution bandwagon with Amazon EKS Distro (EKS-D), which powers the managed EKS in the cloud. Customers can rely on the same versions of Kubernetes and its dependencies deployed by Amazon EKS, which includes the latest upstream updates and comprehensive security patching support.

Amazon EKS-D comes with source code, open source tooling, binaries, container images, and the required configuration via GitHub and S3 storage locations. With EKS-D, Amazon promises extended support for Kubernetes versions after community support expires, providing updated builds of previous versions, including the latest security patches.

Customers running OpenShift or VMware Tanzu are more likely to run the same flavor of Kubernetes in the cloud. Most of the commercial Kubernetes distributions come with services and support for managing hybrid clusters. In this case, ISVs like Red Hat and VMware will leverage Amazon EC2 to run their managed Kubernetes offering. They decouple the underlying infrastructure (AWS) from the workloads, making it possible to port applications to any cloud.

Amazon's ultimate goal is to drive the adoption of its cloud platform. With EKS-D, AWS has built an open source bridge to its managed Kubernetes platform, EKS.

Backed by Amazon's experience and the promise to maintain the distribution even after the community maintenance window expires, it's a compelling option for customers. An enterprise running EKS-D will naturally use Amazon EKS for its hybrid workloads. This reduces the friction of using different Kubernetes distributions for on-prem and cloud-based environments. Since it's free, customers are more likely to evaluate it before considering OpenShift or Tanzu.

Additionally, Amazon can now claim that it made significant investments in open source by committing to maintain EKS-D.

The design of EKS-D, which is based on upstream Kubernetes, makes it easy to modify components such as storage, networking, security, and observability. The cloud native ecosystem will eventually build reference architectures for using EKS-D with its tools and components. This makes EKS-D better than any other distribution available in the market.

In summary, EKS-D is an investment from Amazon to reduce the friction involved in adopting AWS when using a commercial Kubernetes distribution.

According to AWS, Amazon EKS Anywhere is a new deployment option for Amazon EKS that enables customers to easily create and operate Kubernetes clusters on-premises, including on their own virtual machines (VMs) and bare metal servers.

EKS Anywhere provides an installable software package for building and managing Kubernetes clusters on-premises and automation tooling for cluster lifecycle support.

Technically, EKS-A can be installed on any infrastructure with available compute, storage, and network resources. This includes on-premises environments and cloud IaaS such as Google Compute Engine and Azure VMs.

Simply put, Amazon EKS Anywhere is an installer for EKS-D with AWS-specific parameters and options. The installer comes with defaults that are optimized for AWS. It works best on the Amazon Linux 2 OS and is tightly integrated with App Mesh for service mesh, CloudWatch for observability, and S3 for cluster backup. When installed in a VMware environment, it even provides infrastructure management through integration with the vSphere APIs and vCenter. EKS-A relies on GitOps to maintain the desired state of the cluster and workloads. Customers can subscribe to an Amazon SNS channel to automatically get updates on patches and releases.
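The GitOps model EKS-A relies on boils down to a reconciliation loop: the desired state lives in a Git repository, and a controller continually diffs it against the actual state of the cluster and corrects any drift. A minimal Python sketch of that idea (illustrative only; these names are hypothetical, not the real EKS-A tooling):

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """Return the actions needed to move `actual` toward `desired`.

    `desired` models the state declared in Git; `actual` models what
    is currently running in the cluster.
    """
    actions = {}
    # Anything missing or different from the declared state gets re-applied.
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions[name] = ("apply", spec)
    # Anything running that is no longer declared gets removed.
    for name in actual:
        if name not in desired:
            actions[name] = ("delete", None)
    return actions

desired = {"web": {"replicas": 3}, "worker": {"replicas": 2}}
actual = {"web": {"replicas": 1}, "stale-job": {"replicas": 1}}
print(reconcile(desired, actual))
```

A real GitOps controller such as Flux runs this compare-and-correct cycle continuously, so manual changes to the cluster are reverted back to whatever the repository declares.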

Amazon calls EKS-A an opinionated Kubernetes environment. The keyword here is "opinionated," which translates to "as proprietary as it can get." From the container runtime to the CNI plug-in to cluster monitoring, it has a strong dependence on AWS building blocks.

There is nothing open source about EKS-A. It's an opaque installer that rolls out an EKS-like cluster on a set of compute nodes. If you want to customize the cluster components, switch to EKS-D and assemble your own stack.

EKS-A supports three profiles: fully connected, semi-connected, and fully disconnected. Unlike ECS Anywhere, EKS-A clusters can be deployed in offline, air-gapped environments. Fully connected and semi-connected EKS-A clusters talk to the AWS cloud but have no strict dependency on it.

EKS-A is Amazon's own version of Anthos. Just like Anthos, it's tightly integrated with vSphere and can be installed on bare metal or any other cloud. The key difference is that there is no meta control plane to manage all the EKS-A clusters from a single pane of glass. Capabilities analogous to Anthos Service Mesh (ASM) and Anthos Config Management (ACM) will be extended to EKS-A through App Mesh and Flux.

Unlike Anthos, EKS-A doesn't have the concept of admin clusters and user clusters. This means customers cannot use EKS-A for the centralized lifecycle management of clusters. Every EKS-A cluster is independent of the others, with optional connectivity to the AWS cloud. This topology closely resembles the stand-alone mode of Anthos on bare metal.

EKS-A will eventually become the de facto compute environment for AWS Edge devices such as Snowball. Similar to K3s, Amazon may even plan to launch an EKS Anywhere Mini to target single node installations of Kubernetes for the edge. It may have tight integration with AWS Greengrass, the software for edge devices.

EKS-A is the first real multi-cloud software coming from AWS. If you are not concerned about the lock-in it brings, EKS-A dramatically simplifies deploying and managing Kubernetes. It brings AWS a step closer to multi-cloud platforms such as Anthos, Azure Arc, Rancher, Tanzu Mission Control, and Red Hat Advanced Cluster Management for Kubernetes.

Though EKS-A comes across as a proprietary installer for EKS, it goes beyond that. Combined with a new addition called EKS Console, multiple EKS-A clusters can be managed from the familiar AWS Console. Of course, the EKS Console will provide visibility into all the managed clusters running in AWS.

EKS-A clusters running in fully-connected and semi-connected modes can be centrally managed from the EKS Console. AWS may open up the ability to attach non-EKS clusters to the EKS console by running an agent in the target cluster. This brings the ability to apply policies and roll out deployments from a single window.

When Amazon connects the dots between the EKS Console and EKS-A, it will deliver what Azure Arc promises: a single pane of glass to manage registered Kubernetes clusters. Extending this, the EKS Console may even spawn new clusters as long as it can talk to the remote infrastructure, which would resemble Anthos. You can see the obvious direction in which Amazon is heading!

The investments in ECS Anywhere, EKS Distribution, EKS Anywhere, and the EKS Console play a significant role in Amazon's container strategy. They lay a strong foundation for the future hybrid cloud and multi-cloud services expected from AWS.

See the original post:

AWS Responds To Anthos And Azure Arc With Amazon EKS Anywhere - Forbes

What’s New on F#: Q&A With Phillip Carter – InfoQ.com


Last month, at the 2020 edition of .NET Conf, Microsoft released the latest version of F#, together with .NET 5. F# is one of the .NET programming languages, along with C# and Visual Basic. It is functional-first, cross-platform, and open source, and it's developed by Microsoft and several open source partners and contributors.

InfoQ interviewed Phillip Carter, program manager at Microsoft, to talk about functional programming, F#, and the new features of F# 5.

InfoQ: What exactly is functional programming?

Phillip: Functional Programming (FP) has several accepted definitions depending on who you talk to. The way I see it, Functional Programming is just programming with functions and orienting your program around some of the properties of functions. Functions take inputs and produce outputs. Do you have some data? Pass it to a function to get a result. Do you have a result that needs to be used to produce another result? Pass that to a function. Do you have a chain of functionality that needs to execute to produce a result? Write several functions and compose them such that outputs from one are inputs to another. If you have a program state, you can change it by passing it to a function and observing the result.
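Phillip's description of chaining functions, where outputs from one become inputs to another, is the classic idea of function composition. A minimal sketch of it (in Python here for illustration, rather than F#):

```python
def compose(*funcs):
    """Chain functions left to right: compose(f, g)(x) == g(f(x))."""
    def composed(value):
        for f in funcs:
            value = f(value)
        return value
    return composed

# A small data pipeline built from plain functions: trim the input,
# split it into words, then count them.
word_count = compose(str.strip, str.split, len)

print(word_count("  functional programming with functions  "))  # 4
```

In F# itself this style is built in: the `>>` operator composes functions, and the `|>` pipeline operator feeds a value through a chain of them.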

FP and Object-Oriented Programming (OOP) aren't really at odds with each other, at least not if you use each as a tool rather than a lifestyle. In FP, you generally try to cleanly separate your data definitions from the functionality that operates on them. In OOP, you're encouraged to combine them and blur the differences between them. Both can be incredibly helpful depending on what you're doing. For example, in the F# language we encourage the use of objects to encapsulate data and expose functionality conveniently. That's a far cry from encouraging people to model everything using inheritance hierarchies, and at the end of the day you still tend to work with an object in a functional way, by calling methods or properties that just produce outputs. Both styles can work well together if you don't go all in on one approach or the other.

InfoQ: How does F# work?

Phillip: F# works as a somewhat typical compiler in that it reads source code as a string and produces tree-like structures until it finally emits a data format (IL). That process generally looks like this:
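As an illustrative toy, not the actual F# compiler, the stages described above (a source string transformed through tree-like structures until a data format is emitted) might be sketched like this:

```python
# Toy compiler pipeline: source text -> tokens -> syntax tree
# -> checked (typed) tree -> emitted output. Each stage is a stub;
# a real compiler does far more at every step.

def tokenize(source: str) -> list:
    """Break the raw source string into tokens."""
    return source.split()

def parse(tokens: list):
    """Build a (very fake) syntax tree from the token stream."""
    return ("expr", tokens)

def check(tree):
    """Attach semantic/type information to the tree."""
    return ("typed", tree)

def emit(typed_tree) -> str:
    """Emit the final data format (IL, in the F# compiler's case)."""
    return f"IL({typed_tree})"

def compile_source(source: str) -> str:
    return emit(check(parse(tokenize(source))))

print(compile_source("1 + 2"))
```

The point of the sketch is the shape, not the content: each stage consumes the previous stage's output, which is exactly what lets IDE tooling tap into intermediate results (tokens, syntax trees, semantic information) rather than only the final IL.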

However, this isn't the only way that we look at the F# compiler. The compiler isn't just a black box that emits IL from source code. It's also a:

This is because the F# compiler is also run as a server process in IDE tooling. When you're writing code in an editor, the kinds of features you're used to need to concurrently access data from different stages of compilation. For example, code folding in an IDE depends on the syntax tree of the source code in your document. Renaming a symbol requires the ability to resolve the symbol your caret is at in a document and distinguish it from others that may have the same name but don't refer to the same thing (semantic information). And these kinds of features need to be able to be requested concurrently, be cancellable, and so on.

So F# is a batch process that does a whole lot, but it also serves a very different role than most compilers out there.

InfoQ: Microsoft just released F# 5 at the latest .NET Conf. How does it compare with the previous F# versions?

Phillip: F# 5 is our first foray into an adjusted focus for F#. For the past few years, F# has been focused on being a great choice for cross-platform development, and we spent most of our time bringing up various aspects of the F# language, core library, and tooling to work cross-platform. F# 5 finishes this by making F# Interactive work great and it also has some awesome new features like package references that allow you to pull in a package from NuGet interactively.

The F# Interactive work is combined with some extensive support for Jupyter and VSCode notebooks to start our rethink of what interactive programming means for F#. Additionally, data science and ML are domains that heavily involve interactive programming, so we added some features to make F# a great choice for those domains when combined with other libraries. This is our first journey into this space, and it's something F# has traditionally had a strength in, but it hadn't been explored since about 2010.

Finally, features like String Interpolation, nameof, open type declarations, and some enhancements to Computation Expressions were done to just make general F# programming even better. We have general areas of focus for each release, but we also bring along a bag of goodies each time we ship a new version, because we recognize that sometimes the best things you do are just making things better for everyone.

InfoQ: What are the advantages of using F# today?

Phillip: Id say that the biggest advantages of using F# today are twofold:

It is a carefully designed language intended to give you tools to get a lot done without writing much code. F# developers, especially new ones, regularly take to Twitter to exclaim how quickly and conveniently they were able to just get some shit done and know that their code is correct because the compiler guided them towards correctness.

The F# community, though smaller than others, is highly engaged and produces a lot of high-quality libraries and guidance in the form of docs and blog posts. If you run into a problem, chances are there's already a post somewhere online explaining how to solve it, or if not, someone will jump to help you out at a moment's notice. There is a consistent level of excitement in the F# community, and it's addicting to participate in.

What's interesting is that even though F# runs on .NET, which often has an enterprisey kind of reputation, F# itself doesn't really suffer the negative aspects of that reputation. It can be used for enterprise work, but it's usually seen as lightweight (as opposed to heavyweight enterprisey languages), and its community is engaged and available as opposed to stuck behind a corporate firewall.

InfoQ: Is F# 5 supported by Visual Studio?

Phillip: Yes, F# is supported in Visual Studio. It has been fully supported since 2010 and has undergone an immense number of improvements since then. As of Visual Studio 2019 version 16.8, it includes a swath of productivity features and code fixes such as IntelliSense, semantic colorization, rename refactoring, navigation features, and more. It also performs quite favorably for very large codebases, something we focused on over the past 2 years working with paying customers who had enormous amounts of code that had to work well.

InfoQ: Can I write and deploy a service or API to the cloud using F# today?

Phillip: Of course! By virtue of running on .NET, F# can run in Azure anywhere that .NET can. Azure App Service, Azure Functions, Cosmos DB, etc. all support F# through their supported .NET SDKs. And when you use .NET 5 (or .NET Core 3.1), you can use standard OSS tools like Docker and Kubernetes for your services and deploy them anywhere.

InfoQ: How is F# being used in the industry?

Phillip: F# has wide usage in the industry today. It's used to power financial engines in banks, to produce ML models that power insurance models processing billions in claims, to power eCommerce infrastructure, and more. But more than anything else, it's used to power apps and services that do lots of boring but useful work fulfilling business requests all across the world. It's not just used in Windows apps or backend services, though. Through tools like WebSharper or Fable, F# is also used to power frontend web apps and embed directly in the JavaScript ecosystem. This toolchain even powers the F# plugin for VSCode, which is written in 100% F# code.

InfoQ: What is the best way to start learning F# today?

Phillip: The easiest way to get started with F# is to follow our tutorial on the F# homepage for the .NET website: https://aka.ms/fsharphome

But it honestly depends on what you're trying to accomplish. Another great resource for trying out F# for full-stack apps is the SAFE Stack.

F# also has a foundation, the F# Software Foundation, which runs various aspects of the F# community and has its own extremely active Slack that I always encourage new people to join.

You can also access the F# Software Foundation forums without a (free) membership.

My advice is to ask the community what resources are best based on the kinds of things you're trying to build. You'll find great resources online just by doing a quick Google search, but the community typically has the best advice about where to start based on what you're trying to accomplish.

Phillip Carter is a Senior Program Manager on the .NET team at Microsoft, focusing on F#, language tooling in Visual Studio, and efforts to expand .NET in non-traditional spaces. He's been working on .NET and languages for 5 years.

Read the original here:

What's New on F#: Q&A With Phillip Carter - InfoQ.com

Researchers Discover Dangerous Security Flaws in Code Used in Millions of Devices – Gizmodo

Photo: Dean Mouhtaropoulos (Getty Images)

Researchers at the cybersecurity firm Forescout published a new whitepaper on Tuesday detailing how 33 security flaws baked into a handful of widely used code libraries could have catastrophic consequences for millions of internet-connected devices, running the gamut from smart home and industrial tech, to devices in hospitals, retailers, and federal buildings.

The issue here, per the team behind the paper, lies with four separate open-source bundles of code: uIP, FNET, picoTCP, and Nut/Net. Adding one of these libraries to a device allows it to hook onto certain communication protocols and communicate with other machines. And because they're free to use and fully open source, these libraries have become pretty popular over the years, which is a boon for developers looking to get these devices out the door quickly and cheaply. It also means that when these kinds of vulnerabilities come to light (as they have in the past), the impact is that much more dramatic.

"Amnesia:33," as the team collectively calls this gaggle of vulnerabilities, hasn't been exploited out in the wild yet, as far as they know. That said, a determined enough attacker with a clear communication path to a vulnerable device could exploit one or more of these issues to slam the device with a denial-of-service attack or force the device to leak potentially sensitive internal data. Four of the more critical security flaws open devices up to remote code execution.

To get the full picture of the number of devices using these vulnerable open-source protocols, the Forescout team tapped into data from both its customer base and IoT search engines like Shodan and Censys. In total, the team counted devices from over 150 vendors at risk for this particular raft of exploits, though with the caveat that getting the full scope of the issue is pretty difficult, simply because there are so many devices in so many categories that rely on these particular code libraries to some degree.

You might be wondering why the four stacks involved don't just issue a round of security patches to put this issue to bed. But as the Forescout team explained in a ThreatPost interview, component manufacturers have spent the past two decades plugging different parts of these four free-to-use libraries into code bundles of their own, which then end up inside devices. Due to a broken link in the supply chain, device manufacturers using off-the-shelf parts may not realize bits of a vulnerable codebase are installed on a WiFi module they're sourcing.


The result is a problem of nearly inestimable impact.

Forescout does recommend some best practices in their whitepaper, but ultimately, this is going to be an issue that at best gets slowly fixed, piecemeal, by each company currently using these codebases.

Read more:

Researchers Discover Dangerous Security Flaws in Code Used in Millions of Devices - Gizmodo

Raspberry Pi OS update: Microsoft Teams, Zoom, and Google Meet now run better – ZDNet

Raspberry Pi Trading has announced the latest release of Raspberry Pi OS, the default Debian-based operating system that ships on SD cards for Raspberry Pi devices. Raspberry Pi OS has now been updated with Chromium version 84, the open-source foundation of Google Chrome.

The Raspberry Pi OS team says it's done a lot of testing and tweaking in Chromium 84 to ensure Google Meet, Microsoft Teams, and Zoom videoconferencing apps work well on it.

The move is part of efforts by the team behind Raspberry Pi to help users participate in the online video meetings that are now essential for work and family, almost a year after China acknowledged the coronavirus outbreak in Wuhan.


"They should all now work smoothly on your Raspberry Pi's Chromium," says Simon Long, a user experience engineer for Raspberry Pi.

The other big change is that Raspberry Pi's version of Chromium is dropping support for Adobe's Flash Player software. This will be the last version of Chromium on Raspberry Pi that supports Flash.

Adobe, along with Apple, Google, Microsoft and Mozilla, jointly announced in 2017 that they would end support for Flash at the end of 2020. Flash historically has been a favorite target for cybercriminals but its capabilities have largely been replaced by open web standards like HTML5, WebGL and WebAssembly.

Adobe will no longer issue free security updates after December 2020, but enterprise customers can still buy patches via Samsung-owned Harman.

"Flash Player is being retired by Adobe at the end of the year, so this release will be the last that includes it. Most websites have now stopped requiring Flash Player, so this hopefully isn't something that anyone notices," said Long.

Raspberry Pi OS is also moving to the PulseAudio sound server, which deals with a lot of the complexities with audio on Linux systems.

The biggest problem, according to Long, has been the Advanced Linux Sound Architecture (ALSA), a low-level interface that Raspberry Pi hardware has needed but that restricted audio output to a single app, such as YouTube. That meant no simultaneous sound from VLC, the software that Raspberry Pi devices otherwise rely on for playing audio files.

"Similarly, if you want to switch the sound from your YouTube video from HDMI to a USB sound card, you can't do it while the video is playing; it won't change until the sound stops. These aren't massive problems, but most modern operating systems do handle audio in a more flexible fashion," explained Long.

The PulseAudio update places a layer between the audio hardware and applications that send and receive audio and allows the output to shift between different devices while it is playing.
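In miniature, the layer described above can be modeled as a router that tracks which output device each stream is attached to and can reattach a stream without stopping playback. A toy Python sketch of the concept (not the actual PulseAudio API):

```python
# Toy model of what a sound server like PulseAudio adds over a
# single-app interface: multiple streams can play at once, and a
# playing stream can be moved between output devices on the fly.

class SoundServer:
    def __init__(self):
        self.streams = {}  # stream name -> current output device

    def play(self, stream: str, device: str):
        # Multiple streams may share the same device simultaneously.
        self.streams[stream] = device

    def move(self, stream: str, device: str):
        # Reroute a stream to a different device without stopping it.
        self.streams[stream] = device

server = SoundServer()
server.play("youtube", "hdmi")
server.play("vlc", "hdmi")          # two apps at once
server.move("youtube", "usb-card")  # switch output mid-playback
print(server.streams)
```

This is exactly the flexibility Long describes: with the server sitting between hardware and applications, switching a YouTube video from HDMI to a USB sound card no longer requires the sound to stop first.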

The feature furthers Raspberry Pi's ambition to be seen as a proper PC, a claim the UK company has been making since the release of the Raspberry Pi 4, which is available with up to 8GB of RAM or in the new Raspberry Pi 400 computer keyboard with 4GB of RAM.


The Raspberry Pi 4 boots pretty quickly and apps start consistently swiftly but it's not as fast as most laptops, and that's not surprising given the OS is loading from an SD card, according to ZDNet reviewer Jamie Watson.

On the other hand, starting from a low position, the only way is up and the latest improvements to Raspberry Pi OS push it in the right direction.

Long notes that PulseAudio now runs by default, with the desktop audio controls using PulseAudio rather than ALSA.

Raspberry Pi users can install the OS on a new card using the Raspberry Pi Imager, or download it from Raspberry Pi's Downloads page.


Read more from the original source:

Raspberry Pi OS update: Microsoft Teams, Zoom, and Google Meet now run better - ZDNet